\section{Introduction}
The generation of gauge configurations with two dynamical Wilson fermions,
started by the {\small\sf SESAM }-collaboration in spring '95 and expanded by the
formation of the {{\small\sf T}{$\chi$}{\small\sf L}} -collaboration in '96, is now well under way. The
primary goal, the generation of a set of configurations
comprising ${\cal O}(5000)$ trajectories at three sea-quark masses
with $m_\pi/m_\rho = 0.84, 0.76, 0.69$ at $\beta = 5.6$ on
$16^3 \times 32$ lattices was achieved well on time at the end of
'96. Given the rather small box size of this simulation, the aim
of the {{\sf T}{$\chi$}{\sf L}}-collaboration is to study finite size effects by
simulating at the lightest of {\small\sf SESAM}'s mass-values but on a $24^3
\times 40$ lattice. In addition, we are pushing to even smaller
quark masses, hopefully close to $m_\pi/m_\rho \simeq 0.5$. Last but
not least, in an attempt to clarify issues surrounding the approach to
the chiral limit, we have added a fourth sea-quark value to the $16^3
\times 32$ simulation and the data acquisition for this mass has just
been completed.
\par
In this short note, I summarize some of the main findings of our
analysis of two-point correlators on these gauge configurations with
the emphasis on the methods we have applied; results are
preliminary and are to be updated soon, when data at the additional
sea-quark mass at $m_\pi/m_\rho = 0.81$ will be included in the
analysis. This note is probably best read in conjunction with
S.~G\"usken's plenary talk writeup \cite{stephan} summarising {\small\sf SESAM}'s
attempt to identify sea-quark effects.
\section{Results}
{\bf Error analysis.} One of the first of {\small\sf SESAM}'s results
\cite{sesamauto} was to find that,
given long enough trajectory lengths in the Hybrid Monte Carlo (HMC),
we can obtain clear signals in the autocorrelation functions for
simple observables (Wilson loops, correlators). We find autocorrelation
times of ${\cal O}(25)$ trajectories, much lower than some of the
conjectures found in the early HMC papers. The statistical errors presented in this
paper are obtained from an analysis of 200 configurations (per
sea-quark) picked from a total trajectory length in the HMC of 5000. A
blocking analysis is performed and the quoted
errors are always from a blocksize of 6, where we find errors to run
into plateaus. I believe this is the first time trajectory lengths in
an HMC are long enough to present a reliable error estimate! Obviously,
this is a vital step in any attempt to find signals of unquenching,
particularly in the area of spectrum calculations since quenched
spectra are (i) in agreement with experiment within ${\cal O}(10 \%)$
(see e.~g. \cite{speclat97}) and (ii) can be measured to
ever higher precision.
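The blocking analysis described above can be sketched as follows (a minimal NumPy illustration with synthetic data; function and variable names are hypothetical, not the collaboration's actual analysis code):

```python
import numpy as np

def blocked_error(samples, blocksize):
    """Blocked error estimate of the mean.

    Autocorrelated Monte Carlo data underestimate the true error when
    configurations are treated as independent; averaging consecutive
    measurements into blocks removes the correlation once the blocksize
    exceeds the autocorrelation time, so the error plateaus.
    """
    n = len(samples) // blocksize
    blocks = np.mean(np.reshape(samples[:n * blocksize], (n, blocksize)), axis=1)
    # standard error of the mean computed from the block averages
    return np.std(blocks, ddof=1) / np.sqrt(n)

# Quote the error at the blocksize where it runs into a plateau.
rng = np.random.default_rng(0)
data = rng.normal(size=1200)  # stand-in for measured correlators
errors = [blocked_error(data, b) for b in (1, 2, 3, 6)]
```

For uncorrelated data the blocked estimate agrees with the naive one; for correlated data it grows with blocksize until the plateau is reached.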
\\
{\bf Analysis with two degenerate light quarks.}
Next, I would like to discuss some issues raised recently by {\small\sf SESAM }
concerning the analysis of dynamical configurations with 2 degenerate
light Wilson fermions \cite{sesamquarks}. The new feature of our
analysis is the use of spectrum data with unequal valence and sea
quark masses, in particular to determine quantities with strange quark
content. We find our data to be well parameterized by a linear ansatz
both in the sea and valence quark masses (see figure 5 in
\cite{stephan}). We have applied this method to the determination of
the strange quark mass \cite{sesamquarks}, which, so far
\cite{guptaquarks}, has only been obtained in a sea of strange
quarks. The effect is a significant increase in the two-flavour
strange quark mass, whereas our dynamical light quark masses are much
lower than the quenched ones. A surprising feature in the analysis is
obtained when we attempt to study the light quark masses at fixed sea
quark mass (this has been termed ``partial quenching''). We find that
valence quark masses need to be tuned to negative values to make the
pseudoscalar masses vanish. In other words: the critical kappa-values
at fixed sea quark lie below the true critical kappa of pions with
equal strange and valence quark content. We have noted
\cite{sesamquarks} that light quark masses measured with respect to
the partially quenched critical kappa values are much higher,
practically in agreement with the values found in the quenched
theory.
\\
{\bf Spectrum Results.} The spectrum and decay constant results are
displayed in figures \ref{spectrum} and \ref{decay}. We compare the
dynamical results to (i) a quenched simulation we have performed at
$\beta = 6.0$; the two simulations are at comparable lattice
spacings. To mimic the situation of the dynamical simulation, the
quenched chiral extrapolations were performed linearly; (ii) the result
one obtains (partially quenched) by working at a fixed sea-quark mass
of $\kappa = 0.1575$.
\begin{figure}
\vspace{-1cm}
\begin{center}
\centerline{
\smallpspicture{spectrum.eps}
}
\vspace{-1cm}
\caption{\label{spectrum}Spectrum results.}
\end{center}
\vspace{-1cm}
\begin{center}
\smallpspicture{decay.eps}
\end{center}
\vspace{-1.5cm}
\caption{\label{decay}Decay constants and J-parameter.}
\vspace{-0.2cm}
\begin{center}
\smallpspicture{lattice.eps}
\end{center}
\vspace{-1.4cm}
\caption{\label{lat}Lattice spacings from different observables ($F$
denotes the value obtained from the static potential at $r_0$; the
energy level splittings are obtained from lattice NRQCD \cite{achim}.).}
\end{figure}
Studying observables with only light quark content, we see some
improvement for the $\Delta$, the decay constants and the $J$
parameter, albeit within large errors. The nucleon, however, seems to
be unimpressed by the presence of sea quarks. In the strange quark
sector the familiar problem from quenched QCD is still present; the
$K$ and the $K^*$ cannot both be matched simultaneously to
experiment.
\par
In figure \ref{lat} I show the lattice spacings obtained from four
different observables in light and heavy quark physics. Much better
agreement seems to be obtained in the full theory.
\\
{\bf Finite volume errors.} It was pointed out last year by
S.~Gottlieb \cite{specgottlieb} that little is known so far about the size of finite
volume errors in dynamical Wilson fermion simulations. Figure
\ref{finvol} shows the spectrum data from
the {\small\sf SESAM } and {{\small\sf T}{$\chi$}{\small\sf L}} collaborations at $m_\pi/m_\rho = 0.69$ obtained
on lattices of $16^3$ and $24^3$. While the errors are yet too large
for definitive statements, there is a general downward trend of the
order of $2, 3$ and $5 \%$ respectively for pseudoscalar, vector and
nucleon masses.
\begin{figure}
\begin{center}
\smallpspicture{volume.eps}
\end{center}
\vspace{-1.6cm}
\caption{\label{finvol}Finite volume effects in the spectrum data.}
\vspace{-0.7cm}
\end{figure}
\\
{\bf Quadratic contributions.} Given that we have been using only
three values of the sea-quark mass in our present analysis, we have been restricted to
linear chiral extrapolations. Particularly for the vector but also
for the nucleon, the data seem to exhibit some downward curvature
towards lighter sea-quark masses. Even with only three sea-quark masses
at our disposal, there is a nice consistency check for the curvature
of the nucleon due to the Feynman-Hellman theorem which combines the
spectrum analysis with our calculation of the $\pi$-nucleon sigma term
\cite{stephan,jochen}. Given that
\begin{equation}
\sigma = m_q \left< P| ( \bar u u + \bar d d)|P \right> = m_q
{\partial m_P \over \partial m_q} \, ,
\label{constrain}
\end{equation}
we can use our results for $\sigma$ to constrain the linear and
quadratic terms of the proton mass. The result of this test is shown
in figure \ref{sigmatest} where the linear fit to the data is also
plotted. The nucleon data are very well described by
eq.~\ref{constrain}. The effect is to lower the mass of the nucleon in
the chiral limit by ${\cal O}(10 \%)$. It is a very pleasing result to
find the spectrum data match up so well with the flavour-singlet
operator calculation.
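The constrained fit can be illustrated by combining the two relations above into a single linear least-squares problem (a schematic sketch with made-up numbers, not SESAM/T$\chi$L data; all names and values are hypothetical):

```python
import numpy as np

# Illustrative nucleon masses and sigma terms at three quark masses
# (lattice units; invented numbers, purely for demonstration).
mq    = np.array([0.02, 0.035, 0.05])
mP    = np.array([0.62, 0.70, 0.77])
sigma = np.array([0.018, 0.034, 0.052])

# Model: m_P = c0 + c1*mq + c2*mq^2, and by the Feynman-Hellmann theorem
# sigma = mq * dm_P/dmq = c1*mq + 2*c2*mq^2.
# Stack both data sets into one linear system for (c0, c1, c2).
A_mass  = np.column_stack([np.ones_like(mq), mq, mq**2])
A_sigma = np.column_stack([np.zeros_like(mq), mq, 2 * mq**2])
A = np.vstack([A_mass, A_sigma])
y = np.concatenate([mP, sigma])
c0, c1, c2 = np.linalg.lstsq(A, y, rcond=None)[0]
# c0 is the sigma-term-constrained chiral-limit value of the nucleon mass.
```

The sigma-term data thus fix the linear and quadratic coefficients jointly with the spectrum data, which is the consistency check exploited in the text.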
\begin{figure}
\begin{center}
\tinypspicture{NUCLEON_v_mq_quad_corr_1_156775__sigma_test.ps}
\end{center}
\vspace{-1.7cm}
\caption{\label{sigmatest}Linear fit of the nucleon data and
quadratic fit constrained by the $\sigma$ term.}
\vspace{-0.3cm}
\end{figure}
\section{Conclusions}
We are currently in the middle of the analysis of the spectrum data of
{\small\sf SESAM}'s and {{\small\sf T}{$\chi$}{\small\sf L}} 's large statistics simulation of QCD with 2 dynamical
flavours. We now have good control over statistical errors and have
developed a new method to analyse particles with strange quark content
in a sea of light quarks. We are pushing to lighter quark masses,
$m_\pi/m_\rho \approx 0.5$ and to larger lattices, enabling a badly
needed finite volume study. Whereas the quark masses are found to be
rather sensitive to the inclusion of sea-quarks a clear sign of
unquenching is still missing in the spectrum data.
\section{Introduction}
\input{intro}
\section{Basic properties of solutions}
\input{basic}
\section{Characterisation of regular evolutions}
\input{regular}
\section*{Acknowledgements}
The author wishes to express his gratitude towards Piotr Mucha for suggesting the problem, encouragement and helpful discussions. Special thanks are also due to Jose Maz{\'o}n for his informative comments.
The work has been supported by the grant no.\;2014/13/N/ST1/02622 of the National Science Centre, Poland.
\bibliographystyle{plain}
\section{Introduction}
Learning visual features effectively has a profound influence on recognition performance~\cite{boureau-cvpr10-learning,shih-cvpr17-deep}. When handling large-scale natural images, self-supervised learning (SSL) of visual representations benefits downstream recognition tasks via pretext feature training. Existing SSL methods typically leverage two branches to measure the similarity between different view representations derived from the same input image. By maximizing the similarity between the correlated views within one image (e.g., BYOL~\cite{grill-nips20-byol}, SimSiam~\cite{chen-arxiv20-simsiam}, and Barlow Twins~\cite{zbontar-icml21-barlow}), or minimizing the similarity between views from different images (e.g., MoCo~\cite{he-cvpr20-moco} and SimCLR~\cite{chen-icml20-simclr}), these methods have proven effective for learning self-supervised visual representations.
SSL has evolved concurrently with the transformer. Debuting in natural language processing~\cite{vaswani-nips17-trans,devlin-arxiv18-bert}, transformers have shown their advantages in processing large-scale visual data since ViT~\cite{dosovitskiy-iclr20-image}. The encoder-decoder architecture in this vision transformer consistently explores global attention without convolution. This architecture has been shown effective for visual recognition with~\cite{carion-eccv20-end,zheng-arxiv20-rethinking,srinivas2021bottleneck} or without CNN integration~\cite{liu-arxiv21-swin,fan-arxiv21-multiscale}. Inspired by these achievements via supervised learning, studies~\cite{chen-arxiv21-mocov3,caron-arxiv21-emerging,atito-arxiv21-sit,xie-arxiv21-swin} have arisen recently that train transformers in a self-supervised manner. These methods maintain most of the SSL pipeline (i.e., encoder, projector, and predictor) utilized for training CNN encoders. Without significant framework alteration, original SSL methods for CNN encoders can be adapted to train transformer encoders and achieve favorable performance.
\begin{figure}[t]
\centering
\begin{tabular}{c}
\includegraphics[width=\linewidth]{figures/intro}
\end{tabular}
\caption{SSL framework overview. The solid lines indicate network pipeline, and the dash lines indicate network updates. MoCo V3~\cite{chen-arxiv21-mocov3} explores visual attention by explicitly taking a vision transformer as encoder, while SimCLR~\cite{chen-icml20-simclr} and BYOL~\cite{grill-nips20-byol} do not learn an attentive CNN encoder. Our \algoname{CARE}\xspace framework consists of a C-stream and a T-stream to explore visual attention in CNN encoders with transformer supervision. Note that only target CNN encoder (i.e., CNN$_1$) is preserved after pre-training for downstream evaluation. We do not show projectors in (b) and (d) for simplicity.}
\label{fig:teaser}
\vspace{-4mm}
\end{figure}
The success of using transformer encoders indicates that visual attention benefits encoder backbones in SSL. On the other hand, in supervised learning, CNN attention is usually developed via network supervision~\cite{pu-nips18-deep}. However, we observe that existing SSL methods do not incorporate visual attention within CNN encoders. This motivates us to explore CNN attention in SSL. We expect CNN encoders to maintain similar visual attention to transformers for recognition performance improvement with lower computational complexity and less memory consumption.
In this paper, we propose a CNN Attention REvitalization framework (CARE) to make CNN encoder attentive via transformer guidance. Fig.~\ref{fig:teaser} (d) shows an intuitive illustration of CARE and compares it with other state-of-the-art SSL frameworks. There are two streams (i.e., C-stream and T-stream) in CARE where each stream contains two branches. C-stream is similar to existing SSL frameworks with two CNN encoders, two projectors, and one predictor. T-stream consists of two transformers, two projectors, and one predictor. T-stream takes CNN encoder features as input and improves feature attention via transformers. During the training process, we perform SSL in both streams simultaneously and use the T-stream output to supervise C-stream. The self-supervised learning in T-stream ensures attentive features produced by transformers are suitable for this SSL scenario. Meanwhile, we use the attention supervision on C-stream. This supervision enables both C-stream and T-stream to produce similar features. The feature representation of CNN encoders is improved by visual attention from transformers. As a result, the pre-trained CNN encoder produces attentive features, which benefits downstream recognition scenarios. Experiments on standard image classification, object detection, and semantic segmentation benchmarks show that the proposed CARE framework improves prevalent CNN encoder backbones to the state-of-the-art performance.
\section{Related works}
In the proposed \algoname{CARE}\xspace framework, we introduce transformers into self-supervised visual representation learning. In this section, we perform a literature survey on related works from the perspectives of visual representation learning as well as vision transformers.
\subsection{Visual representation learning}
There is an increasing need to learn good feature representations with unlabeled images. The general feature representation benefits downstream visual recognition scenarios. Existing visual representation learning methods can be mainly categorized as generative and discriminative methods. The generative methods typically use an auto-encoder for image reconstruction~\cite{vincent-icml08-extracting,rezende-icml14-stochastic}, or model data and representation in a joint embedding space~\cite{donahue-nips19-large,brock-iclr19-large}. The generative methods focus on image pixel-level details and are computationally intensive. Besides, further adaption is still required for downstream visual recognition scenarios.
The discriminative methods formulate visual representation learning as sample comparisons. Recently, contrastive learning has been heavily investigated owing to its efficiency and superior performance. By creating different views from images, SSL obtains positive and negative sample pairs to constitute the learning process. Examples include memory bank~\cite{wu-cvpr18-unsupervised}, multi-view coding~\cite{tian-eccv20-CMC,tsai-iclr21-multi}, predictive coding~\cite{henaff-icml20-data,tsai-iclr21-rpc}, pretext invariance~\cite{misra-cvpr20-self}, knowledge distillation~\cite{Fang-iclr21-distill,caron-arxiv21-emerging}, and information maximization~\cite{hjelm-iclr19-learning}. While negative pairs are introduced in MoCo~\cite{he-cvpr20-moco} and SimCLR~\cite{chen-icml20-simclr}, studies (e.g., BYOL~\cite{grill-nips20-byol} and SimSiam~\cite{chen-arxiv20-simsiam}) show that using only positive pairs is effective. Also, clustering methods~\cite{caron-eccv18-deep,caron-arxiv20-unsupervised} construct clusters for representation learning without introducing negative pairs. Besides these discriminative methods focusing on image data, there are similar methods learning representations from either video data~\cite{wang2019unsupervised,han-nips20-self,kong-nips20-cycle,wang2021unsupervised,pan2021videomoco,wang2020self,wang2021self} or multi-modality data~\cite{alayrac-nips20-self,alwassel-nips20-self,asano-nips20-labelling}. Different from these SSL methods, we use transformer architectures to improve CNN encoder attention.
\subsection{Vision transformers}
The transformer is proposed in~\cite{vaswani-nips17-trans}, where self-attention is shown effective for natural language processing. BERT~\cite{devlin-arxiv18-bert} further boosts its performance via self-supervised training. The sequential modeling capability of the transformer has activated a wide range of research in natural language processing~\cite{tetko-natural20-state}, speech processing~\cite{synnaeve-arxiv19-end}, and computer vision~\cite{han-arxiv20-survey}. In this section, we only survey transformer-related works from the computer vision perspective.
There is extensive research on transformers in both visual recognition and generation. ViT~\cite{dosovitskiy-iclr20-image} has shown that a CNN is not a must in image classification. DETR~\cite{carion-eccv20-end}, Deformable DETR~\cite{zhu-iclr20-deformable}, and RelationNet++~\cite{Cheng-nips20-od} indicate that transformers are able to detect objects with high precision. SETR~\cite{zheng-arxiv20-rethinking} brings transformers into semantic segmentation, while VisTR~\cite{wang-arxiv20-end} has shown transformers are able to perform video object segmentation. TrackFormer~\cite{meinhardt-arxiv21-trackformer} introduces transformers into multiple object tracking. A general form of the transformer is formulated in NLM~\cite{wang-cvpr18-non} for video classification. Furthermore, transformers have been shown effective in image generation~\cite{parmar-icml18-image} and image processing~\cite{chen-arxiv20-pre} scenarios. Examples include image super-resolution~\cite{yang-arxiv20-learning}, video inpainting~\cite{zeng-eccv20-learning}, and video captioning~\cite{zhou-cvpr18-end}. There are several emerging studies~\cite{chen-arxiv21-mocov3,caron-arxiv21-emerging,atito-arxiv21-sit,xie-arxiv21-swin} on how to use self-supervised learning to improve a transformer backbone, where the learning paradigm for CNN encoders is adapted to the transformer without significant alteration. Different from existing methods that focus on learning a transformer encoder backbone with supervised or self-supervised learning, we explore how to use transformers as guidance to enhance CNN visual attention. The pretrained CNN encoder then benefits downstream recognition scenarios.
\begin{figure}[t]
\centering
\begin{tabular}{c}
\includegraphics[width=0.95\linewidth]{figures/pipeline}
\end{tabular}
\caption{The pipeline of \algoname{CARE}\xspace. It consists of C-stream and T-stream. C-stream is similar to the existing SSL framework, and we involve transformers in T-stream. During training, we perform SSL in each stream (i.e., $\mathcal{L}_c$ and $\mathcal{L}_t$), and use T-stream outputs to supervise C-stream (i.e., $\mathcal{L}_{att}$). The CNN encoder becomes attentive via T-stream attention supervision.}
\label{fig:pipeline}
\vspace{-4mm}
\end{figure}
\section{Proposed method}
Our \algoname{CARE}\xspace framework consists of C-stream and T-stream. Fig.~\ref{fig:pipeline} shows an overview of the pipeline. We first illustrate the detailed structure of these streams. Then, we illustrate the network training process. The CNN encoder features are visualized as well for attention display.
\subsection{CNN-stream (C-stream)}\label{sec:cstream}
Our C-stream is similar to the existing SSL framework~\cite{grill-nips20-byol} where there are two CNN encoders, two projectors, and one predictor. The structures of the two encoders are the same, and the structures of the two projectors are the same. Given a training image $x$, we process it with a set of random augmentations to create two \emph{different} augmented views. We feed these two views to C-stream and obtain corresponding outputs $f_1(x)$ and $f_2(x)$, respectively. Then, we compute a loss $\mathcal{L}_c$ to penalize the dissimilarity of the outputs. This loss term is the mean squared error of the normalized feature vectors and can be written as what follows:
\begin{equation}\label{eq:lc}
\mathcal{L}_c=2-2\cdot \frac{\Braket{f_1(x), f_2(x)}}{\left\Vert f_1(x)\right\Vert_2\cdot \left\Vert f_2(x)\right\Vert_2}
\end{equation}
where $\|\cdot\|_2$ is the $\ell_2$ normalization, and the $\Braket{,}$ is the dot product operation. As the inputs of C-stream are from one image, the outputs of C-stream are supposed to become similar during the training process.
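The loss in Eq.~\eqref{eq:lc} can be sketched as follows (a minimal NumPy illustration for single feature vectors; the actual implementation operates on batched network outputs):

```python
import numpy as np

def cosine_similarity_loss(f1, f2):
    """Loss of Eq. (1): 2 - 2 * <f1, f2> / (||f1||_2 * ||f2||_2).

    Equivalent to the mean squared error between l2-normalized vectors,
    since ||u - v||^2 = 2 - 2*<u, v> for unit vectors u and v.
    """
    f1 = f1 / np.linalg.norm(f1)
    f2 = f2 / np.linalg.norm(f2)
    return 2.0 - 2.0 * np.dot(f1, f2)
```

The loss is 0 for identical (parallel) outputs and reaches its maximum of 4 for anti-parallel outputs, so minimizing it pulls the two view representations together.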
\begin{figure}
\centering
\begin{tabular}{c}
\includegraphics[width=0.8\linewidth]{figures/trans}
\end{tabular}
\caption{Transformer framework. The architectures of the two transformers in T-stream are the same. Each transformer consists of $n$ attention blocks. We show one attention block on the left, where the detailed structure of one self-attention layer is shown on the right.}
\label{fig:trans}
\vspace{-4mm}
\end{figure}
\subsection{Transformer-stream (T-stream)}
The T-stream is set in parallel to the C-stream and takes the output feature maps of the CNN encoders as its inputs. It consists of two transformers, two projectors, and one predictor. The structures of the projectors and the predictor are the same as those in the C-stream. The two transformers share the same architecture, which consists of $n$ consecutive attention blocks where each block contains two Multilayer Perceptron (MLP) layers with one multi-head self-attention (MHSA) layer in between.
We mainly follow the design of~\cite{srinivas2021bottleneck} to construct the attention block in transformer as shown in Fig.~\ref{fig:trans}.
The input feature map (denoted as $s$) of an attention block is first processed by an MLP layer for dimension mapping, then passes through the self-attention layer, and finally through another MLP layer. MHSA consists of multiple attention heads that process the input features in parallel. In one attention head, as illustrated on the right of Fig.~\ref{fig:trans}, the input feature map is mapped to the query feature ($q$), the key feature ($k$), and the value feature ($v$) via 3 different MLP layers $w_q, w_k$, and $w_v$, respectively. As detailed in Eq.~\eqref{eq:attention}, the query $q$ and key $k$ are multiplied to form the content-based attention, and $q$ and the position encoding $p$ are multiplied to form the position-based attention.
\begin{equation}\label{eq:attention}
\operatorname{Attention}(q,k,v)=\operatorname{softmax}(\frac{qp^{\rm{T}}+qk^{\rm{T}}}{\sqrt{d_{k}}})v
\end{equation}
where $d_k$ is the dimension of the query and the key. There are learnable parameters in the positional encoding module~\cite{parmar-icml18-image} to output $p$ that follows the shape of $s$. In Eq.~\eqref{eq:attention}, we perform matrix multiplication between $q$ and $p^{\rm T}$, $q$ and $k^{\rm T}$, and the softmax output and $v$ by treating each pixel as a token \cite{vaswani-nips17-trans} (i.e., for a feature map with $c$ channels and spatial dimension of $h \times w$, it forms $h \cdot w$~ $c$-dimensional tokens and thus obtains a matrix of size $h \cdot w \times c$). Besides, we perform matrix addition between $qp^{\rm T}$ and $qk^{\rm T}$. The output of the second MLP layer is added to the original input feature map $s$ via a residual connection \cite{he-cvpr16-deep}, and finally passes a ReLU activation layer.
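A single attention head of Eq.~\eqref{eq:attention} can be sketched as follows (a NumPy illustration treating each pixel as a token; the weight matrices stand in for the learned MLP layers, and the multi-head split, residual connection, and ReLU are omitted):

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable row-wise softmax
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / np.sum(e, axis=axis, keepdims=True)

def attention_head(s, w_q, w_k, w_v, p):
    """One attention head of Eq. (2), treating each pixel as a token.

    s:  feature map flattened to shape (h*w, c)
    w_q, w_k, w_v: (c, d_k) projection matrices (stand-ins for MLPs)
    p:  learnable positional encoding of shape (h*w, d_k)
    """
    q, k, v = s @ w_q, s @ w_k, s @ w_v
    d_k = q.shape[-1]
    # position-based (q p^T) plus content-based (q k^T) attention logits
    logits = (q @ p.T + q @ k.T) / np.sqrt(d_k)
    return softmax(logits, axis=-1) @ v
```

Each of the $h \cdot w$ tokens thus attends to every other token, with the softmax normalizing the combined position- and content-based scores.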
In T-stream, the outputs of the two transformers are feature maps with 2D dimensions, which are then average-pooled and sent to the projectors and the predictor. We denote the outputs of T-stream as $f_3(x)$ and $f_4(x)$. Following the dissimilarity penalty in Sec.~\ref{sec:cstream}, we compute $\mathcal{L}_t$ as follows:
\begin{equation}\label{eq:lt}
\mathcal{L}_t=2-2\cdot \frac{\Braket{f_3(x),f_4(x)}}{\left\Vert f_3(x)\right\Vert_2\cdot \left\Vert f_4(x)\right\Vert_2}.
\end{equation}
Besides introducing SSL loss terms in both streams, we use the T-stream output to supervise the C-stream. This attention supervision loss term can be written as:
\begin{equation}\label{eq:ltsc}
\mathcal{L}_{\rm att}=\|f_1(x)-f_3(x)\|_2+\|f_2(x)-f_4(x)\|_2
\end{equation}
where the C-stream outputs are required to resemble the T-stream outputs during the training process. Note that the network supervision in Eq.~\eqref{eq:ltsc} cannot simply be considered a knowledge distillation (KD) process. There are several differences from three perspectives: (1) The architecture design between ours and KD is different. In KD~\cite{hinton2015distilling}, a large teacher network is trained to supervise a small student network. In contrast, the CNN backbones are shared by two similar networks in our method; the differing modules are only the transformers, lightweight projectors, and predictor heads. (2) The training paradigm is different. In KD, the teacher network is typically trained in advance before supervising the student network. In contrast, the two branches of our method are trained together from scratch for mutual learning. (3) The loss function in KD is normally the cross-entropy loss, while we adopt the mean squared error. During KD, supervision losses are also computed at the feature-map level, whereas our method only computes losses on the network outputs.
\subsection{Network training}
The proposed \algoname{CARE}\xspace consists of two streams and the loss terms have been illustrated above. The final objective function for network training can be written as:
\begin{equation}\label{eq:total}
\mathcal{L}_{\rm total}=\mathcal{L}_c+\mathcal{L}_t+\lambda\cdot \mathcal{L}_{\rm att}
\end{equation}
where $\lambda$ is a constant value controlling the influence of the attention supervision loss. After computing $\mathcal{L}_{\rm total}$, we perform back-propagation only on the upper branches of the C-stream and T-stream. Specifically in Fig.~\ref{fig:pipeline}, the CNN encoder$_1$, the projector$_1$, and the predictor$_1$ are updated via the computed gradients in C-stream. Meanwhile, the Transformer$_1$, the projector$_2$, and the predictor$_2$ are updated via computed gradients in T-stream. Afterwards, we perform a moving average update~\cite{lillicrap-iclr2016-continuous,grill-nips20-byol,he-cvpr20-moco} on the momentum CNN encoder$_2$ based on the CNN encoder$_1$, on the momentum projector$_1$ based on the projector$_1$, on the momentum transformer$_1$ based on the transformer$_1$, and on the momentum projector$_2$ based on the projector$_2$.
We only use positive samples when training the network.
As analyzed in~\cite{grill-nips20-byol}, using momentum projectors and a predictor has been shown to be important for self-supervised learning. These modules prevent CNN features from losing generalization abilities during the pretext training. Besides, we experimentally found that the momentum update is effective in preventing trivial solutions. In our network, we adopt a similar design in both streams to facilitate network training and observe that using only positive samples does not cause model collapse. After pretext training, we only keep the CNN encoder$_1$ in \algoname{CARE}\xspace. This encoder is then utilized for downstream recognition scenarios.
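The training step described above can be sketched as follows (a schematic illustration of the total objective of Eq.~\eqref{eq:total} and the moving-average update of the momentum branches; parameter containers and names are hypothetical):

```python
def total_loss(l_c, l_t, l_att, lam):
    """Eq. (5): L_total = L_c + L_t + lambda * L_att."""
    return l_c + l_t + lam * l_att

def ema_update(online, target, tau):
    """Moving-average update of a momentum module's parameters:
    target <- tau * target + (1 - tau) * online, per parameter.
    Applied to the momentum encoder, projectors, and transformer;
    only the online branch receives gradients."""
    return {name: tau * target[name] + (1.0 - tau) * online[name]
            for name in target}
```

With $\tau$ close to 1, the momentum branch changes slowly, providing stable targets for the online branch.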
\renewcommand{\tabcolsep}{1pt}
\def0.19\linewidth{0.19\linewidth}
\begin{figure*}[t]
\footnotesize
\begin{center}
\begin{tabular}{cccccc}
\vspace{-0.2mm}
\small\rotatebox{90}{\qquad (a) Inputs} &
\includegraphics[width=0.19\linewidth]{figures/attention/15_gradcam_original.jpg}&
\includegraphics[width=0.19\linewidth]{figures/attention/35_gradcam_original.jpg}&
\includegraphics[width=0.19\linewidth]{figures/attention/73_gradcam_original.jpg}&
\includegraphics[width=0.19\linewidth]{figures/attention/182_gradcam_original.jpg}&
\includegraphics[width=0.19\linewidth]{figures/attention/207_gradcam_original.jpg}\\
\small\rotatebox{90}{\qquad (b) C-stream} &
\includegraphics[width=0.19\linewidth]{figures/attention/15_gradcam_cam_1.jpg}&
\includegraphics[width=0.19\linewidth]{figures/attention/35_gradcam_cam_1.jpg}&
\includegraphics[width=0.19\linewidth]{figures/attention/73_gradcam_cam_1.jpg}&
\includegraphics[width=0.19\linewidth]{figures/attention/182_gradcam_cam_1.jpg}&
\includegraphics[width=0.19\linewidth]{figures/attention/207_gradcam_cam_1.jpg}\\
\small\rotatebox{90}{\qquad (c) \algoname{CARE}\xspace} &
\includegraphics[width=0.19\linewidth]{figures/attention/15_gradcam_cam_0.jpg}&
\includegraphics[width=0.19\linewidth]{figures/attention/35_gradcam_cam_0.jpg}&
\includegraphics[width=0.19\linewidth]{figures/attention/73_gradcam_cam_0.jpg}&
\includegraphics[width=0.19\linewidth]{figures/attention/182_gradcam_cam_0.jpg}&
\includegraphics[width=0.19\linewidth]{figures/attention/207_gradcam_cam_0.jpg}\\
\end{tabular}
\end{center}
\caption{Attention visualization of CNN encoders. We train two ResNet-50 encoders by using only C-stream and the whole \algoname{CARE}\xspace method, respectively. By taking the same images in (a) as inputs, the attention maps of these two encoders are shown in (b) and (c). The attention learned via \algoname{CARE}\xspace is more intense around the object regions shown in (c). In the attention visualization maps, pixels marked in red indicate that the network pays more attention to those regions.}
\label{fig:vis_att}
\vspace{-4mm}
\end{figure*}
\subsection{Visualizations}
Our \algoname{CARE}\xspace framework improves CNN encoder attention via transformer guidance. We show how encoders attend to the input objects by visualizing their attention maps. The ResNet-50 encoder backbone is used for visualization. We train this encoder for 200 epochs using only C-stream and the whole \algoname{CARE}\xspace framework, respectively. For input images, we use~\cite{selvaraju-iccv17-grad} to visualize encoder responses. The visualization maps are scaled equally for comparison.
Fig.~\ref{fig:vis_att} shows the visualization results. The input images are presented in (a), while the attention maps from the encoders trained with C-stream and \algoname{CARE}\xspace are shown in (b) and (c), respectively. Overall, the attention of the encoder trained with \algoname{CARE}\xspace is more intense than that with C-stream, which indicates that T-stream in \algoname{CARE}\xspace provides effective supervision for CNN encoders to learn to attend to object regions.
The T-stream helps CNN encoders adaptively choose to focus on local or global regions. For example, when global information is needed for classification, the CNN encoder learned by \algoname{CARE}\xspace pays more attention to the whole object, as in the last column of (c), rather than a limited region, as shown in (b). On the other hand, when local information is sufficient for classification, the CNN encoder learned via \algoname{CARE}\xspace pays more intense attention to the specific regions (e.g., the animals' heads in (c) in the first and second columns). The attention maps shown in the visualization indicate that the CNN encoder becomes attentive to the object regions via transformer guidance in our \algoname{CARE}\xspace framework.
\section{Experiments}
In this section, we perform experimental validation on our \algoname{CARE}\xspace method. First, we introduce implementation details. Then, we compare our \algoname{CARE}\xspace method to state-of-the-art SSL methods on standard benchmarks, including image classification, object detection, and semantic segmentation. Furthermore, we conduct ablation studies to analyze each component of the proposed \algoname{CARE}\xspace method.
\subsection{Implementation details}\label{sec:implementationDetails}
{\flushleft \bf{Image pre-processing}.}
The training images we use during pretext training are from the ImageNet-1k \cite{russakovsky2015imagenet} dataset. We follow~\cite{grill-nips20-byol} to augment image data before sending them to the encoders. Specifically, we randomly crop patches from one image and resize them to a fixed resolution of $224\times 224$. Then, we perform random horizontal flip and random color distortions on these patches. The Gaussian blur, the decolorization, and the solarization operations are also adopted to preprocess these patches.
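The augmentation recipe above can be sketched as a torchvision pipeline. This is a hedged configuration sketch: the probabilities and distortion strengths below are common BYOL-style values, not necessarily the exact ones used in \algoname{CARE}\xspace.

```python
# Sketch of the augmentation pipeline described above (torchvision).
# Probabilities/strengths are assumed BYOL-style values, not quoted from CARE.
import torchvision.transforms as T

augment = T.Compose([
    T.RandomResizedCrop(224),                                      # random crop + resize to 224x224
    T.RandomHorizontalFlip(p=0.5),                                 # random horizontal flip
    T.RandomApply([T.ColorJitter(0.4, 0.4, 0.2, 0.1)], p=0.8),     # random color distortion
    T.RandomGrayscale(p=0.2),                                      # decolorization
    T.RandomApply([T.GaussianBlur(23, sigma=(0.1, 2.0))], p=0.5),  # Gaussian blur
    T.RandomSolarize(threshold=128, p=0.2),                        # solarization
    T.ToTensor(),
])
```

Two differently augmented views of the same image are then fed to the online and target encoders.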
{\flushleft \bf{Network architectures.}}
We use ResNet encoder backbones~\cite{he-cvpr16-deep} (i.e., ResNet-50, ResNet-101, and ResNet-152) in our experiments. The architectures of the projectors and the predictors are the same and follow~\cite{grill-nips20-byol}. Each projector and predictor consists of a fully-connected layer with a batch normalization and a ReLU~\cite{nair-2010-relu} activation, followed by another fully-connected layer. The transformer in the T-stream contains $n$ attention blocks as shown in Fig.~\ref{fig:trans}.
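A minimal PyTorch sketch of the projector/predictor head just described (two fully-connected layers with batch normalization and ReLU in between). The 4096-d hidden and 256-d output dimensions are assumed BYOL-style values, not numbers quoted from the paper.

```python
import torch.nn as nn

def mlp_head(in_dim=2048, hidden_dim=4096, out_dim=256):
    # FC -> BatchNorm -> ReLU -> FC, as described in the text
    return nn.Sequential(
        nn.Linear(in_dim, hidden_dim),
        nn.BatchNorm1d(hidden_dim),
        nn.ReLU(inplace=True),
        nn.Linear(hidden_dim, out_dim),
    )

projector = mlp_head()            # on top of the ResNet-50 encoder (2048-d features)
predictor = mlp_head(in_dim=256)  # same architecture, applied to projector outputs
```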
{\flushleft \bf{Network training process.}}
We use the SGD optimizer with a momentum of 0.9 during pretext training. The base learning rate is set as $0.05$ and scaled linearly with respect to the batch size~\cite{goyal-2017-accurate} (i.e., $\rm{lr}_{\rm{base}}=0.05\times\rm{BatchSize}/256$). We start the pretext training with a warm-up of 10 epochs where the learning rate rises linearly from $10^{-6}$ to the base learning rate ($\rm{lr}_{\rm{base}}$). Then, we use a cosine decay schedule for the learning rate without restarting it~\cite{loshchilov-2016-sgdr,grill-nips20-byol} to train the network. The momentum update coefficient of network parameters (denoted as $\tau$) is increased from 0.99 to 1 via a cosine design (i.e., $\tau=1-(1-\tau_{\mathrm{base}})\cdot(\cos(\pi t/T)+1)/2$, where $t$ is the current training step and $T$ is the total number of training steps). We train \algoname{CARE}\xspace using 8 Tesla V100 GPUs with a batch size of 1024. The automatic mixed precision training strategy~\cite{micikevicius2017mixed} is adopted for training speedup.
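The learning-rate and momentum-coefficient schedules above can be written down directly. The sketch below is pure Python and follows the formulas in the text (with the 10-epoch warm-up expressed in optimizer steps).

```python
import math

def base_lr(batch_size, lr=0.05):
    # linear scaling rule: lr_base = 0.05 * BatchSize / 256
    return lr * batch_size / 256.0

def lr_at(step, total_steps, warmup_steps, lr_base, lr_start=1e-6):
    if step < warmup_steps:
        # linear warm-up from lr_start (1e-6) to lr_base
        frac = step / warmup_steps
        return lr_start + frac * (lr_base - lr_start)
    # cosine decay (without restarts) over the remaining steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return lr_base * 0.5 * (1.0 + math.cos(math.pi * progress))

def tau_at(step, total_steps, tau_base=0.99):
    # tau = 1 - (1 - tau_base) * (cos(pi * t / T) + 1) / 2, rising 0.99 -> 1
    return 1.0 - (1.0 - tau_base) * (math.cos(math.pi * step / total_steps) + 1.0) / 2.0
```

With a batch size of 1024, `base_lr(1024)` gives the 0.2 learning rate implied by the scaling rule.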
\subsection{Comparison to state-of-the-art approaches}
We compare feature representations of CNN encoders learned by our method and state-of-the-art SSL methods. Comparisons are conducted on recognition scenarios, including image classification (self-supervised and semi-supervised learning configurations), object detection, and semantic segmentation.
\begin{table}[t]
\caption{Linear evaluations on ImageNet with top-1 accuracy (in $\%$). We highlight the best experimental results under the same model parameters in \textbf{bold}.
}
\label{table:linear_classif}
\begin{subtable}{.38\linewidth}
\small
\centering
\caption{Classification accuracy by using the ResNet-50 encoder.}
\label{table:linear_classif_1}
\begin{tabular}[t]{l| c c c }
\toprule
Method & 100ep & 200ep & 400ep \\
\midrule
CMC~\cite{tian-eccv20-CMC} & - & 66.2 & - \\
PCL v2~\cite{li-2020-prototypical} & - & 67.6 & - \\
\algoname{SimCLR}\xspace~\cite{chen-icml20-simclr} & 66.5 & 68.3 & 69.8\\
\algoname{MoCo}\xspace v2~\cite{MoCov2} & 67.4 & 69.9 & 71.0\\
SwAV~\cite{caron-arxiv20-unsupervised} & 66.5 & 69.1 & 70.7 \\
SimSiam~\cite{chen-arxiv20-simsiam} & 68.1 & 70.0 & 70.8 \\
InfoMin Aug.~\cite{tian-2020-makes} & - & 70.1 & - \\
\algoname{BYOL}\xspace~\cite{grill-nips20-byol} & 66.5 & 70.6 & 73.2 \\
Barlow Twins~\cite{zbontar-icml21-barlow} & - & - & 72.5 \\
\algoname{CARE}\xspace (ours) & $\bf{72.0}$ & $\bf{73.8}$ & $\bf{74.7}$ \\
\bottomrule
\end{tabular}
\end{subtable}
\hfill
\begin{subtable}{.62\linewidth}
\small
\centering
\caption{Classification accuracy via CNN and Transformer encoders.}
\label{table:linear_classif_2}
\begin{tabular}[t]{l l c c r r }
\toprule
Method & Arch. & Param. & Epoch & GFlops & Top-1 \\
\midrule
CMC~\cite{tian-eccv20-CMC} & ResNet-50(2$\times$) & 69M & - & 11.4 & 70.6\\
BYOL~\cite{grill-nips20-byol} & ResNet-50(2$\times$) & 69M & 100 & 11.4 & 71.9\\
BYOL~\cite{grill-nips20-byol} & ResNet-101 & 45M & 100 & 7.8 & 72.3\\
BYOL~\cite{grill-nips20-byol} & ResNet-152 & 60M & 100 & 11.6 & 73.3\\
BYOL~\cite{grill-nips20-byol} & ViT-S & 22M & 300 & 4.6 & 71.0\\
BYOL~\cite{grill-nips20-byol} & ViT-B & 86M & 300 & 17.7 & 73.9\\
MoCo v3~\cite{chen-arxiv21-mocov3} & ViT-S & 22M & 300 & 4.6 & 72.5 \\
\algoname{CARE}\xspace (ours) & ResNet-50 & 25M & 200 & 4.1 & $\bf{73.8}$\\
\algoname{CARE}\xspace (ours) & ResNet-50(2$\times$) & 69M & 100 & 11.4 & $\bf{73.5}$\\
\algoname{CARE}\xspace (ours) & ResNet-50(2$\times$) & 69M & 200 & 11.4 & $\bf{75.0}$\\
\algoname{CARE}\xspace (ours) & ResNet-101 & 45M & 100 & 7.8 & $\bf{73.5}$\\
\algoname{CARE}\xspace (ours) & ResNet-152 & 60M & 100 & 11.6 & $\bf{74.9}$\\
\bottomrule
\end{tabular}
\end{subtable}
\vspace{-0.5em}
\end{table}
\begin{table}[t]
\caption{Linear evaluations on ImageNet with top-1 and top-5 accuracy (in \%). We present the experimental results of different CNN encoders trained with more epochs (i.e., 200, 400, and 800 epochs).}
\label{table:linear_classif_more}
\renewcommand{\tabcolsep}{3.5mm}
\small
\centering
\begin{tabular}[t]{l l c c r r r }
\toprule
Method & Arch. & Param. & Epoch & GFlops & Top-1 & Top-5 \\
\midrule
BYOL~\cite{grill-nips20-byol} & ResNet-50 & 25M & 800 & 4.1 & 74.3 & 91.7\\
BYOL~\cite{grill-nips20-byol} & ResNet-50(2$\times$) & 69M & 800 & 11.4 & 76.2 & 92.8\\
BYOL~\cite{grill-nips20-byol} & ResNet-101 & 45M & 800 & 7.8 & 76.6 & 93.2 \\
BYOL~\cite{grill-nips20-byol} & ResNet-152 & 60M & 800 & 11.6 & 77.3 & 93.3\\
\algoname{CARE}\xspace (ours) & ResNet-50 & 25M & 200 & 4.1 & 73.8 & 91.5\\
\algoname{CARE}\xspace (ours) & ResNet-50 & 25M & 400 & 4.1 & 74.7 & 92.0\\
\algoname{CARE}\xspace (ours) & ResNet-50 & 25M & 800 & 4.1 & 75.6 & 92.3\\
\algoname{CARE}\xspace (ours) & ResNet-50(2$\times$) & 69M & 200 & 11.4 & 75.0 & 92.2\\
\algoname{CARE}\xspace (ours) & ResNet-50(2$\times$) & 69M & 400 & 11.4 & 76.5 & 93.0\\
\algoname{CARE}\xspace (ours) & ResNet-50(2$\times$) & 69M & 800 & 11.4 & 77.0 & 93.2\\
\algoname{CARE}\xspace (ours) & ResNet-101 & 45M & 200 & 7.8 & 75.9 & 92.7\\
\algoname{CARE}\xspace (ours) & ResNet-101 & 45M & 400 & 7.8 & 76.9 & 93.3\\
\algoname{CARE}\xspace (ours) & ResNet-101 & 45M & 800 & 7.8 & 77.2 & 93.5\\
\algoname{CARE}\xspace (ours) & ResNet-152 & 60M & 200 & 11.6 & 76.6 & 93.1\\
\algoname{CARE}\xspace (ours) & ResNet-152 & 60M & 400 & 11.6 & 77.4 & 93.6\\
\algoname{CARE}\xspace (ours) & ResNet-152 & 60M & 800 & 11.6 & 78.1 & 93.8\\
\bottomrule
\end{tabular}
\vspace{-0.5em}
\end{table}
{\flushleft \bf{Self-supervised learning on image classifications.}}
We follow~\cite{he-cvpr20-moco} to use standard linear classification protocol where the parameters of the encoder backbone are fixed and an additional linear classifier is added to the backbone. We train this classifier using SGD for 80 epochs with a learning rate of 0.2, a momentum of 0.9, and a batch size of 256. The {ImageNet}\xspace training set is used for the training and the {ImageNet}\xspace validation set is used for evaluation.
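A hedged PyTorch sketch of this protocol: the backbone is frozen and only a linear head is trained with the stated SGD settings. The 2048-d feature size assumes ResNet-50; the data loading and training loop are omitted.

```python
import torch.nn as nn
import torch.optim as optim

def linear_eval_setup(backbone, feat_dim=2048, num_classes=1000):
    # freeze the pretext-trained encoder: only the linear head is learned
    for p in backbone.parameters():
        p.requires_grad = False
    backbone.eval()
    classifier = nn.Linear(feat_dim, num_classes)  # the only trainable module
    # SGD settings from the text: lr 0.2, momentum 0.9 (batch size 256, 80 epochs)
    optimizer = optim.SGD(classifier.parameters(), lr=0.2, momentum=0.9)
    return classifier, optimizer
```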
Table~\ref{table:linear_classif} shows the linear evaluation results with the top-1 accuracy. We show the classification results obtained by the ResNet-50 encoder learned via different SSL methods in Table~\ref{table:linear_classif_1}. In this table, our \algoname{CARE}\xspace method consistently outperforms other methods under different training epochs. Specifically, our method achieves a 74.7\% top-1 accuracy under 400 training epochs, which is 1.5\% higher than the second-best method, BYOL. Meanwhile, we compare our method to other methods that use CNN and transformer (i.e., ViT~\cite{dosovitskiy-iclr20-image}) encoders in Table~\ref{table:linear_classif_2}. The results show that, with a similar number of encoder parameters (i.e., ResNet-50 vs. ViT-S), \algoname{CARE}\xspace achieves higher accuracy than other SSL methods.
This indicates that \algoname{CARE}\xspace improves CNN encoders to outperform transformer encoders by utilizing visual attention.
Besides, we provide the linear classification results of different CNN encoders (e.g., ResNet-101 and ResNet-152) with more training time in Table~\ref{table:linear_classif_more}, where our \algoname{CARE}\xspace method also consistently prevails.
\begin{table}[t]
\caption{Image classification by using semi-supervised training on ImageNet with Top-1 and Top-5 accuracy (in \%). We report our method with more training epochs in the supplementary files.}
\label{table:semi_exp}
\begin{subtable}{.38\linewidth}
\small
\centering
\caption{Classification accuracy by using the ResNet-50 encoder.}
\label{table:linear_semi_1}
\renewcommand{\tabcolsep}{0.6mm}
\begin{tabular}[t]{l c c c c c}
\toprule
Method & Epoch & \multicolumn{2}{c}{Top-$1$} & \multicolumn{2}{c}{Top-$5$} \\
& & $1\%$ & $10\%$ & $1\%$ & $10\%$ \\
\midrule
Supervised~\cite{zhai2019s4l} & - & $25.4$ & $56.4$ & $48.4$ & $80.4$ \\
\midrule
PIRL \cite{misra2020self} & - & - & - & $57.2$ & $83.8$\\
\algoname{SimCLR}\xspace \cite{chen-icml20-simclr} & 800 & $48.3$ & $65.6$ & $75.5$ & $87.8$\\
\algoname{BYOL}\xspace\cite{grill-nips20-byol} & $800$ & $53.2$ & $68.8$ & $78.4$ & $89.0$\\
\algoname{CARE}\xspace (Ours) & 400 & \bf{60.0} & \bf{69.6} & \bf{81.3} & \bf{89.3} \\
\bottomrule
\end{tabular}
\end{subtable}
\hfill
\begin{subtable}{.555\linewidth}
\small
\centering
\caption{Classification accuracy by using other CNN encoders.}
\label{table:linear_semi_2}
\renewcommand{\tabcolsep}{.8mm}
\begin{tabular}[t]{l l c c c c c}
\toprule
Method & Arch. & Epoch & \multicolumn{2}{c}{Top-$1$} & \multicolumn{2}{c}{Top-$5$} \\
& & & $1\%$ & $10\%$ & $1\%$ & $10\%$ \\
\midrule
\algoname{BYOL}\xspace\cite{grill-nips20-byol} & ResNet-50(2$\times$) & 100 & 55.6 & 66.7 & 77.5 & 87.7\\
\algoname{BYOL}\xspace\cite{grill-nips20-byol} & ResNet-101 & 100 & 55.8 & 65.8 & 79.5 & 87.4\\
\algoname{BYOL}\xspace\cite{grill-nips20-byol} & ResNet-152 & 100 & 56.8 & 67.2 & 79.3 & 88.1\\
\algoname{CARE}\xspace (ours) & ResNet-50(2$\times$) & 100 & 57.4 & 67.5 & 79.8 & 88.3 \\
\algoname{CARE}\xspace (ours) & ResNet-50(2$\times$) & 200 & 61.2 & 69.6 & 82.3 & 89.5 \\
\algoname{CARE}\xspace (ours) & ResNet-101 & 100 & 57.1 & 67.1 & 80.8 & 88.2\\
\algoname{CARE}\xspace (ours) & ResNet-101 & 200 & 62.2 & 70.4 & 85.0 & 89.8\\
\algoname{CARE}\xspace (ours) & ResNet-152 & 100 & 59.4 & 69.0 & 82.3 & 89.0\\
\bottomrule
\end{tabular}
\end{subtable}
\vspace{-0.5em}
\end{table}
{\flushleft \bf{Semi-supervised learning on image classifications.}}
We evaluate our \algoname{CARE}\xspace method by using a semi-supervised training configuration on the ImageNet dataset. After pretext training, we finetune the encoder by using a small subset of ImageNet's training set. We follow the semi-supervised learning protocol~\cite{grill-nips20-byol,chen-icml20-simclr} to use $1\%$ and $10\%$ training data (the same data splits as in~\cite{chen-icml20-simclr}) during finetuning. Table~\ref{table:semi_exp} shows the top-1 and top-5 accuracy on the {ImageNet}\xspace validation set. The results indicate that our \algoname{CARE}\xspace method achieves higher classification accuracy than other SSL methods under different encoder backbones and training epochs.
\begin{table}[t]
\small
\caption{Transfer learning to object detection and instance segmentation. The best two results in each column are in bold. Our method achieves favorable detection and segmentation performance by using limited training epochs.}
\label{tab:detection}
\begin{center}
\renewcommand{\tabcolsep}{1.8mm}
\begin{tabular}{lc|ccc|ccc|ccc}
\toprule
& & \multicolumn{3}{c|}{COCO det.} & \multicolumn{3}{c|}{COCO instance seg.} & \multicolumn{3}{c}{VOC07+12 det.}\\
\cmidrule(lr){3-5}\cmidrule(lr){6-8}\cmidrule(lr){9-11}
Method & Epoch & AP$^{bb}$ & AP$^{bb}_{50}$ & AP$^{bb}_{75}$ & AP$^{mk}$ & AP$^{mk}_{50}$ & AP$^{mk}_{75}$ & AP & AP$_{50}$ & AP$_{75}$ \\
\midrule
Rand Init & - &26.4 &44.0 &27.8 &29.3 &46.9 &30.8 &33.8 &60.2 &33.1 \\
Supervised & 90 & 38.2 &58.2 &41.2 &33.3 &54.7 &35.2 &53.5 &81.3 &58.8 \\
\midrule
PIRL\cite{misra2020self} & 200 & 37.4 & 56.5 & 40.2 & 32.7 & 53.4 & 34.7 &55.5 &81.0 &61.3 \\
MoCo\cite{he-cvpr20-moco} & 200 &38.5 &58.3 &41.6 &33.6 &54.8 &35.6 &55.9 &81.5 &62.6 \\
MoCo-v2\cite{MoCov2} & 200 &38.9 &58.4 &42.0 &34.2 &55.2 &36.5 &57.0 &82.4 &63.6 \\
MoCo-v2\cite{MoCov2} & 800 & {39.3} & 58.9 & {42.5} & {34.4} & 55.8 & 36.5 & {57.4} & 82.5 & {64.0} \\
SwAV\cite{caron-arxiv20-unsupervised} & 200 & 32.9 & 54.3 & 34.5 & 29.5 & 50.4 & 30.4 & - & - & - \\
SwAV\cite{caron-arxiv20-unsupervised} & 800 & 38.4 & 58.6 & 41.3 & 33.8 & 55.2 & 35.9 & 56.1 & {82.6} & 62.7 \\
Barlow Twins\cite{zbontar-icml21-barlow} & 1000 & 39.2 & 59.0 & {42.5} & 34.3 & {56.0} & 36.5 & 56.8 & {82.6} & 63.4 \\
\algoname{BYOL}\xspace\cite{grill-nips20-byol} & 200 &{39.2} &{58.9} &{42.4} &{34.3} &{55.6} &{36.7} & 57.0 & 82.3 & 63.6 \\
\algoname{CARE}\xspace (Ours) & 200 &\bf{39.4} &\bf{59.2} &\bf{42.6} &\bf{34.6} &\bf{56.1} &\bf{36.8} & \bf{57.7} & \bf{83.0} & \bf{64.5} \\
\algoname{CARE}\xspace (Ours) & 400 &\bf{39.6} &\bf{59.4} &\bf{42.9} &\bf{34.7} &\bf{56.1} &\bf{36.9} & \bf{57.9} & \bf{83.0} & \bf{64.7} \\
\bottomrule
\end{tabular}
\end{center}
\label{tab:coco1x_200e}
\vspace{-2em}
\end{table}
{\flushleft \bf{Transfer learning to object detection and semantic segmentation.}}
We evaluate \algoname{CARE}\xspace's representations on the downstream object detection and semantic segmentation scenarios. We use the standard VOC-07, VOC-12, and COCO datasets~\cite{everingham2010pascal,lin2014microsoft}. We follow the standard protocol~\cite{he-cvpr20-moco} to integrate the pretext-trained CNN encoder into Faster-RCNN~\cite{ren2015faster} when evaluating object detection on the VOC-07 and VOC-12 datasets. On the other hand, we integrate this encoder into Mask-RCNN~\cite{he2017mask} when evaluating object detection and semantic segmentation on the COCO dataset. The ResNet-50 encoder is used in all the methods. All detectors are finetuned for 24k iterations using the VOC-07 and VOC-12 training sets and are evaluated on the VOC-07 test set. On the COCO dataset, all models are finetuned via the $1\times$ schedule. The results are averaged over five independent trials.
Table~\ref{tab:coco1x_200e} shows the evaluation results.
Compared to the supervised method, our \algoname{CARE}\xspace improves detection (i.e., $1.4\%$ on COCO and $4.4\%$ on VOC) and segmentation (i.e., $1.4\%$ on COCO) performance. On the other hand, our \algoname{CARE}\xspace method compares favorably against other SSL methods. Note that the results of our method are reported under 200 and 400 epochs, which are still higher than those of other methods under 800 epochs. This indicates the effectiveness of our \algoname{CARE}\xspace method in learning the CNN encoder backbone. The comparisons on the COCO dataset are similar to those on VOC. Specifically, our \algoname{CARE}\xspace method achieves a $0.5\%$ AP$^{bb}$ increase and a $0.4\%$ AP$^{mk}$ increase over MoCo v2 under 200 epochs. The performance improvement is mainly brought by the visual attention integration from transformer guidance.
Besides, we further evaluate \algoname{CARE}\xspace's representation on the COCO dataset via a more powerful detector, the feature pyramid network (FPN)~\cite{lin2017feature}, and report the results in Table~\ref{tab:detection3}. We follow the same evaluation protocol introduced above.
The detectors are trained with 1$\times$ schedule (90k iterations) for fair comparisons.
Again, \algoname{CARE}\xspace trained for 200/400 epochs outperforms other state-of-the-art SSL methods trained for 800/1000 epochs on object detection and semantic segmentation on the COCO dataset. This suggests that the CNN encoder in \algoname{CARE}\xspace is empowered by the attention mechanism of the transformer that supervises it during pretraining.
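For concreteness, one possible way to wire the pretext-trained encoder into the Mask R-CNN R50-FPN detector with detectron2 is sketched below. The config name is detectron2's standard 1$\times$ COCO recipe; the checkpoint filename is a placeholder for a \algoname{CARE}\xspace checkpoint converted to detectron2's weight format (the paper's exact tooling may differ).

```python
# Hypothetical detectron2 setup for COCO finetuning with a converted checkpoint.
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml"))
cfg.MODEL.WEIGHTS = "care_r50_d2_format.pkl"  # placeholder: converted SSL checkpoint
cfg.SOLVER.MAX_ITER = 90000                   # the 1x schedule (90k iterations)

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```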
\begin{table}[t]
\small
\caption{Transfer learning to object detection and instance segmentation with the Mask R-CNN R50-FPN detector. The best two results in each column are in bold. Our method achieves favorable detection and segmentation performance by using limited training epochs.}
\label{tab:detection3}
\begin{center}
\renewcommand{\tabcolsep}{3.5mm}
\begin{tabular}{lc|ccc|ccc}
\toprule
& & \multicolumn{3}{c|}{COCO det.} & \multicolumn{3}{c}{COCO instance seg.} \\
\cmidrule(lr){3-5}\cmidrule(lr){6-8}
Method & Epoch & AP$^{bb}$ & AP$^{bb}_{50}$ & AP$^{bb}_{75}$ & AP$^{mk}$ & AP$^{mk}_{50}$ & AP$^{mk}_{75}$ \\
\midrule
Rand Init & - &31.0 &49.5 &33.2 &28.5 &46.8 &30.4 \\
Supervised & 90 &38.9 &59.6 &42.7 &35.4 &56.5 &38.1 \\
\midrule
PIRL\cite{misra2020self} & 200 & 37.5 & 57.6 & 41.0 &34.0 & 54.6 & 36.2 \\
MoCo\cite{he-cvpr20-moco} & 200 &38.5 &58.9 & 42.0 &35.1 & 55.9 &37.7 \\
MoCo-v2\cite{MoCov2} & 200 &38.9 &59.4 &42.4 &35.5 & 56.5 &38.1 \\
MoCo-v2\cite{MoCov2} & 800 & {39.4} & 59.9 & {43.0} & {35.8} & 56.9 & 38.4 \\
SwAV\cite{caron-arxiv20-unsupervised} & 200 & 38.5 & \bf{60.4} & 41.4 &35.4 & 57.0 & 37.7 \\
\algoname{BYOL}\xspace\cite{grill-nips20-byol} & 200 &{39.1} &{59.5} &{42.7} &{35.6} &{56.5} &{38.2} \\
\algoname{BYOL}\xspace\cite{grill-nips20-byol} & 400 &{39.2} &{59.6} &{42.9} &{35.6} &{56.7} &{38.2} \\
\algoname{BYOL}\xspace\cite{grill-nips20-byol} & 800 &{39.4} &{59.9} &{43.0} &{35.8} &{56.8} &\bf{38.5} \\
Barlow Twins\cite{zbontar-icml21-barlow} & 1000 & 36.9 & 58.5 & {39.7} & 34.3 & {55.4} & 36.5 \\
\algoname{CARE}\xspace (Ours) & 200 &\bf{39.5} &{60.2} &\bf{43.1} &\bf{35.9} &\bf{57.2} &\bf{38.5} \\
\algoname{CARE}\xspace (Ours) & 400 &\bf{39.8} &\bf{60.5} &\bf{43.5} &\bf{36.2} &\bf{57.4} &\bf{38.8} \\
\bottomrule
\end{tabular}
\end{center}
\vspace{-2em}
\end{table}
\subsection{Ablation studies}
In our \algoname{CARE}\xspace method, visual attention is explored via transformers to supervise C-stream. We analyze the influence of attention supervision by using different values of $\lambda$ in Eq.~\eqref{eq:total}. Also, we analyze how the number of attention blocks and the positional encoding affect feature representations. Besides, we further investigate the sequential and parallel designs of the T-stream. We use ResNet-50 as the encoder backbone and set the number of training epochs to 100. The top-1 image classification accuracy on ImageNet via SSL is reported to indicate feature representation effectiveness.
{\flushleft \bf Supervision influence $\lambda$.}
We study how the attention supervision term $\mathcal{L}_{\rm att}$ in Eq.~\eqref{eq:total} affects feature representation by using different values of $\lambda$. Table~\ref{tab:abl_lambda} shows the evaluation results. When we set $\lambda$ as 0, the CNN encoder is learned without attention supervision. In this case, the performance decreases significantly. When we increase the value of $\lambda$, the attention supervision increases as well. We observe that $\lambda=100$ achieves the best performance and adopt this setting in the overall experiments.
\begin{table}
\small
\begin{minipage}{0.3\linewidth}
\centering
\caption{Analysis on $\lambda$. }
\renewcommand{\tabcolsep}{6mm}
\begin{tabular}{c | c }
\toprule
$\lambda$ & Top-1 \\
\midrule
0 & $70.52$ \\
1 & $70.88$ \\
10 & $70.96$ \\
100 &$\bf{72.06}$ \\
250 & $72.00$ \\
\bottomrule
\end{tabular}
\label{tab:abl_lambda}
\end{minipage}
\begin{minipage}{0.3\linewidth}
\centering
\caption{Analysis on $n$. }
\renewcommand{\tabcolsep}{6mm}
\begin{tabular}{c | c }
\toprule
$n$ & Top-1 \\
\midrule
2& $71.08$ \\
3 & $71.60$ \\
4 & $71.79$ \\
5 &$\bf{72.06}$ \\
6 & $69.37$ \\
\bottomrule
\end{tabular}
\label{tab:abl_number_t}
\end{minipage}
\begin{minipage}{0.3\linewidth}
\centering
\caption{Analysis on the positional encoding. }
\renewcommand{\tabcolsep}{4mm}
\begin{tabular}{l | c }
\toprule
Position encoding & Top-1 \\
\midrule
none & $69.49$ \\
sin-cos absolute~\cite{vaswani-nips17-trans} & $66.68$ \\
learnable absolute~\cite{srinivas-2021-bot} & $72.01$ \\
learnable relative~\cite{shaw-2018-self} &$\bf{72.06}$ \\
\bottomrule
\end{tabular}
\label{tab:abl_pos}
\end{minipage}
\vspace{-2em}
\end{table}
{\flushleft \bf Number of attention blocks.}
We analyze how the capacity of the transformer in the T-stream affects recognition performance. We set $n=[2,...,6]$ to constitute five transformers in the T-stream with increasing capacities. Then, we report the recognition performance of the corresponding CNN encoders in Table~\ref{tab:abl_number_t}. When $n$ is larger, the transformer capacity increases and stronger visual attention is explored. This attention supervises C-stream to improve the encoder. However, the recognition performance drops when $n=6$. This may be due to a broken balance between the attention loss $\mathcal{L}_{\rm att}$ and the original SSL loss $\mathcal{L}_c$. In our experiments, we set $n=5$ in our \algoname{CARE}\xspace method.
{\flushleft \bf Positional encoding.}
We analyze how positional encoding affects the final performance. Several positional encoding settings~\cite{vaswani-nips17-trans,srinivas-2021-bot,shaw-2018-self} are compared in Table~\ref{tab:abl_pos}.
Without positional encoding, the performance decreases significantly, and the conventional fixed sine-cosine encoding is not suitable for \algoname{CARE}\xspace. We experimentally find that learnable positional encodings improve recognition performance, and we adopt the relative encoding of~\cite{shaw-2018-self} in \algoname{CARE}\xspace.
{\flushleft \bf Sequential design \textit{v.s.} parallel design.} Experimental results on training ResNet-50 with 100 epochs indicate that the parallel design is more effective than the sequential design (i.e., 72.02\% v.s. 69.32\%). This is because, during the sequential training process, the CNN encoder and the transformer are optimized jointly rather than the CNN encoder alone. This prevents the attention supervision from training the CNN encoder thoroughly.
\section{Concluding remarks}\label{sec:conclusion}
Transformers have advanced visual recognition via attention-based architectures. In self-supervised visual representation learning, studies have emerged that utilize transformer backbones to improve recognition performance, indicating that the visual attention explored by transformers benefits SSL methods. Motivated by this success, we investigate how to exploit visual attention effectively to benefit CNN encoders in SSL. We propose \algoname{CARE}\xspace, which develops a transformer stream in parallel with the CNN stream.
Visual attention is thus explored via transformers to supervise the CNN stream during SSL. Although a limitation is that the attention supervision loss term adds computational cost during SSL, the learned CNN encoder becomes attentive with transformer guidance and incurs no extra cost in downstream tasks. Experiments on standard visual recognition benchmarks, including image classification, object detection, and semantic segmentation, indicate that \algoname{CARE}\xspace improves CNN encoder backbones to new state-of-the-art performance.
{\flushleft \bf Acknowledgement}. This work is supported by the CCF-Tencent Open Fund, the General Research Fund of Hong Kong No. 27208720,
and the EPSRC Programme Grant Visual AI EP/T028572/1.
\clearpage
{\small
\bibliographystyle{plainnat}
The past few years have seen an enormous increase in the observational data collected for galaxies that had formed in the first billion years of the Universe thanks to a combination of state of the art observatories (most notably the Hubble Space Telescope; {\it HST}) as well as refined selection methods. In the latter category, the Lyman Break technique has been exceptionally successful at building up a statistically significant repository of $z \lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 6$ Lyman Break Galaxies \citep[LBGs; e.g.][]{mclure2009,mclure2010,mclure2013,livermore2017,bouwens2015,
bouwens2010a,bowler2014a,atek2015,oesch2014}. The measured ultra-violet (UV) luminosity (between $1250-1500$\AA\, in the rest-frame) from the above-mentioned works has been used to construct the evolving UV luminosity function (UV LF) all the way to $z \sim 10$, allowing unprecedented studies of the key feedback physics of early galaxies. One of the key feedback effects is associated with Type II supernovae that can potentially heat or blow out a significant fraction (or even all) of the gas content in low-mass halos \citep[e.g.][]{1999ApJ...513..142M}. The second feedback effect is that associated with cosmic reionization in the redshift range $15 \gtrsim z \gtrsim 6$ \citep{fan2006, stark2011, planck2018}.
During reionization, photoionization heating from the continually rising UV background (UVB) can raise the gas temperature to about $2 \times 10^4$ K in ionized regions \citep{miralda-escude1994}, which, in principle, could result in the UVB photo-evaporating gas from the lowest-mass galaxies, suppressing further star formation. Given that many existing models assume these galaxies to be the key reionization sources \citep{choudhury2007,finlator2011,wise2014,robertson2015,2017ApJ...836...16D}, the impact of this UV feedback is critical both for galaxy formation and for the process of reionization.
However, so far, the fluctuating UVB has only been measured at relatively low redshifts \citep[$z \sim 5-6$;][]{2015MNRAS.447.3402B,2015MNRAS.453.2943C,2017MNRAS.465.3429C}. Further, since the baryonic content of a halo exposed to a UVB depends on a multitude of parameters, including the redshift, the thermal history and the intensity of the UVB, the halo baryon fraction during reionization remains a matter of debate \citep{okamoto2008, wise2012b, hasegawa2013, sobacchi2013b}. A number of works find the lowest mass haloes to be impervious to the UVB unless the key reionization sources are either molecular-cooling driven \citep{sobacchi2013b}, rapidly losing their gas after SN explosions \citep{pawlik2015}, or low-mass galaxies that contain little/no molecular gas in the first place \citep{gnedin2014}. On the other hand, other works find the UVB to suppress the star formation rate at high-$z$ \citep{petkova2011, finlator2011, hasegawa2013}. Naturally, while the first school of thought would predict no impact of the UVB on the UV LF \citep[e.g.][]{gnedin2014}, in the latter case, the faint-end slope of the UV LF (typically denoted by $\alpha$) would become shallower due to the decreasing star formation efficiencies of low-mass haloes \citep[see e.g.][]{2015ApJ...806...67D,bremer2018}.
In this paper, we propose a {\it proof-of-concept} calculation that uses observations of the faint end of the UV LF in different fields to yield hints on the fluctuating UVB. Our calculations are based on the premise that supernova feedback, effectively depending on the ratio between the star formation rate and the halo potential, should be the same in every field observed, barring cosmic variance. On the other hand, feedback from a fluctuating UVB can potentially result in UV LF faint-end slopes that vary from field to field. This is an ideal time to undertake such analyses given that the James Webb Space Telescope ({\it JWST}) is expected to re-observe the six lensed Hubble Frontier Fields, yielding a significant sample of $z \lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 6$ galaxies extending to UV magnitudes as faint as ${\rm M_{UV}} \sim -12.5$.
\begin{figure*}
\center{\includegraphics[width=0.75\textwidth]{lumfun_z.pdf}}
\caption{The evolving UV luminosity function (LF) for $z \simeq 6-10$ with the model parameter values [$10^3 \epsilon_*, Q_{\rm HI}, \log_{10}(M_{\rm crit} / \rm M_\odot)]$ as marked at the top of each panel. The points with error-bars represent the observational data
\citep{mclure2009,livermore2017, bouwens2015,
bouwens2010a, mclure2010, mclure2013, bowler2014a,
atek2015,
oesch2014}, while the different curves show the predictions from our model. The green dotted (blue dashed) curves are the UV LFs for the neutral (ionized) regions. Note that the faint end of the LFs in the ionized regions are affected by UV feedback. The red solid curves denote the globally averaged UV LF. }
\label{fig:lumfun_z}
\end{figure*}
\section{Theoretical model}
\subsection{Modelling the Ultra-violet luminosity function}
The modelling of galaxy formation, in general, involves a number of complex physical processes \citep[for reviews on different aspects of galaxy formation, see, e.g.][]{1988RvMP...60....1O,2005ARA&A..43..769V,2007ARA&A..45..565M,2014ARA&A..52..291C,2015arXiv151103457K,2015ARA&A..53...51S}. The simplest models assume that each dark matter halo contains only one galaxy and the luminosity of the galaxy is primarily determined by the corresponding halo mass. In that case, the observed UV LF can be modelled as a scaled halo mass function (HMF) at that redshift.
In this work, we assume that in absence of any feedback, the UV luminosity of a halo is proportional to the halo mass, $M_h$, such that
\begin{equation}
{\rm L^{nofb}_{1375}}(M_h) = \epsilon_* ~\left(\frac{\Omega_b}{\Omega_m}\right) ~ M_h~ l_{1375},
\label{eq:L_Mh}
\end{equation}
where the term $(\Omega_b/\Omega_m)$ represents the cosmological baryon fraction. Further, $l_{1375}=10^{33.07}~\mbox{erg~s}^{-1}$ \AA$^{-1}~\rm M_\odot^{-1}$ is the specific ultra-violet luminosity for a newly formed stellar population assuming a metallicity of $5\%$ of the solar value and a Salpeter initial mass function (IMF) between $0.1-100\rm M_\odot$. Finally, $\epsilon_*$ is the fraction of baryons in the halo that get converted into stars. Physically, $\epsilon_*$ is the product of the baryon fraction that can cool and the cold gas fraction that can form stars. We assume the combination $\epsilon_*~l_{1375}$ to be independent of $M_h$ (although it can depend on $z$). Note that any deviation of $l_{1375}$ from this fiducial value can be absorbed within the unknown parameter $\epsilon_*$.
The relation between $l_{1375}$ and $M_h$ gets modified in the presence of feedback processes. The radiative feedback arising from the UVB can suppress the gas fraction in low mass haloes in ionized regions. We assume that the decrease in the total galaxy luminosity due to this UV radiative feedback can be modelled through the simple relation \citep[e.g.][]{sobacchi2013b}
\begin{equation}
{\rm L^{uvfb}_{1375}}(M_h) = \epsilon_* ~2^{- M_{\rm crit} / M_h}~\left(\frac{\Omega_b}{\Omega_m}\right)~M_h~l_{1375},
\label{eq:L_Mh_fb}
\end{equation}
where $M_{\rm crit}$ is the critical halo mass characterizing the effect of feedback. In fact, the above form implies that the luminosity of a galaxy in a halo of mass $M_{\rm crit}$ ($0.1 M_{\rm crit}$) decreases by a factor $2$ ($\sim 1000$) in the presence of feedback. Although more complicated forms for UV feedback suppression exist in the literature \citep{2000ApJ...542..535G}, the above simple form has been shown to suffice for modelling the evolving UV LF at high redshift \citep[see e.g.][]{2015ApJ...806...67D}.
The UV luminosities obtained above can be converted to an absolute UV magnitude (in the standard AB system) using ${\rm M_{UV}} = -2.5 \log_{10}({\rm L}_{1375}) + 51.60$ where ${\rm L}_{1375}$ is the total UV luminosity (in ${\rm erg\, s^{-1} \, {Hz}^{-1}}$) from the galaxy.
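As a quick numerical check, the suppression factor and the magnitude conversion can be coded directly. This is a minimal sketch; note that the conversion takes a total luminosity in ${\rm erg\, s^{-1}\, Hz^{-1}}$, so the per-\AA\ specific luminosity quoted earlier must first be converted.

```python
import math

def suppression(M_h, M_crit):
    """UV-feedback suppression factor 2**(-M_crit / M_h)."""
    return 2.0 ** (-M_crit / M_h)

def M_UV(L_nu):
    """AB magnitude for a total UV luminosity L_nu in erg/s/Hz."""
    return -2.5 * math.log10(L_nu) + 51.60

# A halo at M_h = M_crit is suppressed by a factor of 2; at 0.1 M_crit by
# 2**10 ~ 1000, matching the factor-2 / factor-~1000 statement in the text.
```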
Naturally, the UVB will be non-zero only in volumes that are ionized, while neutral regions would be devoid of any ionizing photons. Consequently, radiative feedback will suppress the gas content in only those galaxies which form in already ionized regions. If $Q_{\rm HI}$ is the \emph{neutral} volume fraction of the universe, we expect that a fraction $Q_{\rm HII} \equiv (1 - Q_{\rm HI})$ of galaxies will be affected by feedback \citep{2005MNRAS.361..577C,2017ApJ...836...16D}. Under these assumptions, one can compute the globally averaged UV LF as a combination of a fully-suppressed UV LF in ionized regions ($\Phi^{\rm uvfb}$) and an unaffected UV LF ($\Phi^{\rm nofb}$) in neutral regions such that
\begin{eqnarray}
\Phi({\rm M_{UV}}) \!\!\!\!\!\! &=& \!\!\!\!\!\! (1 - Q_{\rm HI})~\Phi^{\rm uvfb}({\rm M_{UV}}) + Q_{\rm HI}~\Phi^{\rm nofb}({\rm M_{UV}})
\nonumber \\
\!\!\!\!\!\! &=& \!\!\!\!\!\! \frac{\ensuremath{{\rm d}} n}{\ensuremath{{\rm d}} M_h} \left[Q_{\rm HII}~\frac{\ensuremath{{\rm d}} M_h}{\ensuremath{{\rm d}} {\rm L^{uvfb}_{1375}}}~\frac{\ensuremath{{\rm d}} {\rm L^{uvfb}_{1375}}}{\ensuremath{{\rm d}} {\rm M_{UV}}}
+ Q_{\rm HI}~\frac{\ensuremath{{\rm d}} M_h}{\ensuremath{{\rm d}} {\rm L^{nofb}_{1375}}}~\frac{\ensuremath{{\rm d}} {\rm L^{nofb}_{1375}}}{\ensuremath{{\rm d}} {\rm M_{UV}}} \right],
\nonumber \\
\end{eqnarray}
where $\ensuremath{{\rm d}} n / \ensuremath{{\rm d}} M_h$ is the halo mass function\footnote{In this work, we use the HMF ($\ensuremath{{\rm d}} n / \ensuremath{{\rm d}} M_h$) of \citet{1999MNRAS.308..119S,2001MNRAS.323....1S}. We use a flat $\Lambda$CDM cosmology with $\Omega_m = 0.308, \Omega_b = 0.0482, h = 0.678, n_s = 0.961, \sigma_8 = 0.829$ \citep{2014A&A...571A..16P}.}.
Thus in our model the UV LF can be calculated once we fix three parameters: $\epsilon_*, M_{\rm crit}$ and $Q_{\rm HI}$.
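To see how these parameters shape the LF, the model can be evaluated numerically. The sketch below is a Monte-Carlo version of Eq. (2) with a toy power-law mass function standing in for the Sheth--Tormen HMF; the sampling limits and the luminosity normalization are assumptions chosen only so that the toy magnitudes span the observed range.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy power-law halo mass function dn/dMh ~ Mh^-1.9 (an assumption standing in
# for the Sheth-Tormen mass function used in the text), sampled by inverse CDF.
def sample_halos(n, mmin=1e8, mmax=1e12, slope=-1.9):
    a = slope + 1.0
    u = rng.random(n)
    return (mmin**a + u * (mmax**a - mmin**a)) ** (1.0 / a)

def M_uv(Mh, eps=1.5e-3, fb=0.156, l1375=1.2e21, mcrit=None):
    """AB magnitude of a halo; l1375 is a hypothetical normalization chosen
    only so the toy magnitudes land in the observed range."""
    L = eps * fb * Mh * l1375
    if mcrit is not None:                  # UV feedback suppression, Eq. (1)
        L = L * 2.0 ** (-mcrit / Mh)
    return -2.5 * np.log10(L) + 51.60

def uv_lf(Mh, q_hi, mcrit=10**9.5, bins=np.arange(-23.0, -11.0, 0.5)):
    """Globally averaged LF, Eq. (2) in Monte-Carlo form: a random fraction
    (1 - Q_HI) of halos sits in ionized patches and is feedback-suppressed."""
    ionized = rng.random(Mh.size) > q_hi
    mags = np.where(ionized, M_uv(Mh, mcrit=mcrit), M_uv(Mh))
    counts, _ = np.histogram(mags, bins=bins)
    return counts

halos = sample_halos(200_000)
phi_neutral = uv_lf(halos, q_hi=1.0)   # no feedback anywhere
phi_ionized = uv_lf(halos, q_hi=0.0)   # feedback everywhere
```

With feedback switched on everywhere ($Q_{\rm HI}=0$) the faint bins are strongly depleted while the bright bins are essentially unchanged, which is precisely the faint-end flattening produced by the suppression of luminosity in low-mass haloes.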
\begin{table}
\begin{center}
\begin{tabular}{|c|ccccc|}
\hline
$z$ & 6 & 7 & 8 & 9 & 10 \\
\hline
$10^3 \epsilon_*$ & 1.5 & 2.5 & 3.3 & 2.5 & 4.7 \\
\hline
\end{tabular}
\end{center}
\caption{Values of $\epsilon_*$ constrained from the bright-end of the UV LF at the different redshifts shown in Columns 2-6.}
\label{tab:eps_*}
\end{table}
\subsubsection{Constraints on the star formation efficiency}
\label{sfe}
We start by discussing the observational constraints on the star formation efficiency parameter $\epsilon_*$. When $M_h \gg M_{\rm crit}$, the haloes hosting galaxies are so massive that UV feedback effects are quite unimportant; in that case, the UV LF becomes independent of $Q_{\rm HI}$ and is entirely determined by the single free parameter $\epsilon_*$. We can exploit this fact and fix the value of $\epsilon_*$ by comparing our predicted UV LF with the observations at the bright end (${\rm M_{UV}} \lower.5ex\hbox{$\; \buildrel < \over \sim \;$} -17$), as shown (by green dotted lines) in Fig. \ref{fig:lumfun_z}. The values of $\epsilon_*$ obtained by this comparison are listed in Table~\ref{tab:eps_*} at each $z$.
We also show the feedback-affected UV LF appropriate for galaxies in the feedback-affected HII regions (blue dashed lines in the same figure). In order to compute these, we fix the value of $M_{\rm crit} = 10^{9.5} \rm M_\odot$ independent of redshift, which is consistent with the findings of, e.g., \citet{2000ApJ...542..535G}. For each redshift, we choose the value of the third free parameter $Q_{\rm HI}$ so that the total UV LF (red solid lines in the same figure) gives a reasonable visual fit to the available data. The respective values of the 3 free parameters, [$10^3 \epsilon_*, Q_{\rm HI}, \log_{10} (M_{\rm crit} / \rm M_\odot)$], are indicated above each panel of the figure. This essentially shows that there exist combinations of the three parameters which can provide a satisfactory fit to the data for this simplified model of the evolving UV LF. The effect of UV feedback, as one can see from the figure, is essentially to flatten the faint-end slope of the UV LF, which is a direct consequence of the suppression of luminosity in low-mass galaxies. It is worth mentioning that the currently available data points at the faint-end are not accurate enough to constrain $M_{\rm crit}$ and $Q_{\rm HI}$ stringently because of their large error-bars -- it is therefore quite possible that there exist other combinations of the parameter values which provide an equally good fit to the data.
\begin{figure*}
\center{\includegraphics[trim=0 0 0 10,clip,width=0.85\textwidth]{faint_end_slope.pdf}}
\caption{The dependence of the faint-end slope $\alpha$ of the UV LF (corresponding to the red solid curves in \fig{fig:lumfun_z}) on $M_{\rm crit}$ and $Q_{\rm HI}$ for different redshifts. The black dashed curves in the three panels in the top row denote the allowed $1-\sigma$ ranges in $\alpha$ obtained from the available observational data.}
\label{fig:faint_end_slope}
\end{figure*}
\subsubsection{Constraints on the fluctuating UVB}
We now extend the concepts described in the previous section to probe the impact of UV feedback from a patchy ionizing background. Given that UV feedback only directly affects the faint-end slope, we now restrict our discussion to constraining the value of $\alpha$ using observations from forthcoming facilities such as the {\it JWST}. For definiteness, we define the faint end as consisting of galaxies with ${\rm M_{UV}} \lower.5ex\hbox{$\; \buildrel > \over \sim \;$} -17$, although minor variations of this threshold are not expected to affect our conclusions.
Since the parameter $\epsilon_*$ (Sec. \ref{sfe} above) is already fixed by the bright-end, we can compute $\alpha$ for all possible combinations of $M_{\rm crit}$ and $Q_{\rm HI}$. The plot of $\alpha$ as a function of $M_{\rm crit}$ and $Q_{\rm HI}$ is shown in \fig{fig:faint_end_slope}. To understand the dependence of $\alpha$ on the two parameters, let us concentrate on the first panel on the left hand side ($z = 6$). When the universe is mostly neutral ($Q_{\rm HI} \to 1$), UV feedback effects are quite negligible, resulting in $\alpha$ being independent of $M_{\rm crit}$. At the other extreme, when $Q_{\rm HI} \to 0$, we find that the slope flattens ($\alpha$ increases) with increasing $M_{\rm crit}$. This is simply because UV feedback becomes more severe and hence leads to suppression in the luminosity from an increasing fraction of low-mass haloes. For a fixed value of the critical halo mass, say, $M_{\rm crit} \sim 10^{9} - 10^{10} \rm M_\odot$, we find that the slope flattens with decreasing $Q_{\rm HI}$. This effect arises because UV feedback affects a larger fraction of $M_h \lower.5ex\hbox{$\; \buildrel < \over \sim \;$} M_{\rm crit}$ haloes. Interestingly, we find that the slope is largely independent of $Q_{\rm HI}$ for $M_{\rm crit} \sim 10^{8} - 10^{8.5} \rm M_\odot$. This is because for such small values of the critical mass, UV feedback only affects the lowest-mass galaxies, which are below the observational limits. The same qualitative conclusions hold for the other redshifts as well. We find that for the same values of $M_{\rm crit}$ and $Q_{\rm HI}$, the slope is steeper at higher redshifts. This is because the HMF at the small-mass end steepens with increasing redshift.
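The slope $\alpha$ used in this comparison can be extracted from any binned LF by a straight-line fit, since $\Phi \propto L^{\alpha}$ implies $\log_{10}\Phi = -0.4\,(\alpha+1)\,{\rm M_{UV}} + {\rm const}$. A minimal sketch (the binned LF here is synthetic, with an assumed input slope):

```python
import numpy as np

def faint_end_slope(mags, phi, faint_cut=-17.0):
    """Least-squares fit of alpha over the faint end (M_UV > faint_cut),
    using the Schechter convention phi(M) ~ 10^(-0.4 (alpha + 1) M_UV)."""
    sel = (mags > faint_cut) & (phi > 0)
    slope, _ = np.polyfit(mags[sel], np.log10(phi[sel]), 1)
    return -slope / 0.4 - 1.0

# Synthetic power-law LF with an assumed true slope alpha = -1.8.
alpha_true = -1.8
mags = np.arange(-20.0, -12.0, 0.5)       # bin centres
phi = 10.0 ** (-0.4 * (alpha_true + 1.0) * mags)
print(round(faint_end_slope(mags, phi), 6))   # -> -1.8
```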
We also show in the figure the presently available observational constraints on $\alpha$ taken from \citet{2014MNRAS.445.2545D}. The two dashed lines in each panel show the $1-\sigma$ limits at the corresponding redshift. Interestingly, one can constrain $Q_{\rm HI} < 0.2 (0.5)$ at $z = 6 (7)$ at $1-\sigma$ confidence level with the available data. Clearly the constraints degrade as we go to higher redshifts because of the lack of data points at the faint-end and hence it is almost impossible to put any constraint on $\alpha$ at $z \geq 8$.
Although the effect of radiative feedback on the UV LF has been well-studied \citep[see, e.g.,][]{2007MNRAS.377..285S,2014NewA...30...89S,2016MNRAS.463.1968Y,2017MNRAS.464.1633F,2018arXiv180505945S}, the discussion above provides a rather quantitative and direct way to constrain UV feedback parameters using the observed UV LF. However, the underlying model suffers from a significant shortcoming related to the degeneracies between different types of feedback. For example, (type II) supernova feedback would also tend to suppress star formation in low and intermediate mass haloes, and can potentially lead to a flattening of the faint-end slope \citep{1999ApJ...513..142M,2003MNRAS.339..312S,2007ApJ...670....1G,2012MNRAS.421.3522H,2014NewA...30...89S}. While one can, in principle, incorporate the effects of SN feedback in the model we are using, this would introduce more free parameters and it would become almost impractical to constrain them with sufficient accuracy. This raises the question of whether observations of a flat $\alpha$ do indeed allow us to probe the patchy UV background in the presence of other complicated physical processes. This degeneracy between the different feedback mechanisms affecting the faint-end of the UV LF can, in principle, be lifted by observing different volumes or fields on the sky. If the process of reionization is indeed patchy, as is predicted by almost all existing models, it is expected that the ionization and thermal states of the intergalactic medium (IGM) in different volumes would be different. In that case, the UVB and the impact of UV feedback (for galaxies having the same luminosity) would vary from field to field, which would be manifested as a scatter in $\alpha$. It is worth emphasising that supernova feedback, which depends on the balance between the star formation rate and the underlying dark matter halo potential, is not expected to change from field to field (apart from cosmic variance).
We thus propose that one can study the effects of radiative feedback by observing the UV LF across a number of different fields.
Once we measure the value of $\alpha$ to sufficient accuracy in different patches of the sky, we can use the panels of \fig{fig:faint_end_slope} to put constraints on $M_{\rm crit}$ and $Q_{\rm HI}$ for \emph{each patch}, assuming that we have already fixed $\epsilon_*$ using the bright-end. Assuming that $M_{\rm crit}$ does not vary across fields, this would allow us to constrain $Q_{\rm HI}$ in each field. Any scatter in $\alpha$, and hence in $Q_{\rm HI}$, would allow us to constrain the UVB fluctuations. As is clear, it is not possible to obtain sufficiently constrained values of $\alpha$ in individual fields with the current data. However, in the very near future, the {\it JWST} is expected to re-observe the six lensed Hubble Frontier Fields. Given its capability of observing down to ${\rm M_{UV}} \sim -15$, combined with moderate lensing magnifications of a factor of 10, we expect a significant sample of $z \lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 6$ galaxies extending to magnitudes as faint as ${\rm M_{UV}} \sim -12.5$ over $\sim 10\times 10$ Mpc patches. The scatter in the value of $\alpha$ from these fields would provide an ideal test of patchy UV feedback at high-$z$ using the faint-end of the UV LF.
\section{Summary}
In recent times, the availability of high-quality data on high redshift Lyman Break Galaxies (LBGs), particularly the UV luminosity function (UV LF), has opened up the possibility of understanding various physical processes related to early galaxy formation in great detail.
We present a proof-of-concept calculation based on the faint-end of the UV LF to constrain the fluctuating UV background (UVB) during reionization. As per our current understanding, the photo-heating arising from UV radiation will suppress star formation in low mass haloes in ionized regions. Because
reionization is patchy, the severity of this feedback will be different in different volumes of the universe. With this in mind, our concept consists of (i) a simple model of the UV LF based on a scaled halo mass function, combined with an exponential suppression of star formation in galaxies formed in ionized regions, and (ii) comparing the model with the observed UV LF in different patches of the sky. The scatter in the UV LF across different patches should, in principle, probe the patchy UV feedback at high redshifts. The currently available data are not sensitive enough to constrain the fluctuating UVB by measuring the LF in different patches of the sky. One expects that, in the very near future, the {\it JWST} will re-observe the six lensed Hubble Frontier Fields with unprecedented sensitivity, thus enabling measurement of the faint-end slope of the UV LF in different patches. These observations would serve as ideal tests of our proof-of-concept.
Finally we comment on possible complications to be accounted for while comparing the model with the data. Firstly,
in addition to the patchy UVB, there could be some scatter in the UV LF across different patches arising from the underlying cosmic variance. Furthermore, the clustering of galaxies would lead to correlation between their positions and the feedback-affected ionized regions. All such issues are best addressed through numerical simulations, which we plan to take up in more detail in the future.
\section*{Acknowledgments}
TRC acknowledges support from the Associateship Scheme of ICTP, Trieste. PD acknowledges support from the European Research Council's starting grant DELPHI (717001) and from the European Commission's and University of Groningen's CO-FUND Rosalind Franklin program. PD thanks R. Bouwens, N. Gnedin, P. Oesch and Z. Haiman for illuminating discussions.
\bibliographystyle{mnras}
\section{Introduction}
\label{sect:introduction}
Today, after decades of intensive research, cancer is still one of the most deadly diseases worldwide, killing millions of people every year. Cancer is mainly caused by somatic mutations that affect critical genes and pathways. These mutations are mostly triggered by environmental factors (e.g. obesity, smoking, alcohol, lifestyle), often promoted by certain genetic configurations.
In the last two decades, large-scale projects such as The Cancer Genome Atlas (TCGA), which has produced massive amounts of multi-omics data, have been launched to improve our understanding of cancers \cite{TCGA2}. In this context, developing statistical algorithms able to interpret these large data sets and to identify genes that are at the origin of diseases and their causal pathways still remains an important challenge.
Genes are commonly affected by genomic changes in the pathogenesis of human cancer. Cancer is moreover a heterogeneous disease, with affected gene sets that may be highly different depending on subtypes, and thus requires different treatments of patients. Specific analyses of subtypes have for example revealed significant differences between breast cancer subgroups \cite{Lehman11} but also pancancer similarities between breast and bladder cancer subgroups \cite{Damrauer14}.
Using transcriptional data allows us to look beyond DNA, that is, to study abnormalities in terms of gene expression. As a common approach, differential expression analysis, for which statistical procedures have been intensively explored, can be performed; altered genes are then identified as differentially expressed genes \cite{Kaczkowski16}. This points to relevant genes but does not take into account the regulations (activation and inhibition) between genes.
The approach we consider consists in taking into account the regulation structure between genes. We particularly focus on transcription factors (TFs), because of the major role they play in the regulation of gene expression, which makes them an attractive target for cancer therapy
\cite{Nebert02,Yeh13}.
Regulation processes between TFs and their targets are usually represented by Gene Regulatory Networks (GRNs).
In the last few years, many different methods have been proposed to infer GRNs
from collections of gene expression data. In a discrete framework, gene expression can be discretized depending on their status (under/over-expressed or normal) and truth tables provide the regulation structure \cite{Elati11}. In the continuous case, regression methods, including the popular Lasso \cite{Tibshirani96} and its derivatives, have provided powerful results \cite{Vignes11,Liu08}.
A deregulated gene then corresponds to a gene whose expression does not correspond to the expression level expected from its regulators expression.
It is different from the notion of differential expression since a loss of regulation between a target gene and one of its regulating TFs implies a loss of correlation between them but not necessarily differential expression.
Conversely, a TF can be differentially expressed and one of its targets not, precisely because it is deregulated.
To discover deregulated genes, a first possibility is to infer one network per condition and to compare them. Statistical difficulties due to the noisy nature of transcriptomic data and the large number of features compared to the sample size can be taken into account by inferring the networks jointly and penalizing the differences between them \cite{Chiquet11}. A second possible approach is to assess the adequacy of gene expression in tumoral cells to a reference GRN, in order to exhibit the most striking discrepancies, i.e. the regulations which are not fulfilled by the data \cite{Guziolowski09,Karlebach12}. Such methods however focus on checking the validity of the network rather than highlighting genes with an abnormal behavior. Finally, analyses may be conducted at the pathway level rather than the gene level \cite{Tarca09,Vaske10}. They are then not network-wide in the sense that each gene has a deregulation score by pathway it belongs to and pathways are treated independently. Moreover, as the pathways are extracted from curated databases, the regulations taken into account are not tissue-specific.
Here, we propose a statistical deregulation model that uses gene expression data to identify deregulated TFs involved in specific subtypes of cancer. This paper is organized as follows: in Section 2, we present the 3-step method we developed and our validation procedure; in Section 3, we illustrate its interest on the TCGA bladder cancer data set, and show that it can be used complementarily to differential expression analysis to point to potential biomarkers of cancers.
\vspace{-0.1cm}
\section{Methods}\label{sec:methods}
\subsection{Overview of the Procedure}
Our approach for the identification of deregulated transcription factors (TFs) involved in cancers is based on a 3-step strategy that $(i)$ creates a reference gene regulatory network (GRN), which represents regulations between groups of co-expressed TFs and target genes using a reference data set (Step 1), $(ii)$ computes a deregulation score for each target gene in each tumor sample by comparing their behavior with the reference GRN (Step 2), $(iii)$ identifies the most significant TFs involved in the deregulation of the target genes in each sample from specific cancer subtypes (Step 3).
These steps are presented in Figure \ref{fig:overview} and described in detail in the next sections.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=14cm]{workflow2.pdf}
\end{center}
\caption{Workflow of the proposed 3-step algorithm for identifying TFs involved in specific cancer subtypes.
}\label{fig:overview}
\end{figure}
\vspace{-0.1cm}
\subsection{Step 1: Inferring a Gene Regulatory Network}
Step 1 of the algorithm consists in inferring a GRN that connects TFs to their downstream targets. Among the large number of existing methods, we choose hLICORN, available in the {\tt CoRegNet} R-package \cite{CoRegNet}. This algorithm is based on a hybrid version of the LICORN model \cite{Elati07}, in which groups of co-regulated TFs act together to regulate the expression of their targets (Figure \ref{fig:licorn}). More precisely, LICORN uses heuristic techniques to identify co-activator and co-inhibitor sets from discretized gene expression matrices and locally associates each target gene to pairs of co-activators and co-inhibitors that significantly explain its discretized expression.
The hybrid variation of LICORN then ranks the local candidate networks according to how well they predict the target gene expression, through a linear regression, and selects the GRN that minimizes the prediction error.
This selection step limits the effects of overfitting, induced by the model complexity, especially the large number of features (genes) as compared to the sample size \cite{Chebil14}.
In this work, we slightly enrich the LICORN model by creating a copy of each TF in the target layer to allow regulations between TFs.
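The ranking step of hLICORN can be illustrated schematically. The sketch below is not the {\tt CoRegNet} implementation: it simply scores each candidate (co-activator set, co-inhibitor set) pair by the mean-squared error of a linear regression of the target expression on the two collective (mean) expressions, and keeps the best local model; the toy data and candidate sets are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def score_candidate(target, act_expr, inh_expr):
    """Mean-squared prediction error of the linear model
    target ~ a * mean(co-activators) + b * mean(co-inhibitors) + c."""
    X = np.column_stack([act_expr.mean(axis=0), inh_expr.mean(axis=0),
                         np.ones(target.size)])
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    return float(np.mean((target - X @ coef) ** 2))

def select_local_grn(target, candidates, expr):
    """Keep the (co-activator set, co-inhibitor set) pair with the smallest
    prediction error, mimicking hLICORN's ranking of local candidate networks."""
    scores = [score_candidate(target, expr[list(A)], expr[list(I)])
              for A, I in candidates]
    return candidates[int(np.argmin(scores))]

# Toy data: 6 TFs x 50 samples; the target truly follows TFs {0,1} minus TF 4.
expr = rng.normal(size=(6, 50))
target = expr[[0, 1]].mean(axis=0) - expr[4] + 0.05 * rng.normal(size=50)
candidates = [((0, 1), (4,)), ((2, 3), (5,)), ((0, 2), (3,))]
print(select_local_grn(target, candidates, expr))   # -> ((0, 1), (4,))
```

The candidate that matches the true generative structure yields a near-zero residual and is therefore selected.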
\begin{figure}[!ht]
\begin{center}
\begin{tikzpicture}[scale=0.65]
\draw (0,0) node{TF$_1$};
\draw (0,-0.5) circle(0.1);
\draw (1.5,0) node{TF$_2$};
\draw (1.5,-0.5) circle(0.1);
\draw (3,0) node{TF$_3$};
\draw (3,-0.5) circle(0.1);
\draw (4.5,0) node{TF$_4$};
\draw (4.5,-0.5) circle(0.1);
\draw (6,0) node{TF$_5$};
\draw (6,-0.5) circle(0.1);
\draw (7.5,0) node{TF$_6$};
\draw (7.5,-0.5) circle(0.1);
\draw (9,0) node{TF$_7$};
\draw (9,-0.5) circle(0.1);
\draw (0.5,-1.2) rectangle (1,-1.6);
\draw (3.75,-1.4) circle(0.3);
\draw (5,-1.2) rectangle (5.5,-1.6);
\draw (8.25,-1.4) circle(0.3);
\draw[->,thick] (0,-0.6) -- (0.72,-1.2);
\draw[->,thick] (1.5,-0.6) -- (0.78,-1.2);
\draw[->,thick] (3,-0.6) -- (3.72,-1.1);
\draw[->,thick](4.5,-0.6) -- (3.78,-1.1);
\draw[->,thick] (4.5,-0.6)--(5.22,-1.2);
\draw[->,thick](6,-0.6) -- (5.28,-1.2);
\draw[->,thick] (7.5,-0.6)--(8.22,-1.1);
\draw[->,thick] (9,-0.6)--(8.28,-1.1);
\draw (2.25,-3.6) node{$g_1$};
\draw (6.75,-3.6) node{$g_2$};
\draw (2.25,-2.8) circle(0.1);
\draw (6.75,-2.8) circle(0.1);
\draw[->,thick] (0.75,-1.6)--(2.22,-2.8);
\draw[->,thick] (3.75,-1.7)--(2.28,-2.8);
\draw[->,thick] (5.25,-1.6)--(6.72,-2.8);
\draw[->,thick] (8.25,-1.7)--(6.78,-2.8);
\draw (11,0) node{TFs};
\draw (11,-1.4) node{Co-regulators};
\draw (11,-2.8) node{Target genes};
\end{tikzpicture}
\end{center}
\caption{Example of LICORN graph involving 7 TFs and 2 target genes. TFs are gathered into groups of co-expressed genes that co-regulate (square for co-activators, circle for co-inhibitors) each target gene.
}\label{fig:licorn}
\end{figure}
To construct a specific GRN, note that one may prefer using another inference method \cite{Chiquet12} or a pre-existing regulatory network, which can be loaded from the RegNetwork database \cite{Liu15}.
Here, we focus on hLICORN since the induced model is particularly suitable for the rest of our analysis. In addition, it was shown to provide powerful results for cooperative regulation detection, especially on cancer data set \cite{Elati07,CoRegNet}.
\vspace{-0.1cm}
\subsection{Step 2: Computing a Deregulation Score}\label{sec-EM}
Step 2 of the algorithm aims at identifying deregulated target genes by carefully comparing their expression across all tumor samples with the reference GRN inferred in Step 1. For this purpose, we use the method described in \cite{deregScore}, which assumes that all genes from a hLICORN model are allowed to be deregulated, i.e. not to respond to their regulators as expected.
More precisely, according to the hLICORN model, each gene $g$ is connected with a set of co-regulated TFs split into a group of co-activators $\mathcal{A}$ and co-inhibitors $\mathcal{I}$.
A binary deregulation variable $D_g$, assumed to be non-zero with probability $Y$, is then introduced to compare the true status $\mathcal{S}_g$ (under/over-expressed or normal) of each target gene in each tumor sample with its expected value $\mathcal{S}_g^*$, resulting from a truth table (see Figure \ref{fig:EM} (b)) and the inferred GRN. To avoid discretizing the data, the statuses of all genes are considered as hidden variables.
The model is described in Figure \ref{fig:EM} (a).
\begin{figure}[!ht]
\begin{subfigure}{0.53\textwidth}
\begin{center}
\begin{tikzpicture}[scale=1.1]
\draw (0,0) circle(0.1);
\draw (0.5,0) circle(0.1);
\draw (1,0) circle(0.1);
\draw (3.25,0) circle(0.1);
\draw (3.75,0) circle(0.1);
\draw[rounded corners] (-0.25,0.25) rectangle (1.25,-0.25);
\draw[rounded corners] (3,0.25) rectangle (4.05,-0.25);
\draw (0.5,0.5) node{Co-activator set $\mathcal{A}$};
\draw (3.5,0.5) node{Co-inhibitor set $\mathcal{I}$};
\draw[->,thick] (0.5,-0.25)--(0.5,-0.75);
\draw[->,thick] (3.5,-0.25)--(3.5,-0.75);
\draw(0.5,-1.05) circle(0.3);
\draw(3.5,-1.05) circle(0.3);
\draw (0.5,-1.05) node{\scriptsize $\mathcal{S}_{\mathcal{A}}$};
\draw (3.5,-1.05) node{\scriptsize $\mathcal{S}_{\mathcal{I}}$};
\draw (-1,-0.8) node{Collective};
\draw (-1,-1.3) node{status};
\draw (1.87,-2.3) node{\scriptsize $\mathcal{S}^*_g$};
\draw (1.87,-2.3) circle(0.3);
\draw[->,thick] (0.5,-1.34)--(1.60,-2.15);
\draw[->,thick] (3.5,-1.34)--(2.15,-2.15);
\draw (-0.5,-2.2) node{Expected status};
\draw (1.87,-3.65) node{\scriptsize $\mathcal{S}_g$};
\draw (1.87,-3.65) circle(0.3);
\draw (-0.5,-3.65) node{True status};
\draw[->,thick,dashed] (1.87,-2.6)--(1.87,-3.35);
\draw (1.87,-4.65) circle(0.1);
\draw (1.87,-5.15) node{ $X_g$ expression of gene $g$};
\draw[->,thick] (1.87,-3.95)--(1.87,-4.55);
\draw (3.5,-3.05) node{\scriptsize $D_g$};
\draw (3.5,-3.05) circle(0.3);
\draw (3.8,-2.5) node{Deregulation variable};
\draw[->,thick] (3.2,-3.05)--(2.15,-3.65);
\end{tikzpicture}
\end{center}
\caption{Deregulation model.}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\begin{center}
\begin{tabularx}{0.85\textwidth}{c c c c}
\toprule
& \multicolumn{3}{c}{Activator} \\
Inhibitor & \multicolumn{3}{c}{collective status} \\
\cmidrule(lr){2-4}
collective status & - & 0 & + \\
\midrule
- & \textcolor{white}{e}0\textcolor{white}{e} & + & + \\
0 & - & 0 & + \\
+ & - & - & - \\
\bottomrule
\end{tabularx}
\end{center}
\caption{Truth table.}
\end{subfigure}
\caption{(a) The deregulation model \cite{deregScore} used to compute a deregulation score for each target gene in each sample: each gene $g$ is associated to a hidden status $\mathcal{S}_g$ (under, over-expressed or normal). Target genes are allowed to be deregulated, i.e. not follow their co-regulator rules (Truth table (b)). The binary variable $D_g$ indicates whether the corresponding target gene $g$ is deregulated ($D_g=1$) or not ($D_g=0$). The deregulation score $Y$ of gene $g$ in sample $j$ is then the probability, given the observation, that $D_g=1$ in sample $j$.
(b) LICORN truth table, which gives the expected status of a target gene according to the collective status of its co-activators and co-inhibitors. The collective status of a co-regulator set is 0 by default, and equals the shared status when all of its elements have the same status. This table is derived from biological experiments \cite{Elati07}.
}\label{fig:EM}
\end{figure}
As the number of hidden variables grows exponentially with the number of genes, the likelihood of the model rapidly becomes intractable. The unknown parameters, including
the deregulation score $Y$, are thus estimated using a dedicated EM-algorithm (see \cite{deregScore} for more details).
Note that the deregulation score $Y$ does not capture information about differentially expressed genes but genes whose expression does not correspond to the level expected from its regulator expression.
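For concreteness, the deterministic part of the model, i.e. the collective-status rule and the truth table of Figure \ref{fig:EM} (b), can be written in a few lines (statuses coded as $-1/0/+1$; this is an illustrative sketch, not the EM implementation of \cite{deregScore}):

```python
def collective_status(statuses):
    """Collective status of a co-regulator set: 0 by default, equal to the
    shared value when all members have the same status (caption of Fig. 3b)."""
    return statuses[0] if len(set(statuses)) == 1 else 0

# LICORN truth table (Fig. 3b): expected target status, indexed by the
# (activator, inhibitor) collective statuses, each coded as -1 / 0 / +1.
TRUTH = {
    (-1, -1): 0,  (0, -1): 1,  (1, -1): 1,
    (-1, 0): -1,  (0, 0): 0,   (1, 0): 1,
    (-1, 1): -1,  (0, 1): -1,  (1, 1): -1,
}

def expected_status(act_statuses, inh_statuses):
    """Expected status S*_g of a target given its co-regulators' statuses."""
    return TRUTH[(collective_status(act_statuses),
                  collective_status(inh_statuses))]

# All activators over-expressed and all inhibitors under-expressed:
# the target is expected to be over-expressed.
print(expected_status([1, 1], [-1, -1]))   # -> 1
```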
\subsection{Step 3: Identifying Deregulated TFs
}\label{sec-Beta}
Step 3 consists in identifying TFs that cause deregulations of target genes. Our approach is based on linear regression models, in which we try to explain the deregulation score of all target genes in one sample (Step 2) using their co-regulator TFs as explanatory variables (Step 1).
Assume that we have
$q$ TFs and
$p$ target genes. Denote by $Y_{ij}$ the deregulation score of target gene $i$ ($1\leq i \leq p$) in sample $j$ ($1\leq j \leq n$) and $G:=(G_{i\ell})_{1\leq i \leq p,1\leq \ell\leq q}$ the GRN adjacency matrix, whose non-zero elements encode the structure (edges) of the graph.
We then cast our model as follows:
\begin{equation}\label{eq:B}
\forall j \in \llbracket 1,n \rrbracket, \forall i \in \llbracket 1,p \rrbracket, \quad Y_{ij}=\sum_{\ell=1}^{q} G_{i\ell} B_{\ell j} + \varepsilon_{ij},
\end{equation}
or, in a matrix form, $Y=G\cdot B +\varepsilon$, where each element $B_{\ell j}$ of matrix $B$, to estimate, measures the deregulation importance of TF $\ell$ in sample $j$ and $\varepsilon$ stands for the presence of noise.
Solving the $B$-estimation problem (\ref{eq:B}) can be viewed as a classical multi-task linear learning problem, in which the number of observations is the number of target genes $p$, the number of linear tasks is $n$, and the number of variables is $q$.
To estimate $B$, we use a constrained least squares estimation procedure. As we only expect to find TFs positively causing the deregulation of their targets in each sample, we consider the induced constrained optimization problem:
\begin{eqnarray}
\forall j \in \llbracket 1,n \rrbracket, \ \ \hat{B}_{\cdot j} &:=& \underset{\beta\in \mathbb{R}^q}{\operatorname{argmin}} \Vert Y_{\cdot j} -G \beta \Vert_2^2, \label{eq:opt}\\
&\mbox{s.t.}& \ \ \forall \ell \in \llbracket 1,q \rrbracket, \ 0\leq \beta_{\ell} \leq 1 \nonumber
\end{eqnarray}
where $\Vert \cdot \Vert_2$ stands for the Euclidean norm.
The closer $\hat{B}_{\ell j}$ is to 1, the more important the role of TF $\ell$ in the deregulation of its targets in sample $j$.
To solve Eq. (\ref{eq:opt}), we use the {\tt limSolve} R-package.
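Outside R, the box-constrained problem (\ref{eq:opt}) can be solved for one sample with a simple projected-gradient iteration, sketched below on toy dimensions (the paper itself relies on {\tt limSolve}; the toy $G$ and $B$ values are assumptions):

```python
import numpy as np

def box_lsq(G, y, n_iter=20000):
    """min ||y - G b||_2^2  subject to  0 <= b <= 1, by projected gradient
    descent: a gradient step followed by clipping onto the box [0, 1]^q."""
    step = 1.0 / np.linalg.norm(G, 2) ** 2      # safe step from the spectral norm
    b = np.full(G.shape[1], 0.5)
    for _ in range(n_iter):
        b = np.clip(b - step * (G.T @ (G @ b - y)), 0.0, 1.0)
    return b

rng = np.random.default_rng(2)
G = rng.random((40, 5))                 # toy adjacency weights: p=40 targets, q=5 TFs
b_true = np.array([0.0, 1.0, 0.2, 0.8, 0.0])
y = G @ b_true                          # noiseless deregulation scores, one sample
b_hat = box_lsq(G, y)
print(np.round(b_hat, 3))               # close to b_true = [0, 1, 0.2, 0.8, 0]
```

On this noiseless toy problem the iteration recovers the generating coefficients, including those sitting exactly on the bounds.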
\subsection{Correcting Expression Data
} \label{sec-correc}
Gene expression is commonly affected by copy number alterations (CNA) \cite{Aldred05}. Step 2 of our procedure is particularly sensitive to CNA, associating high deregulation scores to amplified or deleted target genes \cite{deregScore}. Indeed, the number of copies of a gene can strongly influence its expression, independently of its regulators expression, wrongly flagging some regulations as deregulated.
To remove CNA effects on gene expression and improve the rest of our analysis, we preprocess the target gene expression data beforehand as proposed in \cite{SegCorr}. Gene expression is considered as linearly modified by CNA through the linear regression model:
\begin{equation}\label{eq:CNVcorrec}
X_{ij} = \alpha_0 + \alpha_1 \mbox{CNA}_{ij} + \varepsilon_{ij},
\end{equation}
where $X_{ij}$ is the expression of gene $j$ in sample $i$ and $\mbox{CNA}_{ij}$ its associated copy number. Let $\hat{\alpha}_0$ and $\hat{\alpha}_1$ be the estimated solutions of Eq. (\ref{eq:CNVcorrec}), the corrected expression is then given by:
$$\tilde{X}_{ij} = X_{ij} - \hat{\alpha}_0 - \hat{\alpha}_1 \mbox{CNA}_{ij}.$$
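Per gene, this correction amounts to a one-dimensional regression followed by taking residuals. A sketch on simulated data (the CNA effect size $0.7$ is an arbitrary assumption):

```python
import numpy as np

rng = np.random.default_rng(3)

def correct_cna(expr, cna):
    """Regress out the linear CNA effect (Eq. 3) from one gene's expression:
    fit x = a0 + a1 * cna across samples and return the residuals."""
    a1, a0 = np.polyfit(cna, expr, 1)
    return expr - a0 - a1 * cna

# Simulated gene: expression linearly shifted by copy-number state (-2..2).
cna = rng.integers(-2, 3, size=200).astype(float)
expr = 1.0 + 0.7 * cna + rng.normal(scale=0.3, size=200)
resid = correct_cna(expr, cna)

# After correction, expression carries no linear CNA signal.
print(abs(np.corrcoef(resid, cna)[0, 1]) < 1e-8)   # -> True
```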
\section{Results and Discussion}
\subsection{The Bladder Cancer Data Set}\label{sec-data}
We apply our method on bladder cancer data, produced in the framework of the
Cancer Genome Atlas (TCGA) project and available at the Genomic Data Commons Data Portal (\url{https://portal.gdc.cancer.gov/}).
These data include a set of
401
bladder cancer samples with gene expression and copy number for a total number of
15,430 genes,
split into
2,020
TFs and
13,410
targets.
Gene expression data were produced using RNA-sequencing on bladder cancer tissues. Preprocessing is done by log-transformation and quantile-normalization of the samples.
Missing values are estimated using nearest neighbor averaging \cite{Troyanskaya01}.
TCGA samples are analyzed in batches and significant batch effects are observed based on a one-way analysis of variance in most data modes. We apply Combat \cite{Johnson07} to adjust for these effects.
Genes are finally filtered based on their variability: we only keep the $75\%$ most varying genes.
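The variance filter itself is a one-liner; a sketch on simulated data (dimensions are illustrative):

```python
import numpy as np

def filter_most_varying(expr, keep=0.75):
    """Keep the `keep` fraction of genes (rows) with the highest variance
    across samples, preserving the original gene order."""
    n_keep = int(round(keep * expr.shape[0]))
    order = np.argsort(expr.var(axis=1))[::-1]      # most varying first
    return expr[np.sort(order[:n_keep])]

# Simulated matrix: 100 genes x 30 samples with increasing per-gene scatter.
rng = np.random.default_rng(4)
expr = rng.normal(scale=np.linspace(0.1, 2.0, 100)[:, None], size=(100, 30))
print(filter_most_varying(expr).shape)   # -> (75, 30)
```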
Based on RNA-seq data analysis from the TCGA data portal, samples are split into five subtypes: basal-squamous (BaSq), luminal (Lum), luminal-infiltrated (LumI), luminal-papillary (LumP) and neuronal (NE) with different characteristics \cite{TCGA_subtypes} (Table \ref{samples}).
\begin{table}[!ht]
\begin{center}
\begin{tabularx}{0.55\textwidth}{c c c c c c}
\toprule
Subtypes &BaSq & Lum & LumI & LumP & NE \\
\cmidrule(lr){1-1} \cmidrule(lr){2-2} \cmidrule(lr){3-3} \cmidrule(lr){4-4}\cmidrule(lr){5-5}\cmidrule(lr){6-6}
Samples &131&44&74&134&18\\
\bottomrule
\end{tabularx}
\end{center}
\caption{Distribution of molecular subtypes among the 401 bladder cancer samples \cite{TCGA_subtypes}.}\label{samples}
\end{table}
\subsection{Description of the Procedure Results}
\paragraph{GRN network.}
To validate our method, we have to provide a tissue-specific reference GRN (Step 1), which is computed given a first set of reference samples.
In many cancers, the pure normal tissue of origin is not available.
Here, we work with the five different subtypes of the TCGA data set presented in Section \ref{sec-data}.
Using samples from one subtype as test cases and the rest as reference, we infer five different GRNs.
Each of them reflects averaged relationships between genes for patients who are not part of one specific subtype.
Due to the very high heterogeneity of cancers, and of bladder cancers in particular \cite{Knowles14,Togneri16}, we expect our method to still point to relevant deregulations of specific subtypes.
After calibrating the internal parameters of the hLICORN algorithm, the GRNs we infer contain an average total of $28,246$ edges connecting $586$ TFs to $3,432$ of their targets.
These networks are relatively sparse, each target gene being associated with an average of 8 TFs.
\paragraph{Deregulation scores.}
We then run the EM procedure (Step 2) five times on the five subsets of the gene expression data matrix to compute a deregulation score for each target gene in each sample of each subtype.
From now on, all samples are treated individually, the results reflecting how genes behave in each sample of one subtype in comparison to reference samples from all other subtypes.
To check the effect of the copy number correction we apply at the beginning of our procedure, we compare the distribution of the deregulation scores across copy number states.
To this aim, we use the TCGA CNA thresholded data set, which
associates to each gene-sample pair a copy number state of ``0'' for the diploid state (two copies), ``1'' for a copy number gain, ``-1'' for a copy number loss, ``2'' for an amplification and ``-2'' for a deletion.
We then test for significant differences between the diploid state and the altered states (-2, -1, 1, 2) using Student tests. The resulting p-values, corrected for multiple hypothesis testing using the FDR \cite{Benjamini95}, are presented in Table \ref{tab:CNV}. With corrected p-values ranging from 0.10 to 1, deregulation scores are no longer associated with CNA.
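The comparison itself is standard: two-sample Student tests of diploid versus altered gene-sample pairs, followed by Benjamini-Hochberg correction. A minimal sketch with illustrative names (the flat gene-sample encoding is an assumption):

```python
import numpy as np
from scipy import stats

def cna_association(scores, states):
    """Compare deregulation scores between the diploid state (0) and
    each altered copy number state, with Benjamini-Hochberg (FDR)
    correction.  `scores` and `states` are flat arrays over all
    gene-sample pairs; returns one adjusted p-value per altered state."""
    diploid = scores[states == 0]
    pvals = []
    for s in (-2, -1, 1, 2):
        # Two-sample Student t-test against the diploid distribution
        _, p = stats.ttest_ind(diploid, scores[states == s])
        pvals.append(p)
    # Benjamini-Hochberg step-up adjustment
    pvals = np.asarray(pvals)
    order = np.argsort(pvals)
    m = len(pvals)
    adj = np.empty(m)
    adj[order] = np.minimum.accumulate(
        (pvals[order] * m / np.arange(1, m + 1))[::-1])[::-1]
    return np.clip(adj, 0, 1)
```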
\begin{table}[!ht]
\begin{center}
{\small
\begin{tabularx}{1\textwidth}{c c c c c c c c c c c c c c c c c c c c}
\toprule
\multicolumn{20}{c}{ Subtypes} \\
\cmidrule(lr){1-20}
\multicolumn{4}{c}{BaSq} & \multicolumn{4}{c}{Lum} &\multicolumn{4}{c}{LumI} & \multicolumn{4}{c}{LumP} &\multicolumn{4}{c}{NE}\\
\cmidrule(lr){1-4} \cmidrule(lr){5-8} \cmidrule(lr){9-12} \cmidrule(lr){13-16}\cmidrule(lr){17-20}
-2&-1&1&2 & -2&-1&1&2 &-2&-1&1&2 &-2&-1&1&2 & -2&-1&1&2 \\
1&0.81&0.28&1 &1&1&1&1&1&0.25&0.10&0.60&1&1&1&1&1&1&1&1\\
\bottomrule
\end{tabularx}
}
\end{center}
\caption{Corrected p-values for Student tests comparing the distribution of the deregulation scores between the diploid state (0) and each altered state (-2, -1, 1, 2) for each subtype.
}\label{tab:CNV}
\end{table}
\paragraph{Deregulated TFs.}
We finally apply Step 3 of our method to identify the TFs involved in the deregulation scores of the target genes, that is, those having a non-zero coefficient in $\hat{B}$, as given in Eq. (\ref{eq:B}).
We then rank the TFs according to their number of non-zero coefficients across all samples belonging to each specific subtype. Results are presented in Table \ref{tab:b}.
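The ranking step amounts to counting, for each TF, the fraction of subtype samples with a non-zero entry in $\hat{B}$. A sketch under the assumption of a TF-by-samples coefficient matrix:

```python
import numpy as np

def rank_tfs(B_hat, tf_names, top=10):
    """Rank TFs by the fraction of samples of a subtype in which they
    have a non-zero coefficient in B-hat.  `B_hat` is an assumed
    (n_tfs, n_samples) coefficient array for one subtype."""
    frac = (B_hat != 0).mean(axis=1)      # fraction of samples per TF
    order = np.argsort(frac)[::-1][:top]  # highest fractions first
    return [(tf_names[i], float(frac[i])) for i in order]
```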
\begin{table*}[!ht]
\begin{center}
{\small
\begin{tabularx}{1\textwidth}{c c c c c c c c c c}
\toprule
\multicolumn{10}{c}{Subtypes}
\\
\midrule
\multicolumn{2}{c}{BaSq}& \multicolumn{2}{c}{Lum} & \multicolumn{2}{c}{LumI} &\multicolumn{2}{c}{LumP} &\multicolumn{2}{c}{NE}
\\
\cmidrule(lr){1-2} \cmidrule(lr){3-4}\cmidrule(lr){5-6}\cmidrule(lr){7-8}\cmidrule(lr){9-10}
TF & \small{$\% \hat{B}$} & TF &\small{$\% \hat{B}$} & TF & \small{$\% \hat{B}$} &TF & \small{$\% \hat{B}$} &TF & \small{$\% \hat{B}$}
\\
\midrule
SPOCD1 &\small{92\%}& ZNF268 &91\% &TSHZ1 &88\% &RARB &84\%& FAIM3& 89\%\\
ZNF382 &\small{86\%}& HES2& 80\%& ZNF354B& 88\%& RFX5 &84\%& SMARCA2& 83\% \\
RCOR2 &\small{86\%}& TBX2& 80\%& AR &85\%& CBFA2T3& 83\%& RARB &78\%\\
ATM &\small{83\%}& PRDM8 &75\%& HES2& 82\%& TBX18& 81\%& ZNF235& 78\%\\
HABP4 &\small{83\%}& TSHZ1& 75\%& HTATIP2& 81\%& TBX3& 79\%& TBX2& 72\%\\
IRX3& \small{82\%} &ZNF354C &73\% &MAFG &80\%& PTRF& 79\% &STAT3 &72\% \\
IFI16 &\small{79\%}& RARB &70\%& ENO1 &80\% &TBX2 &70\%& HIF1A &72\%\\
TEAD2& \small{79\%} &KLF13 &70\%& TBX2& 74\%& PPARG &76\% &THRA &72\%\\
NOTCH4 &\small{79\%} &SCML2 &68\%& ZNF563&74\%& NCOR2 &75\%& PIR& 67\%\\
SNAI2& \small{79\%} &SNAI3& 68\%& IRX3& 72\%& ZFP2& 75\%&FOSL1& 67\%\\
\bottomrule
\end{tabularx}
}
\end{center}
\caption{List of the 10 most important TFs for explaining the deregulation scores of their targets, with the percentage of samples having non-zero coefficients in $\hat{B}$ across each subtype.}\label{tab:b}
\end{table*}
\subsection{Discussion}
\paragraph{Top TFs include biomarkers of bladder cancer.}
Among TFs of Table \ref{tab:b}, we retrieve characteristic genes of bladder cancer subtypes.
For instance, SNAI2, which is deregulated across 79\% of the BaSq samples, is particularly well-known for its implication in EMT pathways for cancer patients \cite{Cobaleda07} and its capacity to discriminate between basal and luminal subgroups \cite{Mistry14}.
The presence of NOTCH4 in BaSq samples is particularly interesting, as it is part of the NOTCH pathway, whose inactivation tends to promote bladder cancer progression \cite{Maraver15}. Research also focuses on its implication in the basal subgroup \cite{Greife14}.
Similarly, TBX2, involved in all three luminal subtypes, is an indicator of luminal cancers \cite{Dhawan15}. We can finally emphasize the presence of PPARG in LumP, whose high expression level is used to characterize luminal subtypes \cite{Choi14}.
\paragraph{Deregulation is complementary to differential gene expression analysis.}
Differential gene expression analysis consists of performing statistical tests to discover quantitative changes in expression levels between groups. It is frequently used in cancer research to identify genes with important changes between tumor and normal samples, called differentially expressed genes (DEGs) \cite{limma}.
We perform differential gene expression analysis using the \texttt{R}-package \texttt{limma} \cite{limma} on all samples from each subtype when comparing to samples from all other subtypes.
We then verify whether the identified DEGs differ from the deregulated TFs derived from our method (Figure \ref{fig:DEG}). To this aim, we use the following thresholds: a gene is called a DEG for p-values smaller than 0.01, whereas it is called deregulated for a subtype as soon as it is deregulated ($\hat{B}\neq 0$) in more than 50\% of the subtype samples.
The latter threshold is arbitrary but not crucial, as the results remain almost the same under slight changes.
As shown in Figure \ref{fig:DEG}, except for the BaSq and LumP subtypes, more than 70\% of the identified deregulated TFs are not differentially expressed, which means that our procedure does not merely point to DEGs.
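The Venn counts of Figure \ref{fig:DEG} reduce to simple set operations on the two gene lists; a sketch with hypothetical inputs (dictionaries mapping genes to limma p-values and to deregulation fractions):

```python
def overlap_counts(deg_pvals, dereg_fraction, p_cut=0.01, frac_cut=0.5):
    """Count genes that are DEG only, both, or deregulated-TF only,
    mirroring the Venn diagrams.  Inputs are assumed dicts mapping
    gene -> limma p-value and gene -> fraction of deregulated samples."""
    degs = {g for g, p in deg_pvals.items() if p < p_cut}
    dereg = {g for g, f in dereg_fraction.items() if f > frac_cut}
    both = degs & dereg
    return len(degs - dereg), len(both), len(dereg - degs)
```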
\def(0,0) circle (0.8cm){(0,0) circle (0.8cm)}
\def(0:1cm) circle (0.8cm){(0:1cm) circle (0.8cm)}
\begin{figure}[!ht]
\begin{center}
\begin{tikzpicture}[scale=1]
\begin{scope}[fill opacity=0.5]
\fill[magenta] (0,0) circle (0.8cm);
\fill[cyan] (0:1cm) circle (0.8cm);
\end{scope}
\begin{scope}
\draw (0,0) circle (0.8cm) node[left] {$879$};
\draw (0:1cm) circle (0.8cm) node [right] {$48$};
\draw(0.5,0) node {$107$};
\draw(0.5,1.2) node {BaSq};
\end{scope}
\begin{scope}[shift={(2.8cm,0cm)}, fill opacity=0.5]
\fill[magenta] (0,0) circle (0.8cm);
\fill[cyan] (0:1cm) circle (0.8cm);
\end{scope}
\begin{scope}[shift={(2.8cm,0cm)}]
\draw (0,0) circle (0.8cm) node[left] {$193$};
\draw (0:1cm) circle (0.8cm) node [right] {$65$};
\draw(0.5,0) node {$17$};
\draw(0.5,1.2) node {Lum};
\end{scope}
\begin{scope}[shift={(5.6cm,0cm)}, fill opacity=0.5]
\fill[magenta] (0,0) circle (0.8cm);
\fill[cyan] (0:1cm) circle (0.8cm);
\end{scope}
\begin{scope}[shift={(5.6cm,0cm)}]
\draw (0,0) circle (0.8cm) node[left] {$366$};
\draw (0:1cm) circle (0.8cm) node [right] {$57$};
\draw(0.5,0) node {$23$};
\draw(0.5,1.2) node {LumI};
\end{scope}
\begin{scope}[shift={(8.4cm,0cm)}, fill opacity=0.5]
\fill[magenta] (0,0) circle (0.8cm);
\fill[cyan] (0:1cm) circle (0.8cm);
\end{scope}
\begin{scope}[shift={(8.4cm,0cm)}]
\draw (0,0) circle (0.8cm) node[left] {$784$};
\draw (0:1cm) circle (0.8cm) node [right] {$45$};
\draw(0.5,0) node {$91$};
\draw(0.5,1.2) node {LumP};
\end{scope}
\begin{scope}[shift={(11.2cm,0cm)}, fill opacity=0.5]
\fill[magenta] (0,0) circle (0.8cm);
\fill[cyan] (0:1cm) circle (0.8cm);
\end{scope}
\begin{scope}[shift={(11.2cm,0cm)}]
\draw (0,0) circle (0.8cm) node[left] {\textcolor{black}{337}};
\draw (0:1cm) circle (0.8cm) node [right] {$46$};
\draw(0.5,0) node {$12$};
\draw(0.5,1.2) node {NE};
\end{scope}
\end{tikzpicture}
\end{center}
\caption{Venn Diagrams representing the number of DEGs (in pink), the number of deregulated TFs identified by our method (in blue) and their intersection.}\label{fig:DEG}
\end{figure}
\section*{Conclusion}
With the aim of understanding the deregulation processes in tumoral cells, we develop a three-step strategy that measures the influence of TFs on the deregulation of genes in tumor samples.
A list of TFs characterizing given subtypes can then be established. Even if biological experimental validation remains future work, the method appears usable as a complement to differential gene expression analysis to point to potential cancer biomarkers.
An open question, which also has to be tackled, is to determine to what extent the information carried by mutations can explain the deregulations. Mutation data are particularly hard to explore in this context for several reasons: first, mutations do not necessarily affect gene expression; second, in cancers, besides the most significantly mutated genes, many sequencing projects have shown that most genes are mutated in less than 5\% of the samples.
In this work, among the identified TFs of Table \ref{tab:b}, we find ATM, which is highly deregulated (83\%) and mutated (15\%) for BaSq samples. Mutations of ATM have been recently shown to be associated with shorter survival in urothelial cancers \cite{Yin18}.
As a preliminary result, we observe that 95\% of the mutated BaSq samples correspond to non-zero $\hat{B}$ coefficients (Table \ref{tab:mut}).
This table is unfortunately still too unbalanced to conclude positively on a significant association, and further work is needed to go deeper.
\begin{table}[!ht]
\begin{center}
\begin{tabularx}{0.36\textwidth}{c c c}
\toprule
& $\hat{B}\neq 0$ & $\hat{B}=0$ \\
\cmidrule(lr){2-3}
Non mutated& 91 & 21\\
Mutated &18 & 1 \\
\bottomrule
\end{tabularx}
\end{center}
\caption{Confusion matrix indicating the association between mutation and deregulation status $\hat{B}$ for TF ATM across all 131 basal samples.
}\label{tab:mut}
\end{table}
\label{sect:bib}
\bibliographystyle{plain}
\section{Introduction}
\label{intro}
In this paper, we consider a box-constrained global optimization problem of the form:
\begin{equation}
\label{eq:opt-problem}
\begin{aligned}
& \min_{\mathbf{x}\in D} && f(\mathbf{x})
\end{aligned}
\end{equation}
where $f:\mathbb{R}^n \rightarrow \mathbb{R}$ is a Lipschitz-continuous, potentially ``black-box'' objective function, and $\mathbf{x}$ is the input vector.
Thus, we assume that the analytical information of the objective function $f$ is unknown and can only be obtained by evaluating $f$ at various points of the feasible region, which is an $n$-dimensional hyper-rectangle
\[
D = [ \mathbf{a}, \mathbf{b}] = \{ \mathbf{x} \in \mathbb{R}^n: a_j \leq x_j \leq b_j, j = 1, \dots, n\}.
\]
Moreover, $f$ can be non-linear, multi-modal, non-convex, and non-differentiable.
The simplicity and efficiency of deterministic \texttt{DIRECT}-type algorithms have attracted considerable interest in the optimization community.
The original \texttt{DIRECT}{} algorithm was developed by Jones et al.~\cite{Jones1993} and is a well-known and widely used solution technique for derivative-free global optimization.
The \texttt{DIRECT}{} algorithm extends classical Lipschitz optimization~\cite{Paulavicius2006,Paulavicius2007,Paulavicius2008,Paulavicius2009b,Pinter1996book,Piyavskii1967,Sergeyev2011,Shubert1972}, where the need for the Lipschitz constant is eliminated.
This feature made \texttt{DIRECT}-type methods especially attractive for solving various real-world optimization problems~(see, e.g., \cite{Baker2000,Bartholomew2002,Carter2001,Cox2001,Serafino2011,Gablonsky2001,Liuzzi2010,Paulavicius2019:eswa,Paulavicius2014:book,Stripinis2018b} and the references given therein).
Furthermore, the extensive numerical benchmarks in~\cite{Rios2013} revealed an encouraging performance of the \texttt{DIRECT}{} algorithm among other tested derivative-free global optimization approaches, including genetic~\cite{John1975}, simulated annealing~\cite{Kirkpatrick1983}, and particle swarm optimization~\cite{Kennedy1995} algorithms.
Typically, the \texttt{DIRECT}-type algorithms include three main steps: selection, sampling, and partitioning (subdivision).
At each iteration, a specific \texttt{DIRECT}-type algorithm identifies (selects) the set of potentially optimal hyper-rectangles (POHs) and then samples and subdivides them.
The original \texttt{DIRECT}{} algorithm uses hyper-rectangular subdivisions based on $n$-dimensional trisection.
The objective function is evaluated at the center points of the newly-formed sub-rectangles.
Moreover, if several dimensions have the maximum side length, \texttt{DIRECT}{} starts trisection from the dimension with the lowest $w_j$ and continues to the highest~\cite{Jones2021,Jones1993}.
Here $w_j$ is defined as the best function value sampled along dimension $j$
\begin{equation}
w_j = \min \{ f(\mathbf{c} + \delta\mathbf{e}_j), f(\mathbf{c} - \delta\mathbf{e}_j) \},
\end{equation}
where $j \in M$ (set of dimensions with the maximum side length), $\delta$ is equal to one-third of the maximum side length, $\mathbf{c}$ is the center of the hyper-rectangle, and $\mathbf{e}_j$ is the $j$th unit vector.
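The rule for ordering the trisected dimensions can be sketched as follows (illustrative only; real implementations reuse the sampled values rather than re-evaluating $f$):

```python
import numpy as np

def trisection_order(f, c, side_lengths):
    """Return the dimensions of maximum side length ordered by w_j, the
    best of the two new sample values along each such dimension (the
    lowest w_j is split first, as in the original DIRECT)."""
    side_lengths = np.asarray(side_lengths, dtype=float)
    max_len = side_lengths.max()
    M = [j for j in range(len(side_lengths))
         if side_lengths[j] == max_len]
    delta = max_len / 3.0          # one-third of the maximum side
    w = {}
    for j in M:
        e = np.zeros_like(side_lengths)
        e[j] = 1.0                 # j-th unit vector
        w[j] = min(f(c + delta * e), f(c - delta * e))
    return sorted(M, key=w.get)    # lowest w_j first
```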
\Cref{fig:divide} illustrates the selection, sampling, and subdivision (trisection) in the original \texttt{DIRECT}{} algorithm for a two-dimensional \textit{Branin} test function.
Since the original \texttt{DIRECT}{} algorithm was published, various \texttt{DIRECT}-type extensions and modifications have been proposed.
One large group of existing modifications aims to improve the selection of POHs (see, e.g., ~\cite{Baker2000,Gablonsky2001:phd,Mockus2017,Paulavicius2019:eswa,Stripinis2018a}), while the other group concentrates on different partitioning techniques (see, e.g.,~\cite{Jones2001,Liu2015b,Paulavicius2016:jogo,Paulavicius2013:jogo,Sergeyev2006}).
In addition, the authors also make some modifications to the other steps of their algorithms.
Consequently, it is unclear which suggested improvements have the most potential within the \texttt{DIRECT}{} algorithmic framework.
\begin{figure}[ht]
\resizebox{\textwidth}{!}{
\begin{tikzpicture}
\begin{axis}[
width=0.45\textwidth,height=0.45\textwidth,
xlabel = {$c_1$},
ylabel = {$c_2$},
enlargelimits=0.05,
title={Initialization},
legend style={draw=none},
legend columns=1,
legend style={at={(0.925,-0.2)},font=\normalsize},
ylabel style={yshift=-0.1cm},
xlabel style={yshift=0.1cm},
ytick distance=1/6,
xtick distance=1/6,
every axis/.append style={font=\normalsize},
yticklabels={$0$, $0$,$\frac{1}{6}$, $\frac{1}{3}$, $\frac{1}{2}$, $\frac{2}{3}$, $\frac{5}{6}$, $1$},
xticklabels={$0$, $0$,$\frac{1}{6}$, $\frac{1}{3}$, $\frac{1}{2}$, $\frac{2}{3}$, $\frac{5}{6}$, $1$},
]
\addlegendimage{only marks,mark=*,color=black}
\addlegendentry{Sampling point}
\addplot[thick,patch,mesh,draw,black,patch type=rectangle,line width=0.3mm] coordinates {(0,0) (1,0) (1,1) (0,1)} ;
\draw [black, thick, mark size=0.05pt, fill=blue!50,opacity=0.4,line width=0.3mm] (axis cs:0,0) rectangle (axis cs:1,1);
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0.5,0.5)} node[yshift=-8pt] {\tiny $24.13$};
\end{axis}
\end{tikzpicture}
\begin{tikzpicture}
\begin{axis}[
width=0.45\textwidth,height=0.45\textwidth,
xlabel = {$c_1$},
enlargelimits=0.05,
title={Iteration $1$},
legend style={draw=none},
legend columns=1,
legend style={at={(0.925,-0.2)},font=\normalsize},
ylabel style={yshift=-0.1cm},
xlabel style={yshift=0.1cm},
ytick distance=1/6,
xtick distance=1/6,
every axis/.append style={font=\normalsize},
yticklabels={$0$, $0$,$\frac{1}{6}$, $\frac{1}{3}$, $\frac{1}{2}$, $\frac{2}{3}$, $\frac{5}{6}$, $1$},
xticklabels={$0$, $0$,$\frac{1}{6}$, $\frac{1}{3}$, $\frac{1}{2}$, $\frac{2}{3}$, $\frac{5}{6}$, $1$},
]
\addlegendimage{area legend,blue!30,fill=blue!50,opacity=0.4}
\addlegendentry{Selected POH}
\addplot[thick,patch,mesh,draw,black,patch type=rectangle,line width=0.3mm] coordinates {(0,0) (1,0) (1,1) (0,1)} ;
\addplot[thick,patch,mesh,draw,black,patch type=rectangle,line width=0.3mm] coordinates {(0,0) (1,0) (1,0.3333) (0,0.3333)};
\addplot[thick,patch,mesh,draw,black,patch type=rectangle,line width=0.3mm] coordinates {(0,0) (1,0) (1,0.6666) (0,0.6666)};
\addplot[thick,patch,mesh,draw,black,patch type=rectangle,line width=0.3mm] coordinates {(0,0.3333) (0,0.6666) (0.3333,0.6666) (0.3333,0.3333)};
\addplot[thick,patch,mesh,draw,black,patch type=rectangle,line width=0.3mm] coordinates {(1,0.3333) (1,0.6666) (0.6666,0.6666) (0.6666,0.3333)};
\draw [black, thick, mark size=0.1pt, fill=blue!50,opacity=0.4,line width=0.3mm] (axis cs:0,0) rectangle (axis cs:1,0.3333);
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0.5,0.5)} node[yshift=-8pt] {\tiny $24.13$};
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0.5,0.16666)} node[yshift=-8pt] {\tiny $2.41$};
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0.5,0.83333)} node[yshift=-8pt] {\tiny $95.84$};
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0.16666,0.5)} node[yshift=-8pt] {\tiny $13.10$};
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0.83333,0.5)} node[yshift=-8pt] {\tiny $51.39$};
\end{axis}
\end{tikzpicture}
\begin{tikzpicture}
\begin{axis}[
width=0.45\textwidth,height=0.45\textwidth,
xlabel = {$c_1$},
enlargelimits=0.05,
title={Iteration $2$},
legend style={draw=none},
legend columns=1,
legend style={at={(0.925,-0.2)},font=\normalsize},
ylabel style={yshift=-0.1cm},
xlabel style={yshift=0.1cm},
ytick distance=1/6,
xtick distance=1/6,
every axis/.append style={font=\normalsize},
yticklabels={$0$, $0$,$\frac{1}{6}$, $\frac{1}{3}$, $\frac{1}{2}$, $\frac{2}{3}$, $\frac{5}{6}$, $1$},
xticklabels={$0$, $0$,$\frac{1}{6}$, $\frac{1}{3}$, $\frac{1}{2}$, $\frac{2}{3}$, $\frac{5}{6}$, $1$},
]
\addlegendimage{area legend,black,fill=white,opacity=0.5}
\addlegendentry{Unselected region}
\addplot[thick,patch,mesh,draw,black,patch type=rectangle,line width=0.3mm] coordinates {(0,0) (1,0) (1,1) (0,1)} ;
\draw [black, thick, mark size=0.1pt,line width=0.3mm] (axis cs:0.3333,0.3333) rectangle (axis cs:0.6666,0.6666);
\draw [black, thick, mark size=0.1pt,line width=0.3mm] (axis cs:0.3333,0) rectangle (axis cs:0.6666,0.6666);
\draw [black, thick, mark size=0.1pt, fill=blue!50,opacity=0.4,line width=0.3mm] (axis cs:0,0.6666) rectangle (axis cs:1,1);
\draw [black, thick, mark size=0.1pt, fill=blue!50,opacity=0.4,line width=0.3mm] (axis cs:0.3333,0) rectangle (axis cs:0.6666,0.3333);
\draw [black, thick, mark size=0.1pt,line width=0.3mm] (axis cs:0.6666,0.3333) rectangle (axis cs:1,0.6666);
\draw [black, thick, mark size=0.1pt,line width=0.3mm] (axis cs:0,0.6666) rectangle (axis cs:1,1);
\draw [black, thick, mark size=0.1pt,line width=0.3mm] (axis cs:0,0.3333) rectangle (axis cs:0.3333,0.6666);
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0.5,0.5)} node[yshift=-8pt] {\tiny $24.13$};
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0.5,0.16666)} node[yshift=-8pt] {\tiny $2.41$};
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0.5,0.83333)} node[yshift=-8pt] {\tiny $95.84$};
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0.16666,0.5)} node[yshift=-8pt] {\tiny $13.10$};
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0.83333,0.5)} node[yshift=-8pt] {\tiny $51.39$};
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0.16666,0.16666)} node[yshift=-8pt] {\tiny $70.96$};
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0.83333,0.16666)} node[yshift=-8pt] {\tiny $14.69$};
\end{axis}
\end{tikzpicture}}
\caption{Illustration of selection, sampling, and subdivision (trisection) used in the original \texttt{DIRECT}{} algorithm~\cite{Jones1993} on a two-dimensional \textit{Branin} test function in the first two iterations}
\label{fig:divide}
\end{figure}
We address this problem by comparing various proposed candidate selection and partitioning techniques under identical conditions for the remaining algorithmic steps.
In this way, we seek to improve the efficiency of existing \texttt{DIRECT}-type algorithms by creating new combinations based on previous proposals.
Twelve mostly new \texttt{DIRECT}-type algorithmic variations are introduced and investigated using three selection and four partitioning schemes.
The rest of the paper is organized as follows.
\Cref{rewiev} reviews the original \texttt{DIRECT}{} algorithm and well-known \texttt{DIRECT}-type modifications proposed for the candidate selection and subdivision.
The obtained new combinations are described in \Cref{sec:combinations}.
An extensive experimental analysis using traditional test problems is presented in \Cref{sec:experiments}, while on GKLS-type test problems in \Cref{sec:exp-GKLS}.
Finally, in \Cref{sec:conclusiuo}, we conclude the paper.
\section{Overview of candidate selection and partitioning techniques used in \texttt{DIRECT}-type algorithms}
\label{rewiev}
This section reviews the most well-known strategies for selecting and partitioning potentially optimal candidates in \texttt{DIRECT}-type algorithms.
We start with a brief review of the main steps of the original \texttt{DIRECT}{} algorithm, with particular emphasis on candidate selection and partitioning techniques.
\subsection{Original \texttt{DIRECT}{} algorithm}
The original \texttt{DIRECT}{} algorithm is a deterministic derivative-free global optimization~\cite{Horst1995:book,Sergeyev2017:book,Strongin2000:book} algorithm subject to simple box constraints.
The main steps of \texttt{DIRECT}{} are summarized in~\Cref{alg:direct}.
At the \textbf{Initialization} step (see~\Cref{alg:direct}, Lines~\ref{alg:initialization_begin}--\ref{alg:initialization_end}), \texttt{DIRECT}{} normalizes the search region $D$ to unit hyper-rectangle $\bar{D}$ and refers to the original space $D$ only when evaluating the objective function.
Regardless of the dimension $n$, the first evaluation of the objective function is performed at the midpoint of the unit hyper-rectangle $\mathbf{c}^1 = (1/2, \dots,1/2) \in \bar{D}$.
\begin{algorithm}[ht]
\normalsize
\LinesNumbered
\SetAlgoLined
\SetKwInOut{Input}{input}
\SetKwInOut{Output}{output}
\SetKwData{Mmax}{M$_{\rm max}$}
\SetKwData{Kmax}{K$_{\rm max}$}
\SetKw{And}{and}
\SetKw{Or}{or}
\texttt{DIRECT}($f$,$D$,$opt$);\\
\Input{Objective function $(f)$, search domain $(D)$, and adjustable algorithmic options $(opt)$: tolerance ($\varepsilon_{\rm pe}$), maximal number of function evaluations ($\Mmax$) and algorithmic iterations ($\Kmax$) ; }
\Output{The best found objective value $(f^{\min})$, solution point $(\mathbf{x}^{\min})$, and record of various performance metrics: percent error $(pe)$, number of iterations $(k)$, number of function evaluations $(m)$;}
\algrule
\nonl \textbf{Initialization step:} \\
\textit{Normalize} the search domain $D$ to be the unit hyper-rectangle $\bar{D}$; \label{alg:initialization_begin} \\
\textit{Evaluate} the objective function at the center point ($\mathbf{c}^1$) of $\bar{D}$ and set: \\
$\mathbf{c}^1 = \left(\frac{1}{2}, \frac{1}{2}, ..., \frac{1}{2}\right)$; \\
$x^{\min}_j = \mid b_j - a_j \mid c^1_j + a_j, j=1, \dots, n$; \tcp*[f]{referring to $D$}\\
$f^1 = f(\mathbf{x}^{\min})$, $f^{\min} = f^1$; \\
\textit{Initialize} performance measures: $k=1$, $m=1$, $pe$;\label{alg:initialization_end} \tcp*[f]{\textit{pe} defined in \eqref{eq:pe}}
\While{$pe > \varepsilon_{\rm pe}$ \And $m < \Mmax$ \And $k < \Kmax$ }{
\textbf{Selection step:} \textit{Identify} the set $S_k$ of POHs using \Cref{def:potOptRect} \; \label{alg:selection_begin}
\ForEach{$\bar{D}^j_k \in S_k$}{
\textbf{Sampling step:} \textit{Evaluate} $f$ at the newly sampled points in $\bar{D}^j_k$; \label{alg:sampling}\\
\textbf{Subdivision step:} \textit{Trisect} $\bar{D}^j_k$ as illustrated in \Cref{fig:divide} \; \label{alg:subdivision}
}\label{alg:global_end}
\textit{Update} $f^{\min}, \mathbf{x}^{\min}$, and performance measures: $k$, $m$ and $pe$;
}
\textbf{Return} $f^{\min}, \mathbf{x}^{\min}$, and performance measures: $k$, $m$ and $pe$.
\caption{Main steps of the \texttt{DIRECT}{} algorithm}
\label{alg:direct}
\end{algorithm}
Two of the most critical steps in the original \texttt{DIRECT}{} and other existing modifications are \textbf{Selection} and \textbf{Subdivision}.
\subsubsection{Original selection strategy}
\label{sssec:DIRECT-selection}
Let the current partition at iteration $ k $ be defined as
\[
\mathcal{P}_k = \{ \bar{D}^i_k : i \in \mathbb{I}_k \},
\]
where $ \bar{D}^i_k = [\mathbf{a}^i, \mathbf{b}^i] = \{ \mathbf{x} \in \bar{D}: 0 \leq a_j^i \leq x_j \leq b_j^i \leq 1, j = 1,\dots, n, \forall i \in \mathbb{I}_k \} $ and $ \mathbb{I}_k $ is the index set identifying the current partition $ \mathcal{P}_k $.
The next partition, $\mathcal{P}_{k+1}$, is obtained by subdividing selected POHs from the current partition $ \mathcal{P}_k $.
Note that at the first iteration ($k=1$) there is only one candidate, $\bar{D}^1_1$, which is automatically potentially optimal.
The formal requirement of potential optimality in subsequent iterations is stated in \Cref{def:potOptRect}.
\begin{definition}
Let $ \mathbf{c}^i $ denote the center sampling point and $ \delta^i_k $ be a measure (equivalently, sometimes called distance or size) of the hyper-rectangle $ \bar{D}^i_k$.
Let $ \varepsilon > 0 $ be a positive constant and $f^{\min}$ be the best currently found value of the objective function.
A hyper-rectangle $ \bar{D}^h_k, h \in \mathbb{I}_k $ is said to be potentially optimal if there exists some rate-of-change (Lipschitz) constant $ \tilde{L} > 0$ such that
\begin{eqnarray}
f(\mathbf{x}^h) - \tilde{L}\delta^h_k & \leq & f(\mathbf{x}^i) - \tilde{L}\delta^i_k, \quad \forall i \in \mathbb{I}_k, \label{eqn:potOptRect1} \\
f(\mathbf{x}^h) - \tilde{L}\delta^h_k & \leq & f^{\min} - \varepsilon|f^{\min}|, \label{eqn:potOptRect2}
\end{eqnarray}
where
\begin{equation}
\label{eq:space_original}
x^i_j = \mid b_j - a_j \mid c^i_j + a_j, j=1,\dots,n,
\end{equation}
and the measure of the hyper-rectangle $ \bar{D}^i_k$ is
\begin{equation}
\label{eq:distance}
\delta^i_k = \frac{1}{2} \| {\mathbf{b}}^i - {\mathbf{a}}^i \|_2.
\end{equation}
\label{def:potOptRect}
\end{definition}
The hyper-rectangle $ \bar{D}^h_k $ is potentially optimal if the lower Lipschitz bound for the objective function computed by the left-hand side of \eqref{eqn:potOptRect1} is the smallest one for some positive constant $\tilde{L}$ in the current partition $ \mathcal{P}_k $.
In~\eqref{eqn:potOptRect2}, the parameter $\varepsilon$ is used to protect from an excessive refinement of the local minima~\cite{Jones1993,Paulavicius2014:jogo}.
In~\cite{Jones1993}, the authors obtained good results for $\varepsilon$ values ranging from $10^{-3}$ to $10^{-7}$.
A geometrical interpretation of POH selection using \Cref{def:potOptRect} is illustrated on the left panel of \Cref{fig:selection}.
Here each hyper-rectangle is represented as a point.
The $x$-axis shows the size of the measure $(\delta)$ while the $y$-axis -- the objective function value attained at the midpoint $(\mathbf{c})$ of this hyper-rectangle.
The hyper-rectangles meeting conditions \eqref{eqn:potOptRect1} and \eqref{eqn:potOptRect2} are points on the lower-right convex hull (highlighted in blue color).
However, such a selection strategy can be especially inefficient, e.g., for symmetric problems.
There may be many POHs with the same diameter $\delta^i_k$ and objective value, leading to a drastic increase of selected POHs per iteration.
To overcome this, authors in \cite{Gablonsky2001} proposed selecting only one of these many ``equivalent'' candidates.
In \cite{Jones2021}, the authors revealed that such modification could significantly increase the performance of the \texttt{DIRECT}{} algorithm.
In this paper, we call this an \textit{improved original selection strategy}.
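\Cref{def:potOptRect} can also be checked directly by bounding the admissible Lipschitz constants for each candidate; the following brute-force sketch mirrors the two conditions (practical codes instead walk the lower-right convex hull of \Cref{fig:selection}):

```python
def potentially_optimal(deltas, fvals, eps=1e-4):
    """Brute-force check of the POH definition: index h is selected iff
    some Lipschitz constant L > 0 satisfies conditions (1) and (2).
    `deltas` are the measures, `fvals` the midpoint function values."""
    fmin = min(fvals)
    poh = []
    for h in range(len(fvals)):
        lo, hi = 0.0, float("inf")   # admissible range for L
        ok = True
        for i in range(len(fvals)):
            if deltas[i] < deltas[h]:
                # smaller rectangles force a lower bound on L
                lo = max(lo, (fvals[h] - fvals[i]) / (deltas[h] - deltas[i]))
            elif deltas[i] > deltas[h]:
                # larger rectangles force an upper bound on L
                hi = min(hi, (fvals[i] - fvals[h]) / (deltas[i] - deltas[h]))
            elif fvals[i] < fvals[h]:
                ok = False           # same size, strictly better value
        if ok and lo <= hi:
            # condition (2) is easiest to satisfy at the largest admissible L
            if hi < float("inf"):
                ok = fvals[h] - hi * deltas[h] <= fmin - eps * abs(fmin)
            if ok:
                poh.append(h)
    return poh
```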
\subsubsection{Original partitioning scheme}
\label{sssec:DIRECT-partitioning}
In the \textbf{Sampling} and \textbf{Subdivision} steps (see Algorithm~\ref{alg:direct}, Lines~\ref{alg:sampling} and \ref{alg:subdivision}), a hyper-rectangular partition based on $n$-dimensional trisection is used.
Using this scheme, the POHs are partitioned into smaller non-intersecting hyper-rectangles (see~\Cref{fig:divide}), containing the lower function values in larger new hyper-rectangles.
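A sketch of the trisection itself, on a hyper-rectangle stored as a pair of corner vectors (an assumed representation):

```python
import numpy as np

def trisect(a, b, j):
    """Trisect the hyper-rectangle [a, b] along dimension j into three
    equal sub-rectangles; the original center point remains the center
    of the middle child, so its function value is reused."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    third = (b[j] - a[j]) / 3.0
    children = []
    for k in range(3):
        ak, bk = a.copy(), b.copy()
        ak[j] = a[j] + k * third
        bk[j] = a[j] + (k + 1) * third
        children.append((ak, bk))
    return children
```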
\subsection{Other candidate selection schemes in \texttt{DIRECT}-type algorithms}
\label{modifications2}
Various improvements and new ideas for candidate selection were proposed in the literature.
To prevent the \texttt{DIRECT}{} algorithm from being sensitive to the objective function's additive scaling, authors in~\cite{Finkel2006} introduced a scaling of the objective function values by subtracting the median value calculated from the previously evaluated function values.
More specifically, in the selection step, the new \texttt{DIRECT-m}{} replaces condition \eqref{eqn:potOptRect2} with:
\begin{equation}
f(\mathbf{x}^i) - \tilde{L}\delta^i \leq f^{\min} - \varepsilon|f^{\min} - {f}^{\rm median}|. \label{eqn:potOptRectz}
\end{equation}
Similarly, in~\cite{Liu2013}, the authors adopted a similar idea in \texttt{DIRECT-a}{}.
At each iteration, instead of the median value (${f}^{\rm median}$), they proposed to use the average value $({f}^{\rm average})$:
\begin{equation}
f(\mathbf{x}^i) - \tilde{L}\delta^i \leq f^{\min} - \varepsilon|f^{\min} - {f}^{\rm average}|. \label{eqn:potOptRect4}
\end{equation}
The authors in~\cite{Finkel2004aa,Liu2015b,Liu2015} showed that different schemes controlling the $\varepsilon$ parameter in \eqref{eqn:potOptRect2} can increase the efficiency of the \texttt{DIRECT}{} algorithm, especially when the solution needs to be fine-tuned to higher accuracy.
To verify this, an experimental investigation of the original \texttt{DIRECT}, \texttt{DIRECT-m}{} (based on \cref{eqn:potOptRectz}), and \texttt{DIRECT-a}{} (based on \cref{eqn:potOptRect4}) algorithms on an extensive set consisting of $81$ test and six engineering problems (from \texttt{DIRECTGOLib v1.0} \cite{DIRECTGOLib2022v10}) was performed in \cite{Stripinis2021:dgo}.
Our investigation revealed no significant performance difference on the engineering problems.
However, on the $81$ test problems, the original \texttt{DIRECT}{} proved to be more efficient and, using the same stopping conditions, solved $10$ and $23$ more test problems than \texttt{DIRECT-m}{} and \texttt{DIRECT-a}{}, respectively (for details, see Table $3$ in \cite{Stripinis2021:dgo}).
Based on this, the original eq. \eqref{eqn:potOptRect2} was used in an improved original selection strategy in our experimental study.
Below we focus on the other two selection schemes considered in this research.
\subsubsection{Aggressive selection}
\label{sssec:aggressive-selection}
In~\cite{Baker2000}, the authors relaxed the selection criteria for POHs and proposed an aggressive version of the \texttt{DIRECT}{} algorithm (\texttt{Aggressive DIRECT}).
The main idea is to select and divide at least one hyper-rectangle from each group of different diameters $(\delta_k^i)$, namely the one containing the lowest objective function value.
Such aggressive selection results in many more objective function evaluations per iteration than other existing POH selection schemes.
From the optimization point of view, such an approach may seem less favorable, since it ``wastes'' function evaluations by exploring unnecessary (non-potentially optimal) hyper-rectangles.
However, such a strategy is much more appealing in a parallel environment, as was shown in~\cite{He2009part2,He2009part1,He2010,Watson2001}.
In \cite{He2008}, the authors showed that by limiting the refinement of the search space once the size of a hyper-rectangle $ \delta^i_k $ reaches some prescribed size $ \delta^{\rm limit} $, memory usage is reduced by $10 \%$ to $70 \%$, and the algorithm can run longer without memory allocation failure.
In the experimental part (described in \Cref{sec:experiments}), the limit parameter $ (\delta^{\rm limit}) $ was set to the size of a hyper-rectangle that has been subdivided $ 50 n $ times.
The $ \delta^{\rm limit} $ parameter serves the same purpose as condition \eqref{eqn:potOptRect2}: it avoids wasting function evaluations by ``over-exploring'' the region of a local minimum.
We call this an \textit{improved aggressive selection strategy}.
A geometrical interpretation of the aggressive selection is shown in the middle panel of \Cref{fig:selection}.
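The improved aggressive rule (one lowest-valued hyper-rectangle per diameter group, skipping groups below $\delta^{\rm limit}$) can be sketched as follows; the list-of-pairs data layout is purely illustrative:

```python
def aggressive_select(rects, delta_limit=0.0):
    """Select, from each group of hyper-rectangles sharing the same
    diameter, the index of the one with the lowest center function value.
    Groups with diameter below delta_limit are skipped (improved variant,
    avoiding over-refinement of tiny hyper-rectangles).
    rects: list of (diameter, function value) pairs."""
    best = {}  # diameter -> index of the rectangle with the lowest value
    for i, (delta, f) in enumerate(rects):
        if delta < delta_limit:
            continue
        if delta not in best or f < rects[best[delta]][1]:
            best[delta] = i
    return sorted(best.values())
```

With `delta_limit=0`, this reduces to the original aggressive scheme of \texttt{Aggressive DIRECT}.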
\begin{figure}[htb]
\resizebox{\textwidth}{!}{
\begin{tikzpicture}
\begin{groupplot}[
group style={
group size=3 by 1,
x descriptions at=edge bottom,
y descriptions at=edge left,
vertical sep=0pt,
horizontal sep=6pt,
},
height=0.5\textwidth,width=0.7\textwidth,
]
\nextgroupplot[
title = {\parbox{3.7cm}{Original \texttt{DIRECT}{} selection}},
xlabel = {\large $\delta$},
ylabel = {\large Function values},
ymin=0,
ymax=1,
xmin=0,
xmax=0.6,
ytick distance=0.2,
xtick distance=0.2,
legend style={font=\scriptsize},
height=0.6\textwidth,width=0.5\textwidth,
enlargelimits=0.05,
]
\addlegendimage{black!60,only marks,mark=*,mark size=2pt}
\addlegendentry{non-potentially optimal}
\addlegendimage{blue,mark=*,mark size=2.5pt}
\addlegendentry{potentially optimal}
\addplot[black!60,only marks,mark=*,mark size=2pt] table[x=D,y=F] {data/ddr2.txt};
\addplot[blue,mark=*,mark size=2.5pt] table[x=D_pot_opt,y=F_pot_opt] {data/ddr2.txt};
\nextgroupplot[
title = {\parbox{3.7cm}{Aggressive selection}},
xlabel = {\large $\delta$},
ymin=0,
ymax=1,
xmin=0,
xmax=0.6,
ytick distance=0.25,
xtick distance=0.2,
height=0.6\textwidth,width=0.5\textwidth,
enlargelimits=0.05,
]
\addlegendimage{black!60,only marks,mark=*,mark size=2pt}
\addlegendentry{non-potentially optimal}
\addlegendimage{blue,mark=*,mark size=2.5pt}
\addlegendentry{potentially optimal}
\addplot[black!60,only marks,mark=*,mark size=2pt] table[x=D,y=F] {data/ddr1.txt};
\addplot[blue,mark=*,mark size=2.5pt] table[x=D_pot_opt,y=F_pot_opt] {data/ddr1.txt};
\nextgroupplot[
title = {\parbox{3.7cm}{Pareto selection}},
xlabel = {\large $\delta$},
ymin=0,
ymax=1,
xmin=0,
xmax=0.6,
xtick distance=0.2,
height=0.6\textwidth,width=0.5\textwidth,
enlargelimits=0.05,
]
\addlegendimage{black!60,only marks,mark=*,mark size=2pt}
\addlegendentry{non-potentially optimal}
\addlegendimage{blue,mark=*,mark size=2.5pt}
\addlegendentry{potentially optimal}
\addplot[black!60,only marks,mark=*,mark size=2pt] table[x=D,y=F] {data/ddr.txt};
\addplot[blue,mark=*,mark size=2.5pt] table[x=D_pot_opt,y=F_pot_opt] {data/ddr.txt};
\end{groupplot}
\end{tikzpicture}}
\caption{Comparison of three different selection schemes (original \texttt{DIRECT}, aggressive, and Pareto) applied on the same set of points}
\label{fig:selection}
\end{figure}
\subsubsection{Two-step-based Pareto selection}
\label{sssec:two-step-selection}
In a more recent modification \texttt{DIRECT-GL}~\cite{Stripinis2018a}, we proposed a new two-step-based selection strategy for the identification of the extended set of POHs.
In both steps, \texttt{DIRECT-GL}{} selects only Pareto-optimal hyper-rectangles: in the first step, those non-dominated with respect to size (the larger, the better) and center-point function value (the lower, the better); in the second step, those non-dominated with respect to size and distance from the current minimum point (the closer, the better). The final set of POHs is the union of the candidates identified in both steps.
We note that this scheme does not have any protection against over-exploration of sub-optimal local minima regions.
A geometrical interpretation of the selection procedure is shown in the right panel of \Cref{fig:selection}.
In the first step, \texttt{DIRECT-GL}{} selects Pareto hyper-rectangles concerning the size and function value.
Therefore, unlike the \texttt{Aggressive DIRECT}{} strategy, hyper-rectangles from the groups where the minimum objective function value is higher than the minimum value from the larger groups are not selected in \texttt{DIRECT-GL}{}.
Compared to the original selection (Definition~\ref{def:potOptRect}), in \texttt{DIRECT-GL}{} the set of POHs is enlarged by adding more medium-sized hyper-rectangles.
In this sense, Pareto selection may be more global than the original \texttt{DIRECT}{} selection.
Additionally, in the second step, \texttt{DIRECT-GL}{} selects the hyper-rectangles that are non-dominated concerning the size and distance from the current minimum point.
This way, the set of POHs is enlarged with various size hyper-rectangles nearest the current minimum point, assuring a broader examination around it.
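Assuming each hyper-rectangle is described by its diameter, center function value, and distance to the current best point (an illustrative layout, not \texttt{DIRECT-GL}'s actual data structures), the two-step union can be sketched as:

```python
def pareto_front(points):
    """Indices of points non-dominated when maximizing the first
    coordinate (size) and minimizing the second (value or distance)."""
    front = []
    for i, (di, vi) in enumerate(points):
        dominated = any(
            (dj >= di and vj <= vi) and (dj > di or vj < vi)
            for j, (dj, vj) in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front

def two_step_selection(deltas, fvals, dists):
    """Union of Pareto-optimal candidates on (size, function value)
    and (size, distance to the current minimum point)."""
    step1 = pareto_front(list(zip(deltas, fvals)))
    step2 = pareto_front(list(zip(deltas, dists)))
    return sorted(set(step1) | set(step2))
```

The first front drives global exploration; the second enlarges the set with variously sized hyper-rectangles near the incumbent minimum.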
\subsection{Other partitioning schemes in \texttt{DIRECT}-type algorithms}
\label{modifications}
\subsubsection{Trisection strategy along the single longest side}
\label{sssec:trisection-along-single-side}
In \cite{Jones2001}, the author proposed a revised version of the original \texttt{DIRECT}{} algorithm.
One of the main modifications is to trisect selected POHs only along the single longest side (coordinate), see~\Cref{fig:dividere}.
If there are several equal longest sides, the coordinate that has been split the least times during the entire search process so far is selected.
If there is a tie on the latter criterion, the lowest indexed dimension is selected.
In \cite{Jones2021}, the authors showed that dividing a selected hyper-rectangle along only one longest side, instead of all of them, can significantly increase the convergence speed.
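The tie-breaking rule described above can be sketched in a few lines (a minimal sketch; the assumption is that a per-dimension counter of past splits is maintained by the caller):

```python
def split_dimension(side_lengths, split_counts):
    """Pick the coordinate to trisect: the longest side; ties are broken
    by the dimension split the fewest times during the search so far,
    and a remaining tie by the lowest dimension index."""
    longest = max(side_lengths)
    candidates = [i for i, s in enumerate(side_lengths) if s == longest]
    return min(candidates, key=lambda i: (split_counts[i], i))
```

Balancing split counts across tied dimensions keeps the partition from becoming needlessly elongated in any single coordinate.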
\begin{figure}[ht]
\resizebox{\textwidth}{!}{
\begin{tikzpicture}
\begin{axis}[
width=0.45\textwidth,height=0.45\textwidth,
xlabel = {$c_1$},
ylabel = {$c_2$},
enlargelimits=0.05,
title={Initialization},
legend style={draw=none},
legend columns=1,
legend style={at={(0.925,-0.2)},font=\normalsize},
ylabel style={yshift=-0.1cm},
xlabel style={yshift=0.1cm},
ytick distance=1/6,
xtick distance=1/6,
every axis/.append style={font=\normalsize},
yticklabels={$0$, $0$,$\frac{1}{6}$, $\frac{1}{3}$, $\frac{1}{2}$, $\frac{2}{3}$, $\frac{5}{6}$, $1$},
xticklabels={$0$, $0$,$\frac{1}{6}$, $\frac{1}{3}$, $\frac{1}{2}$, $\frac{2}{3}$, $\frac{5}{6}$, $1$},
]
\addlegendimage{only marks,mark=*,color=black}
\addlegendentry{Sampling point}
\addplot[thick,patch,mesh,draw,black,patch type=rectangle,line width=0.3mm] coordinates {(0,0) (1,0) (1,1) (0,1)} ;
\draw [black, thick, mark size=0.05pt, fill=blue!50,opacity=0.4,line width=0.3mm] (axis cs:0,0) rectangle (axis cs:1,1);
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0.5,0.5)} node[yshift=-8pt] {\tiny $24.13$};
\end{axis}
\end{tikzpicture}
\begin{tikzpicture}
\begin{axis}[
width=0.45\textwidth,height=0.45\textwidth,
xlabel = {$c_1$},
enlargelimits=0.05,
title={Iteration $1$},
legend style={draw=none},
legend columns=1,
legend style={at={(0.925,-0.2)},font=\normalsize},
ylabel style={yshift=-0.1cm},
xlabel style={yshift=0.1cm},
ytick distance=1/6,
xtick distance=1/6,
every axis/.append style={font=\normalsize},
yticklabels={$0$, $0$,$\frac{1}{6}$, $\frac{1}{3}$, $\frac{1}{2}$, $\frac{2}{3}$, $\frac{5}{6}$, $1$},
xticklabels={$0$, $0$,$\frac{1}{6}$, $\frac{1}{3}$, $\frac{1}{2}$, $\frac{2}{3}$, $\frac{5}{6}$, $1$},
]
\addlegendimage{area legend,blue!30,fill=blue!50,opacity=0.4}
\addlegendentry{Selected POH}
\addplot[thick,patch,mesh,draw,black,patch type=rectangle,line width=0.3mm] coordinates {(0,0) (1,0) (1,1) (0,1)} ;
\draw [black, thick, mark size=0.1pt,line width=0.3mm] (axis cs:0,0) rectangle (axis cs:1,1);
\draw [black, thick, mark size=0.1pt,line width=0.3mm] (axis cs:0.3333,0) rectangle (axis cs:0.6666,1);
\draw [black, thick, mark size=0.1pt, fill=blue!50,opacity=0.4,line width=0.3mm] (axis cs:0,0) rectangle (axis cs:0.3333,1);
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0.5,0.5)} node[yshift=-8pt] {\tiny $24.13$};
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0.16666,0.5)} node[yshift=-8pt] {\tiny $13.10$};
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0.83333,0.5)} node[yshift=-8pt] {\tiny $51.39$};
\end{axis}
\end{tikzpicture}
\begin{tikzpicture}
\begin{axis}[
width=0.45\textwidth,height=0.45\textwidth,
xlabel = {$c_1$},
enlargelimits=0.05,
title={Iteration $2$},
legend style={draw=none},
legend columns=1,
legend style={at={(0.925,-0.2)},font=\normalsize},
ylabel style={yshift=-0.1cm},
xlabel style={yshift=0.1cm},
ytick distance=1/6,
xtick distance=1/6,
every axis/.append style={font=\normalsize},
yticklabels={$0$, $0$,$\frac{1}{6}$, $\frac{1}{3}$, $\frac{1}{2}$, $\frac{2}{3}$, $\frac{5}{6}$, $1$},
xticklabels={$0$, $0$,$\frac{1}{6}$, $\frac{1}{3}$, $\frac{1}{2}$, $\frac{2}{3}$, $\frac{5}{6}$, $1$},
]
\addlegendimage{area legend,black,fill=white,opacity=0.5}
\addlegendentry{Not selected region}
\addplot[thick,patch,mesh,draw,black,patch type=rectangle,line width=0.3mm] coordinates {(0,0) (1,0) (1,1) (0,1)} ;
\draw [black, thick, mark size=0.1pt,line width=0.3mm] (axis cs:0,0) rectangle (axis cs:1,1);
\draw [black, thick, mark size=0.1pt,line width=0.3mm] (axis cs:0.3333,0) rectangle (axis cs:0.6666,1);
\draw [black, thick, mark size=0.1pt,line width=0.3mm] (axis cs:0,0) rectangle (axis cs:0.3333,1);
\draw [black, thick, mark size=0.1pt,line width=0.3mm] (axis cs:0,0.3333) rectangle (axis cs:0.3333,0.6666);
\draw [black, thick, mark size=0.1pt, fill=blue!50,opacity=0.4,line width=0.3mm] (axis cs:0,0.6666) rectangle (axis cs:0.3333,1);
\draw [black, thick, mark size=0.1pt, fill=blue!50,opacity=0.4,line width=0.3mm] (axis cs:0.3333,0) rectangle (axis cs:0.6666,1);
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0.5,0.5)} node[yshift=-8pt] {\tiny $24.13$};
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0.16666,0.5)} node[yshift=-8pt] {\tiny $13.10$};
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0.83333,0.5)} node[yshift=-8pt] {\tiny $51.39$};
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0.16666,0.16666)} node[yshift=-8pt] {\tiny $70.96$};
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0.16666,0.83333)} node[yshift=-8pt] {\tiny $5.24$};
\end{axis}
\end{tikzpicture}}
\caption{Two-dimensional illustration of the partitioning technique used in the revised version of the \texttt{DIRECT}{} algorithm~\cite{Jones2001} on a two-dimensional \textit{Branin} test function}
\label{fig:dividere}
\end{figure}
\subsubsection{Diagonal trisection strategy}
\label{sssec:diagonal-trisection}
The adaptive diagonal curves (\texttt{ADC}) based algorithm was introduced in \cite{Sergeyev2006}.
Independently of the problem dimension, the \texttt{ADC}{} algorithm evaluates the objective function $f(\mathbf{x})$ at two vertices of the main diagonals of each hyper-rectangle $ \bar{D}_k^i$, as shown in \Cref{fig:divide_adc}.
As in the revised version of \texttt{DIRECT}{} \cite{Jones2001}, each selected POH is trisected along just one of the longest sides.
Such a diagonal scheme potentially obtains more comprehensive information about the objective function than center sampling.
Center sampling strategies may sometimes take many iterations to find the solution when the hyper-rectangle containing the optimum has a midpoint with a very poor function value, which makes it undesirable for further selection.
By sampling two points per hyper-rectangle, the \texttt{ADC}{} algorithm reduces the chance that both sampling points in the hyper-rectangle containing the optimum take such poor values.
Therefore, better performance can be expected, especially when solving more complex problems.
The main advantage of such a strategy is that it addresses one of the well-known algorithmic weaknesses of the original \texttt{DIRECT}: boundary points of the feasible region can be approached arbitrarily closely but never sampled using the center sampling technique.
The authors of \cite{Huyer1999,Liu2015b} have shown that this can cause very slow convergence to an optimum that lies on the boundary of the feasible region.
\begin{figure}[ht]
\resizebox{\textwidth}{!}{
\begin{tikzpicture}
\begin{axis}[
width=0.45\textwidth,height=0.45\textwidth,
xlabel = {$c_1$},
ylabel = {$c_2$},
enlargelimits=0.05,
title={Initialization},
legend style={draw=none},
legend columns=1,
legend style={at={(0.925,-0.2)},font=\normalsize},
ylabel style={yshift=-0.1cm},
xlabel style={yshift=0.1cm},
ytick distance=1/6,
xtick distance=1/6,
every axis/.append style={font=\normalsize},
yticklabels={$0$, $0$,$\frac{1}{6}$, $\frac{1}{3}$, $\frac{1}{2}$, $\frac{2}{3}$, $\frac{5}{6}$, $1$},
xticklabels={$0$, $0$,$\frac{1}{6}$, $\frac{1}{3}$, $\frac{1}{2}$, $\frac{2}{3}$, $\frac{5}{6}$, $1$},
]
\addlegendimage{only marks,mark=*,color=black}
\addlegendentry{Sampling point}
\addplot[thick,patch,mesh,draw,black,patch type=rectangle,line width=0.3mm] coordinates {(0,0) (1,0) (1,1) (0,1)} ;
\draw [black, thick, mark size=0.05pt, fill=blue!50,opacity=0.4,line width=0.3mm] (axis cs:0,0) rectangle (axis cs:1,1);
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(1,1)} node[yshift=-8pt,xshift=-12pt] {\tiny $145.87$};
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0,0)} node[yshift=8pt,xshift=12pt] {\tiny $308.12$};
\end{axis}
\end{tikzpicture}
\begin{tikzpicture}
\begin{axis}[
width=0.45\textwidth,height=0.45\textwidth,
xlabel = {$c_1$},
enlargelimits=0.05,
title={Iteration $1$},
legend style={draw=none},
legend columns=1,
legend style={at={(0.925,-0.2)},font=\normalsize},
ylabel style={yshift=-0.1cm},
xlabel style={yshift=0.1cm},
ytick distance=1/6,
xtick distance=1/6,
every axis/.append style={font=\normalsize},
yticklabels={$0$, $0$,$\frac{1}{6}$, $\frac{1}{3}$, $\frac{1}{2}$, $\frac{2}{3}$, $\frac{5}{6}$, $1$},
xticklabels={$0$, $0$,$\frac{1}{6}$, $\frac{1}{3}$, $\frac{1}{2}$, $\frac{2}{3}$, $\frac{5}{6}$, $1$},
]
\addlegendimage{area legend,blue!30,fill=blue!50,opacity=0.4}
\addlegendentry{Selected POH}
\addplot[thick,patch,mesh,draw,black,patch type=rectangle,line width=0.3mm] coordinates {(0,0) (1,0) (1,1) (0,1)} ;
\draw [black, thick, mark size=0.1pt,line width=0.3mm] (axis cs:0,0) rectangle (axis cs:1,1);
\draw [black, thick, mark size=0.1pt,line width=0.3mm] (axis cs:0.3333,0) rectangle (axis cs:0.6666,1);
\draw [black, thick, mark size=0.1pt, fill=blue!50,opacity=0.4,line width=0.3mm] (axis cs:0.3333,0) rectangle (axis cs:0.6666,1);
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(1,1)} node[yshift=-8pt,xshift=-12pt] {\tiny $145.87$};
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0,0)} node[yshift=8pt,xshift=12pt] {\tiny $308.12$};
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0.6666,0)} node[yshift=8pt,xshift=-12pt] {\tiny $14.34$};
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0.3333,1)} node[yshift=-8pt,xshift=12pt] {\tiny $100.60$};
\end{axis}
\end{tikzpicture}
\begin{tikzpicture}
\begin{axis}[
width=0.45\textwidth,height=0.45\textwidth,
xlabel = {$c_1$},
enlargelimits=0.05,
title={Iteration $2$},
legend style={draw=none},
legend columns=1,
legend style={at={(0.925,-0.2)},font=\normalsize},
ylabel style={yshift=-0.1cm},
xlabel style={yshift=0.1cm},
ytick distance=1/6,
xtick distance=1/6,
every axis/.append style={font=\normalsize},
yticklabels={$0$, $0$,$\frac{1}{6}$, $\frac{1}{3}$, $\frac{1}{2}$, $\frac{2}{3}$, $\frac{5}{6}$, $1$},
xticklabels={$0$, $0$,$\frac{1}{6}$, $\frac{1}{3}$, $\frac{1}{2}$, $\frac{2}{3}$, $\frac{5}{6}$, $1$},
]
\addlegendimage{area legend,black,fill=white,opacity=0.5}
\addlegendentry{Not selected region}
\addplot[thick,patch,mesh,draw,black,patch type=rectangle,line width=0.3mm] coordinates {(0,0) (1,0) (1,1) (0,1)} ;
\draw [black, thick, mark size=0.1pt,line width=0.3mm] (axis cs:0,0) rectangle (axis cs:1,1);
\draw [black, thick, mark size=0.1pt,line width=0.3mm] (axis cs:0.3333,0) rectangle (axis cs:0.6666,1);
\draw [black, thick, mark size=0.1pt,line width=0.3mm] (axis cs:0,0) rectangle (axis cs:0.3333,1);
\draw [black, thick, mark size=0.1pt,line width=0.3mm] (axis cs:0.3333,0.3333) rectangle (axis cs:0.6666,0.6666);
\draw [black, thick, mark size=0.1pt, fill=blue!50,opacity=0.4,line width=0.3mm] (axis cs:0.3333,0) rectangle (axis cs:0.6666,0.3333);
\draw [black, thick, mark size=0.1pt, fill=blue!50,opacity=0.4,line width=0.3mm] (axis cs:0.6666,0) rectangle (axis cs:1,1);
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(1,1)} node[yshift=-8pt,xshift=-12pt] {\tiny $145.87$};
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0,0)} node[yshift=8pt,xshift=12pt] {\tiny $308.12$};
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0.6666,0)} node[yshift=8pt,xshift=-12pt] {\tiny $14.34$};
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0.3333,1)} node[yshift=-8pt,xshift=12pt] {\tiny $100.60$};
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0.6666,0.6666)} node[yshift=-8pt,xshift=12pt] {\tiny $88.90$};
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0.3333,0.3333)} node[yshift=-8pt,xshift=-12pt] {\tiny $20.60$};
\end{axis}
\end{tikzpicture}}
\caption{Two-dimensional illustration of the diagonal trisection strategy introduced in the \texttt{ADC}~\cite{Sergeyev2006} algorithm on a two-dimensional \textit{Branin} test function}
\label{fig:divide_adc}
\end{figure}
\subsubsection{Diagonal bisection strategy}
\label{sssec:diagonal-bisection}
\texttt{BIRECT}{} (\texttt{BI}secting \texttt{RECT}angles)~\cite{Paulavicius2016:jogo} is motivated by the diagonal partitioning approach~\cite{Sergeyev2006,Sergeyev2008:book,Sergeyev2017:book}.
The bisection is used instead of a trisection typical for diagonal-based and most \texttt{DIRECT}-type algorithms.
However, neither sampling at the center nor at the diagonal's endpoints is appropriate for bisection.
Therefore, in \texttt{BIRECT}, the objective function is evaluated at two points lying on the diagonal, equidistant from each other and from the diagonal's vertices (see \Cref{fig:divide_birect}).
Such a sampling strategy enables the reuse of the sampling points in descendant hyper-rectangles.
Like the \texttt{ADC}{} algorithm (see \Cref{sssec:diagonal-trisection}), \texttt{BIRECT}{} samples two points per hyper-rectangle.
Therefore, more comprehensive information about the objective function is considered compared to the central sampling strategy used in most \texttt{DIRECT}-type algorithms.
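Under this sampling rule, the two points divide the main diagonal into three equal parts, i.e., they sit at one third and two thirds of the diagonal. A minimal sketch (the coordinate-list representation of a box is our illustrative choice):

```python
def birect_sample_points(lower, upper):
    """Two sampling points on the main diagonal of the box [lower, upper],
    located at 1/3 and 2/3 of the diagonal, so that they are equidistant
    from each other and from the diagonal's endpoints."""
    p1 = [lo + (up - lo) / 3.0 for lo, up in zip(lower, upper)]
    p2 = [lo + 2.0 * (up - lo) / 3.0 for lo, up in zip(lower, upper)]
    return p1, p2
```

For the unit square this yields the points $(1/3, 1/3)$ and $(2/3, 2/3)$ shown in the initialization panel of \Cref{fig:divide_birect}; after bisection each child box inherits one of its parent's sampling points.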
\begin{figure}[ht]
\resizebox{\textwidth}{!}{
\begin{tikzpicture}
\begin{axis}[
width=0.45\textwidth,height=0.45\textwidth,
xlabel = {$c_1$},
ylabel = {$c_2$},
enlargelimits=0.05,
title={Initialization},
legend style={draw=none},
legend columns=1,
legend style={at={(0.925,-0.2)},font=\normalsize},
ylabel style={yshift=-0.1cm},
xlabel style={yshift=0.1cm},
ytick distance=1/6,
xtick distance=1/6,
every axis/.append style={font=\normalsize},
yticklabels={$0$, $0$,$\frac{1}{6}$, $\frac{1}{3}$, $\frac{1}{2}$, $\frac{2}{3}$, $\frac{5}{6}$, $1$},
xticklabels={$0$, $0$,$\frac{1}{6}$, $\frac{1}{3}$, $\frac{1}{2}$, $\frac{2}{3}$, $\frac{5}{6}$, $1$},
]
\addlegendimage{only marks,mark=*,color=black}
\addlegendentry{Sampling point}
\addplot[thick,patch,mesh,draw,black,patch type=rectangle,line width=0.3mm] coordinates {(0,0) (1,0) (1,1) (0,1)} ;
\draw [black, thick, mark size=0.05pt, fill=blue!50,opacity=0.4,line width=0.3mm] (axis cs:0,0) rectangle (axis cs:1,1);
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0.3333,0.3333)} node[yshift=8pt,xshift=-12pt] {\tiny $20.60$};
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0.6666,0.6666)} node[yshift=8pt,xshift=12pt] {\tiny $88.90$};
\end{axis}
\end{tikzpicture}
\begin{tikzpicture}
\begin{axis}[
width=0.45\textwidth,height=0.45\textwidth,
xlabel = {$c_1$},
enlargelimits=0.05,
title={Iteration $1$},
legend style={draw=none},
legend columns=1,
legend style={at={(0.925,-0.2)},font=\normalsize},
ylabel style={yshift=-0.1cm},
xlabel style={yshift=0.1cm},
ytick distance=1/6,
xtick distance=1/6,
every axis/.append style={font=\normalsize},
yticklabels={$0$, $0$,$\frac{1}{6}$, $\frac{1}{3}$, $\frac{1}{2}$, $\frac{2}{3}$, $\frac{5}{6}$, $1$},
xticklabels={$0$, $0$,$\frac{1}{6}$, $\frac{1}{3}$, $\frac{1}{2}$, $\frac{2}{3}$, $\frac{5}{6}$, $1$},
]
\addlegendimage{area legend,blue!30,fill=blue!50,opacity=0.4}
\addlegendentry{Selected POH}
\addplot[thick,patch,mesh,draw,black,patch type=rectangle,line width=0.3mm] coordinates {(0,0) (1,0) (1,1) (0,1)} ;
\draw [black, thick, mark size=0.05pt, fill=blue!50,opacity=0.4,line width=0.3mm] (axis cs:0,0) rectangle (axis cs:0.5,1);
\draw [black, thick, mark size=0.1pt,line width=0.3mm] (axis cs:0,0) rectangle (axis cs:0.5,1);
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0.3333,0.3333)} node[yshift=8pt,xshift=-12pt] {\tiny $20.60$};
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0.6666,0.6666)} node[yshift=8pt,xshift=12pt] {\tiny $88.90$};
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0.8333,0.3333)} node[yshift=-8pt,xshift=-12pt] {\tiny $26.79$};
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0.1666,0.6666)} node[yshift=-8pt,xshift=12pt] {\tiny $2.92$};
\end{axis}
\end{tikzpicture}
\begin{tikzpicture}
\begin{axis}[
width=0.45\textwidth,height=0.45\textwidth,
xlabel = {$c_1$},
enlargelimits=0.05,
title={Iteration $2$},
legend style={draw=none},
legend columns=1,
legend style={at={(0.925,-0.2)},font=\normalsize},
ylabel style={yshift=-0.1cm},
xlabel style={yshift=0.1cm},
ytick distance=1/6,
xtick distance=1/6,
every axis/.append style={font=\normalsize},
yticklabels={$0$, $0$,$\frac{1}{6}$, $\frac{1}{3}$, $\frac{1}{2}$, $\frac{2}{3}$, $\frac{5}{6}$, $1$},
xticklabels={$0$, $0$,$\frac{1}{6}$, $\frac{1}{3}$, $\frac{1}{2}$, $\frac{2}{3}$, $\frac{5}{6}$, $1$},
]
\addlegendimage{area legend,black,fill=white,opacity=0.5}
\addlegendentry{Not selected region}
\addplot[thick,patch,mesh,draw,black,patch type=rectangle,line width=0.3mm] coordinates {(0,0) (1,0) (1,1) (0,1)} ;
\draw [black, thick, mark size=0.1pt, fill=blue!50,opacity=0.4,line width=0.3mm] (axis cs:0.5,0) rectangle (axis cs:1,1);
\draw [black, thick, mark size=0.1pt, fill=blue!50,opacity=0.4,line width=0.3mm] (axis cs:0,0.5) rectangle (axis cs:0.5,1);
\draw [black, thick, mark size=0.1pt ,line width=0.3mm] (axis cs:0,0) rectangle (axis cs:0.5,1);
\draw [black, thick, mark size=0.1pt ,line width=0.3mm] (axis cs:0,0) rectangle (axis cs:0.5,0.5);
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0.3333,0.3333)} node[yshift=8pt,xshift=-12pt] {\tiny $20.60$};
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0.6666,0.6666)} node[yshift=8pt,xshift=12pt] {\tiny $88.90$};
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0.8333,0.3333)} node[yshift=-8pt,xshift=-12pt] {\tiny $26.79$};
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0.1666,0.6666)} node[yshift=-8pt,xshift=12pt] {\tiny $2.92$};
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0.1666,0.1666)} node[yshift=-8pt,xshift=12pt] {\tiny $70.96$};
\addplot[only marks,mark=*, mark size=1.5pt,black] coordinates {(0.3333,0.8333)} node[yshift=8pt,xshift=-12pt] {\tiny $61.85$};
\end{axis}
\end{tikzpicture}}
\caption{Two-dimensional illustration of the diagonal bisection strategy used in the \texttt{BIRECT}{} algorithm~\cite{Paulavicius2016:jogo} on a two-dimensional \textit{Branin} test function}
\label{fig:divide_birect}
\end{figure}
\section{Summary of new \texttt{DIRECT}-type algorithmic variations}
\label{sec:combinations}
In this section, we define new variations of the \texttt{DIRECT}-type algorithms.
In total, twelve variants of \texttt{DIRECT}-type algorithms are constructed (see \Cref{tab:combo}) by combining three different selection and four partitioning techniques reviewed in the previous section.
\begin{table}[ht]
\centering
\caption{Abbreviations of new twelve \texttt{DIRECT}-type algorithmic variations based on three different selection and four partitioning strategies}
\label{tab:combo}
\resizebox{\textwidth}{!}{
\begin{tabular}{r|r|p{2cm}p{2cm}p{2cm}p{2cm}}
\toprule
\multicolumn{2}{c}{} & \multicolumn{4}{c}{\textbf{Partitioning strategy}}\\
\cmidrule{3-6}
\multicolumn{2}{c}{} & N-DTC & 1-DTC & 1-DTDV & 1-DBDP \\
\midrule
\multirow{3}{*}{\rotatebox{90}{\parbox{1.6cm}{\textbf{Selection scheme}}}} & IO & N-DTC-IO & 1-DTC-IO & 1-DTDV-IO & 1-DBDP-IO \\
&&&&&\\
& IA & N-DTC-IA & 1-DTC-IA & 1-DTDV-IA & 1-DBDP-IA \\
&&&&&\\
& GL & N-DTC-GL & 1-DTC-GL & 1-DTDV-GL & 1-DBDP-GL \\
\bottomrule
\end{tabular}}
\end{table}
The selection strategies used to create these new \texttt{DIRECT}-type algorithmic variations are:
\begin{itemize}
\item[1.] \textit{\textbf{I}mproved \textbf{O}riginal selection (\textbf{IO}}) as described in~\Cref{sssec:DIRECT-selection}.
\item[2.] \textit{\textbf{I}mproved \textbf{A}ggressive selection (\textbf{IA})} as described in \Cref{sssec:aggressive-selection}.
\item[3.] \textit{Two-step-based (\textbf{G}lobal-\textbf{L}ocal) Pareto selection (\textbf{GL})} as described in~\Cref{sssec:two-step-selection}.
\end{itemize}
The partitioning strategies used in these combinations are the following:
\begin{itemize}
\item[1.] \textit{Hyper-rectangular partitioning based on \textbf{N}-\textbf{D}imensional \textbf{T}risection and objective function evaluations at \textbf{C}enter points (\textbf{N-DTC})} as described in \Cref{sssec:DIRECT-partitioning}.
\item[2.] \textit{Hyper-rectangular partitioning based on \textbf{1}-\textbf{D}imensional \textbf{T}risection and objective function evaluations at \textbf{C}enter points (\textbf{1-DTC})} as described in \Cref{sssec:trisection-along-single-side}.
\item[3.] \textit{Hyper-rectangular partitioning based on \textbf{1}-\textbf{D}imensional \textbf{T}risection and objective function evaluations at two \textbf{D}iagonal \textbf{V}ertices (\textbf{1-DTDV})} as described in \Cref{sssec:diagonal-trisection}.
\item[4.] \textit{Hyper-rectangular partitioning based on \textbf{1}-\textbf{D}imensional \textbf{B}isection and objective function evaluations at two \textbf{D}iagonals \textbf{P}oints (\textbf{1-DBDP})} as described in \Cref{sssec:diagonal-bisection}.
\end{itemize}
Let us note that some constructed combinations are already used in existing \texttt{DIRECT}-type algorithms.
For example, the N-DTC-GL algorithm is identical to the recently proposed \texttt{DIRECT-GL}{} \cite{Stripinis2018a}.
Furthermore, the N-DTC-IO and 1-DBDP-IO algorithms are highly related to \texttt{DIRECT}~\cite{Jones1993} and \texttt{BIRECT}~\cite{Paulavicius2016:jogo} algorithms.
The only difference is that the \texttt{DIRECT}{} and \texttt{BIRECT}{} algorithms select all ``equivalent'' candidates (with the same diameter and objective function value), while the IO selection rule restricts the selection to a single candidate.
The \texttt{Aggressive DIRECT}~\cite{Baker2000} algorithm is close to the N-DTC-IA variation.
The only difference is the selection step using the limit parameter $ (\delta^{\rm limit}) $.
Moreover, discarding the local search subroutine from the revised hybrid version of the \texttt{DIRECT}{} algorithm~\cite{Jones2001} would lead to 1-DTC-IO variation.
Finally, the 1-DTDV-IO combination is highly related to the \texttt{ADC}{} algorithm~\cite{Sergeyev2006}, but the latter approach has distinct ``local'' and ``global'' phases in the selection procedure.
\section{Experimental investigation using test problems from \texttt{DIRECTGOLib v1.1}}
\label{sec:experiments}
Test problems from the \texttt{DIRECTGOLib v1.1}{} library~\cite{DIRECTGOLib2022v11} (listed in \Cref{apendixas} \Cref{tab:test}) are used to evaluate the developed algorithms.
In total, we examined the new algorithms on $96$ box-constrained global optimization test instances.
Note that different subsets (e.g., low dimensional problems $(n \le 4)$, non-convex problems, etc.) of the entire set were used to deepen the investigation.
All problems and algorithms are implemented in the Matlab R2022a environment and are included in the most recent version of \texttt{DIRECT}-type Matlab toolbox \texttt{DIRECTGO v1.1.0} \cite{DIRECTGOv1.1.0}.
All computations were performed on an 8th Generation Intel\textsuperscript{\textregistered} Core\textsuperscript{TM} i7-8750H @ 2.20\,GHz processor.
All $12$ algorithms were tested using a limit of M$_{\rm max} = 10^6$ function evaluations in each run.
For the $96$ analytical test cases with a priori known global optimum $ f^* $, the stopping criterion is based on the percent error:
\begin{equation}
\label{eq:pe}
pe = 100 \% \times
\begin{cases}
\frac{f({\mathbf{x}}) - f^*}{|f^*|}, & f^* \neq 0, \\
f({\mathbf{x}}), & f^* = 0,
\end{cases}
\end{equation}
where $ f^* $ is the known global optimum.
In all experimental studies presented in this section, the algorithms were stopped when the percent error became smaller than the prescribed value $\varepsilon_{\rm pe} = 10^{-2}$ or when the number of function evaluations exceeded the prescribed limit of $10^6$.
In other words, we stop the search when the algorithm has attained an objective function value very close to the known optimum value.
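The stopping rule translates directly into code; a minimal sketch following \eqref{eq:pe} (function names are ours):

```python
def percent_error(f_x, f_star):
    """Percent error pe: relative to |f*| when f* != 0, and based on
    the raw value f(x) otherwise, as in Eq. (pe)."""
    if f_star != 0:
        return 100.0 * (f_x - f_star) / abs(f_star)
    return 100.0 * f_x

def should_stop(f_x, f_star, n_evals, eps_pe=1e-2, max_evals=10**6):
    """Terminate when the percent error drops below eps_pe or the
    evaluation budget is exhausted."""
    return percent_error(f_x, f_star) < eps_pe or n_evals >= max_evals
```

With $\varepsilon_{\rm pe} = 10^{-2}$ this corresponds to accepting an incumbent whose value is within $0.01\%$ of the known optimum.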
Experimental results presented in this paper are also available in digital form in the \texttt{Results/JOGO} directory of the Github repository~\cite{DIRECTGOv1.1.0}.
The \texttt{Scripts/JOGO} directory of the same Github repository~\cite{DIRECTGOv1.1.0} provides the \texttt{MATLAB} script for cycling through all different classes of \texttt{DIRECTGOLib v1.1}{} test problems used in this paper.
The constructed script can be handy for reproducing the results presented here and comparing and evaluating newly developed algorithms.
\subsection{Investigation of different partitioning strategies}
\label{ssec:partitioning-impact}
First, we compare the performance of new \texttt{DIRECT}-type algorithms by stressing the used partitioning strategy.
\Cref{tab:aggresive} summarizes comparative results using all twelve \texttt{DIRECT}-type variations on the whole set of $96$ \texttt{DIRECTGOLib v1.1}{} test problems.
In \Cref{tab:aggresive}, each column corresponds to a different partitioning method.
Since each partitioning method was run on the $96$ problems using $3$ different selection methods (rows of \Cref{tab:aggresive}), it follows that each partitioning method was involved in solving $3 \times 96 = 288$ problems.
The best results are marked in bold.
In total, the three 1-DBDP partitioning-strategy-based algorithms failed to solve $28$ of the $288$ test cases.
In comparison, the \texttt{DIRECT}-type approaches based on the second- and third-best partitioning techniques (N-DTC and 1-DTC) did not solve $29/288$ and $36/288$ cases, respectively.
Not surprisingly, the higher number of solved test problems leads to a better overall average performance of the 1-DBDP partitioning-strategy-based \texttt{DIRECT}-type approaches.
In total, the 1-DBDP partitioning-technique-based \texttt{DIRECT}-type methods required approximately $7 \%$ and $18 \%$ fewer function evaluations than the \texttt{DIRECT}-type algorithms based on the second- and third-best partitioning schemes (N-DTC and 1-DTC), respectively.
However, the situation is different when comparing algorithms based on the median number of function evaluations.
The diagonal trisection strategy (1-DTDV) based \texttt{DIRECT}-type algorithms are more efficient than competitors.
The median value over all $288$ test problems solved with the 1-DTDV partitioning scheme is approximately $23 \%$ and $47 \%$ better than that of the second- and third-best approaches (1-DTC and 1-DBDP), respectively.
Therefore, the 1-DTDV partitioning-strategy-based \texttt{DIRECT}-type algorithms solve at least half of these test problems with the best performance.
Furthermore, most of the time, the 1-DTDV partitioning strategy-based algorithms delivered the best average results in solving low-dimensional test problems ($n \leq 4$).
In total, on $153$ low-dimensional ($n \leq 4$) test cases, the 1-DTDV partitioning-strategy-based algorithms required approximately $22 \%$ fewer function evaluations than the variants based on the second-best partitioning strategy (1-DTC).
To sum up, the 1-DTDV partitioning strategy performs best when combined with the two-step-based selection scheme (GL).
The 1-DTDV-GL algorithm in general significantly outperformed the other two variations based on IA and IO selection schemes.
The 1-DBDP partitioning-strategy-based variation combined with two of the selection schemes (GL and IA) delivered the best average results on higher-dimensional ($n > 4$) test problems.
In total, the 1-DBDP partitioning-strategy-based algorithms required approximately $16 \%$ fewer function evaluations than the variants based on the second-best partitioning strategy (N-DTC).
\begin{table}
\caption{The number of function evaluations of twelve \texttt{DIRECT}-type variants on \texttt{DIRECTGOLib v1.1}{} test problems}
\resizebox{\textwidth}{!}{
\begin{tabular}[tb]{@{\extracolsep{\fill}}lcrrrrrrrr}
\toprule
Criteria / Algorithms & $\#$ of cases & \multicolumn{2}{c}{\scriptsize N-DTC-IA} & \multicolumn{2}{c}{\scriptsize 1-DTC-IA} & \multicolumn{2}{c}{\scriptsize 1-DBDP-IA} & \multicolumn{2}{c}{\scriptsize 1-DTDV-IA} \\
\midrule
$\#$ of failed problems & $96$ && $13$ && $13$ && $\mathbf{11}$ && $18$ \\
Average results & $96$ && $172,805$ && $160,691$ && $\mathbf{146,887}$ && $202,694$ \\
Average ($n \leq 4$) & $51$ && $25,968$ && $23,638$ && $45,643$ && $\mathbf{9,785}$ \\
Average ($n > 4$) & $45$ && $339,791$ && $316,541$ && $\mathbf{262,640}$ && $421,539$ \\
Average (convex) & $30$ && $149,711$ && $126,030$ && $\mathbf{109,374}$ && $153,594$ \\
Average (non-convex) & $66$ && $183,302$ && $176,446$ && $\mathbf{163,939}$ && $225,012$ \\
Average (uni-modal) & $15$ && $108,068$ && $78,226$ && $\mathbf{73,957}$ && $111,805$ \\
Average (multi-modal) & $81$ && $187,744$ && $179,722$ && $\mathbf{163,717}$ && $223,668$ \\
Median results & $96$ && $7,608$ && $\mathbf{1,287}$ && $2,108$ && $1,586$ \\
\midrule
Criteria / Algorithms & $\#$ of cases & \multicolumn{2}{c}{\scriptsize N-DTC-IO} & \multicolumn{2}{c}{\scriptsize 1-DTC-IO} & \multicolumn{2}{c}{\scriptsize 1-DBDP-IO} & \multicolumn{2}{c}{\scriptsize 1-DTDV-IO} \\
\midrule
$\#$ of failed problems & $96$ && $\mathbf{12}$ && $18$ && $\mathbf{12}$ && $21$ \\
Average results & $96$ && $\mathbf{142,277}$ && $211,463$ && $146,133$ && $227,455$ \\
Average ($n \leq 4$) & $51$ && $43,832$ && $42,633$ && $\mathbf{41,602}$ && $41,990$ \\
Average ($n > 4$) & $45$ && $\mathbf{254,819}$ && $403,749$ && $265,522$ && $438,574$ \\
Average (convex) & $30$ && $111,817$ && $170,675$ && $\mathbf{80,490}$ && $171,868$ \\
Average (non-convex) & $66$ && $\mathbf{156,122}$ && $230,004$ && $175,971$ && $252,722$ \\
Average (uni-modal) & $15$ && $60,100$ && $\mathbf{57,360}$ && $62,016$ && $111,547$ \\
Average (multi-modal) & $81$ && $\mathbf{161,240}$ && $247,026$ && $165,545$ && $254,203$ \\
Median results & $96$ && $\mathbf{771}$ && $1,198$ && $953$ && $847$ \\
\midrule
Criteria / Algorithms & $\#$ of cases & \multicolumn{2}{c}{\scriptsize N-DTC-GL} & \multicolumn{2}{c}{\scriptsize 1-DTC-GL} & \multicolumn{2}{c}{\scriptsize 1-DBDP-GL} & \multicolumn{2}{c}{\scriptsize 1-DTDV-GL} \\
\midrule
$\#$ of failed problems & $96$ && $\mathbf{4}$ && $5$ && $5$ && $5$ \\
Average results & $96$ && $71,488$ && $\mathbf{62,475}$ && $65,442$ && $71,319$ \\
Average ($n \leq 4$) & $51$ && $9,675$ && $7,073$ && $41,300$ && $\mathbf{5,772}$ \\
Average ($n > 4$) & $45$ && $141,753$ && $125,417$ && $\mathbf{93,714}$ && $145,733$ \\
Average (convex) & $30$ && $55,320$ && $45,520$ && $42,326$ && $\mathbf{8,950}$ \\
Average (non-convex) & $66$ && $78,837$ && $\mathbf{70,182}$ && $75,949$ && $99,669$ \\
Average (uni-modal) & $15$ && $28,478$ && $\mathbf{12,624}$ && $23,300$ && $25,796$ \\
Average (multi-modal) & $81$ && $81,183$ && $\mathbf{73,979}$ && $75,398$ && $81,825$ \\
Median results & $96$ && $1,848$ && $960$ && $2,042$ && $\mathbf{775}$ \\
\bottomrule
\end{tabular}}
\label{tab:aggresive}
\end{table}
\subsubsection{Investigating the impact of solutions lying on the boundary}
\label{ssec:boundaries}
In \Cref{sssec:diagonal-trisection}, we stressed that the diagonal trisection strategy (1-DTDV) is especially appealing on problems where the solution lies on the boundary of the feasible region.
To investigate this, we carried out the additional experimental study presented here.
Note that the selection strategy was fixed (IO), and only the influence of the four partitioning strategies on performance was investigated.
Out of the $46$ unique box-constrained test problems (excluding dimensionality variations) from \texttt{DIRECTGOLib v1.1}, only the \textit{Deb02} problem has a solution on the boundary.
More precisely, the solution lies on the feasible region's vertex, making this situation particularly favorable to the 1-DTDV strategy.
Experimental results using four \texttt{DIRECT}-type variations on the \textit{Deb02} test problem (with varying dimensionality $n$) are given in the upper part of \Cref{tab:boundaries}.
Since the solution is at the vertex, independently of the dimension, the 1-DTDV-IO algorithm found the solution in the initialization step and took only two objective function evaluations.
The other three \texttt{DIRECT}-type algorithms required significantly more function evaluations until the stopping condition was satisfied.
In order to carry out a more detailed investigation, test problems with solution coordinates lying on the boundary were artificially constructed.
For this purpose, ten variations of $10$-dimensional \textit{Levy} and five variations of $5$-dimensional \textit{D. Price} test problems with perturbed feasible regions were created.
Let us first consider the case of the \textit{Levy} function.
The original feasible region is $D=[-5, 5]^{10}$ and the solution point $\mathbf{x}^* = (1,1,\dots,1)$.
Thus, none of the solution coordinates lies on the boundary of the feasible region.
On the original \textit{Levy} problem, the 1-DTDV-IO algorithm performed significantly worse than any of the other three tested \texttt{DIRECT}-type counterparts (see the first row in the middle part of \Cref{tab:boundaries}).
However, the situation changes completely when \textit{Levy} variations with the perturbed feasible region are considered.
We artificially reconstructed the original domain so that an increasing number of solution coordinates are located on the boundary---the column ``Feasible region'' in \Cref{tab:boundaries} specifies the modified feasible region.
For example, for the first \textit{Levy} variation (\textit{Levy}1), the modified feasible region coincides with the original one (i.e., $D^m_1 = D$) apart from the coordinate $x_1$, whose new domain is $x_1 \in [-5, 1]$ (original was $x_1 \in [-5, 5]$).
Therefore, for the \textit{Levy}1 problem, one of its solution coordinates $(x_1)$ is on the boundary of the modified feasible region.
Each of the other nine \textit{Levy} variations is obtained from the previous one by additionally modifying the bounds of the next coordinate, as shown in \Cref{tab:boundaries}.
The $\nu$ value (the third column in \Cref{tab:boundaries}) indicates the number of solution coordinates projected onto the boundary.
From the results presented in \Cref{tab:boundaries}, we observe that the more solution coordinates are located on the boundary, the fewer objective function evaluations the 1-DTDV-IO algorithm needs to find the solution.
When $\nu \geq 8$ (i.e., at least eight out of 10 coordinates lie on the boundary), the 1-DTDV-IO algorithm outperformed the other approaches.
However, the situation is the opposite for the other three partitioning-scheme-based \texttt{DIRECT}-type algorithms.
In almost all cases, they required more function evaluations as $\nu$ increased.
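The construction of the modified feasible regions $D^{\rm m}_\nu$ can be sketched as follows; this is an illustrative simplification that alternates which bound is pulled to the solution coordinate (as in \Cref{tab:boundaries}), but does not reproduce the widened opposite bounds used in some of the \textit{Levy} variations:

```python
def project_bounds(lower, upper, x_star, nu):
    """Return modified bounds with the first `nu` solution coordinates
    placed on the boundary. For even-indexed coordinates the upper bound
    is pulled down to x*, for odd-indexed ones the lower bound is pulled up,
    mirroring the alternating pattern of the Levy variations."""
    lo, up = list(lower), list(upper)
    for k in range(nu):
        if k % 2 == 0:
            up[k] = x_star[k]
        else:
            lo[k] = x_star[k]
    return lo, up
```

For example, with $D = [-5, 5]^{10}$ and $\mathbf{x}^* = (1,\dots,1)$, setting $\nu = 3$ places the first three solution coordinates on the boundary while leaving the remaining bounds unchanged.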
In the last investigation, five variations of \textit{D. Price} test function (original $D = [-10, 10]^5$ and the solution point lying close to the center-point of the domain) were considered.
For \textit{D. Price}, we perturbed the feasible region by the same strategy as for the \textit{Levy} test problem.
As before, when most of the solution coordinates are located on the boundary of the domain ($\nu \geq 4$), the 1-DTDV-IO algorithm is the most efficient.
However, unlike for the \textit{Levy} function, the presence of solution coordinates on the boundary did not worsen the performance of the other \texttt{DIRECT}-type algorithms but rather improved it.
These results show that the performance of center-based partitioning techniques will not necessarily worsen when at least part of the solution coordinates are on the boundary.
\begin{table}[ht]
\caption{The number of function evaluations required for four different \texttt{DIRECT}-type algorithms to find an optimal solution lying on the boundary}
\resizebox{\textwidth}{!}{
\begin{tabular}[tb]{@{\extracolsep{\fill}}lcclrrrr}
\toprule
Label & $n$ & $\nu$ & \multicolumn{1}{l}{Feasible region} & N-DTC-IO & 1-DTC-IO & 1-DBDP-IO & 1-DTDV-IO \\
\midrule
\textit{Deb02} & $2$ & $2$ & $D =[0, 1]^n$ & $77$ & $75$ & $100$ & $2$ \\
\textit{Deb02} & $4$ & $4$ & $D =[0, 1]^n$ & $199$ & $145$ & $220$ & $2$ \\
\textit{Deb02} & $8$ & $8$ & $D =[0, 1]^n$ & $653$ & $329$ & $494$ & $2$ \\
\textit{Deb02} & $16$ & $16$ & $D =[0, 1]^n$ & $3,827$ & $1,279$ & $2,368$ & $2$ \\
\midrule
\textit{Levy} & $10$ & $0$ & $D = [-5, 5]^n$ & $2,589$ & $919$ & $1,496$ & $141,999$ \\
\textit{Levy}$1$ & $10$ & $1$ & $D^{\rm m}_1 = D$ and $x_1 \in [-5, 1]$ & $2,847$ & $973$ & $1,326$ & $24,439$ \\
\textit{Levy}$2$ & $10$ & $2$ & $D^{\rm m}_2 = D^{\rm m}_1$ and $x_2 \in [1, 5]$ & $3,221$ & $1,033$ & $1,386$ & $16,291$ \\
\textit{Levy}$3$ & $10$ & $3$ & $D^{\rm m}_3 = D^{\rm m}_2$ and $x_3 \in [-10, 1]$ & $3,447$ & $1,079$ & $2,002$ & $13,625$ \\
\textit{Levy}$4$ & $10$ & $4$ & $D^{\rm m}_4 = D^{\rm m}_3$ and $x_4 \in [1, 10]$ & $3,919$ & $1,119$ & $2,048$ & $14,166$ \\
\textit{Levy}$5$ & $10$ & $5$ & $D^{\rm m}_5 = D^{\rm m}_4$ and $x_5 \in [-2, 1]$ & $4,091$ & $1,195$ & $2,116$ & $10,927$ \\
\textit{Levy}$6$ & $10$ & $6$ & $D^{\rm m}_6 = D^{\rm m}_5$ and $x_6 \in [1, 4]$ & $4,483$ & $1,287$ & $2,316$ & $5,416$ \\
\textit{Levy}$7$ & $10$ & $7$ & $D^{\rm m}_7 = D^{\rm m}_6$ and $x_7 \in [-7, 1]$ & $5,215$ & $2,193$ & $2,484$ & $3,069$ \\
\textit{Levy}$8$ & $10$ & $8$ & $D^{\rm m}_8 = D^{\rm m}_7$ and $x_8 \in [1, 15]$ & $5,487$ & $2,579$ & $3,174$ & $2,494$ \\
\textit{Levy}$9$ & $10$ & $9$ & $D^{\rm m}_9 = D^{\rm m}_8$ and $x_9 \in [-13, 1]$ & $6,299$ & $6,581$ & $3,518$ & $547$ \\
\textit{Levy}$10$ & $10$ & $10$ & $D^{\rm m}_{10} = D^{\rm m}_{9}$ and $x_{10} \in [1,10]$ & $6,487$ & $2,487$ & $3,572$ & $551$ \\
\midrule
\textit{D. Price} & $5$ & $0$ & $D = [-10, 10]^n$ & $22,465$ & $20,791$ & $4,060$ & $134,011$ \\
\textit{D. Price}$1$ & $5$ & $1$ & $D^{\rm m}_1 = D$ and $x_1 \in [-19, 1]$ & $18,245$ & $18,707$ & $2,930$ & $16,089$ \\
\textit{D. Price}$2$ & $5$ & $2$ & $D^{\rm m}_2 = D^{\rm m}_1$ and $x_2 \in [0.7071, 21]$ & $4,975$ & $1,455$ & $1,322$ & $3,434$ \\
\textit{D. Price}$3$ & $5$ & $3$ & $D^{\rm m}_3 = D^{\rm m}_2$ and $x_3 \in [-19, 0.5946]$ & $7,709$ & $3,845$ & $1,610$ & $2,759$ \\
\textit{D. Price}$4$ & $5$ & $4$ & $D^{\rm m}_4 = D^{\rm m}_3$ and $x_4 \in [0.5452, 21]$ & $3,989$ & $1,315$ & $2,280$ & $728$ \\
\textit{D. Price}$5$ & $5$ & $5$ & $D^{\rm m}_5 = D^{\rm m}_4$ and $x_5 \in [-19, 0.5221]$ & $3,247$ & $1,443$ & $2,396$ & $565$ \\
\bottomrule
\multicolumn{8}{l}{$\nu$ -- the number of solution coordinates lying on the boundary} \\
\end{tabular}}
\label{tab:boundaries}
\end{table}
\subsubsection{Investigating the impact of the domain perturbation}
\label{ssec:perturbation}
By investigating the impact of different partitioning strategies, we observed several situations where one method (or partitioning scheme) dominates the others significantly.
However, sometimes one method may be lucky because the partitioning approach in the initial steps naturally samples near the solution.
In such situations, the location of the solution may favor one partitioning scheme over another.
In this section, we explore whether such dominance is robust to slight perturbations of the domain.
Initially, we identified test problems for which a particular partitioning scheme (regardless of the selection strategy) had a clear dominance, possibly due to the conveniently defined variable bounds.
Out of the $46$ unique box-constrained \texttt{DIRECTGOLib v1.1}{} test problems, the dominance of a particular partitioning scheme was identified for eight of them.
We made domain perturbations for all eight problems keeping the same optimal solution.
First, if any partitioning-scheme-based \texttt{DIRECT}-type algorithm could find the solution in the initialization step, the original domain $(D)$ was shifted by $22.5 \%$ to the right.
In some cases, we made further perturbations so that none of the partitioning schemes would sample close to the solution in the initial iterations (the column ``Feasible region'' in \Cref{tab:perturb} specifies the original and perturbed domains).
The obtained experimental results revealing the impact of the domain perturbation of twelve \texttt{DIRECT}-type variants on eight selected original and perturbed test problems with varying dimensionality are given in \Cref{tab:perturb}.
None of the partitioning schemes proved to be robust.
Perturbation of the bounds may significantly reduce the dominance of a particular partitioning scheme.
Quite often, the previously dominant approach may not solve the perturbed problem within the given budget of function evaluations at all.
For example, the 1-DTC partitioning scheme-based \texttt{DIRECT}-type algorithms undoubtedly dominate original \textit{Rastrigin} and \textit{Griewank} test problems.
However, for the perturbed variations of \textit{Rastrigin} and \textit{Griewank}, the same 1-DTC partitioning-scheme-based \texttt{DIRECT}-type algorithms were unable to find a solution in most of these cases.
Another obvious example is the \textit{Rosenbrock} test problem case.
The 1-DTDV partitioning scheme-based \texttt{DIRECT}-type algorithms are most efficient when the problem is considered in the original domain $D$.
However, after the perturbation of $D$, the 1-DTDV partitioning scheme proved to be very inefficient.
Also, in some cases, the bounds' perturbation helped other algorithms perform significantly better than on the original domain.
Examples of such problems are \textit{Styblinski-Tang}, \textit{Easom}, and \textit{Power Sum}.
Furthermore, only for the \textit{Schwefel} test problem, the same two partitioning schemes (1-DBDP and 1-DTDV) based \texttt{DIRECT}-type algorithms remained the most efficient in the original and perturbed domains.
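The right-shift perturbation described above can be sketched as follows; the function name and the per-interval formulation are illustrative assumptions (some perturbed domains in \Cref{tab:perturb} use coordinate-dependent bounds instead of a uniform shift):

```python
def shift_right(lower, upper, fraction=0.225):
    """Shift each variable's interval to the right by `fraction` of its
    width, so that initial sampling points of center- and diagonal-based
    partitioning schemes no longer coincide with the solution."""
    shifted = []
    for lo, up in zip(lower, upper):
        delta = fraction * (up - lo)
        shifted.append((lo + delta, up + delta))
    return shifted
```

The optimal solution itself is unchanged by such a shift as long as it remains inside the perturbed domain.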
\begin{sidewaystable}
\caption{Experimental results of twelve \texttt{DIRECT}-type variants on 8 selected original and perturbed test problems with varying dimensionality}
\resizebox{\textwidth}{!}{
\begin{tabular}[tb]{@{\extracolsep{\fill}}lccrrrrrrrrrrrr}
\toprule
\multirow{1}{*}{Label} & \multirow{1}{*}{$n$} & \multirow{1}{*}{Feasible region} & N-DTC-IA & 1-DTC-IA & 1-DBDP-IA & 1-DTDV-IA & N-DTC-IO & 1-DTC-IO & 1-DBDP-IO & 1-DTDV-IO & N-DTC-GL & 1-DTC-GL & 1-DBDP-GL & 1-DTDV-GL \\
\midrule
\textit{Alpine} & $5$ & $[0, 10]^n$ & $61,485$ & $10,343$ & $\mathbf{714}$ & {\color{red} $>10^6$} & $2,033$ & $1,231$ & $\mathbf{168}$ & $460,445$ & $1,287$ & $685$ & $\mathbf{320}$ & $27,074$ \\
& $10$ & $[0, 10]^n$ & {\color{red} $>10^6$} & {\color{red} $>10^6$} & $\mathbf{173,912}$ & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & $61,209$ & $\mathbf{4,063}$ & $7,646$ & $173,948$ \\
\textit{Perturbed Alpine} & $5$ & $[\sqrt[i]{2}, 8+\sqrt[i]{2}]^n$ & $61,485$ & $10,343$ & $\mathbf{2,678}$ & $10,941$ & $2,209$ & $\mathbf{1,403}$ & $5,790$ & $3,920$ & $1,479$ & $\mathbf{853}$ & $3,462$ & $19,417$ \\
& $10$ & $[\sqrt[i]{2}, 8+\sqrt[i]{2}]^n$ & {\color{red} $>10^6$} & {\color{red} $>10^6$} & $\mathbf{218,170}$ & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & $31,967$ & $\mathbf{7,245}$ & $15,500$ & {\color{red} $>10^6$} \\
\midrule
\textit{Griewank} & $5$ & $[-330, 870]^n$ & $719,985$ & $69,979$ & {\color{red} $>10^6$} & $\mathbf{47,581}$ & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & $524,765$ & $457,207$ & $\mathbf{14,706}$ & {\color{red} $>10^6$} \\
& $10$ & $[-330, 870]^n$ & {\color{red} $>10^6$} & $\mathbf{8,799}$ & $20,534$ & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & $\mathbf{14,593}$ & $204,940$ & {\color{red} $>10^6$} \\
\textit{Perturbed Griewank} & $5$ & $\left[-\sqrt{600i}, \dfrac{600}{\sqrt{i}}\right]^n$ & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & $187,811$ & $\mathbf{102,961}$ & $175,806$ & $353,028$ \\
& $10$ & $\left[-\sqrt{600i}, \dfrac{600}{\sqrt{i}} \right]^n$ & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} \\
\midrule
\textit{Styblinski-Tang} & $5$ & $[-5, 5]^n$ & $4,941$ & $627$ & $\mathbf{164}$ & $243,882$ & $539$ & {\color{red} $>10^6$} & $\mathbf{78}$ & $7,077$ & $1,779$ & $\mathbf{865}$ & $192$ & $682$ \\
& $10$ & $[-5, 5]^n$ & $68,025$ & $2,631$ & $\mathbf{714}$ & {\color{red} $>10^6$} & $9,785$ & {\color{red} $>10^6$} & $\mathbf{180}$ & {\color{red} $>10^6$} & $11,347$ & $3,237$ & $\mathbf{784}$ & $5,248$ \\
\textit{Perturbed Styblinski-Tang} & $5$ & $[-5, 5 + \sqrt[i]{3}]^n$ & $3,919$ & $\mathbf{533}$ & $1,056$ & $66,810$ & $395$ & $\mathbf{273}$ & $278$ & $4,098$ & $1,659$ & $\mathbf{841}$ & $1,654$ & $36,655$ \\
& $10$ & $[-5, 5 + \sqrt[i]{3}]^n$ & $68,025$ & $\mathbf{2,151}$ & $6,736$ & {\color{red} $>10^6$} & $2,917$ & $\mathbf{829}$ & $1,368$ & {\color{red} $>10^6$} & $13,431$ & $\mathbf{3,447}$ & $11,386$ & {\color{red} $>10^6$} \\
\midrule
\textit{Easom} & $2$ & $[-100, 100]^n$ & $433,031$ & $429,743$ & $\mathbf{444}$ & $20,316$ & $7,581$ & $6,619$ & $\mathbf{322}$ & $6,651$ & $451$ & $\mathbf{321}$ & $544$ & $348$ \\
\textit{Perturbed Easom} & $2$ & $\left[ \dfrac{-100}{i+1}, 100i \right]^n$ & $214,331$ & $429,743$ & $\mathbf{71,392}$ & $177,258$ & $\mathbf{3,689}$ & $6,659$ & $18,462$ & $10,078$ & $475$ & $393$ & $924$ & $\mathbf{376}$ \\
\midrule
\textit{Power Sum} & $4$ & $[0.9, 4.9]^n$ & {\color{red} $>10^6$} & {\color{red} $>10^6$} & $\mathbf{13,828}$ & {\color{red} $>10^6$} & $321,595$ & $144,385$ & $\mathbf{4,790}$ & {\color{red} $>10^6$} & $69,327$ & $77,353$ & $\mathbf{14,214}$ & $37,012$ \\
\textit{Perturbed Power Sum} & $4$ & $[1, 5 + \sqrt[i]{2}]^n$ & $502,981$ & $176,843$ & $78,746$ & $\mathbf{59,390}$ & $67,959$ & $25,453$ & $17,930$ & $\mathbf{12,219}$ & $152,083$ & $70,745$ & $\mathbf{12,494}$ & $40,753$ \\
\midrule
\textit{Rastrigin} & $5$ & $[-2.75, 7.25]^n$ & $8,703$ & $\mathbf{1,487}$ & $314,712$ & {\color{red} $>10^6$} & $597$ & $\mathbf{453}$ & $38,714$ & $112,597$ & $2,721$ & $\mathbf{1,895}$ & $19,642$ & $7,345$ \\
& $10$ & $[-2.75, 7.25]^n$ & $143,755$ & $\mathbf{7,215}$ & {\color{red} $>10^6$} & {\color{red} $>10^6$} & $4,299$ & $\mathbf{1,551}$ & {\color{red} $>10^6$} & {\color{red} $>10^6$} & $22,971$ & $\mathbf{8,105}$ & {\color{red} $>10^6$} & $140,756$ \\
\textit{Perturbed Rastrigin} & $5$ & $[-5\sqrt[i]{2}, 7+\sqrt[i]{2}]^n$ & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & $\mathbf{567,269}$ & $694,812$ & {\color{red} $>10^6$} & $73,727$ & $24,119$ & $\mathbf{16,440}$ & $90,134$ \\
& $10$ & $[-5\sqrt[i]{2}, 7+\sqrt[i]{2}]^n$ & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & $\mathbf{661,971}$ & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} \\
\midrule
\textit{Rosenbrock} & $5$ & $[-5, 10]^n$ & $73,485$ & $26,325$ & $3,208$ & $\mathbf{1,471}$ & $15,577$ & $1,889$ & $1,494$ & $\mathbf{916}$ & $26,891$ & $15,695$ & $5,110$ & $\mathbf{1,568}$ \\
& $10$ & $[-5, 10]^n$ & $297,755$ & {\color{red} $>10^6$} & $13,366$ & $\mathbf{4,541}$ & $71,021$ & {\color{red} $>10^6$} & $4,590$ & $\mathbf{2,091}$ & $104,643$ & $171,019$ & $22,194$ & $\mathbf{5,759}$ \\
\textit{Perturbed Rosenbrock} & $5$ & $\left[-\dfrac{5}{\sqrt{i}}, 10\sqrt{i}\right]^n$ & $434,985$ & $385,979$ & $\mathbf{291,612}$ & {\color{red} $>10^6$} & $55,693$ & $\mathbf{21,363}$ & $101,508$ & {\color{red} $>10^6$} & $27,763$ & $\mathbf{7,795}$ & $33,056$ & $16,971$ \\
& $10$ & $\left[-\dfrac{5}{\sqrt{i}}, 10\sqrt{i}\right]^n$ & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & $383,081$ & $\mathbf{185,325}$ & $316,392$ & $432,903$ \\
\midrule
\textit{Schwefel} & $5$ & $[-500, 500]^n$ & {\color{red} $>10^6$} & $368,479 $ & $\mathbf{7,566}$ & {\color{red} $>10^6$} & $74,989 $ & $16,767$ & $\mathbf{1,070}$ & $9,561$ & $768,549 $ & $49,247$ & $\mathbf{4,842}$ & $109,746$ \\
& $10$ & $[-500, 500]^n$ & {\color{red} $>10^6$} & {\color{red} $>10^6$} & $\mathbf{817,512}$ & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & $\mathbf{57,736}$ & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & $\mathbf{33,522}$ & {\color{red} $>10^6$} \\
\textit{Perturbed Schwefel} & $5$ & $\left[-500 + \dfrac{100}{\sqrt{i}}, 500 - \dfrac{40}{\sqrt{i}} \right]^n$ & {\color{red} $>10^6$} & $458,979$ & $19,972$ & $\mathbf{2,135}$ & $80,295$ & $35,091$ & $84,096$ & $\mathbf{33,622}$ & $336,581$ & $65,329$ & $9,548$ & $\mathbf{1,580}$ \\
& $10$ & $\left[-500 + \dfrac{100}{\sqrt{i}}, 500 - \dfrac{40}{\sqrt{i}} \right]^n$ & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & {\color{red} $>10^6$} & $99,824$ & $336,983$ \\
\bottomrule
\multicolumn{7}{l}{$i = 1,...,n$ -- indices used for variable bounds}
\end{tabular}}
\label{tab:perturb}
\end{sidewaystable}
\subsection{Investigation of different selection schemes}
\label{ssec:selection-impact}
Here, the efficiency of new \texttt{DIRECT}-type algorithms is investigated based on the selection scheme.
In \Cref{tab:aggresive}, the three row parts correspond to the different selection approaches.
Since each selection strategy was run on the $96$ problems using $4$ different partitioning methods (columns of \Cref{tab:aggresive}), it follows that each selection approach was involved in solving $4 \times 96 = 384$ test problems.
We note that algorithms incorporating a two-step-based Pareto selection scheme (GL) combined with any partitioning strategy, on average, deliver the best results.
All algorithmic variants based on the GL selection scheme failed to solve ($19/384$) cases.
In contrast, the algorithms based on the IO and IA selection schemes failed to solve ($63/384$) and ($55/384$) cases, respectively.
This leads to a much better average performance of the \texttt{DIRECT}-type algorithms based on the GL selection scheme.
In total, the GL selection-scheme-based algorithms required approximately $60\%$ and $62\%$ fewer function evaluations compared with the IO and IA counterparts.
The GL selection scheme's most significant advantage can be seen in solving higher-dimensional $(n > 4)$ test problems.
In total, the GL selection-scheme-based algorithms solving ($n > 4$) test instances required approximately $72\%$ fewer function evaluations compared with the IO or IA selection-scheme-based counterparts.
The IO selection scheme seems the most suitable for simpler optimization problems (low dimensional, uni-modal, and problems with a few minima).
Of the $63$ problems that failed under the IO selection scheme, $43$ were extremely hard, i.e., multi-modal, sharply peaked, and multi-variable (e.g., $n \ge 10$).
The GL selection-strategy-based variants usually select more regions to subdivide.
Therefore, they are less efficient on these simpler optimization problems.
However, the GL scheme ensured that, on average, all \texttt{DIRECT}-type variants converged in significantly fewer function evaluations on complex multi-modal test problems.
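The operational characteristics in \Cref{fig:perf-l1} plot, for each function-evaluation budget, the proportion of the test problems solved within that budget. A minimal sketch of this computation (the function name is illustrative; unsolved problems are marked with \texttt{None}):

```python
def operational_characteristic(evals, budgets):
    """For each budget b, return the fraction of problems whose recorded
    number of function evaluations is at most b. Entries of `evals` that
    are None denote problems not solved within the maximum budget."""
    n = len(evals)
    return [sum(1 for e in evals if e is not None and e <= b) / n
            for b in budgets]
```

A curve lying above another over the whole budget range indicates a uniformly more efficient algorithm on the given problem set.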
\begin{figure}
\resizebox{\textwidth}{!}{
\begin{tikzpicture}
\begin{axis}[
legend pos=north west,
title = {Operational characteristics},
xlabel = {Function evaluations},
xmode=log,
ymin=-0.02,ymax=1.02,
ytick distance=0.1,
xmode=log,
xmin=10,
xmax=1000000,
xtick distance=10,
ylabel = {Proportion of solved problems},
ylabel style={yshift=-0.5em},
legend style={font=\tiny,xshift=-0.5em},
legend cell align={left},
legend columns=1,
height=0.75\textwidth,width=\textwidth,
every axis plot/.append style={very thick},
]
\addplot[mark=*,black,mark options={scale=1.5, fill=princetonorange}, only marks,line width=0.75pt] coordinates {(0.1,0.1)} ;
\label{p1}
\addplot[mark=square*,black,mark options={scale=1.5, fill=yellow}, only marks,line width=0.75pt] coordinates {(0.1,0.1)} ;
\label{p2}
\addplot[mark=diamond*,black,mark options={scale=1.5, fill=sienna}, only marks,line width=0.75pt] coordinates {(0.1,0.1)} ;
\label{p3}
\addplot[mark=triangle*,black,mark options={scale=1.5, fill=psychedelicpurple}, only marks,line width=0.75pt] coordinates {(0.1,0.1)} ;
\label{p4}
\node [draw,fill=white] at (rel axis cs: 0.85,0.14) {\shortstack[l]{
{\scriptsize \textbf{Partitioning schemes}} \\
\ref{p1} {\scriptsize N-DTC} \\
\ref{p2} {\scriptsize 1-DTC} \\
\ref{p3} {\scriptsize 1-DBDP} \\
\ref{p4} {\scriptsize 1-DTDV}}};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.06 with {\node[circle,draw=black,fill=princetonorange,inner sep=1.5pt,solid] {};}},decorate,},sandstorm,line width=0.75pt,densely dashed] table[x=T,y=DDA] {data/Overallass.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.07 with {\node[mark=square,draw=black,fill=yellow,inner sep=1.5pt,solid] {};}},decorate,},sandstorm,line width=0.75pt,densely dashed] table[x=T,y=DRA] {data/Overallass.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.08 with {\node[diamond,draw=black,fill=sienna,inner sep=1.5pt,solid] {};}},decorate,},sandstorm,line width=0.75pt,densely dashed] table[x=T,y=BIA] {data/Overallass.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.09 with {\node[regular polygon,regular polygon sides=3,draw=black,fill=psychedelicpurple,inner sep=1pt,solid] {};}},decorate,},sandstorm,line width=0.75pt,densely dashed] table[x=T,y=ADA] {data/Overallass.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.1 with {\node[circle,draw=black,fill=princetonorange,inner sep=1.5pt] {};}},decorate,},blue,line width=0.75pt] table[x=T,y=DDO] {data/Overallass.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.11 with {\node[mark=square,draw=black,fill=yellow,inner sep=1.5pt] {};}},decorate,},blue,line width=0.75pt] table[x=T,y=DRO] {data/Overallass.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.12 with {\node[diamond,draw=black,fill=sienna,inner sep=1.5pt] {};}},decorate,},blue,line width=0.75pt] table[x=T,y=BIO] {data/Overallass.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.13 with {\node[regular polygon,regular polygon sides=3,draw=black,fill=psychedelicpurple,inner sep=1pt] {};}},decorate,},blue,line width=0.75pt] table[x=T,y=ADO] {data/Overallass.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.14 with {\node[circle,draw=black,fill=princetonorange,inner sep=1.5pt,solid] {};}},decorate,},onyx,line width=0.75pt,densely dotted] table[x=T,y=DDG] {data/Overallass.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.15 with {\node[mark=square,draw=black,fill=yellow,inner sep=1.5pt,solid] {};}},decorate,},onyx,line width=0.75pt,densely dotted] table[x=T,y=DRG] {data/Overallass.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.16 with {\node[diamond,draw=black,fill=sienna,inner sep=1.5pt,solid] {};}},decorate,},onyx,line width=0.75pt,densely dotted] table[x=T,y=BIG] {data/Overallass.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.17 with {\node[regular polygon,regular polygon sides=3,draw=black,fill=psychedelicpurple,inner sep=1pt,solid] {};}},decorate,},onyx,line width=0.75pt,densely dotted] table[x=T,y=ADG] {data/Overallass.txt};
\addplot[blue,line width=0.75pt] coordinates {(0.1,0.1)} ;
\label{p5}
\addplot[sandstorm,line width=0.75pt,densely dashed] coordinates {(0.1,0.1)} ;
\label{p6}
\addplot[onyx,line width=0.75pt,densely dotted] coordinates {(0.1,0.1)} ;
\label{p7}
\node [draw,fill=white] at (rel axis cs: 0.2,0.88) {\shortstack[l]{
{\scriptsize \textbf{POH selection schemes}} \\
\ref{p5} {\scriptsize Improved Original (IO)} \\
\ref{p6} {\scriptsize Improved Aggressive (IA)} \\
\ref{p7} {\scriptsize Global-Local (GL) Pareto}}};
\end{axis}
\end{tikzpicture}}
\caption{Operational characteristics for all twelve \texttt{DIRECT}-type algorithmic variations on \texttt{DIRECTGOLib v1.1}{} test problems}
\label{fig:perf-l1}
\end{figure}
\begin{figure}
\resizebox{\textwidth}{!}{
\begin{tikzpicture}
\begin{axis}[
legend pos=north west,
title = {Operational characteristics},
xlabel = {Function evaluations},
xmode=log,
ymin=-0.02,ymax=1.02,
ytick distance=0.1,
xmode=log,
xmin=10,
xmax=1000000,
xtick distance=10,
ylabel = {Proportion of solved problems},
ylabel style={yshift=-0.5em},
legend style={font=\tiny,xshift=-0.5em},
legend cell align={left},
legend columns=1,
height=0.75\textwidth,width=\textwidth,
every axis plot/.append style={very thick},
]
\node [draw,fill=white] at (rel axis cs: 0.85,0.14) {\shortstack[l]{
{\scriptsize \textbf{Partitioning schemes}} \\
\ref{p1} {\scriptsize N-DTC} \\
\ref{p2} {\scriptsize 1-DTC} \\
\ref{p3} {\scriptsize 1-DBDP} \\
\ref{p4} {\scriptsize 1-DTDV}}};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.06 with {\node[circle,draw=black,fill=princetonorange,inner sep=1.5pt,solid] {};}},decorate,},sandstorm,line width=0.75pt,densely dashed] table[x=T,y=DDA] {data/Overallassa.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.07 with {\node[mark=square,draw=black,fill=yellow,inner sep=1.5pt,solid] {};}},decorate,},sandstorm,line width=0.75pt,densely dashed] table[x=T,y=DRA] {data/Overallassa.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.08 with {\node[diamond,draw=black,fill=sienna,inner sep=1.5pt,solid] {};}},decorate,},sandstorm,line width=0.75pt,densely dashed] table[x=T,y=BIA] {data/Overallassa.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.09 with {\node[regular polygon,regular polygon sides=3,draw=black,fill=psychedelicpurple,inner sep=1pt,solid] {};}},decorate,},sandstorm,line width=0.75pt,densely dashed] table[x=T,y=ADA] {data/Overallassa.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.1 with {\node[circle,draw=black,fill=princetonorange,inner sep=1.5pt] {};}},decorate,},blue,line width=0.75pt] table[x=T,y=DDO] {data/Overallassa.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.11 with {\node[mark=square,draw=black,fill=yellow,inner sep=1.5pt] {};}},decorate,},blue,line width=0.75pt] table[x=T,y=DRO] {data/Overallassa.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.12 with {\node[diamond,draw=black,fill=sienna,inner sep=1.5pt] {};}},decorate,},blue,line width=0.75pt] table[x=T,y=BIO] {data/Overallassa.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.13 with {\node[regular polygon,regular polygon sides=3,draw=black,fill=psychedelicpurple,inner sep=1pt] {};}},decorate,},blue,line width=0.75pt] table[x=T,y=ADO] {data/Overallassa.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.14 with {\node[circle,draw=black,fill=princetonorange,inner sep=1.5pt,solid] {};}},decorate,},onyx,line width=0.75pt,densely dotted] table[x=T,y=DDG] {data/Overallassa.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.15 with {\node[mark=square,draw=black,fill=yellow,inner sep=1.5pt,solid] {};}},decorate,},onyx,line width=0.75pt,densely dotted] table[x=T,y=DRG] {data/Overallassa.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.16 with {\node[diamond,draw=black,fill=sienna,inner sep=1.5pt,solid] {};}},decorate,},onyx,line width=0.75pt,densely dotted] table[x=T,y=BIG] {data/Overallassa.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.17 with {\node[regular polygon,regular polygon sides=3,draw=black,fill=psychedelicpurple,inner sep=1pt,solid] {};}},decorate,},onyx,line width=0.75pt,densely dotted] table[x=T,y=ADG] {data/Overallassa.txt};
\node [draw,fill=white] at (rel axis cs: 0.2,0.88) {\shortstack[l]{
{\scriptsize \textbf{POH selection schemes}} \\
\ref{p5} {\scriptsize Improved Original (IO)} \\
\ref{p6} {\scriptsize Improved Aggressive (IA)} \\
\ref{p7} {\scriptsize Global-Local (GL) Pareto}}};
\end{axis}
\end{tikzpicture}}
\caption{Operational characteristics for all twelve \texttt{DIRECT}-type algorithmic variations solving higher-dimensional ($n > 4$) multi-modal \texttt{DIRECTGOLib v1.1}{} test problems}
\label{fig:perf-l1d}
\end{figure}
\begin{figure}
\resizebox{\textwidth}{!}{
\begin{tikzpicture}
\begin{axis}[
legend pos=north west,
title = {Operational characteristics},
xlabel = {Function evaluations},
xmode=log,
ymin=-0.02,ymax=1.02,
ytick distance=0.1,
xmin=10,
xmax=1000000,
xtick distance=10,
ylabel = {Proportion of solved problems},
ylabel style={yshift=-0.5em},
legend style={font=\tiny,xshift=-0.5em},
legend cell align={left},
legend columns=1,
height=0.75\textwidth,width=\textwidth,
every axis plot/.append style={very thick},
]
\node [draw,fill=white] at (rel axis cs: 0.85,0.14) {\shortstack[l]{
{\scriptsize \textbf{Partitioning schemes}} \\
\ref{p1} {\scriptsize N-DTC} \\
\ref{p2} {\scriptsize 1-DTC} \\
\ref{p3} {\scriptsize 1-DBDP} \\
\ref{p4} {\scriptsize 1-DTDV}}};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.06 with {\node[circle,draw=black,fill=princetonorange,inner sep=1.5pt,solid] {};}},decorate,},sandstorm,line width=0.75pt,densely dashed] table[x=T,y=DDA] {data/Overallassb.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.07 with {\node[mark=square,draw=black,fill=yellow,inner sep=1.5pt,solid] {};}},decorate,},sandstorm,line width=0.75pt,densely dashed] table[x=T,y=DRA] {data/Overallassb.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.08 with {\node[diamond,draw=black,fill=sienna,inner sep=1.5pt,solid] {};}},decorate,},sandstorm,line width=0.75pt,densely dashed] table[x=T,y=BIA] {data/Overallassb.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.09 with {\node[regular polygon,regular polygon sides=3,draw=black,fill=psychedelicpurple,inner sep=1pt,solid] {};}},decorate,},sandstorm,line width=0.75pt,densely dashed] table[x=T,y=ADA] {data/Overallassb.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.1 with {\node[circle,draw=black,fill=princetonorange,inner sep=1.5pt] {};}},decorate,},blue,line width=0.75pt] table[x=T,y=DDO] {data/Overallassb.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.11 with {\node[mark=square,draw=black,fill=yellow,inner sep=1.5pt] {};}},decorate,},blue,line width=0.75pt] table[x=T,y=DRO] {data/Overallassb.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.12 with {\node[diamond,draw=black,fill=sienna,inner sep=1.5pt] {};}},decorate,},blue,line width=0.75pt] table[x=T,y=BIO] {data/Overallassb.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.13 with {\node[regular polygon,regular polygon sides=3,draw=black,fill=psychedelicpurple,inner sep=1pt] {};}},decorate,},blue,line width=0.75pt] table[x=T,y=ADO] {data/Overallassb.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.14 with {\node[circle,draw=black,fill=princetonorange,inner sep=1.5pt,solid] {};}},decorate,},onyx,line width=0.75pt,densely dotted] table[x=T,y=DDG] {data/Overallassb.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.15 with {\node[mark=square,draw=black,fill=yellow,inner sep=1.5pt,solid] {};}},decorate,},onyx,line width=0.75pt,densely dotted] table[x=T,y=DRG] {data/Overallassb.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.16 with {\node[diamond,draw=black,fill=sienna,inner sep=1.5pt,solid] {};}},decorate,},onyx,line width=0.75pt,densely dotted] table[x=T,y=BIG] {data/Overallassb.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.17 with {\node[regular polygon,regular polygon sides=3,draw=black,fill=psychedelicpurple,inner sep=1pt,solid] {};}},decorate,},onyx,line width=0.75pt,densely dotted] table[x=T,y=ADG] {data/Overallassb.txt};
\node [draw,fill=white] at (rel axis cs: 0.2,0.88) {\shortstack[l]{
{\scriptsize \textbf{POH selection schemes}} \\
\ref{p5} {\scriptsize Improved Original (IO)} \\
\ref{p6} {\scriptsize Improved Aggressive (IA)} \\
\ref{p7} {\scriptsize Global-Local (GL) Pareto}}};
\end{axis}
\end{tikzpicture}}
\caption{Operational characteristics for all twelve \texttt{DIRECT}-type algorithmic variations solving uni-modal and convex \texttt{DIRECTGOLib v1.1}{} test problems}
\label{fig:perf-l1c}
\end{figure}
Additionally, the operational characteristics~\cite{Grishagin1978,Strongin2000:book} reported in \Cref{fig:perf-l1,fig:perf-l1d,fig:perf-l1c} show the behavior of all twelve algorithms on different subsets of \texttt{DIRECTGOLib v1.1}{} box-constrained test problems.
Operational characteristics provide the proportion of problems that can be solved within a given budget of function evaluations.
\Cref{fig:perf-l1}, drawn using all $96$ box-constrained problems from \texttt{DIRECTGOLib v1.1}{}, clearly shows that IO selection scheme-based \texttt{DIRECT}-type algorithms (1-DTDV-IO and N-DTC-IO) dominate for simpler problems.
They solved almost half of the 96 test problems within a small budget of objective function evaluations.
However, as the number of function evaluations increases (i.e., as more complex problems are considered), the GL scheme-based algorithms become the most efficient.
Operational characteristics in \Cref{fig:perf-l1d} show the behavior of all twelve algorithms on $35$ higher-dimensional ($n > 4$) multi-modal test problems.
Once again, when a given budget of function evaluations is low (M$_{\rm max} \leq 500$), all IO selection scheme-based variations perform better.
Unfortunately, with such a small function evaluation budget, the algorithms will only solve approximately $15 \%$ of all the test problems.
When the maximal budget of function evaluations is increased (M$_{\rm max} \leq 4,000$), only one of the IO selection scheme combinations (N-DTC-IO) maintained the highest efficiency and solved approximately $50 \%$ of all the test cases.
Finally, when the function evaluation budget is higher (M$_{\rm max} \geq 4,000$), GL selection scheme-based variations (1-DTC-GL and 1-DBDP-GL) have the highest efficiency.
Similar tendencies regarding the best-performing selection strategies can be seen in \Cref{fig:perf-l1c}.
Here, the operational characteristics illustrate the behavior of the algorithms when solving the simplest uni-modal and convex optimization test problems.
Among the partitioning strategies, the 1-DTDV scheme looks the most efficient here.
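The operational characteristic itself is straightforward to compute from the per-problem evaluation counts. The following minimal sketch (a hypothetical Python helper, not part of \texttt{DIRECTGO}; the counts and budgets are illustrative) shows the computation:

```python
import numpy as np

def operational_characteristic(evals, budgets):
    """For each budget b, return the proportion of problems solved
    within b objective-function evaluations (the y-axis of the
    operational-characteristic plots).

    evals   : per-problem evaluation counts; np.inf marks a failure.
    budgets : evaluation budgets (the x-axis, typically log-spaced).
    """
    evals = np.asarray(evals, dtype=float)
    return np.array([(evals <= b).mean() for b in budgets])

# Illustrative counts for five problems; the last one is unsolved.
counts = [150, 800, 4200, 90000, np.inf]
budgets = [100, 1000, 10000, 100000, 1000000]
print(operational_characteristic(counts, budgets))  # proportions 0.0, 0.4, 0.6, 0.8, 0.8
```

Plotting these proportions against a log-spaced budget axis reproduces a curve of the kind shown in the figures above.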
\section{Experimental investigation using GKLS-type test problems}
\label{sec:exp-GKLS}
Additionally, we compare the performance of all twelve \texttt{DIRECT}-type variants on GKLS-type test problems~\cite{Gaviano2003}.
The GKLS generator produces three types (non-differentiable, continuously differentiable, and twice continuously differentiable) of multi-dimensional and multi-extremal optimization test functions with a priori known local and global minima.
The complexity of the generated problems is controlled by user-defined parameters: the problem dimension $n$, the number of local minima $m$, the global minimum value $f^*$, the distance $d$ from the global minimizer to the paraboloid vertex, and the radius $r$ of the attraction region of the global minimizer.
We use eight different complexity classes (see \Cref{tab:paramet}).
The dimensionality ($n$) and other parameters are set as in~\cite{Paulavicius2014:jogo,Paulavicius2019:eswa}.
Each class consists of $100$ test instances.
For each dimension $n$, two test classes were considered: the ``simple'' class and the ``hard'' one.
For three- and four-dimensional classes the difficulty is increased by enlarging the distance $d$ from the global minimizer $(\mathbf{x}^*)$ to the paraboloid vertex.
For two- and five-dimensional classes, this is achieved by decreasing the radius $r$ of the attraction region of the global minimizer.
\begin{table}[ht]
\centering
\caption{Description of GKLS-type test classes used in numerical experiments}
\label{tab:paramet}
\resizebox{\textwidth}{!}{
\begin{tabular}{p{1cm}p{2cm}p{1cm}p{1cm}p{1cm}p{1cm}p{1cm}p{1cm}}
\toprule
Class & Difficulty & $\Delta$ & $n$ & $f^*$ & $d$ & $r$ & $m$ \\
\midrule
$1$ & simple & $10^{-4}$ & $2$ & $-1$ & $0.90$ & $0.20$ & $10$ \\
$2$ & hard & $10^{-4}$ & $2$ & $-1$ & $0.90$ & $0.10$ & $10$ \\
$3$ & simple & $10^{-6}$ & $3$ & $-1$ & $0.66$ & $0.20$ & $10$ \\
$4$ & hard & $10^{-6}$ & $3$ & $-1$ & $0.90$ & $0.20$ & $10$ \\
$5$ & simple & $10^{-6}$ & $4$ & $-1$ & $0.66$ & $0.20$ & $10$ \\
$6$ & hard & $10^{-6}$ & $4$ & $-1$ & $0.90$ & $0.20$ & $10$ \\
$7$ & simple & $10^{-7}$ & $5$ & $-1$ & $0.66$ & $0.30$ & $10$ \\
$8$ & hard & $10^{-7}$ & $5$ & $-1$ & $0.66$ & $0.20$ & $10$ \\
\bottomrule
\end{tabular}}
\end{table}
The same stopping rule is adopted in these experiments as in~\cite{Paulavicius2014:jogo,Paulavicius2019:eswa}.
The global minimizer $\mathbf{x}^* \in D$ is considered found when an algorithm generates a function evaluation point $\mathbf{x}^i \in D^i_k$ such that:
\begin{equation}
\label{eq:pes}
\arrowvert x^i_j - x^*_j \arrowvert \leq \sqrt[n]{\Delta}(b_j - a_j), \hspace{1cm} 1 \leq j \leq n,
\end{equation}
where $0 \leq \Delta \leq 1$ is an accuracy coefficient~\cite{Sergeyev2006} (see \Cref{tab:paramet} for the $\Delta$ parameter values).
In other words, we stop the search when the algorithm has produced a point very close to the known optimum.
In each run, we used the same limit of function evaluations equal to $10^6$.
Note that the stopping rule \eqref{eq:pes} does not require algorithms to find a solution with high accuracy.
Therefore, the Pareto selection enhancing the local search (see \Cref{sssec:two-step-selection}) was disabled.
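As a concrete illustration, the stopping rule \eqref{eq:pes} reduces to a per-coordinate tolerance check. The sketch below (hypothetical Python, not the \texttt{DIRECTGO} implementation) assumes a box domain $D = [\mathbf{a}, \mathbf{b}]$ and a known minimizer $\mathbf{x}^*$:

```python
import numpy as np

def near_global_minimizer(x, x_star, a, b, delta):
    """Stopping rule of Eq. (pes): accept the trial point x when every
    coordinate deviates from the known minimizer x_star by at most
    delta**(1/n) times the width of the feasible box [a, b]."""
    x, x_star, a, b = map(np.asarray, (x, x_star, a, b))
    tol = delta ** (1.0 / x.size) * (b - a)
    return bool(np.all(np.abs(x - x_star) <= tol))

# Class 1 setting (n = 2, delta = 1e-4): the per-coordinate tolerance
# is sqrt(1e-4), i.e., 1% of the unit box width.
print(near_global_minimizer([0.501, 0.502], [0.5, 0.5], [0.0, 0.0], [1.0, 1.0], 1e-4))  # True
```

Note how permissive the rule is in higher dimensions: for the hard five-dimensional class ($\Delta = 10^{-7}$, $n = 5$), the per-coordinate tolerance $\sqrt[5]{10^{-7}} \approx 0.04$ is about $4 \%$ of the box width.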
The experimental results are summarized in \Cref{tab:results4}.
The notation ``$>10^6(j)$'' indicates that the algorithm under consideration failed to solve $j$ problems within the maximal number of $10^6$ function evaluations.
First, contrary to the previous tendencies, the best results are obtained when \texttt{DIRECT}-type variants include an improved original selection scheme (IO).
In the numerical experiments described in \Cref{sec:experiments}, \texttt{DIRECT}-type algorithms with integrated IO selection schemes showed nearly the worst efficiency.
However, all four \texttt{DIRECT}-type variants based on the IO selection scheme are promising for GKLS-type test problems.
The least attractive is the improved aggressive selection scheme (IA).
All $40$ failed cases occurred when this selection scheme was combined with one of the three partitioning strategies other than 1-DBDP.
The best average results (see the upper part of \Cref{tab:results4}) are achieved using 1-DTC-IO (for seven different classes) and 1-DBDP-GL (for one class).
The overall average obtained with the 1-DTC-IO algorithm is $8,021$ function evaluations, while the second-best (1-DBDP-IO) and third-best (1-DTDV-IO) algorithms deliver approximately $19 \%$ ($9,571$) and $47 \%$ ($15,231$) worse overall performance.
Interestingly, the 1-DBDP-GL algorithm, which produced the best overall result in \Cref{sec:experiments}, ranks only fifth here, delivering approximately $52 \%$ ($16,713$) worse average results than 1-DTC-IO.
The lowest aggregated median (the middle part of \Cref{tab:results4}) over all eight classes is again obtained with the same 1-DTC-IO algorithm ($1,427$).
Therefore, the 1-DTC-IO algorithm solves at least half of the GKLS-type problems with the best performance.
In contrast, the second-best (1-DBDP-GL) and third-best (1-DBDP-IO) algorithms delivered approximately $3.71 \%$ ($1,482$) and $4.03 \%$ ($1,487$) worse overall median values.
In the bottom part of \Cref{tab:results4}, the maximal number of function evaluations required to solve test problems within a particular class is reported.
The previously emphasized algorithmic variation 1-DTC-IO is the best for four out of eight classes.
However, for two of the most complex (``hard'') classes (No.~$6$ and $8$), the 1-DTDV-IO algorithm seems the most promising.
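The percentage comparisons above can be reproduced from \Cref{tab:results4} with a one-line helper. The sketch below (hypothetical Python) uses the relative-increase convention $100\,(v/v_{\mathrm{best}} - 1)$, which matches the quoted $19 \%$ figure; some of the other quoted percentages appear to follow a slightly different convention:

```python
def percent_worse(value, best):
    """Relative increase of `value` over the best result, in percent:
    how many percent more function evaluations an algorithm needs
    than the best-performing one."""
    return 100.0 * (value / best - 1.0)

# Aggregated averages over classes 1-8: 1-DTC-IO is best with 8,021
# evaluations; 1-DBDP-IO needs 9,571.
print(round(percent_worse(9571, 8021)))  # 19
```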
\begin{sidewaystable}
\caption{Comparison of twelve \texttt{DIRECT}-type variants on eight classes of GKLS-type problems}
\resizebox{\textwidth}{!}{
\begin{tabular}[tb]{@{\extracolsep{\fill}}c|rrrr|rrrr|rrrr}
\toprule
Class & N-DTC-IA & 1-DTC-IA & 1-DBDP-IA & 1-DTDV-IA & N-DTC-IO & 1-DTC-IO & 1-DBDP-IO & 1-DTDV-IO & N-DTC-GL & 1-DTC-GL & 1-DBDP-GL & 1-DTDV-GL \\
\midrule
\multicolumn{13}{c}{Average number of function evaluations}\\
\midrule
$1$ & $510$ & $289$ & $257$ & $1,148$ & $198$ & $\mathbf{149}$ & $156$ & $360$ & $236$ & $217$ & $185$ & $665$ \\
$2$ & $5,215$ & $5,491$ & $1,449$ & $6,278$ & $1,068$ & $\mathbf{803}$ & $863$ & $1,458$ & $2,332$ & $3,130$ & $1,245$ & $3,454$ \\
$3$ & $4,069$ & $1,747$ & $1,252$ & $9,870$ & $1,019$ & $\mathbf{673}$ & $869$ & $1,776$ & $1,264$ & $1,286$ & $825$ & $4,148$ \\
$4$ & $15,331$ & $9,393$ & $4,032$ & $35,291$ & $2,477$ & $\mathbf{1,543}$ & $1,832$ & $4,539$ & $5,125$ & $4,427$ & $2,640$ & $14,943$ \\
$5$ & $36,586$ & $20,911$ & $13,179$ & $91,161$ & $7,843$ & $\mathbf{4,591}$ & $8,207$ & $11,806$ & $10,720$ & $10,553$ & $9,199$ & $32,597$ \\
$6$ & $198,129$ & $157,187$ & $95,541$ & $350,033$ & $26,970$ & $\mathbf{16,899}$ & $23,905$ & $34,020$ & $77,715$ & $68,348$ & $57,867$ & $117,684$ \\
$7$ & $21,811$ & $17,769$ & $4,956$ & $67,634$ & $6,216$ & $4,419$ & $3,974$ & $11,156$ & $12,262$ & $10,645$ & $\mathbf{3,472}$ & $32,108$ \\
$8$ & $308,948$ & $226,208$ & $96,621$ & $477,868$ & $67,636$ & $\mathbf{35,091}$ & $36,768$ & $55,625$ & $120,118$ & $99,362$ & $58,278$ & $183,881$ \\
$1-8$ & $73,825$ & $54,874$ & $27,160$ & $129,910$ & $14,178$ & $\mathbf{8,021}$ & $9,571$ & $15,231$ & $27,783$ & $24,746$ & $16,713$ & $48,685$ \\
\midrule
\multicolumn{13}{c}{Median number of function evaluations}\\
\midrule
$1$ & $308$ & $207$ & $204$ & $878$ & $119$ & $122$ & $\mathbf{111}$ & $328$ & $128$ & $155$ & $130$ & $469$ \\
$2$ & $4,316$ & $2,375$ & $1,016$ & $5,793$ & $1,058$ & $724$ & $\mathbf{673}$ & $1,364$ & $1,979$ & $2,097$ & $855$ & $3,272$ \\
$3$ & $1,267$ & $949$ & $755$ & $7,813$ & $\mathbf{387}$ & $503$ & $488$ & $1,597$ & $445$ & $604$ & $461$ & $3,344$ \\
$4$ & $8,970$ & $3,962$ & $2,366$ & $32,915$ & $1,782$ & $\mathbf{1,075}$ & $1,189$ & $4,230$ & $2,463$ & $2,992$ & $1,399$ & $13,753$ \\
$5$ & $15,523$ & $9,063$ & $5,851$ & $78,151$ & $4,874$ & $\mathbf{2,872}$ & $4,443$ & $10,761$ & $4,388$ & $7,052$ & $4,202$ & $28,822$ \\
$6$ & $121,285$ & $76,586$ & $55,130$ & $334,277$ & $15,517$ & $\mathbf{9,237}$ & $15,628$ & $32,796$ & $43,458$ & $33,804$ & $29,189$ & $116,530$ \\
$7$ & $7,514$ & $4,079$ & $2,102$ & $42,804$ & $1,673$ & $2,291$ & $2,278$ & $10,992$ & $1,533$ & $3,440$ & $\mathbf{1,427}$ & $29,635$ \\
$8$ & $203,711$ & $128,408$ & $53,291$ & $429,811$ & $43,400$ & $24,327$ & $\mathbf{19,967}$ & $47,221$ & $65,892$ & $54,497$ & $27,697$ & $162,411$ \\
$1-8$ & $6,767$ & $4,188$ & $2,098$ & $26,227$ & $1,644$ & $\mathbf{1,427}$ & $1,487$ & $4,599$ & $2,117$ & $3,178$ & $1,482$ & $13,235$ \\
\midrule
\multicolumn{13}{c}{Maximal number of function evaluations}\\
\midrule
$1$ & $4,777$ & $1,955$ & $1,178$ & $4,811$ & $1,153$ & $\mathbf{655}$ & $840$ & $961$ & $2,031$ & $1,319$ & $1,090$ & $2,502$ \\
$2$ & $21,841$ & $25,021$ & $11,674$ & $20,089$ & $3,197$ & $\mathbf{2,201}$ & $4,374$ & $3,964$ & $8,431$ & $11,041$ & $8,836$ & $10,269$ \\
$3$ & $31,291$ & $13,233$ & $8,196$ & $34,607$ & $6,625$ & $\mathbf{3,273}$ & $5,032$ & $4,864$ & $10,723$ & $11,335$ & $5,058$ & $16,789$ \\
$4$ & $132,121$ & $150,809$ & $22,720$ & $88,327$ & $15,307$ & $9,763$ & $\mathbf{7,806}$ & $10,236$ & $48,371$ & $45,721$ & $14,064$ & $37,514$ \\
$5$ & $212,339$ & $131,783$ & $116,136$ & $280,015$ & $39,129$ & $\mathbf{18,853}$ & $62,016$ & $35,898$ & $74,277$ & $47,527$ & $70,028$ & $78,365$ \\
$6$ & $>10^6(4)$ & $>10^6(4)$ & $562,156$ & $932,395$ & $260,793$ & $126,061$ & $141,914$ & $\mathbf{86,105}$ & $907,497$ & $662,983$ & $345,082$ & $280,069$ \\
$7$ & $266,007$ & $220,647$ & $77,998$ & $317,351$ & $110,237$ & $33,691$ & $\mathbf{27,380}$ & $41,751$ & $86,269$ & $135,889$ & $54,400$ & $145,022$ \\
$8$ & $>10^6(8)$ & $>10^6(6)$ & $776,052$ & $>10^6(18)$ & $472,125$ & $229,583$ & $313,420$ & $\mathbf{210,483}$ & $960,573$ & $700,615$ & $436,072$ & $718,803$ \\
\bottomrule
\end{tabular}}
\label{tab:results4}
\end{sidewaystable}
Additionally, we visualize the performance of all twelve \texttt{DIRECT}-type variations on GKLS-type problems using the operational characteristics.
\Cref{fig:perf-l2} shows the behavior on four ``simple'' GKLS classes, while \Cref{fig:perf-l3} shows the ``hard'' ones.
For the simple classes, the IO and GL selection schemes seem the most promising.
Among the algorithms, when a low budget of function evaluations is considered (M$_{\rm max} \leq 400$), the 1-DTC-IO algorithm is the most efficient.
However, when M$_{\rm max} > 400$, the 1-DBDP-GL algorithmic combination outperforms all others.
Furthermore, regardless of the selection scheme used, the best performance on these simple classes is achieved with the 1-DBDP partitioning strategy.
Finally, for the ``hard'' classes (see \Cref{fig:perf-l3}), the IO selection scheme seems the most favorable (especially for simpler problems), while GL is the second-best option.
However, when a higher maximal number of function evaluations is allowed, the performance of GL and IO selection scheme-based algorithms is quite similar.
Among the algorithms, 1-DBDP-IO and 1-DTC-IO are the two best-performing ones.
\begin{figure}
\resizebox{\textwidth}{!}{
\begin{tikzpicture}
\begin{axis}[
legend pos=north west,
title = {Operational characteristics},
xlabel = {Function evaluations},
xmode=log,
ymin=-0.02,ymax=1.02,
ytick distance=0.1,
xmin=10,
xmax=1000000,
xtick distance=10,
ylabel = {Proportion of solved problems},
ylabel style={yshift=-0.5em},
legend style={font=\tiny,xshift=-0.5em},
legend cell align={left},
legend columns=1,
height=0.75\textwidth,width=\textwidth,
every axis plot/.append style={very thick},
]
\node [draw,fill=white] at (rel axis cs: 0.85,0.14) {\shortstack[l]{
{\scriptsize \textbf{Partitioning schemes}} \\
\ref{p1} {\scriptsize N-DTC} \\
\ref{p2} {\scriptsize 1-DTC} \\
\ref{p3} {\scriptsize 1-DBDP} \\
\ref{p4} {\scriptsize 1-DTDV}}};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.06 with {\node[circle,draw=black,fill=princetonorange,inner sep=1.5pt,solid] {};}},decorate,},sandstorm,line width=0.75pt,densely dashed] table[x=T,y=DDA] {data/Overallass1.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.07 with {\node[mark=square,draw=black,fill=yellow,inner sep=1.5pt,solid] {};}},decorate,},sandstorm,line width=0.75pt,densely dashed] table[x=T,y=DRA] {data/Overallass1.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.08 with {\node[diamond,draw=black,fill=sienna,inner sep=1.5pt,solid] {};}},decorate,},sandstorm,line width=0.75pt,densely dashed] table[x=T,y=BIA] {data/Overallass1.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.09 with {\node[regular polygon,regular polygon sides=3,draw=black,fill=psychedelicpurple,inner sep=1pt,solid] {};}},decorate,},sandstorm,line width=0.75pt,densely dashed] table[x=T,y=ADA] {data/Overallass1.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.1 with {\node[circle,draw=black,fill=princetonorange,inner sep=1.5pt] {};}},decorate,},blue,line width=0.75pt] table[x=T,y=DDO] {data/Overallass1.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.11 with {\node[mark=square,draw=black,fill=yellow,inner sep=1.5pt] {};}},decorate,},blue,line width=0.75pt] table[x=T,y=DRO] {data/Overallass1.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.12 with {\node[diamond,draw=black,fill=sienna,inner sep=1.5pt] {};}},decorate,},blue,line width=0.75pt] table[x=T,y=BIO] {data/Overallass1.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.13 with {\node[regular polygon,regular polygon sides=3,draw=black,fill=psychedelicpurple,inner sep=1pt] {};}},decorate,},blue,line width=0.75pt] table[x=T,y=ADO] {data/Overallass1.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.14 with {\node[circle,draw=black,fill=princetonorange,inner sep=1.5pt,solid] {};}},decorate,},onyx,line width=0.75pt,densely dotted] table[x=T,y=DDG] {data/Overallass1.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.15 with {\node[mark=square,draw=black,fill=yellow,inner sep=1.5pt,solid] {};}},decorate,},onyx,line width=0.75pt,densely dotted] table[x=T,y=DRG] {data/Overallass1.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.16 with {\node[diamond,draw=black,fill=sienna,inner sep=1.5pt,solid] {};}},decorate,},onyx,line width=0.75pt,densely dotted] table[x=T,y=BIG] {data/Overallass1.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.17 with {\node[regular polygon,regular polygon sides=3,draw=black,fill=psychedelicpurple,inner sep=1pt,solid] {};}},decorate,},onyx,line width=0.75pt,densely dotted] table[x=T,y=ADG] {data/Overallass1.txt};
\node [draw,fill=white] at (rel axis cs: 0.2,0.88) {\shortstack[l]{
{\scriptsize \textbf{POH selection schemes}} \\
\ref{p5} {\scriptsize Improved Original (IO)} \\
\ref{p6} {\scriptsize Improved Aggressive (IA)} \\
\ref{p7} {\scriptsize Global-Local (GL) Pareto}}};
\end{axis}
\end{tikzpicture}}
\caption{Operational characteristics for all twelve \texttt{DIRECT}-type algorithmic variations on four ``simple'' GKLS-type classes}
\label{fig:perf-l2}
\end{figure}
\begin{figure}
\resizebox{\textwidth}{!}{
\begin{tikzpicture}
\begin{axis}[
legend pos=north west,
title = {Operational characteristics},
xlabel = {Function evaluations},
xmode=log,
ymin=-0.02,ymax=1.02,
ytick distance=0.1,
xmin=10,
xmax=1000001,
xtick distance=10,
ylabel = {Proportion of solved problems},
ylabel style={yshift=-0.5em},
legend style={font=\tiny,xshift=-0.5em},
legend cell align={left},
legend columns=1,
height=0.75\textwidth,width=\textwidth,
every axis plot/.append style={very thick},
]
\node [draw,fill=white] at (rel axis cs: 0.85,0.14) {\shortstack[l]{
{\scriptsize \textbf{Partitioning schemes}} \\
\ref{p1} {\scriptsize N-DTC} \\
\ref{p2} {\scriptsize 1-DTC} \\
\ref{p3} {\scriptsize 1-DBDP} \\
\ref{p4} {\scriptsize 1-DTDV}}};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.06 with {\node[circle,draw=black,fill=princetonorange,inner sep=1.5pt,solid] {};}},decorate,},sandstorm,line width=0.75pt,densely dashed] table[x=T,y=DDA] {data/Overallassw.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.07 with {\node[mark=square,draw=black,fill=yellow,inner sep=1.5pt,solid] {};}},decorate,},sandstorm,line width=0.75pt,densely dashed] table[x=T,y=DRA] {data/Overallassw.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.08 with {\node[diamond,draw=black,fill=sienna,inner sep=1.5pt,solid] {};}},decorate,},sandstorm,line width=0.75pt,densely dashed] table[x=T,y=BIA] {data/Overallassw.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.09 with {\node[regular polygon,regular polygon sides=3,draw=black,fill=psychedelicpurple,inner sep=1pt,solid] {};}},decorate,},sandstorm,line width=0.75pt,densely dashed] table[x=T,y=ADA] {data/Overallassw.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.1 with {\node[circle,draw=black,fill=princetonorange,inner sep=1.5pt] {};}},decorate,},blue,line width=0.75pt] table[x=T,y=DDO] {data/Overallassw.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.11 with {\node[mark=square,draw=black,fill=yellow,inner sep=1.5pt] {};}},decorate,},blue,line width=0.75pt] table[x=T,y=DRO] {data/Overallassw.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.12 with {\node[diamond,draw=black,fill=sienna,inner sep=1.5pt] {};}},decorate,},blue,line width=0.75pt] table[x=T,y=BIO] {data/Overallassw.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.13 with {\node[regular polygon,regular polygon sides=3,draw=black,fill=psychedelicpurple,inner sep=1pt] {};}},decorate,},blue,line width=0.75pt] table[x=T,y=ADO] {data/Overallassw.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.14 with {\node[circle,draw=black,fill=princetonorange,inner sep=1.5pt,solid] {};}},decorate,},onyx,line width=0.75pt,densely dotted] table[x=T,y=DDG] {data/Overallassw.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.15 with {\node[mark=square,draw=black,fill=yellow,inner sep=1.5pt,solid] {};}},decorate,},onyx,line width=0.75pt,densely dotted] table[x=T,y=DRG] {data/Overallassw.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.16 with {\node[diamond,draw=black,fill=sienna,inner sep=1.5pt,solid] {};}},decorate,},onyx,line width=0.75pt,densely dotted] table[x=T,y=BIG] {data/Overallassw.txt};
\addplot[postaction={decoration={markings,mark=between positions 0 and 1 step 0.17 with {\node[regular polygon,regular polygon sides=3,draw=black,fill=psychedelicpurple,inner sep=1pt,solid] {};}},decorate,},onyx,line width=0.75pt,densely dotted] table[x=T,y=ADG] {data/Overallassw.txt};
\node [draw,fill=white] at (rel axis cs: 0.2,0.88) {\shortstack[l]{
{\scriptsize \textbf{POH selection schemes}} \\
\ref{p5} {\scriptsize Improved Original (IO)} \\
\ref{p6} {\scriptsize Improved Aggressive (IA)} \\
\ref{p7} {\scriptsize Global-Local (GL) Pareto}}};
\end{axis}
\end{tikzpicture}}
\caption{Operational characteristics for all twelve \texttt{DIRECT}-type algorithmic variations on four ``hard'' GKLS-type classes}
\label{fig:perf-l3}
\end{figure}
\section{Conclusions and future work}
\label{sec:conclusiuo}
This paper presented an extensive experimental investigation of various candidate selection and partitioning techniques traditionally used in the \texttt{DIRECT}-type algorithms.
Twelve \texttt{DIRECT}-type algorithmic combinations were created by considering four well-known partitioning and three selection schemes.
In general, experimental results confirmed the well-known fact from ``No free lunch theorems for optimization''~\cite{Wolpert1997} that no one particular optimization algorithm works best for every problem.
However, detailed experimental studies have helped identify particular \texttt{DIRECT}-type algorithmic variations that work well in certain situations.
For example, our experimental findings in \Cref{ssec:perturbation} revealed that what initially looks like a clear dominance case goes away with small domain perturbations.
This should remind us how dangerous it can be to generalize from limited test-function results.
Below, we emphasize when certain variations have performed best and make some recommendations based on that.
Investigation using \texttt{DIRECTGOLib v1.1}{} test problems showed that, independently of the partitioning strategy, the two-step Pareto selection scheme (GL) ensures the best performance on more challenging optimization problems (higher-dimensional, multi-modal, non-convex).
The two best algorithmic variations combine the (GL) selection scheme with the 1-DTC and 1-DBDP partitioning approaches.
While the 1-DTDV-GL combination looks best for more straightforward problems (low-dimensional, uni-modal), the 1-DTC-GL and 1-DBDP-GL combinations are more efficient on more challenging problems.
The worst results were obtained using various partitioning strategies combined with the (IO) selection scheme, which showed promising performance only when the available budget of function evaluations is small.
Moreover, regardless of the selection scheme, the 1-DTDV partitioning strategy has a significant advantage when most of the solution coordinates lie on the boundary of the feasible region.
Additionally, the 1-DTDV partitioning approach has proven to be the most efficient in solving low-dimensional \texttt{DIRECTGOLib v1.1}{} test problems.
However, combinations based on the 1-DTDV partitioning scheme are very inefficient for higher-dimensional test problems.
For such problems, the 1-DBDP partitioning approach seems much more appropriate.
Experimental investigation on 800 GKLS-type test problems showed contrasting results.
This study revealed that the (IO) scheme can be very efficient.
On the simple GKLS classes, the efficiency of \texttt{DIRECT}-type algorithms based on the (IO) and (GL) selection schemes is very similar, whereas for the hard classes clearly better performance is achieved with (IO).
Let us recall that this selection scheme showed the worst results in the previous investigation.
To sum up, our study demonstrated that using already known techniques combined in new variations can create more efficient \texttt{DIRECT}-type algorithms.
For example, efficient diagonal partition-based \texttt{BIRECT}{} can be further improved by replacing the original selection scheme with the GL selection (from the \texttt{DIRECT-GL}{} algorithm), resulting in a more efficient algorithm (1-DBDP-GL).
As for further research, one possible direction could be improving the two-step-based (Global-Local) Pareto selection scheme (GL). Algorithms based on this scheme showed superior performance in solving the most complex optimization test problems but relatively poor efficiency on more straightforward ones.
One possible modification could be borrowing \texttt{DIRECT}'s $\varepsilon$ parameter or a similar technique to limit the size of selected POHs.
Optionally, instead of performing the locally-enhanced selection in every iteration, a specific rule could be added to specify when this selection is needed.
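To make the selection step concrete, the following minimal Python sketch (not the \texttt{DIRECTGO} implementation; the candidate values are hypothetical) illustrates the basic Pareto selection of potentially optimal hyper-rectangles over their (diameter, minimum function value) pairs, which is the global step underlying the GL scheme:

```python
# Minimal sketch (not the authors' implementation): Pareto-style selection
# of candidate hyper-rectangles, as used in DIRECT-type algorithms.
# Each candidate is a (diameter, f_min) pair; a candidate is kept if no
# other candidate is at least as large AND has an equal-or-lower f value
# (with at least one strict inequality).

def pareto_select(candidates):
    """Return indices of non-dominated (diameter, f_min) pairs."""
    selected = []
    for i, (d_i, f_i) in enumerate(candidates):
        dominated = any(
            (d_j >= d_i and f_j <= f_i) and (d_j > d_i or f_j < f_i)
            for j, (d_j, f_j) in enumerate(candidates) if j != i
        )
        if not dominated:
            selected.append(i)
    return selected

# Toy example: four hyper-rectangles (hypothetical values).
cands = [(1.0, 5.0), (0.5, 3.0), (0.5, 4.0), (0.25, 2.0)]
print(pareto_select(cands))  # -> [0, 1, 3]
```

The second candidate dominates the third (same diameter, lower value), so only the non-dominated set is subdivided further; an $\varepsilon$-style rule would additionally prune the smallest of these.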
Finally, solving a problem efficiently should start with investigating the problem itself.
Based on this knowledge, a suitable optimization algorithm can then be designed or selected.
Thus, one of our nearest future work plans is to extend this idea by developing automatic \texttt{DIRECT}-type algorithm selection.
\section*{Source code statement}
All twelve introduced \texttt{DIRECT}-type algorithms are implemented in \texttt{MATLAB} and are available in the most recent version of \texttt{DIRECTGO v1.1.0}{} (\url{https://github.com/blockchain-group/DIRECTGO/tree/v1.1.0}) and can be used under the MIT license.
We welcome contributions and corrections to it.
\section*{Data statement}
\texttt{DIRECTGOLib} - \texttt{DIRECT}{} \textbf{G}lobal \textbf{O}ptimization test problems \textbf{Lib}rary is designed as a continuously-growing open-source GitHub repository to which anyone can easily contribute.
The exact data underlying this article from \texttt{DIRECTGOLib v1.1}{} can be accessed either on GitHub or at Zenodo:
\begin{itemize}
\item at GitHub: \url{https://github.com/blockchain-group/DIRECTGOLib/tree/v1.1},
\item at Zenodo: \url{https://doi.org/10.5281/zenodo.6491951},
\end{itemize}
and used under the MIT license.
We welcome contributions and corrections to it.
\vspace{15pt}
\noindent\textbf{Funding} The research work of S. Stripinis was funded by a Grant (No. S-MIP-21-53) from the \textit{Research Council of Lithuania}.
\vspace{15pt}
\noindent\textbf{Acknowledgment} The authors greatly thank the anonymous Reviewer for his valuable and constructive comments, which helped us significantly extend and improve the manuscript.
\bibliographystyle{spmpsci}
\section{Introduction}
Mobile devices connect to the Internet via one or more telecommunications
operators. Users usually have expectations about the services
they receive from these operators \cite{sung,mitraicme2011}. Based
on their experience about the services they receive, they either choose
to stay with the current operator or they switch to a new operator.
For example, in 2011, Vodafone Australia nearly lost 440,000 customers to different operators such as Telstra and Optus%
\footnote{http://www.itnews.com.au/News/290168,vodafone-australia-churn-nears-half-a-million-for-2011.aspx.
Retrieved 12/07/12.%
}. Telecommunications operators are interested in maximizing their
revenue by trying to retain their customers. On the other hand, users consider operators that meet their expectations based on cost and quality of service (QoS) offered to them.
It is widely assumed \cite{nokia,jain,1631338,Jumisko2008} that by maximizing network QoS (e.g., increasing network bandwidth
and/or increasing wireless signal strength) or by reducing the cost of
services, users will be satisfied with the services provided to them.
On the other hand, \cite{sung,kilkki} argue that users may or may not be satisfied with QoS and the cost
of services offered to them by operators.
For example, consider a statement posted on the Apple forum:%
\footnote{https://discussions.apple.com/thread/3437795?start=180\&tstart=0.
Retrieved 11/07/12.%
}
\textbf{Example:} \emph{{}``I am having the same issues as everyone else...Phone shows
5 bars on \textquotedbl{}4G.\textquotedbl{} Makes calls and texts
just fine but no imessage or internet (safari as well as any other
apps that require connectivity). Right now the two things I have noticed
are that I'm more likely to have it work late at night (11pm-2am)
and more likely to have it work when I'm outdoors. Most of my problems
are experienced while at work (9am-6pm) and when I leave work it tends
to work for a while (lunch break, errands) but not always. I did try
turning off all the 'System Services' under 'Location services' and
have not noticed any difference in my phone's behavior.''}
This example shows that a positive user experience may not be guaranteed even with 4G networks.
There is a need to understand users' perception of services, or their Quality of Experience (QoE) \cite{jain,kilkki,sung}. QoE
as a term is often misunderstood and is narrowly associated with QoS
\cite{kilkki}. We argue that QoE is a multidisciplinary and multidimensional concept. It involves concepts from several fields, including human-computer interaction, cognitive and behavioral science, computer networks and economics \cite{kilkki,sung,essay,marez2007}.
ITU-T \cite{ituqoe} defines QoE as: \textit{``The overall acceptability
of an application or service, as perceived subjectively by the end-user.''}
It is worth noting that ITU-T also considers the following statements:
\textit{``Quality of Experience includes the complete end-to-end
system effects (client, terminal, network, services infrastructure,
etc.)''} and `\textit{`Overall acceptability may be influenced by
user expectations and context.''} The key point to note here is that ITU-T does not define
what it means by \emph{{}``context''} or how experts can measure
users' \emph{{}``expectations''}.
\emph{Context} is any information that assists in determining a situation(s) related
to a user, network or device \cite{Dey}. For example, from GPS coordinates,
user-related situation can be inferred as: \textquotedblleft{}user is at work\textquotedblright{}.
From delay of 50 ms and packet losses of 0\%, network-related situation
can be determined as \textquotedblleft{}network is not congested\textquotedblright{}.
We consider context as any information that assists in determining users' QoE. Context can be static and dynamic. Static context does
not change often, while dynamic context changes over a period of time
and is difficult to predict. Static context may include user's application
preferences, their security requirements and cost.
In real-life environments, context can be
highly dynamic and stochastic i.e., it can change in a very short period
of time and is uncertain; it can be imperfect; it can exhibit a range
of temporal characteristics; it can have several alternative representations; it
can be interrelated; it can be distributed; and it may not be available at
a particular time. The timely collection and processing of context may be crucial,
as context may lose its accuracy over time. Dynamic context may include
user location, velocity, network load, battery power,
memory/CPU utilization, presence and SNR.
ETSI \cite{ETSI2010} defines QoE as: \textit{``A measure of user
performance based on both objective and subjective psychological measures
of using an ICT service or product.''} It also highlights
the importance of technical parameters such as those related to QoS
and communication context.
Nokia \cite{nokia} defines QoE as: \textit{``QoE is how a user perceives the usability
of a service when in use - how satisfied he or she is with a service. The term QoE
refers to the perception of the user about the quality of a particular service or network.''}
Mitra, Zaslavsky and {\AA}hlund \cite{mitraTMC} define QoE as:
\emph{{}``Quality of experience (QoE) is a metric that depends on
the underlying QoS along with a person's preferences towards a particular
object or service where his/her preferences are defined by his/her
personal attributes related to expectations, experiences, behaviour,
cognitive abilities, object's attributes and the environment surrounding
that person''}.
\begin{table}
\begin{centering}
\caption{Context parameters related to user, application, device and network
environment that need to be considered when modelling, measuring and
predicting QoE.}
\par\end{centering}
\centering{}%
\begin{tabular}{|p{3cm}|p{4.6cm}|}
\hline
Context classes & Context parameters\tabularnewline
\hline
\hline
User and user environment & location, temperature, heart rate, eye movement, amount of sweat,
social context, people nearby, light, background noise, age, gender\tabularnewline
\hline
Tool/device/object & screen size, design layout, resolution, general intuitiveness, buttons
placement, input/output methods, appeal, usability\tabularnewline
\hline
Application & type, requirements\tabularnewline
\hline
Network & type, bandwidth, delay, jitter, packet loss, RTT, loss burst size,
protocols used, received signal strength, congestion levels\tabularnewline
\hline
\end{tabular}
\end{table}
QoE can be computed using subjective and objective tests. Subjective tests require direct data collection from end users and lead to higher costs in terms of time and money. Objective tests, on the other hand, predict users' QoE directly, without requiring subjective tests. For example, the ITU-T E-Model \cite{G.107}
considers QoS parameters (e.g., network delay and packet losses) to
compute QoE in terms of the mean opinion score (MOS).
In Table 1 we list several context parameters related to the application, device, network and user environment that may assist in computing users' QoE. Along with context, there can be a plethora of QoE parameters, such as enjoyment, user satisfaction, technology acceptance, efficiency, accuracy and perceived ease-of-use \cite{brooks2010,1631338,5246986}. Studying and modelling these parameters to determine QoE is a challenging task \cite{brooks2010,1631338,sung,mitrasac2011}. There can be inter-dependencies and non-linear relationships
between context and QoE parameters \cite{correlation,mitraTMC}. Further, some parameters may be hidden. By the term ``hidden'' we mean that some parameters may not be observed directly; thus, they may be hard to measure and quantify. QoE modelling and measurement may require the combination of several QoE parameters to determine overall QoE, for example, combining QoE parameters such as ``user satisfaction'' and ``technology acceptance'' to compute users' overall QoE. This problem is aggravated by the fact that each QoE parameter can be measured on a different scale or with different units of measurement \cite{brooks2010,mitraicme2011}. For example, ``user satisfaction'' can be measured on a scale of 1 to 5, whereas ``technology acceptance'' can be measured using a simple ``yes'' or ``no''.
The contribution of this paper is to review research pertaining to QoE modelling, measurement and prediction. In doing so, we identify and highlight several important challenges that should be addressed to realize efficient techniques for QoE modelling, measurement and prediction. This paper is organized
as follows: Section II presents an in-depth discussion on subjective
and objective methods and introduces the research problems pertaining to QoE modelling, measurement and prediction. Sections III and IV present the state-of-the-art methods and discuss their advantages and shortcomings.
Section V presents the future research directions that should be pursued to
realize efficient methods for QoE modelling, measurement and prediction.
Finally, section VI presents the conclusion.
\section{Methods for QoE Measurement and Prediction}
\subsection{Subjective Methods}
QoE measurement can be performed using subjective and objective tests \cite{brooks2010,moormonet,Li-yuan2006,1631338,takahashi}. Subjective
tests involve direct data collection from users, for example, in
the form of user ratings. Standardization bodies such as the ITU-T, in the ITU-T P.800 recommendation \cite{P.800}, present a methodology for conducting subjective tests. This recommendation also defines a method to measure users' QoE based on a score called the mean opinion score (MOS) \cite{P.800}. MOS is widely used for subjective voice/video quality assessment, where human test subjects grade their overall experience on the Absolute Category Rating (ACR) scale. This scale typically comprises five alternatives: '5' means {}``excellent'', '4' means {}``good'', '3' means {}``fair'', '2' means {}``poor'' and '1' means {}``bad''.
There are several problems that arise while conducting subjective tests. For example, a large sample space is required to get credible results \cite{rix}. These tests can be expensive and time consuming \cite{takahashi,brooks2010,mollericc2009}. Hence,
subjective tests are mainly limited to major telecommunication providers.
Further, the native language of human subjects might not be the same across test subjects, and the results obtained via subjective tests can be biased or even incomplete \cite{Knoche99}.
Several researchers \cite{madm,Janowski2009,brooks2010,mitraicme2011} also noted problems while adhering to the ITU-T P.800 recommendation for conducting subjective tests. The biggest problem with MOS is that an average of user ratings is computed. Mathematical operations such as computing the mean and standard deviation cannot be applied to subjective ratings, as these ratings are categorical in nature (e.g., ``excellent'' and ``fair''). The human test subjects rank the alternatives on a categorical scale where the distance between alternatives cannot be known \cite{madm,Janowski2009,brooks2010,mitraicme2011}; hence, such operations are not meaningful. Nonetheless, MOS is the most widely used method to assess subjective ratings in both industry and academia.
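To illustrate the ordinal-scale issue, the following sketch (with purely hypothetical ratings) contrasts the conventional arithmetic MOS with ordinal-safe summaries such as the median and the full rating distribution:

```python
# Illustrative sketch (not from any cited work): the arithmetic MOS treats
# ordinal ACR ratings as interval data. Reporting the full distribution
# and the median avoids assuming equal distances between categories.
from collections import Counter
from statistics import mean, median

ratings = [5, 4, 4, 2, 1, 5, 3, 4, 2, 5]  # hypothetical ACR scores

mos = mean(ratings)      # conventional MOS: 3.5
med = median(ratings)    # ordinal-safe summary: 4.0
dist = Counter(ratings)  # full rating distribution

print(mos, med, dict(sorted(dist.items())))
```

Two rating samples with the same MOS can have very different distributions (e.g., all '3's versus half '1's and half '5's), which the distribution and median make visible.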
\subsection{Objective Methods}
Takahashi \emph{et al.} \cite{takahashi} argued for developing
objective methods to estimate QoE for multimedia applications.
Objective methods such as the ITU-T E-Model \cite{G.107}, PESQ \cite{pesq},
PSQA \cite{psqa}, USI \cite{usi}, once
developed, can be used for \textit{QoE prediction} without requiring subjective
tests. Most objective methods map their scores onto the MOS to determine QoE. For example, the ITU-T E-Model \cite{G.107} computes
the R-factor in the range {[}0:100{]} for narrowband codecs and {[}0:129{]}
for wideband codecs. The R-factor is then mapped to the MOS using a non-linear equation to determine users' QoE on a scale between 1 and 4.5.
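As an illustration, the commonly cited G.107 mapping from the R-factor to MOS can be sketched as follows (narrowband case; this is a simplified reading of the recommendation, not a complete E-Model implementation):

```python
def r_to_mos(r):
    """Map the E-Model R-factor to MOS (commonly cited ITU-T G.107
    narrowband mapping): MOS is clamped to 1 below R=0 and to 4.5
    above R=100, with a cubic interpolation in between."""
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

# Default narrowband R of about 93.2 maps to roughly MOS 4.41.
print(round(r_to_mos(93.2), 2))  # -> 4.41
```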
Objective methods are hard to develop, model and deploy due to the large parameter space.
Further, any modification made to current objective methods by the addition or deletion of parameters may require new tests to fine-tune the current model or to derive new statistical models for QoE prediction \cite{brooks2010}. We assert that current objective models
such as \cite{G.107}, \cite{lingfen2006} and \cite{fiedlernetwork}
are based on simplistic assumptions regarding QoE prediction. For example,
Fiedler and Hossfeld \cite{fiedlernetwork} consider only one or two QoS
parameters to predict QoE.
\subsection*{The role of context}
Most of the QoE measurement and prediction methods were developed under controlled laboratory environments with a limited number of objective and subjective parameters such as delay, jitter, packet loss, and bandwidth. Moor \emph{et al.} \cite{moormonet},
Jumisko-Pyykk\"{o} and Hannuksela \cite{Jumisko2008}, Ickin \emph{et al.} \cite{ickin12} and Mitra \emph{et al. }\cite{mitraicme2011,mitradbn}\emph{
}argued for QoE measurement and prediction in real-life user environments.
Jumisko-Pyykk\"{o} and Hannuksela \cite{Jumisko2008} shows that users'
QoE differs in laboratory and real-life user environments. In real-life environments, context can change dynamically while users
are on-the-move. For example, at different user locations, QoS can
vary. Factors such as time-of-the-day, can help explain rise in network congestion (see example in section I) leading to decrease in users' QoE. Further, users' social context changes
throughout the day leading to variation in QoE. For example, users'
QoE may be affected if there are people nearby. Nearly all objective
methods (e.g., \cite{usi,fiedlernetwork,oneclick,psqa}) developed to date do not consider grouping several context parameters,
such as user location, time of the day and screen resolution, for QoE prediction.
Thus, objective methods that do not consider context may not provide accurate
QoE predictions in mobile computing systems.
Brooks and Hestnes \cite{brooks2010} discussed the importance of
considering both subjective and objective methods for QoE measurement
and prediction. They also discussed the importance of measuring QoE
on a single scale by combining several QoE parameters. Moor \emph{et al.} \cite{moormonet} and Mitra \emph{et
al.} \cite{mitraicme2011,mitrasac2011} also suggested the combination
of both subjective and objective methods for QoE measurement. We conclude
that there can be a plethora of context and QoS parameters affecting
QoE. In fact, different QoE parameters can also affect each other, as shown
in Fig.~1. Further, QoE parameters can be measured on different scales
\cite{brooks2010}, as shown in Fig.~2.
We assert that there is a need to develop methods that correctly identify and model these parameters in order to measure and predict QoE on a single scale. Once developed, these methods may benefit both telecommunication operators and end users. For example, objective methods may be used to provide users with personalized services on their mobile devices. On the other hand, operators may be able to minimize network churn.
\begin{figure}
\begin{centering}
\includegraphics[scale=0.3008]{parameter_relationships}\caption{Relationships between context and QoE parameters can
be complex. Grey ovals depict QoE parameters; white ovals depict context parameters.}
\par\end{centering}
\end{figure}
\subsection{Research Challenges}
Researchers
considering the problem of QoE modelling, measurement and prediction face a number of research challenges. These include:
\begin{enumerate}
\item \textbf{QoE modelling:} QoE measurement and prediction may involve
a large parameter space comprising several QoE and context parameters
\cite{5246986}, as shown in Fig.~1. There can be \emph{N} context parameters affecting \emph{M} QoE parameters. Further, the \emph{M}
QoE parameters can affect each other. Thus, selecting relevant parameters and defining and
finding relationships between these parameters can be challenging.
The relationships between these parameters are usually non-linear and hard to quantify. This necessitates the development of novel QoE modelling techniques
to model all these parameters efficiently. The QoE models should not
only be conceptual, but should also transcend to solving the challenges associated with QoE measurement and prediction. For example, rather than simply classifying and representing
the parameters, QoE models should directly be used for QoE measurement
and prediction.
\item \textbf{QoE measurement and prediction:}
The challenge of QoE measurement and prediction involving multiple QoE and context parameters is not well addressed. Consider Fig.~2: each QoE parameter can be measured on a different scale and may involve different units of measurement \cite{brooks2010,mitraicme2011}. These scales can be qualitative or quantitative. For example, the QoE parameter ``user satisfaction'' is measured using an ordinal (qualitative) scale involving ratings 1 to 5 (see Fig.~2b). On the other hand, the QoE parameter ``technology acceptance'' may not require a scale, as it can be measured using simple {}``yes'' or {}``no'' type questions \cite{1631338}. Thus, determining QoE based on different types of scales and/or different units of measurement remains a challenging task.
Further, current methods do not explicitly deal with the problem of imprecision in QoE measurement and prediction due to uncertainty caused by scarce and sparse data or other uncontrollable factors prevalent in both laboratory and real-life environments \cite{mitrasac2011}.
\item \textbf{QoE measurement and prediction over time:} \cite{mitradbn,karapanos2010,perkis} argue that QoE evolves over time. Through repeated use of a service, a user may or may not be satisfied with their QoE. Thus, QoE measurement and prediction at a single point in time may not yield correct results and may have to be done over a longer period of time. This necessitates the development of novel techniques for QoE modelling, measurement and prediction over time \cite{mitradbn}.
\end{enumerate}
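As a concrete, purely hypothetical illustration of the second challenge, one naive way to combine parameters measured on different scales is to normalize each onto $[0,1]$ and take a weighted sum. The mappings and weights below are assumptions made for illustration, not a published method:

```python
# Hypothetical illustration (not a published method): combining QoE
# parameters measured on different scales into a single [0, 1] score.

def normalize_ordinal(rating, lo=1, hi=5):
    """Map an ordinal rating in [lo, hi] onto [0, 1]."""
    return (rating - lo) / (hi - lo)

def normalize_binary(accepted):
    """Map a yes/no answer onto {0.0, 1.0}."""
    return 1.0 if accepted else 0.0

def overall_qoe(satisfaction, acceptance, weights=(0.6, 0.4)):
    """Weighted combination; the weights are assumed, not derived."""
    w_s, w_a = weights
    return (w_s * normalize_ordinal(satisfaction)
            + w_a * normalize_binary(acceptance))

print(round(overall_qoe(4, True), 2))  # -> 0.85
```

Even this toy version exposes the difficulty the text describes: the $[0,1]$ mapping silently assumes the ordinal categories are equidistant, and the weights encode an importance judgment that must come from somewhere.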
In the following sections, we discuss these challenges in detail, review the state-of-the-art research and present future research directions that may lead to efficient methods for QoE modelling, measurement and prediction.
\begin{figure}
\begin{centering}
\includegraphics[scale=0.50]{scales}\caption{Typical scales for QoE measurement.}
\par\end{centering}
\end{figure}
\section{QoE Modelling }
Perkis \emph{et al.} \cite{perkis} presented a conceptual model for QoE measurement. Their model includes technology- and user-related parameters to measure QoE. The authors classified these parameters as either quantifiable or unquantifiable. Quantifiable parameters include bandwidth, delay and jitter. Parameters such as expectation, attitude and ease-of-use are related to the user and are deemed unquantifiable. We differ with the authors here: some user-related parameters, such as ease-of-use, can be quantified.
The biggest problem with their model is that parameters can only be classified and represented as a tree structure and it cannot be used to measure and predict QoE. Further, their model assumes that all parameters are independent of each other. In reality, this might not be the case. For example, QoE parameter, ``user satisfaction'' may affect another QoE parameter such as ``technology acceptance'' which may determine whether a user accepts a particular technology or not.
Sun \cite{sung} developed a conceptual QoE model for multimedia applications such as video-on-demand. The model was inspired by the customer satisfaction model, the disconfirmation of expectations model (DEM) and the sport spectator satisfaction index. It is based on the premise that users have expectations (both positive and negative) and that their expectations are based on their needs. If users' perceptions about a service are met, they might have positive disconfirmation, leading to them being satisfied. On the other hand, if users have negative disconfirmation, i.e., if their expectations are not met, they might not be satisfied. The author modelled users' perception as a function of affective and cognitive responses and then used these responses to predict the overall QoE by deriving utility functions in the context of mobile video streaming applications. The author, however, did not present a method to integrate new context and QoE parameters and assumed independence between parameters.
Gong \emph{et al.} \cite{gongqoemodel} proposed a pentagram model for QoE measurement. The main highlight of their model is that it combines several QoE parameters, i.e., service availability, usability, retainability, integrity and instantaneousness, to determine a single QoE value. Each QoE parameter is a function (linear or ratio) of one or more QoS parameters. For example, service integrity is a function of delay, jitter and packet loss ratio. Each QoE parameter is then represented in a pentagram, and the computed QoE value is mapped to the normalized MOS scale to determine a single QoE value. However, the authors do not discuss how their model can be extended with new context and QoE parameters. Further, their model does not define dependencies between context parameters, i.e., all the parameters are considered independent. On the positive side, their model is practical and can be used in real applications.
A comprehensive treatment of the QoE modelling problem was considered by Wu \emph{et al.} \cite{1631338}. The authors presented a conceptual QoE model comprising QoE and QoS constructs. The QoS construct includes factors such as interactivity, vividness and consistency. These factors in turn describe network QoS parameters. For instance, interactivity depends on delay, and vividness relates to metrics such as the peak signal-to-noise ratio (PSNR). The QoE construct consists of parameters such as concentration, attention and technology acceptance. The authors also presented a theoretical framework for QoE measurement. The quantitative mappings between QoE and QoS were established via correlation analysis. However, all parameters affecting QoE were considered independent.
The QoE models of \cite{sung,gongqoemodel,1631338} are mainly limited to QoS parameters. These models do not consider other context parameters such as location, type of mobile device, time-of-the-day, etc. (as listed in Table 1). Mitra \textit{et al.} \cite{mitrasac2011,mitraicme2011} argue that the inclusion of several context parameters in a QoE model may increase QoE measurement and prediction accuracy, especially in users' real-life environments.
Korhonen \textit{et al.} \cite{Korhonen2010}
discussed the need for context-awareness for QoE measurement. The main contribution
of the paper is to categorize context into eight different categories
and to find the \emph{triggering context}, i.e., to find the most important
context category. Context categories identified by the authors were: environment context, person
context, device context, task context, social context, spatio-temporal
context, service and access network context. For results analysis,
a questionnaire was prepared based on the aforementioned context categories. The authors used these categories to analyze
users' phone usage experience. However, the authors did not develop a
context model to reason about context to measure and predict QoE.
Further, they did not describe how their approach can be incorporated
in applications.
Laghari and Connelly \cite{laghari12} proposed
a conceptual QoE model consisting of human, context, business and technology domains. However, the authors simply classify the parameters related to each domain.
For example, GPS relates to the context domain, while age and gender relate to the human domain. The authors do not present methods whereby their conceptual model can be used to predict users' QoE. Thus, their model is limited to parameter classification.
Marez and Moor \cite{marez2007} developed a conceptual
framework for QoE measurement. Their model consists of five components:
1) quality of effectiveness: QoS parameters such as jitter, packet loss, reliability, CPU
usage, etc.; 2) usability: parameters such as ease-of-use;
3) quality of efficiency: subjective parameters related
to the device, application and network, such as user
satisfaction, speed and interface; 4) expectations: as the name
suggests, this component quantifies the degree to which users' expectations
are met; and 5) context: information such as environmental,
personal, social, cultural, technological and organizational context. As with the
models presented above, their model simply classifies several
parameters affecting QoE and does not include methods to measure and predict QoE.
Mitra \textit{et al.} \cite{mitraicme2011, mitraTMC} presented a context-aware approach called CaQoEM (Context-aware QoE Modelling and Measurement) to model, measure and predict users' QoE. Their model is based on Bayesian networks (BNs) \cite{russelandnorvig} and the context spaces model \cite{Padovitz2004a}. By using BNs, the relationships between context and QoE parameters, and the relationships
among QoE parameters, can be determined in a simplified and efficient manner. Experts simply need to define mappings \emph{causally}, by linking \emph{causes} (e.g., context parameters) to \emph{effects} (e.g., QoE parameters). They do not need to develop a precise mathematical or statistical model to determine the mappings between context and QoE parameters. BNs can automatically handle linear and non-linear relationships and can handle both discrete and continuous variables. Compared to \cite{sung,gongqoemodel,1631338}, their model is capable of the graceful addition and removal of context and QoE parameters. It incorporates domain/expert knowledge for QoE measurement and prediction. Finally, it maps several QoE and context parameters to measure and predict users' QoE on a single scale.
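The causal-mapping idea can be illustrated with a deliberately tiny, hand-rolled Bayesian network. All probabilities below are invented for illustration and are not taken from CaQoEM or any cited work:

```python
# Minimal hand-rolled sketch of the Bayesian-network idea: one context
# node ("network congested") causally influences one QoE node.
# All probabilities are hypothetical.

p_congested = 0.3                       # prior P(congested)
p_good_given = {True: 0.2, False: 0.9}  # CPT: P(QoE = good | congested)

def p_qoe_good():
    """Marginal P(QoE = good), summing over the context node."""
    return (p_congested * p_good_given[True]
            + (1 - p_congested) * p_good_given[False])

def p_congested_given_poor():
    """Posterior P(congested | QoE = poor) via Bayes' rule (diagnosis)."""
    p_poor = 1 - p_qoe_good()
    return p_congested * (1 - p_good_given[True]) / p_poor

print(round(p_qoe_good(), 2))             # -> 0.69
print(round(p_congested_given_poor(), 2)) # -> 0.77
```

The same machinery scales to many context and QoE nodes: prediction runs the network forward from observed context, while diagnosis inverts it to explain an observed QoE value.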
The QoE models presented in \cite{perkis,sung,marez2007,laghari12} are conceptual in nature. These models do not transcend to solving the challenges associated with QoE measurement and prediction, i.e., deriving mappings between multiple QoE and context parameters to measure and predict users' QoE on a single scale. They enable experts to classify parameters but do not propose methods to realistically measure and predict users' QoE in laboratory or real-life settings. For example, Laghari and Connelly's \cite{laghari12} model links multiple domains together but does not provide concrete methods to determine relationships between each domain and its related parameters.
If experts were to use conceptual models such as \cite{perkis,sung,marez2007,laghari12}, they would first have to determine the parameters they require and then derive statistical or mathematical models. These derived models may not correspond to the original conceptual model, tremendously reducing the benefits of QoE modelling. Except for \cite{gongqoemodel,mitrasac2011, mitraicme2011, 1631338, mitraTMC}, the QoE models above cannot be used directly to measure and predict QoE. Further, except for \cite{mitrasac2011,mitraicme2011, mitraTMC}, these models cannot be used directly in applications and cannot be shared among experimenters/operators.
\section{QoE Measurement and Prediction }
QoE prediction methods can mainly be classified into regression-based (linear and non-linear) methods (e.g., \cite{lingfen2006,usi,oneclick,G.107,Janowski2009,mollericc2009}),
correlation analysis-based methods \cite{1631338,correlation} and artificial intelligence (AI)- and machine learning (ML)-based methods \cite{psqa,menkovski2009,mitrasac2011,mitraicme2011,mitradbn}.
In recent years, several objective QoE prediction methods were developed
for Voice Over Internet Protocol (VoIP), Internet Protocol Television (IPTV) and several other applications including web browsing
and file transfer protocol (FTP). These include, ITU-T E-Model \cite{G.107},
Perceptual Evaluation of Speech Quality (PESQ) \cite{pesq}, Pseudo-Subjective Quality Assessment (PSQA)
\cite{psqa}, the User Satisfaction Index (USI) \cite{usi}, OneClick \cite{oneclick},
generalized linear models (GLZ) \cite{Janowski2009}, Decision Trees-based
models \cite{menkovski2009} and CaQoEM \cite{mitraicme2011,mitrasac2011,mitradbn}.
Chen \emph{et al.} \cite{usi} proposed the User Satisfaction Index (USI) to predict users' QoE based on VoIP call session lengths. We believe that USI's dependence on call session lengths to predict QoE will not hold in the case of mobile computing systems. In these systems, users may be on-the-move and their devices may be prone to network-related impairments such as congestion and handoffs, which severely hamper QoE \cite{marshgronvall,mitrawcnc,mollericc2009}. USI is not flexible and does not consider other context parameters such as the user's location, pricing and time-of-the-day. Further, it only considers one QoE parameter, ``user satisfaction'', and does not provide mechanisms to include other QoE parameters, if necessary.
Chen \emph{et al.} \cite{oneclick} presented OneClick to measure and predict QoE for multimedia applications such as VoIP, video streaming and gaming. The authors developed a Poisson regression equation to predict users' QoE based on user click rates. The user click rate is computed when users click keys on their keyboard in response to network QoS conditions. The authors validated OneClick through experimentation and two case studies. The experimental analysis comprised VoIP and video streaming applications but considered only three human subjects. The case studies comprised VoIP applications such as AIM, MSN Messenger and Skype, and first-person shooter games such as Halo and Unreal Tournament.
We assert that OneClick should be validated with a large number of user studies and with different applications. The authors argued that OneClick can be used to predict QoE in the case of unmeasurable parameters (similar to the hidden parameters we discussed in section I) such as background noise. However, the authors do not discuss how their method can model and determine relationships between unmeasurable (hidden) and measurable parameters. Further, the authors do not provide a method to handle inter-parameter dependencies between measurable parameters. Lastly, we conclude that OneClick cannot measure users' QoE by considering multiple QoE parameters together to predict the overall QoE.
As mentioned in section III, Wu \emph{et al.} \cite{1631338} presented a comprehensive QoE framework comprising several QoS and QoE parameters. For results validation, the authors performed three experiments to study the effects of QoS, the number of people in the user's environment and the type of communication medium on QoE. From the first experiment, the authors concluded that an increase in one-way delay causes distraction to users. In the second experiment, the authors concluded that users' performance was slightly affected by the presence of a few people near them. However, users could still perform their tasks efficiently and with little distraction. Finally, in their last experiment, the authors concluded that the choice of audio-visual communication medium affects users' QoE. For example, the visual medium performs better than the audio medium. However, the mixed (audio-visual) medium performs best in terms of task completion time.
For results analysis, the relationships between QoS and QoE parameters were found using correlation analysis. However, this approach can be impractical when there are several QoE and QoS parameters, as finding the correlation between each pair of parameters is a complex task. The authors do not present methods to determine inter-parameter relationships between context parameters to predict the overall QoE. Lastly, their model incorporates multiple QoE parameters, but the authors do not present a method to determine the overall QoE by considering these parameters together.
Fiedler, Hossfeld and Tran-Gia \cite{fiedlernetwork} presented a
quantitative mapping between QoS and QoE using their IQX hypothesis.
It is based on an exponential relationship between QoS
and QoE parameters. The IQX hypothesis takes as input QoS parameters
such as packet loss and jitter (in the form of a p-ordered ratio) to
determine QoE in the form of PESQ MOS for VoIP applications. The authors
show that the derived non-linear regression equation can provide an
excellent mapping between QoS parameters and MOS for VoIP application.
The authors also tested their hypothesis for QoE related to web browsing
by considering weighted session time and delivered bandwidth. The main drawback of the IQX hypothesis is that it only considers
one QoS parameter to predict the corresponding QoE value. The authors
did not consider the problem of integrating additional context and QoE parameters to predict the overall QoE.
Kim \emph{et al.} \cite{correlation} proposed a method for QoE prediction based on a function of QoS parameters such as delay, jitter, packet loss and bandwidth. Firstly, a normalized QoS value is computed based on the linear weighted sum of QoS parameters. Once the QoS value is computed, it is then used to determine QoE on a scale of one to five based on another QoE function. However, the authors do not discuss in detail how the weights of each QoS parameter can be computed. The authors validated their method based on a simple case study (related to IPTV) and did not consider experimental analysis and/or subjective tests. This raises doubts concerning their method's applicability in real systems. Further, their method is limited to QoS parameters and treats each parameter independently. In reality, this might not be the case. Lastly, the authors did not discuss how new QoE parameters can be included in their model.
Janowski and Papir \cite{Janowski2009} considered generalized linear models (GLZ) to predict QoE. GLZ is a general form of linear regression and can deal with non-normality of data. It outputs the probability distribution of users' ratings instead of simply computing the mean and standard deviation. We believe that the probability distribution can be valuable for experts to understand the diversity of user ratings. Based on subjective tests involving 60 users, the authors validated that GLZ can capture users' ratings and provides a better understanding of users' ratings via the probability distribution.
The GLZ model, however, models a single QoE parameter independently. It is therefore difficult to predict the overall QoE based on multiple QoE parameters. In such a case, multiple QoE models, one for each QoE parameter, would need to be developed. A new model for overall QoE prediction could then be built on the outputs of each QoE model.
\subsection*{Discussion}
The methods \cite{usi,oneclick,fiedlernetwork,1631338,correlation} consider statistical approaches for QoE prediction, for example, linear/non-linear regression and correlation analysis. These methods involve mathematical operations such as computing the average, variance and standard deviation of users' ratings. It is worth noting that users' subjective ratings are mainly based on the ordinal scale. The ordinal scale is a rank-ordered scale with a finite set of alternatives, for example, ``excellent'', ``very good'', ``good'', ``fair'' and ``poor'' \cite{madm}. These alternatives do not express precise numerical values. Further, the distance between alternatives cannot be established \cite{madm,mitraicme2011,muordinal}. Thus, we assert that mathematical operations such as mean and standard deviation cannot be applied \cite{madm,muordinal,Janowski2009,mitraicme2011}. Consequently, the methods involving MOS as a metric will also be incorrect \cite{madm,muordinal,mitraicme2011}.
Mu \emph{et al.} \cite{muordinal} show that in case of subjective tests, normality of collected data (user ratings) cannot be verified. Further, due to the subjective nature of users' ratings, parametric statistical models cannot be applied for QoE measurement and prediction. Hence, techniques involving ordinary regression analysis will also be invalid. The authors point out that the conditions for valid statistical tests are rarely verified and documented by computer science researchers.
Mitra \textit{et al.}\cite{mitraicme2011,mitraTMC} proposed to use a bipolar interval scale \cite{madm} to map users' ratings onto an interval scale (see Fig 2(a)). For example, a 5-point ordinal scale (see Fig 2(b)) is calibrated in such a manner that the best alternative, for example, ``excellent'', is assigned the maximum value, '1'; the worst alternative, on the other hand, is assigned the lowest value, '0'. The mid-point is also used for calibration. For example, ``good'' is assigned a value of 0.50. This means that values lower than 0.50 are less favourable compared to values higher than 0.50. For example, a value between 0.8750 and 1 is considered to be ``excellent'' while a value in the range of 0 and 0.1250 is considered ``poor''. This way, normalized values can be used to determine a QoE rating. Thus, a bipolar scale enables an expert to perform mathematical operations such as computing the mean and standard deviation and the application of parametric statistical models.
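To make this calibration concrete, the following sketch (our own illustration, not code from the cited papers) maps a 5-point ordinal scale onto the bipolar interval scale $[0,1]$; once the ratings live on an interval scale, averaging them becomes a meaningful operation:

```python
# Map a 5-point ordinal QoE scale onto the bipolar interval scale [0, 1]:
# "poor" -> 0.0, ..., "good" -> 0.5, ..., "excellent" -> 1.0.
# Illustrative sketch only; band edges follow the text (e.g. [0.8750, 1] = "excellent").
BIPOLAR = {"poor": 0.0, "fair": 0.25, "good": 0.5, "very good": 0.75, "excellent": 1.0}

def to_bipolar(ratings):
    """Normalize a list of ordinal ratings to the [0, 1] interval scale."""
    return [BIPOLAR[r] for r in ratings]

def qoe_label(value):
    """Map a normalized value back to a qualitative band (equal-width bands)."""
    bands = [(0.8750, "excellent"), (0.6250, "very good"),
             (0.3750, "good"), (0.1250, "fair"), (0.0, "poor")]
    for lo, label in bands:
        if value >= lo:
            return label

ratings = ["excellent", "excellent", "very good", "good"]
values = to_bipolar(ratings)
mean_qoe = sum(values) / len(values)   # now a meaningful operation on an interval scale
print(mean_qoe, qoe_label(mean_qoe))  # 0.8125 -> "very good"
```

The band thresholds (0.1250, 0.3750, 0.6250, 0.8750) are the mid-points between adjacent calibrated values, consistent with the ranges quoted above.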
\subsection*{Artificial Intelligence and Machine Learning Based Methods}
Recently, researchers have considered AI- and ML-based methods to predict users' QoE. Techniques such as decision trees (DTs), random neural networks (RNNs), hidden Markov models (HMMs), Bayesian networks (BNs) and dynamic Bayesian networks (DBNs) have been applied successfully to predict users' QoE in both laboratory and real-life environments. The main reason for the success of AI- and ML-based methods may be attributed to their solid mathematical models for QoE modelling and prediction. These models are flexible and are less prone to uncertainty regarding user ratings. Further, these methods were validated using sound techniques such as cross-validation \cite{russelandnorvig}. These methods are more flexible than parametric statistical models. For example, they may not have to adhere to strict assumptions regarding independence and normality of residuals. Most importantly, these methods can efficiently deal with non-linear relationships between several parameters.
Rubino, Tirilly and Varela \cite{psqa} proposed and developed the PSQA metric for QoE prediction. The PSQA metric considers numerical values such as packet loss percentage and mean loss burst size to output MOS based on RNNs. The problem with RNNs is that they require a large number of training samples for accurate QoE prediction. This limits their ability to learn in an online manner. In contrast to RNNs, methods based on DTs \cite{menkovski2009} and BNs \cite{mitrasac2011},\cite{mitraicme2011} learn efficiently with smaller data sets. Another problem with PSQA is its dependence on the MOS: it cannot directly predict users' subjective ratings. On the other hand, \cite{menkovski2009},\cite{mitrasac2011},\cite{mitraicme2011} can classify and predict ordinal ratings. Further, RNNs cannot incorporate non-numerical context parameters such as the user's mood or location.
Menkovski \emph{et al.} \cite{Menkovskimomm} considered several ML classifiers such as BNs, DTs and support vector machines (SVMs) to predict users' QoE. Based on experimental analysis, the authors show that DTs and rule-based systems are suitable for predicting QoE and can outperform other AI techniques such as BNs and SVMs. The authors, however, tried to classify a single QoE parameter, ``QoE acceptance'', in the form of ``yes'' or ``no''. They did not consider other QoE parameters to predict users' QoE. Further, they did not discuss how new context parameters can be included in their method.
A recent study conducted by Mitra, \AA{}hlund and Zaslavsky \cite{mitrawcnc} shows that different forms of BNs, such as generative and discriminative BNs, can learn with scarce data with a high prediction accuracy of approximately 98\%. They validated their model using simulations and considered typical network impairments prevalent in mobile computing systems such as handoffs, wireless signal fading and network congestion.
Liu, Zhou and Song \cite{Li-yuan2006} presented an approach for QoE prediction from a pervasive computing point-of-view. Their approach included a hierarchical model to represent QoS parameters and their effects on QoE. They proposed a rough-set theory based approach to predict users' QoE. However, their method has several limitations. Firstly, their method cannot deal with uncertainty regarding QoE measurement arising from missing user data and from significant variations in user ratings. Secondly, their model employs a rule-based approach where several rules need to be manually created to predict users' QoE. Thirdly, their model does not consider several QoE parameters together to predict the overall QoE. Finally, the authors validated their model using a simple case study example and did not consider subjective or experimental tests.
Mitra, \AA{}hlund and Zaslavsky \cite{mitrasac2011,mitraicme2011} developed CaQoEM, a context-aware, decision-theoretic approach for QoE measurement and prediction. Their approach incorporates BNs and utility theory to measure and predict users' QoE. CaQoEM captures uncertainty and deals with missing user ratings in an efficient and unbiased manner. It provides simplified and efficient ways to define relationships between context, QoS and QoE parameters. Further, CaQoEM incorporates graceful addition
and removal of context parameters and can incorporate domain/expert knowledge for QoE measurement and prediction. CaQoEM can map several QoE parameters and context attributes to measure users' QoE on a single scale.
The authors validated their approach using a case study, subjective and experimental tests. Compared to \cite{psqa}, \cite{Menkovskimomm} and \cite{Janowski2009}, their results show that CaQoEM is resilient to scarce and sparse data i.e., widely distributed user ratings. This was achieved by considering all the QoE ratings together to determine a single scalar value that ``best describes users' QoE'' rather than just selecting the most likely outcome based on highest probability.
In the case of scarce and sparse data, methods such as \cite{Menkovskimomm},\cite{psqa} may fail as they simply consider classification accuracy as a metric for evaluation. For example, consider a case where six users out of ten gave \textquoteright{}5\textquoteright{} and the remaining four users gave \textquoteright{}4\textquoteright{} to VoIP call quality. In this case, \cite{Menkovskimomm} will select the QoE as \textquoteright{}5\textquoteright{} with a probability of 0.60. This is incorrect as it ignores the ratings of the other four users. CaQoEM will, however, find the best alternative using BNs and utility theory. Further, CaQoEM enables experts to add their expertise into the BN model to reach a single (or multiple) conclusion(s) regarding QoE, which is not possible in other QoE measurement techniques such as \cite{menkovski2009,Janowski2009,psqa,usi,oneclick}.
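The numerical example above can be sketched as follows. This is a simplified illustration of the utility-based idea only, not the actual CaQoEM implementation (which relies on BNs and utility theory):

```python
# Contrast two ways of summarizing the ratings distribution from the example
# in the text: six users rate '5' and four users rate '4'.
from collections import Counter

ratings = [5] * 6 + [4] * 4
dist = {r: c / len(ratings) for r, c in Counter(ratings).items()}  # {5: 0.6, 4: 0.4}

# (a) Classification by highest probability: picks '5' and discards the
#     information carried by the four users who rated '4'.
argmax_choice = max(dist, key=dist.get)

# (b) Utility-weighted summary: every rating contributes, yielding a single
#     scalar value that describes the whole distribution.
expected_qoe = sum(r * p for r, p in dist.items())

print(argmax_choice, expected_qoe)  # 5 and 4.6
```

The weighted summary (4.6) reflects that a sizeable minority rated the call lower, whereas the argmax rule reports '5' as if the minority did not exist.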
\subsection*{Methods for QoE Measurement and Prediction Overtime}
Karapanos \emph{et al.} \cite{karapanos2010} conducted a study concerning mobile phone usage. Their results show that users' perception of innovativeness increased during the first month and then remained stable. Also, users' learnability was low for the first week and then increased sharply once they got accustomed to their mobile devices. Perkis, Munkeby and Hillestad \cite{perkis} conducted a 4-week study regarding QoE
in 3G networks. Their results show that user expectations decreased after two weeks. They also show that the MOS regarding the video application decreased in the last two weeks from 3.4 to 3.1 (out of 5). These results strongly suggest that users' QoE varies with time.
Hossfeld \emph{et al.} \cite{hossfeldhmm} considered the problem
of QoE prediction over time by considering a Web QoE model. The authors considered page load time as the QoS parameter and ``user satisfaction'' as the QoE parameter. Their experimental results with a number of users show that hidden Markov models \cite{Rabiner1989} can model and predict users' QoE over time. However, the problem with their approach is that their HMM considers only one QoS and one QoE parameter. If experts were to incorporate more context and QoE parameters, the HMM would be harder to train and its prediction accuracy may drop significantly \cite{murphythesis}.
Mitra, Zaslavsky and \AA{}hlund \cite{mitradbn} presented a more generic approach for QoE modelling, measurement and prediction over time by considering DBNs \cite{russelandnorvig}. Their model considers
several QoE and context parameters to measure and predict
QoE over time. The authors developed a DBN that can handle spatio-temporal context to track and predict users' QoE over time. Using simulations and case studies, the authors show that DBNs can be used efficiently to track and predict users' QoE over time. However, their method needs further subjective tests under real-life conditions.
We assert that the current state-of-the-art research, including standards and recommendations developed by ITU-T and ETSI, does not sufficiently address the problem of QoE modelling, measurement and prediction over time. QoE measurement and prediction over time can be beneficial to experts who are interested in understanding how users interact with their services in the long run and in establishing factors that may lead to network churn. Further, mobile devices may learn users' QoE over time to perform user-centric handoffs \cite{mitraatnac}.
Table 2 presents
the comparison of the methods for QoE modelling, measurement and prediction.
\begin{sidewaystable}
\centering
\caption{State-of-the-art in QoE modelling, measurement and prediction. }
\begin{tabular}{|p{3cm}|p{3cm}|p{3cm}|p{1.5cm}|p{2cm}|p{1.4cm}|p{0.8cm}|p{3.7cm}|}
\hline
Paper & Domain & Technique(s) & Context-Aware & Unified QoE Model & Multiple QoE parameters & QoE over time\tabularnewline
\hline
\hline
Chen \emph{et al.} \cite{usi} & VoIP & Cox regression & No & No & No & No\tabularnewline
\hline
Chen \emph{et al}. \cite{oneclick} & Any multimedia/gaming application & Poisson regression & No & No & No & No\tabularnewline
\hline
Wu \emph{et al.} \cite{1631338} & Any application & Correlation analysis & No & Yes & Yes & No\tabularnewline
\hline
Fiedler, Hossfeld and Tran-Gia \cite{fiedlernetwork} & VoIP and Web browsing & Exponential function & No & No & No & No\tabularnewline
\hline
Gong \emph{et al.} \cite{gongqoemodel} & Any application & Pentagram model & No & Yes & Yes & No\tabularnewline
\hline
Kim, Hyun and Choi \cite{correlation} & Any application & Correlation model & No & No & No & No\tabularnewline
\hline
Liu, Zhou and Song \cite{Li-yuan2006} & Any application & Rough set theory & No & No & No & No\tabularnewline
\hline
Janowski and Papir \cite{Janowski2009} & FTP & Generalized linear model & No & No & No & No\tabularnewline
\hline
Rubino, Tirilly and Varela \cite{psqa} & VoIP and video & Random neural networks & No & No & No & No\tabularnewline
\hline
Menkovski \emph{et al.} \cite{Menkovskimomm} & IPTV & Decision trees & No & No & Yes & No\tabularnewline
\hline
Mitra, \AA{}hlund and Zaslavsky \cite{mitraicme2011,mitraTMC} & Any application & Bayesian Networks and utility theory & Yes & Yes & Yes & No\tabularnewline
\hline
Mitra, Zaslavsky and \AA{}hlund \cite{mitradbn} & Any application & Dynamic Bayesian networks and utility theory & Yes & Yes & Yes & Yes\tabularnewline
\hline
Hossfeld \emph{et al.}\cite{hossfeldhmm} & Web application & Support vector machines, iterative exponential regressions and two-dimensional
hidden Markov models. & No & No & No & Yes\tabularnewline
\hline
\end{tabular}
\end{sidewaystable}
\section{Discussion and Future Research Directions}
\subsection{From Conceptual to Practical Models for QoE Measurement and Prediction}
In section III, we discussed that QoE modelling involves a complex process of defining relationships between context and QoE parameters with an aim to compute a QoE value on a single scale \cite{brooks2010,mitraicme2011}. There may be a large number of objective and subjective context parameters that influence users' QoE in both laboratory and real-life environments. Some of the parameters can be determined while others may be \emph{hidden}, i.e., they have an indirect effect on users' QoE and are thus hard to relate and quantify \cite{mitrasac2011}.
Several researchers \cite{kilkki,marez2007,perkis} proposed conceptual QoE models that merely classify parameters and their possible relationships.
For example, Perkis, Munkeby and Hillestad \cite{perkis} represented QoE as a tree structure. However, these conceptual models are not practical. These models do not provide unified mechanisms to model, measure and predict QoE. Experts using these conceptual models can merely represent context parameters. They cannot use these models to measure and predict QoE. If they try to extend these conceptual models for performing QoE measurement and prediction, the conceptual representation of parameters may change. For example, parameters represented conceptually as a tree structure may not directly translate into a mathematical or statistical model, such as a regression model, reducing the scope and benefit of QoE modelling. This necessitates the development of practical models that may benefit experts by providing them a simplified, systematic and unified approach to model, measure and predict QoE.
We assert that context-aware QoE modelling is a relatively unexplored area and there is a scope to develop novel QoE models. These models should be practical, shareable and reusable across multiple application domains. Unlike \cite{kilkki,marez2007,perkis,laghari12}, these models should also be realistically implementable in systems and applications alike. We believe that probabilistic and ontological models should be extremely beneficial for QoE modelling, measurement and prediction.
\subsection{Methods for QoE Measurement and Prediction}
In section IV, we discussed several methods for QoE measurement and prediction (see table 2 for a classification of these methods). We discussed that QoE can be measured using different types of scales and/or different units of measurement, for example, a scale of 1 to 5 \cite{mitraicme2011}. QoE can also be measured using simple {}``yes'' or {}``no'' type questions \cite{menkovski2009}. Typical scales include the ordinal and interval scales \cite{brooks2010,madm}. The transformation of user ratings onto the interval scale is difficult; therefore, the ordinal scale is mostly used.
In subjective tests, users select alternatives marked on the ordinal scale where the distance between alternatives is not fixed. Thus, mathematical operations such as average, standard deviation and ratio cannot be meaningfully applied \cite{brooks2010,mitraicme2011,Janowski2009,muordinal}. Further, the application of regression-based methods (e.g., \cite{fiedlernetwork}) may also be incorrect. For instance, linear regression requires the residuals to be normally distributed. If users choose only a few alternatives on a scale instead of all (e.g., only ratings '5' and '4' are selected on the scale of 1 to 5), the error distribution will be asymmetric. Regression techniques only provide a prediction of the mean and thus, the distribution of the choices is lost \cite{Janowski2009}. As in \cite{Janowski2009,mitraicme2011,muordinal}, we assert that obtaining a probability distribution instead of computing a mean is correct and may assist experts in understanding how QoE is distributed based on the underlying test conditions.
We propose that experts should carefully evaluate the type of data and choose the statistical techniques they intend to apply accordingly. For the correct application of any statistical technique, several conditions may have to be verified. If these conditions are not met, the application of these techniques may be incorrect. Mu \textit{et al.} \cite{muordinal} pointed out that non-parametric statistical techniques should be considered when dealing with subjective tests. We believe that AI-based techniques proposed by Menkovski \emph{et al.} \cite{menkovski2009} and \cite{mitraicme2011,mitradbn,mitraTMC} can be valuable for QoE measurement and prediction since these techniques can discover relationships between several context and QoE parameters. However, the direct application of these techniques can also be challenging.
The challenge lies in the fact that these techniques simply classify the alternatives based on some test conditions. For example, consider a case where we have ten user ratings in which six users gave {}``excellent'' and four users gave ``very good''; the AI-based techniques will simply classify the alternative as {}``excellent'' with probability 0.60, ignoring the ratings of the other four users \cite{mitraicme2011}. To alleviate such problems, Mitra, \AA{}hlund and Zaslavsky \cite{mitraicme2011} used a decision-theoretic approach for QoE measurement and prediction where the authors considered an alternative that ``best describes'' the underlying QoE ratings. This was done by considering all the QoE ratings together to determine a single scalar value. This scalar value was then mapped to the bipolar interval scale to determine the final QoE value.
In section I and IV, we also highlighted the challenge of QoE measurement on a single scale by considering multiple QoE parameters together \cite{brooks2010,mitraicme2011}. We assert that most of the methods presented in the state-of-the-art concentrate on predicting single QoE parameter independently. Only Gong \textit{et al.} \cite{gongqoemodel} and Mitra, \AA{}hlund and Zaslavsky \cite{mitraicme2011,mitraTMC} presented methods to measure QoE based on multiple QoE parameters.
Finally, we conclude that QoE measurement and prediction over time largely remains an open area of research. We assert that QoE measurement is an evolving process and should be performed over a period of time (several days, weeks or months depending on the service or application requirements). This may lead to the development of accurate QoE prediction techniques. In this context, $K^{\text{th}}$-order Markov models, HMMs and DBNs might be valuable as these models can efficiently model users' QoE based on their past ratings, as demonstrated by \cite{hossfeldhmm} and \cite{mitradbn}.
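As a minimal illustration of such temporal models, the following sketch propagates a QoE belief through a first-order Markov chain; the states and transition probabilities are made up for the example and are not taken from any cited study:

```python
# Minimal first-order Markov model of QoE evolution over time.
# States are ordinal QoE levels; the transition matrix is purely illustrative.
states = ["poor", "good", "excellent"]
# P[i][j] = probability of moving from states[i] to states[j] in one step.
P = [[0.6, 0.3, 0.1],
     [0.2, 0.5, 0.3],
     [0.1, 0.3, 0.6]]

def step(belief, P):
    """Propagate a belief (probability vector over states) one step forward."""
    n = len(belief)
    return [sum(belief[i] * P[i][j] for i in range(n)) for j in range(n)]

belief = [0.0, 0.0, 1.0]          # user currently rates QoE as "excellent"
for _ in range(3):                 # predict the distribution three steps ahead
    belief = step(belief, P)
print([round(b, 3) for b in belief])  # [0.226, 0.372, 0.402]
```

HMMs and DBNs generalize this picture by making the QoE state hidden and by adding observed context variables, respectively, but the forward propagation of beliefs over time works in the same spirit.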
\section{Conclusion}
This paper presented a survey of the state-of-the-art research in the area of quality of experience (QoE). We highlighted several challenges associated with QoE modelling, measurement and prediction. We discussed existing methods and highlighted their advantages and shortcomings. This survey also outlined future research directions for QoE modelling, measurement and prediction.
\subsection*{Acknowledgement}
The authors would like to thank Saguna for her valuable feedback and improving the readability of this paper.
\bibliographystyle{plain}
The Optimal Power Flow (OPF) problem is a routine at the core of important tools for power system operations (\cite{frank2012optimal}). The OPF problem seeks to determine the production levels of generating units that satisfy the power (net) demand at minimum cost while complying with some technical constraints imposed by those units and the grid. The main challenge of the OPF is that it is a non-linear and non-convex optimization problem, due to the power flow equations that govern the (static) behavior of power systems. For this reason, the \emph{direct current} approximation (DC) of the power flow equations, which transforms the problem into a linear program, is frequently used. The demand and renewable generation are factors that increase the uncertainty in power systems, and ignoring it can lead to unsafe operating conditions.
Chance-constrained programming suits applications in areas where decisions have to be made dealing with random parameters (\cite{miller1965chance}). In these situations, it is desirable to ensure feasibility of the system almost surely, but there is hardly any decision which would guarantee it under extreme events or unexpected random circumstances. In the context of the OPF, chance-constrained programming can be used to minimize the expected operating cost whilst guaranteeing that the system withstands unforeseen peaks of electrical load due to stochastic demand or uncertainty in power generation (\cite{vanackooij2011}). The chance-constrained OPF (CC-OPF) problem addresses this uncertainty and pursues to ensure the safe operation of a power system with a high level of probability. Under a linear approximation of the power flow equations, the CC-OPF problem can be formulated as the following chance-constrained problem (CCP) with joint linear chance constraints and random RHS and LHS:
\begin{subequations} \label{GenCC}
\begin{align}
\min_{x} \quad & f(x) \label{GenCC_FO}\\
\text{s.t.} \quad & x \in X \label{GenCC_xinX} \\
& \mathbb{P}\left\{ a_{j}(\omega)^{\top}x \le b_j(\omega), \ \forall j \in \mathcal{J}\right\} \ge 1-\epsilon. \label{GenCC_chance}
\end{align}
\end{subequations}
In \eqref{GenCC}, $x\in \mathbb{R}^{|\mathcal{I}|}$ is a vector of continuous decision variables, $X\subseteq \mathbb{R}^{|\mathcal{I}|}$ is a polyhedron that represents a set of deterministic constraints, and $f: \mathbb{R}^{|\mathcal{I}|} \longrightarrow \mathbb{R}$ is a convex function. Uncertainty is represented through the random vector $\omega$ taking values in $\mathbb{R}^{d}$ and giving rise to a technology matrix with random rows $a_{j}(\omega)\in \mathbb{R}^{|\mathcal{I}|}$, $j \in \mathcal{J}$ and random $b_j(\omega) \in \mathbb{R}$, $j \in \mathcal{J}$.
$\mathbb{P}$ is a probability measure, and $\epsilon$ is a confidence or risk parameter, typically near zero, so that the set of constraints \eqref{GenCC_chance} are satisfied with probability at least $(1-\epsilon)$. Apart from power systems, applications of CCPs include supply chain, location and logistics (\cite{taleizadeh2012, shaw2016, elci2018a}), risk control in finance (\cite{danielsson2008, natarajan2008}), and healthcare problems such as operating room planning (\cite{najjarbashi2020}) or vaccine allocation (\cite{tanner2008}), among others.
When the probabilistic constraint corresponds to \eqref{GenCC_chance}, the CCP has joint chance constraints (JCC) and is hence classified as a joint CCP (JCCP), in contrast with single CCPs (SCCPs), i.e.\ CCPs with individual or single chance constraints (SCC) of the form $\mathbb{P}\left\{ a_{j}(\omega)^{\top}x - b_j(\omega) \le 0\right\} \ge 1-\epsilon_j$, $\forall j \in \mathcal{J}$. JCCPs are suitable for contexts where all constraints need to be simultaneously satisfied with a high probability, and the dependence between random variables makes them clearly harder. Both SCCPs and JCCPs have been extensively studied (see \cite{prekopa2003, vanackooij2011} and the references therein).
There are a number of reasons why general CCPs are challenging. The first one is the non-convexity of the feasible set. In general, the feasible region of a CCP is not convex in the original space even when $x$ is continuous, there is only RHS uncertainty and the constraints inside the probability in \eqref{GenCC_chance} result in a polyhedral region (\cite{kucukyavuz2021}). To circumvent this problem, several approaches have been proposed. Some methods (e.g.\ \cite{lagoa2005,henrion2007,henrion2011}) give convexity results and investigate the conditions under which the feasible region of problem \eqref{GenCC} is convex. In another line of research, various convex approximation schemes such as quadratic (\cite{ben-tal2000}) or Bernstein approximation (\cite{nemirovski2007}), have been proposed in the literature. The CVaR approximation has gained a lot of popularity since its introduction (\cite{rockafellar2000, sun2014}), and remains one of the most used methods to deal with stochastic problems. Nonetheless, the solutions to the approximated problems err on the side of over-conservatism. In this context, some iterative schemes such as ALSO-X have been recently proposed to identify tighter inner convex approximations of the CCP at the expense of a higher computational cost (\cite{ahmed2017, jiang2022also}). Finally, other works suggest convex approximations for non-linear CCPs. For instance, \cite{hong2011} propose to solve the JCCP by a sequence of convex approximations followed by a gradient-based Monte Carlo method, whereas \cite{pena-ordieres2020} introduce a smooth sampling-based approximation.
The second difficulty of CCPs is that checking the feasibility of a given solution is not, in general, an easy task. For instance, even if the uncertainty follows a known continuous distribution, calculating the joint probability requires a multi-dimensional integration, which becomes increasingly difficult with the dimension of the random vector~$\omega$. On top of that, in most cases the distribution $\mathbb{P}$ is not fully known. To cope with these two obstacles at once, in this work we make use of Sample Average Approximation (SAA), which in practice boils down to dealing with a finite discrete distribution. The application of SAA may be seen as the result of approximating a general known distribution via the generation of independent Monte Carlo samples of the random vector $\omega$ (\cite{shapiro2003, nemirovski2006}) or as a data-driven approach that works with observations of $\omega$ that are available to the decision-maker even if the distribution is unknown.
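Under SAA, checking the feasibility of a candidate solution $x$ amounts to estimating the joint satisfaction probability empirically over the available samples. The following sketch (illustrative two-dimensional data with RHS uncertainty only, not from any cited study) does exactly this by Monte Carlo sampling:

```python
# Estimate the joint satisfaction probability P{ a_j(w)^T x <= b_j(w), all j }
# of a candidate solution x from samples -- the empirical analogue of the JCC.
import random
random.seed(0)

def joint_satisfaction(x, samples):
    """Fraction of samples for which ALL linear constraints hold."""
    ok = 0
    for A, b in samples:                      # one (A, b) realization per sample
        if all(sum(a_ij * x_i for a_ij, x_i in zip(row, x)) <= b_j
               for row, b_j in zip(A, b)):
            ok += 1
    return ok / len(samples)

# Two constraints in two variables; here A is fixed and only b is random
# (RHS uncertainty), purely to keep the example small.
x = [1.0, 1.0]
A = [[1.0, 0.0], [0.0, 1.0]]
samples = [(A, [1.0 + random.gauss(0.2, 0.1), 1.0 + random.gauss(0.2, 0.1)])
           for _ in range(10000)]
p_hat = joint_satisfaction(x, samples)
print(p_hat)   # roughly 0.95 for these parameters
```

The same estimator also illustrates the difficulty: for small $\epsilon$, certifying $\hat{p} \ge 1-\epsilon$ reliably requires a large number of samples.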
SAA allows for a deterministic reformulation of the problem, and the resulting model is a mixed-integer problem (MIP). For the resolution of CCPs using MIP reformulations, we refer the reader to \cite{kucukyavuz2021}. When there is only RHS uncertainty (i.e.\ the technology matrix is fixed), a reformulation of the problem leads to a MIP with a set of constraints that form a \emph{mixing set} and that have been extensively studied, alone or in combination with the knapsack constraint that also appears in the formulation (\cite{gunluk2001,luedtke2010, abdi2016}). Alternative reformulations like the ones proposed by \cite{dentcheva2000, nair2011} rely on the concept of $(1-\epsilon)$-efficient points. The case when the technology matrix is random, while the RHS is not, has been studied e.g.\ in \cite{tayur1995}. As for the general case, it has also been addressed in the literature. Specifically, a large line of research has focused on the development of \emph{quantile cuts}, a particular type of valid inequality that can be viewed as a projection of a set of mixing inequalities for the MIP onto the original problem space. These cuts and the associated quantile closure have been recently studied in \cite{qiu2014,xie2016,xie2018,ahmed2018} and successfully applied to computational experiments of CCPs in \cite{song2014,ahmed2017}, among others.
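For equiprobable samples $\omega^{1},\ldots,\omega^{K}$, this MIP takes the well-known big-$M$ form
\begin{align*}
\min_{x \in X,\ z \in \{0,1\}^{K}} \quad & f(x)\\
\text{s.t.} \quad & a_{j}(\omega^{k})^{\top}x \le b_{j}(\omega^{k}) + M_{k}(1-z_{k}), \quad \forall j \in \mathcal{J},\ k = 1,\ldots,K,\\
& \sum_{k=1}^{K} z_{k} \ge \lceil (1-\epsilon)K \rceil,
\end{align*}
where $z_{k}=1$ enforces all constraints under sample $k$, $M_{k}$ is a sufficiently large constant, and the cardinality restriction on $z$ is the knapsack constraint mentioned above.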
To address the CC-OPF problem, several papers in the literature (e.g., \cite{LineGoran}) directly work with SCCs. However, the main drawback of this modeling approach is that, even in those cases where the probability of violating each individual constraint seems more than tolerable, the resulting \emph{joint risk} (that is, the probability that \emph{any} of the technical constraints be violated) may still be excessive and inadmissible. This is the key motivation behind the use of JCCs to tackle the CC-OPF problem (see, e.g., \cite{LineAlejandra}). It is also true that there are ways to guarantee the satisfaction of the joint chance-constraint system by way of SCCs. Unfortunately, the success of this strategy depends on the non-trivial task of how to allocate the joint risk of the system among the single constraints. For example, based on Bonferroni's inequality, distributing the joint risk evenly across all individual constraints ensures that the joint chance constraint is met. However, this results in a rather conservative solution in general. To reduce the conservatism of this solution approach, \cite{baker2019joint} propose a learning algorithm to filter out redundant constraints and, thus, increase the risk of the non-redundant ones, whereas \cite{jia2021iterative} devise a non-parametric iterative framework to allocate the joint risk. In contrast to these works, we explicitly model and deal with the joint chance constrained version of the problem.
In this vein, there are several approaches in the literature to solve the JCC-OPF problem. \cite{vrakopoulou2013} adopt the scenario approach (SA) to approximate the solution of the JCC-OPF, while \cite{chen2021time} propose a heuristic data-driven method that involves enforcing the satisfaction of the technical constraints for a box of the uncertainty. This box is inferred using one-class support vector clustering and its size is contingent on the system's desired reliability. \cite{LineTunning} propose an iterative tuning algorithm to solve a robust reformulation of the JCC-OPF problem. \cite{esteban2021distributionally} introduce a distributionally robust JCC-OPF model that considers contextual information using an ambiguity set based on probability trimmings. To make their model tractable, they resort to the widely known CVaR-based approximation of the JCC.
The aforementioned SAA method is another effective way to solve JCCPs and has the potential to identify OPF solutions with a better cost performance than that of the more conservative solutions delivered by the previous approaches. However, solving the JCC-OPF problem using SAA is challenging due to the presence of binary variables, the number of scenarios required and the size of the power systems. \cite{lejeune2020optimal} propose a methodology to solve the SAA of the JCC-OPF without including the power flow equations into the joint chance-constraint system. To the best of our knowledge, we are the first to efficiently solve the JCC-OPF problem by means of the SAA approach, using a MIP reformulation and including the arduous power flow constraints. Furthermore, unlike the sample-based approach introduced in \cite{LineAlejandra}, which is based on a smooth nonlinear approximation of the JCC-OPF, ours offers optimality guarantees.
The performance of SAA is, nonetheless, directly contingent on the number of samples available. In particular, we refer the reader to the article by \cite{luedtke2008}, which establishes relations between the empirical acceptable probability of violation and the number of samples needed for the SAA-based solution to be feasible in the JCCP with a predefined confidence level. Statistical considerations apart, the main aim of this work is to show that the proposed methodology leads to a substantial reduction of the computational burden of JCCPs addressed by a MIP SAA-based reformulation.
The main contribution of our work is the introduction of a new methodology to efficiently solve the SAA reformulation of the JCC-OPF. Our method solves the MIP to optimality and is based on the combination of a tightening-and-screening procedure with the development of valid inequalities, which yields a formulation that is both compact and tight.
We begin with a description of an iterative algorithm to strengthen the Big-Ms present in the mixed-integer reformulation of the JCC-OPF. Although the procedure itself is not new (see \cite{qiu2014}), we complement it with a screening step that allows us to eliminate a very large fraction of the line and generator inequalities of the MIP. This screening is made possible by the special features of our model, is decisive in speeding up the resolution of the proposed instances and, to the best of our knowledge, has not been applied to other CCPs before.
For ease of explanation and computation, the valid inequalities are proposed for the particular class of linear chance constraints present in the JCC-OPF, but our results are also applicable to more general types of CCPs, as we detail in \ref{sec:anexoExtensionMultidim}. We also show in \ref{sec:anexoQuantileCuts} the relationship between our valid inequalities and the quantile cuts, proving that the addition of our inequalities yields a feasible region equivalent to the quantile closure of a specific relaxation of the MIP problem. The main advantage of our inequalities is that, unlike quantile cuts, they are not NP-hard to compute, and neither do they require a specific separation algorithm to include them dynamically (that is, they can all be included in the model from the outset). Like many other techniques developed for JCCPs, our cuts also apply to the SCC-OPF. Finally, our valid inequalities extend the results introduced in \cite{roos1994} for the $k$-\emph{violation problem}, which can be seen as a mixed-integer linear problem (MILP) reformulation of a SCCP.
Finally, we test our resolution method through extensive computational experiments using standard power systems available in the related literature. The combination of the valid inequalities with the tightening of the Big-Ms and the screening procedure allows us to solve to optimality instances that cannot be solved with the initial MIP formulation, since the combination of both techniques markedly reduces their size and difficulty. We also compare our resolution approach with state-of-the-art convex inner approximations of CCPs, in particular, the CVaR-based approximation, ALSO-X, and ALSO-X+ (\cite{jiang2022also}).
The remainder of the paper is organized as follows. In Section \ref{sec:JCC_OPF} we introduce the main notation and the formulation of the JCC-OPF, the core problem of this work. Section \ref{sec:JCC_OPF_SAA} involves the reformulation of the problem into a MIP using the SAA approach and the proposed methodology: Subsection \ref{sec:screening} describes the tightening and screening procedures, whereas Subsection \ref{sec:Valid} introduces the valid inequalities and the necessary algorithms to compute them. In Section \ref{sec:case_study} we present a case study, testing our results to solve instances of the DC OPF available in the literature. Section \ref{sec:conclusion} points out further research directions and includes some concluding remarks. \ref{sec:anexoQuantileCuts} establishes the relationship between our valid inequalities and the quantile cuts present in the literature, and \ref{sec:anexoExtensionMultidim} discusses the generalization of our methodology and its possible application to other types of CCPs.
\section{Optimal Power Flow under Uncertainty: A Joint Chance-Constrained Modeling Approach} \label{sec:JCC_OPF}
In this section, we introduce the formulation of the OPF problem that we consider throughout this article. In its deterministic version, the OPF seeks to determine the least-costly dispatch of thermal generating units to satisfy the system \emph{net} demand (i.e., demand minus renewable generation), while complying with the technical limits of production and transmission network equipment. However, given the inherently uncertain nature of the electricity net demand, the probabilistic version of the OPF problem can be formulated as a JCCP that aims at minimizing the expected production cost while enforcing that the technical constraints are satisfied with a given (high) probability.
The formulation of the JCC-OPF we present next is based on the following assumptions:
\begin{enumerate}
\item \emph{Power system:} A power system consists of a set of buses (nodes), lines and generators which we denote by $\mathcal{N}$, $\mathcal{L}$ and $\mathcal{G}$, in that order. We use indexes $n$, $l$ and $g$ to refer to elements in these sets, respectively. Furthermore, $\mathcal{G}_n$ represents the set of generators connected to node $n$.
\item \emph{Nodal net loads}: The (uncertain) electricity net demand at node $n$, $\tilde{d}_n$, is given by $\tilde{d}_n=d_n - \omega_n$, where $d_n$ is the predicted value and $\omega_n$ is the negative of the forecast error. This error is modeled as a zero-mean random variable that follows an unknown continuous probability distribution.
\item \emph{Generation}: To cope with the forecast errors $(\omega_n)_{n \in \mathcal{N}}$, generators' power outputs are adjusted according to the following affine control policy:
\begin{align*}
& \tilde{p}_g = p_g - \beta_g\Omega, \quad \forall g \in \mathcal{G},
\end{align*}
%
where $\Omega:= \sum_{n \in \mathcal{N}} \omega_n$ is the system-wise aggregated forecast error, and $p_g$ and $\beta_g$ are the power output dispatch and the participation factor of generating unit $g$, respectively (see, e.g., \cite{bienstock2014chance,LineTunning,LineAlejandra}). The minimum and maximum capacity of generator $g$ is denoted by $\underline{p}_g$ and $\overline{p}_g$, respectively.
\item \emph{Power balance}: Given the affine control policy of the previous point, the power balance equation takes the following form:
\begin{align*}
& \sum_{g \in \mathcal{G}} \tilde{p}_g - \sum_{n \in \mathcal{N}} \tilde{d}_n = \sum_{g \in \mathcal{G}} \left(p_g - \beta_g\Omega\right) - \sum_{n \in \mathcal{N}} \left(d_n - \omega_n\right) = 0.
\end{align*}
%
Hence, to ensure the power balance for \emph{any} realization of the forecast errors $(\omega_n)_{n \in \mathcal{N}}$, it must hold:
\begin{eqnarray*}
\sum_{g \in \mathcal{G}} p_g - \sum_{n \in \mathcal{N}} d_n &=& 0\\
\sum_{g \in \mathcal{G}} \beta_g &=& 1.
\end{eqnarray*}
\item \emph{Power flows}: Line flows are modeled using the well-known approximation based on the power transfer distribution factors (PTDFs), $B_{ln}$, $l \in \mathcal{L}$, $n \in \mathcal{N}$, which sets a linear relation between the power flow through line $l$ and the power injected at node $n$. The maximum capacity of line $l$ is denoted by $\overline{f}_l$.
\item \emph{Power production cost}: The cost function of each generating unit is assumed to be quadratic and, as a result, the total power production cost is given by
\begin{align*}
& \sum_{g \in \mathcal{G}} C_{2,g} \, \left(p_{g} - \Omega\beta_g\right)^2 + C_{1,g} \, \left(p_{g} - \Omega\beta_g\right) + C_{0,g},
\end{align*}
\noindent where $C_{2,g}$, $C_{1,g}$, $C_{0,g}$ are the coefficients defining the quadratic cost function of generating unit $g$. On the assumption that $\omega_n$, for each $n \in \mathcal{N}$, is a random variable with zero mean, we have (see, for instance, \cite{LineAlejandra})
\begin{align*}
& \mathbb{E} \left[ \sum_{g \in \mathcal{G}} C_{2,g} \, \left(p_{g} - \Omega\beta_g\right)^2 + C_{1,g} \, \left(p_{g} - \Omega\beta_g\right) + C_{0,g} \right] = \sum_{g \in \mathcal{G}} C_{2,g} \, p_{g}^2 + C_{1,g} \, p_{g}+ C_{0,g} + \mathbb{V} \left(\Omega\right) \, C_{2,g} \, \beta_{g}^2,
\end{align*}
\noindent where $\mathbb{V}(\Omega)$ denotes the variance of the random variable $\Omega$.
\end{enumerate}
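The balance argument of item 4 can be checked numerically: as long as the participation factors sum to one and the dispatch matches the forecast net demand, the affine recourse policy cancels the aggregate error for every realization. The following sketch uses small made-up numbers purely for illustration:

```python
# Hypothetical 3-bus, 2-generator illustration of the affine recourse policy.
p = {"g1": 60.0, "g2": 40.0}               # day-ahead dispatch (MW)
beta = {"g1": 0.7, "g2": 0.3}              # participation factors, sum to 1
d = {"n1": 30.0, "n2": 50.0, "n3": 20.0}   # forecast net demand (MW)

assert abs(sum(beta.values()) - 1.0) < 1e-9
assert abs(sum(p.values()) - sum(d.values())) < 1e-9

def imbalance(omega):
    """System imbalance after recourse for nodal forecast errors omega."""
    Omega = sum(omega.values())                      # aggregate error
    gen = sum(p[g] - beta[g] * Omega for g in p)     # adjusted outputs
    load = sum(d[n] - omega[n] for n in d)           # realized net demand
    return gen - load

for omega in ({"n1": 0.0, "n2": 0.0, "n3": 0.0},
              {"n1": 5.0, "n2": -2.0, "n3": 1.5},
              {"n1": -8.0, "n2": 3.0, "n3": -1.0}):
    assert abs(imbalance(omega)) < 1e-9
print("power balance holds for every sampled realization")
```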
With the above ingredients, the JCC-OPF problem that we tackle in this paper is formulated as follows:
\begin{subequations}
\label{eq:JCC-OPF}
\begin{align}
\min_{p_g,\beta_g \geq 0, \forall g \in \mathcal{G}} \quad & \sum_{g \in \mathcal{G}} C_{2,g} \, p_{g}^2 + C_{1,g} \, p_{g}+ C_{0,g} + \mathbb{V} \left(\Omega\right) \, C_{2,g} \, \beta_{g}^2 \label{eq:OPF_objective}\\
\text{s.t.} \quad & \sum_{g \in \mathcal{G}} \beta_g = 1 \label{eq:OPF_balance1}\\
& \sum_{g \in \mathcal{G}} p_{g} - \sum_{n \in \mathcal{N}} d_{n} = 0 \label{eq:OPF_balance2}\\
&\underline{p}_{g} \leq p_g \leq \overline{p}_{g}, \quad \forall g \in \mathcal{G} \label{eq:OPF_gen-det}\\
& -\overline{f}_{l} \leq \sum_{n \in \mathcal{N}} B_{ln}\left(\sum_{g \in \mathcal{G}_n} p_{g} - d_n \right) \leq \overline{f}_{l}, \quad \forall l \in \mathcal{L} \label{eq:OPF_flow-det}\\
& \mathbb{P}
\left(\begin{array}{l}
\underline{p}_{g} \leq p_g -\Omega\beta_g \leq \overline{p}_{g}, \quad \forall g \in \mathcal{G} \\
-\overline{f}_{l} \leq \displaystyle\sum_{n \in \mathcal{N}} B_{ln}\left(\displaystyle\sum_{g \in \mathcal{G}_n} \left(p_{g} - \Omega\beta_g\right) + \omega_n - d_n \right) \leq \overline{f}_{l}, \quad \forall l \in \mathcal{L}
\end{array} \right) \geq 1 - \epsilon. \label{eq:OPF_jointCC}
\end{align}
\end{subequations}
The objective \eqref{eq:OPF_objective} is the minimization of the expected total generation cost. The equality constraints \eqref{eq:OPF_balance1} and \eqref{eq:OPF_balance2} enforce the power balance in the system, whereas constraints \eqref{eq:OPF_gen-det} and \eqref{eq:OPF_flow-det} ensure a feasible power dispatch which corresponds to an error-free scenario, i.e., to a realization of the net-load forecast errors such that $\omega_n = 0$ $\forall n \in \mathcal{N}$. Finally, expression \eqref{eq:OPF_jointCC} constitutes the joint chance-constraint system by which the decision-maker states that the OPF solution must be feasible with a probability greater than or equal to $1-\epsilon$. Accordingly, parameter $\epsilon$ is the maximum allowed probability of constraint violation set by the user. Formulation \eqref{eq:JCC-OPF} is quite standard and has been used before by \cite{bienstock2014chance,LineTunning,LineAlejandra}, among others.
Problem~\eqref{eq:JCC-OPF} can be written in the form of~\eqref{GenCC}, i.e., as a CCP with linear JCC and random RHS and LHS.
To see this, define the vector of continuous decision variables $x$ in \eqref{GenCC} as $x:=(p_g,\beta_g)_{g \in \mathcal{G}}$, and group all these variables by means of the set $\mathcal{I}$ with elements $i$ running from 1 to $|\mathcal{I}| = 2|\mathcal{G}|$. In this way, we have that $x \in \mathbb{R}^{|\mathcal{I}|}_{+}$, the set $X\subseteq \mathbb{R}^{|\mathcal{I}|}$ represents the polyhedron defined by the deterministic constraints \eqref{eq:OPF_balance1}--\eqref{eq:OPF_flow-det}, and $f: \mathbb{R}^{|\mathcal{I}|} \longrightarrow \mathbb{R}$ is the convex function providing the expected total generation cost \eqref{eq:OPF_objective}. Likewise, if we collect all the constraints involved in the joint chance-constraint system \eqref{eq:OPF_jointCC} into the set $\mathcal{J}$ (hence $|\mathcal{J}| = 2|\mathcal{G}| + 2|\mathcal{L}|$), this system can be represented by way of a technology matrix with random rows $a_{j}(\omega)\in \mathbb{R}^{|\mathcal{I}|}$, $j \in \mathcal{J}$, and random RHS $b_j(\omega) \in \mathbb{R}$, $j \in \mathcal{J}$, where the uncertainty is again represented through the random vector $\omega$ taking values in $\mathbb{R}^{|\mathcal{N}|}$.
The technology matrix in the joint chance-constraint system of the OPF problem~\eqref{eq:JCC-OPF} has a special structure. Indeed, each row $a_{j}(\omega)$ in this matrix can be rewritten as $a_j(\omega) = a^{0}_j + \Omega(\omega) \hat{a}_j$, with $a_j^{0}, \hat{a}_{j}\in \mathbb{R}^{|\mathcal{I}|}$ and where $\Omega(\omega)$ is a real-valued function whose domain includes the support of $\omega$. In our particular case, recall that $\Omega(\omega) = \sum_{n \in \mathcal{N}}\omega_n$. In Section~\ref{sec:Valid}, we exploit this special structure to dramatically facilitate the certification of the optimal OPF solution given our strategy to solve problem~\eqref{eq:JCC-OPF}.
\section{Solving the joint chance-constrained OPF via Sample Average Approximation}\label{sec:JCC_OPF_SAA}
As discussed in the introduction, the CCP \eqref{eq:JCC-OPF} can be easily reformulated into a MIP using SAA. To this end, we assume that $\omega$ has a finite discrete support defined by a collection of points $\{\omega_s \in \mathbb{R}^{|\mathcal{N}|}, s \in \mathcal{S}\}$ and respective probability masses $\mathbb{P}(\omega = \omega_s)=\frac{1}{|\mathcal{S}|}$, $\forall s\in \mathcal{S}=\{1,\dots,|\mathcal{S}|\}$. Accordingly, $\omega_{ns}$ and $\Omega_s$ are realizations of the respective random variables under scenario $s$. We define $p:= \floor*{\epsilon |\mathcal{S}|}$, the vector $y$ of binary variables $y_s$, $\forall s\in \mathcal{S}$, and the large enough constants $M^1_{gs}, M^2_{gs}, M^3_{ls}, M^4_{ls}$. Thus, the MIP reformulation of problem \eqref{eq:JCC-OPF} reads as follows:
\begin{subequations}
\label{eq:BigM-OPF}
\begin{align}
\min_{p_g,\beta_g\geq0,y_s} \quad & \sum_{g \in \mathcal{G}} C_{2,g} \, p_{g}^2 + C_{1,g} \, p_{g}+ C_{0,g} + \widehat{\mathbb{V}} \left(\Omega\right) \, C_{2,g} \, \beta_{g}^2 \label{eq:MIP1_objective}\\
\text{s.t.} \quad & \eqref{eq:OPF_balance1} - \eqref{eq:OPF_flow-det} \label{eq:MIP1_repeated}\\
& p_g - \Omega_s \beta_g \geq \underline{p}_{g} - y_s M^1_{gs}, \quad \forall g \in \mathcal{G}, s \in \mathcal{S} \label{eq:MIP1_gen_LB}\\
& p_g -\Omega_s \beta_g \leq \overline{p}_{g} + y_s M^{2}_{gs}, \quad \forall g \in \mathcal{G}, s \in \mathcal{S} \label{eq:MIP1_gen_UB}\\
& \sum_{n \in \mathcal{N}} B_{ln}\left(\sum_{g \in \mathcal{G}_n} \left(p_{g} - \Omega_s \beta_g\right) - d_n + \omega_{ns} \right) \geq -\overline{f}_{l} - y_s M^{3}_{ls}, \quad \forall l \in \mathcal{L}, s \in \mathcal{S} \label{eq:MIP1_flow_LB}\\
& \sum_{n \in \mathcal{N}} B_{ln}\left(\sum_{g \in \mathcal{G}_n} \left(p_{g} - \Omega_s \beta_g\right) - d_n + \omega_{ns} \right) \leq \overline{f}_{l} + y_s M^4_{ls}, \quad \forall l \in \mathcal{L}, s \in \mathcal{S} \label{eq:MIP1_flow_UB}\\
& \sum_{s \in \mathcal{S}} y_s \leq p \label{eq:MIP1_violation}\\
& y_s \in \{0,1\}, \quad \forall s \in \mathcal{S}. \label{eq:MIP1_bincharacter}
\end{align}
\end{subequations}
Constraints \eqref{eq:MIP1_gen_LB}-\eqref{eq:MIP1_flow_UB} represent the sample-based reformulation of the joint chance constraint \eqref{eq:OPF_jointCC}. For a given scenario $s\in \mathcal{S}$, inequalities \eqref{eq:MIP1_gen_LB}-\eqref{eq:MIP1_flow_UB} guarantee that all the original constraints are satisfied when $y_s = 0$. If $y_s=1$, some of the original constraints may be violated for scenario $s$. Finally, inequality \eqref{eq:MIP1_violation} ensures that the probabilistic requirement of the JCC is met, and the binary character of variables $y_s$ is declared in \eqref{eq:MIP1_bincharacter}.
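The scenario-indicator logic of the reformulation can be illustrated with a toy check: given a candidate solution, $y_s$ must equal 1 exactly for those scenarios in which some limit is violated, and SAA feasibility requires that at most $p = \floor*{\epsilon |\mathcal{S}|}$ scenarios be violated. The sketch below uses a hypothetical single-generator system of our own making:

```python
import math

def violated_scenarios(x, constraints, scenarios):
    """Indices s with y_s = 1: some constraint j is violated under scenario s.

    constraints: list of callables g_j(x, w) encoding the limits as g_j <= 0.
    """
    return [s for s, w in enumerate(scenarios)
            if any(g(x, w) > 1e-9 for g in constraints)]

# Hypothetical single-generator data: p_g in [0, 100], beta_g = 1.
p_min, p_max, p_g, beta_g = 0.0, 100.0, 95.0, 1.0
constraints = [
    lambda x, w: p_min - (x[0] - w * x[1]),   # lower generation limit
    lambda x, w: (x[0] - w * x[1]) - p_max,   # upper generation limit
]
scenarios = [-8.0, -3.0, 0.0, 2.0, 4.0, 6.0, -1.0, 3.0, -6.0, 1.0]  # Omega_s
eps = 0.2
p_budget = math.floor(eps * len(scenarios))   # at most 2 violated scenarios

viol = violated_scenarios((p_g, beta_g), constraints, scenarios)
print("violated scenarios:", viol, "budget:", p_budget)
print("SAA-feasible:", len(viol) <= p_budget)
```

Here the adjusted output $p_g - \Omega_s \beta_g$ exceeds the upper limit only in the two scenarios with $\Omega_s < -5$, so the candidate dispatch is SAA-feasible for the chosen budget.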
For simplicity and ease of notation, we reformulate model \eqref{eq:BigM-OPF} as
\begin{subequations}
\label{MIPGenCC}
\begin{align}
\min_{x,y_s} \quad & f(x) \label{MIPGenCC_FO}\\
\text{s.t.} \quad & x \in X \label{MIPGenCC_xinX}\\
& \Omega_s \hat{a}_j^{\top} x - b_{js} + x^{\top} a_j^{0} \le M_{js}y_s, \quad \forall j\in \mathcal{J}, s\in \mathcal{S} \label{MIPGenCC_chance}\\
& \sum_{s \in \mathcal{S}} y_s \leq p \label{MIPGenCC_violation}\\
& y_s \in \{0,1\}, \quad \forall s \in \mathcal{S}, \label{MIPGenCC_ybinaria}
\end{align}
\end{subequations}
\noindent where $x:=(p_g,\beta_g)_{g \in \mathcal{G}}$, the deterministic feasible set $X$ represents constraints \eqref{eq:OPF_balance1}-\eqref{eq:OPF_flow-det}, and constraint \eqref{MIPGenCC_chance} is a generalization of constraints \eqref{eq:MIP1_gen_LB}-\eqref{eq:MIP1_flow_UB}. Note that we do not make any assumptions on the sign of the entries of $\hat{a}_j$ and $a_j^{0}$, or of $b_{js}$. Using the generic formulation \eqref{MIPGenCC}, we present in Subsection~\ref{sec:screening} a procedure to properly tune the values of the large constants $M_{js}$. We also explain in this subsection how the intermediate results of the tightening procedure can be efficiently used to remove constraints from set \eqref{MIPGenCC_chance} that are superfluous, thus making model \eqref{MIPGenCC} more compact. Finally, we introduce in Subsection~\ref{sec:Valid} a set of valid inequalities that makes the linear relaxation of~\eqref{MIPGenCC} remarkably tighter.
\subsection{Tightening and Screening}
\label{sec:screening}
It is well-known that the linear relaxation of a Big-M formulation tends to provide weak lower bounds in general (\cite{conforti2014}). This is even more so when the Big-Ms are chosen too loose. Constants $M_{js}$ in \eqref{MIPGenCC_chance} should be set large enough for the corresponding constraints to be redundant when the associated binary variables $y_s$ are equal to 1, and \emph{as small as possible} to tighten the MIP formulation. To this end, the literature provides an algorithm called ``Iterative Coefficient Strengthening'' (\cite{qiu2014}). A customization of this procedure for the joint chance-constrained formulation \eqref{MIPGenCC} is detailed in Algorithm~\ref{alg:IterativeCoefStreng}.
\begin{algorithm}
\begin{small}
\caption{Iterative Coefficient Strengthening} \label{alg:IterativeCoefStreng}
\begin{algorithmic}
\STATE \textbf{Input}: The LHS and RHS coefficients $\{a_j^{0}, \hat{a}_j\}_{j \in \mathcal{J}}$ and $\{b_{js}\}_{j \in \mathcal{J}, s \in \mathcal{S}}$, respectively, the sample $\{\Omega_{s}\}_{s \in \mathcal{S}}$, and the maximum allowed violation probability $p$ that determine the joint chance-constraint system \eqref{GenCC_chance}, the deterministic feasible set $X$, and the total number of iterations $\kappa$.
\textbf{Output}: The large constants $M_{js}, \forall j\in \mathcal{J}, s\in \mathcal{S}$.
\begin{enumerate}[\hspace*{0.5cm} Step 1.]\itemsep -.1cm
\item Initialization, $k = 0$, $M_{js}^0 = \infty, \forall j\in \mathcal{J}, s\in \mathcal{S}$.
\item For each $j\in\mathcal{J}$ and $s\in\mathcal{S}$ update $M_{js}^{k+1}$ as follows: If $M_{js}^{k}<0$, then $M_{js}^{k+1}=M^{k}_{js}$. Otherwise,
\begin{subequations}
\label{RelMIP}
\begin{flalign}
M^{k+1}_{js}=\max_{x,y_s} \quad & x^{\top} a_{j}^{0} + \Omega_{s} \hat{a}_{j}^{\top} x - b_{js} \label{RelMIP_FO}\\
\text{s.t.} \quad & x \in X \label{RelMIP_xinX}\\
& x^{\top} a_j^{0} + \Omega_s \hat{a}_j^{\top} x - b_{js} \leq M^{k}_{js}y_s, \quad \forall j\in \mathcal{J}, s\in \mathcal{S} \label{RelMIP_chance}\\
& \sum_{s \in \mathcal{S}} y_s \leq p \label{RelMIP_violation}\\
& 0 \leq y_s \leq 1, \quad \forall s \in \mathcal{S}. \label{RelMIP_y}
\end{flalign}
\end{subequations}
\item If $k+1 < \kappa$, then $k = k + 1$ and go to Step 2. Otherwise, stop.
\end{enumerate}
\end{algorithmic}
\end{small}
\end{algorithm}
The output of Algorithm~\ref{alg:IterativeCoefStreng} is the tuned Big-Ms that are input to the MIP reformulation \eqref{MIPGenCC}. This algorithm produces Big-Ms whose values either decrease or remain unchanged at each iteration. There is, in fact, a number of iterations beyond which the resulting Big-Ms converge. To reduce the computational burden of running Algorithm~\ref{alg:IterativeCoefStreng}, all problems \eqref{RelMIP} in Step 2 can be solved in parallel. Hereinafter, we use the short name ``\textbf{T}($\kappa$)'' (from ``Tightening'') to refer to the solution of model \eqref{MIPGenCC} using the Big-M values given by the ``Iterative Coefficient Strengthening'' algorithm with $\kappa$ iterations.
Equally important, Algorithm~\ref{alg:IterativeCoefStreng} can be easily upgraded to delete constraints $(j,s)$ in \eqref{MIPGenCC_chance} that are redundant, and therefore, can be removed from problem \eqref{MIPGenCC}. Indeed, if Algorithm~\ref{alg:IterativeCoefStreng} delivers a constant $M_{js} \leq 0$, then constraint $j$ in scenario $s$ can be deleted from \eqref{MIPGenCC} without altering its feasible region or its optimal solution. This is so because a non-positive $M_{js}$ means that there is no $x$ satisfying \eqref{RelMIP_xinX}--\eqref{RelMIP_y} such that the constraint takes on a value strictly greater than zero. Consequently, the constraint is redundant in \eqref{MIPGenCC}, since the feasible region of \eqref{RelMIP} is a relaxation of \eqref{MIPGenCC}. This upgrade of method \textbf{T} not only makes formulation \eqref{MIPGenCC} tighter through coefficient strengthening, but also more compact by screening out redundant constraints. Naturally, the tightening and screening power of algorithm \textbf{T} increases at each iteration. From now on, we use the short name ``\textbf{TS}($\kappa$)'' (from ``Tightening and Screening'') to refer to the strategy whereby model \eqref{MIPGenCC} is solved without the constraints \eqref{MIPGenCC_chance} for which the value of $M_{js}$ provided by Algorithm~\ref{alg:IterativeCoefStreng} after $\kappa$ iterations is lower than or equal to 0.
While strengthening the parameters $M_{js}$ is a common strategy in the technical literature to reduce the computational burden of CCPs, this is the first time, to our knowledge, that intermediate results of Algorithm~\ref{alg:IterativeCoefStreng} are used to eliminate superfluous constraints from model \eqref{MIPGenCC}. We stress that the screening process itself comes at no extra cost, being a by-product of Algorithm~\ref{alg:IterativeCoefStreng}, while removing superfluous constraints from \eqref{MIPGenCC} may substantially facilitate its solution. As we show in Section~\ref{sec:case_study}, this is particularly true for the JCC-OPF.
Apart from making formulation \eqref{MIPGenCC} more compact, the screening of superfluous constraints can also be used to accelerate the ``Iterative Coefficient Strengthening'' process at each iteration. To do so, it suffices to modify Algorithm~\ref{alg:IterativeCoefStreng} so that \eqref{RelMIP_chance} only includes the constraints for which $M^k_{js}>0$. Thus, the number of constraints of model \eqref{RelMIP} is significantly reduced and so is its solution time.
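To fix ideas, the sketch below reproduces the first pass of the strengthening loop ($k=0$, where the coupling constraints \eqref{RelMIP_chance} are still vacuous because $M^0_{js}=\infty$) for a hypothetical instance in which $X$ is a simple box, so that the inner maximization admits a closed form; constraints whose resulting Big-M is non-positive are then screened out:

```python
def first_pass_bigM(c, lo, hi, const):
    """max_{lo <= x <= hi} c^T x + const: Step 2 of the strengthening loop
    at k = 0, when the coupling constraints are still vacuous (M^0 = inf)
    and X is a box, so the LP has a closed-form solution."""
    return const + sum(ci * (hi[i] if ci > 0 else lo[i]) for i, ci in enumerate(c))

# Hypothetical constraint data: (a^0 + Omega_s * a_hat)^T x - b_s <= M_s y_s.
lo, hi = [0.0, 0.0], [1.0, 1.0]
a0, a_hat = [1.0, -1.0], [0.5, 0.5]
scenarios = {1: (2.0, 0.5), 2: (-1.0, 3.0), 3: (0.0, 1.2)}  # s: (Omega_s, b_s)

bigM, kept = {}, []
for s, (Om, b) in scenarios.items():
    c = [a0[i] + Om * a_hat[i] for i in range(2)]
    bigM[s] = first_pass_bigM(c, lo, hi, -b)
    if bigM[s] > 0:
        kept.append(s)   # screening: M_s <= 0 means the constraint is redundant

print("tightened Big-Ms:", bigM)
print("constraints kept after screening:", kept)
```

In this toy instance, two of the three scenario constraints are already redundant after the first pass and never enter the MIP; in the full algorithm, subsequent iterations solve the LP relaxation \eqref{RelMIP} with the updated (finite) Big-Ms, which can only tighten the values further.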
\subsection{Valid inequalities}
\label{sec:Valid}
In this section, we derive a set of valid inequalities to make the linear relaxation of problem \eqref{MIPGenCC} tighter. Furthermore, these valid inequalities can also be added to the constraint set~\eqref{RelMIP} of Algorithm~\ref{alg:IterativeCoefStreng}, thus dramatically increasing the tightening and screening power of \textbf{TS}. To facilitate the comparative analysis carried out in Section~\ref{sec:case_study}, the so upgraded algorithm is named ``\textbf{TS+V}($\kappa$)'' (from ``Tightening and Screening with Valid inequalities'').
To derive the set of valid inequalities for problem \eqref{MIPGenCC}, we define the real variables $z_j\in\mathbb{R}$ as $z_j:=\hat{a}_j^{\top} x$, and let $z_j^d=\inf_{x\in X}\hat{a}_j^{\top} x$ and $z_j^u=\sup_{x\in X}\hat{a}_j^{\top} x$ denote the lower and upper bounds on $z_j$ induced by the polyhedral feasibility set \eqref{MIPGenCC_xinX}. Let us also define the lines $L_{js}: f_{js}(z_j) = \Omega_s z_j - b_{js}$ for $z_j \in [z_j^d, z_j^u]$ and the set of lines $\mathcal{L}_j := \left\{ L_{js}, \; \forall s \in \mathcal{S} \right\}$. The valid inequalities we propose rely heavily on the concepts of \emph{$k$-lower} and \emph{$k$-upper envelopes}, which we define in the following.
\begin{definition}
For a given line $L_{js}$, we say that the point $(\tilde{z},\tilde{t})\in\mathbb{R}^2$ \emph{lies below}, \emph{on} or \emph{above} function $L_{js}$ depending on whether $\tilde{t}<\Omega_s \tilde{z} - b_{js}$, $\tilde{t}=\Omega_s \tilde{z} - b_{js}$ or $\tilde{t}>\Omega_s \tilde{z} - b_{js}$, respectively.
Naturally, we also say that function $L_{js}$ \emph{lies above}, \emph{contains}, or \emph{lies below} point $(\tilde{z},\tilde{t})$ in these cases. We also say that a point $(\tilde{z},\tilde{t})$ belongs to the set of lines $\mathcal{L}_j$ if there exists a line $L_{js}\in\mathcal{L}_j$ that contains the point $(\tilde{z},\tilde{t})$.
\end{definition}
\begin{definition}
For a set of lines $\mathcal{L}_j$, the \emph{lower} (resp.\ \emph{upper}) \emph{score} of a point is the number of lines in $\mathcal{L}_j$ that lie below (resp.\ above) that point. The \emph{$k$-lower} (resp.\ \emph{$k$-upper}) \emph{envelope} of a set of lines $\mathcal{L}_j$ is the closure of the set of points that belong to $\mathcal{L}_j$ and that have lower (resp.\ upper) score equal to $k-1$. The $k$-lower envelope is also known as \emph{$k$-level}.
\end{definition}
For the sake of illustration, Figure \ref{fig:kenvelope} shows in bold the $5$-upper envelope of a set of 8 lines. Clearly, the $k$-envelopes of sets $\mathcal{L}_j$ can be seen as piece-wise linear functions of $z_j$.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{4envelope_convex_tz.png}
\caption{In bold, the 5-upper envelope (4-lower envelope, 4-level) of a set $\mathcal{L}_j$ of 8 lines in the plane. In dotted, the lower hull of the 5-upper envelope}\label{fig:kenvelope}
\end{figure}
\begin{prop} \label{prop:VVII_versionpiecewise}
For a fixed $j\in \mathcal{J}$, let $U_j^{p+1}(\bigcdot)$ be the $(p+1)$-upper envelope of the set of lines $\mathcal{L}_j$, with $p:= \floor*{\epsilon |\mathcal{S}|}$. Then the inequality
\begin{equation} \label{eq:VVII_versionpiecewise}
U_j^{p+1}(\hat{a}_j^{\top} x) + x^{\top} a_j^{0} \le 0, \quad x\in X
\end{equation}
\noindent is valid for problem \eqref{MIPGenCC}.
\end{prop}
\begin{proof}
Let $\bar{x},\bar{y}$ be any feasible solution of problem \eqref{MIPGenCC} with $\bar{z}_j = \hat{a}_j^{\top} \bar{x} \in [z_j^d,z_j^u]$, and suppose that $U^{p+1}_j(\bar{z}_j) + \bar{x}^{\top} a_j^{0} > 0$. By definition of $k$-upper envelope, there exist $p+1$ lines in $\mathcal{L}_j$, $L_{js_i}: f_{js_i}(z_j)= \Omega_{s_i}z_j - b_{js_i}$, $\forall i\in \{1,\dots,p+1\}$, such that $f_{js_i}(\bar{z}_j)= \Omega_{s_i}\bar{z}_j - b_{js_i} \ge U_j^{p+1}(\bar{z}_j)$.
Then for $i\in \{1,\dots,p+1\}$ it holds that
$\Omega_{s_i}\bar{z}_j - b_{js_i} + \bar{x}^{\top} a_j^{0} > 0$. Substituting in constraint \eqref{MIPGenCC_chance}, we obtain that $\bar{y}_{s_i}M_{js_i}>0$ $\Rightarrow$ $\bar{y}_{s_i}=1$, for $i\in \{1,\dots,p+1\}$. But then constraint \eqref{MIPGenCC_violation} is not satisfied, since $\sum_{s\in \mathcal{S}} \bar{y}_s \ge \sum_{i=1}^{p+1} \bar{y}_{s_i} = p+1 > p$. This is, however, in contradiction with our initial statement that $\bar{x}, \bar{y}$ is a feasible solution of problem \eqref{MIPGenCC}.
\end{proof}
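The counting argument in the proof admits a simple numerical restatement: at a fixed point, the $(p+1)$-upper envelope is the $(p+1)$-th largest scenario value, so it is positive precisely when more than $p$ scenarios are violated. A small self-contained check with made-up scenario values:

```python
def upper_envelope_value(values, k):
    """k-th largest element: pointwise value of the k-upper envelope."""
    return sorted(values, reverse=True)[k - 1]

# Scenario-wise values of Omega_s * z - b_s + x^T a^0 at a candidate point.
slacks = [3.0, -1.0, 0.5, -2.0, 1.2, -0.3, 0.01]
p = 2   # violation budget

envelope = upper_envelope_value(slacks, p + 1)
n_violated = sum(v > 0 for v in slacks)

# (p+1)-th largest positive  <=>  more than p scenarios violated.
assert (envelope > 0) == (n_violated >= p + 1)
print(f"U^(p+1) = {envelope}, violated scenarios = {n_violated}")
```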
The technical literature already includes methodologies to determine the $k$-envelope of a set of linear functions. For instance, the authors of \cite{cheema2014} give a basic algorithm for constructing $k$-envelopes called the Rider Algorithm. Essentially, this algorithm is based on the fact that the $k$-envelope is an unbounded polygonal chain that can be described by a sequence of vertices, which are intersections of lines of the set. In fact, every point of the $k$-envelope lies on some line of the set. In this paper, we adapt the algorithm proposed in \cite{cheema2014} to the particular case in which $z_j^d,z_j^u$ are finite. Algorithm \ref{alg:Rider} describes our procedure in detail for a general set of the type $\mathcal{L}_j$.
\begin{algorithm}
\begin{small}
\caption{Rider Algorithm to construct the $k$-upper envelope of $\mathcal{L}_j$ (adapted to the bounded case) } \label{alg:Rider}
\begin{algorithmic}
\STATE To describe the $k$-upper envelope of a set of lines $\mathcal{L}_j$, we derive the sequence of vertices of the polygonal chain (intersections of lines in $\mathcal{L}_j$) $((z^0,t^0),\dots, (z^R, t^R))$ with $(z_j^d = z^0 < \dots<z^R=z_j^u)$. \\%To compute them, we also keep a sequence of lines $(L^0,\dots,L^{m-1})$ from $\mathcal{L}_j$. \\
\begin{enumerate}[\hspace*{0.5cm} Step 1.]\itemsep -.1cm
\item Set $r=0$. Let $\{(z^d_j,\Omega_sz_j^d-b_{js}),\forall s\in \mathcal{S}\}$ be the intersections of the lines $L_{js}\in\mathcal{L}_j$ with the vertical line $z=z_j^d$, and assume w.l.o.g.\ that all the points have different upper scores. Let $s_0\in \mathcal{S}$ be such that the intersection $(z^d_j,\Omega_{s_0}z_j^d-b_{js_0})$ has upper score equal to $k-1$. Then $(z^0,t^0)=(z^d_j,\Omega_{s_0}z_j^d-b_{js_0})$.
\item Compute the value of $z_j$ for which the line $L_{js_r}$ intersects the rest of the lines $L_{js}\in\mathcal{L}_j$ as $\text{int}_z(s,s_r) = \frac{b_{js}-b_{js_r}}{\Omega_{s}-\Omega_{s_r}}$. If $\exists$ $s'$ : $z^r <\text{int}_z(s',s_r) < z_j^u$, go to Step 3. Otherwise, go to Step 4.
\item Find the line that intersects $L_{js_r}$ at the leftmost point to the right of $z^r$, that is, find $s_{r+1}=\arg\min_s \left\{\text{int}_z(s,s_r): \text{int}_z(s,s_r) > z^{r}\right\}$. Set $(z^{r+1},t^{r+1})=( \text{int}_z(s_{r+1},s_r),\Omega_{s_{r}}\text{int}_z(s_{r+1},s_r)-b_{js_{r}})$, update $r=r+1$ and go to Step 2.
\item Set $(z^R,t^R)=(z^u_j,\Omega_{s_r}z_j^u-b_{js_r})$.
\end{enumerate}
\end{algorithmic}
\end{small}
\end{algorithm}
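For illustration, the walk performed by Algorithm \ref{alg:Rider} can be sketched in a few lines of Python. This is only a sketch under the general-position assumptions of Step 1 (pairwise distinct values at $z_j^d$ and no coincident intersections); each line $t=\Omega_s z - b_{js}$ is encoded as a hypothetical (slope, intercept) pair.

```python
def k_upper_envelope(lines, k, zd, zu):
    """Vertices of the k-upper envelope of `lines` on [zd, zu].

    lines : list of (slope, intercept) pairs, one per scenario
    k     : number of lines lying strictly above the envelope
    Assumes general position (distinct values at zd, no coincident
    intersections), as in Step 1 of the Rider Algorithm.
    """
    val = lambda s, z: lines[s][0] * z + lines[s][1]
    # Step 1: pick the line with upper score k+1 at the left boundary.
    order = sorted(range(len(lines)), key=lambda s: -val(s, zd))
    cur = order[k]
    verts = [(zd, val(cur, zd))]
    z = zd
    while True:
        # Steps 2-3: leftmost intersection of the current line right of z.
        best = None
        for s in range(len(lines)):
            if s == cur or lines[s][0] == lines[cur][0]:
                continue  # parallel lines never intersect
            zi = (lines[cur][1] - lines[s][1]) / (lines[s][0] - lines[cur][0])
            if z < zi < zu and (best is None or zi < best[0]):
                best = (zi, s)
        if best is None:
            break
        z, nxt = best
        verts.append((z, val(cur, z)))  # new vertex on the current line
        cur = nxt                       # crossing lines swap ranks
    # Step 4: close the chain at the right boundary z = zu.
    verts.append((zu, val(cur, zu)))
    return verts
```

For the three lines $t=z$, $t=2-z$ and $t=0.5$ on $[0,2]$ with $k=1$ (one line strictly above the envelope), the walk returns the vertices $(0,0.5)$, $(0.5,0.5)$, $(1,1)$, $(1.5,0.5)$ and $(2,0.5)$.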
The LHSs of the valid inequalities \eqref{eq:VVII_versionpiecewise} are piece-wise linear functions that are not necessarily convex. Therefore, the inclusion of these inequalities into model \eqref{MIPGenCC} would require a significant amount of additional binary variables, which, in turn, is expected to increase the computational burden of this problem even further. Alternatively, we compute the lower hull of the $k$-upper envelope of $\mathcal{L}_j$, which takes the form of a convex piece-wise linear function. From this lower hull, we can extract a set of linear valid inequalities (the linear extensions of the pieces) that can be seamlessly inserted into model \eqref{MIPGenCC} without the need for any extra binary variables. In doing so, we are able to tighten model \eqref{MIPGenCC}, which can thus be solved more efficiently by available optimization software. Before presenting the set of valid inequalities, the following definition is required.
\begin{definition}
Let $Z$ be the convex hull of a set of points $P$. The \emph{upper} (resp.\ \emph{lower}) \emph{hull} of $P$ is the set of edges of $Z$ that lie on or above (resp.\ on or below) every point in $P$.
\end{definition}
Corollary \ref{cor:VVIIconvexificadas} presents the set of linear valid inequalities \eqref{eq:VVII_versionupperhull} given by the lower hull of the $k$-upper envelope of~$\mathcal{L}_j$.
\begin{cor} \label{cor:VVIIconvexificadas}
Let $\{(z^r,t^r)\}$, $r\in \{0,\dots,R\}$, be the ordered set of vertices obtained by applying Algorithm \ref{alg:Rider} to set $\mathcal{L}_j$, and let $\{(z^{r'},t^{r'})\}$, $r'\in\{0,\dots,R'\} \subseteq \{0,\dots,R\}$, be the ordered subset of vertices such that the associated polygonal chain is the lower hull. Then the following linear inequalities
\begin{equation} \label{eq:VVII_versionupperhull}
\frac{t^{r'+1}-t^{r'}}{z^{r'+1}-z^{r'}}(\hat{a}_j^{\top} x - z^{r'}) + t^{r'} \le - x^{\top} a_j^{0}, \quad x\in X, r'\in\{0,\dots,R'-1\}
\end{equation}
\noindent are valid for problem \eqref{MIPGenCC}.
\end{cor}
\begin{proof}
The proof is straightforward, since for each $x\in X$ it holds $\frac{t^{r'+1}-t^{r'}}{z^{r'+1}-z^{r'}}(\hat{a}_j^{\top} x - z^{r'}) + t^{r'} \le U_j^{p+1}(\hat{a}_j^{\top} x) \le - x^{\top} a_j^{0}$, by hypothesis and using \eqref{eq:VVII_versionpiecewise}.
\end{proof}
There exist plenty of algorithms to compute the convex hull of a set of points in the plane. Two of the most well-known are the \emph{Jarvis march} and the \emph{Graham scan} (\cite{toth2017}). Here, we give a simplified version of the former that exploits special features of our set of points and only computes the lower hull. In particular, we assume that we have a set of presorted points $\{(z^{r},t^{r})\}$, $r\in\{0,\dots,R\}$, whose first and last points always belong to the hull. To speed up the process, we can find the point $(z^{\bar{r}}, t^{\bar{r}})$ from the previous set with the lowest $t$-coordinate (which always belongs to the lower hull), and then apply the algorithm to the subsets $\{(z^0,t^0),\dots,(z^{\bar{r}},t^{\bar{r}})\}$ and $\{(z^{\bar{r}},t^{\bar{r}}),\dots, (z^{R},t^{R})\}$. Algorithm \ref{alg:JarvisMarch} describes the proposed convexification procedure in detail.
\begin{algorithm}
\begin{small}
\caption{Jarvis March Algorithm to obtain the lower hull of a set of presorted points $P$} \label{alg:JarvisMarch}
\begin{algorithmic}
\STATE Let $P=\{(z^r,t^r)\}$, $r\in\{0,\dots,R\}$ be a set of points with $z^0 <\dots<z^{R}$. The algorithm derives a subset $P'=\{(z^{r'},t^{r'})\}$, $r'\in\{0,\dots,R'\}\subseteq \{0,\dots,R\}$, which constitutes the lower hull of the first set.\\
\begin{enumerate}[\hspace*{0.5cm} Step 1.]\itemsep -.1cm
\item Initially, set $P' = \{(z^0,t^0)\}$.
\item Assume $(z^{r'},t^{r'})$ is the last point included in $P'$. If $r'=R$, the lower hull of $P$ is given by the set of points $P'$.
Otherwise, let $r'+1:= \arg \min_r\left\{ \frac{t^{r}-t^{r'}}{z^{r}-z^{r'}}: (z^r,t^r)\in P \text{ with } z^r>z^{r'} \right\}$. Update $P' = P' \cup \{(z^{r'+1},t^{r'+1})\}$. Repeat Step 2.
\end{enumerate}
\end{algorithmic}
\end{small}
\end{algorithm}
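A compact Python rendering of this simplified Jarvis march is given below (a sketch only; ties in slope are broken toward larger $z$ so that collinear intermediate points are skipped).

```python
def lower_hull(points):
    """Lower hull of a list of points presorted by increasing z-coordinate.

    The first and last points always belong to the hull; from the
    current hull point we repeatedly jump to the remaining point of
    minimum slope (Step 2 of the simplified Jarvis march), breaking
    ties toward larger z to skip collinear intermediate points.
    """
    hull = [points[0]]
    while hull[-1] != points[-1]:
        z0, t0 = hull[-1]
        hull.append(min((p for p in points if p[0] > z0),
                        key=lambda p: ((p[1] - t0) / (p[0] - z0), -p[0])))
    return hull
```

The slopes of consecutive hull edges, i.e.\ the coefficients appearing in \eqref{eq:VVII_versionupperhull}, are then nondecreasing by construction.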
\section{Numerical Experiments}
\label{sec:case_study}
This section discusses a series of numerical experiments with which we evaluate the different approaches presented in Section \ref{sec:JCC_OPF_SAA} to solve the SAA-based MIP reformulation of the JCC-OPF. In particular, we compare the performance of approaches \textbf{T}, \textbf{TS} and \textbf{TS+V} using five standard power systems widely employed in the technical literature on the topic, namely the IEEE-RTS-24, IEEE-57, IEEE-RTS-73, IEEE-118, and IEEE-300 test systems. All data pertaining to these systems are publicly available in the repository \cite{pglib} under version~21 and their main features are listed in Table~\ref{tab:features}. All optimization problems have been solved using GUROBI 9.1.2 (\cite{gurobi}) on a Linux-based server with CPUs clocking at 2.6 GHz, 6 threads and 32 GB of RAM. In all cases, the optimality GAP has been set to $10^{-9}\%$ and the time limit to 10 hours.
\begin{table}[h]
\begin{center}
\caption{Short Description of Test Power Systems}
\begin{tabular}{lccccc}
\hline
&IEEE-RTS-24 &IEEE-57 &IEEE-RTS-73 &IEEE-118 &IEEE-300\\
\hline
\# Nodes &24 &57 &73 &118 &300\\
\# Generators &32 &4 &96 &19 & 57\\
\# Lines &38 &41 &120 &186 &411\\
\hline
\end{tabular}
\label{tab:features}
\end{center}
\end{table}
Similarly to \cite{LineAlejandra}, we assume that the error of net loads is normally distributed, i.e., $\omega \sim N(\mathbf{0},\Sigma)$, where $\mathbf{0}$ and $\Sigma$ represent, respectively, the zero vector and the covariance matrix. We also assume that the standard deviation of $\omega_n$ at node $n$ is proportional to the net nodal demand $d_n$ according to a parameter $\zeta$ between 0 and 1. Thus, this parameter controls the magnitude of net demand fluctuations. Under these assumptions, the procedure to model uncertainty proposed in \cite{LineAlejandra} runs as follows. First, we compute the positive definite matrix $C=\widehat{C}\widehat{C}^{\top}$ where each element of matrix $\widehat{C}$ is a sample randomly drawn from a uniform distribution with support in $[-1,1]$. Then, to obtain a positive definite matrix $\Sigma$ in which the diagonal elements are equal to $(\zeta d_{n})^{2}$, we define each of its entries ($\sigma_{nn'}$) as follows:
\begin{align*}
& \sigma_{nn'} = \zeta^{2} \frac{c_{nn'}}{\sqrt{c_{nn}c_{n'n'}}} d_n d_{n'}, \quad \forall n,n' \in \mathcal{N}.
\end{align*}
\noindent where $c_{nn'}$ denotes the element of matrix $C$ in row $n$ and column $n'$. To avoid generating infeasible instances of the JCC-OPF problem, the parameter $\zeta$ has been set to 0.15 for the four smallest systems and to 0.05 for the IEEE-300 system. To characterize the net demand uncertainty, we consider 1000 scenarios and a tolerable probability of violation of the joint chance constraint of 5\% (i.e., $\epsilon = 0.05$ and $p=50$). Finally, each solution strategy is run for ten different sets of randomly generated samples. Accordingly, in this section we provide tables with figures averaged over these ten instances.
Table \ref{tab:bench} includes the results of solving the mixed-integer quadratic optimization model \eqref{eq:BigM-OPF} if the large constants $M^1_{gs}$, $M^2_{gs}$, $M^3_{ls}$ and $M^4_{ls}$ are set to a high enough arbitrary value, specifically $10^4$. Despite being remarkably computationally expensive, this approach has been used in the technical literature (e.g., \cite{zhang2015data}), and thus we refer to it as \emph{benchmark approach} (\textbf{BN}). Table~\ref{tab:bench} includes the number of constraints in the model (\#CON), the linear relaxation gap (LR-GAP) calculated using the optimal solution of each instance, the optimality gap given by the difference between the best lower bound and the best integer solution found by the MIP solver (MIP-GAP), the number of instances solved to global optimality in less than 10 hours (\#OPT) and the solution time in seconds (Time). As expected, the computational time needed to solve the OPF with the Big-M model \eqref{eq:BigM-OPF} increases significantly with the size of the instances. While the 10 instances from systems IEEE-RTS-24, IEEE-57 and IEEE-RTS-73 are solved in less than 10 hours, none of the instances for systems IEEE-118 and IEEE-300 are solved to global optimality within that time limit, and the average MIP-GAP after 10 hours amounts to 0.29\% and 0.27\%, respectively. Interestingly, despite the fact that the LR-GAP is relatively low for the IEEE-RTS-73 system, the average computational time required to solve this case is particularly high compared to the two smaller systems.
\begin{table}[h]
\begin{center}
\caption{Benchmark approach (\textbf{BN}): Results}
\begin{tabular}{cccccc}
\hline
&IEEE-RTS-24 &IEEE-57 &IEEE-RTS-73 &IEEE-118 &IEEE-300\\
\hline
\#CON &140143 &168171 &432435 &410413 &936939\\
LR-GAP &1.756\% &0.623\% &0.061\% &0.956\% &1.114\%\\
MIP-GAP (\#OPT) &0.00\% (10) &0.00\% (10) &0.00\% (10) &0.29\% (0) &0.27\% (0)\\
Time (s) &1121.3 &103.2 &11161.2 &36000.0 &36000.0\\
\hline
\end{tabular}
\label{tab:bench}
\end{center}
\end{table}
As discussed in the technical literature, a proper tuning of the Big-Ms makes model \eqref{eq:BigM-OPF} tighter and generally easier to solve by the MIP routine (\cite{qiu2014}). Therefore, we evaluate the computational performance of the ``Iterative Coefficient Strengthening'' Algorithm and provide the corresponding results in Table \ref{tab:tightening}. In particular, \textbf{T}(1), \textbf{T}(2) and \textbf{T}(3) represent the results obtained by solving model \eqref{eq:BigM-OPF} with the Big-M values provided by Algorithm~\ref{alg:IterativeCoefStreng} with $\kappa=1$, 2 and 3, respectively.
Table \ref{tab:tightening} includes the average values of LR-GAP and MIP-GAP, the number of instances solved to optimality in less than 10 hours (\#OPT) and the speedup factor with respect to the benchmark approach. To determine this factor, we have considered that the total computational time of approach \textbf{T} is given as the sum of the time required to run Algorithm \ref{alg:IterativeCoefStreng} $\kappa$ times to determine the Big-Ms plus the time it takes to solve problem \eqref{eq:BigM-OPF}.
\begin{table}[h]
\begin{center}
\caption{Coefficient tightening approach (\textbf{T}): Results}
\begin{tabular}{ccccccc}
\hline
\multicolumn{2}{c}{} &IEEE-RTS-24 &IEEE-57 &IEEE-RTS-73 &IEEE-118 &IEEE-300\\
\hline
\multirow{3}{*}{LR-GAP} &\textbf{T}(1) &1.755\% &0.510\% &0.061\% &0.711\% &0.472\%\\
&\textbf{T}(2) &1.662\% &0.330\% &0.055\% &0.522\% &0.324\%\\
&\textbf{T}(3) &1.386\% &0.255\% &0.029\% &0.434\% &0.264\%\\
\hline
\multirow{3}{*}{MIP-GAP (\#OPT)} &\textbf{T}(1) &0.00\% (10) &0.00\% (10) &0.00\% (10) &0.27\% (0) &0.16\% (0)\\
&\textbf{T}(2) &0.00\% (10) &0.00\% (10) &0.03\% (2) &0.16\% (0) &0.09\% (0)\\
&\textbf{T}(3) &0.00\% (10) &0.00\% (10) &0.00\% (10) &0.12\% (0) &0.07\% (0)\\
\hline
\multirow{3}{*}{Speedup factor} &\textbf{T}(1) &0.22x &0.07x &0.61x &1.00x &1.00x\\
&\textbf{T}(2) &0.17x &0.13x &0.31x &1.00x &1.00x\\
&\textbf{T}(3) &0.48x &0.24x &0.66x &1.00x &1.00x\\
\hline
\end{tabular}
\label{tab:tightening}
\end{center}
\end{table}
Since reducing the Big-M values makes model \eqref{eq:BigM-OPF} tighter, the results in Table \ref{tab:tightening} show lower values of LR-GAP with respect to \textbf{BN}. Furthermore, this effect grows with the number of iterations since Algorithm~\ref{alg:IterativeCoefStreng} ensures that the Big-Ms never increase between iterations. Although decreasing the values of the Big-Ms leads to tighter MIPs for all the test systems, the numerical results in Table~\ref{tab:tightening} clearly indicate that computational savings are not guaranteed in all cases. Indeed, while the ten instances are solved by \textbf{BN} in less than 10 hours for the IEEE-RTS-73 system, \textbf{T}(2) only provides the optimal solution for two instances. On top of that, the speedup factors for the three smaller systems are always lower than 1, which means that the computational times actually increase in these cases. On the contrary, the average MIP-GAP of the two largest systems is significantly decreased with respect to \textbf{BN}. Therefore, we conclude that, due to the heuristics implemented in current commercial MIP solvers, the computational advantages that one could expect a priori from ``Iterative Coefficient Strengthening'' are not always guaranteed and are contingent on the structure and data of the problem.
Next, in Table~\ref{tab:screening} we provide the computational results related to the \textbf{TS} method, in which Algorithm~\ref{alg:IterativeCoefStreng} is extended to remove redundant constraints from model \eqref{eq:BigM-OPF}. Here, \#CON is provided as the percentage of the number of constraints of the reference model \textbf{BN} (indicated in Table~\ref{tab:bench}) that are retained by \textbf{TS} in each iteration. Table \ref{tab:screening} also includes the average MIP-GAP, the number of instances solved to optimality and the average speedup factor in relation to \textbf{BN}.
\begin{table}[h]
\begin{center}
\caption{Tightening and Screening (\textbf{TS}): Results}
\begin{tabular}{ccccccc}
\hline
\multicolumn{2}{c}{} &IEEE-RTS-24 &IEEE-57 &IEEE-RTS-73 &IEEE-118 &IEEE-300\\
\hline
\multirow{3}{*}{\#CON} &\textbf{TS}(1) &23.9\% &2.8\% &26.0\% &8.9\% &12.7\%\\
&\textbf{TS}(2) &23.3\% &2.2\% &23.8\% &6.3\% &9.1\%\\
&\textbf{TS}(3) &23.1\% &2.1\% &23.0\% &5.6\% &8.0\%\\
\hline
\multirow{3}{*}{MIP-GAP (\#OPT)} &\textbf{TS}(1) &0.00\% (10) &0.00\% (10) &0.00\% (10) &0.15\% (0) &0.08\% (0)\\
&\textbf{TS}(2) &0.00\% (10) &0.00\% (10) &0.00\% (10) &0.03\% (2) &0.04\% (0)\\
&\textbf{TS}(3) &0.00\% (10) &0.00\% (10) &0.00\% (10) &0.01\% (6) &0.01\% (4)\\
\hline
\multirow{3}{*}{Speedup factor} &\textbf{TS}(1) &1.5x &1.8x &4.7x &1.0x &1.0x\\
&\textbf{TS}(2) &1.5x &3.8x &2.6x &1.1x &1.0x\\
&\textbf{TS}(3) &3.8x &4.4x &15.1x &1.4x &1.2x\\
\hline
\end{tabular}
\label{tab:screening}
\end{center}
\end{table}
Table \ref{tab:screening} shows that the upgraded Algorithm~\ref{alg:IterativeCoefStreng} screens out a huge percentage of the constraints in model \eqref{eq:BigM-OPF}, only retaining between 2\% and 26\% of them. The results in this table demonstrate that combining the tightening of the Big-Ms and the elimination of superfluous constraints leads to significant computational savings. For instance, the speedup factor for the three smallest systems now ranges between 1.5x and 15.1x. In addition, \textbf{TS}(3) is able to solve six and four instances to optimality for systems IEEE-118 and IEEE-300, respectively, in less than 10 hours, and the average MIP-GAP is reduced to 0.01\% in these two largest power systems. Therefore, the computational performance of model \eqref{eq:BigM-OPF} has been drastically improved by combining the strengthening of the Big-Ms (making model \eqref{eq:BigM-OPF} tighter) and the removal of redundant constraints (making model \eqref{eq:BigM-OPF} more compact). Equally important, the elimination of superfluous constraints largely reduces the MIP solver's RAM requirements, which decrease from around 100 GB in \textbf{T} to 32 GB in \textbf{TS}.
We continue the numerical experiments by evaluating the impact of including the valid inequalities derived in Section \ref{sec:Valid} into model \eqref{eq:BigM-OPF}. The results thus obtained are collated in Table \ref{tab:valid0}. In what follows, this approach is called \textbf{BN+V} for short. The results in Table \ref{tab:valid0} comprise, in order and following the previous notation, the average number of constraints, the average values of LR-GAP and MIP-GAP, the number of instances solved to optimality, and the average speedup factor with respect to the \textbf{BN} approach of Table \ref{tab:bench}.
\begin{table}[h]
\begin{center}
\caption{Tightening by valid inequalities (\textbf{BN+V}): Results}
\begin{tabular}{cccccc}
\hline
&IEEE-RTS-24 &IEEE-57 &IEEE-RTS-73 &IEEE-118 &IEEE-300\\
\hline
\#CON &101.0\% &101.2\% &101.2\% &101.6\% &101.5\%\\
LR-GAP &0.3374\% &0.2038\% &0.0001\% &0.4784\% &0.3192\%\\
MIP-GAP (\#OPT) &0.00\% (10) &0.00\% (10) &0.00\% (10) &0.03\% (1) &0.08\% (0)\\
Speedup factor &46.4x &13.8x &65.2x &1.1x &1.0x\\
\hline
\end{tabular}
\label{tab:valid0}
\end{center}
\end{table}
As can be seen, our valid inequalities only increase the total number of constraints by 1.0-1.6\%. However, the LR-GAP is significantly reduced compared to that obtained by \textbf{BN}. This effect is particularly noticeable for the IEEE-RTS-73 system with an average value of the linear relaxation gap equal to 0.0001\%, meaning that the linear relaxation of problem~\eqref{eq:BigM-OPF} with the proposed valid inequalities is very tight and its solution very close to the actual solution of that problem. Furthermore, the inclusion of the valid inequalities leads to average speedup factors that range between 13.8x and 65.2x for the three smallest systems. For the two largest systems, the time limit is reached in most instances, but the average MIP-GAP is reduced to 0.03\% and 0.08\%, respectively.
Finally, we present similar simulation results for the setup in which the valid inequalities are also used to boost the tightening and screening power of Algorithm \ref{alg:IterativeCoefStreng} (that is, the valid inequalities are also included in problem \eqref{RelMIP}), leading to method \textbf{TS+V}. Table~\ref{tab:valid} provides the average number of constraints, the average values of LR-GAP and MIP-GAP, and the average speedup factor of \textbf{TS+V} with respect to \textbf{BN} in Table~\ref{tab:bench}. Since increasing the number of iterations of Algorithm~\ref{alg:IterativeCoefStreng} barely affects the performance of \textbf{TS+V}, all the data shown in Table \ref{tab:valid} correspond to $\kappa=1$.
\begin{table}[h]
\begin{center}
\caption{Tightening and screening with valid inequalities (\textbf{TS+V}): Results}
\begin{tabular}{ccccccc}
\hline
&IEEE-RTS-24 &IEEE-57 &IEEE-RTS-73 &IEEE-118 &IEEE-300\\
\hline
\#CON &3.49\% &1.61\% &3.84\% &2.68\% &3.30\%\\
LR-GAP &0.3365\% &0.1519\% &0.0001\% &0.2821\% &0.1603\%\\
MIP-GAP (\#OPT) &0.00\% (10) &0.00\% (10) &0.00\% (10) &0.00\% (10) &0.00\% (10)\\
Speedup factor &706.8x &35.3x &1470.0x &23.1x &8.5x\\
\hline
\end{tabular}
\label{tab:valid}
\end{center}
\end{table}
The comparison of the results in Tables~\ref{tab:screening}, \ref{tab:valid0} and \ref{tab:valid} yields the following observations. First, including the valid inequalities in Algorithm~\ref{alg:IterativeCoefStreng} strengthens the Big-Ms even further, which in turn remarkably reduces the linear relaxation gap and increases the number of constraints identified as redundant in model \eqref{eq:BigM-OPF}. Indeed, the total number of constraints eventually retained by \textbf{TS+V} ranges between 1.61\% and 3.84\% of those in \textbf{BN}. Second, \textbf{TS+V} can solve the 10 instances to global optimality in less than 10 hours for the five test systems considered in these numerical experiments. In fact, the optimal solutions obtained by \textbf{TS+V} are the ones we use to compute the values of the linear relaxation gap throughout these simulations. Third, \textbf{TS+V} is able to achieve speedup factors between 8.5x and 1470.0x depending on the test system. All in all, \textbf{TS+V} features the best computational performance in terms of resolution time and MIP-GAP among all the methods tested so far.
To conclude this study, in Table~\ref{tab:comparison} the results of \textbf{TS+V} are contrasted with those provided by state-of-the-art approximations available in the literature. In particular, we consider the CVaR-based and ALSO-X conservative approximations, both described in \cite{jiang2022also}. Table~\ref{tab:comparison} provides the average cost increase in percentage with respect to the optimal cost and the speedup factor with regard to \textbf{BN} for the different methodologies compared. As expected, the CVaR-based approximation leads to conservative results and involves average cost increases that range between 0.49\% and 2.88\%. Interestingly enough, \textbf{TS+V} computes the optimal solution and involves a higher speedup factor than the CVaR-based approach for two of the five systems. Compared with the two ALSO-X approximations, \textbf{TS+V} obtains the global optimal solution in all cases with speedup factors that are still comparable to those of these approximate methods.
\begin{table}[h]
\begin{center}
\caption{Comparison of the proposed \textbf{TS+V} approach and existing approximate methods}
\begin{tabular}{llccccc}
\hline
\multicolumn{2}{c}{} &IEEE-RTS-24 &IEEE-57 &IEEE-RTS-73 &IEEE-118 &IEEE-300\\
\hline
\multirow{4}{*}{Average cost increase}
&\textbf{TS+V} &0.00\% &0.00\% &0.00\% &0.00\% &0.00\%\\
&CVaR &2.88\% &0.53\% &1.71\% &0.57\% &0.49\%\\
&ALSO-X &0.80\% &0.08\% &0.41\% &0.08\% &0.05\%\\
&ALSO-X+ &0.53\% &0.07\% &0.12\% &0.05\% &0.04\%\\
\hline
\multirow{4}{*}{Speedup factor}
&\textbf{TS+V} &706.8x &35.3x &1470.0x &23.1x &8.5x\\
&CVaR &387.1x &117.0x &265.1x &4045.5x &779.7x\\
&ALSO-X &18.0x &1.2x &21.3x &148.6x &11.13x\\
&ALSO-X+ &7.8x &0.7x &6.6x &49.3x &3.7x\\
\hline
\end{tabular}
\label{tab:comparison}
\end{center}
\end{table}
\section{Conclusions and future research} \label{sec:conclusion}
In this paper, we propose a novel exact resolution technique for a MIP SAA-based reformulation of the JCC-OPF problem. Our methodology includes a screening method to eliminate superfluous constraints based on an iterative procedure to repeatedly tighten the Big-Ms present in the MIP. These procedures are combined with the addition of valid inequalities based on the special structure of the model. Said inequalities strengthen its linear relaxation and allow for additional screening of constraints. The resultant model is thus compact and tight.
In the case study, we show that, in comparison with the benchmark model, our methodology provides remarkable results in terms of the linear relaxation bounds, the RAM memory needed to solve the instances, and the total computational resolution time. Specifically, our method \textbf{TS+V} solves to optimality all of the instances generated for the IEEE-118 and the IEEE-300 test systems, the majority of which are not solved within 10 hours of computational time using the benchmark approach. Furthermore, the average share of constraints eliminated from all instances with \textbf{TS+V} always exceeds 95\%, and the lower bound is markedly increased by the inclusion of the valid inequalities, showing the outstanding results of the combination of the methods developed.
The comparison of our results with those provided by existing approximate methods shows that our approach is computationally very competitive for small and medium-sized instances, always providing the best results in terms of cost. For the large instances addressed, while outperformed by the approximate methods in terms of computational time (as expected), our exact solution strategy not only provides a certificate of optimality, but also returns the optimal solution within the set time limit. A promising future research line consists in the development of a generalized set of valid inequalities that combine variables from pairs or subgroups of lines and generators.
\section*{Acknowledgements}
This work was supported in part by the European Research Council (ERC) under the EU Horizon 2020 research and innovation program (grant agreement No. 755705), in part by the Spanish Ministry of Science and Innovation (AEI/10.13039/501100011033) through project PID2020-115460GB-I00, and in part by the Junta de Andalucía (JA) and the European Regional Development Fund (FEDER) through the research project P20\_00153. \'A. Porras is also financially supported by the Spanish Ministry of Science, Innovation and Universities through the University Teacher Training Program with fellowship number FPU19/03053. Finally, the authors thankfully acknowledge the computer resources, technical expertise, and assistance provided by the SCBI (Supercomputing and Bioinformatics) center of the University of M\'alaga.
\section{Introduction}
Different methods and data sets are being used to reconstruct the dark
energy (DE) equation of state $w=p_{\rm de}/\rho_{\rm de}$ and thereby
also to test the concordance model (which has $w=-1$). The results
vary significantly according to the methods and data sets used, and
the error bars and uncertainties are large. It is clear that
higher-precision data are needed for an effective reconstruction and
for robust testing of models. But just as important, more effort is
needed to improve the statistical methods and the design of
observational tests. In particular, there is a need for effective
model-independent statistical methods and for tests that target the
concordance model.
One of the most direct ways to reconstruct $w$ is via supernovae
(SNIa) observations that give the luminosity distance
$d_L$. Model-independent approaches to reconstructing $w$ have been
developed
\cite{weller_albrecht,Alam:2003sc,daly,Alam:2004jy,Wang:2004py,Daly:2004gf,Sahni:2006pa,Shafieloo:2005nd,Zunckel:2007jm,
EspanaBonet:2008zz,Genovese:2008sw,Bogdanos:2009ib,
Clarkson:2010bm,Holsclaw:2010sk,Crittenden:2011aa,Shafieloo:2012yh,Lazkoz:2012eh,
Shafieloo:2012ht,Seikel}. SNIa observations lead indirectly to
$H(z)$ via the derivative $d_L'(z)$. Then we need the second
derivative of $d_L(z)$ to reconstruct $w$. This is very challenging
for any reconstruction technique since any noise on the measured
$d_L(z)$ will be magnified in the derivatives. The problem can be
lessened if direct $H(z)$ data are used because only the first
derivative needs to be calculated to determine $w(z)$.
In this paper we focus on observations that directly give
$H(z)$. Presently, this may be derived from differential ages of
galaxies (`cosmic chronometers') and from the radial baryon acoustic
oscillation (BAO) scale in the galaxy distribution. Compared to SNIa
observations, fewer $H(z)$ data points are needed to reconstruct
$w$ with the same accuracy. For the cosmic chronometer data, it has
been estimated \cite{Ma:2010mr} that 64 data points with the accuracy
of the measurements in \cite{Stern} are needed to achieve the same
reconstruction accuracy as from the Constitution SNIa data
\cite{Hicken}.
We use a model-independent method for smoothing $H(z)$ data to also
perform consistency tests of the concordance model (flat $\Lambda$CDM)
and of curved $\Lambda$CDM models. These consistency tests are
formulated as functions of $H(z)$ and its derivatives which are
constant or zero in $\Lambda$CDM, independently of the parameters of
the model (see~\cite{2012arXiv1204.5505C} for a review). Deviations
from a constant function indicate problems with our assumptions about
dark energy, theory of gravity, or perhaps something else, but without
the usual problems of postulating an alternative to $\Lambda$CDM. Some
of the tests we use here are given for the first time.
Gaussian processes (GP) provide a model-independent smoothing
technique that can meet the challenges of reconstructing derivatives
from data \cite{Rasmussen,MacKay}. We follow the same GP approach
that has been applied to supernova data in a previous work
\cite{Seikel} by some of the authors of this paper. We use
\href{http://www.acgc.uct.ac.za/~seikel/GAPP/index.html}{GaPP}
(Gaussian Processes in Python), their publicly available
code\footnote{\href{http://www.acgc.uct.ac.za/~seikel/GAPP/index.html}{\url{http://www.acgc.uct.ac.za/~seikel/GAPP/index.html}}}. (See
\cite{Holsclaw:2010sk,Shafieloo:2012ht} for different uses of GP in
this context.) A brief description of the GP algorithm is given in
Appendix \ref{GP}.
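To give a rough flavour of the idea (this is a minimal NumPy sketch, not the GaPP implementation; the squared-exponential hyperparameters \texttt{sf} and \texttt{ell} are fixed to illustrative values here, whereas in practice they are trained on the data), the posterior mean of $h(z)$ and of its first derivative follow from the covariance function and its gradient:

```python
import numpy as np

def gp_reconstruct(z_obs, h_obs, sigma, z_star, sf=1.0, ell=0.5):
    """Posterior mean of h(z) and h'(z) under a squared-exponential GP.

    Covariance k(z, z') = sf^2 exp(-(z - z')^2 / (2 ell^2)); the mean
    of the derivative uses dk/dz_star applied to the same weights.
    """
    sq = lambda a, b: (a[:, None] - b[None, :]) ** 2
    K = sf**2 * np.exp(-sq(z_obs, z_obs) / (2 * ell**2)) + np.diag(sigma**2)
    alpha = np.linalg.solve(K, h_obs)                  # K^{-1} y
    k_s = sf**2 * np.exp(-sq(z_star, z_obs) / (2 * ell**2))
    dk_s = -(z_star[:, None] - z_obs[None, :]) / ell**2 * k_s
    return k_s @ alpha, dk_s @ alpha                   # mean of h, mean of h'
```

Fed with mock flat $\Lambda$CDM data, this closely recovers both $h(z)$ and $h'(z)$ in the interior of the data range.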
\begin{figure*}
\includegraphics[width=0.3\textwidth]{Plots/chrono_H.eps}
\includegraphics[width=0.3\textwidth]{Plots/BAO_H.eps}
\includegraphics[width=0.3\textwidth]{Plots/cc_BAO_H.eps}\\
\includegraphics[width=0.3\textwidth]{Plots/chrono_dH.eps}
\includegraphics[width=0.3\textwidth]{Plots/BAO_dH.eps}
\includegraphics[width=0.3\textwidth]{Plots/cc_BAO_dH.eps}
\caption{$h(z)=H(z)/H_0$ (top) and $h'(z)$ (bottom) reconstructed
from cosmic chronometer data (left), BAO data (middle) and CC+BAO
data (right), using Gaussian processes. Shaded areas represent 68\%
and 95\% confidence levels. The dashed (red) curve is flat
$\Lambda$CDM with $\Omega_m = 0.27$; the solid (blue) curve is the
GP mean. Note that while the BAO data appear to give an inconsistent
$h'(z)$, this is driven by the two highest redshift points both of
which happen to lie below the flat $\Lambda$CDM curve.}
\label{hfig}
\end{figure*}
\section{Testing $\Lambda$CDM}\label{theory}
The Friedmann equation,
\begin{eqnarray}
h^2(z) &\equiv& {H^2(z) \over H^2_0}= \Omega_m(1+z)^3 + \Omega_K(1+z)^2 \nonumber\\
&+& (1-\Omega_m - \Omega_K)\exp\left[3 \int_0^z \frac{1+w(z{'})}{1+z{'}} dz{'}\right]\!,
\label{hz}
\end{eqnarray}
can be rearranged to give
\begin{equation}
w(z)\equiv {p_{\rm de} \over \rho_{\rm de}} = \frac{ 2(1+z)hh' -
3h^2 + \Omega_K(1+z)^2}{3\big[h^2 -\Omega_m(1+z)^3 -
\Omega_K(1+z)^2\big] }\,.
\label{whz}
\end{equation}
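As a quick sanity check of \eqref{whz}, the following minimal Python sketch (with exact flat $\Lambda$CDM input at an illustrative $\Omega_m=0.27$) returns $w=-1$ identically:

```python
import math

def w_of_z(z, h, hp, Om, Ok):
    """Dark-energy equation of state from h(z) and h'(z)."""
    num = 2 * (1 + z) * h * hp - 3 * h**2 + Ok * (1 + z)**2
    den = 3 * (h**2 - Om * (1 + z)**3 - Ok * (1 + z)**2)
    return num / den

def h_lcdm(z, Om=0.27):
    """Exact h(z) for flat LambdaCDM."""
    return math.sqrt(Om * (1 + z)**3 + 1 - Om)

def hp_lcdm(z, Om=0.27):
    """Exact h'(z) for flat LambdaCDM, from 2 h h' = 3 Om (1+z)^2."""
    return 3 * Om * (1 + z)**2 / (2 * h_lcdm(z, Om))
```

For concordance input the numerator reduces to $-3(1-\Omega_m)$ and the denominator to $3(1-\Omega_m)$, so $w=-1$ at every redshift.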
In principle, given $h(z)$ data we can smooth it, attempt to estimate
its derivative, and reconstruct $w(z)$. However, reconstruction of
$w(z)$ is compromised by various difficulties. It depends on the
values of $\Omega_m$ and $\Omega_K$, so we need independent
information about these parameters when we reconstruct $w(z)$ from
$H(z)$ data. These are difficult to estimate without assuming a form
for $w(z)$ \cite{Clarkson:2007bc,Hlozek:2008mt,Kunz:2012aw}.
These difficulties reflect the fact that we cannot use data to
construct physical models~-- rather, we need to use data to test
physical models. The $\Lambda$CDM model could be tested by looking for
deviations from $w=-1$. However, there is a more focused approach: to
develop null hypotheses for $\Lambda$CDM, independently of the
parameters $\Omega_m$ and $\Omega_K$~\cite{2012arXiv1204.5505C}.
To test the concordance model -- i.e. flat $\Lambda$CDM -- we can use
\eqref{hz} to define a diagnostic function of redshift
\cite{Sahni:2008xx,Zunckel:2008ti,Shafieloo:2009hi}:
\begin{eqnarray}
\mathcal{O}^{(1)}_m(z) &\equiv & \frac{h^2 -1}{z(3+3z+z^2)} . \label{om1h}
\end{eqnarray}
Then
\begin{eqnarray}
\mathcal{O}^{(1)}_m(z) &=& \Omega_m ~~~\mbox{implies the concordance model}.\nonumber
\end{eqnarray}
If $\mathcal{O}^{(1)}_m(z)$ is not a constant, this is a signal of an
alternative dark energy or modified gravity model. Given observed
$h(z)$ data, we can estimate confidence limits for
$\mathcal{O}^{(1)}_m$. If these are not consistent with a constant
value, we can rule out the concordance model.
It is more effective to measure deviations from zero than from a
constant. The more effective diagnostic is thus the vanishing of the
derivative $\mathcal{O}^{(1)\prime}_m(z)$. This is equivalent to
$\mathcal{L}^{(1)}=0$, where \cite{Zunckel:2008ti}
\begin{eqnarray}
\mathcal{L}^{(1)} &\equiv & 3
(1+z)^2 (1-h^2)+ 2 z(3+3z+z^2)h h'. \label{Ltest}
\end{eqnarray}
The null test is therefore
\begin{eqnarray}
\mathcal{L}^{(1)}&\neq &0 ~~~\mbox{falsifies the concordance model}. \nonumber
\end{eqnarray}
To apply this test, we need to reconstruct $h'(z)$ from the data.
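A minimal numerical illustration of the null test (Python sketch; the analytic $h'$ for flat $\Lambda$CDM and the toy non-$\Lambda$CDM history are illustrative assumptions):

```python
import numpy as np

def L1(z, h, hp):
    """L^(1) = 3(1+z)^2 (1 - h^2) + 2 z (3+3z+z^2) h h'."""
    return 3.0 * (1.0 + z)**2 * (1.0 - h**2) \
        + 2.0 * z * (3.0 + 3.0 * z + z**2) * h * hp

Om = 0.27                                     # illustrative value
z = np.linspace(0.1, 2.0, 50)
h = np.sqrt(1.0 + Om * ((1.0 + z)**3 - 1.0))  # flat LCDM
hp = 3.0 * Om * (1.0 + z)**2 / (2.0 * h)      # analytic h'(z)
print(np.allclose(L1(z, h, hp), 0.0))         # True: null test passes

# a toy non-LCDM history (illustrative) violates the null test
h_alt = np.sqrt(1.0 + Om * ((1.0 + z)**3 - 1.0) + 0.1 * z**2)
hp_alt = (3.0 * Om * (1.0 + z)**2 + 0.2 * z) / (2.0 * h_alt)
print(np.any(np.abs(L1(z, h_alt, hp_alt)) > 0.01))   # True
```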
If the concordance model is ruled out, it is still possible that a
curved $\Lambda$CDM model describes the Universe.
Equations \eqref{hz} and \eqref{whz} (with $w=-1$) form a linear
system for $\Omega_m$ and $\Omega_K$. Solving for these parameters we
can define
\begin{eqnarray}
\mathcal{O}^{(2)}_m(z) &\equiv & 2 \frac{ (1+z)(1-h^2)+z(2+z)hh'}{z^2(1+z)(3+z)}\!, \label{Om-hz}\\
\mathcal{O}_K(z) &\equiv & \frac{ 3 (1+z)^2 (h^2 -1)- 2z(3+3z+z^2)hh'}{z^2(1+z)(3+z)}\!, \label{OK-hz}
\end{eqnarray}
and we have
\begin{eqnarray}
\mathcal{O}^{(2)}_m(z)&=& \Omega_m ~~~\mbox{implies $\Lambda$CDM}, \nonumber\\
\mathcal{O}_K(z) &=& \Omega_K ~~~\mbox{implies $\Lambda$CDM}. \nonumber
\end{eqnarray}
These quantities are equivalent to those derived in
\cite{Clarkson:2009jq} in terms of $D(z)$, the dimensionless comoving
luminosity distance. The $D(z)$ forms contain second derivatives $D''$
whereas the $h(z)$ forms above contain only first derivatives $h'$.
Given observed Hubble rate data from which we can estimate the
derivative $h'(z)$, we can then estimate confidence limits for
$\mathcal{O}^{(2)}_m(z)$ and $\mathcal{O}_K(z)$. If these are not
consistent with a constant value, we can rule out $\Lambda$CDM in
general, and conclude that dark energy has $w\neq -1$ (or there is
modified gravity).
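Both diagnostics can be checked against an exact curved $\Lambda$CDM history, for which they return the input density parameters identically (Python sketch; the parameter values are illustrative):

```python
import numpy as np

def om2(z, h, hp):
    return 2.0 * ((1.0 + z) * (1.0 - h**2) + z * (2.0 + z) * h * hp) \
        / (z**2 * (1.0 + z) * (3.0 + z))

def ok(z, h, hp):
    return (3.0 * (1.0 + z)**2 * (h**2 - 1.0)
            - 2.0 * z * (3.0 + 3.0 * z + z**2) * h * hp) \
        / (z**2 * (1.0 + z) * (3.0 + z))

Om, Ok_true = 0.27, 0.05                      # illustrative curved-LCDM values
z = np.linspace(0.1, 2.0, 50)
h = np.sqrt(Om * (1.0 + z)**3 + Ok_true * (1.0 + z)**2 + 1.0 - Om - Ok_true)
hp = (3.0 * Om * (1.0 + z)**2 + 2.0 * Ok_true * (1.0 + z)) / (2.0 * h)

print(np.allclose(om2(z, h, hp), Om))       # True
print(np.allclose(ok(z, h, hp), Ok_true))   # True
```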
The more effective diagnostic of these consistency tests is the
vanishing of the derivatives of \eqref{Om-hz} and \eqref{OK-hz}. The
vanishing of $\mathcal{O}^{(2)\prime}_m$ is equivalent to
$\mathcal{L}^{(2)}=0$, where
\begin{eqnarray}
\mathcal{L}^{(2)}(z) &\equiv & 3 (1+z)^2 (h^2- 1)
- 2z(3+6z+2z^2)hh' \nonumber \\ &+& z^2 (3+z)(1+z)(h'^2+hh'').
\label{Lm2_test}
\end{eqnarray}
Then
\begin{eqnarray}
\mathcal{L}^{(2)}(z)&\neq &0 ~~~\mbox{falsifies $\Lambda$CDM}. \nonumber
\end{eqnarray}
The vanishing of $\mathcal{O}^{\prime}_K$ does not give any
independent information -- it is also equivalent to
$\mathcal{L}^{(2)}=0$.
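Since $2hh' = (h^2)'$ implies $h'^2 + hh'' = \frac{1}{2}(h^2)''$, for curved $\Lambda$CDM one has $h'^2 + hh'' = 3\Omega_m(1+z) + \Omega_K$, and $\mathcal{L}^{(2)}$ vanishes identically; a quick numerical check (Python sketch with illustrative parameters):

```python
import numpy as np

def L2(z, h, hp, hpp):
    return (3.0 * (1.0 + z)**2 * (h**2 - 1.0)
            - 2.0 * z * (3.0 + 6.0 * z + 2.0 * z**2) * h * hp
            + z**2 * (3.0 + z) * (1.0 + z) * (hp**2 + h * hpp))

Om, Ok = 0.27, 0.05                           # illustrative values
z = np.linspace(0.1, 2.0, 50)
h = np.sqrt(Om * (1.0 + z)**3 + Ok * (1.0 + z)**2 + 1.0 - Om - Ok)
hp = (3.0 * Om * (1.0 + z)**2 + 2.0 * Ok * (1.0 + z)) / (2.0 * h)
# differentiating 2 h h' = d(h^2)/dz gives h'^2 + h h'' = 3 Om (1+z) + Ok
hpp = (3.0 * Om * (1.0 + z) + Ok - hp**2) / h

print(np.allclose(L2(z, h, hp, hpp), 0.0))    # True for any curved LCDM
```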
Given observations of $h(z)$, we can construct this function
independently of the parameters of the model and test $\Lambda$CDM by
measuring consistency with zero. This has the advantage that it is
easier to detect deviations from zero rather than a constant, but at
the expense of requiring an extra derivative in the observable. This
is akin to detecting deviations from constant in $w$, but without
reliance on the parameters of the model.
For the application of these consistency tests, it is crucial to use a
model-independent method to reconstruct $\mathcal{O}_m^{(1)}$,
$\mathcal{O}_m^{(2)}$, $\mathcal{O}_K$, $\mathcal{L}^{(1)}$ and
$\mathcal{L}^{(2)}$. Model-dependent approaches have the problem that
they affect or even determine the outcome of the consistency test:
While fitting a $\Lambda$CDM model to the data would always lead to a
result that is consistent with $\Lambda$CDM, fitting a model that does
not include $\Lambda$CDM as a special case would result in
inconsistencies with $\Lambda$CDM. The only model-dependent approaches
that do not entirely determine the outcome of the test are those
assuming a model which includes $\Lambda$CDM as a special
case. Nevertheless, they affect the result by forcing the data into a
specific parametrisation, which might not reflect the true model. The
only way to avoid this problem is to use a non-parametric
approach. Here, we use Gaussian processes, which are described in
Appendix \ref{GP}.
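A minimal sketch of such a non-parametric reconstruction, using a zero-mean Gaussian process with a squared-exponential covariance (the hyperparameters $\sigma_f$ and $\ell$ are fixed by hand purely for illustration; the actual procedure is described in Appendix \ref{GP}):

```python
import numpy as np

def gp_mean(z_star, z_obs, y_obs, sigma, sf=0.5, ell=1.0):
    """Posterior mean of a zero-mean GP with squared-exponential kernel."""
    k = lambda a, b: sf**2 * np.exp(-(a[:, None] - b[None, :])**2 / (2.0 * ell**2))
    alpha = np.linalg.solve(k(z_obs, z_obs) + np.diag(sigma**2), y_obs)
    return k(z_star, z_obs) @ alpha

rng = np.random.default_rng(1)
z_obs = np.linspace(0.05, 1.8, 18)            # mimicking 18 CC points
h_true = np.sqrt(1.0 + 0.27 * ((1.0 + z_obs)**3 - 1.0))
y_obs = h_true + rng.normal(0.0, 0.02, z_obs.size)

h_rec = gp_mean(z_obs, z_obs, y_obs, 0.02 * np.ones(18))
print(np.max(np.abs(h_rec - h_true)))         # small: smooth reconstruction
```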
\begin{figure*}
\includegraphics[width=0.3\textwidth]{Plots/Om1_chrono.eps}
\includegraphics[width=0.3\textwidth]{Plots/Om1_BAO.eps}
\includegraphics[width=0.3\textwidth]{Plots/Om1_cc_BAO.eps}\\
\includegraphics[width=0.3\textwidth]{Plots/Om2_chrono.eps}
\includegraphics[width=0.3\textwidth]{Plots/Om2_BAO.eps}
\includegraphics[width=0.3\textwidth]{Plots/Om2_cc_BAO.eps}\\
\includegraphics[width=0.3\textwidth]{Plots/Ok_chrono.eps}
\includegraphics[width=0.3\textwidth]{Plots/Ok_BAO.eps}
\includegraphics[width=0.3\textwidth]{Plots/Ok_cc_BAO.eps}
\caption{$\mathcal{O}^{(1)}_m(z)$ (top), $\mathcal{O}^{(2)}_m(z)$
(middle) and $\mathcal{O}_K(z)$ (bottom) reconstructed from
cosmic chronometers (left), BAO (middle) and CC+BAO (right).
For $\mathcal{O}^{(1)}_m(z)$, the dashed (red) curve is flat
$\Lambda$CDM. For $\mathcal{O}^{(2)}_m(z)$ and
$\mathcal{O}_K(z)$ it is a curved $\Lambda$CDM model.}
\label{Om1}\label{Om2}\label{Ok}
\end{figure*}
\section{Reconstruction and consistency tests from $H(z)$ data}\label{reconstruction}
Cosmic chronometers are based on observations of the differential ages
of galaxies
\cite{Stern,Jimenez:2001gg,Crawford:2010rg,Moresco:2012jh}. The Hubble
rate at an emitter with redshift $z$ is
\begin{equation}
H(z) = -\frac{1}{1+z} \frac{dz}{dt_e},
\end{equation}
where $t_e$ is the proper time of emission. The differential method
uses passively evolving galaxies formed at the same time to determine
the age difference $\Delta t_e$ in a small redshift bin $\Delta z$,
assuming a Friedmann background. To find old galaxies sharing the same
formation time, we have to look for the oldest stars in both galaxies
and show that they have the same age. This method is effective; but
while the differential approach significantly
reduces the systematics that would be present when determining the
absolute ages of galaxies, it
still faces uncertainties due to the assumptions that are made to
estimate the age.
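A toy numerical version of the differential-age estimator, using a fiducial flat $\Lambda$CDM history to generate the age difference (all numbers are illustrative):

```python
import numpy as np

H0, Om = 70.4, 0.27                              # illustrative fiducial values
H = lambda z: H0 * np.sqrt(1.0 + Om * ((1.0 + z)**3 - 1.0))

# coeval galaxies observed at z1 < z2; their age difference follows from
#   dt_e = -dz / ((1+z) H)  =>  Delta t = integral_{z1}^{z2} dz / ((1+z) H(z))
z1, z2 = 0.48, 0.52
zg = np.linspace(z1, z2, 2001)
f = 1.0 / ((1.0 + zg) * H(zg))
dt = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(zg))   # trapezoidal rule

z_mid = 0.5 * (z1 + z2)
H_est = (z2 - z1) / ((1.0 + z_mid) * dt)            # differential-age estimator
print(abs(H_est - H(z_mid)) / H(z_mid) < 1e-3)      # True: recovers H(z)
```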
The second way to measure $H(z)$ is the observed line-of-sight
redshift separation $\Delta z$ of the baryonic acoustic oscillation
(BAO) feature in the galaxy 2-point correlation function
\cite{Gaztanaga:2008xz,Chuang,Blake:2012pj},
\begin{equation}
H(z) = \frac{\Delta z}{r_s(z_d)}\,,
\end{equation}
where $r_s(z_d)$ is the sound horizon at the baryon drag epoch.
\begin{figure*}
\includegraphics[width=0.3\textwidth]{Plots/Lm1_chrono.eps}
\includegraphics[width=0.3\textwidth]{Plots/Lm1_BAO.eps}
\includegraphics[width=0.3\textwidth]{Plots/Lm1_cc_BAO.eps}\\
\includegraphics[width=0.3\textwidth]{Plots/Lm2_chrono.eps}
\includegraphics[width=0.3\textwidth]{Plots/Lm2_BAO.eps}
\includegraphics[width=0.3\textwidth]{Plots/Lm2_cc_BAO.eps}
\caption{$\mathcal{L}^{(1)}_m=\mathcal{L}^{(1)}/(1+z)^6$ (top) and
$\mathcal{L}^{(2)}_m=\mathcal{L}^{(2)}/(1+z)^6$
(bottom) reconstructed from cosmic chronometers (left), BAO (middle) and CC+BAO
(right). The dashed (red) curve is a $\Lambda$CDM model. }
\label{Lm1_Lm2}
\end{figure*}
\begin{figure*}
\includegraphics[width=0.3\textwidth]{Plots/w_chrono.eps}
\includegraphics[width=0.3\textwidth]{Plots/w_BAO.eps}
\includegraphics[width=0.3\textwidth]{Plots/w_cc_BAO.eps}
\caption{$w(z)$ reconstructed from cosmic chronometers (left), BAO
(middle~-- note the different $z$ range) and CC+BAO
(right) by marginalizing over $\Omega_m = 0.275 \pm 0.016$.
The dashed (red) curve is a $\Lambda$CDM model.}
\label{wfig}
\end{figure*}
\subsection*{Results: real data}
\noindent We use the following $H(z)$ data sets:\\[1mm]
{\em CC:} ~~18 cosmic chronometer data points \cite{Moresco:2012by}.\\
{\em BAO:} ~~6 radial BAO data points \cite{Blake:2012pj,Gaztanaga:2008xz,Chuang}.\\
{\em CC+BAO:} ~~Combination of CC and BAO sets.\hspace{4mm}
We normalize $H(z)$ using $H_0 = 70.4 \pm
2.5\,$km\,s${}^{-1}$Mpc${}^{-1}$. The uncertainty in $H_0$ is
transferred to $h(z)$ as $ \sigma_h^2 = ({\sigma_H^2}/{H_0^2}) +
({H^2}/{H_0^4}) \sigma^2_{H_0}.$ The reconstructed functions $h(z)$
and $h'(z)$ are shown in Fig.~\ref{hfig}. The shaded regions
correspond to the 68\% and 95\% confidence levels (CL). The true model
is expected to lie within the 68\% CL for 68\% of the plotted redshift
range. Note that this is only an expectation value. The actual value for
a specific function may deviate from the expectation. The dependence
of the actual percentage on the smoothness of the function has been
analysed in \cite{Seikel}.
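The quoted error propagation for $h=H/H_0$ is the first-order expansion of the variance of the ratio; a quick Monte Carlo check (the $H(z)$ data point is illustrative):

```python
import numpy as np

H0, sigma_H0 = 70.4, 2.5                   # normalisation used in the text
Hz, sigma_H = 90.0, 4.0                    # an illustrative H(z) data point

# first-order propagation: sigma_h^2 = sigma_H^2/H0^2 + H^2 sigma_H0^2/H0^4
sigma_h = np.sqrt(sigma_H**2 / H0**2 + Hz**2 * sigma_H0**2 / H0**4)

# Monte Carlo check: scatter of h = H/H0 under both error sources
rng = np.random.default_rng(0)
h_samples = rng.normal(Hz, sigma_H, 200_000) / rng.normal(H0, sigma_H0, 200_000)
print(abs(h_samples.std() - sigma_h) / sigma_h < 0.05)   # True to first order
```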
Figure \ref{Om1} shows the reconstruction of $\mathcal{O}_m^{(1)}$.
The reconstruction of $\mathcal{O}_m^{(2)}$ and $\mathcal{O}_K$ is
shown in Fig. \ref{Om2}, and Fig.~\ref{Lm1_Lm2} gives
$\mathcal{L}^{(1)}$ and $\mathcal{L}^{(2)}$. We actually plot a
modified $\mathcal{L}_m=\mathcal{L}/(1+z)^6$ which stabilises the
errors at high redshift without affecting the consistency condition.
The reconstructed $w(z)$, also requiring $h'$, is shown in
Fig. \ref{wfig}, where we assume the concordance values $\Omega_m =
0.275 \pm 0.016$ and $\Omega_K= 0$ \cite{Komatsu}.
\begin{figure*}
\includegraphics[width=0.3\textwidth]{Plots/lcdm_H.eps}
\includegraphics[width=0.3\textwidth]{Plots/evolv_H.eps}\\
\includegraphics[width=0.3\textwidth]{Plots/lcdm_dH.eps}
\includegraphics[width=0.3\textwidth]{Plots/evolv_dH.eps}\\
\includegraphics[width=0.3\textwidth]{Plots/lcdm_d2H.eps}
\includegraphics[width=0.3\textwidth]{Plots/evolv_d2H.eps}
\caption{$h(z)$ (top), $h'(z)$ (middle) and $h''(z)$ (bottom)
reconstructed from
simulated data, assuming a concordance model (left) and model \eqref{wzevo} with slowly
evolving $w(z)$ (right). }
\label{hmock}
\end{figure*}
\begin{figure*}
\includegraphics[width=0.3\textwidth]{Plots/Om1_lcdm.eps}
\includegraphics[width=0.3\textwidth]{Plots/Om1_evolv.eps}\\
\includegraphics[width=0.3\textwidth]{Plots/Om2_lcdm.eps}
\includegraphics[width=0.3\textwidth]{Plots/Om2_evolv.eps}\\
\includegraphics[width=0.3\textwidth]{Plots/Ok_lcdm.eps}
\includegraphics[width=0.3\textwidth]{Plots/Ok_evolv.eps}
\caption{$\mathcal{O}^{(1)}_m(z)$ (top), $\mathcal{O}^{(2)}_m(z)$
(middle) and $\mathcal{O}_K(z)$ (bottom) reconstructed from
simulated data, assuming a concordance model (left) and model \eqref{wzevo} (right).}
\label{mock_OmOk}
\end{figure*}
\begin{figure*}
\includegraphics[width=0.3\textwidth]{Plots/Lm1_lcdm.eps}
\includegraphics[width=0.3\textwidth]{Plots/Lm1_evolv.eps}\\
\includegraphics[width=0.3\textwidth]{Plots/Lm2_lcdm.eps}
\includegraphics[width=0.3\textwidth]{Plots/Lm2_evolv.eps}
\caption{$\mathcal{L}^{(1)}_m=\mathcal{L}^{(1)}/(1+z)^6$ (top) and
$\mathcal{L}^{(2)}_m=\mathcal{L}^{(2)}/(1+z)^6$
(bottom) reconstructed from
simulated data, assuming a concordance model (left) and model \eqref{wzevo} (right). }
\label{mock_Lm}
\end{figure*}
\begin{figure*}
\includegraphics[width=0.4\textwidth]{Plots/w_lcdm.eps}
\includegraphics[width=0.4\textwidth]{Plots/w_evolv.eps}
\caption{$w(z)$ reconstructed from
simulated data, assuming a concordance model (left) and model \eqref{wzevo} (right), by marginalizing over $\Omega_m = 0.275 \pm 0.016$.}
\label{mockw}
\end{figure*}
\subsection*{Results: mock data}
To demonstrate how a larger number of data points will affect our results
when reconstructing $w$ and testing $\Lambda$CDM, we simulated a data
set of 64 points for $H(z)$, drawing the error from a Gaussian
distribution $\mathcal{N}(\bar{\sigma},\epsilon)$ with $\bar{\sigma} =
10.64z+8.86$ and $\epsilon = 0.125(12.46z+3.23)$, adapting the
methodology of~\cite{Ma:2010mr}.
We simulated data points for two different models:\\
Concordance model, $\Omega_K = 0$, $\Omega_m = 0.27$.\\
A model with slowly evolving equation of state:
\begin{equation}
w(z) = -\frac{1}{2}
+\frac{1}{2} \tanh\Big[3 \Big(z- \frac{1}{2}\Big)\Big],
\label{wzevo}
\end{equation}
and the same concordance density parameters.
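A sketch of the mock-generation procedure for the evolving-$w$ model, where the dark-energy density follows from $\rho_{de}(z)/\rho_{de}(0)=\exp\big[3\int_0^z (1+w)/(1+z')\,dz'\big]$ (the random seed and grid sizes are arbitrary choices):

```python
import numpy as np

def trap(y, x):
    """Trapezoidal integral of samples y over grid x."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

rng = np.random.default_rng(2)
H0, Om = 70.0, 0.27                               # illustrative fiducial values
z = np.sort(rng.uniform(0.0, 2.0, 64))            # 64 mock redshifts

w = lambda zz: -0.5 + 0.5 * np.tanh(3.0 * (zz - 0.5))   # evolving w(z)

def de_factor(zi, n=2000):
    r"""rho_de(z)/rho_de(0) = exp(3 \int_0^z [1+w]/(1+z') dz')."""
    zg = np.linspace(0.0, zi, n)
    return np.exp(3.0 * trap((1.0 + w(zg)) / (1.0 + zg), zg))

H_fid = H0 * np.sqrt(np.array(
    [Om * (1.0 + zi)**3 + (1.0 - Om) * de_factor(zi) for zi in z]))

# heteroscedastic errors drawn as in the text
sigma = np.abs(rng.normal(10.64 * z + 8.86, 0.125 * (12.46 * z + 3.23)))
H_obs = H_fid + rng.normal(0.0, sigma)
print(H_obs.size)                                  # 64
```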
The GP reconstructions are shown in
Figs. \ref{hmock}--\ref{mockw}.
\subsection*{Discussion}\label{conclusion}
Figure \ref{Om1} shows that for the CC and CC+BAO data (18 and 24
points), we get good reconstructions when there is no differentiation
of $h(z)$ involved. The BAO data set only contains 6 data points up
to redshift 0.73. Beyond that redshift, the reconstruction differs
significantly from $\Lambda$CDM. The results from the CC and CC+BAO
sets are however in very good agreement with $\Lambda$CDM.
The BAO data appear to be inconsistent with the concordance
model. However, 6 data points are not sufficient for a reliable
reconstruction. The two data points with highest redshift happen to be
below the concordance curve, which pulls the reconstructed curve
down. This is probably just a coincidence, but it illustrates the
importance of having the derivative of the data consistent with the
model, as well as the data itself. Current and upcoming large-volume
surveys, such as BOSS~\cite{Schlegel:2009hj}, EUCLID~\cite{euclid}
and SKA~\cite{Ansari:2011bv}, will provide radial BAO measurements of
increasing number and precision.
The reconstruction of $\mathcal{O}_m^{(2)}$ and $\mathcal{O}_K$ shown
in Fig. \ref{Om2} is more challenging for the available data set,
since we need the first derivative of $h$. With present data sets,
the uncertainties in the reconstruction are quite large. Using CC and
CC+BAO, these results as well as the results for $\mathcal{L}^{(1)}$
and $\mathcal{L}^{(2)}$ shown in Fig.~\ref{Lm1_Lm2}, are consistent
with $\Lambda$CDM.
For the mock data sets, Figs.~\ref{hmock} and \ref{mock_OmOk} show
that the GP reconstructions recover the assumed models very
effectively. We can clearly distinguish the model with slowly evolving
$w(z)$ from $\Lambda$CDM in $\mathcal{O}_m^{(1)}$. For
$\mathcal{O}_m^{(2)}$ and $\mathcal{O}_K$, the reconstruction errors
are too large to see this difference. The same is true for
consistency tests $\mathcal{L}^{(1)}$ and $\mathcal{L}^{(2)}$ shown in
Fig.~\ref{mock_Lm}.
The reconstruction of the equation of state $w(z)$ also shows a clear
difference of the two models, assuming we can accurately determine
$H_0$, $\Omega_m$ and $\Omega_K$ separately from $w(z)$: see
Fig.~\ref{mockw}. GP works very well to recover the assumed $w$. With
less than 100 data points, we can reconstruct a dynamical dark energy
model far better than is achievable using thousands of SNIa data~--
compare to analogous reconstructions in~\cite{Seikel}.
\section{Conclusions}
We have considered the information that current and future $H(z)$ data
can give us. Currently such data come from cosmic chronometers and BAO
measurements, and are plainly consistent with the concordance model. Future
data, however, will provide a powerful discriminator between models.
It is remarkable how few data points are required compared to
supernovae: to reconstruct $w(z)$ accurately in our non-parametric way
requires many thousands of SNIa, compared to less than 100 $H(z)$ data
points.
We have derived and analysed new consistency tests for the
$\Lambda$CDM model, which we have formulated in terms of $H(z)$
directly, rather than using the more familiar distance
function~\cite{Clarkson:2009jq,2012arXiv1204.5505C}. By smoothing the
data points using Gaussian processes, we have shown that these can be
very effective in determining that $\Lambda$CDM is the incorrect
model, but without having to assume the key parameters $\Omega_m$ and
$\Omega_K$, which currently only have constraints derived by assuming
$\Lambda$CDM or a similar alternative. These tests not only require
that the data points themselves are consistent with the model, but
that their derivative is also.
Future data which directly measures the expansion history will
therefore play an important role in future dark energy studies.
~\\{\bf Acknowledgements:}\\
We thank Phil Bull and Mat Smith for discussions.
SY and RM are supported by the South African Square Kilometre Array
Project. MS and CC are supported by the National Research Foundation
(NRF) South Africa. RM is supported by the UK Science \& Technology
Facilities Council (grant no. ST/H002774/1).
\label{sec:intro}
\input{tex/intro}
\section{Preliminaries and Related Work}
\label{sec:related}
\input{tex/related}
\section{Approach}
\label{sec:approach}
\input{tex/approach}
\section{Experiments and Results}
\label{sec:exp}
\input{tex/exp}
\section{Conclusion}
\label{sec:conc}
\input{tex/conc}
\section{Acknowledgement}
\label{sec:acks}
\input{tex/acks.tex}
\bibliographystyle{IEEEtran}
\subsection{The proposed algorithm}
\label{subsec:algorithm}
The algorithm we propose achieves the following goals:
\begin{enumerate}
\item Scalable on GPU cores.
\item Memory consumption proportional to the number of matched triangles.
\end{enumerate}
Unlike most previous methods that are based on either tree search or graph indexing, our algorithm is BFS-based, which is a better fit for the GPU\@. In terms of memory usage, though the algorithm is not recursive, we are still able to limit the space needed to an amount proportional to the number of edges in the graph by using more efficient pruning techniques and selecting query order in a novel way. Algorithm~\ref{alg:tcsm} is the pseudocode that shows both the algorithm itself and its implementation using the Gunrock framework. Our inputs include a triangle as a query graph $Q$ and a large data graph $G$ for searching. Both graphs are undirected. Our output includes the exact number of matched triangles as well as the node sequence listings of them.
\begin{algorithm}[!ht]
\label{alg:tcsm}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\begin{small}
\begin{algorithmic}[1]
\Require{Triangle Graph $Q$, Data Graph $G$.}
\Ensure{Count of triangles $n$ and listings of all matched triangles.}
\Procedure{PreCompute\_on\_CPUs}{}
\State
\Call{Store\_non\_tree\_connection}{$E$}
\State
\Call{Generate\_UMO}{NEC}
\EndProcedure
\Procedure{Filtering\_candidate\_set}{$Q,G$}
\State
\Call{Advance+Compute}{$G$}\Comment{Compute NE for each node}
\State
\Call{Filter+Compute}{$G,Q,c\_set$}\Comment{Filter nodes based on (NE); update NE}
\EndProcedure
\While{$(\lvert M[i]\rvert < \lvert Q\rvert)$}
\Procedure{Verifying\_Constraints}{$G,Q,c\_set, M$}
\State
\Call{Advance}{$c\_set$}\Comment{BFS traversal from source nodes in $c\_set$ to dest nodes that are verified on stored constraints.}
\State
\Call{Compute}{$c\_set$}\Comment{Compact satisfied dest nodes to $c\_set$.}
\State
\Call{Write\_to\_Partial}{$M$}\Comment{Add updated $c\_set$ to partial results $M$.}
\State
\Call{Mask}{$M$}\Comment{Set incomplete partial results to invalid values.}
\EndProcedure
\EndWhile
\State
\Return{Triangle count:$\frac{\lvert M\rvert}{\lvert Q\rvert}$, $M$}
\end{algorithmic}
\end{small}
\end{algorithm}
For the pre-computing part, we need to store the query node connection information. We make sure that the node sequence we generate matches the BFS traversal order of a spanning tree derived from $Q$. So we need to store any non-tree edge connection (line 2 of Alg.~\ref{alg:tcsm}). We maintain a set of constraints on query node ID values to avoid generating duplicated subgraphs. We call this a \emph{unique mapping order} (UMO) (line 3 of Alg.~\ref{alg:tcsm}). The concept is derived from the neighborhood equivalence idea from Turbo$_{\text{ISO}}$~\cite{Han:2013:TIT}.
\begin{definition}
Any pair of nodes $u_i, u_j \in V(Q)$ are neighborhood-equivalent (denoted by $\simeq$), if for every embedding $m$ that contains node mapping pairs $(u_i, v_i)$ and $(u_j, v_j)$ where $v_i, v_j \in V(G)$, there exists an embedding $m'$ such that $m'=m - \{(u_i, v_i), (u_j, v_j)\} \cup \{(u_i, v_j), (u_j, v_i)\}$.
\end{definition}
The above definition illustrates that neighborhood-equivalent query nodes share the same matched vertices in the data graph. The equivalence class of a query vertex $u$ is a set of query vertices that are neighborhood-equivalent to ($\simeq$) $u$. This class is called the neighborhood equivalence class (NEC). The Turbo$_{\text{ISO}}$~\cite{Han:2013:TTU} paper proves a lemma that, in an NEC, for each node $u\in V(Q)$, either an NEC member $u_n$ has the same label and the same set of adjacent vertices as $u$, or every member of the NEC has the same label and the members are adjacent to each other. By leveraging the NEC idea, we first find NECs in the query graph, and inside each NEC, we define an ordering based on node IDs and store the orderings as constraints. For example, a triangle query graph itself is an NEC. The ordering we define is $\{u_1, u_2, u_3\}$ ($u_1,u_2,u_3$ are node IDs of the query graph) where $u_1<u_2<u_3$. So when doing triangle matching in the data graph, we only traverse edges with a destination node ID value larger than the source node ID value.
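As an illustration of the unique mapping order for the triangle query, a serial Python sketch (standing in for the GPU kernels) that only follows edges toward larger node IDs, so each triangle is emitted exactly once:

```python
def count_triangles(adj):
    """adj: dict node -> set of neighbours (undirected, unlabeled graph)."""
    matches = []
    for u in adj:
        for v in adj[u]:
            if v <= u:                         # UMO: require u < v
                continue
            for w in adj[v]:
                if w <= v or w not in adj[u]:  # require v < w and closure
                    continue
                matches.append((u, v, w))
    return len(matches), matches

# a 4-clique contains C(4,3) = 4 triangles, each listed exactly once
k4 = {i: {j for j in range(4) if j != i} for i in range(4)}
n, m = count_triangles(k4)
print(n, sorted(m))   # 4 [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
```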
The main algorithm can be separated into two big steps: \emph{filtering} and \emph{verification}. The filtering step starts with a computation of neighborhood encoding (NE) (line 6 of Alg.~\ref{alg:tcsm}), which is computed based on the degrees of nodes in the data graph. Here a node $u$'s NE is defined as the degree of $u$. NE is based on an idea of using an integer to represent neighborhood information that characterizes each vertex in the data graph. We compute each triangle node's NE during pre-processing on the CPU and filter the candidate nodes in $G$ based on NE (line 7 of Alg.~\ref{alg:tcsm}). Note that NE information is updated once we filter out non-valid candidate nodes. The verification step is based on multi-source breadth-first search, where the sources are the candidate set of nodes from the filtering step. After each traversal, we verify if the destination nodes can be added to the partial results based on the pre-computed constraints (line 11 of Alg.~\ref{alg:tcsm}). Note that during verification, we avoid excessive memory usage by doing compaction before writing to partial results (line 13 of Alg.~\ref{alg:tcsm}), and before the next traversal, we mask out the unfruitful partial results that contain nodes/edges less than the current stage number (line 14 of Alg.~\ref{alg:tcsm}). The number of parallel traversals depends on the level of the query graph spanning tree. So in the pseudocode of Alg.~\ref{alg:tcsm}, we use a while loop to ensure partial results are fully complete before returning the final results.
\subsection{Implementation}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.85\columnwidth]{img/sm}
\caption{Implementation flow chart. \label{fig:sm}}
\end{center}
\end{figure}
To better understand Alg.~\ref{alg:tcsm}, we draw an implementation flowchart, which can be found in Figure~\ref{fig:sm}.
We use compressed sparse row (CSR) format as our data structure to store graphs in a space-efficient fashion.
Filter, advance and compute are the three operators we use from the Gunrock library. The filter operator takes in all nodes from the data graph $G$ and returns nodes with satisfied NE requirements. Next, we use the advance operator to traverse all the neighbor nodes of the candidate nodes. We then verify whether the newly traversed edges in the data graph satisfy connection constraints with corresponding query edges. The connection constraints include both the connection with existing partial results as well as requirements brought by corresponding query edges. By using the advance operator, this step is done in a massively parallel manner where the many newly traversed edges are mapped to consecutive GPU threads and the verification computation is also done in a SIMT manner. Then we get output neighbor nodes from the threads that pass the constraint-verification tests of the advance operator. In the next step, we use a compute operator to compact the candidate nodes from scattered threads to consecutive positions in order to serve as the inputs for the next iteration of advance traversal. Note that in GPU computing, consecutive (coalesced) reads perform much better than scattered reads. So not only do we benefit from better memory complexity but also gain a performance boost from compaction.
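The filter/advance/compact pipeline can be mimicked serially; the following Python sketch is a CPU stand-in for the GPU operators (it is not the Gunrock API), combining the degree filter, the UMO constraint and a 1-look-ahead prune:

```python
def triangles_frontier(adj):
    """Serial stand-in for the filter -> advance -> compact pipeline
    (illustrative only; this is not the actual Gunrock operator API)."""
    # filter: a triangle candidate needs degree >= 2 (its NE requirement)
    cand = {v for v in adj if len(adj[v]) >= 2}

    # advance 1: extend each source u by a neighbour v with v > u (UMO)
    partial = [(u, v) for u in cand for v in adj[u] if v > u and v in cand]
    # compact with a 1-look-ahead prune: drop pairs that cannot close
    partial = [(u, v) for (u, v) in partial
               if any(w > v and w in adj[u] for w in adj[v])]

    # advance 2: close the triangle with w > v adjacent to both u and v
    return [(u, v, w) for (u, v) in partial
            for w in adj[v] if w > v and w in adj[u]]

graph = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}, 4: {5}, 5: {4}}
print(sorted(triangles_frontier(graph)))   # [(0, 1, 2), (0, 2, 3)]
```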
\subsection{Optimizations}
Our first optimization is $k$-look-ahead, borrowed from the VF3 algorithm, in the verification step to filter out more redundant partial results in each iteration according to feasibility rules besides using connection constraints.
$k$-look-ahead originates from the idea that it is possible to prove a non-consistent state will not generate any successive consistent states. However, it may happen that, even if the state is consistent according to our constraints verification, after a certain number of steps it cannot generate any consistent descendants and, thus, cannot contribute to the final results. The detection of such situations would help to further reduce the number of explored states by avoiding the generation of consistent, but unfruitful, states. In order to achieve this aim, the algorithm verifies if the addition of the new candidate nodes to the partial results generates a consistent state; in addition, it is able to detect $k$ steps in advance if the state will not have any consistent descendants, a $k$-look-ahead.
Note that this optimization is a necessary but not sufficient condition. If it is false, it guarantees that the current candidate node will not pass the next iteration of verification. In our implementation, we use 1- and 2-look-ahead only, which in practice prunes out a significant fraction of unfruitful partial results.
Another optimization is our method of avoiding duplicated final results that cannot be discovered when they are incomplete. For instance, if we visit a certain part of the data graph with a different node sequence, we can get different partial results at the beginning but the same solution in the end, which increases the number of intermediate results and requires further effort to filter out duplicated final results as well. One possible solution is to use a hash table to keep track of all node sequences. Though it stops the generation of duplicate results, it cannot filter out unfruitful intermediate results. Moreover, such a large hash table storing all partial solutions would be very hard to fit on a single GPU\@. Instead we add constraints to the query node visiting order to ensure that no duplicate final results would be generated from the beginning. We successfully transfer the idea of a \emph{node equivalence class} (NEC), which was previously used to reduce a depth-first search space to solve this problem (mentioned in Section~\ref{subsec:algorithm}). For example, consider a triangle: all three nodes of a triangle are equivalent. So we need to define a visiting order of those equivalent nodes to avoid visiting the same combination multiple times. So in this example, we relate the visiting order to node id value and only visit in the order of increasing node id values. In this way, the same combination of nodes will only be visited once in the data graph.
As a summary, we use the following optimizations:
\begin{enumerate}
\item Compaction used after each advance operation to move the scattered intermediate results to adjacent spaces in memory, which improves data access efficiency as well as memory usage.
\item $k$-look-ahead before adding a new node to partial results to prune out unfruitful intermediate results.
\item Encoding neighborhood information for better filtering and constraint verification.
\item Use the node equivalence idea to avoid generating duplicate solutions and redundant intermediate results.
\item Mask out incomplete partial results at the end of each iteration to save memory usage.
\end{enumerate}
\subsection{Experimental Setup}
\paragraph{System}
We tested our triangle counting implementation on an NVIDIA Titan~V GPU. The Titan~V is a Volta-based GPU with 80 streaming multiprocessors (SMs) and 12 GB HBM2 memory. The total memory bandwidth of the Titan~V is 652.8 GB/s. In our experiments, we compare with last year's champion~\cite{Hu:2018:HTC}. In their paper, they run experiments on an NVIDIA Tesla P100 GPU from the San Diego Super Computer Center (SDSC)\@. This P100 GPU is a Pascal-based GPU with 56 SMs and 16 GB CoWoS HBM2 memory at 732 GB/s memory bandwidth. Our implementation is compiled with CUDA 10.0.
\paragraph{Dataset}
The experiments are done using both real-world and synthetic datasets from the HPEC graph challenge. The graphs are unlabeled. Specifically, we compare results on the small graphs used by Hu et al.~\cite{Hu:2018:HTC}, because our implementation currently only supports a single GPU with limited memory. However, it could easily be extended to an efficient multi-GPU implementation under the Gunrock framework. We also assume the input graph is undirected in our implementation.
\subsection{Results}
In Table~\ref{tab:perf}, we show our performance in time and traversed edges per second (TEPS)\@. We compare with last year's champion~\cite{Hu:2018:HTC}. From the table we can see that we are faster than Hu et al.~\cite{Hu:2018:HTC} on all but two of the datasets, with the largest slowdown on the biggest synthetic graph. Note that the given synthetic datasets have a larger number of triangles, which means more intermediate results will be generated during the computation and thus slow down our implementation.
\begin{table*}[tbph]
\centering
\footnotesize
\begin{tabular}{@{}lcccccc@{}}
\toprule
Graph & $\lvert V \rvert$ & $\lvert E \rvert$ & Triangles & Runtime (ms) & Rate (TEPS) & Speedup \\
\midrule
amazon0302 & 262,112 & 899,792 & 717,719 & 0.445414 & 2.02E+09 & 9.53 \\
amazon0312 & 400,728 & 2,349,869 & 3,686,467 & 2.707648 & 8.68E+08 & 4.23 \\
amazon0505 & 410,237 & 2,439,437 & 3,951,063 & 2.840805 & 8.59E+08 & 4.17 \\
amazon0601 & 403,395 & 2,443,408 & 3,986,507 & 2.836609 & 8.61E+08 & 3.73 \\
as20000102 & 6,475 & 12,572 & 6,584 & 0.298905 & 4.21E+07 & 1.01 \\
as-caida20071105 & 26,476 & 53,381 & 36,365 & 0.599098 & 8.91E+07 & 8.17 \\
ca-AstroPh & 18,773 & 198,050 & 1,351,441 & 0.427914 & 4.63E+08 & 4.94 \\
ca-CondMat & 23,134 & 93,439 & 173,361 & 0.125122 & 7.47E+08 & 11.65 \\
ca-GrQc & 5,243 & 14,484 & 48,260 & 0.063396 & 2.28E+08 & 17.31 \\
ca-HepPh & 12,009 & 118,489 & 3,358,499 & 0.973678 & 1.22E+08 & 1.79 \\
ca-HepTh & 9,878 & 25,973 & 28,339 & 0.055146 & 4.71E+08 & 14.90 \\
cit-HepPh & 34,547 & 420,877 & 1,276,868 & 0.763726 & 5.51E+08 & 3.49 \\
cit-HepTh & 27,771 & 352,285 & 1,478,735 & 1.486301 & 2.37E+08 & 1.87 \\
cit-Patents & 3,774,769 & 16,518,947 &7,515,023 & 32.25143 & 5.12E+08 & 2.40 \\
email-Enron & 36,693 & 183,831 & 727,044 & 0.718355 & 2.56E+08 & 2.44 \\
email-EuAll & 265,215 & 364,481 & 267,313 & 1.946259 & 1.87E+08 & 1.84 \\
facebook\_combined & 4,040 & 88,234 & 1,612,010 & 1.030469 & 8.56E+07 & 1.39 \\
flickrEdges & 105,939 & 2,316,948 & 107,987,357 & 27.124476 & 8.54E+07 & 1.06 \\
graph500-scale18-ef16 & 174,148 & 3,800,348 & 82,287,285 & 1.0303421 & 1.69E+08 & 2.09 \\
graph500-scale19-ef16 & 335,319 & 7,729,675 & 186,288,972 & 3.23446107 & 1.04E+06 & 1.38 \\
graph500-scale20-ef16 & 645,821 & 15,680,861 & 419,349,784 & 9.46509552 & 6.82E+07 & 1.01 \\
graph500-scale21-ef16 & 1,243,073 & 31,731,650 & 935,100,883 & 29.26859307 & 4.25E+07 & 0.76 \\
loc-brightkite edges & 58,229 & 214,078 & 494,728 & 0.550747 & 3.89E+08 & 4.12 \\
loc-gowalla edges & 196,592 & 950,327 & 2,273,138 & 6.270409 & 1.52E+08 & 0.77 \\
oregon1\_010331 & 10,671 & 22,002 & 17,144 & 0.421596 & 5.22E+07 & 2.11 \\
oregon1\_010407 & 10,730 & 21,999 & 15,834 & 0.414824 & 5.30E+07 & 1.96 \\
oregon1\_010414 & 10,791 & 22,469 & 18,237 & 0.43478 & 5.17E+07 & 2.02 \\
oregon1\_010421 & 10,860 & 22,747 & 19,108 & 0.437284 & 5.20E+07 & 1.86 \\
oregon1\_010428 & 10,887 & 22,493 & 17,645 & 0.433493 & 5.19E+07 & 1.89 \\
oregon1\_010505 & 10,944 & 22,607 & 17,597 & 0.437427 & 5.17E+07 & 2.06 \\
oregon1\_010512 & 11,012 & 22,677 & 17,598 & 0.449109 & 5.05E+07 & 1.82 \\
oregon1\_010519 & 11,052 & 22,724 & 17,677 & 0.44961 & 5.05E+07 & 1.82 \\
oregon1\_010526 & 11,175 & 23,409 & 19,894 & 0.439644 & 5.32E+07 & 1.86 \\
oregon2\_010331 & 10,901 & 31,180 & 82,856 & 0.44663 & 6.98E+07 & 2.10 \\
oregon2\_010407 & 10,982 & 30,855 & 78,138 & 0.456381 & 6.76E+07 & 2.05 \\
oregon2\_010414 & 11,020 & 31,761 & 88,905 & 0.43776 & 7.26E+07 & 1.95 \\
oregon2\_010421 & 11,081 & 31,538 & 82,129 & 0.458956 & 6.87E+07 & 1.85 \\
oregon2\_010428 & 11,114 & 31,434 & 78,000 & 0.432467 & 7.27E+07 & 1.99 \\
oregon2\_010505 & 11,158 & 30,943 & 72,182 & 0.446486 & 6.93E+07 & 1.90 \\
oregon2\_010512 & 11,261 & 31,303 & 72,866 & 0.437808 & 7.15E+07 & 2.10 \\
oregon2\_010519 & 11,376 & 32,287 & 83,709 & 0.446916 & 7.22E+07 & 1.91 \\
oregon2\_010526 & 11,462 & 32,730 & 89,541 & 0.447869 & 7.31E+07 & 1.91 \\
p2p-Gnutella04 & 10,877 & 39,994 & 934 & 0.054383 & 7.35E+08 & 21.63 \\
p2p-Gnutella05 & 8,847 & 31,839 & 1,112 & 0.058627 & 5.43E+08 & 14.33 \\
p2p-Gnutella06 & 8,718 & 31,525 & 1,142 & 0.062394 & 5.05E+08 & 13.51 \\
p2p-Gnutella08 & 6,302 & 20,777 & 2,383 & 0.060773 & 3.42E+08 & 18.19 \\
p2p-Gnutella09 & 8,115 & 26,013 & 2,354 & 0.059509 & 4.37E+08 & 20.33 \\
p2p-Gnutella24 & 26,519 & 65,369 & 986 & 0.093102 & 7.02E+08 & 14.01 \\
p2p-Gnutella25 & 22,688 & 54,705 & 806 & 0.048971 & 1.12E+09 & 30.94 \\
p2p-Gnutella30 & 36,683 & 88,328 & 1,590 & 0.055981 & 1.58E+09 & 18.87 \\
p2p-Gnutella31 & 62,587 & 147,892 & 2,024 & 0.077367 & 1.91E+09 & 25.42 \\
roadNet-CA & 1,965,207 & 2,766,607 & 120,676 & 0.282073 & 9.81E+09 & 49.29 \\
roadNet-PA & 1,088,093 & 1,541,898 & 67,150 & 0.180125 & 8.56E+09 & 38.56 \\
roadNet-TX & 1,379,918 & 1,921,660 & 82,869 & 0.20709 & 9.28E+09 & 41.61 \\
soc-Epinions1 & 75,880 & 405,740 & 1,624,481 & 2.308416 & 1.76E+08 & 1.35 \\
soc-Slashdot0811 & 77,361 & 469,180 & 551,724 & 2.095413 & 2.24E+08 & 2.25 \\
soc-Slashdot0902 & 82,169 & 504,230 & 602,592 & 2.20871 & 2.28E+08 & 1.51 \\
\bottomrule
\end{tabular}
\caption{Runtime (ms) and throughput (TEPS) for the provided graphs. Speedup is measured relative to Hu et al.'s 2018 GraphChallenge submission~\cite{Hu:2018:HTC}.
\label{tab:perf}
}
\end{table*}
\subsection{Problem Definition}
Triangle counting is the problem of finding all occurrences of a triangle in a graph. An occurrence must match the query triangle both structurally and semantically; in other words, both the graph topology and any attribute information on nodes and edges should be considered when determining whether two subgraphs match. In the Graph Challenge problem, both graphs are undirected with no node labels or edge weights. We use the following mathematical definitions:
\begin{definition}
A graph is a pair $G=(V,E)$, where $V$ is a set of vertices and $E \subseteq V \times V$ is the set of edges connecting those vertices.
\end{definition}
We denote the vertex and edge sets of a graph $G$ by $V(G)$ and $E(G)$, respectively. If two vertices $u,v \in V$ are connected by an edge $e \in E$, denoted $e=(u, v)$, then $u$ and $v$ are \textit{adjacent}, or \textit{neighbors}. A graph is \textit{undirected} when its edges have no direction, meaning $(u,v)$ and $(v,u)$ represent the same edge. Although this paper describes the problem only for undirected graphs, the approach extends easily to directed ones. With these preliminaries, we define subgraph isomorphism as follows:
\begin{definition}
A graph $G = (V, E)$ is subgraph-isomorphic to another graph $G' = (V', E')$, denoted $G \subseteq G'$, if there is an injective function $f:V\rightarrow V'$ such that
\begin{center}
$\forall (u,v) \in E: (f(u), f(v)) \in E'$.
\end{center}
\end{definition}
Given a triangle as a query graph $Q$ and a data graph $G$, the exact triangle counting problem enumerates all subgraphs of $G$ that are isomorphic to $Q$. Our inputs are therefore one query graph $Q$, which is a triangle, and one data graph $G$, in MatrixMarket format. Our outputs are the number of matches as well as the node ID lists of the matched subgraphs of $G$. Compared with other methods such as set intersection and matrix multiplication, one advantage of solving triangle counting via subgraph matching is that we get the triangle listings for free. Another advantage is that our implementation could potentially be extended to query patterns other than triangles, with or without node and/or edge label information.
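As a CPU reference for these matching semantics, the following Python sketch (a sequential baseline for illustration, not our GPU implementation) enumerates each triangle exactly once and returns both the count and the matched node ID lists. Orienting every edge from the lower to the higher vertex ID avoids reporting duplicates:

```python
from collections import defaultdict

def enumerate_triangles(edges):
    """Enumerate all triangles {u, v, w} of an undirected graph, each
    reported once as a sorted vertex triple (the matched node ID list)."""
    adj = defaultdict(set)
    for u, v in edges:
        if u != v:                      # ignore self-loops
            adj[u].add(v)
            adj[v].add(u)
    triangles = []
    for u in adj:
        higher = {v for v in adj[u] if v > u}
        for v in higher:
            # every common neighbor w > v closes a triangle u < v < w
            for w in higher & {x for x in adj[v] if x > v}:
                triangles.append((u, v, w))
    return len(triangles), triangles

# A 4-clique on {0, 1, 2, 3} contains exactly 4 triangles.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
count, listing = enumerate_triangles(edges)
```

The triples in `listing` are the "free" triangle listings mentioned above; a count-only method such as matrix multiplication would discard them.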
\subsection{The Gunrock graph processing framework}
We note several graph frameworks in Section~\ref{sec:intro}; in this work, we choose the Gunrock~\cite{Wang:2017:GGG} GPU-based graph analytics framework. Gunrock uses a high-level, bulk-synchronous, data-centric abstraction. Gunrock programs are expressed as manipulations of frontiers of vertices or edges that are actively participating in the computation. Its traversal-based operators (shown in Fig.~\ref{fig:gunrock}) currently include:
\begin{description}
\item[Advance] which generates a new frontier by visiting the neighboring vertices/edges of the current frontier (work distribution/load balancing).
\item[Filter] which removes elements from a frontier via validation tests.
\item[Segmented intersection] which computes the intersection of two neighbor lists for each pair of elements from two input frontiers.
\item[Compute] which runs user-defined vertex/edge-centric computations in parallel; it can be combined with advance or filter.
\end{description}
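To make the mapping concrete, here is a minimal sequential emulation of how triangle counting can be phrased with these operators (a sketch of the dataflow only; the actual Gunrock kernels are load-balanced GPU implementations):

```python
def triangle_count_operator_style(adj):
    """Emulate an advance -> filter -> segmented-intersection pipeline.

    `adj` maps each vertex to the set of its neighbors.
    """
    # Advance: expand each vertex in the frontier into its incident edges.
    edge_frontier = [(u, v) for u in adj for v in adj[u]]
    # Filter: keep one orientation (u < v) so each edge is processed once.
    edge_frontier = [(u, v) for (u, v) in edge_frontier if u < v]
    # Segmented intersection: each common neighbor of u and v closes a
    # triangle; every triangle has three edges, hence the division by 3.
    total = sum(len(adj[u] & adj[v]) for u, v in edge_frontier)
    return total // 3

# 4-clique: expect C(4, 3) = 4 triangles.
adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}}
n_triangles = triangle_count_operator_style(adj)
```

The per-edge intersection sizes are exactly the segments that the segmented-intersection operator processes in parallel on the GPU.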
\begin{figure}[ht]
\begin{center}
\includegraphics[width=\columnwidth]{img/gunrock}
\end{center}
\caption{Gunrock framework graph traversal operators.\label{fig:gunrock}}
\end{figure}
The Gunrock framework is very efficient for BFS-based algorithms. Since our algorithm is also BFS-based, we use Gunrock's four operators above to fully exploit the massive parallelism of GPUs. We also leverage Gunrock's support for frontiers of either nodes or edges. We describe the pros and cons of using Gunrock in Section~\ref{sec:approach}.
\section{Introduction}
The ability to read out binary quantum observables such as quantum bits (qubits) is an important {\it desideratum} for quantum information processing~\cite{divincenzo2001}. In particular, it is often highly desirable that the readout have high fidelity and be quantum nondemolition (QND). For instance, many promising architectures for scalable fault-tolerant quantum computation require that stabilizer parities be repeatedly read out during the computation~\cite{fowler2009,sun2014,kelly2015,cramer2016,ofek2016,rosenblum2018,negnevitsky2018,hu2019,riste2020,andersen2019,andersen2020,bultink2020}. For fault tolerance to be achieved in these architectures, it is crucial that the readout fidelity be above the threshold of the error-correcting code~\cite{knill1998,aharonov2000}. Moreover, it is necessary that the readout be QND so that the code is projected onto the eigenstate corresponding to the observed stabilizer eigenvalues. QND readouts have the important advantage that repeated readouts leave the eigenvalues of the observable unchanged. Therefore, each repetition provides additional information on the observable. As a result, the readout fidelity increases exponentially with the number of repetitions~\cite{deuar1999,deuar2000}. This property has been exploited to improve the readout fidelity of qubits for a variety of implementations including trapped ion qubits~\cite{schaetz2005,hume2007}, solid-state spin qubits~\cite{meunier2006,jiang2009,neumann2010,robledo2011,waldherr2011,maurer2012,dreau2013,pla2013,lovchinsky2016,boss2017,holzgrafe2019,nakajima2019,yoneda2020,xue2020-2}, and superconducting qubits~\cite{elder2020}. The same temporal correlations in the outcomes of consecutive QND readouts can be used to correct stabilizer readout errors in quantum error-correcting codes~\cite{wang2010,devitt2010,fowler2012-2}.
A seemingly natural figure of merit for the performance of repetitive QND readout is the probability $\epsilon$ of a readout error occurring in a single repetition. Here, each repetition is assigned a binary outcome, with $\epsilon$ being the probability of an incorrect assignment. The readout errors are then corrected by performing a majority vote on the binary outcomes. The cumulative readout error rate $e_N$ after $N$ repetitions is simply proportional to the probability that an error has occurred in more than half of the repetitions, $e_N \propto \epsilon^{N/2}$. Therefore, it appears that the cumulative readout error rate is fully determined by the single-repetition error rate $\epsilon$. However, a typical real-world readout does not only have two outcomes. Rather, the readout outcomes are commonly analog (see Fig.~\ref{fig:fig1}) and need not even be scalar. For instance, the single-repetition readout outcome could be a continuous electrical voltage or current~\cite{elzerman2004,barthel2009,mallet2009,morello2010,studenikin2012,jeffrey2014,saira2014,broome2017,walter2017,nakajima2017,pakkiam2018,vukusic2018,harveycollard2018,opremcak2018,west2019,urdampilleta2019,zheng2019,keith2019,keith2019-2,opremcak2021,ebel2020,connors2020,martinez2020,rosenthal2021,jang2020}, a nonbinary photon count at a photodetector~\cite{myerson2008,gehr2010,robledo2011,harty2014,shields2015,danjou2016,hopper2020,todaro2021,edmunds2020}, or a collection of such outcomes. If each individual repetition is assigned a binary outcome, information on the level of confidence in each analog outcome is discarded. Such analog-to-binary conversion is known as ``hard decoding''. It was shown that taking into account the additional information contained in the distribution of analog readout outcomes, or ``soft decoding'', can significantly reduce $e_N$ compared to hard decoding~\cite{danjou2014-2,hann2018,dinani2019,liu2020,xue2020-2}. 
It follows that two repetitive QND readouts characterized by the same value of $\epsilon$ can yield different values of $e_N$. This suggests that $\epsilon$ is not a universal descriptor of readout performance~\footnote{The importance of soft decoding was also recognized in the context of continuous-variable quantum error correction~\cite{fukui2017,vuillot2019,noh2020-3,noh2020,rozpedek2020,fukui2020}, continuous-variable quantum communication~\cite{bari2012}, and quantum parameter estimation~\cite{danjou2014-2,ryan2015,xu2019-7}.}. Moreover, it was shown that the existence of a soft-decoding advantage is highly dependent on the details of the often highly non-Gaussian distributions of analog readout outcomes. Heuristic arguments have been put forward to predict when an advantage exists in common cases~\cite{danjou2014-2,xue2020-2}, but a unified and economical description that fully captures the performance of repetitive QND readout for all outcome distributions is highly desirable.
The present work suggests a figure of merit that fully captures the cumulative error rate $e_N$ of the repetitive QND readout of binary observables with an arbitrary distribution of analog readout outcomes. That figure of merit is the asymptotic rate of decrease of $\ln e_N$ with the number of repetitions $N$. In the classical theory of hypothesis testing, this quantity is known as the Chernoff information for the discrimination of two probability distributions~\cite{chernoff1952,hoeffding1965}. Like the single-repetition readout error rate $\epsilon$, the Chernoff information can be obtained solely from the statistics of readout outcomes in a single repetition. In fact, it is closely related to $\epsilon$ when the readout outcomes are binary. Unlike $\epsilon$, however, the Chernoff information does not discard information associated with the level of confidence in each analog readout outcome. Therefore, the Chernoff information enables a universal description of repetitive QND readout, in the sense that all outcome distributions with the same Chernoff information have the same asymptotic cumulative readout fidelity. Therefore, theoretical analysis of repetitive QND readout is reduced to the calculation of the Chernoff information. Importantly, this universality persists in the nonasymptotic regime and in the presence of non-QND imperfections. This leads to simple and universal expressions for the cumulative error rate of a QND readout that remain accurate for small $N$. Finally, the Chernoff information is used to predict the soft-decoding advantage in cases of practical importance without having to resort to time-consuming simulations~\cite{danjou2014-2,hann2018,dinani2019,nakajima2019,liu2020,xue2020-2}. The present work paves the way for a generalized understanding of the real-world readout of quantum observables and should facilitate the engineering of high-fidelity QND readout in near-term quantum devices on all platforms.
\section{Repetitive quantum nondemolition readout \label{sec:qndReadoutRepetitive}}
\subsection{Quantum nondemolition readout \label{sec:qndReadout}}
A binary quantum observable $A$ has only two distinct eigenvalues $a = +1$ and $a = -1$. The observable $A$ could be, e.g., the $Z$ Pauli observable of a qubit or a parity-check stabilizer in an error-correcting code. If the system is prepared in an eigenstate of $A$ with eigenvalue $a$, an ideal QND readout of $A$ yields the value $a$ with certainty. Moreover, the post-readout state is also an eigenstate with eigenvalue $a$. However, a real-world QND readout is subject to noise that introduces uncertainty in the value of $a$. In general, it is therefore not possible to determine $a$ with certainty after a single readout. Fortunately, the QND property guarantees that every subsequent readout yields the same eigenvalue as in the first readout. Thus, repeated readouts ``average out'' the noise and enable readout of the observable $A$ to arbitrary accuracy. A more detailed overview of the theory of quantum nondemolition readout is given in Appendix~\ref{app:qndReadout}.
\subsection{Single repetition \label{sec:singleRepetition}}
A single repetition of a general QND readout yields an outcome $\mathcal{O}$ that depends on the eigenvalue $a$. More precisely, the statistics of the readout outcomes are described by the probability distribution $P_\pm(\mathcal{O})$ for observing $\mathcal{O}$ if $a = \pm 1$. Here, the distributions $P_\pm(\mathcal{O})$ can take any form. For instance, the outcome $\mathcal{O}$ could be a discrete random variable, a continuous random variable, or a multidimensional set of random variables. Possible distributions $P_\pm(\mathcal{O})$ are illustrated schematically in Fig.~\ref{fig:fig1}. Several other experimentally relevant examples are discussed in Secs.~\ref{sec:generalized} and \ref{sec:softDecoding}. In the following, it is assumed that these distributions are known \emph{a priori}, either empirically or from theoretical modeling. The precise meaning of the distributions $P_\pm(\mathcal{O})$ within quantum measurement theory is reviewed in Appendix~\ref{app:qndReadout}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fig1}
\caption{Schematic illustration of single-repetition probability distributions $P_\pm(\mathcal{O})$ of analog readout outcomes conditioned on the observable eigenvalue $a=\pm 1$. The corresponding log-likelihood ratio $\lambda(\mathcal{O})$ is also shown. The single-repetition error rates $\epsilon_\pm$ conditioned on the eigenvalue $a = \pm 1$ are represented by the shaded areas. \label{fig:fig1}}
\end{figure}
The most commonly used figure of merit for readout performance in a single repetition is the error rate $\epsilon$, defined as the average probability of assigning the incorrect eigenvalue to the observed outcome. The value of $\epsilon$ depends on the rule chosen to assign an eigenvalue $a$ to each outcome $\mathcal{O}$. In the following, it is assumed that the two eigenvalues are equally likely {\it a priori}. This leads to a definition of $\epsilon$ that is agnostic about the value of $a$. Moreover, this case is common and desirable because it maximizes the information extracted by readout. Under this assumption, the assignment rule that minimizes $\epsilon$ is obtained by calculating the log-likelihood ratio~\cite{kay1998}
\begin{align}
\lambda(\mathcal{O}) = \ln \frac{P_+(\mathcal{O})}{P_-(\mathcal{O})}. \label{eq:LLR}
\end{align}
When $\lambda(\mathcal{O})$ is larger (smaller) than $0$, the eigenvalue $a = +1$ ($a = -1$) is assigned. If $\lambda(\mathcal{O}) = 0$, the eigenvalue is assigned at random. The log-likelihood ratio, Eq.~\eqref{eq:LLR}, is central to hypothesis testing. It should be interpreted as the observer's level of confidence in the assignment given the observed outcome $\mathcal{O}$. The log-likelihood ratio is depicted in Fig.~\ref{fig:fig1} alongside the distributions $P_\pm(\mathcal{O})$. The average single-repetition error rate is $\epsilon = \left(\epsilon_+ + \epsilon_-\right)/2$, where
\begin{align}
\epsilon_+ = P_+(\lambda < 0), \;\;\;\epsilon_- = P_-(\lambda > 0) \label{eq:conditionedErrors}
\end{align}
are the error rates conditioned on preparation of $a = +1$ and $a = -1$, respectively. Here, $P_\pm(\lambda)$ are the probability distributions for $\lambda$ conditioned on $a = \pm 1$. The conditioned error rates $\epsilon_\pm$ are represented by the shaded areas~\footnote{Or hypervolumes in the case of multidimensional $\mathcal{O}$.} in Fig.~\ref{fig:fig1}. Because $\epsilon_+$ and $\epsilon_-$ are, respectively, integrals of $P_+(\mathcal{O})$ and $P_-(\mathcal{O})$ only, the error rate $\epsilon$ cannot contain information on the relative value of $P_+(\mathcal{O})$ and $P_-(\mathcal{O})$. Therefore, important information contained in the functional form of log-likelihood ratio $\lambda(\mathcal{O})$ is discarded.
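For concreteness, consider symmetric Gaussian outcome distributions $P_\pm(\mathcal{O}) = \mathcal{N}(\pm\mu, \sigma^2)$ (an illustrative assumption, not a restriction of the formalism). The log-likelihood ratio of Eq.~\eqref{eq:LLR} is then linear in the outcome, and the conditioned error rates of Eq.~\eqref{eq:conditionedErrors} have a closed form:

```python
import math

def gaussian_llr(o, mu, sigma):
    """lambda(O) = ln[P_+(O) / P_-(O)] for P_±(O) = N(±mu, sigma^2);
    the quadratic terms cancel, leaving a linear function of O."""
    return 2.0 * mu * o / sigma**2

def single_repetition_error(mu, sigma):
    """epsilon_+ = P_+(lambda < 0) = P(N(mu, sigma^2) < 0); by symmetry
    epsilon_- is identical, so epsilon = erfc(mu / (sigma sqrt(2))) / 2."""
    return 0.5 * math.erfc(mu / (sigma * math.sqrt(2.0)))

# Assign a = +1 iff lambda(O) > 0, which here reduces to O > 0.
eps = single_repetition_error(mu=1.0, sigma=1.0)   # ≈ 0.159
```

Because $\lambda(\mathcal{O})$ is monotonic in $\mathcal{O}$ for this noise model, thresholding the raw outcome at zero is equivalent to thresholding the log-likelihood ratio; for non-monotonic $\lambda(\mathcal{O})$ this equivalence fails.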
\subsection{Multiple repetitions \label{sec:multipleRepetitions}}
That the single-repetition error rate discards information is most readily seen by considering the repeated QND readout of the binary observable $A$. Repeated readout yields a string of outcomes $\mathbf{O}_N = \left\{\mathcal{O}_0,\mathcal{O}_1,\dots,\mathcal{O}_{N-1}\right\}$. Due to the QND nature of the readout, all outcomes are independently sampled from the same distribution $P_\pm(\mathcal{O})$ when an eigenstate with eigenvalue $a$ is prepared. Accordingly, the joint distribution of the readout outcomes conditioned on the eigenvalue $a = \pm 1$ is $P_\pm(\mathbf{O}_N) = \prod_{k=0}^{N-1} P_\pm(\mathcal{O}_k)$. A quantum-mechanical derivation is given in Appendix~\ref{app:qndReadout}. The cumulative log-likelihood ratio for the entire string of outcomes $\mathbf{O}_N$ is thus
\begin{align}
l_N = \ln \frac{P_+(\mathbf{O}_N)}{P_-(\mathbf{O}_N)} = \sum_{k=0}^{N-1} \lambda(\mathcal{O}_k). \label{eq:cumLLR}
\end{align}
As before, the eigenvalue $a = +1$ ($a = -1$) is assigned when $l_N > 0$ ($l_N < 0$). Equation~\eqref{eq:cumLLR} shows that in the general case, each outcome must be weighed by $\lambda(\mathcal{O}_k)$ in order to perform optimal assignment. Therefore, discarding information contained in $\lambda(\mathcal{O})$ in each repetition is necessarily suboptimal. The average cumulative error rate is now $e_N = (e_{+,N}+e_{-,N})/2$, where
\begin{align}
e_{+,N} = P_+(l_N < 0), \;\;\;e_{-,N} = P_-(l_N > 0) \label{eq:cumConditionedErrors}
\end{align}
are the cumulative error rates conditioned on preparation of $a = +1$ and $a = -1$, respectively. Here, $P_\pm(l_N)$ are the probability distributions for $l_N$ conditioned on $a = \pm 1$. Because the noise is sampled independently in each repetition, $e_N$ is expected to decrease exponentially as $N$ grows, $e_N \propto \exp\left(-C N\right)$ for some constant $C$. The constant $C$ is the Chernoff information, which will be argued to be a more appropriate figure of merit for repetitive QND readout than the single-repetition error rate $\epsilon$.
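The decision rule of Eq.~\eqref{eq:cumLLR} can be checked directly by Monte Carlo. The sketch below (again assuming Gaussian single-repetition noise for illustration) estimates $e_{+,N}$, which in the Gaussian case can be compared with the closed form $e_N = \mathrm{erfc}(\sqrt{rN/2})/2$ with $r = (\mu/\sigma)^2$:

```python
import math
import random

def cumulative_error_rate(mu, sigma, N, trials=200_000, seed=1):
    """Monte Carlo estimate of e_{+,N}: prepare a = +1, draw N outcomes
    from N(mu, sigma^2), accumulate l_N = sum_k lambda(O_k), and count
    how often l_N < 0 (an incorrect assignment)."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        l_N = sum(2.0 * mu * rng.gauss(mu, sigma) / sigma**2
                  for _ in range(N))
        if l_N < 0:
            errors += 1
    return errors / trials

# For mu = sigma = 1 (r = 1) and N = 4, the exact Gaussian result is
# erfc(sqrt(2)) / 2 ≈ 0.023.
e4 = cumulative_error_rate(mu=1.0, sigma=1.0, N=4)
```

The same loop applies to any noise model by swapping the sampler and the log-likelihood function; only the Gaussian case admits the closed-form check.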
\section{A generalized figure of merit \label{sec:generalized}}
\subsection{The Chernoff information \label{sec:chernoffInformation}}
The asymptotic behavior of the cumulative error rate $e_N$ is given by the theory of large deviations~\cite{cover2005} developed by Cram{\' e}r~\cite{cramer1938,*cramer1994,*cramer2018} and Sanov~\cite{sanov1957,*sanov1961,*sanov1961-2}, and applied to hypothesis testing by Chernoff and Hoeffding~\cite{chernoff1952,hoeffding1965}. The theory is summarized in Appendix~\ref{app:largeDeviationTheory}. The result is that
\begin{align}
\begin{array}{lcl}
\ln e_N \sim - C N & \textrm{as} & N\rightarrow \infty,
\end{array} \label{eq:exponentialDependence}
\end{align}
where
\begin{align}
C = -\inf_{s\in\left[0,1\right]} \ln \left[ \int d\mathcal{O}\,P_+(\mathcal{O})^s P_-(\mathcal{O})^{1-s} \right]. \label{eq:chernoffInformation}
\end{align}
Here, ``$\sim$'' denotes asymptotic equality. The quantity $C$ is known as the Chernoff information~\footnote{In the literature, the Chernoff information is also known as the Chernoff bound or the Chernoff distance. A quantum version of the Chernoff information has also been developed~\cite{audenaert2007,audenaert2008,calsamiglia2008,nussbaum2009} which optimizes the asymptotic cumulative error rate over all possible quantum measurements. However, the present work is concerned with readout performance for a given imperfect local measurement. The classical Chernoff information is sufficient to that end.}. It is a symmetric distance measure between the distributions $P_+(\mathcal{O})$ and $P_-(\mathcal{O})$ and can be interpreted as a rate of information gain per repetition. Like the single-repetition readout error rate $\epsilon$, the Chernoff information depends only on the statistics of readout outcomes in a single repetition. Unlike $\epsilon$, however, it depends on the relative value of $P_+(\mathcal{O})$ and $P_-(\mathcal{O})$. Consequently, the Chernoff information encodes information contained in the level of confidence $\lambda(\mathcal{O})$ in each readout outcome. As a result, readout outcome distributions $P_\pm(\mathcal{O})$ with the same single-repetition error rates $\epsilon_\pm$ do not necessarily have the same Chernoff information.
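In practice, Eq.~\eqref{eq:chernoffInformation} is easy to evaluate numerically for any pair of outcome distributions. The sketch below does so by a grid search over $s$ and trapezoidal integration over the outcome axis (the grid limits and step sizes are illustrative choices), and recovers the known Gaussian result $C = \mu^2/2\sigma^2$:

```python
import math

def chernoff_information(p_plus, p_minus, grid, s_steps=201):
    """Evaluate C = -min_{s in [0,1]} ln ∫ P_+(O)^s P_-(O)^{1-s} dO
    by grid search over s and trapezoidal integration over `grid`."""
    h = grid[1] - grid[0]
    def log_integral(s):
        vals = [p_plus(o)**s * p_minus(o)**(1.0 - s) for o in grid]
        return math.log(h * (sum(vals) - 0.5 * (vals[0] + vals[-1])))
    return -min(log_integral(k / (s_steps - 1)) for k in range(s_steps))

# Gaussian check: P_±(O) = N(±mu, sigma^2) should give C = mu^2/(2 sigma^2).
mu, sigma = 1.0, 1.0
norm = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
p_plus = lambda o: norm * math.exp(-(o - mu) ** 2 / (2.0 * sigma**2))
p_minus = lambda o: norm * math.exp(-(o + mu) ** 2 / (2.0 * sigma**2))
grid = [-10.0 + 0.01 * k for k in range(2001)]
C = chernoff_information(p_plus, p_minus, grid)   # ≈ 0.5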
\subsection{Universality \label{sec:universalityQND}}
The power of using the Chernoff information as a figure of merit for readout of binary observables is that all readout outcome distributions $P_\pm(\mathcal{O})$ with the same Chernoff information, no matter their shape, give the same asymptotic behavior for $\ln e_N$ as $N\rightarrow \infty$. Here, it is argued that such universal behavior persists even in the nonasymptotic regime $N \gtrsim 1$. As discussed in Sec.~\ref{sec:gaussianDistributions}, the Chernoff information for Gaussian noise with signal-to-noise ratio $r$ is simply given by $C = r/2$. This suggests an interpretation of the Chernoff information as an effective Gaussian signal-to-noise ratio. More precisely, it suggests that as far as the cumulative error rate $e_N$ is concerned, non-Gaussian noise may be replaced by an effective Gaussian noise with signal-to-noise ratio $2C$. In the case of Gaussian noise, however, an exact expression for $e_N$ can be obtained for all $N$, namely, $e_N = e_{\pm,N} = \textrm{erfc}\left(\sqrt{r N/2}\right)/2$~\cite{gambetta2007}. This naturally leads to the ansatz that the same relationship holds for arbitrary noise by setting $r = 2C$:
\begin{align}
e_N = e_{\pm,N} = \frac{1}{2}\textrm{erfc}\left(\sqrt{C N}\right). \label{eq:gaussianAnsatz}
\end{align}
It can be shown from simple counter-examples at $N=1$ that Eq.~\eqref{eq:gaussianAnsatz} is not exact for finite $N$. Nevertheless, its approximate validity was assessed by performing numerical Monte Carlo simulations for a variety of readout outcome distributions $P_\pm(\mathcal{O})$ (see Appendix~\ref{app:simulations}). The results are shown in Fig.~\ref{fig:fig2}. It is found that Eq.~\eqref{eq:gaussianAnsatz} captures $\ln e_N$ extremely well for all $N \ge 1$ and for all the considered noise models. These include Gaussian noise ubiquitous in readouts relying on electronic and homodyne detection~\cite{barthel2009,morello2010,jeffrey2014,saira2014,nakajima2017,broome2017,walter2017,harveycollard2018,pakkiam2018,west2019,urdampilleta2019,zheng2019,keith2019,connors2020,opremcak2021,rosenthal2021,jang2020}, Poissonian noise ubiquitous in readouts relying on fluorescence detection~\cite{myerson2008,gehr2010,robledo2011,harty2014,shields2015,danjou2016,hopper2020,todaro2021,edmunds2020}, Cauchy noise with fat polynomial tails, and the heavily bimodal non-Gaussian readout noise observed empirically in Ref.~\cite{xue2020-2}. The latter two cases show that Eq.~\eqref{eq:gaussianAnsatz} approximately holds for noise distributions that are heavily non-Gaussian and need not even have a finite variance~\footnote{Some of the qubit readout schemes cited here are not QND. Nevertheless, such qubits may still be used as ancillas to perform QND readout of other observables. In such cases, the non-Gaussian features of the noise translate directly to the repetitive QND readout discussed in this manuscript.}. The approximate validity of Eq.~\eqref{eq:gaussianAnsatz} can be intuitively understood with the following argument. For a fixed value of $C N$, the number of readouts $N$ increases as $C \rightarrow 0$. 
Therefore, the noise becomes effectively Gaussian on all time scales and the limit of a continuous readout subject to Gaussian noise is recovered~\cite{gambetta2007,tsang2012}. This expresses a generalized central-limit theorem for the probability of rare events in Eq.~\eqref{eq:cumConditionedErrors}. What the simulations in Fig.~\ref{fig:fig2} show is that Eq.~\eqref{eq:gaussianAnsatz} remains an excellent approximation for common non-Gaussian sources of noise with finite $N$ and finite $C \lesssim 1$. This is precisely the situation where repetitive QND readout is most useful.
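Eq.~\eqref{eq:gaussianAnsatz} thus gives a quick, noise-agnostic way to budget repetitions: given a measured Chernoff information $C$, the number of repetitions needed to reach a target cumulative error rate follows immediately (the values of $C$ below are arbitrary examples):

```python
import math

def repetitions_needed(C, target):
    """Smallest N with e_N = erfc(sqrt(C N)) / 2 <= target, per the
    Gaussian-equivalent ansatz of Eq. (gaussianAnsatz)."""
    N = 1
    while 0.5 * math.erfc(math.sqrt(C * N)) > target:
        N += 1
    return N

# With C = 0.5 per repetition, reaching e_N <= 1% takes N = 6 readouts.
N_needed = repetitions_needed(0.5, 1e-2)
```

Because only the product $CN$ enters, halving the per-repetition Chernoff information roughly doubles the required number of repetitions.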
In the regime where Eq.~\eqref{eq:gaussianAnsatz} is less accurate, $C \gtrsim 1$, a more general approximate universal form of the cumulative error rates is obtained with the help of a saddle-point approximation~\cite{vantrees2001,butler2015}. While such an approximation becomes more accurate as $N$ increases, it typically remains very accurate for finite $N$~\cite{butler2015}. As discussed in Appendix~\ref{app:largeDeviationTheory}, the saddle-point approximation for the average error rate is
\begin{align}
\begin{split}
e_N \approx \frac{1}{2}&\textrm{erfc}\left(\sqrt{C N}\right) + \frac{\left(\alpha^{-1/2}-1\right)}{\sqrt{4\pi C N}}\exp\left(-C N\right), \label{eq:saddlePointLeading}
\end{split}
\end{align}
and the saddle-point approximations for the conditioned error rates are
\begin{align}
e_{\pm,N} \approx e_N \pm \frac{\left(2s^* - 1\right)}{\sqrt{4\pi \alpha C N}}\exp\left(-C N\right). \label{eq:saddlePointLeadingConditioned}
\end{align}
Here, $\alpha$ is a parameter that is easily computed from the distributions $P_\pm(\mathcal{O})$ as described in Appendix~\ref{app:largeDeviationTheory}. Moreover, $s^*$ is the position of the optimum in Eq.~\eqref{eq:chernoffInformation}. These parameters quantify the deviation from the Gaussian behavior of Eq.~\eqref{eq:gaussianAnsatz}. Indeed, it is shown in Appendix~\ref{app:largeDeviationTheory} that $\alpha \rightarrow 1$ and $s^* \rightarrow 1/2$ as $C \rightarrow 0$. In that limit, the saddle-point approximation reduces to Eq.~\eqref{eq:gaussianAnsatz} for all $C N$. Note that universality persists even in the extreme case where the saddle-point approximation breaks down, $\alpha C N \ll 1$. In that case, the cumulative error rate approaches the Chernoff upper bound, $e_N \approx \exp\left(-C N\right)/2$~\cite{cover2005}. Finally, it must be noted that deviations from Eqs.~\eqref{eq:saddlePointLeading} and \eqref{eq:saddlePointLeadingConditioned} may occur at finite $N$ for discrete readout noise~\footnote{More precisely, Eqs.~\eqref{eq:saddlePointLeading} and \eqref{eq:saddlePointLeadingConditioned} must be modified if the cumulative log-likelihood ratio $l_N$ takes a discrete set of values for some finite $N$. This can occur even for some continuous distributions $P_\pm(\mathcal{O})$. For instance, the single-repetition log-likelihood ratio $\lambda(\mathcal{O})$ is approximately binary-valued for the near-continuous distributions in Fig.~\ref{fig:fig2}(e). Finally, note that exact analytical expressions can be obtained for $e_N$ and $e_{\pm,N}$ in the special cases of Poissonian noise and binary noise.}. It is possible to modify the above expressions to account for these so-called ``continuity corrections'' if necessary~\cite{butler2015}.
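Once $C$, $\alpha$, and $s^*$ are known, Eqs.~\eqref{eq:saddlePointLeading} and \eqref{eq:saddlePointLeadingConditioned} are straightforward to evaluate; a direct transcription is:

```python
import math

def saddle_point_errors(C, N, alpha, s_star):
    """Average and conditioned cumulative error rates from the
    saddle-point approximation; reduces to erfc(sqrt(C N))/2 when
    alpha = 1 and s* = 1/2 (the effective-Gaussian limit)."""
    gauss = 0.5 * math.erfc(math.sqrt(C * N))
    tail = math.exp(-C * N)
    e_avg = gauss + (alpha**-0.5 - 1.0) / math.sqrt(4.0 * math.pi * C * N) * tail
    shift = (2.0 * s_star - 1.0) / math.sqrt(4.0 * math.pi * alpha * C * N) * tail
    return e_avg, e_avg + shift, e_avg - shift   # (e_N, e_{+,N}, e_{-,N})

# Gaussian limit check: alpha = 1, s* = 1/2 recovers the ansatz exactly.
e, e_plus, e_minus = saddle_point_errors(C=0.5, N=4, alpha=1.0, s_star=0.5)
```

Here $\alpha$ and $s^*$ are assumed to have been computed beforehand as described in Appendix~\ref{app:largeDeviationTheory}; the transcription adds nothing beyond the displayed equations.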
The above discussion shows that Eq.~\eqref{eq:gaussianAnsatz} can be used to accurately estimate the cumulative error rate $e_N$ for arbitrary analog readout noise and finite $N$, obviating the need for time-consuming simulations~\cite{danjou2014-2,hann2018,dinani2019,nakajima2019,liu2020,xue2020-2} that are specialized to the noise model. Even in the regime where Eq.~\eqref{eq:gaussianAnsatz} becomes less accurate, universal behavior is retained at the cost of introducing only two additional parameters $\alpha$ and $s^*$. This approximate nonasymptotic universal behavior should greatly facilitate readout engineering by reducing the analysis of the great variety of noise models discussed in the literature to the calculation of the Chernoff information and its auxiliary quantities $\alpha$ and $s^*$.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{fig2}
\caption{(a) Monte Carlo simulations of the cumulative error rate $e_N$ for (b) Gaussian noise ($C = 0.5$, $\alpha = 1$, $s^* = 0.5$), (c) Poissonian noise ($C = 0.2533$, $\alpha = 0.9999, s^* = 0.5569$), (d) Cauchy noise ($C = 0.4422$, $\alpha = 1.1079$, $s^* = 0.5$), and (e),(f) the analog ($C = 0.1634$, $\alpha = 1.0522$, $s^* = 0.5203$) and binary ($C = 0.1577$, $\alpha = 1.0536$, $s^* = 0.5199$) noise observed in Ref.~\cite{xue2020-2}. In (b)-(f), the blue distribution corresponds to $a = +1$ and the red distribution corresponds to $a = -1$. The cumulative error rate is a universal function of $C N$ and $C/p$ for all simulated noise models. The ideal QND case corresponds to $C/p = \infty$. The solid black line is obtained from Eq.~\eqref{eq:gaussianAnsatz}. The details of the simulations are discussed in Appendix~\ref{app:simulations}. \label{fig:fig2}}
\end{figure*}
\subsection{Non-QND imperfections \label{sec:universalityNonQND}}
In practice, the error rate of a QND readout is limited by non-QND imperfections that generate transitions between the eigenvalues of $A$. To achieve a low cumulative error rate $e_N$, non-QND processes must necessarily act on a time scale longer than (1) the duration $\Delta t$ of a single readout and longer than (2) the time scale $\Delta t/C$ for achieving low error rate. In this ``single-shot readout'' regime, transitions between the eigenvalues of $A$ are effectively classical~\cite{gagen1993,korotkov2001}. More precisely, the observed quantum jumps are well described by classical transition probabilities for all Markovian non-QND processes (see Appendix~\ref{app:qndReadout} for a more detailed discussion). A very common case is relaxation from $a=+1$ to $a=-1$ with small probability $p \ll \min(C,1)$ in each repetition~\cite{myerson2008,barthel2009,jeffrey2014,saira2014,walter2017,pakkiam2018,west2019,urdampilleta2019,zheng2019,nakajima2019,yoneda2020,xue2020-2,connors2020,opremcak2021,rosenthal2021}. It is important to verify that universality persists in this more realistic scenario.
For Gaussian noise, it is known that $e_N$ is a function of $r N$ and $r/p$ only~\cite{gambetta2007}. Therefore, $e_N$ should be a universal function of $C N$ and $C/p$ regardless of the details of the noise in the regime where Eq.~\eqref{eq:gaussianAnsatz} holds, $\alpha \approx 1$ and $s^* \approx 1/2$. It was verified that this is indeed the case by performing Monte Carlo simulations using the procedure described in Appendix~\ref{app:simulations}. The results are shown in Fig.~\ref{fig:fig2}. All noise models with the same value of $C/p$ collapse on the same curve when plotted as a function of $C N$. Therefore, the cumulative error rate $e_N$ may simply be tabulated for various values $C N$ and $C/p$ by assuming Gaussian noise. The cumulative error rate for arbitrary non-Gaussian noise can then be directly read off from the Gaussian results. Although the simulation is not shown here, it was also verified that the above conclusions hold when both relaxation and excitation occur with probability $p$.
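The role of the ratio $C/p$ can be explored with a simple classical quantum-jump simulation. The sketch below makes several simplifying assumptions for illustration: Gaussian noise with matching Chernoff information, errors scored against the prepared eigenvalue, and the naive cumulative log-likelihood decision of Eq.~\eqref{eq:cumLLR} rather than a filter that models the relaxation:

```python
import math
import random

def error_with_relaxation(C, p, N, trials=50_000, seed=2):
    """Estimate e_{+,N}: prepare a = +1; before each repetition the state
    relaxes to a = -1 with probability p (and stays there). Gaussian
    outcomes N(a*mu, 1) with mu = sqrt(2C) carry Chernoff information C."""
    rng = random.Random(seed)
    mu = math.sqrt(2.0 * C)
    errors = 0
    for _ in range(trials):
        a, l_N = +1, 0.0
        for _ in range(N):
            if a == +1 and rng.random() < p:
                a = -1                       # classical quantum jump
            l_N += 2.0 * mu * rng.gauss(a * mu, 1.0)
        if l_N < 0:
            errors += 1
    return errors / trials

# Relaxation (finite C/p) raises the error floor relative to the QND case.
e_qnd = error_with_relaxation(C=0.2, p=0.0, N=20)
e_relaxed = error_with_relaxation(C=0.2, p=0.1, N=20)
```

Even this simplified model reproduces the qualitative behavior of Fig.~\ref{fig:fig2}: for fixed $CN$, decreasing $C/p$ degrades the achievable cumulative error rate.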
In the regime where Eq.~\eqref{eq:gaussianAnsatz} is less accurate, $\alpha \neq 1$ and $s^* \neq 1/2$, it was observed numerically that the above conclusions hold provided that the values of $\alpha$ and $s^*$ are also specified. That is, the logarithm of the cumulative readout error rate appears to have the functional form
\begin{align}
\ln e_N = f\left(C N, C/p, \alpha, s^* \right). \label{eq:nonQNDUniversality}
\end{align}
Note that the average error rate may now depend on $s^*$ because non-QND imperfections may affect different states asymmetrically. Just as in the perfectly QND case, additional ``continuity corrections'' may be required in the case of discrete distributions and finite $N$~\cite{butler2015}.
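To make the discussion above concrete, the following sketch (illustrative parameter values) performs a bare-bones Monte Carlo estimate of $e_N$ for a readout with Gaussian noise of power signal-to-noise ratio $r$ and relaxation probability $p$ per repetition. The sign-of-the-sum decoder used here is a simplification, not the procedure of Appendix~\ref{app:simulations}.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def cumulative_error_rate(N, r, p, trials=2000):
    """Monte Carlo estimate of e_N for readout of the eigenvalue a = +1 with
    additive Gaussian noise (power SNR r) and a small probability p of
    relaxation (+1 -> -1) in each repetition.  Decoding uses the sign of the
    summed outcomes, i.e. maximum likelihood for a jump-free record (a
    simplifying assumption made for brevity)."""
    errors = 0
    sigma = 1.0 / np.sqrt(r)   # noise standard deviation for power SNR r
    for _ in range(trials):
        state = 1
        total = 0.0
        for _ in range(N):
            if state == 1 and rng.random() < p:
                state = -1     # classical relaxation event
            total += state + sigma * rng.normal()
        errors += (total < 0)  # decoder declares a = -1
    return errors / trials

e_clean = cumulative_error_rate(N=20, r=0.5, p=0.0)
e_noisy = cumulative_error_rate(N=20, r=0.5, p=0.2)
```

For $p = 0$ the estimate follows the Chernoff scaling $e_N \sim e^{-CN}$ with $C = r/2$, while increasing $p$ degrades $e_N$, consistent with the dependence on $C/p$ discussed above.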
\section{Hard and soft decoding \label{sec:softDecoding}}
\subsection{Soft-decoding advantage \label{sec:softDecodingAdvantage}}
The Chernoff information can also be used to quantify the information lost by converting analog readout outcomes to binary values. To do this, the Chernoff information $C$ for analog outcomes is compared to the Chernoff information $C_b$ for the corresponding binarized outcomes. This is reflected in the soft-decoding advantage
\begin{align}
\mathcal{A} = \frac{C}{C_b}. \label{eq:advantage}
\end{align}
If $\mathcal{A} = 1$, no information is discarded by binarizing readout outcomes. If $\mathcal{A} > 1$, however, a significant amount of information has been lost. Indeed, inspection of Eq.~\eqref{eq:exponentialDependence} shows that binarizing readout outcomes changes the order of magnitude of $e_N$ by a factor $\mathcal{A}$,
\begin{align}
e_N \propto \left(e_{N,b}\right)^\mathcal{A} \;\;\; \textrm{as} \;\;\; N \rightarrow \infty. \label{eq:orderMagnitude}
\end{align}
Here, $e_{N,b}$ is the cumulative error rate for binarized outcomes. Equivalently, the number of readouts required to achieve a desired value of $e_N$ is $\mathcal{A}$ times larger with hard decoding than with soft decoding. Accounting for such lost information could prove critical in pushing readout errors below the threshold of quantum error-correcting codes. Due to the persistence of universality at small $N$ discussed in Sec.~\ref{sec:universalityQND}, the asymptotic soft-decoding advantage is expected to also persist in the nonasymptotic limit. Indeed, it was verified that Eq.~\eqref{eq:advantage} accurately predicts the soft-decoding advantage observed for small $N$ in Ref.~\cite{xue2020-2}.
A general analytical expression for $C_b$ is given in Appendix~\ref{app:softDecodingAdvantage}. In the important limit $\epsilon_\pm \rightarrow 0$, it takes the form
\begin{align}
C_b \sim \left[\frac{1}{\ln(\epsilon_+^{-1})}+\frac{1}{\ln(\epsilon_-^{-1})}\right]^{-1}. \label{eq:asymptoticBinaryChernoffInformation}
\end{align}
Similar to the average single-repetition error rate, $\epsilon = (\epsilon_+ + \epsilon_-)/2$, $C_b$ is a monotonic function of both $\epsilon_+$ and $\epsilon_-$. This makes $C_b$ an appropriate substitute for $\epsilon$ to quantify the performance of a single repetition.
To illustrate the usefulness of the Chernoff information in characterizing readout, the soft-decoding advantage, Eq.~\eqref{eq:advantage}, is now calculated for two examples of experimental interest.
\subsection{Example 1: Gaussian distributed readout outcomes \label{sec:gaussianDistributions}}
It is first assumed that the readout of the eigenvalues $a = \pm 1$ is subject to additive Gaussian noise, such that the distributions of analog readout outcomes are
\begin{align}
P_\pm(\mathcal{O}) = \sqrt{\frac{r}{2\pi}} \exp\left[-\frac{r\left(\mathcal{O} \mp 1\right)^2}{2}\right]. \label{eq:distsGaussian}
\end{align}
Here, $r$ is the (power) signal-to-noise ratio. Gaussian noise is ubiquitous in real experiments. For instance, electronic noise in the readout of semiconductor spin qubits~\cite{barthel2009,morello2010,nakajima2017,broome2017,harveycollard2018,pakkiam2018,west2019,urdampilleta2019,zheng2019,keith2019,connors2020,jang2020} as well as quantum noise in the readout of superconducting qubits~\cite{jeffrey2014,saira2014,walter2017,opremcak2021,rosenthal2021} are well modeled by additive Gaussian noise. An application of Eq.~\eqref{eq:chernoffInformation} gives $C = r/2$ for all $r$. The single-repetition error rates corresponding to these distributions are $\epsilon_\pm = \textrm{erfc}{\left(\sqrt{r/2}\right)}/2$. Using this expression, it is possible to show that $C_b \approx r/\pi$ for $r \ll 1$ and $C_b \approx r/4$ for $r \gg 1$ (see Appendix~\ref{app:softDecodingAdvantage}). Thus, the soft-decoding advantage varies smoothly from
\begin{align}
\begin{array}{lcl}
\mathcal{A} = \frac{\pi}{2} \textrm{ for } r \ll 1& \textrm{ to } &\mathcal{A} = 2 \textrm{ for } r \gg 1
\end{array}.
\end{align}
Therefore, hard decoding of Gaussian distributions leads to loss of information for all signal-to-noise ratios. In particular, the number of repetitions required to reach a desired error rate is always at least $\pi/2 \approx 1.57$ times larger if the analog outcomes are binarized. The result for $r\gg1$ is consistent with the analysis of Ref.~\cite{danjou2014-2} and with known results from the classical theory of soft-decision decoding~\cite{chase1972,einarsson1976}.
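The limits above are easy to verify numerically. The following sketch evaluates the Chernoff information of Eq.~\eqref{eq:chernoffInformation} for the Gaussian model by a grid search over $s$ with a simple Riemann sum; the grids, integration range, and helper names are illustrative choices.

```python
import numpy as np
from math import erfc, log, pi, sqrt

def chernoff_gaussian(r):
    """C = -min_s ln \\int P_+^s P_-^{1-s} dO for the Gaussian model of
    Eq. (distsGaussian); the analytic result is C = r/2 for all r."""
    O = np.linspace(-20.0, 20.0, 40001)
    dO = O[1] - O[0]
    best = -np.inf
    for s in np.linspace(0.01, 0.99, 99):
        integrand = (np.sqrt(r / (2 * np.pi))
                     * np.exp(-s * r * (O - 1) ** 2 / 2
                              - (1 - s) * r * (O + 1) ** 2 / 2))
        best = max(best, -np.log(integrand.sum() * dO))
    return best

def chernoff_binary(eps):
    """Chernoff information of the binarized outcomes in the symmetric case
    eps_+ = eps_- = eps; the optimum s = 1/2 gives -ln(2 sqrt(eps(1-eps)))."""
    return -log(2 * sqrt(eps * (1 - eps)))
```

With these two functions, the soft-decoding advantage $\mathcal{A} = C/C_b$ reproduces the limits $\pi/2$ (small $r$) and $2$ (large $r$) quoted above.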
\subsection{Example 2: Gaussian distributed readout outcomes with conversion errors \label{sec:conversionErrors}}
In the presence of Gaussian readout noise, the eigenvalues $a = \pm 1$ are ideally each converted to Gaussian distributions with means $\pm 1$. In practice, however, imperfections in the readout scheme may lead to conversion errors. As a result, the distributions $P_\pm(\mathcal{O})$ often resemble mixtures of Gaussian distributions~\cite{morello2010,barthel2009,jeffrey2014,saira2014,walter2017,pakkiam2018,west2019,urdampilleta2019,zheng2019,xue2020-2}. Such imperfections can be modeled with the distributions
\begin{align}
\begin{split}
P_\pm(\mathcal{O}) = &(1-\eta)\, \sqrt{\frac{r}{2\pi}} \exp\left[-\frac{r\left(\mathcal{O} \mp 1\right)^2}{2}\right]\\
&+ \eta\, \sqrt{\frac{r}{2\pi}} \exp\left[-\frac{r\left(\mathcal{O} \pm 1\right)^2}{2}\right].
\end{split} \label{eq:distsConversion}
\end{align}
Here, $\eta$ is the rate of conversion errors. Expressions for $C$ and $C_b$ for these distributions are given in Appendix~\ref{app:softDecodingAdvantage}. The resulting soft-decoding advantage $\mathcal{A}$ is shown in Fig.~\ref{fig:fig3} as a function of $\eta$ and of the rate of errors due to Gaussian noise, $\epsilon_G \equiv \textrm{erfc}{\left(\sqrt{r/2}\right)}/2$.
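The behavior shown in Fig.~\ref{fig:fig3} can be sketched numerically as follows. By the mirror symmetry $P_-(\mathcal{O}) = P_+(-\mathcal{O})$ of Eq.~\eqref{eq:distsConversion}, the Chernoff optimum is taken at $s = 1/2$; the integration grids are illustrative choices.

```python
import numpy as np
from math import erfc, log, sqrt

def soft_decoding_advantage(r, eta):
    """A = C / C_b for the Gaussian-mixture model of Eq. (distsConversion).
    The mirror symmetry of the two mixtures places the minimizing s at 1/2."""
    O = np.linspace(-15.0, 15.0, 60001)
    dO = O[1] - O[0]
    g = lambda mu: np.sqrt(r / (2 * np.pi)) * np.exp(-r * (O - mu) ** 2 / 2)
    P_plus = (1 - eta) * g(+1) + eta * g(-1)
    P_minus = (1 - eta) * g(-1) + eta * g(+1)
    C = -np.log(np.sum(np.sqrt(P_plus * P_minus)) * dO)
    eps_G = 0.5 * erfc(sqrt(r / 2))              # pure Gaussian error rate
    eps = (1 - eta) * eps_G + eta * (1 - eps_G)  # binarization threshold at O = 0
    C_b = -log(2 * sqrt(eps * (1 - eps)))
    return C / C_b
```

When conversion errors dominate ($\eta \gg \epsilon_G$) the advantage approaches 1, whereas for $\eta = 0$ the pure-Gaussian values of Sec.~\ref{sec:gaussianDistributions} are recovered.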
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fig3}
\caption{Soft-decoding advantage $\mathcal{A}$ for a QND readout with both Gaussian noise and conversion errors. Here, $\epsilon_G = \textrm{erfc}\left(\sqrt{r/2}\right)/2$ is the rate of pure Gaussian errors and $\eta$ is the rate of conversion errors. Note that the Chernoff information increases as $\epsilon_G$ and $\eta$ decrease. The soft-decoding advantage was calculated using the expressions given in Appendix~\ref{app:softDecodingAdvantage}. \label{fig:fig3}}
\end{figure}
There is a clear transition from a region of parameter space with $\mathcal{A} = 1$, where conversion errors dominate, to a region with $\mathcal{A} > 1$, where Gaussian errors dominate. This agrees with the heuristic conclusions of Refs.~\cite{danjou2014-2,xue2020-2}. Note, however, that while previous work had to resort to time-consuming simulations to quantify the soft-decoding advantage of non-Gaussian distributions~\cite{danjou2014-2,hann2018,dinani2019,nakajima2019,liu2020,xue2020-2}, the present approach enables an accurate prediction of $\mathcal{A}$ by computing a single integral. This makes it much easier to explore the parameter space to engineer and optimize readout.
\section{Conclusion}
In conclusion, a generalized figure of merit for the repetitive QND readout of a binary quantum observable $A$ was suggested. This figure of merit is the Chernoff information associated with the analog distributions of readout outcomes for each eigenvalue of $A$ [see Eq.~\eqref{eq:chernoffInformation}]. When the readout outcomes are binary, the Chernoff information is closely related to the commonly used single-repetition error rate. Contrary to the single-repetition error rate, however, the Chernoff information is a universal figure of merit: all noise models with the same Chernoff information yield the same asymptotic functional form for the cumulative error rate. It follows that arbitrary non-Gaussian readout noise can be modeled by effective Gaussian noise without loss of generality. Crucially, it was shown that universal behavior persists for the small number of repetitions and non-QND imperfections relevant to real-world experiments. Finally, the Chernoff information was used to quantify the amount of information discarded by binarizing readout outcomes in each repetition, and simple results were derived analytically for experimentally relevant readout models. The results presented here provide a unified description of repetitive QND readout and should greatly facilitate the standardization, optimization, and engineering of quantum readout across all experimental platforms.
There are several possible avenues for future research. Firstly, it would be interesting to generalize the results presented here for noise that is nonstationary or correlated between repetitions, two features that are likely to appear in real experiments. Extensions to the discrimination of nonbinary observables~\cite{leang1997,li2016} could also be of interest in some architectures. Moreover, a rigorous justification of the universal behavior described by Eq.~\eqref{eq:nonQNDUniversality} is highly desirable. Such a justification could potentially be obtained by using extensions of large deviation theory for Markov processes~\cite{donsker1976,butler2015} or through the relationship between probability theory and the renormalization group~\cite{jonalasinio2001}. In the latter approach, the readout outcomes $\mathcal{O}_k$ are thought of as classical degrees of freedom on an $N$-site lattice and non-QND transition probabilities are interpreted as weak interaction parameters between the $\mathcal{O}_k$. The functional form of the error rate could then be obtained by analyzing the renormalization group flow around the fixed point at $N=\infty$. While the present work focuses on repetitive QND readout, the results are expected to be relevant for all quantum information processing tasks where large streams of analog readout outcomes must be processed. In particular, it would be of great interest to investigate whether the Chernoff information can help to universally parametrize the logical failure rate~\cite{iyer2018} as well as quantify the soft-decoding advantage for quantum error-correcting codes subject to arbitrary readout noise.
\begin{acknowledgments}
This work was supported by the \href{https://www.bmbf.de/}{BMBF} project DiaPol, the \href{https://www.dfg.de/}{DFG} via a Reinhart Koselleck award, and the \href{https://erc.europa.eu/}{ERC} Synergy Grant HyperQ. The author thanks W.A.~Coish, M.B.~Plenio, X.~Xue, and M.C.~Korzeczek for useful comments on the manuscript, as well as X.~Xue, T.F.~Watson, and L.M.K.~Vandersypen for sharing data.
\end{acknowledgments}
\section{Introduction}
It is well-known that humans and animals can learn new tasks without forgetting old ones. Nevertheless, conventionally retraining an existing Deep Neural Network (DNN) model on new tasks can easily erase previously acquired knowledge and thus degrade performance on earlier tasks. This phenomenon is known as \textit{catastrophic forgetting}, which widely exists in continual learning~\cite{kirkpatrick2017overcoming}. We note that continual learning refers to a setting in which a model is incrementally updated over a sequence of tasks, transferring knowledge from earlier tasks to the current one.
Recent works~\cite{li2017learning,kirkpatrick2017overcoming,riemer2018learning,chaudhry2018efficient,yoon2017lifelong,mallya2018packnet, hung2019compacting, yoon2019scalable, parisi2019continual} have made significant efforts in introducing various countermeasures to overcome the catastrophic forgetting issue.
As illustrated in~\cref{fig:overview}, given a well-trained model on the initial task, those countermeasures could be generally summarized into four methods:
\textbf{a)} Training the model w.r.t. the new task with regularization to prevent drastic weight updates, thus maintaining the performance on old tasks~\cite{li2017learning,kirkpatrick2017overcoming,riemer2018learning,chaudhry2018efficient};
\textbf{b)} Freezing the backbone model of old tasks, while introducing additional task-specific weights for the new tasks~\cite{rusu2016progressive};
\textbf{c)} Selectively retraining a subset of the backbone model weights, as well as adding additional task-specific parameters for new tasks~\cite{mallya2018packnet, hung2019compacting, yoon2017lifelong, yoon2019scalable};
\textbf{d)} Fixing the backbone model weights and only learning a binary mask to select relevant weights for new tasks~\cite{mallya2018piggyback}.
Overall, it can be seen that the trend is to introduce task-specific parameters while squeezing out model redundancy.
We elaborate further on the above four methods against catastrophic forgetting (\cref{fig:overview}). Method-a) cannot effectively prevent catastrophic forgetting as the number of new tasks grows.
In contrast, in Method-b) and Method-d), since the backbone model weights corresponding to old tasks are frozen, the inference accuracy on old tasks is guaranteed. However, Method-b) fails to achieve good performance on new tasks, because the parameters corresponding to old tasks (i.e., the backbone model) are not effectively utilized.
More recently, Method-c), the compress-grow-based approach (CPG~\cite{hung2019compacting}), handles the problem by iteratively compressing (via pruning) and then growing additional task-specific parameters. Note that it expands the model capacity until the accuracy on the new task is maximized. Unfortunately, this method comes at the cost of one order of magnitude longer training time and more computing resources, since it combines model pruning, weight selection, model channel expansion and even weight regularization. Although it largely alleviates catastrophic forgetting and performs well on both old and new tasks, such an extremely high training cost, in terms of both training time and computing resources, makes it impractical to deploy in the increasingly popular edge- or mobile-computing-based continual learning domain.
In this work, we propose a new training method called \textit{Kernel-wise Soft Mask} (KSM), which learns a kernel-wise hybrid binary and real-value soft mask for each task, while keeping the backbone model fixed. The KSM method is able to mitigate catastrophic forgetting, to better transfer knowledge from old tasks, and more importantly, to maximize the training efficiency.
More specifically, we use a network architecture similar to Piggyback~\cite{mallya2018piggyback} (\cref{fig:overview}(d)), which introduces a mask tensor to perform weight refactorization.
We want to highlight that our method differs from Piggyback and other prior works in the following aspects:
\begin{enumerate}
\item \textbf{Kernel-wise mask sharing.} To reduce the mask size and improve the computation efficiency in hardware, we design the mask kernel-wise instead of element-wise. For example, for a 3 by 3 kernel, the mask size is reduced by a factor of 9.
\item \textbf{Soft mask.}
To boost the knowledge representation ability without involving additional training cost, we decompose the mask into a binary mask and a partial real-value scaling coefficient tensor.
\item \textbf{Softmax trick.} To eliminate gradient estimation in binary mask training, we propose to leverage the softmax trick to relax the gradient calculation of the mask during training.
\end{enumerate}
Benefiting from these techniques, the proposed KSM method could achieve CPG-like (or even better) accuracy performance, while keeping Piggyback-like (or even better) training speed.
It is also worth noting that, in this work, we only focus on KSM without growing the backbone model. Nevertheless, it is compatible with model expansion if needed, which will be investigated in future work.
\section{Related Work}
\subsection{Continual learning}
The related works in continual learning can be categorized into network regularization and dynamic architecture.
\subsubsection{Network regularization}
Network regularization approaches aim to constrain updates of model weights by applying penalties that preserve the information of learned tasks. \cite{li2017learning} proposes Learning Without Forgetting (LWF), which shrinks the prediction distance between the current task and previous tasks via knowledge distillation~\cite{hinton2015distilling}.
EWC~\cite{kirkpatrick2017overcoming} uses Fisher information to evaluate the importance of weights for old tasks, and
slows down the update of the important weights. Based on similar ideas, \cite{zenke2017temporal} alleviates catastrophic forgetting by allowing individual synapses to estimate their importance for solving a learned task.
\subsubsection{Dynamic architecture}
Network regularization methods cannot completely prevent catastrophic forgetting, especially with an unlimited number of tasks.
Another approach to address this challenge is to dynamically expand the network architecture. \cite{rusu2016progressive} proposes to expand the network by generating a new sub-network of fixed size for each task, while fixing the backbone model. \cite{yoon2017lifelong} selectively retrains the old network while expanding it with a limited number of neurons via group-sparsity regularization, and then splits and duplicates the neurons to avoid catastrophic forgetting.
Beyond that, PackNet~\cite{mallya2018packnet} avoids this issue by identifying weights important for prior tasks through network pruning, while keeping the important weights fixed after training for a particular task.
In contrast to directly expanding the model architecture, \cite{yoon2019scalable} adds additional task-specific parameters for each task and selectively learns the task-shared parameters together.
CPG~\cite{hung2019compacting} combines the model pruning, weight selection and model expansion methods, gradually pruning the task-shared weights and then learning additional task-specific weights. Moreover, it uniformly expands the model channels in each layer if the current accuracy cannot meet the requirement.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{piggyback_1.pdf}
\caption{Illustration of Piggyback to learn a binary mask given a background model~\cite{mallya2018piggyback}. }
\label{fig:piggyback}
\end{figure}
\subsection{Multi-domain learning}
Multi-domain learning~\cite{rebuffi2017learning, rosenfeld2018incremental} aims to build a model that can adapt to multiple visual domains without forgetting previous knowledge, while using as few parameters as possible. \cite{rosenfeld2018incremental} proposes to recombine the weights of the backbone model via controller modules in a channel-wise manner. \cite{liu2019end} proposes domain-specific attention modules for the backbone model. One of the most related methods is Piggyback~\cite{mallya2018piggyback}, which solves the issue by learning task-specific binary masks for each task, as illustrated in~\cref{fig:overview}(d). It achieves this by generating real-value masks of the same size as the weights, passing them through a binarization function to obtain binary masks, which are then applied to the existing weights. We denote the real-value mask and binary mask as ${\tens{M}}^r$ and ${\tens{M}}^b$ respectively; then, the binarization function is given by:
\begin{equation}
\label{eqt:binary}
\textup{Forward}:~~~ {\tens{M}}^{\textrm{b}} =
\begin{cases}
1 & \textrm{if} ~{\tens{M}}^{\textrm{r}} \geq \tau\\
0 & \textrm{otherwise}
\end{cases}
\end{equation}
\begin{equation}
\textup{Backward}:~~~ \nabla {\tens{M}}^{\textrm{b}} = \nabla {\tens{M}}^{\textrm{r}}
\label{eqt:ste}
\end{equation}
where $\tau$ is a constant threshold value. However, the binarization function is non-differentiable, which blocks back-propagation. They use the Straight-Through Estimator (STE)~\cite{hubara2016binarized} to solve this problem, which estimates the gradient of the real-value mask by the gradient of the binary mask, as shown in~\cref{eqt:ste}.
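A minimal sketch of this forward/backward pair (with an illustrative threshold value) reads:

```python
import numpy as np

TAU = 5e-3  # threshold tau; the value is illustrative

def binarize_forward(m_real):
    """Hard-threshold forward pass of Eq. (eqt:binary)."""
    return (m_real >= TAU).astype(m_real.dtype)

def binarize_backward(grad_binary):
    """Straight-Through Estimator of Eq. (eqt:ste): pass the gradient of the
    binary mask through to the real-value mask unchanged."""
    return grad_binary

m_real = np.array([0.010, -0.020, 0.004, 0.030])
m_bin = binarize_forward(m_real)                     # -> [1., 0., 0., 1.]
upstream = np.array([0.5, -0.5, -0.5, 0.5])          # pretend loss gradient
m_real = m_real - 0.1 * binarize_backward(upstream)  # one SGD step
```

Note that the forward pass is discrete while the update acts on the underlying real values, which is exactly the mismatch the STE papers accept as an approximation.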
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{method_4.pdf}\\
\caption{
Overview of the proposed soft mask (KSM) learning method.
Given a task $t$, we aim to learn a task-specific soft mask ${\tens{M}}_t$ that refactorizes the fixed backbone weights to favor the current task. ${\tens{M}}_t$ is decomposed into a binary mask ${\tens{M}}^\textrm{b}_t$ and a scaling coefficient tensor ${\tens{A}}^\textrm{s}_t$ (\cref{eqt:soft_mask}). To obtain ${\tens{M}}^\textrm{b}_t$, the learnable real-value mask ${\tens{M}}^\textrm{r}_t$ passes through a logistic function (\cref{eqt:sigmoid}) and a softmax function (\cref{eqt:soft_trick}) successively. In addition, the scaling tensor ${\tens{A}}^\textrm{s}_t$ is generated from ${\tens{M}}^\textrm{r}_t$ (\cref{eqt:scaling}).
During training backward, the real-value mask can be updated without gradient estimation. After training, only the soft mask is saved for testing.
}
\label{fig:soft_mask}
\end{figure}
\section{Kernel-wise Soft Mask Method}
Different from conventional multi-task learning, where the data of all tasks is available at training time, we consider a continual learning setting in which new tasks ($\{\mathcal{T}_1, \mathcal{T}_2, ..., \mathcal{T}_N\}$) arrive sequentially and their data cannot be reused for training future tasks.
Given a convolution layer, we denote the weights ${\tens{W}}^{(l)} \in \mathbb{R}^{c_\textrm{in}\times c_\textrm{out}\times kh\times kw}$, where $c_\textrm{in}$, $c_\textrm{out}$, $kh$ and $kw$ denote the number of input channels, the number of output channels, the kernel height and the kernel width of the $l$-th layer, respectively.
We denote the dataset of the $t$-th task ($\mathcal{T}_t$) as ${\tens{D}}_t = \{{\bm{x}}_t, {\bm{y}}_t\}$, where ${\bm{x}}_t$ and ${\bm{y}}_t$ are vectorized input data and label pair.
To adapt the pre-trained backbone model with parameters $\{{\tens{W}}_1\}$ from the initial task $\mathcal{T}_1$ to a new task $\mathcal{T}_t$, we intend to learn a task-specific real-valued soft mask ${\tens{M}}_t$ that is applied to the fixed parameters ${\tens{W}}_1$ to provide good performance on the new task.
Based on this idea, the optimization objective can be mathematically formalized as:
\begin{equation}
\min_{{\tens{M}}_t} {\mathcal{L}}{ \Big ( }f ( {\bm{x}}_t; \{{\tens{M}}_t \times {\tens{W}}_{1}\}), {\bm{y}}_t {\Big )}
\label{eqt:loss}
\end{equation}
As described in~\cref{eqt:loss}, the inference performs a multiplication between the mask tensor ${\tens{M}}_t$ and the weights, thus refactorizing the weights to favor the new task. This mask-based method differs from prior mask-based counterparts \cite{mallya2018piggyback} in the following aspects:
\begin{enumerate}
\item \textbf{Kernel-wise Mask Sharing}. Since the task-specific weights are refactorized from the backbone model via the task-specific mask, the size of the mask directly determines the computation and model overhead of the domain adaption objective.
Instead of utilizing a mask of identical size to the weights as in~\cite{mallya2018piggyback}, we introduce a compact mask where each mask element is shared by a $kh \times kw$ kernel. Such a kernel-wise mask properly alleviates the computation and memory burden, with the potential to outperform existing methods in terms of accuracy.
\item \textbf{Soft Mask.} In contrast to prior works leveraging binary mask (${\tens{M}}_t \in \{0,1\}$), we use the real-value mask (${\tens{M}}_t \in \mathbb{R}$) instead (aka. soft mask) with sparse patterns.
Note that our mask is still sparse like the binary counterpart, but the zero elements are replaced with real values.
Such a soft mask can be viewed as a superposition of a binary mask ${\tens{M}}^b_t$ and a scaling coefficient tensor ${\tens{A}}^s_t$, which can be expressed as:
\begin{equation}
{\tens{M}}_t = {\tens{M}}^b_t + {\tens{A}}^s_t.
\label{eqt:soft_mask}
\end{equation}
The above modification empowers the soft mask with a richer representation capability, without requiring special low-level compute-kernel support, thus meeting the objective of low hardware overhead.
\item \textbf{Softmax trick for better gradient calculation.}
Since the soft mask above includes a sparse portion, the non-differentiability issue still exists. Instead of utilizing the Straight-Through Estimator (STE) as in the binary mask counterpart, we propose to leverage the softmax trick to relax the categorical objective. Compared to the STE method, the softmax trick provides a better gradient calculation and achieves higher accuracy on new tasks.
\end{enumerate}
\cref{fig:overview} depicts the evolution from prior implementations to our method. More details of our soft mask based method are presented in the following subsections.
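As a minimal sketch of the kernel-wise sharing in item 1, one mask scalar is broadcast over each $kh \times kw$ kernel; the layer shape and values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
c_out, c_in, kh, kw = 4, 3, 3, 3             # illustrative layer shape

W1 = rng.normal(size=(c_out, c_in, kh, kw))  # fixed backbone weights
mask = rng.random(size=(c_out, c_in))        # one scalar per 3x3 kernel

# Broadcasting the kernel-wise mask refactorizes the backbone weights for
# the new task; the mask holds kh*kw = 9x fewer entries than an
# element-wise mask of the same layer.
W_task = W1 * mask[:, :, None, None]
```

The broadcast over the last two axes is what makes the mask "kernel-wise": every element of a given $3 \times 3$ kernel is scaled by the same mask value.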
\subsection{Soft mask}
As in the Piggyback method~\cite{mallya2018piggyback}, the adopted binary mask rigidly divides the backbone model into two categories, task-relevant or task-irrelevant, represented by 1 or 0 in the binary mask respectively.
To better leverage the knowledge of the backbone model, we introduce an additional trainable real-value scaling coefficient tensor ${\tens{A}}^\textrm{s}$ as a replacement for the zero elements of the binary mask counterpart. In this way, it improves the learning capacity without the time-consuming retraining of zeroed-out weights as in CPG~\cite{hung2019compacting}.
Next, we seek to answer the following two key questions:
\begin{itemize}
\item How to generate the scaling coefficient tensor without involving additional training parameters or cost?
\item Where to apply these scaling factors?
\end{itemize}
Intuitively, the trainable real-value mask represents the relevance or importance level of the corresponding weight kernels of the backbone model. In light of this, we propose to directly use it as the scaling factor.
In practice, the magnitude of the values in the real-value mask is typically very small (e.g., 0.01), and some values are negative.
In our method, we normalize those values and treat them as the scaling factor of each kernel when learning new tasks.
Next, we apply those normalized scaling factors only to the kernels that are zeroed out in the binary mask, so as to create a soft mask while avoiding a significant increase of the mask size due to those real values.
As shown in Fig.~\ref{fig:soft_mask}, the above two steps can be achieved by inverting `0' and `1' in the generated binary mask ${\tens{M}}^b$, followed by multiplying with the real-value mask ${\tens{M}}^r$.
The scaling factor is given by:
\begin{equation}
{\tens{A}}^s = \overset{\sim}{\tens{M}}{^b} \cdot \textrm{normal}({\tens{M}}_t^r.\textrm{detach()})
\label{eqt:scaling}
\end{equation}
where $\overset{\sim}{\tens{M}}{^b}$ inverts the 0s and 1s of ${\tens{M}}{^b}$. The `detach' operation is used to take only the values of ${\tens{M}}{^r}$ without influencing back-propagation. Note that, since all masks are defined kernel-wise, each mask value is applied to a weight kernel as shown in~\cref{fig:soft_mask}.
In short, we generate the soft mask ${\tens{M}}$ by combining the binary mask ${\tens{M}}^b$ and the scaling factor ${\tens{A}}^s$ as shown in~\cref{eqt:soft_mask}. It can be understood as fixing the important kernels (`1' in the binary mask) and scaling the unimportant kernels (`0' in the binary mask) to different trainable levels for the new task. The soft mask is generated in this way mainly for the following two reasons:
\begin{enumerate}
\item Directly utilizing the already existing real-value mask does not involve additional trainable parameters or changing the backbone model architecture, indicating that it can be trained with no extra cost.
\item These scaling factors increase the model capacity for the new task, with very small mask size increase due to the facts that 1) real-values occupy a small portion in the mask and 2) our kernel-wise mask dimension is already much smaller than traditional element-wise mask. We will quantify the overhead and the sparsity level in the analysis later.
\end{enumerate}
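Putting \cref{eqt:soft_mask} and \cref{eqt:scaling} together, the mask construction can be sketched as follows; the min-max normalization to $[0, 1]$ is an illustrative choice, since the text only specifies that the small real values are normalized.

```python
import numpy as np

def build_soft_mask(m_real, tau=0.0):
    """Combine a binary mask with scaled real values (Eq. eqt:soft_mask):
    kernels above the threshold keep mask value 1, the rest receive a
    normalized scaling factor derived from the real-value mask itself
    (Eq. eqt:scaling)."""
    m_bin = (m_real >= tau).astype(float)   # kernel-wise binary mask M^b
    inverted = 1.0 - m_bin                  # invert 0s and 1s
    span = m_real.max() - m_real.min()
    scale = (m_real - m_real.min()) / span  # min-max normalized real values
    return m_bin + inverted * scale         # soft mask M

m = np.array([0.02, -0.01, 0.005, -0.03])   # kernel-wise real-value mask
M = build_soft_mask(m)                      # -> [1.0, 0.4, 1.0, 0.0]
```

Kernels selected by the binary mask keep their backbone weights exactly, while the remaining kernels are attenuated rather than zeroed out.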
\subsection{Softmax trick for gradient calculation}
\cite{mallya2018piggyback} proposes a masking method that trains a real-value mask followed by a hard threshold function to binarize the mask, as depicted in~\cref{eqt:binary}.
However, the binarization function is not differentiable; the general solution is to skip the threshold function during back-propagation and update the real-value mask directly with gradients computed from the binary mask, which is known as the Straight-Through Estimator (STE) shown in \cref{eqt:ste}. Different from that, we propose a method that eliminates the gradient estimation step and makes the whole mask learning compatible with the existing gradient-based back-propagation training process.
First, we relax the binarization function in \cref{eqt:binary} to a continuous logistic function:
\begin{equation}
\sigma({\tens{M}}^r) = \frac{1}{1+\textrm{exp}(-k({\tens{M}}^r-\tau))}
\label{eqt:sigmoid}
\end{equation}
where $k$ is a constant.
Note that the logistic function becomes closer to the hard thresholding function as $k$ increases.
The partial derivative of the logistic function is:
\begin{equation}
\frac{\partial \sigma({\tens{M}}^r)} {\partial {\tens{M}}^r} = k \cdot \sigma({\tens{M}}^r) \cdot (1-\sigma({\tens{M}}^r))
\end{equation}
In this work, we treat $\sigma({\tens{M}}^r)$ as a probability mask that estimates the importance level of the corresponding weight kernels, saving training time without involving extra parameters.
When considering it as a probability mask, sampling from a Bernoulli distribution is a reasonable and popular way to generate the binary mask, but such a sampling procedure is not differentiable. To overcome this issue, we leverage the softmax trick, which performs a differentiable sampling to approximate a categorical random variable. Summarizing, we define $p(\cdot)$ using the softmax trick as
\begin{equation}
p({\tens{M}}^r) = \frac{ \textrm{exp}((\textrm{log} \pi_0)/T)}{\textrm{exp}((\textrm{log}\pi_0)/T) + \textrm{exp}((\textrm{log}\pi_1)/T)}
\label{eqt:soft_trick}
\end{equation}
where $\pi_0$ and $\pi_1$ represent $1-\sigma({\tens{M}}^r)$ and $\sigma({\tens{M}}^r)$ respectively. The temperature $T$ is a hyper-parameter that adjusts the range of the input values; meanwhile, choosing a larger one avoids gradient vanishing during back-propagation. Note that the output of $p({\tens{M}}^r)$ approaches a Bernoulli sample as $T$ tends to 0.
Benefiting from the differentiable property of~\cref{eqt:sigmoid} and \cref{eqt:soft_trick}, the real-value mask ${\tens{M}}^r$ can be embedded into the existing gradient-based back-propagation training without gradient approximation. During training, most values in the distribution of $p({\tens{M}}^r)$ move towards either 0 or 1. To represent $p({\tens{M}}^r)$ in binary format, we use the one-hot code of $p({\tens{M}}^r)$ during the training forward pass, which has no influence on the real-value mask to be updated during back-propagation.
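The two relaxation steps, \cref{eqt:sigmoid} and \cref{eqt:soft_trick}, can be sketched as follows; the steepness $k$, threshold $\tau$ and temperature values are illustrative assumptions.

```python
import numpy as np

K, TAU = 100.0, 0.005   # logistic steepness k and threshold tau (assumed)

def sigma(m_real):
    """Continuous relaxation of the hard threshold, Eq. (eqt:sigmoid)."""
    return 1.0 / (1.0 + np.exp(-K * (m_real - TAU)))

def p_mask(m_real, T):
    """Softmax relaxation of Eq. (eqt:soft_trick) with pi_0 = 1 - sigma and
    pi_1 = sigma; every operation here is differentiable, so the real-value
    mask can be trained without a straight-through estimator."""
    s = sigma(m_real)
    z0 = np.exp(np.log(1.0 - s) / T)
    z1 = np.exp(np.log(s) / T)
    return z0 / (z0 + z1)

m = np.array([0.05, -0.05])
hard = p_mask(m, T=0.01)   # close to a one-hot decision
soft = p_mask(m, T=5.0)    # smooth values between 0 and 1
```

As $T$ shrinks, the output saturates to a hard (one-hot) decision between $\pi_0$ and $\pi_1$, while larger $T$ keeps the outputs smooth and the gradients well-behaved.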
In the end, the soft mask is generated as described in~\cref{eqt:soft_mask}. During forward, the input-output relationship of one layer is given by
${\bm{y}} = {\tens{W}}_{1} \cdot ({\tens{M}}^\textrm{b} + {\tens{A}}^\textrm{s}){\bm{x}}$. According to the chain rule of back-propagation, the gradient with respect to the real-value mask is given by:
\begin{equation}
\begin{gathered}
\nabla {\tens{M}}^r = (\frac{\partial E} {\partial {\bm{y}}}) \cdot (\frac{\partial {\bm{y}}} {\partial p({\tens{M}}^r)}) \cdot (\frac{\partial p({\tens{M}}^r)} {\partial \sigma({\tens{M}}^r)}) \cdot (\frac{\partial \sigma({\tens{M}}^r)} {\partial {\tens{M}}^r}) \\
\end{gathered}
\end{equation}
where each partial derivative is given by:
\begin{equation}
\begin{gathered}
\frac{\partial E} {\partial {\bm{y}}} = \nabla {\bm{y}} \\
\frac{\partial {\bm{y}}} {\partial p({\tens{M}}^r)} = {\bm{x}}^T \cdot {\tens{W}}_1 \\
\frac{\partial p({\tens{M}}^r)} {\partial \sigma({\tens{M}}^r)} = - \frac{p({\tens{M}}^r)(1-p({\tens{M}}^r))} {T\sigma({\tens{M}}^r)(1-\sigma({\tens{M}}^r))}
\end{gathered}
\end{equation}
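The third partial derivative above can be checked numerically. A small finite-difference sketch, assuming the parameterization $p(\sigma) = (1-\sigma)^{1/T} / \big((1-\sigma)^{1/T} + \sigma^{1/T}\big)$ implied by \cref{eqt:soft_trick}:

```python
def p_of_sigma(s, T):
    # Softmax-trick output as a function of s = sigma(M^r).
    u = (1.0 - s) ** (1.0 / T)
    v = s ** (1.0 / T)
    return u / (u + v)

def dp_dsigma_analytic(s, T):
    # Closed form from the chain-rule derivation: -p(1-p) / (T s (1-s)).
    p = p_of_sigma(s, T)
    return -p * (1.0 - p) / (T * s * (1.0 - s))

s, T, eps = 0.3, 0.5, 1e-6
numeric = (p_of_sigma(s + eps, T) - p_of_sigma(s - eps, T)) / (2.0 * eps)
print(abs(numeric - dp_dsigma_analytic(s, T)))  # small: central difference matches
```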
By doing so, the proposed method can optimize the soft mask in an end-to-end manner, where every step is differentiable. We illustrate the complete procedure in Algorithm \ref{algo:1}. During training, we save the optimized ${\tens{M}}^*$ and then directly apply it to the corresponding weights for testing.
\setlength{\textfloatsep}{0pt}
\begin{algorithm}
\caption{The proposed soft mask learning}\label{algo:1}
\begin{algorithmic}[1]
\Require{Given the initial task $\mathcal{\tau}_1$ and the backbone model with parameters ${\tens{W}}_1$, the threshold $\tau$ and the temperature $T$ }
\For{Task t = 2, ..., N}
\State Get data ${\bm{x}}_t$ and label ${\bm{y}}_t$
\State ${\tens{M}}_t^b \leftarrow$ one-hot$(p({\tens{M}}_t^r))$
\State $\overset{\sim}{\tens{M}}{^b_t}\leftarrow \textrm{invert}({\tens{M}}_t^b)$
\State ${\tens{M}}_t \leftarrow {\tens{M}}^b_t + \overset{\sim}{\tens{M}}{^b_t} \cdot \textrm{normal}({\tens{M}}_t^r.\textrm{detach}()) $
\State ${\tens{M}}^*_t \leftarrow \min_{{\tens{M}}^s_t} {\mathcal{L}}{ \Big ( }f ( {\bm{x}}_t;{\tens{W}}_{1} \cdot {\tens{M}}_t), {\bm{y}}_t {\Big )}$
\State During testing, execute $f ( {\bm{x}}_t;{\tens{W}}_{1} \cdot {\tens{M}}^*_t)$
\EndFor
\end{algorithmic}
\end{algorithm}
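For illustration, the mask construction in lines 3--5 of Algorithm~\ref{algo:1} can be rendered framework-free as below. The normalization function in line 5 is unspecified in the listing, so min-max normalization is assumed here purely for illustration:

```python
import math

def relaxed_prob(m, T):
    # Softmax-trick probability of the '0' branch (cf. the soft-trick equation).
    pi1 = 1.0 / (1.0 + math.exp(-m))
    pi0 = 1.0 - pi1
    a, b = pi0 ** (1.0 / T), pi1 ** (1.0 / T)
    return a / (a + b)

def build_soft_mask(m_real, T=0.5):
    probs = [relaxed_prob(m, T) for m in m_real]
    # One-hot / hard mask: keep the weight when the '1' branch dominates.
    binary = [1.0 if p < 0.5 else 0.0 for p in probs]
    inverted = [1.0 - b for b in binary]
    # Assumed normal(.): min-max normalization of the detached real mask.
    lo, hi = min(m_real), max(m_real)
    scaled = [(m - lo) / (hi - lo) for m in m_real]
    # Hard 1s stay as-is; masked-out entries carry a real-valued scale.
    return [b + i * s for b, i, s in zip(binary, inverted, scaled)]

mask = build_soft_mask([-2.0, -0.5, 0.5, 2.0])
print(mask)  # [0.0, 0.375, 1.0, 1.0]
```

Each entry is thus either exactly 1 (binary "keep") or a small real-valued scaling factor, matching the hybrid mask described in the text.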
\begin{table*}[t]
\caption{The accuracy (\%) and training cost (s) on Twenty Tasks of CIFAR-100. Considering both accuracy and training time, our method achieves the best average accuracy and is around $10\times$ faster than CPG.}
\scalebox{0.65}{
\begin{tabular}{cccccccccccccccccccccc|c}
\hline
Methods & & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 & \textbf{Avg} \\ \hline
\multirow{2}{*}{PackNet} & Acc & 66.4 & 80.0 & 76.2 & 78.4 & 80.0 & 79.8 & 67.8 & 61.4 & 68.8 & 77.2 & 79.0 & 59.4 & 66.4 & 57.2 & 36.0 & 54.2 & 51.6 & 58.8 & 67.8 & 83.2 & 67.5 \\ \cline{2-23}
& \multicolumn{1}{l}{Time} & \multicolumn{1}{l}{334} & \multicolumn{1}{l}{360} & \multicolumn{1}{l}{370} & \multicolumn{1}{l}{379} & \multicolumn{1}{l}{382} & \multicolumn{1}{l}{385} & \multicolumn{1}{l}{385} & \multicolumn{1}{l}{389} & \multicolumn{1}{l}{234} & \multicolumn{1}{l}{358} & \multicolumn{1}{l}{370} & \multicolumn{1}{l}{378} & \multicolumn{1}{l}{384} & \multicolumn{1}{l}{385} & \multicolumn{1}{l}{384} & \multicolumn{1}{l}{337} & \multicolumn{1}{l}{359} & \multicolumn{1}{l}{371} & \multicolumn{1}{l}{377} & \multicolumn{1}{l|}{382} & \multicolumn{1}{l}{365} \\ \hline
\multirow{2}{*}{Piggyback} & Acc & 65.8 & 78.2 & 76.4 & 79.8 & 86.0 & 81.0 & 79.4 & 82.4 & 81.8 & 86.4 & 87.8 & 76.0 & 82.8 & 80.6 & 48.2 & 70.4 & 65.0 & 71.80 & 87.80 & 90.6 & 77.1 \\ \cline{2-23}
& \multicolumn{1}{l}{Time} & \multicolumn{1}{l}{100} & \multicolumn{1}{l}{150} & \multicolumn{1}{l}{102} & \multicolumn{1}{l}{113} & \multicolumn{1}{l}{154} & \multicolumn{1}{l}{102} & \multicolumn{1}{l}{121} & \multicolumn{1}{l}{119} & \multicolumn{1}{l}{97} & \multicolumn{1}{l}{130} & \multicolumn{1}{l}{84} & \multicolumn{1}{l}{110} & \multicolumn{1}{l}{96} & \multicolumn{1}{l}{120} & \multicolumn{1}{l}{106} & \multicolumn{1}{l}{97} & \multicolumn{1}{l}{97} & \multicolumn{1}{l}{106} & \multicolumn{1}{l}{110} & \multicolumn{1}{l|}{119} & \multicolumn{1}{l}{111} \\ \hline
\multirow{2}{*}{CPG} & Acc & 66.6 & 76.2 & 78.2 & 80.6 & 86.4 & 83.0 & 81.4 & 82.4 & 82.0 & 86.8 & 86.8 & 81.4 & 82.8 & 82.0 & 50.4 & 72.4 & 66.2 & 71.2 & 85.6 & 91.6 & 78.7 \\ \cline{2-23}
& Time & 629 & 2101 & 2123 & 2120 & 2121 & 2127 & 2116 & 2120 & 2122 & 2121 & 2122 & 2115 & 2127 & 2125 & 2126 & 2114 & 2124 & 2126 & 2123 & 2125 & 2046 \\ \hline
\multirow{2}{*}{Ours} & Acc & 67.2 & 78.0 & 78.8 & 78.4 & 85.6 & 82.6 & 80.2 & 83.4 & 82.6 & 89.4 & 88.4 & 80.6 & 83.2 & 80.8 & 52.8 & 73.2 & 67.8 & 72.6 & 88.0 & 92.0 & \textbf{79.2} \\ \cline{2-23}
& Time & 130 & 81 & 111 & 123 & 123 & 127 & 62 & 106 & 88 & 78 & 95 & 85 & 73 & 88 & 90 & 90 & 80 & 95 & 96 & 65 & \textbf{94.3} \\ \hline
\end{tabular}}
\label{tab:cifar100_20}
\end{table*}
\section{Experiments}
\subsection{Datasets and backbone architectures}
Similar to prior works, we use VGG16-BN~\cite{simonyan2014very} and ResNet50~\cite{he2016deep} as the backbone architectures for the following datasets:
\noindent\textbf{ImageNet-to-Sketch}
In this experiment, five fine-grained image classification datasets are used: CUBS~\cite{wah2011caltech}, Stanford Cars~\cite{krause20133d}, Flowers~\cite{nilsback2008automated}, WikiArt~\cite{saleh2015large} and Sketch~\cite{eitz2012humans}. We use ResNet50 pre-trained on the ImageNet dataset~\cite{russakovsky2015imagenet} as the backbone model, and then fine-tune on the fine-grained datasets sequentially.
\noindent\textbf{Twenty Tasks of CIFAR-100} We divide the CIFAR-100 dataset into 20 tasks. Each task has 5 classes, 2500 training images, and 500 testing images. In this experiment, the VGG16-BN model (VGG16 with batch-normalization layers) is employed to train the 20 tasks sequentially.
\subsection{Competing Methods}
To test the efficacy of our method, we compare it with several recent representative methods:
\begin{itemize}
\item \textbf{Whole model fine-tuning:} Fine-tuning the whole model for each task individually.
\item \textbf{PiggyBack~\cite{mallya2018piggyback}:} It fixes the backbone weights and then learns a binary mask to select partial weights for new tasks.
\item \textbf{PackNet~\cite{mallya2018packnet}:} It first prunes unimportant weights, and then fine-tunes them for learning new tasks.
\item \textbf{CPG~\cite{hung2019compacting}:} It combines PackNet and PiggyBack to gradually prune, pick and grow the backbone model for learning new tasks sequentially.
\end{itemize}
\subsection{Results on ImageNet-to-Sketch dataset}
In this experiment, following the same settings as CPG~\cite{hung2019compacting} and Piggyback~\cite{mallya2018piggyback}, we train each task for 30 epochs using the Adam optimizer. The initial learning rate is 1e-4, decayed by a factor of 10 after 15 epochs.
\subsubsection{Accuracy comparison}
The accuracy of the five classification tasks is tabulated in~\cref{tab:imagenet}.
For the first ImageNet task, CPG and PackNet perform slightly worse than the others, since both methods have to compress the model via pruning.
Then, for the five subsequent fine-grained tasks, the proposed method achieves better accuracy than both Piggyback and PackNet on all tasks.
Even compared with fine-tuning the whole model individually for each task, we still achieve better performance, except on the WikiArt dataset.
Compared to CPG, which requires an order of magnitude more training time (\cref{fig:train_cost_imagenet}), our method achieves better accuracy on CUBS, Flowers and Sketch. However, owing to the small portion of real values in the mask, our method requires a slightly larger model size than the other methods.
Note that the model size reported in~\cref{tab:imagenet} includes both the backbone model and the introduced masks.
\subsubsection{Training time comparison} To ensure a fair comparison, all methods are trained on a single Quadro RTX 5000 GPU with the same batch size.
\cref{fig:train_cost_imagenet} summarizes the whole training time for each method.
First, our method is slightly faster than Piggyback, since the proposed soft mask learning (as illustrated in~\cref{eqt:sigmoid} and \cref{eqt:soft_mask}) is faster than the binarization function in the actual hardware implementation. Second, both our method and Piggyback are faster than PackNet, since PackNet needs to retrain weights, which is slower than training a mask. Finally, CPG requires roughly $10\times$ more training time than all the other methods, while 3 out of 5 tasks have lower accuracy than ours, as shown in \cref{tab:imagenet}.
\begin{table}[h]
\caption{Accuracy on ImageNet-to-Sketch dataset}
\scalebox{0.85}{
\begin{tabular}{cccccc}
\hline
Dataset & \multicolumn{1}{l}{Finetune} & \multicolumn{1}{l}{PackNet} & \multicolumn{1}{l}{Piggyback} & \multicolumn{1}{l}{CPG} & \multicolumn{1}{l}{Ours} \\ \hline
ImageNet & & 75.71 & 76.16 & 75.81 & 76.16 \\
CUBS & 82.83 & 80.41 & 81.59 & 83.59 & \textbf{83.81} \\
Cars & 91.83 & 86.11 & 89.62 & \textbf{92.80} & 92.14 \\
Flowers & 96.56 & 93.04 & 94.77 & 96.6 & \textbf{96.94} \\
WikiArt & 75.60 & 69.40 & 71.33 & \textbf{77.15} & 75.25 \\
Sketch & 80.78 & 76.17 & 79.91 & 80.33 & \textbf{81.12}\\ \hline\hline
\begin{tabular}[c]{@{}c@{}}Model Size\\ (MB)\end{tabular} & 554 & 115 & 121 & 121 & 146 \\ \hline
\end{tabular}}
\label{tab:imagenet}
\end{table}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{time_cost.pdf}
\caption{Training cost on ImageNet-to-Sketch datasets with various continual learning methods.}
\label{fig:train_cost_imagenet}
\end{figure}
\begin{table*}[t]
\centering
\caption{The accuracy on Twenty Tasks of CIFAR-100 with different initial tasks. The accuracy of individual tasks under these five settings differs slightly.
Nevertheless, the average accuracy is better than PackNet and Piggyback. Compared with CPG, better accuracy is achieved for three of the initial-task choices. }
\scalebox{0.75}{
\begin{tabular}{ccccccccccccccccccccc|c}
\hline
Initial & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 & \textbf{Avg} \\ \hline
1 & 67.2 & 78.0 & 78.8 & 78.4 & 85.6 & 82.6 & 80.2 & 83.4 & 82.6 & 89.4 & 88.4 & 80.6 & 83.2 & 80.8 & 52.8 & 73.2 & 67.8 & 72.6 & 88.0 & 92.0 & 79.2 \\ \hline
5 & 67.0 & 77.2 & 77.6 & 79.2 & 84.8 & 82.6 & 78.0 & 85.2 & 82.8 & 88.8 & 88.4 & 80.8 & 84.2 & 81.4 & 50.2 & 71.8 & 67.0 & 71.2 & 86.0 & 91.8 & 78.8 \\ \hline
10 & 67.8 & 77.2 & 76.6 & 79.4 & 82.8 & 81.6 & 80.8 & 83.4 & 82.0 & 88.6 & 88.2 & 81.2 & 85.0 & 80.2 & 53.4 & 73.8 & 68.6 & 74.4 & 87.2 & 91.2 & \textbf{79.3} \\ \hline
15 & 67.6 & 78.2 & 77.0 & 77.0 & 81.8 & 82.6 & 78.4 & 83.4 & 83.2 & 86.6 & 88.4 & 80.0 & 83.0 & 78.0 & 51.2 & 70.8 & 67.8 & 67.8 & 86.4 & 91.0 & 78.0 \\ \hline
20 & 66.8 & 75.6 & 77.2 & 76.6 & 85.4 & 81.0 & 79.0 & 84.0 & 82.2 & 87.4 & 86.4 & 79.0 & 83.8 & 80.4 & 49.0 & 70.8 & 66.4 & 72.0 & 88.2 & 93.6 & 78.2\\ \hline
\end{tabular}}
\end{table*}
\subsection{Results on twenty tasks of CIFAR-100}
Different from the ImageNet-to-Sketch setting, which relies on a model pre-trained on ImageNet, in this experiment we first train one task from scratch to serve as the backbone model. Afterward, we fix the backbone weights and learn task-specific masks for the remaining tasks sequentially. For a fair comparison, we follow the same configuration as CPG and select the same task as the initial task-1.
Note that, since this work focuses on continual learning without model expansion, all CPG results are reported without expansion, based on their open-source code.
\subsubsection{Accuracy and training time comparison} A phenomenon similar to the ImageNet-to-Sketch setting can be observed.
\cref{tab:cifar100_20} shows the accuracy and training cost of these methods. Our method achieves consistently better results than Piggyback and PackNet. In addition, compared with CPG, we also achieve better results on most tasks. In terms of training time, our method is around $10\times$ faster than CPG.
Considering both accuracy and training time, our method achieves effective knowledge transfer based on a weak backbone model trained on only a single 5-class task. It is worth noting that the initial task indeed influences the performance of the remaining tasks, since we fix the backbone weights throughout. In the next section, we show that, even with different initial tasks, our method learns a mask that achieves a good knowledge representation for each new task.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{time_cost_cifar100.pdf}
\caption{Training cost on twenty tasks of CIFAR-100 with various continual learning methods.}
\label{fig:init_tasks}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth]{CUBS.pdf}\\
\includegraphics[width=0.95\linewidth]{cars.pdf}\\
\includegraphics[width=0.95\linewidth]{flower.pdf} \\
\includegraphics[width=0.95\linewidth]{wikiart.pdf}\\
\includegraphics[width=0.95\linewidth]{sketches.pdf}
\caption{The ratio of two mask types visualization on ResNet50 for ImageNet-to-Sketches dataset.}
\label{fig:arch_visual}
\end{figure}
\subsection{Ablation Study and Analysis}
\subsubsection{Kernel-wise, soft mask and softmax trick}
We study the individual effect of the three main techniques of the proposed method on the ImageNet-to-Sketch setting. As shown in~\cref{tab:Ablation}, we denote by `Piggyback-Soft' the variant that replaces the 0 values in Piggyback's binary mask with scaling factors, and by `Ours-Softmax' the variant that only uses the proposed softmax trick to generate the binary mask. We further use `Ker-wise' and `Ele-wise' to denote kernel-wise and element-wise masks, respectively. `Ours-Softmax' achieves better results than Piggyback, which indicates that the fully differentiable mask learning with the softmax trick yields better optimization, since no gradient estimation is involved. In addition, `Piggyback-Soft' achieves better results than `Piggyback', proving that adding scaling factors to zeroed-out weights indeed improves the task-specific representation ability. Also, changing the mask to kernel-wise has only a minor, negligible influence on performance. Finally, `Ours-Full' combines all three techniques and shows the best overall performance on all datasets.
\begin{table}[h]
\caption{The ablation study on the proposed method}
\label{tab:Ablation}
\scalebox{0.8}{
\begin{tabular}{cccccc}
\hline
Method & CUBS & Cars & Flowers & Wikiart & Sketch \\ \hline
Piggyback & 81.59 & 89.62 & 94.77 & 71.33 & 79.91 \\
Piggyback - Ker-wise & 81.76 & 89.57 & 94.88 & 70.30 & 79.95 \\
Piggyback - Soft & 82.26 & 91.17 & 95.85 & 73.12 & 80.22 \\
Ours - Softmax & 82.86 & 91.71 & 96.67 & 74.06 & 80.70 \\
Ours - Ele-wise & 83.79 & 92.18 & 96.90 & 75.0 & 81.10 \\ \hline
Ours - Full & 83.81 & 92.14 & 96.94 & 75.25 & 81.12 \\ \hline
\end{tabular}}
\end{table}
\subsubsection{The Effect of Different Initial Tasks}
\label{sec:init_task}
Different from the ImageNet-to-Sketch setting, which heavily relies on a strong pre-trained model, in the Twenty Tasks of CIFAR-100 setting we randomly select a task and train it from scratch as the initial model.
To explore how the initial task affects the performance of the remaining tasks, we randomly select five different tasks as the initial task, as shown in~\cref{fig:init_tasks}. The accuracy of these five settings on each individual task is therefore slightly different, since they involve different levels of domain shift.
Nevertheless, the average accuracy is better than PackNet and Piggyback. Compared with CPG, better accuracy is achieved for three of the initial-task choices, which indicates that the proposed method can balance the domain shift across different initial tasks.
\subsubsection{Architecture and Soft Mask Visualization}
\cref{fig:arch_visual} visualizes the ratio of `1' values in the binary mask and of the scaling factors. Two observations hold across all tasks: 1) within a task, high-level layers need more changes than low-level layers, especially the last convolutional layer; 2) the scaling-factor ratio appears to reflect the difficulty of the domain shift; for example, the largest dataset, WikiArt, needs more changes than the smallest dataset, Flowers.
\section{Conclusion}
In this work, we propose a novel kernel-wise soft mask method for multi-task adaptation in the continual learning setting, which learns a hybrid binary and real-valued soft mask of a given backbone model for new tasks.
Comprehensive experiments on the ImageNet-to-Sketch dataset and twenty tasks of CIFAR-100 indicate that, without weight regularization or model expansion, the proposed method runs $\sim 10\times$ faster than the state-of-the-art CPG-based learning method with similar accuracy.
In addition, we analyze the effect of different backbone models. Even with a weak backbone model, the proposed method can still learn reasonable representations for new tasks. We show that we achieve better results than related prior mask-based methods.
\clearpage
We report a study of two-dimensional, collinear,
spin-S antiferromagnets (AFMs) and ferromagnets (FMs) aimed at
better understanding the magnetic behavior of both, and in particular
the role of quantum versus classical fluctuations in these systems.
A number of quasi-$2D$ experimental systems, including spin-$\frac{1}{2}$ AFMs
La$_2$CuO$_4$ and Sr$_2$CuO$_2$Cl$_2$ \cite{spin1half,spin1half:RCvsQC} ,
spin-1 AFMs La$_2$NiO$_4$ and K$_2$NiF$_4$ \cite{spin1},
spin-$\frac{5}{2}$ AFM Rb$_2$MnF$_4$ \cite{spin5half} as well as
spin-$\frac{1}{2}$ FMs such as K$_2$CuF$_4$ \cite{dejongh} are
well described over a range of temperatures by the $2D$ Heisenberg model
\begin{equation}
H = J\sum_{\langle ij\rangle} {\bf S}_i {\bf S}_j
\label{H}
\end{equation}
on a square lattice, which we study by high temperature
series expansion methods. We expect much of our results to
apply to other lattices having collinear long-range order at $T=0$.
Let us begin by considering the relevant energy scales in the problem.
One important energy scale for both AFMs ($J>0$) and FMs ($J<0$) is
the $T=0$ spin stiffness $\rho_s$, defined as rigidity with respect to
a twist in the magnetic structure.
Quantum $1/S$ corrections
make this quantity different for AFMs and FMs with the same value of
spin; for model (\ref{H}), the order of magnitude is however the same,
$\rho_s^{\rm AFM}\sim JS^2$, $\rho_s^{\rm FM}=JS^2$
(note that the FM value is exact).
As shown by Chakravarty, Halperin, and Nelson \cite{CHN} for
AFMs, and by Kopietz and Chakravarty \cite{KC} for FMs,
the asymptotic $T\to0$ magnetic behavior for these models obeys scaling and
depends on only two dimensionful parameters:
the energy scale $\rho_s$, and a quantity which sets the
overall length scale and can be obtained from the $q\to0$ limit of
the spin wave dispersion $\epsilon(q)$. For AFMs, $\epsilon(q)$ is
linear and one defines the $T=0$ spin wave velocity as
$c=\lim_{q\to0}\epsilon(q)/q\sim JSa$.
For FMs, $\epsilon_q$ is quadratic and one defines the $T=0$ spin wave
stiffness $\rho = \lim_{q\to 0} \epsilon(q)/q^2\sim JSa^2$.
Note that the spin wave stiffness $\rho$
has the dimension of $\mbox{energy}\times\mbox{length}^2$, while
the spin stiffness $\rho_s$ has the dimension of energy.
For the Heisenberg model, the spin wave spectrum is
well-defined for all $q$ and its upper bound, reached at the boundary
of the Brillouin zone where $q\sim 1/a$,
can be estimated as
$\Lambda \sim c/a \sim JS$ for the AFMs, or
$\Lambda \sim \rho/a^2 \sim JS$ for the FMs, i.e. not only
$\rho_s$, but also $\Lambda\sim JS$ has the same order of magnitude in
AFMs and FMs.
Whereas the $T\to0$ behavior depends on $\rho_s$ and $c$ for AFMs,
or $\rho_s$ and $\rho$ for FMs,
the $T\to\infty$ behavior (the Curie-Weiss law) depends on
$\rho_{\rm CL}=JS(S+1)$ and the lattice constant $a$.
In what follows, we show
that this simple observation is part of a general picture where
the low temperature behavior, which may contain several scaling regimes,
at $T\sim \Lambda$ crosses over to
the high-temperature behavior,
which may also contain several scaling regimes.
A similar crossover in 3D occurs {\em below} the long
range ordering temperature (which is zero in 2D), and was studied by
Vaks, Larkin, and Pikin \cite{Vaks:Larkin:Pikin}.
In model (\ref{H}), the ratio $\Lambda/\rho_s\sim 1/S$ can be
made arbitrarily small by increasing $S$, but it cannot
be made arbitrarily large as $S\ge \frac{1}{2}$. Nevertheless, the case
$\Lambda/\rho_s\gg 1$ is of great interest.
First, there exist models where
$\rho_s$ goes to zero while $\Lambda$ does not,
in which case $\Lambda/\rho_s\gg 1$ applies rigorously;
an example of such a system is the two-layer
Heisenberg model \cite{Sandvik:Scalapino,SCS} near the critical point
where the N\'eel AFM long-range order vanishes.
Second, for the $S=1/2$ AFM model (\ref{H}),
the ratio $\Lambda/\rho_s \sim 10\gg1$ is quite large; furthermore,
there exists evidence
\cite{CSY,SGS,Sandvik:Scalapino,EGSS,SCS}, first pointed out by
Chubukov and Sachdev, that at intermediate temperatures this model
is indeed in the quantum critical limit $\Lambda/\rho_s\gg 1$.
\newpage
\phantom{.}
\vfill
\begin{figure}
\centerline{\epsfxsize=3.0in\epsfbox{pd.ps}}
\caption{A phase diagram of spin-S quantum antiferromagnets (AFMs)
and ferromagnets (FMs); see Table I for the corresponding scaling and
crossover expressions for the correlation length.
All regime boundaries are gradual crossovers rather
than phase transitions, and their positions can only be defined within
numerical factors of order unity.
The behavior in the $T/\Lambda\gg 1$ region
is in the universality class of 2D classical magnets,
where magnetic properties of AFMs and FMs for the same value of $S$ are
the same near their respective ordering wavevectors, and depend on
$T/\rho_{\rm CL}$, where $\rho_{\rm CL}=JS(S+1)$. The classical
behavior includes
Curie-Weiss regime ($T\gg \max(\rho_s,\Lambda)$) and
classical scaling regime ($\Lambda\ll T\ll \rho_{\rm CL}$), which
are separated by a fairly wide {\em classical crossover}
regime for $\Lambda \ll T\sim \rho_{\rm CL}$,
where scaling does not hold, but
nevertheless $\xi_{\rm AFM}\approx \xi_{\rm FM}\approx a
\psi_{\rm CL}(T/JS(S+1))$.
In the region $T/\Lambda\ll 1$
quantum effects are important and therefore AFMs and FMs behave
differently. For the AFMs, this region is in the universality class of
the QNL$\sigma$ model. Here these scaling regimes are found
for both AFMs and FMs:
renormalized classical (RC) regime for $T\ll \min(\rho_s,\Lambda)$;
quantum critical (QC) regime for $\rho_s\ll T\ll \Lambda$; and a wide
quantum
crossover regime for $T\sim \rho_s\ll \Lambda$, where the correlation
length remains a universal function (different for AFMs and FMs)
of the thermal de Broglie
wavelength for spin waves ($\lambda_T=c/T$ or
$\lambda_T=(\rho/T)^{1/2}$) and the ratio $T/\rho_s$.
}
\label{fig:pd}
\end{figure}
\vfill
\phantom{.}
\newpage
\phantom{.}
\vfill
\begin{center}
\begin{tabular}{|l|l|}
\hline
Regime & Correlation Length \\
\hline
\hline
Curie-Weiss & $\xi_{\rm AFM} = \xi_{\rm FM}\alt a$ \\
$T \gg \max(\rho_{\rm CL},
\Lambda)$ & Properties depend on $T/\rho_{\rm CL}$ \\
\hline
Classical Crossover & $\xi_{\rm AFM} = \xi_{\rm FM}$ \\
$T\sim \rho_{\rm CL}\gg
\Lambda$ & $\ \ = a \psi_{\rm CL}(T/\rho_{\rm CL})$ \\
\hline
Classical & $\xi_{\rm AFM} = \xi_{\rm FM}
\sim a (T/\rho_{\rm CL}) $ \\
$\rho_{\rm CL}\gg T \gg
\Lambda$ & $\ \ \times \exp(2\pi\rho_{\rm CL}/T)$ \\
\hline \hline
Quantum Critical & $\xi_{\rm AFM} \sim c/T$, \\
$\rho_s\ll T\ll \Lambda$ & $\xi_{\rm FM} \sim
\left(\rho/T\log(T/\rho_s)\right)^{1/2} $ \\
\hline
Quantum Crossover & $\xi_{\rm AFM} = (c/T) \,
\phi_{\rm AFM}(T/\rho_s)$, \\
$T\sim \rho_s \ll \Lambda$ & $\xi_{\rm FM} = (\rho/T)^{1/2}
\phi_{\rm FM}(T/\rho_s)$ \\
\hline
Renormalized Classical & $\xi_{\rm AFM} \sim (c/\rho_s) \,
\exp(2\pi\rho_s/T)$, \\
$T \ll \min(\rho_s,\Lambda)$ & $\xi_{\rm FM} \sim (\rho T/\rho_s)^{1/2}
\exp(2\pi\rho_s/T)$ \\
\hline
\end{tabular}
\end{center}
\noindent {\small
{\sc Table I}. Correlation length in the scaling and crossover regimes
shown in the phase diagram of Fig.\protect\ref{fig:pd}, after
Refs.\cite{Luscher:Weisz,CHN,KC,CSY}, and this work. The sign
$\sim$ indicates the presence of a numerical prefactor. The functions
$\phi_{\rm AFM}$ and $\phi_{\rm FM}$ are universal; the function
$\psi_{\rm CL}$ is not, for instance,
it explicitly depends on the ratio of the next-nearest and
nearest-neighbor exchange couplings.
}
\bigskip
\vfill
\phantom{.}
\newpage
\section{Curie-Weiss regime}
At high temperatures $T\gg \max(\rho_s,\Lambda)$,
the spins are weakly correlated and one can keep only a few leading terms
in the high temperature series
expansion. For model (\ref{H}),
this temperature range corresponds to $T\gg \max(JS,JS^2)$.
The corresponding
mean-field theory yields the familiar Curie-Weiss law.
\section{Renormalized Classical Regime}
In the opposite limit of $T\ll \min(\rho_s,\Lambda)$,
the behavior is of the
renormalized classical (RC) type. This regime was studied in detail in
Refs.\cite{CHN,KC,CSY},
where magnetic properties were calculated by mapping
the system onto the classical nonlinear $\sigma$-model and setting the
momentum-integration cutoff to be
proportional to the thermal de Broglie wave vector $q_T$, defined such
that $\epsilon(q_T)\sim T$.
The predicted $\xi(T)$ has the same form for AFMs ($J>0$) and FMs ($J<0$),
when expressed in terms of their respective thermal
de Broglie wavelengths
$\lambda_T=1/q_T$:
\begin{equation}
\xi_{\rm RC} =\frac{e}{8} \lambda_T \frac{T}{2\pi\rho_s}
\exp\left(\frac{2\pi\rho_s}{T}\right),
\label{xi:rc}
\end{equation}
(here the exact value of the prefactor is obtained from the correlation length
of the classical $O(3)$ nonlinear-$\sigma$ model in the minimal subtraction
scheme \cite{HN,JLG}).
However, not only the values of $\rho_s$ differ for FMs and AFMs, but
also $\lambda_T$ has qualitatively different temperature dependences:
\begin{equation}
\lambda_T = \left\{
\begin{array}{ll}
c/T & \ \ \mbox{for AFMs} \\
\sqrt{\rho/T} & \ \ \mbox{for FMs}
\end{array}
\right. .
\label{lambdaT}
\end{equation}
An important question is up to what
temperature this RC expression should hold.
We recall that
when $T$ is larger than the upper bound
of the $T=0$ spin wave spectrum, $\Lambda$, there can be no spin waves with
wavelengths that are comparable to the thermal de Broglie wavelength.
Since such spin waves are important for RC theory, one would expect it
to be applicable only for
$T\ll \min(\Lambda,\rho_s)$, which for model (\ref{H}) translates into
$T\sim \min(JS,JS^2)$ (here, we ignore numerical factors of order unity).
At higher temperatures new physics must arise.
The Curie-Weiss regime occurs for
$T\gg \max(\rho_s,\Lambda)$ and the RC regime for
$T\ll \min(\rho_s,\Lambda)$. These two regimes exist for all $S$.
We now turn to the discussion of the
regimes where the temperature is larger than one of the
scales $\rho_s$ or $\Lambda$,
but smaller than the other.
\begin{figure}
\centerline{\epsfxsize=3.0in\epsfbox{xi-comp.ps}}
\caption{A semi-log plot of $\xi/a$ vs. $T/(JS(S+1))$
for AFMs (solid symbols) and FMs (open
symbols) for $S=1/2$ (circles) and $S=5/2$ (diamonds).
We also plot the data from Ref.\protect\cite{Luscher:Weisz} for the classical
$S\to\infty$ model (solid line).
For $S=1/2$, AFM and FM correlation lengths are different and
neither agrees with the $S\to\infty$ limit.
For $S=5/2$, AFM and FM
correlation lengths agree {\em both} with each other and
with the $S\to\infty$ model.
The data for $S=5/2$ is plotted as representative for
large spin models; for all studied
spins $S>1$, the behavior is similar although the agreement between
$\xi_{\rm AFM}$ and $\xi_{\rm FM}$ for $S=3/2$ and $S=2$
is not as striking as for $S=5/2$.
This data collapse provides evidence that any quantum effects, which are
expected to be different for AFMs and FMs, become unimportant already
for fairly small spins $S>1$,
in agreement with the conjecture due to Ref.\protect\cite{ESSGB}.
}
\label{fig:xi}
\end{figure}
\section{Classical and Classical Crossover Regimes}
In our earlier work in collaboration
with Greven and Birgeneau \cite{ESSGB}, we proposed a scaling
crossover scenario to describe substantial deviations from RC behavior
observed in the neutron scattering experiments for AFMs
with spin-one and larger. This scenario calls for a crossover to the
behavior of the $S\to\infty$ classical system when
spin wave energies for all wavevectors in the Brillouin zone
become smaller than the
temperature, i.e. for $T\agt \Lambda$.
The correlation length at this RC to classical boundary, obtained from
Eq.~(\ref{xi:rc}),
is exponentially large for large spin,
\begin{equation}
\left. \frac{\xi}{a}\right|_{T\sim\Lambda}
\sim \frac{\exp(S)}{S}\gg 1 \ \ \mbox{for $S\gg 1$}.
\end{equation}
In \cite{ESSGB}, the arguments in favor of RC to classical crossover
were based on data collapse for large
spin when plotted versus $T/(JS(S+1))$. The studied values of spin were
small enough ($S=1/2$ to $S=5/2$) to make the results sensitive to the
choice of the renormalized temperature as $T/(JS(S+1))$ or
$T/JS^2$.
Here, we resolve this arbitrariness by studying antiferro- and
ferromagnets simultaneously. AFMs and FMs have the same
classical limit, but the quantum effects differ and therefore the
difference between AFMs and FMs can be used as a probe for the
strength of quantum effects. It turns out that for any $S>1$,
in the temperature range of interest ($\xi\sim 10^1$),
the correlation length
calculated by high temperature expansions is nearly the same
for AFMs and FMs with the same value of $S$ (Fig.\ref{fig:xi}).
Therefore, it must be determined
by classical rather than quantum physics.
We emphasize that such an argument does not rely on any particular way
of collapsing the data,
because it is drawn from comparisons of AFM
and FM models for the same spin.
On the other hand, for $S=1/2$ the difference between $\xi_{\rm AFM}$
and $\xi_{\rm FM}$ is always large and temperature-dependent, which indicates
the importance of quantum physics for this value of $S$. Spin-one appears to
be a borderline case.
Having established that in the temperature range
$T\gg\Lambda$ finite-spin models
behave similarly to the $S\to\infty$ classical magnet,
we now comment on the behavior of the latter. The
asymptotic low-temperature behavior of the classical 2D Heisenberg
antiferromagnet is given by \cite{JLG}:
\begin{equation}
\frac{\xi_{\rm CL}}{a} = \frac{e}{8}\,
\frac{e^{-\pi/2}}{\sqrt{32}}\,
\frac{T}{2\pi \rho_{\rm CL}}\,
\exp\left(\frac{2\pi\rho_{\rm CL}}{T}\right),
\label{classical}
\end{equation}
where $\rho_{\rm CL}=JS(S+1)$.
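The smallness of the prefactor is easy to check numerically (with $t=T/\rho_{\rm CL}$ and lattice units $a=1$):

```python
import math

def xi_classical(t):
    """Asymptotic classical correlation length xi/a of Eq. (classical),
    as a function of t = T / rho_CL."""
    prefactor = (math.e / 8.0) * math.exp(-math.pi / 2.0) / math.sqrt(32.0)
    return prefactor * (t / (2.0 * math.pi)) * math.exp(2.0 * math.pi / t)

prefactor = (math.e / 8.0) * math.exp(-math.pi / 2.0) / math.sqrt(32.0)
print(prefactor)         # ~0.0125: the small prefactor delays the asymptotics
print(xi_classical(0.6))
print(xi_classical(0.5))
```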
It turns out that because of the very small
numerical value of the prefactor ($\sim 0.01$),
this formula becomes accurate only for very low temperatures,
$T/\rho_{\rm CL} \alt 0.6$ ($\xi \agt 100$).
We label the intermediate temperature range where neither
Curie-Weiss nor classical scaling behavior apply the
{\em classical crossover} regime. In this regime,
neither large nor small-temperature asymptotic expressions describe
the correlations accurately, nevertheless, all properties of the model
are nearly the same for AFMs and FMs, and depend on $T$ through
$T/\rho_{\rm CL}$ only.
\section{Quantum Critical and Quantum Crossover Regimes}
We call {\em quantum critical}
that behavior which can formally be obtained for $\rho_s\ll T\ll \Lambda$.
For the antiferromagnets, this regime is the asymptotic {\em quantum
critical} (QC) regime \cite{CHN,CSY}.
The correlation length $\xi$ is proportional to
$\lambda_T=c/T$ with corrections that depend
universally on $T/\rho_s$, and to leading order the dominant
frequency scale for spin fluctuations is $\bar{\omega}\sim T$.
For the FMs the spin wave spectrum is quadratic, which causes
infrared ($q\to 0$) log divergences for
$\rho_s\ll T\ll \Lambda$.
Such divergences (not present for AFMs) are cutoff by $\rho_s$, and
lead to the following multiplicative correction to the correlation length:
\begin{equation}
\xi_{\rm FM} \sim \left(\rho/T\right)^{1/2} \log^{-1/2}
\left(T/\rho_s\right),
\label{QC:FM}
\end{equation}
in either the sigma model or the Schwinger boson formalism.
Note that such a behavior cannot be the true quantum critical
behavior because the limit
$\rho_s\to0$ leads to singular $\xi$ at
finite $T$. Therefore, Eq.(\ref{QC:FM}) must fail as $\rho_s\to 0$,
and the $T=0$ quantum ferromagnet-paramagnet phase transition is likely
to belong to a different universality class \cite{Sachdev:conserved}.
Nevertheless, lacking a better
name and for compatibility with the AFM nomenclature,
we call this behavior a {\em
ferromagnetic quantum critical} regime.
The applicability of the QC description to model (\ref{H}) for $S=1/2$
AFM is widely discussed in the literature
\cite{spin1half:RCvsQC,CSY,SGS,Sandvik:Scalapino,EGSS}. A similar
analysis for the ferromagnets requires detailed calculations of their
properties in the QC regime, and is beyond the framework of this paper.
\section{Conclusion}
In this paper, we present a complete phase diagram of spin-S
antiferromagnets (AFMs) and ferromagnets (FMs).
We show that comparing AFMs and FMs allows one to elucidate
crossover effects from the quantum regimes (including the renormalized
classical regime where, despite its name, quantum fluctuations are
essential) to the purely classical regimes, where AFMs and FMs exhibit
identical behavior near their respective ordering wavevectors. For a
detailed description of the phase diagram and the proposed new
regimes, we refer the reader to Fig.\ref{fig:pd} and Table 1.
For the antiferromagnets,
both our series expansion calculations and the
neutron data \cite{spin1half:RCvsQC,ESSGB} suggest that
the regime boundaries in Fig.\ref{fig:pd} are positioned
such that the universal $T\ll \Lambda$
scaling theory, free of any lattice corrections,
is not obeyed for spin-one and larger at any numerically or
experimentally accessible values of the correlation length.
The data also highlight that the magnitude of spin correlations
disagrees with the existing RC predictions even for $S=1/2$
at all accessible temperatures, whereas
$\xi(T)$ was found \cite{spin1half:RCvsQC} to agree
remarkably well with the RC theory.
Our results suggest that an accurate
analytical theory for $S\geq1$
in the experimentally relevant temperature range
can be constructed by taking into account
leading quantum corrections about the $S=\infty$ limit on a lattice.
The important step here would be to obtain an analytic approximation
which is valid in the {\it classical crossover regime}.
On the other hand,
any theory based on a purely continuum
description, such as the QNL$\sigma$ model without
lattice corrections, is clearly
inadequate for any spin larger than $S=1/2$.
We are grateful to R.J. Birgeneau, S. Chakravarty,
A.V. Chubukov, M. Greven, Th. Jolic\oe ur, D. Pines,
S. Sachdev, and S. Sondhi for many useful discussions.
This work is supported by the NSF Grant No. DMR 93-18537.
One of us (A.S.) is supported by an Alfred P. Sloan Research Fellowship.
\section{Introduction}
The hadron and lepton phase of early-universe cosmology, spanning the temperature range between about 175 MeV and 1 MeV (between the quark-gluon plasma and cosmonucleosynthesis), has received only moderate attention in the literature~\cite{Rafelski:2013yka}, despite being very rich in terms of the number of particles and interactions present, and despite the underlying physics being relatively well known. This is probably because the only relic particles left from that era form the cosmic neutrino background~\cite{Rafelski:2013qeu}, which there is at present no hope to detect. In spite of the dearth of direct messengers from that era, it is important to pursue its study for future precision work in cosmology.
Particularly, there are few studies of the earlier part of the interval, just after exiting the quark-gluon plasma around 175-150 MeV~\cite{Brambilla:2014jmp}, when a significant fraction of the universe's entropy is carried by strongly interacting particles such as pions; only as the temperature drops below about 100 MeV do their decays $\pi^0\to \gamma\gamma$, $\pi^-\to \mu^- \bar{\nu}_\mu$, etc. add this entropy to that carried by leptons and photons.
Much information on the pion phase is available from theoretical studies pertaining to the field of Relativistic Heavy Ion Collisions. Particularly, transport coefficients have been well calculated in recent years~\cite{Torres-Rincon:2012sda,Davesne:1995ms,Prakash:1993kd,Mitra:2014dia}
and can be applied to early-universe physics. This is our focus in the present work. The particular problem that we will address is entropy production. Though most treatments assume that the universe's expansion is adiabatic and always at equilibrium, this is just the simplest hypothesis, and one may well consider departures from that equilibrium.
One can argue that the rates of the particle-physics processes characterized in the Standard Model are larger than the expansion Hubble factor $H=\dot{a}/a$, so the hypothesis of chemical and thermal equilibrium is reasonable, and the universe expands and cools down adiabatically. We of course concur with the analysis. But one cannot discard large past fluctuations in temperature or other quantities that have not survived to our days precisely because of the large equilibration rates damping them. So there is always a level of hypothesis involved.
What is solid information is that the fluctuations in the Cosmic Microwave Background (CMB) are measured and found small ($\nabla T/T<10^{-5}$). So one can opt for evolving large initial-state inhomogeneities so they are this small at the time of recombination, or for considering inhomogeneities that are so small in size as to evade observation in the CMB (cosmological versus microscopic inhomogeneities).
Further, since we will consider a radiation-dominated epoch, no structure-formation process is involved~\cite{Labini:2011tj}.
During most of this time, the largest zeroth-order contribution to the total entropy is due to the (quasi)massless species (photons, neutrinos and electrons), as recalled below. Yet in a hot gas transport phenomena are diffusive, and the typical transport coefficient (to which entropy production and the relaxation rate are proportional) scales with the inverse cross-section. The case in point for this study is the thermal conductivity, $\kappa \propto 1 / \sigma$. This means that the largest inhomogeneities at a given stage of the universe's evolution will be found in the subsystem affected by the largest cross-sections.
In the particle phase, with components that are photons, leptons, and pions, the largest entropy production is thus likely to take place in the pion gas. This is because pion cross-sections are dictated by the strong QCD interactions and are in the 10-millibarn range, far larger than the (Debye-screened, electromagnetic) lepton interactions.
Heavier hadrons are barely present already shortly after the decay of the quark-gluon plasma; for example, the kaon multiplicity~\cite{Abelev:2013haa} is down by at least an order of magnitude with respect to the pion multiplicity. Therefore, though in principle kaon inhomogeneities can diffuse and produce entropy, we will ignore the phenomenon altogether.
Even setting aside inhomogeneities, for $T>80$ MeV pions actually carry a larger portion of the total homogeneous-gas entropy than photons
(though not larger than that of leptons) because of their multiplicity, as will be shown below in figure~\ref{figure:gs} (bottom plot).
Thus, there are two reasons to explore entropy and entropy production in the pion gas itself. In the high temperature end just after hadronization of the quark-gluon plasma, pions are large carriers of entropy. And second, they are the ones that can support the largest inhomogeneities, if any are present, because they are the ones opposing diffusion most.
With this motivation, our concrete study will be to address the relaxation of a thermal inhomogeneity at temperature $T+\delta T$ towards the surrounding environment value of $T$, ignoring other quantities that may separate from equilibrium such as momentum distributions or chemical inhomogeneities. Then we will calculate the subsequent entropy production to have reference values that may be useful in future studies.\\
\section{Entropy in the homogeneous Friedmann-Robertson-Walker cosmology}\label{sec:equilibrium}
\subsection{System of equations for universe evolution}
In this section we quickly review the standard statistical physics in the spatially flat ($\kappa=0$) \cite{Planck} homogeneous cosmos that serves as background for the later study of inhomogeneities. In this case the two independent Einstein equations give rise to the Friedmann equation:
\be \label{frlweq}
\left(\frac{\dot{a}}{a}\right)^2 = \frac{8\pi G}{3}\rho\ .
\ee
for the evolution of the scale factor $a(t)$, and the balance equation
\ba \label{e2}
\frac{d\rho}{dt}=-\frac{3\dot{a}}{a}(\rho +P)\,,
\ea
where $P$ is the total pressure and the energy density $\rho$ is the sum of the partial energy densities for the various species
\be \label{edensity}
\rho = \rho_\gamma + \rho_{\nu,\bar{\nu}} + \rho_{e^{\pm}} +\rho_{\mu,\bar{\mu}} + \rho_{\pi^{\pm},\pi^0} + \rho_{N,\bar{N}} +\dots
\ee
In table \ref{table:species} we summarize the main interaction channels
in the temperature range we are discussing, $1\,\text{MeV} <T<175$ MeV. In particular, for entropy considerations, nucleons (already non-relativistic) and dark matter are not important. Pions and muons behave as radiation in the upper end of the temperature range.
This can be seen for each species $i$, with degeneracy $g_i$, contributing
\be \label{rho}
\rho_i =\frac{g_i}{(2\pi)^3}\int d^3 p\, E\,f_i({\bf r},{\bf p},t)\,,
\ee
because in thermal equilibrium, the function $f_i$ (usual Fermi-Dirac or Bose-Einstein distribution)
\be \label{distri}
f_i({\bf r},{\bf p},t)=\frac{1}{e^{(p_\alpha U^\alpha({\bf r},t) -\mu_i ({\bf r},t))/T({\bf r},t)} \pm 1}\,,
\ee
suppresses the contribution by $e^{-m_i/T}$. Here we have considered the more general case of local instantaneous thermodynamic equilibrium which will be useful later. As usual, $p^\alpha$ and $U^\alpha$ are the components of the particle four-momentum $p^\alpha=(E,{\bf p})$ and the fluid four-velocity $U^\alpha=\gamma_{\bf V}(1,{\bf V})$ respectively, with $\gamma_{\bf V}=(1-{\bf V}^2)^{-1/2}$. In a comoving frame, $U^\alpha=(1,0,0,0)$ and thus $p_\alpha U^\alpha =E$. We have determined the chemical potentials $\mu_i$
from the current abundances and the scale factor $a(t)/a({\rm today})$. As they are tiny we do not quote them here.
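Each term in this sum reduces, in the comoving frame where $p_\alpha U^\alpha = E$, to a one-dimensional quadrature over $\vert{\bf p}\vert$. A minimal Python sketch of that evaluation follows (this is not the code used for the figures; the grid size, cutoff and check temperature are illustrative choices, and the tiny chemical potentials are set to zero):

```python
import math

def energy_density(m, T, g, eta, mu=0.0, n=4000):
    """Energy density of one species, Eq. (rho) in the comoving frame:
    rho_i = g/(2 pi^2) Int_0^inf dp p^2 E f(E),
    f = 1/(exp((E - mu)/T) + eta), eta = +1 Fermi-Dirac, -1 Bose-Einstein.
    Natural units (hbar = c = k_B = 1); energies in MeV give rho in MeV^4."""
    pmax = 30.0 * max(T, m)          # integrand is exponentially suppressed beyond
    dp = pmax / n
    acc = 0.0
    for i in range(1, n + 1):        # simple midpoint rule
        p = (i - 0.5) * dp
        E = math.hypot(p, m)
        acc += p * p * E / (math.exp((E - mu) / T) + eta)
    return g / (2.0 * math.pi ** 2) * acc * dp

# Consistency check against the Stefan-Boltzmann limit for photons (m = 0, g = 2):
# rho_gamma = (pi^2/30) g T^4.
T = 150.0
rho_gamma = energy_density(0.0, T, 2, -1)
rho_sb = math.pi ** 2 / 30.0 * 2 * T ** 4
```

With $\eta=+1$ for leptons and $\eta=-1$ for pions and photons, summing such calls over the species of Eq.~(\ref{edensity}) reproduces the total $\rho(T)$ used below.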
\begin{table}[htbp]
\begin{tabular}{|c @ {\hspace{5mm}} c @ {\hspace{5mm}} c @ {\hspace{0mm}} |}
\hline
$\pi^0\leftrightarrow \gamma\g$ & $NX\leftrightarrow NX$ & $e\pi\leftrightarrow e\pi$ \\
$\pi\pi\leftrightarrow \pi\pi$ & $N\bar{N}\leftrightarrow \gamma\g\ , \pi\pi\dots$
&$e\, e\leftrightarrow e \,e$\\
$\pi^+\leftrightarrow\mu^+\nu_{\mu}$ & $\mu^+\leftrightarrow e^+\nu_e\overline{\nu}_\mu$ & $e\gamma\leftrightarrow e\gamma$ \\
$\pi\pi\leftrightarrow\gamma \gamma$&$\mu\pi\leftrightarrow\mu\pi$& $\gamma\g\leftrightarrow e^-e^+$ \\
$\nu_e \overline{\nu}_e\leftrightarrow e^+ e^-$ & $\mu^-\gamma\leftrightarrow\mu^-\gamma$& $\mu\m\leftrightarrow\mu\m$ \\
$\nu_\mu\overline{\nu}_\mu\leftrightarrow \mu^+\mu^-$ & $\gamma\g\leftrightarrow\mu^-\mu^+$ & $\mu e\leftrightarrow \mu e$
\\ \hline
\end{tabular}
\caption{Main processes in the temperature interval $175\,\text{MeV}-1\text{MeV}$. Except when specified,
the reactions can be written for all charge combinations,
e.g. $\pi$ denoting either of $\pi^+$, $\pi^-$ or $\pi^0$, and $\mu$ both muon and antimuon.
(We omit additional reactions with much smaller branching fractions, such as the Dalitz decay $\pi^0\to\gamma\g^*\to e^+ e^-\gamma$ at $O(1\%)$ \cite{part}.)
In this article we focus on the temperature range where $\pi\pi\leftrightarrow\pi\pi$ dominates transport.
} \label{table:species}
\end{table}
Summing Eq.~(\ref{rho}) over species yields $\rho(T)$ from which the temperature evolution
can be extracted as
\be \label{eqforT}
\frac{dT}{dt}= -\frac{3\dot{a}}{a}(\rho +P)\frac{dT}{d\rho}\, .
\ee
Eqs.~(\ref{frlweq}) and~(\ref{eqforT}) can be solved numerically, for example by using a Runge-Kutta algorithm.
The energy density is
computed from the numerical integration of Eq.~(\ref{rho}) at each temperature, and the pressure is similarly obtained from the spatial trace of the energy-momentum tensor $\delta_{jk} T^{jk}=3P$,
which results in a sum over partial pressures of all species,
\be \label{pressure}
P = \sum_i P_i =\sum_i \frac{g_i}{3(2\pi)^3}\int d^3 p\, f_i({\bf r},{\bf p},t) \frac{\vert {\bf p}\vert^2}{E}\, .
\ee
Density and pressure decrease monotonically with $t$, while the scale factor $a(t)$ increases monotonically; any of them may be used as a clock for further computations. We will set as origin of time the exit from the QGP at the top of the temperature interval,
$0\equiv t_{T=175{\rm MeV}}$,
where we set $a(0)\equiv 1$.
With the solutions at hand we can backtrack from the time of nucleosynthesis (a well-studied period~\cite{Burles:2000zk}) to the pion gas at temperatures two orders of magnitude higher, since the entire particle content in this epoch is well known. We do not resort to usual textbook power-law approximations since simple computer codes produce the
(numerically) exact solutions for this one-dimensional evolution-equation set. For computer accuracy, it is necessary to set a unit system that minimizes the number of large powers. We often take $(100\,\text{MeV})$ for temperature, energy and chemical potential ($k_B=1$) and $\text{peV}^{-1}$ for time and space ($c=1$). With this, Newton's constant $G$ turns out to be $1/1.44\,(100\,\text{MeV})^2$. Dimensionally, time is an inverse energy, so that
$1\,\text{s} \rightarrow 1.52\times 10^{3}\, \text{peV}^{-1}$.
The resulting scale factor is shown in figure~\ref{figure:scale}.\\
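Eqs.~(\ref{frlweq}) and (\ref{eqforT}) close once $\rho(T)$ is known. As an illustration of the Runge-Kutta integration just mentioned, the sketch below evolves $a$ and $T$ for a pure-radiation toy equation of state with a constant effective number of degrees of freedom (the value of $g_*$, the step size and the step count are illustrative assumptions, not the paper's full species sum; $G$ takes the value quoted above):

```python
import math

G = 1.0 / 1.44          # Newton's constant in the unit system of the text
GSTAR = 17.25           # constant effective d.o.f. (illustrative assumption)

def rho(T):             # pure radiation: rho = (pi^2/30) g_* T^4, P = rho/3
    return math.pi ** 2 / 30.0 * GSTAR * T ** 4

def derivs(a, T):
    H = math.sqrt(8.0 * math.pi * G / 3.0 * rho(T))   # Friedmann equation
    # dT/dt = -3H (rho + P) / (drho/dT); here rho + P = (4/3) rho
    # and drho/dT = 4 rho / T, so dT/dt = -H T
    return a * H, -H * T

def rk4_step(a, T, dt):
    k1a, k1T = derivs(a, T)
    k2a, k2T = derivs(a + 0.5 * dt * k1a, T + 0.5 * dt * k1T)
    k3a, k3T = derivs(a + 0.5 * dt * k2a, T + 0.5 * dt * k2T)
    k4a, k4T = derivs(a + dt * k3a, T + dt * k3T)
    return (a + dt / 6.0 * (k1a + 2 * k2a + 2 * k3a + k4a),
            T + dt / 6.0 * (k1T + 2 * k2T + 2 * k3T + k4T))

a, T = 1.0, 1.75        # a(0) = 1 at T = 175 MeV (units of 100 MeV)
dt = 1.0e-6             # time step in peV^-1 (illustrative)
for _ in range(10000):
    a, T = rk4_step(a, T, dt)
```

For pure radiation the product $aT$ is conserved, which provides a built-in accuracy check of the integrator; the full computation replaces \texttt{rho(T)} by the tabulated species sum of Eq.~(\ref{edensity}).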
\begin{figure}
\includegraphics[scale=0.65]{FIGS.DIR/scalefactor_n.pdf}
\caption{Computed scale factor as a function of time (solid, blue online). We take as normalization, $a(0)=1$, where $T(t=0)=175$ MeV.
We also show the analytic form $a(t) \propto \sqrt{t}$ (dashed line, red online) that fits the lepton era at smaller temperatures. As we focus here on the pion gas, found at earlier times, roughly from $t_{T=175}=0\,\text{s}$ to $t_{T=100}\approx 10^{-2}\, \text{peV}^{-1}\, (10^{-5}\,\text{s})$,
that square-root approximation separates significantly from the actual numerical computation that we employ.
}
\label{figure:scale}
\end{figure}
\vspace{-6mm}
\subsection{Computation of the entropy}
For the calculation of the entropy in the homogeneous case we first note that the thermodynamic magnitudes are a function of the temperature only. We will also assume thermal equilibrium, vanishing chemical potentials and adiabatic expansion. Then the conservation of the entropy per co-moving volume implies $s a^3=s_0 a_0^3$ where $s=s(T)$ is the entropy density (see for example \cite{Weinberg}).
The second principle of thermodynamics gives the total-entropy increase as
\be \label{termo2}
TdS = d(\rho V)+P dV .
\ee
where $S=s V$ is the total entropy and $V$ is the volume. From this equation it is possible to get the thermodynamic relations:
\be
s = \frac{1}{T}(\rho+P)= \frac{dP}{dT}.
\ee
Therefore the entropy in co-moving volume $V$ is constant and proportional to $a^3(t)\frac{\rho +P}{T}$.
Enumerating all the species (in equilibrium at the same universal $T$)
\be \label{entropy0}
s=\frac{\rho_1+\cdots +\rho_n +P_1+\cdots +P_n}{T}.
\ee
The pion gas can be near chemical equilibrium because the pion production rate (through $\gamma\gamma \leftrightarrow \pi \pi$ followed by $\pi^+\pi^-\leftrightarrow \pi^0\pi^0$, and similar lepton-lepton inelastic interactions) is sufficient to offset the pion decay rate.
In figure~\ref{figure:densityplot} we show the time-evolution of the number density for the more relevant temperature span between 175 and 70 MeV. During this time interval, pions (and also muons) are abundant, comparably to the (quasi)massless species.
\begin{figure}
\includegraphics[scale=0.55]{FIGS.DIR/number_species.pdf}
\caption{ \label{figure:densityplot}
(Color online).
Number densities of the most abundant species during the hadron-lepton epoch.
From top to bottom:
$e^-, \ e^+$ (blue); $\mu^-,\ \mu^+$ dashed line (green); $\gamma$ (black); $\pi$ (red, solid);
$\nu$ (black, dotted).
The number density of nucleons is completely negligible during the entire time interval.}
\end{figure}
The entropy density may also be written as $\propto g_s(T)\,T^3$, $g_s(T)$ being the number of effective degrees of freedom. In particular, for ultrarelativistic particles,
\be
s(T)=g_s\frac{2\pi^2}{45}T^3\,.
\ee
Due to their relativistic behavior
throughout our entire temperature range, the effective number of degrees of freedom for photons, electrons and neutrinos is constant. The contribution of the massive species drops when $T<m_i$ as they become non-relativistic. Our numerical computation of the entropy can be cast in terms of $g_s(T)$, and it is plotted in figure~\ref{figure:gs}. In particular, the contribution of nucleons (as well as all strange and higher-flavor particles, not mentioned further) to the entropy density is completely negligible.
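The per-species curves of figure~\ref{figure:gs} follow from the combination $g_s = 45\, s/(2\pi^2 T^3)$, evaluated species by species. A sketch of that computation at vanishing chemical potential (the grid parameters are illustrative):

```python
import math

def g_s(m_over_T, g, eta, n=4000, xmax=30.0):
    """Effective entropy degrees of freedom of one species at mu = 0:
    g_s = 45 s / (2 pi^2 T^3), with s = (rho + P)/T.
    x = p/T; eta = +1 for fermions, -1 for bosons."""
    dx = xmax / n
    acc = 0.0
    for i in range(1, n + 1):        # midpoint rule for the dimensionless integral
        x = (i - 0.5) * dx
        e = math.hypot(x, m_over_T)  # E/T
        f = 1.0 / (math.exp(e) + eta)
        acc += x * x * (e + x * x / (3.0 * e)) * f   # (rho + P)/T^4 integrand
    return 45.0 * g / (4.0 * math.pi ** 4) * acc * dx

# Checks: photons (g = 2) give g_s = 2; one massless fermionic d.o.f. gives 7/8;
# pions (g = 3) at T = 100 MeV, m_pi/T ~ 1.4, are partially Boltzmann-suppressed.
gs_pion = g_s(1.4, 3, -1)
```

Summing such calls over all species reproduces the aggregated $g_s(T)$ of the top plot.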
\begin{figure}
\includegraphics[scale=0.65]{FIGS.DIR/ovegs.pdf}
\vspace{3mm}
\includegraphics[scale=0.65]{FIGS.DIR/gs.pdf}
\caption{Top plot: aggregated effective number of relativistic degrees of freedom $g_s$ as a function of $\beta = 1/T$ from numerical calculation. Bottom plot: effective number of degrees of freedom for pions (solid line, blue online), photons (horizontal solid line, black online), electrons and positrons (dotted horizontal line, green online) and muons (dashed line, red online). Note that at the highest part of the temperature interval, pions provide a larger contribution to the entropy density than photons, though leptons are the largest carriers of entropy.
\label{figure:gs}}
\end{figure}
\vspace{-6mm}
\section{Entropy production by local departures from (thermal) homogeneity}
\subsection{Solution to the heat equation}
In this section, we consider separations from the homogeneous background described in section~\ref{sec:equilibrium}.
For simplicity we will take inhomogeneities to be spherical bulbs at temperature different from the background.
Thus, the temperature field $T({\bf r},t)$ will now also acquire a dependence on position. Local thermal equilibrium as well as chemical equilibrium is still assumed (and departures thereof can be separately considered in further investigations that we do not attempt here).
The departure from the background modifies temperature and entropy density
\ba \label{inhom}
T({\bf r},t) &=& T_{back}(t) +\delta T ({\bf r},t)\,, \nonumber \\
s({\bf r},t) &=& s_{back}(t) +\delta s({\bf r},t)\, .
\ea
Setting as the simplest initial condition a bubble at higher $T$ than its surroundings, the temperature profile of such a bulb will evolve according to the heat equation.
Then,
\be \label{heatin}
\Delta \left( \delta T({\bf r},t)\right) = \frac{c_p(T)}{\kappa(T)}\,
\frac{\partial \left(\delta T({\bf r},t)\right)}{\partial t}\,,
\ee
with $\kappa(T)$ being the heat conductivity. Here the constant-pressure specific heat $c_p$ is defined as the derivative of the background entropy (neglecting the newly produced one)
with respect to temperature at constant $P$:
\be \label{cp}
c_p(T) =\frac{\partial s_{back}(T)}{\partial T}\Bigg\vert_P\, .
\ee
Since we already calculated the contribution of pions to the entropy density $s^\pi_{back}$, we can immediately compute the partial specific heat of the pion gas (we will further drop the superindex $\pi$ in this section, as all quantities refer to the pion gas alone).
The other non-trivial function is $\kappa(T)$, the thermal conductivity, which depends on the temperature alone and is known from recent and earlier studies. The numeric data~\cite{Torres-Rincon:2012sda}
from a variational solution of Boltzmann's equation following the Chapman-Enskog expansion is shown in figure~\ref{figure:k}.
Since $c_p(T)$ and $\kappa(T)$ are nontrivial functions of the temperature, the heat equation does not admit an immediate analytical solution, so we numerically solve it by brute force with the simplest parabolic solver for a partial differential equation based on the finite-difference method in space and the Euler method in time. Thus, in figure~\ref{figure:k} we also show a simple interpolating function for the conductivity in the temperature interval of interest that we employ to speed up the computer code.
The valley in the conductivity at mid-temperatures occurs because of the $m_\pi\simeq f_\pi$ scales; the dropping low-temperature behavior can be obtained from the $\pi\pi$ scattering length and non-relativistic kinetic theory, and at high-$T$ dimensional analysis dictates $\kappa\propto T^2$ as visible. The detailed calculation with the full machinery of phase shifts, unitarity, chiral perturbation theory, etc. has been reported elsewhere \cite{Torres-Rincon:2012sda}.
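The finite-difference scheme just described can be sketched compactly. For illustration we freeze the diffusivity $D=\kappa/c_p$ to a constant, which makes the exact self-similar solution available for comparison (a Gaussian stays Gaussian with variance $R^2+2Dt$); this is an assumption made only for validation, since the production runs use the interpolated $\kappa(T)$ and the numerical $c_p(T)$:

```python
import math

def relax_bulb(R=1.0, dT0=1.0, D=1.0, rmax=20.0, n=400, t_end=0.5):
    """Explicit finite differences in r, Euler in t, for
    d(dT)/dt = D * (d2/dr2 + (2/r) d/dr)(dT)  (spherical symmetry).
    Constant D = kappa/c_p is an illustrative simplification."""
    dr = rmax / n
    dt = 0.1 * dr * dr / D                 # well below the explicit stability bound
    u = [dT0 * math.exp(-(i * dr) ** 2 / (2.0 * R * R)) for i in range(n + 1)]
    steps = int(round(t_end / dt))
    for _ in range(steps):
        v = u[:]
        for i in range(1, n):
            lap = ((u[i + 1] - 2.0 * u[i] + u[i - 1]) / dr ** 2
                   + (2.0 / (i * dr)) * (u[i + 1] - u[i - 1]) / (2.0 * dr))
            v[i] = u[i] + dt * D * lap
        v[0] = v[1]                        # regularity at the origin (zero slope)
        v[n] = 0.0                         # bulb embedded in the uniform background
        u = v
    return u, steps * dt

u, t = relax_bulb()
# Exact central value for constant D: dT(0,t) = dT0 (R^2/(R^2 + 2 D t))^(3/2)
central_exact = (1.0 / (1.0 + 2.0 * t)) ** 1.5
```

Replacing the constant $D$ by the ratio of the interpolated $\kappa(T)$ to $c_p(T)$ at each grid point gives the solver used for figure~\ref{figure:deltat}.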
\begin{figure}
\includegraphics[scale=0.65]{FIGS.DIR/k.pdf}
\caption{Dotted blue online: Thermal conductivity $\kappa$ as function of temperature at zero chemical potential from solving the Boltzmann equation. Solid (red online): simple interpolating function
employed in the heat equation solver.
\label{figure:k}}
\end{figure}
The numeric solution of Eq.~(\ref{heatin}), $\delta T({\bf r},t)$, is shown in figure \ref{figure:deltat} with an initial condition that has a spherical profile Gaussian in the radius,
\be \label{init}
\delta T(r,0)=\delta T_0\, e^{- \frac{r^2}{2R^2}}\,.
\ee
Here $\delta T_0$ is the initial central temperature of the inhomogeneity over that of the background, and $R$ is the typical radius.
There are several considerations to choose the size of the inhomogeneity. At the largest scale,
we can ask ourselves what is the largest possible radius that will homogenize during the pion gas lifetime. We must also take the size of the bulb small enough so as to respect CMB constraints.
Glancing back to figure~\ref{figure:scale}, we estimate
the Hubble horizon reached during the pion gas to be around $10^{-3}-10^{-2}$ peV$^{-1}$. This means that no homogenization can take place over distances larger than about a light-second, $(1-10)\times 10^{-3}$ peV$^{-1}$, or, converting units, $R$ must be no larger than $\approx 10^{16}-10^{17}\, \text{fm}$. This guarantees that the thermal flattening of the bulb never violates causality.
Further, since the first-order heat equation is not relativistically causal and we have not examined the second-order formalism, we have to restrict ourselves to even significantly smaller spheres.
A further consideration is that if the inhomogeneity is too large, its relaxation time will be so great that when it reaches thermal equilibrium, there are no pions left (they are abundant for $T_{back}\approx 175-80$ MeV). For this reason (exclusively of simplicity), we will restrict the study to inhomogeneities no bigger than $R \approx 10^9$ fm.
These are small enough not to perturb the metric significantly, so we can treat them simply as Newtonian perturbations.
Finally, when we consider the smallest radii of the inhomogeneity, in the typical nuclear scale or somewhat more, RHIC guidance is available.
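These size choices can be oriented by the diffusive time scale $t_{\rm relax}\sim R^2\, c_p/\kappa$, which grows quadratically with the bulb radius. A back-of-the-envelope sketch (the diffusivity value is a placeholder of order the pion mean free path, not a fitted number):

```python
def t_relax(R_fm, D_fm=1.0):
    """Diffusive relaxation-time estimate t ~ R^2 / D, with D = kappa/c_p.
    Natural units: lengths and times in fm (1 fm of time ~ 3.3e-24 s)."""
    return R_fm ** 2 / D_fm

small = t_relax(2.5e2)   # smallest bulbs considered below
large = t_relax(2.5e9)   # largest bulbs considered below
# A bulb 10^7 times wider relaxes 10^14 times more slowly, which is why radii
# much above ~1e9 fm would outlive the pion gas itself.
```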
\begin{center}
\begin{figure}
\includegraphics[scale=0.65]{FIGS.DIR/profile.pdf}
\caption{Temperature profile $T(r,t)$ of an inhomogeneity of initial size $R=2.5\times 10^5\,\text{fm}$, as a function of the radius $r$ for increasing times. Top, solid black line: initial condition $T(0,0)=170 \,\text{MeV}$. Brown solid line, much flatter, at the bottom: $T(r,t_r)$, $t_r\approx 10^{-12}\,\text{s}$. Other lines illustrate the time evolution of the inhomogeneity at intermediate times.
\label{figure:deltat}}
\end{figure}
\end{center}
\subsection{Entropy increase in one inhomogeneity}
The variation of the entropy of our inhomogeneity of volume $V$ during the relaxation process can be written as:
\be \label{inentropy}
\frac{dS_T}{dt}=\frac{dS_{\bar{V}}}{dt} +\frac{dS_V}{dt}\,,
\ee
where $S_T$ denotes the total entropy, $dS_{\bar{V}}$ represents the entropy exchanged with the rest of the universe and $dS_V$ the internal entropy production. The exchanged entropy $dS_{\bar{V}}$ can be obtained by means of an integral of the incoming entropy current over the surface $\partial V$ of the inhomogeneity. We will consider the exchange as positive if entropy is supplied to the subsystem by the surroundings. The entropy current will be denoted by ${\bf j}_s$.
Concerning the internal entropy production $d S_V$ we introduce the rate of entropy production $\sigma_s$ per unit volume and unit time inside the system. In terms of these quantities, $dS_{\bar{V}}/dt$ and $dS_V/dt$ may be written as
\ba \label{entropye}
\frac{dS_{\bar{V}}}{dt} &=& -\int_{\partial V} {\bf j}_s\cdot {\bf n} \,d\Sigma\,, \nonumber\\
\frac{dS_V}{dt} &=& \int_V \sigma_s \, dV\, .
\ea
Expressing Eq.~(\ref{inentropy}) in terms of the entropy current and density we have:
\be \label{enprod}
\frac{d S_T}{dt} = \frac{d}{dt}\int_V s_T\, dV =-\int_{\partial V} d\Sigma\,\,{\bf j}_s\cdot {\bf n} +\int_V \,dV\,\sigma_s\
\ee
and use of Gauss's theorem yields the equation
\be
\label{enprod2}
\frac{d s_T}{dt} = -\boldsymbol{ \nabla}\cdot {\bf j}_s +\sigma_s\ .
\ee
For small flows, linear laws hold, such as the Fourier law for the heat flux:
\be\label{Fourier}
{\bf j}_e=-\kappa(T)\boldsymbol{ \nabla} T\ ;
\ee
where ${\bf j}_e$ is the heat current vector. Other examples of linear laws are Fick's law for a flavor $i$ concentration flux, ${\bf j}_i=-D_i\boldsymbol{ \nabla} n_i$, with $D_i$ being a diffusion coefficient for the particle species
$i$; or Ohm's law for the electric current density ${\bf j}_Q =- \kappa \boldsymbol{ \nabla}\phi$ with ${\bf j}_Q$ being the electric current, $\phi$ the electric potential and $\kappa$ the electric conductivity.
A general form for the entropy production $\sigma_s$ is
\be
\sigma_s= {\bf j}_e\cdot \boldsymbol{ \nabla}\left(\frac{1}{T}\right)-\sum_i \left[ {\bf j}_i\cdot \boldsymbol{ \nabla}\left(\frac{\mu_i}{T}\right) +\frac{A_k\,v_k}{T}\right]+\kappa \frac{{\bf I}\cdot {\bf j}_Q}{T}\cdots \,,
\ee
with $A_k$ and $v_k$ the activities and the stoichiometric coefficients of the species involved in the $k$th inelastic particle reaction. In the following we will consider the entropy production $\sigma_s$ for the thermal flow alone (first term).
Basic thermodynamics yields
\be
dU = T\, d S_T =\left({\bf j}_e\cdot {\bf n}\right) d\Sigma\, dt
\ee
where $U$ is the internal energy of the inhomogeneity. Integrating over the surface and in time, and using Gauss's theorem, we find the entropy produced in the process of relaxation of the inhomogeneity:
\be
\Delta S_T =\int _{\partial V} \,d\Sigma\, dt \,\frac{ {\bf j}_e\cdot {\bf n}}{T}=\int _V \,dV\, dt \boldsymbol{ \nabla} \cdot \left( \frac{ {\bf j}_e}{T}\right)
\ee
Applying Fourier's law in Eq.~(\ref{Fourier}) we find:
\be
\Delta S_T = \int \,dV\,dt \boldsymbol{ \nabla}\cdot \left(-\frac{1}{T}\kappa(T) \boldsymbol{ \nabla} T\right) \ .
\ee
Applying now Leibniz's rule we get
\ba \label{eprod}
\Delta S_T = \int dV\, dt\,\frac{\kappa(T)}{T^2}
\left(\vert \boldsymbol{\nabla}\delta T\vert ^2 -T \Delta \delta T\right)\, ,
\ea
which is positive, $\Delta S_T \geq 0$, since $\Delta T = \Delta(\delta T)\leq 0$ (remember that $T_{back}$ was position-independent).
Comparing with Eq.~(\ref{entropye}) we find the production of entropy and the divergence of its flow
\ba \label{ents}
\sigma_s(r,t) &=& \frac{\kappa(T)}{T^2}\vert \boldsymbol{\nabla} \delta T(r,t)\vert^2\,,\\
-\boldsymbol{ \nabla}\cdot {\bf j}_s(r,t) &=& \frac{\kappa(T)}{T} \Delta \delta T(r,t)\,.
\ea
The internal entropy produced in dissipating an inhomogeneity is an integrated entropy $\Delta S_V$, obtained from the entropy-density production $\sigma_s$ after integrating over the time and space when and where the inhomogeneity was relevant,
\be \label{intentropy}
\Delta S_V(\delta T_0)=\int \,dV\,dt \, \sigma_s(r,t)\,.
\ee
To ascertain the size of this produced entropy and assess its relative importance, it is natural to divide it by the background entropy in the same volume, $S_{back}$, which for a spherical disturbance, integrating up to the radius $R$ (defined above in Eq.~(\ref{init}) as the characteristic Gaussian fall-off radius), is
\be \label{sback}
S_{back}(R,T_{back})
\simeq \frac{4}{3}\pi R^3 s_{back}(T_{back})\,.
\ee
We now have all necessary equations and can proceed to the numerical computation.
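The machinery of Eqs.~(\ref{ents})-(\ref{intentropy}) can first be validated on a solvable limit: constant $\kappa$, constant background $T$ and constant diffusivity $D=\kappa/c_p$, for which the Gaussian bulb evolves exactly and the produced entropy integrates in closed form to $\Delta S_V = \tfrac{1}{2}\pi^{3/2}\,(\kappa/D)\,\delta T_0^2\, R^3/T^2$. A numerical sketch of that cross-check (all parameter values illustrative, not those of the tables below):

```python
import math

def delta_S_numeric(kappa=1.0, T=1.0, dT0=0.1, R=1.0, D=1.0,
                    nt=2000, nr=400):
    """Integrate sigma_s = (kappa/T^2)|grad dT|^2 over space and time for a
    Gaussian bulb evolving with constant diffusivity D (toy limit of the
    full T-dependent problem, chosen for its closed-form answer)."""
    t_max = 50.0 * R * R / D          # later times contribute negligibly
    dt = t_max / nt
    rmax = 20.0 * R
    dr = rmax / nr
    total = 0.0
    for j in range(nt):
        s2 = R * R + 2.0 * D * ((j + 0.5) * dt)   # Gaussian variance at time t
        amp = dT0 * (R * R / s2) ** 1.5
        slice_sum = 0.0
        for i in range(1, nr + 1):
            r = (i - 0.5) * dr
            grad = amp * r / s2 * math.exp(-r * r / (2.0 * s2))
            slice_sum += 4.0 * math.pi * r * r * grad * grad * dr
        total += kappa / T ** 2 * slice_sum * dt
    return total

approx = delta_S_numeric()
# Closed form for this toy limit: (pi^(3/2)/2) (kappa/D) dT0^2 R^3 / T^2
exact = math.pi ** 1.5 / 2.0 * 0.1 ** 2
```

The full results quoted next replace the analytic Gaussian by the numerical solution of Eq.~(\ref{heatin}) with $T$-dependent coefficients.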
\section{Numerical results}
\subsection{One inhomogeneity only}
To check the computer codes and understand the typical order of magnitude, let us consider a time period that is short enough so that the background temperature does not vary appreciably and can be considered constant ($T=T_0$). That means in particular that $\kappa$ and $c_p$ also remain constant (in fact the inhomogeneity has not fully spread in this case, but we can deal with this numerically later). Then we can make the replacement
\be
\sigma_s \simeq \frac{\kappa( T_0)}{ T_0^2\,R^4}\,\delta T_0^2\, r^2\, e^{-\frac{r^2}{R^2}}\,,
\ee
wherein $T_0 =T_{back}+\delta T_0$. For a temperature interval from $175$ MeV to $170$ MeV ($t\in [0,10^{13}]$ MeV$^{-1}$), we keep $\kappa(T)/T_0^2$ unchanged and of order one; thus, $\sigma_s\propto \frac{\delta T_0^2}{R^4}\,r^2\, e^{-\frac{r^2}{R^2}}$. Carrying out the integral over space, one gets
\be
\int\,dV\sigma_s\propto \delta T_0^2\,R\,.
\ee
To put some numbers, take an inhomogeneity of size
$ 10^{8}$ fm at $\delta T_0 \approx 10 \,\text{MeV}$; one has then $\int\,dV \sigma_s\simeq 10^{7}$ MeV.
This element multiplied by a time interval $\Delta t\approx 10^{13}$ MeV$^{-1}$ gives an integrated entropy of order $10^{20}$. Nevertheless, since at $175$ MeV $s_{back}$ is numerically of order $10^6$ MeV$^3$, the background entropy $S_{back}$ given by Eq.~(\ref{sback}) is $\approx 10^{24}$, so the ratio $\Delta S_V/S_{back}$ is $\approx 10^{-4}$. Inasmuch as we are considering just a tiny time interval in which the bubble did not have enough time to evolve, the value of the entropy produced over the entire life of the bubble must be larger than this figure, and thus not negligible at all (but it requires a numeric computation).
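The proportionality to $\delta T_0^2\, R$ can be checked directly from the initial profile of Eq.~(\ref{init}): its gradient squared is $\delta T_0^2\,(r^2/R^4)\, e^{-r^2/R^2}$, whose volume integral evaluates to $\tfrac{3}{2}\pi^{3/2}\,\delta T_0^2\, R$. A short numerical verification (quadrature parameters illustrative):

```python
import math

def grad_sq_integral(dT0, R, n=20000, rmax_over_R=12.0):
    """Int dV |grad dT|^2 for dT(r) = dT0 exp(-r^2/(2 R^2)); the gradient is
    -dT0 (r/R^2) exp(-r^2/(2 R^2)).  Analytic value: (3/2) pi^(3/2) dT0^2 R."""
    dr = rmax_over_R * R / n
    acc = 0.0
    for i in range(1, n + 1):        # midpoint rule
        r = (i - 0.5) * dr
        grad = dT0 * r / R ** 2 * math.exp(-r * r / (2.0 * R * R))
        acc += 4.0 * math.pi * r * r * grad * grad * dr
    return acc

v = grad_sq_integral(1.0, 1.0)       # approx 1.5 * pi^(3/2)
```

Doubling $\delta T_0$ and tripling $R$ multiplies the result by $2^2\times 3 = 12$, as the scaling law requires.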
Now, by solving the heat equation for $T(r,t)$ in Eq.~(\ref{heatin}), we can compute the integral in Eq.~(\ref{intentropy}) with Eq.~(\ref{ents}) and thus numerically obtain
$\Delta S_V$.
\begin{table}[htbp]
\caption{Values of $\Delta S_V/S_{back}$ (in units of $10^{-3}$) for different temperatures and, for each column, a value of the inhomogeneity size given by $R_1=2.5\times 10^{9}\,,R_2=2.5\times 10^{7}\,,R_3=2.5\times 10^5\,,R_4=2.5\times 10^2$ in fm units. Temperatures are in MeV. \label{table:entropy}}
\begin{tabular}{|c @ {\hspace{2mm}} c @ {\hspace{5mm}} | c @ {\hspace{5mm}} c @ {\hspace{5mm}} c @ {\hspace{5mm}} c @ {\hspace{0mm}}| }
\toprule
$T_{back}$ & $\delta T_0$ & $1$ &$2$ & $3$ &$4$\\
\midrule
\multirow{8}{0.35cm}{130}
& 40 & 56.6 & 46.2 & 46.2 & 43.6 \\
& 35 & 46.7 & 35.6 & 35.6 & 33.6\\
& 30 & 37.7 & 26.3 & 26.3 & 24.8\\
& 25 & 30.1 & 18.4 & 18.4 & 17.4\\
& 20 & 24.6 & 11.8 & 11.8 & 11.1\\
& 15 & 19.8 & 6.7 & 6.7 & 6.3 \\
& 10 & 15.9 & 3.0 & 3.0 & 2.8\\
& 5 & 18.9 & 0.8 & 0.8 & 0.7\\
\hline
\multirow{8}{0.35cm}{100}
& 40 & 162.3 & 132.7 & 132.7 & 125.0 \\
& 35 & 132.6 & 101.9 & 101.1 & 95.3 \\
& 30 & 105.6 & 73.9 & 73.9 & 70.0\\
& 25 & 83.6 & 51.0 & 51.0 & 48.1\\
& 20 & 67.5 & 32.4 & 32.4 & 30.6\\
& 15 & 53.5 & 18.0 & 18.0 & 17.1\\
& 10 & 42.5 & 7.9 & 7.9 & 7.5\\
& 5 & 48.7 & 2.0 & 2.0 & 1.9\\
\bottomrule
\end{tabular}
\end{table}
Table \ref{table:entropy} shows the numeric computation of $\Delta S_V$ divided by $S_{back}$ for different choices of $T_{back}$ and $\delta T_0$ (initial, central intensity of the perturbation).
In figure \ref{fig:entdt} we plot the same quantity $\Delta S_V/S_{back}$ against $ \delta T_0$ for different initial sizes. As expected, the bigger the bulb is, the more entropy it produces, also relative to the background.
In figure \ref{fig:entone} we simultaneously plot $\Delta S_V/S_{back}$ against the size $R$ and intensity $\delta T_0$ of the inhomogeneity. It is interesting, though expected, to note that at lower background temperatures the integrated entropy becomes larger for equal $\delta T_0$.
Mathematically this comes from the term $\vert\boldsymbol{\nabla}T\vert^2$ in Eq.~(\ref{ents}),
which increases as the $T_{back}$ decreases, giving rise to a larger entropy production.\\
\begin{figure}
\includegraphics[scale=0.65]{FIGS.DIR/entropydt2.pdf}
\caption{$\Delta S_V/S_{back}$ against $\delta T_0$ for $T_{back}=100$ MeV and different radii $R$, from top to bottom: $R=2.5\times 10^9$ fm (blue online), $R=2.5\times 10^5$ fm (green online), and $2.5\times 10^2$ fm (black online). \label{fig:entdt} }
\end{figure}
\begin{center}
\begin{figure}
\includegraphics[scale=0.28]{FIGS.DIR/entone4.pdf}
\caption{$\Delta S_V/S_{back}$ as a function of $R$ and $\delta T_0$ at different $T_{back}$, at the moment of formation of the inhomogeneity. From top to bottom the surfaces correspond to $T_{back}=100$ MeV (green online) and $T_{back}=130$ MeV (blue online). \label{fig:entone}}
\end{figure}
\end{center}
\subsection{Multiple inhomogeneities} \label{subsec:multiple}
In the early-universe hadronic gas there is no reason to think that only one bubble of different temperature would form (as opposed to, say, a nuclear collision, which is a system of very limited size). In the absence of data, all we can give is an upper bound to the entropy produced, by packing in as many inhomogeneities as possible (as long as the background does not lose its meaning).
We adopt as an extreme limit the density of bubbles at which their Gaussian two-sigma walls touch. Thus, for geometric simplicity, we consider a Cartesian arrangement with the inhomogeneities disposed as in a simple cubic structure. The typical size of each inhomogeneity is $\approx 2R$ in diameter, and we take $4R$ as a reasonable average separation between inhomogeneities. The edge of such a cube then has length $2R\,N$ due to the presence of $N$ inhomogeneities plus $(N-1)4\,R$ due to the $(N-1)$ spacings,
as we show in figure \ref{figure:node}.
\begin{center}
\begin{figure}
\begin{tikzpicture}
\draw [very thick] (-4,0) circle (0.4);
\draw [very thick] (-1.9,0) circle (0.4);
\draw [very thick] (1.5,0) circle (0.4);
\draw [very thick] (3.6,0) circle (0.4);
\draw [thick] (-3.6,0) -- (-2.3,0);
\draw [thick] (-1.5,0) -- (-0.5,0);
\draw [thick] (0.1,0) -- (1.1,0);
\draw [thick] (1.9,0) -- (3.2,0);
\draw[decorate,decoration={brace,amplitude=3pt,mirror}]
(-4.4,-0.6) node(t_k_unten){} --
(-3.6,-0.6) node(t_k_opt_unten){};
\draw[decorate,decoration={brace,amplitude=3pt,mirror}]
(-3.5,-0.6) node(t_k_unten){} --
(-2.4,-0.6) node(t_k_opt_unten){};
\draw[decorate,decoration={brace,amplitude=3pt,mirror}]
(-2.3,-0.6) node(t_k_unten){} --
(-1.5,-0.6) node(t_k_opt_unten){};
\draw[decorate,decoration={brace,amplitude=3pt,mirror}]
(-1.4,-0.6) node(t_k_unten){} --
(-0.5,-0.6) node(t_k_opt_unten){};
\draw[decorate,decoration={brace,amplitude=3pt,mirror}]
(0.1,-0.6) node(t_k_unten){} --
(1,-0.6) node(t_k_opt_unten){};
\draw[decorate,decoration={brace,amplitude=3pt,mirror}]
(1.1,-0.6) node(t_k_unten){} --
(1.9,-0.6) node(t_k_opt_unten){};
\draw[decorate,decoration={brace,amplitude=3pt,mirror}]
(2,-0.6) node(t_k_unten){} --
(3.1,-0.6) node(t_k_opt_unten){};
\draw[decorate,decoration={brace,amplitude=3pt,mirror}]
(3.2,-0.6) node(t_k_unten){} --
(4,-0.6) node(t_k_opt_unten){};
\draw (-4,-1.2) node{$2R$};
\draw (-1.9,-1.2) node{$2R$};
\draw (1.5,-1.2) node{$2R$};
\draw (3.6,-1.2) node{$2R$};
\draw (-2.95,-1.2) node{$4R$};
\draw (-0.95,-1.2) node{$4R$};
\draw (0.55,-1.2) node{$4R$};
\draw (2.55,-1.2) node{$4R$};
\draw [thick] (-0.3,0) circle (0.01);
\draw [thick] (-0.2,0) circle (0.01);
\draw [thick] (-0.1,0) circle (0.01);
\draw (3.5,0.8) node{$N$};
\draw (1.45,0.8) node{$N-1$};
\end{tikzpicture}
\caption{Sketch of the inhomogeneities arrangement.}\label{figure:node}
\end{figure}
\end{center}
\vspace{-8mm}
The background entropy ${S}^{(N)}_{back}$ for $N$ inhomogeneities occupying a volume $V_C$ is then
\be
S^{(N)}_{back}=V_C s_{back} \,,
\ee
with $V_C=\left[(N-1)4R +2NR\right]^3$.
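As a quick check of this counting (purely illustrative; the values of $N$ and $R$ below are free parameters):

```python
def cell_volume(N, R):
    """Volume of the cubic cell: the edge packs N bulbs of diameter 2R
    plus (N - 1) gaps of width 4R, i.e. edge = 2*N*R + 4*(N-1)*R."""
    edge = 2.0 * N * R + 4.0 * (N - 1) * R
    return edge**3

# background entropy of the cell would then be S_back_N = cell_volume(N, R) * s_back
V_single = cell_volume(1, 2.5e2)    # fm^3, for the smallest radius R_4
```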
Next we need to model the intensity of each perturbation, $\delta T_0$. In a plasma this will be randomly distributed. Conceivable noise models are white noise (all $\delta T_0$ equally likely) or Brownian noise (the distribution falls as $1/\delta T_0^2)$. An interesting intermediate case that is ubiquitous in physics is the so called $1/f$ noise~\cite{noise} that distributes the bubbles in proportion to $1/\delta T_0$.
Both $1/f$ and $1/f^2$ noises obviously assign lower density to higher $\delta T_0$. We currently have no reason to prefer one distribution over another, so we examine all three of them.
In future work we will examine acoustic oscillations of the gas, performing a spectral analysis in which the amplitude of each Fourier mode is left arbitrary, so as to improve on the treatment given here.
The noise function is in all three cases of the form
\be
P(\delta T_0)=\frac{\mathcal{C}}{\delta T_0^\beta}\,,
\ee
wherein $\beta=0,1,2$ for the white, $1/f$ and $1/f^2$ noises respectively. The normalization constant $\mathcal{C}$ is determined from the total number of inhomogeneities $N$ in $V_C$ by
\be
\int^{\delta T_b}_{\delta T_a} d(\delta T_0)\,\frac{\mathcal{C}}{ \delta T_0^\beta} = N\,,
\ee
with $\delta T_a\,,\delta T_b$ the lower and upper limits respectively for the initial temperature of the inhomogeneity, i.e., $\delta T_0\in [\delta T_a,\delta T_b]$.
Too high initial temperatures will involve the quark and gluon plasma and are thus out of our reach here, so $\delta T_b\sim 40$ MeV seems reasonable for this exploration.
As for the smallest $\delta T_0$ taken: since we work in the isospin limit (for example, in the computation of the thermal conductivity), it does not make sense to retain scales smaller than about $5$ MeV, where quark-mass or electromagnetic isospin-breaking effects may play a role.
Thus, we choose the separation above the thermal background in the interval from $5$ MeV to $40$ MeV.
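The normalization integral has a closed form for the three exponents; a short sketch (the value of $N$ below is an arbitrary illustration):

```python
import math

def noise_norm(beta, N, Ta=5.0, Tb=40.0):
    """Constant C such that  int_Ta^Tb  C / dT0^beta  d(dT0) = N,
    for white (beta = 0), 1/f (beta = 1) and 1/f^2 (beta = 2) noise."""
    if beta == 0:
        integral = Tb - Ta
    elif beta == 1:
        integral = math.log(Tb / Ta)
    elif beta == 2:
        integral = 1.0 / Ta - 1.0 / Tb
    else:
        raise ValueError("only beta = 0, 1, 2 are considered here")
    return N / integral

C_white, C_pink, C_brown = (noise_norm(b, N=1000) for b in (0, 1, 2))
```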
In figure \ref{figure:entsize} we plot $\Delta S^{(N)}_V/S^{(N)}_{back}$
for different initial sizes at fixed $T_{back}=100$ MeV for the three noise profiles.
\begin{figure}
\includegraphics[scale=0.67]{FIGS.DIR/entropysize100_2.pdf}
\caption{$\Delta S^{(N)}_V/S^{(N)}_{back}$ against $R$ at fixed $T_{back}=100$ MeV and $\delta T_0 = 40\,\text{MeV}$. From bottom to top: $1/f^2$ noise function (blue online), $1/f$ noise (red online), constant white noise (green online). \label{figure:entsize}}
\end{figure}
The summed, integrated entropy $\Delta S^{(N)}_V$ is defined as the integrated entropy summed over all inhomogeneities, weighted by the noise function chosen, namely,
\be
\Delta S^{(N)}_V(\delta T_0)=\sum_{\delta T_0}\Delta S_V(\delta T_0)P(\delta T_0)\,,
\ee
with $\Delta S_V$ defined in Eq.~(\ref{intentropy}). Tables \ref{table:entropymultiplef2}, \ref{table:entropymultiplef}, and \ref{table:entropymultipleone} give the summed, integrated entropy with the distribution of inhomogeneities following the noise functions $1/f^2$, $1/f$, and $1$ respectively. In the case of the $1/f$ and $1/f^2$ noises, there is a balance between the larger entropy production in the hotter bubbles
and the larger probability of finding the colder ones, yielding a relatively flat entropy-production dependence on $\delta T_0$. As shown in figure \ref{figure:enttinhom}, the largest entropy production is attained for the white noise distribution. Nonetheless, note that in no case is $\Delta S^{(N)}_V$ much greater than about $10^{-6}\, S^{(N)}_{back}$.\\
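This balance can be reproduced directly from the single-bubble values of Table \ref{table:entropy}; the sketch below weighs the $T_{back}=100$ MeV, $R_3$ column by $1/\delta T_0^\beta$, normalized to one bubble so the three noises are comparable (an illustration, not the full sum over the cell):

```python
# single-bubble entropy ratios from the T_back = 100 MeV, R_3 = 2.5e5 fm
# column of the single-inhomogeneity table, in units of 1e-3 * S_back
dT0_bins = [40, 35, 30, 25, 20, 15, 10, 5]
dS_one = [132.7, 101.1, 73.9, 51.0, 32.4, 18.0, 7.9, 2.0]

def weighted_entropy(beta):
    """Average of dS_one weighted by 1/dT0^beta (normalized to one bubble)."""
    w = [t ** (-beta) for t in dT0_bins]
    return sum(s * wi for s, wi in zip(dS_one, w)) / sum(w)

white, pink, brown = (weighted_entropy(b) for b in (0, 1, 2))
# white > pink > brown: steeper noises are dominated by cold, low-entropy bubbles
```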
\begin{figure}
\includegraphics[scale=0.65]{FIGS.DIR/entdt100.pdf}
\caption{$\Delta S^{(N)}_V/S^{(N)}_{back}$ against $\delta T_0$ at $R= 2.5\times 10^{5}\,\text{fm}$ and $T_{back}=100\,\text{MeV}$. Blue online: $1/f^2$ noise function, red online: $1/f$ noise, green online: constant white noise. \label{figure:enttinhom} }
\end{figure}
\begin{table}[htbp]
\caption{$\Delta S^{(N)}_V/S^{(N)}_{back}$ (in units of $10^{-6}$) for different temperatures. The four columns of data correspond to $R_1=2.5\times 10^{9}\,,R_2=2.5\times 10^{7}\,,R_3=2.5\times 10^5\,,R_4=2.5\times 10^2$
in fm units. $T_{back}$ and $\delta T_0$ are expressed in MeV.
\label{table:entropymultiplef2}}
\begin{tabular}{|c @ {\hspace{2mm}} c @ {\hspace{5mm}} |c @ {\hspace{5mm}} c @ {\hspace{5mm}} c @ {\hspace{5mm}} c @ {\hspace{0mm}} |}
\toprule
\multicolumn{2}{|c|}{ $ $ } &
\multicolumn{4}{c|}{ Noise function $1/f^2$}\\
$T_{back}$ & $\delta T_0$ & 1 & 2& 3 & 4\\
\toprule
\multirow{8}{0.55cm}{120}
& 40 & 8.1 & 5.3 & 5.3 & 5.0\\
& 35 & 7.2 & 4.5 & 4.5 & 4.2 \\
& 30 & 6.6 & 4.1 & 4.0 & 3.4 \\
& 25 & 6.3 & 2.9 & 2.9 & 2.6 \\
& 20 & 4.3 & 2.5 & 2.5 & 2.4 \\
& 15 & 4.0 & 1.5 & 1.5 & 1.4 \\
& 10 & 6.2 & 9.8 $\times 10^{-1}$ & 9.8 $\times 10^{-1}$ & 9.1 $\times 10^{-1}$ \\
& 5 & 7.2 & 2.5 $\times 10^{-1}$ & 3.1 $\times 10^{-1}$ & 1.4 $\times 10^{-1}$ \\
\hline
\multirow{8}{0.55cm}{100}
& 40 & 13.0 & 10.6 & 10.6 & 10.0 \\
& 35 & 11.9 & 9.0 & 9.0 & 8.5 \\
& 30 & 10.6 & 7.5 & 7.5 & 7.0 \\
& 25 & 9.6 & 5.8 & 5.8 & 5.5 \\
& 20 & 8.9 & 4.3 & 4.3 & 4.0 \\
& 15 & 8.1 & 2.7 & 2.7 & 2.6 \\
& 10 & 7.6 & 1.4 & 1.4 & 1.3 \\
& 5 & 10.5 & 4.1 $\times 10^{-1}$ & 4.1 $\times 10^{-1}$ & 3.9 $\times 10^{-1}$ \\
\toprule
\end{tabular}
\end{table}
\begin{table}[htbp]
\caption{$\Delta S^{(N)}_V/S^{(N)}_{back}$ (in units of $10^{-6}$) for different temperatures. The four columns of data correspond to $R_1=2.5\times 10^{9}\,,R_2=2.5\times 10^{7}\,,R_3=2.5\times 10^5\,,R_4=2.5\times 10^2$ in fm units. $T_{back}$ and $\delta T_0$ are given in MeV.
\label{table:entropymultiplef}}
\begin{tabular}{|c @ {\hspace{2mm}} c @ {\hspace{5mm}}| c @ {\hspace{5mm}} c @ {\hspace{5mm}} c @ {\hspace{5mm}} c @ {\hspace{0mm}} |}
\toprule
\multicolumn{2}{|c|}{ $ $ } &
\multicolumn{4}{c|}{ Noise function $1/f$}\\
$T_{back}$ & $\delta T_0$ &1 & 2 &3 & 4\\
\toprule
\multirow{8}{0.55cm}{120}
& 40 & 9.8 & 7.8 & 7.8 & 7.0 \\
& 35 & 7.9 & 5.9 & 5.9 & 4.8 \\
& 30 & 6.2 & 4.4 & 4.4 & 3.8 \\
& 25 & 4.9 & 3.6 & 3.6 & 2.2 \\
& 20 & 4.6 & 1.8 & 1.8 & 1.6 \\
& 15 & 5.5 & 1.1 & 1.1 & 9.4$\times 10^{-1}$\\
& 10 & 4.6 & 7.2$\times 10^{-1}$ & 7.2$\times 10^{-1}$ & 5.3$\times 10^{-1}$\\
& 5 & 5.7 & 1.8$\times 10^{-1}$ & 1.8$\times 10^{-1}$ & 1.4$\times 10^{-1}$\\
\hline
\multirow{8}{0.55cm}{100}
& 40 & 18.9 & 15.4 & 15.4 & 14.5\\
& 35 & 15.6 & 11.9 & 11.9 & 11.2\\
& 30 & 12.6 & 8.8 & 8.8 & 8.3\\
& 25 & 10.0 & 6.1 & 6.1 & 5.8\\
& 20 & 8.2 & 3.9 & 3.9 & 3.7\\
& 15 & 6.6 & 2.2 & 2.2 & 2.1\\
& 10 & 5.3 & 9.9$\times 10^{-1}$ & 9.9$\times 10^{-1}$ & 9.4$\times 10^{-1}$\\
& 5 & 6.3 & 2.5$\times 10^{-1}$ & 2.5$\times 10^{-1}$ & 2.3$\times 10^{-1}$\\
\toprule
\end{tabular}
\end{table}
\begin{table}[htbp]
\caption{$\Delta S^{(N)}_V/S^{(N)}_{back}$ (in units of $10^{-6}$) for different temperatures. The four columns of data correspond to $R_1=2.5\times 10^{9}\,,R_2=2.5\times 10^{7}\,,R_3=2.5\times 10^5\,,R_4=2.5\times 10^2$ in fm units. $T_{back}$ and $\delta T_0$ are both given in MeV.
\label{table:entropymultipleone}}
\begin{tabular}{|c @ {\hspace{2mm}} c @ {\hspace{5mm}} |c @ {\hspace{5mm}} c @ {\hspace{5mm}} c @ {\hspace{5mm}} c @ {\hspace{0mm}} |}
\toprule
\multicolumn{2}{|c|}{ $ $ } &
\multicolumn{4}{c|}{ White noise}\\
$T_{back}$ & $\delta T_0$ & 1 & 2 & 3& 4\\
\toprule
\multirow{8}{0.55cm}{120}
& 40 & 195.1 & 166.3 & 166.3 & 149.8 \\
& 35 & 195.5 & 120.1 & 120.1 & 112.6 \\
& 30 & 183.4 & 120.6 & 120.5 & 107.1\\
& 25 & 153.6 & 84.7 & 84.7 & 78.7 \\
& 20 & 137.3 & 64.3 & 64.3 & 50.4\\
& 15 & 126.1 & 30.4 & 30.4 & 24.3\\
& 10 & 57.3 & 12.8 & 12.8 & 12.3\\
& 5 & 52.2 & 2.7 & 2.7 & 2.6 \\
\hline
\multirow{8}{0.55cm}{100}
& 40 & 386.5 & 315.9 & 315.8 & 297.5 \\
& 35 & 315.7 & 240.7 & 240.7 & 226.9 \\
& 30 & 251.5 & 175.9 & 175.9 & 165.9 \\
& 25 & 199.0 & 121.3 & 121.3 & 114.5 \\
& 20 & 160.6 & 77.1 & 77.0 & 72.8 \\
& 15 & 127.5 & 43.0 & 42.9 & 40.6 \\
& 10 & 101.1 & 18.9 & 18.9 & 17.9 \\
& 5 & 118.6 & 4.7 & 4.6 & 4.4 \\
\toprule
\end{tabular}
\end{table}
The outcome of the computation is that the entropy produced is only a small fraction of the background entropy in the same volume, because with these weight functions many of the inhomogeneities are of small intensity. These weights are, however, ad hoc: independent ways of assessing which inhomogeneities are possible need to be found, and we look forward to progress in that respect.
\section{Conclusions}
In this work we have examined the production of entropy by thermal inhomogeneities in the pion gas produced after the quark-gluon plasma hadronization at the early universe.
In view of the uniformity of the CMB over scales too large to have been in causal contact, standard theory
invokes a period of accelerated expansion (inflation) of a universe in thermal equilibrium. Thus, it would appear natural to assume that such equilibrium was also reached early on at small scales. Nevertheless, the naturality argument is not foolproof (recall the recent discovery of a light Higgs with no accompanying supersymmetric partners for any of the SM particles) and does not rule out the possible existence of small-scale inhomogeneities in the early universe. If any such inhomogeneity did not have enough time to dissipate before the quark-gluon plasma crossover, or was produced during that phase transition,
the entropy production is calculable with modern nuclear and particle physics theory. We have exemplified with the computation of such entropy increase in a thermal inhomogeneity in the pion gas, but the field is ample and much more work is possible.
We have concentrated on the pion gas during the time interval when pions were the
main contributors to the universe's entropy, even more than photons,
and the species carrying the largest possible inhomogeneity due to their short mean free path.
Of course, further processes involving non-vanishing entropy production, such as flavor or momentum diffusion, are expected to be of equal potential importance. We leave them for future work. Here we have remained within the realm of small, Newtonian thermal perturbations, and the produced $\Delta S_V/S_{\rm back}$ is thus a small fraction.
In future work we plan to examine the damping of acoustic oscillations in this phase by studying sub-Jeans modes in the presence of dissipation coefficients, and try to put quantitative constraints on the maximum size of inhomogeneities that can be dissipated. Looking at figure~\ref{figure:deltat}, we see that the pion gas can easily reduce a thermal perturbation at the 30\% level down to the 3\% level, that is, by an order of magnitude. A more detailed study beyond this first exploration is warranted.
Meanwhile, for inhomogeneities of that intensity, production of entropy is significant, as can be seen in the various tables of subsection~\ref{subsec:multiple}.
To conclude, we have pointed out that entropy production deserves being examined in the hadron-lepton phase between 1 and 175 MeV, and we have studied in detail one example, that of relaxation of thermal inhomogeneities in the pion gas.
\section*{Acknowledgments}
We thank Antonio Maroto for a critical reading of the cosmology aspects of the work.
Supported by the Spanish Excellence Network on Hadronic Physics FIS2014-57026-REDT, and by grants UCM:910309, MINECO:FPA2011-27853-C02-01, MINECO:FPA2014-53375-C2-1-P and CPAN Consolider-Ingenio 2010. DRF was partially supported by a GRUPIN 14-108 research grant from Principado de Asturias.
Let $\mathbb{A}$ be a linear, homogeneous differential operator with constant coefficients on $\mathbb{R}^n$ from $V$ to $W$, i.e.,
\begin{align}\label{eq:A}
\mathbb{A} u=\sum_{|\alpha|=k} A_\alpha\partial^\alpha u,\qquad u\colon\mathbb{R}^{n}\to V,
\end{align}
where $A_\alpha\in\mathscr{L}(V,W)$ are fixed linear mappings between two finite dimensional real vector spaces $V$ and $W$. In this respect, we recall the (Fourier) symbol map
\begin{align*}
\mathbb{A}[\cdot]\colon \mathbb{R}^{n}\to \mathscr{L}(V,W),\;\;\;\mathbb{A}[\xi] v=\sum_{|\alpha|=k} \xi^\alpha A_\alpha v,
\end{align*}
defined for $\xi\in\mathbb{R}^n$, $v\in V$. It is a well--known fact that a Korn--type inequality in full--space, by which we mean that for all $u\in\operatorname{C}_c^\infty(\mathbb{R}^n,V)$ we have
\begin{align*}
\|\operatorname{D}\!^k u\|_{\operatorname{L}^p(\mathbb{R}^{n},V\odot^{k}\mathbb{R}^{n})}\leq c\|\mathbb{A} u\|_{\operatorname{L}^p(\mathbb{R}^{n},W)},
\end{align*}
for $1<p<\infty$, is equivalent to \emph{ellipticity} of $\mathbb{A}$; this is to say that the symbol map $\mathbb{A}[\xi]\in\mathscr{L}(V,W)$ is injective for any $\xi\in\mathbb{R}^n\setminus\{0\}$. It is also well--known by the so--called \textsc{Ornstein}'s \emph{Non--inequality} that no such estimate can hold in the critical case $p=1$ (see \cite{Ornstein,CFM} for the classical statement and \cite[Thm.~1.3]{KirKri} for a general form). However, it is natural to ask whether a weaker coercive estimate holds in this borderline case. This has been established by \textsc{Van Schaftingen} in the seminal paper \cite{VS}. Namely, it is proved in \cite[Thm.~1.3]{VS} that a Sobolev--type inequality of the form
\begin{align}\label{eq:VS}
\|\operatorname{D}\!^{k-1}u\|_{\operatorname{L}^{\frac{n}{n-1}}(\mathbb{R}^{n},V\odot^{k-1}\mathbb{R}^{n})}\leq c\|\mathbb{A} u\|_{\operatorname{L}^1(\mathbb{R}^{n},W)}
\end{align}
for $u\in\operatorname{C}^\infty_c(\mathbb{R}^n,V)$ holds if and only if $\mathbb{A}$ is elliptic and \emph{cancelling} (EC). The latter condition states that the intersection of $\mathbb{A}[\xi](V)$ for all non--zero $\xi$ is trivial. This sharp result generalizes the proof of \eqref{eq:VS} for a large class of operators $\mathbb{A}$ given by \textsc{Bourgain} and \textsc{Brezis} in \cite[Thm.~25]{BB07}, building on their fundamental work on critical case estimates in \cite{BB02,BB04,BBCR,BB07}. We remark that in the case $p=1$, the critical estimate \eqref{eq:VS} cannot be achieved by standard potential estimates. With such means, only weak--type estimates can be obtained (see \cite[Ch.~V.1]{Stein}), and, in turn, one needs to employ the vectorial structure of $\mathbb{A}$.
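To make the two conditions tangible, here is a small numerical sketch (an illustration, not part of the argument) for the symmetric gradient $\mathcal{E}$ in the plane, whose symbol $v\mapsto\xi\odot v$ we write as a $3\times 2$ matrix in an orthonormal basis of the symmetric $2\times2$ matrices: ellipticity is full column rank of $\mathbb{A}[\xi]$ for every real $\xi\neq0$, and cancellation is probed by intersecting the images over a few directions.

```python
import numpy as np

def sym_grad_symbol(xi):
    """Symbol of Eu = sym(Du) on R^2: v -> sym(xi ⊗ v), written as a 3x2
    matrix in the orthonormal basis (e11, e22, sqrt(2) e12) of Sym(2)."""
    x, y = xi
    s = np.sqrt(2.0)
    return np.array([[x, 0.0],
                     [0.0, y],
                     [y / s, x / s]])

def colspace(M, tol=1e-10):
    """Orthonormal basis (as columns) of the column space of M."""
    U, S, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, S > tol]

def intersect(U, V, tol=1e-10):
    """Orthonormal basis of col(U) ∩ col(V) via the null space of [U, -V]."""
    if U.shape[1] == 0 or V.shape[1] == 0:
        return np.zeros((U.shape[0], 0))
    _, S, Vt = np.linalg.svd(np.hstack([U, -V]))
    rank = int(np.sum(S > tol))
    Z = Vt[rank:].T                     # columns span the null space
    if Z.shape[1] == 0:
        return np.zeros((U.shape[0], 0))
    return colspace(U @ Z[:U.shape[1], :])

# ellipticity: A[xi] is injective (rank 2) for every real xi != 0
rng = np.random.default_rng(0)
for _ in range(100):
    xi = rng.normal(size=2)
    xi /= np.linalg.norm(xi)
    assert np.linalg.matrix_rank(sym_grad_symbol(xi)) == 2

# cancellation: the images over a few directions already intersect trivially
I = colspace(sym_grad_symbol(np.array([1.0, 0.0])))
for xi in ([0.0, 1.0], [1.0, 1.0]):
    I = intersect(I, colspace(sym_grad_symbol(np.array(xi))))
assert I.shape[1] == 0                  # E is cancelling in the plane
```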
The aim of this paper is to give precise conditions on $\mathbb{A}$ under which a Sobolev--type embedding holds on bounded domains, for which we consider the unit ball $\operatorname{B}\subset\mathbb{R}^n$, thereby generalizing the well--known Gagliardo--Nirenberg--Sobolev inequality
\begin{align*}
\|u\|_{\operatorname{L}^{\frac{n}{n-1}}(\operatorname{B},V)}\leq c\left(\|\operatorname{D}\! u\|_{\operatorname{L}^1(\operatorname{B},V\times\mathbb{R}^n)}+\|u\|_{\operatorname{L}^{1}(\operatorname{B},V)}\right)
\end{align*}
for $u\in\operatorname{C}^\infty(\bar{\operatorname{B}},V)$ to arbitrary differential operators $\mathbb{A}$. Our result also covers the Korn--Sobolev inequality
\begin{align}\label{eq:ST}
\|u\|_{\operatorname{L}^{\frac{n}{n-1}}(\operatorname{B},\mathbb{R}^n)}\leq c\left(\|\mathcal{E} u\|_{\operatorname{L}^1(\operatorname{B},\mathbb{R}^{n\times n}_{\operatorname{sym}})}+\|u\|_{\operatorname{L}^{1}(\operatorname{B},\mathbb{R}^n)}\right)
\end{align}
proved by \textsc{Strang} and \textsc{Temam} in \cite[Prop.~1.2]{ST}, building on the homogeneous inequality of \textsc{Strauss} \cite{Strauss}. Our local version of \textsc{Van Schaftingen}'s Theorem is particularly relevant to variational problems of minimizing energy functionals of the type, e.g.,
\begin{align}\label{eq:varprob}
\mathfrak{F}\colon u\mapsto\int_{\operatorname{B}}f(\mathbb{A} u)\operatorname{d}\! x,
\end{align}
over Dirichlet classes of mappings $u\colon\operatorname{B}\rightarrow V$, for $f\colon W\rightarrow\mathbb{R}$ of linear growth, i.e., there exist constants $c,C>0$ such that $c|w|\leq f(w)\leq C(1+|w|)$ for $w\in W$. Such problems arise for example in plasticity theory (\cite[Ch.~1-2]{FS}), which is the original motivation for the study in \cite{ST}. In this respect, we mention the connection with the recent paper \cite{BDG}, where existence of generalized minimizers of \eqref{eq:varprob} was established for first order operators $\mathbb{A}$. To be precise, we introduce the spaces $\operatorname{W}^{\mathbb{A},1}(\operatorname{B})$ (resp. $\operatorname{BV}^\mathbb{A}(\operatorname{B})$) as the space of $u\in\operatorname{L}^1(\operatorname{B},V)$ such that the distribution $\mathbb{A} u$ is an integrable function (resp. a Radon measure) on $\operatorname{B}$, which are complete with respect to the obvious norms. Assuming $k=1$, it is proved in \cite[Thm.~1.1]{BDG} that the trace embedding $\operatorname{BV}^\mathbb{A}(\operatorname{B})\hookrightarrow\operatorname{L}^1(\partial\operatorname{B},V)$ holds if and only if $\mathbb{A}$ has \emph{finite dimensional null--space} (FDN). Under this assumption, it is shown that the infimum of $\mathfrak{F}$ over a Dirichlet class in $\operatorname{W}^{\mathbb{A},1}(\operatorname{B})$ is attained in $\operatorname{BV}^\mathbb{A}(\operatorname{B})$ by a minimizer of the lower--semicontinuous envelope $\bar{\mathfrak{F}}$ of $\mathfrak{F}$ (see \cite[Thm.~5.3]{BDG}, cp. \cite{ADM,GMS}).
The main result of this paper complements the study of \eqref{eq:varprob} by showing that FDN is also equivalent with the critical Sobolev--type embedding $\operatorname{W}^{\mathbb{A},1}(\operatorname{B})\hookrightarrow\operatorname{L}^{\frac{n}{n-1}}(\operatorname{B},V)$:
\begin{theorem}\label{thm:main}
Let $\mathbb{A}$ be as in \eqref{eq:A}, $k=1$, $n>1$. The following are equivalent:
\begin{enumerate}
\item\label{it:main_a} $\mathbb{A}$ has FDN.
\item\label{it:main_b} $\operatorname{W}^{\mathbb{A},1}(\operatorname{B})\hookrightarrow\operatorname{L}^{\frac{n}{n-1}}(\operatorname{B},V)$.
\item\label{it:main_c} $\operatorname{W}^{\mathbb{A},1}(\operatorname{B})\hookrightarrow\operatorname{L}^{p}(\operatorname{B},V)$ for some $1<p\leq\frac{n}{n-1}$.
\item\label{it:main_d} $\operatorname{W}^{\mathbb{A},1}(\operatorname{B})\hookrightarrow\hookrightarrow\operatorname{L}^{q}(\operatorname{B},V)$ for all $1\leq q<\frac{n}{n-1}$.
\item\label{it:main_e} $\operatorname{W}^{\mathbb{A},1}(\operatorname{B})\hookrightarrow\hookrightarrow\operatorname{L}^{1}(\operatorname{B},V)$.
\end{enumerate}
\end{theorem}
The same holds true with $\operatorname{W}^{\mathbb{A},1}$ being replaced by $\operatorname{BV}^\mathbb{A}$ and for bounded domains that are star--shaped with respect to a ball. The compact embedding \ref{it:main_d} generalizes the well--known result for $\operatorname{BD}$ (i.e., for $\mathbb{A}=\mathcal{E}$; see \cite{FS,ST,Suquet}). In Theorem \ref{thm:main_k} below, we remove the restriction $k=1$, which we temporarily keep for simplicity of exposition. The novelty of Theorems \ref{thm:main} and \ref{thm:main_k} comes from the fact that, up to our knowledge, except for a few examples of operators $\mathbb{A}$, in the literature there are no similar $\operatorname{L}^1$--estimates on bounded domains (without additional assumptions of zero or periodic boundary values). We do not include the case $n=1$, which is not covered by our methods, but turns out to be a simple exercise.
We pause to compare the embedding \ref{it:main_b} in Theorem \ref{thm:main} with \textsc{Van Schaftingen}'s homogeneous embedding $\dot{\operatorname{W}}^{\mathbb{A},1}(\mathbb{R}^n)\hookrightarrow\operatorname{L}^{\frac{n}{n-1}}(\mathbb{R}^n,V)$. For elliptic $\mathbb{A}$, this embedding is equivalent to
\begin{align}\label{eq:zerotraceemb}
\operatorname{W}^{\mathbb{A},1}_0(\operatorname{B})\hookrightarrow\operatorname{L}^\frac{n}{n-1}(\operatorname{B},V)
\end{align}
(see Lemma \ref{lem:embimpliesEC} for a scaling argument), and it can easily be shown that, in the absence of cancellation, we can still prove by means of a Green's formula and boundedness of Riesz potentials that $\operatorname{W}^{\mathbb{A},1}_0(\operatorname{B})\hookrightarrow\operatorname{L}^p(\operatorname{B},V)$ for any $1\leq p < n/(n-1)$ (see Lemma \ref{lem:zerotraceemb}). Here $\operatorname{W}^{\mathbb{A},p}_0(\operatorname{B})$ is defined as the closure of $\operatorname{C}_c^\infty(\operatorname{B},V)$ in the (semi--)norm $u\mapsto\|\mathbb{A} u\|_{\operatorname{L}^p}$. The situation is dramatically different as far as $\operatorname{L}^p$--embeddings of $\operatorname{W}^{\mathbb{A},1}(\operatorname{B})$ are concerned. By Theorem \ref{thm:main}, if the critical embedding
\begin{align}\label{eq:ourembedding}
\operatorname{W}^{\mathbb{A},1}(\operatorname{B})\hookrightarrow\operatorname{L}^\frac{n}{n-1}(\operatorname{B},V)
\end{align}
fails, then no uniform higher integrability estimate is possible. The difference can be even sharper: in Section \ref{sec:ECvsFDN}, we give an example of a first order differential operator $\mathbb{A}$ of the form \eqref{eq:A} that is elliptic but does not have FDN, such that there is a map in $\operatorname{W}^{\mathbb{A},1}(\operatorname{B})$ with no higher integrability. Moreover, this operator can be chosen such that it satisfies the cancelling condition and even the more particular condition of \cite[Thm.~25]{BB07}, so the homogeneous embedding \eqref{eq:zerotraceemb} can hold even if the inhomogeneous \eqref{eq:ourembedding} fails. We remark that the main difference between $\operatorname{W}^{\mathbb{A},1}(\operatorname{B})$ and $\operatorname{W}^{\mathbb{A},1}_0(\operatorname{B})$ lies in the traces, which are integrable if and only if $\mathbb{A}$ has FDN \cite{BDG}. Another way to look at this discrepancy is to note that, for elliptic $\mathbb{A}$, the only solution of $\mathbb{A} u=0$
in $\operatorname{W}^{\mathbb{A},1}_0(\operatorname{B})$ is the trivial one, which can be seen, e.g., from \eqref{eq:representation}. In the case of \eqref{eq:ourembedding}, if $\mathbb{A}$ is elliptic but does not have FDN, the space $\{u\in\operatorname{W}^{\mathbb{A},1}(\operatorname{B})\colon \mathbb{A} u=0\}$ contains maps that are not integrable at the boundary (see \cite[Sec.~4.3]{BDG}) and maps that are not $\operatorname{L}^{n/(n-1)}$--integrable (see Section \ref{sec:ECvsFDN}). Both examples use the lack of boundary regularity of the solutions of $\mathbb{A} u=0$ in the absence of FDN, a phenomenon which is not relevant for \eqref{eq:zerotraceemb}, where zero boundary data is implicit. If, in turn, $\mathbb{A}$ has FDN, all solutions of $\mathbb{A} u=0$ are polynomials. From this point of view, we can heuristically say that FDN is a canonical condition for Dirichlet problems/inhomogeneous estimates on bounded domains, whereas EC is a canonical condition for problems/homogeneous estimates in full--space.
In this respect, it is of particular interest to compare the conditions EC and FDN. We will prove in Lemma \ref{lem:FDNimpliesEC} that FDN implies EC. In Section \ref{sec:ECvsFDN}, we complete the comparison of these conditions, showing that the implication is strict in general. We write $N:=\dim V$ and summarize our findings in the table below:
\begin{center}
\begin{tabular}{|c|c|}
\hline
& \begin{tabular}{c|c}
$N=1\qquad\quad$&$\qquad\quad N\geq2$
\end{tabular}\\
\hline
$n=2$&
\begin{tabular}{c|c}
\begin{tabular}{l}
$k=1$: \textcolor{NavyBlue}{E$\Rightarrow$FDN}\\
\hline
$k=2$: \textcolor{ForestGreen}{EC$\Rightarrow$FDN}\\
\hline
$k\geq3$: \textcolor{BrickRed}{EC$\not\Rightarrow$FDN}
\end{tabular}&
\begin{tabular}{l}
$k=1$: \textcolor{ForestGreen}{EC$\Rightarrow$FDN}\\
\hline
$k\geq2$: \textcolor{BrickRed}{EC$\not\Rightarrow$FDN}\\
\\
\end{tabular}\\
\end{tabular}\\
\hline
$n\geq3$&
\begin{tabular}{c|c}
\begin{tabular}{l}
$k=1$: \textcolor{NavyBlue}{E$\Rightarrow$FDN}\\
\hline
$k\geq2$: \textcolor{BrickRed}{EC$\not\Rightarrow$FDN}
\end{tabular}&
\begin{tabular}{l}
\textcolor{BrickRed}{EC$\not\Rightarrow$FDN}$\qquad\quad\hspace{0.4mm}$\\
\end{tabular}
\end{tabular}\\
\hline
\end{tabular}
\end{center}
The bottom line is that for first order operators acting on scalar fields, ellipticity is equivalent to both conditions, whereas for large values of $n,N,k$, the implication FDN$\Rightarrow$EC is strict. Interestingly, two cases remain, which match the canonical elliptic, non--FDN operators $\bar{\partial}$ and $\partial_1^2+\partial_2^2$ (see Section \ref{sec:examples}). In these cases, we show that EC and FDN are equivalent. Our comparison is completely elementary and we hope that it will have some impact in understanding the nature of and the differences between $\operatorname{W}^{\mathbb{A},1}(\operatorname{B})$ and $\operatorname{W}^{\mathbb{A},1}(\mathbb{R}^n)$ or its homogeneous version.
We move on to briefly describing the proof of Theorem \ref{thm:main}. As mentioned above, we show that the FDN assumption (strictly) implies the cancelling condition. This is coupled with the construction of a suitable extension operator to full--space which enables us to use \cite[Thm.~1.3]{VS}. To prove these, we rely on the known fact that a homogeneous operator $\mathbb{A}$ has FDN if and only if it is \emph{$\mathbb{C}$--elliptic}, i.e., the map $\mathbb{A}[\xi]\in\mathscr{L}(V+\operatorname{i} V,W+\operatorname{i} W)$ is injective for any $\xi\in\mathbb{C}^n\setminus\{0\}$. This was noticed in \cite{BDG} for first order operators. We recently became aware of the work \cite{Smith}, where the case of operators of arbitrary order is proved. We record these auxiliary facts below:
\begin{theorem}\label{thm:tools}
Let $\mathbb{A}$ be as in \eqref{eq:A}, $n>1$. Then $\mathbb{A}$ has FDN if and only if $\mathbb{A}$ is $\mathbb{C}$--elliptic. Moreover, if $\mathbb{A}$ has FDN, then $\mathbb{A}$ is cancelling and there exists a bounded, linear extension operator $E_B\colon\operatorname{W}^{\mathbb{A},1}(\operatorname{B})\rightarrow\operatorname{W}^{\mathbb{A},1}(\mathbb{R}^n)$.
\end{theorem}
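The role of complex frequencies can be illustrated numerically (again, an illustration rather than part of the proof): the Cauchy--Riemann operator $\bar{\partial}$, written as a real $2\times2$ system, is elliptic for real $\xi$, but its symbol degenerates at the complex frequency $\xi=(1,\operatorname{i})$, consistent with its infinite dimensional null--space of holomorphic functions; the symmetric gradient, by contrast, retains full rank there.

```python
import numpy as np

def dbar_symbol(xi):
    """Real 2x2 representation of the Cauchy-Riemann symbol, i.e. of
    multiplication by xi_1 + i xi_2; numpy accepts complex xi as well."""
    x, y = xi
    return np.array([[x, -y],
                     [y,  x]])

def sym_grad_symbol(xi):
    """Symbol of the symmetric gradient on R^2 as a 3x2 matrix in the
    basis (e11, e22, sqrt(2) e12); again complex xi is allowed."""
    x, y = xi
    s = np.sqrt(2.0)
    return np.array([[x, 0.0],
                     [0.0, y],
                     [y / s, x / s]])

# both symbols are injective for real frequencies (real ellipticity)
for xi in ([1.0, 0.0], [0.0, 1.0], [2.0, -3.0]):
    assert np.linalg.matrix_rank(dbar_symbol(xi)) == 2
    assert np.linalg.matrix_rank(sym_grad_symbol(xi)) == 2

# at the complex frequency xi = (1, i) the dbar symbol drops rank, so
# dbar is not C-elliptic (hence no FDN), while E remains C-elliptic
xi_c = np.array([1.0, 1.0j])
assert np.linalg.matrix_rank(dbar_symbol(xi_c)) == 1
assert np.linalg.matrix_rank(sym_grad_symbol(xi_c)) == 2
```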
We remark that the same holds true for any $1\leq p\leq\infty$. To construct the extension operator we use the technique introduced by \textsc{Jones} in \cite{Jones}. Although it was introduced to deal with rough domains, our reason for resorting to this rather involved method is that we could not otherwise circumvent the lack of boundedness of singular integrals on $\operatorname{L}^1$ (see Lemma \ref{lem:extp>1} for a simple proof if $1<p<\infty$). Of the modifications required to adapt \textsc{Jones}' technique, we single out as a novelty the Poincar\'e--type inequality for FDN operators (Proposition \ref{prop:poinc}), namely that for $1\leq l<k$, $1\leq p\leq\infty$ we have
\begin{align}\label{eq:poinc}
\inf_{P\in\ker\mathbb{A}}\|\operatorname{D}\!^l (u-P)\|_{\operatorname{L}^p(\operatorname{B},V\odot^l\mathbb{R}^n)}\leq c\operatorname{diam}(\operatorname{B})^{k-l}\|\mathbb{A} u\|_{\operatorname{L}^p(\operatorname{B},W)}
\end{align}
for $u\in\operatorname{C}^\infty(\bar{\operatorname{B}},V)$. Interestingly, $\mathbb{A}$ having FDN is not necessary for the estimate \eqref{eq:poinc} to hold, as can be seen from \cite{Fuchs1}. We believe that ellipticity alone is sufficient for the estimate to hold and intend to pursue this in future work.
Using the tools from Theorem \ref{thm:tools}, we can refine our result on fractional scales, thereby obtaining the local versions of the embeddings in \cite[Thm.~8.1, Thm.~8.4]{VS}:
\begin{theorem}\label{thm:main_k}
Let $\mathbb{A}$ be as in \eqref{eq:A}, $s\in[k-1,k)$, $q\in(1,\infty)$. Then $\mathbb{A}$ has FDN if and only if there exists $c>0$ such that
\begin{align*}
\|u\|_{\operatorname{B}^{s,\frac{n}{n-k+s}}_q (\operatorname{B},V)}\leq c\left(\|\mathbb{A} u\|_{\operatorname{L}^1(\operatorname{B},W)}+\|u\|_{\operatorname{L}^1(\operatorname{B},V)}\right)
\end{align*}
for all $u\in\operatorname{C}^\infty(\bar{\operatorname{B}},V)$.
\end{theorem}
We obtain the embeddings $\operatorname{W}^{\mathbb{A},1}(\operatorname{B})\hookrightarrow\operatorname{W}^{s,n/(n-k+s)}(\operatorname{B},V)$ if we choose $q=n/(n-k+s)$ (cp. \cite[Thm.~8.1]{VS}) and $\operatorname{W}^{\mathbb{A},1}(\operatorname{B})\hookrightarrow\operatorname{W}^{k-1,n/(n-1)}(\operatorname{B},V)$ if we further choose $s=k-1$ (cp. \cite[Thm.~1.3]{VS}).
In view of Theorem \ref{thm:main_k}, it is natural to ask what the generalization of \cite[Thm.~4.17-18]{BDG} to operators of arbitrary order is. If $\mathbb{A}$ has FDN, we can use Theorem \ref{thm:main_k} to give a sub--optimal trace embedding
\begin{align}\label{eq:weaktrace}
\operatorname{W}^{\mathbb{A},1}(\operatorname{B})\hookrightarrow\operatorname{B}_q^{s-\frac{1}{p},p}(\partial\operatorname{B},V)\quad\text{ for }s\uparrow k\text{, so }p=\frac{n}{n-k+s}\downarrow1\text{, and }q\downarrow1,
\end{align}
using standard trace theory for Besov spaces. The optimal embedding for $\mathbb{A}=\nabla^k$ was only recently proved by \textsc{Mironescu} and \textsc{Russ} in \cite{MR}, building on the $k=2$ case proved by \textsc{Uspenski\u{\i}} in \cite{Uspenskii}. They proved that the trace operator is continuous onto $\operatorname{B}^{k-1,1}_1$, which is in general strictly smaller than the quick guess $\operatorname{W}^{k-1,1}$ (see \cite[Rk.~A.1]{BP}). Coupling this with the trace theorem in \cite{BDG}, it is natural to make the following:
\begin{conjecture}
An operator $\mathbb{A}$ as in \eqref{eq:A} has FDN if and only if there exists a continuous, linear, surjective trace operator $\mathrm{Tr}\colon\operatorname{W}^{\mathbb{A},1}(\operatorname{B})\rightarrow\operatorname{B}^{k-1,1}_1(\partial\operatorname{B},V)$.
\end{conjecture}
A few remarks are in order. Necessity of FDN can be proved by a modification of the arguments in \cite[Sec.~4.3]{BDG}. Surjectivity is obvious, using \cite[Thm.~1.3-4]{MR} and $\operatorname{W}^{k,1}(\operatorname{B},V)\hookrightarrow\operatorname{W}^{\mathbb{A},1}(\operatorname{B})$. The difficulty stems from proving boundedness (hence, well-definedness) of the trace operator, which cannot be reduced to the situation in \cite{MR} by \textsc{Ornstein}'s Non--inequality, or to \eqref{eq:weaktrace} by strict inclusion of Besov spaces. We do not see a way to merge the techniques in \cite{BDG,MR} and intend to tackle the problem in the future.
This paper is organized as follows: In Section \ref{sec:prel} we collect preliminaries on function spaces, multi--linear algebra, harmonic analysis and give examples of operators. In Section \ref{sec:ECvsFDN} we give the proof of the first two statements in Theorem \ref{thm:tools} and complete the comparison between EC and FDN. In Section \ref{sec:proof} we construct the Jones--type extension and prove Theorems \ref{thm:main} and \ref{thm:main_k}.
\subsection*{Acknowledgement} The authors wish to thank Jan Kristensen for reading a preliminary version of the paper.
\section{Preliminaries}\label{sec:prel}
Throughout this paper we assume that $n>1$.
\subsection{Function spaces}\label{sec:prelfspaces}
We define, reminiscent of \cite{Mazya}, for $1\leq p\leq\infty$ and open $\Omega\subset\mathbb{R}^n$
\begin{align*}
\operatorname{W}^{\mathbb{A},p}(\Omega)&:=\{u\in\operatorname{L}^p(\Omega,V)\colon \mathbb{A} u\in\operatorname{L}^p(\Omega,W)\},\\
\operatorname{BV}^{\mathbb{A}}(\Omega)&:=\{u\in\operatorname{L}^1(\Omega,V)\colon \mathbb{A} u\in\mathcal{M}(\Omega,W)\},\\
\operatorname{V}^{\mathbb{A},p}(\Omega)&:=\{u\in\operatorname{W}^{\mathbb{A},p}(\Omega)\colon\nabla^l u\in\operatorname{L}^p(\Omega,V\odot^l\mathbb{R}^n),l=1\ldots k-1\},
\end{align*}
and the homogeneous spaces $\dot{\operatorname{W}}^{\mathbb{A},p}$ as the closure of $\operatorname{C}_c^\infty(\mathbb{R}^n,V)$ in the semi--norm $|u|_{\mathbb{A},p}:=\|\mathbb{A} u\|_{\operatorname{L}^p}$. In the case $\mathbb{A}=\nabla^k$, we write $\operatorname{W}^{k,p}(\Omega,V)$, $\operatorname{V}^{k,p}(\Omega,V)$. When it is clear from the context what the target space is, we abbreviate the $\operatorname{L}^p$--norm of maps defined on $\Omega$ by $\|\cdot\|_{p,\Omega}$. We denote the space of $V$--valued polynomials of degree at most $d$ in $n$ variables by $\mathbb{R}_d[x]^V$. We recall the weighted Bergman spaces $A^p_\alpha(\mathbb{D})$ of holomorphic maps defined on the open unit disc $\mathbb{D}\subset\mathbb{C}$, that are $p$--integrable with weight $w_\alpha(z)=(1-|z|^2)^\alpha$. It is well--known that these are Banach spaces under the $\operatorname{L}^p_{w_\alpha}$--norm for $1\leq p<\infty$ and $-1<\alpha<\infty$. We also recall, for $s>0$, $1\leq p,q<\infty$, the Besov space
\begin{align*}
\operatorname{B}^{s,p}_q(\Omega):=\{u\in\operatorname{L}^p(\Omega)\colon |u|_{\operatorname{B}^{s,p}_q(\Omega)}<\infty\},
\end{align*}
with an obvious choice of norm. Here, the Besov semi--norm is defined (see, e.g., \cite[Sec.~2]{devoresharpley}) for integer $r>s$ by
\begin{align*}
|u|_{\operatorname{B}^{s,p}_q(\Omega)}=\|u\|_{\dot{\operatorname{B}}^{s,p}_q(\Omega)}:=\left(\int_0^\infty \dfrac{\sup_{|h|<t}\|\Delta^r_h u\|^q_{\operatorname{L}^p(\Omega)}}{t^{1+sq}}\operatorname{d}\! t\right)^\frac{1}{q},
\end{align*}
where the $r$-th finite difference $\Delta^r_h u$ is defined to be zero if undefined, i.e., if at least one of $x+jh$, $j=1\ldots r$, falls outside $\Omega$. We also define the homogeneous space $\dot{\operatorname{B}}^{s,p}_q(\mathbb{R}^n)$ as the closure of $\operatorname{C}^\infty_c(\mathbb{R}^n)$ in the Besov semi--norm.
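For concreteness, the finite differences are given inductively by $\Delta^1_h u(x):=u(x+h)-u(x)$ and $\Delta^r_h:=\Delta^1_h\circ\Delta^{r-1}_h$, so that
\begin{align*}
\Delta^r_h u(x)=\sum_{j=0}^r(-1)^{r-j}\binom{r}{j}u(x+jh).
\end{align*}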
We also collect the assumptions on our operators. As in Section \ref{sec:intro}, we say that $\mathbb{A}$ is ($\mathbb{C}$--)elliptic if and only if the linear map $\mathbb{A}[\xi]\colon V(+\operatorname{i} V)\rightarrow W(+\operatorname{i} W)$ is injective for all non--zero $\xi\in\mathbb{R}^n(+\operatorname{i}\mathbb{R}^n)$. Trivially, $\mathbb{C}$--elliptic operators are elliptic. We say that $\mathbb{A}$ has FDN (finite dimensional null--space) if and only if the vector space $\{u\in\mathscr{D}'(\mathbb{R}^n,V)\colon\mathbb{A} u=0\}$ is finite dimensional. Finally, $\mathbb{A}$ is cancelling if and only if $\bigcap_{\xi\in S^{n-1}}\mathbb{A}[\xi](V)=\{0\}$.
\subsection{Multi--linear Algebra}
Let $U,V$ be finite dimensional vector spaces and $l\in\mathbb{N}$. We write $\mathscr{L}(U,V)$ for the space of linear maps $U\rightarrow V$ and $V\odot^l U$ for the space of $V$--valued symmetric $l$--linear maps on $U$, a subspace of $V\otimes^l U$, the $V$--valued $l$--linear maps on $U$. This is naturally the space of the $l$--th gradients, i.e., $\operatorname{D}\!^l f(x)\in V\odot^l U$ for $f\in\operatorname{C}^l(U,V)$, $x\in U$. For more detail, see \cite[Ch.~1]{Federer}. We also write $a\otimes b=(a_i b_j)$ (the usual tensor product) and $\otimes^l a:=a\otimes a\otimes\ldots \otimes a$, where $a$ appears $l$ times on the right hand side. We single out the standard fact that $\widehat{\nabla^l f}(\xi)=\hat{f}(\xi)\otimes^l \xi\in V\odot^l U$ for $f\in\mathscr{S}(U,V)$, $\xi\in U$. We recall the pairing introduced in \cite{BDG}, $v\otimes_\mathbb{A}\xi:=\mathbb{A}[\xi]v$, which is reminiscent of the tensor product notation, i.e., if $\mathbb{A}=\operatorname{D}\!$, we have $\otimes_\mathbb{A}=\otimes$. We have the following calculus rules if $k=1$:
\begin{align*}
\mathbb{A}(\rho u)&=\rho\mathbb{A} u+u\otimes_\mathbb{A} \nabla\rho\qquad\text{for }u\in\operatorname{C}^1(\mathbb{R}^n,V), \rho\in\operatorname{C}^1(\mathbb{R}^n),\\
\mathbb{A}(\phi(w))&=\phi^\prime(w)\otimes_\mathbb{A}\nabla w\qquad\text{for }\phi\in\operatorname{C}^1(\mathbb{R},V),w\in\operatorname{C}^1(\mathbb{R}^n).
\end{align*}
The above can easily be checked by direct computation and will be used without mention.
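For instance, for the symmetrized gradient $\mathbb{A}=\mathcal{E}$ of Section \ref{sec:examples}, we have $\mathbb{A}[\xi]v=(v\otimes\xi+\xi\otimes v)/2$, so that the product rule above specializes to the familiar identity
\begin{align*}
\mathcal{E}(\rho u)=\rho\,\mathcal{E}u+\dfrac{u\otimes\nabla\rho+\nabla\rho\otimes u}{2}=\rho\,\mathcal{E}u+u\otimes_\mathcal{E}\nabla\rho.
\end{align*}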
\subsection{Harmonic Analysis}\label{sec:harmonic}
Let $\mathbb{A}$ as in \eqref{eq:A} be elliptic and $u\in\mathscr{S}(\mathbb{R}^n,V)$. We Fourier transform $\mathbb{A} u$ and apply the one--sided inverse $m_\mathbb{A}(\xi):=(\mathbb{A}^*[\xi]\mathbb{A}[\xi])^{-1}\mathbb{A}^*[\xi]\in\mathscr{L}(W,V)$ of $\mathbb{A}[\xi]$ to get that $\hat{u}(\xi)=m_\mathbb{A}(\xi)\widehat{\mathbb{A} u}(\xi)$ for $\xi\in\mathbb{R}^n\setminus\{0\}$ (we omitted the complex multiplicative constant arising from Fourier transforming, as it can be absorbed in the definition of $m_\mathbb{A}$). We define the $(k-n)$--homogeneous map $\textbf{G}_\mathbb{A}$ as the inverse Fourier transform of the $(-k)$--homogeneous map $m_\mathbb{A}$. Thus we have the Green's function representation $u=\textbf{G}_\mathbb{A}*\mathbb{A} u$. Moreover, we can extract the following:
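To illustrate the construction, take $\mathbb{A}=\operatorname{D}\!$, so that $\mathbb{A}[\xi]v=v\otimes\xi$ and $\mathbb{A}^*[\xi]w=w\xi$. Then $\mathbb{A}^*[\xi]\mathbb{A}[\xi]v=|\xi|^2v$, whence
\begin{align*}
m_{\operatorname{D}}(\xi)w=\dfrac{w\xi}{|\xi|^2},
\end{align*}
a $(-1)$--homogeneous map, whose inverse Fourier transform is, up to normalization, the $(1-n)$--homogeneous kernel $\mathbf{G}_{\operatorname{D}}(x)=c_n\,x/|x|^n$; the representation $u=\mathbf{G}_{\operatorname{D}}*\operatorname{D}\! u$ is then the classical one obtained from the Newtonian potential.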
\begin{lemma}
Let $\mathbb{A}$ as in \eqref{eq:A} be elliptic. Then there exists a $(1-n)$--homogeneous map $\mathbf{K}_\mathbb{A}\in\operatorname{C}^\infty(\mathbb{R}^n\setminus\{0\},\mathscr{L}(W,V\otimes^k\mathbb{R}^n))$ such that
\begin{align}\label{eq:representation}
\operatorname{D}\!^{k-1}u(x)=\int_{\mathbb{R}^n}\mathbf{K}_\mathbb{A}(x-y)\mathbb{A} u(y)\operatorname{d}\! y
\end{align}
for all $u\in\operatorname{C}^\infty_c(\mathbb{R}^n,V)$.
\end{lemma}
We also record standard facts regarding Riesz potentials and singular integrals (see \cite[Ch.~II.4, Ch.~V.1]{Stein} and \cite[Lem.~7.2]{GT}), which we define by
\begin{align*}
I_\alpha f:=|\cdot|^{\alpha-n}*f
\end{align*} for $\alpha\in[0,n)$ and measurable $f$. If $\alpha=0$, the convolution is understood in the sense of a principal value integral.
\begin{theorem}\label{thm:anal_harm}
We have that
\begin{enumerate}
\item $I_0$ is bounded on $\operatorname{L}^p(\mathbb{R}^n)$ for $1<p<\infty$,
\item $I_\alpha$ is bounded $\operatorname{L}^p(\mathbb{R}^n)\rightarrow\operatorname{L}^q(\mathbb{R}^n)$ for $0<\alpha<n$, $1<p<n/\alpha$, $q= np/(n-\alpha p)$,
\item\label{itm:riesz_domains} $I_\alpha$ is bounded $\operatorname{L}^p(\Omega)\rightarrow\operatorname{L}^q(\Omega)$ for $0<\alpha<n$, $1\leq p\leq q<np/(n-\alpha p)$ with
\begin{align*}
\|I_\alpha(u)\|_{\operatorname{L}^q(\Omega)}\leq c(\operatorname{diam}\Omega)^{\alpha-n(1/p-1/q)}\|u\|_{\operatorname{L}^p(\Omega)}
\end{align*}
for all $u\in\operatorname{L}^p(\Omega)$.
\end{enumerate}
\end{theorem}
\subsection{Examples}\label{sec:examples}
We give examples of operators arising in conductivity, elasticity, plasticity and fluid mechanics (\cite{FM,FS,Milton}). Let $\mathbb{A}$ be as in \eqref{eq:A}. The facts that we use without mention are the main Theorems \ref{thm:main}, \ref{thm:tools}, and \ref{thm:main_k}.
\begin{enumerate}
\item If $\mathbb{A}=\nabla^k$, we have that $\ker\mathbb{A}=\mathbb{R}_{k-1}[x]^V$, so $\mathbb{A}$ has FDN, hence is EC. This, of course, corresponds to the case of classical Sobolev spaces, but we highlight it here to stress that our generalization brings a new perspective on their study.
\item If $\mathbb{A} u=\mathcal{E}u:=(\nabla u+(\nabla u)^\mathsf{T})/2$ is the symmetrized gradient, it is easy to see that $\ker\mathbb{A}$ is the space of rigid motions, i.e., affine maps of anti--symmetric gradient, so $\mathbb{A}$ has FDN, hence is EC. In this case, we recover the inequality in \eqref{eq:ST}.
\item\label{it:delbar} Let $\mathbb{A} u=\mathcal{E}^D u:=\mathcal{E}u-(\operatorname{div} u/n) \textbf{I}$, where $n\geq2$ and $\textbf{I}$ is the identity $n\times n$ matrix. If $n\geq3$, we have from \cite{Reshet} that $\ker\mathbb{A}$ is the space of conformal Killing vectors, so $\mathbb{A}$ has FDN, hence is EC. If $n=2$, we show in Counterexample \ref{ex:EC>FDN} that $\mathbb{A}$ is elliptic. However, under the canonical identification $\mathbb{R}^2\cong\mathbb{C}$, we can also identify $\mathcal{E}^D$ with the anti--holomorphic derivative $\bar{\partial}$, so that we can further identify $\ker\mathbb{A}$ with the space of holomorphic functions, so $\mathbb{A}$ does not have FDN. Neither is $\mathbb{A}$ cancelling: by ellipticity, we have that $\mathcal{E}^D[\xi](\mathbb{R}^2)=\mathbb{R}^2$. No critical embedding \eqref{eq:zerotraceemb}, \eqref{eq:ourembedding} can hold in this case.
\item Recall from \cite[A.2~(2.2)]{FS} that $\operatorname{W}^{\operatorname{div},1}\cap\operatorname{W}^{\mathcal{E}^D,1}(\operatorname{B})\hookrightarrow\operatorname{L}^{n/(n-1)}(\operatorname{B},\mathbb{R}^n)$. By \ref{it:delbar}, if $n\geq3$ we can simplify and extend the embedding, whereas if $n=2$ the intersection is necessary.
\item If $\mathbb{A}=\Delta$, which is clearly elliptic, we have that $\ker\mathbb{A}$ is the space of all harmonic functions, so $\mathbb{A}$ does not have FDN; moreover, since $\mathbb{A}[\xi](V)=\left(\sum_{j=1}^n\xi_j^2\right)\mathbb{R}^N=\mathbb{R}^N$ for $\xi\in\mathbb{R}^n\setminus\{0\}$, $\mathbb{A}$ is not cancelling either.
\item One can use Lemma \ref{lem:FDNimpliesEC} to prove non--rigidity. If $\mathbb{A}$ is elliptic, one can consider minimizers of the $\mathbb{A}$--Dirichlet energy $u\mapsto\int_{\operatorname{B}}|\mathbb{A} u|^2\operatorname{d}\! x$, which has Euler--Lagrange system $\mathbb{A}^*\mathbb{A} u=0$. Then $\Delta_\mathbb{A}:=\mathbb{A}^*\mathbb{A}$ is elliptic, as $\langle(\mathbb{A}^*\mathbb{A})[\xi] v,v\rangle=|\mathbb{A}[\xi]v|^2\gtrsim|\xi|^{2k}|v|^2$, where the last inequality follows from $|\mathbb{A}[\xi]v|>0$ on $\{|\xi|=1,|v|=1\}$ and homogeneity. Therefore $(\mathbb{A}^*\mathbb{A})[\xi](V)=V$ for all $\xi\neq0$, so $\Delta_\mathbb{A}$ is not cancelling; by the contrapositive of Lemma \ref{lem:FDNimpliesEC}, $\Delta_\mathbb{A}$ does not have FDN, i.e., the Euler--Lagrange system above has infinite dimensional solution space.
\end{enumerate}
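To illustrate how failure of $\mathbb{C}$--ellipticity is detected on the Fourier side, consider $\mathbb{A}=\Delta$ once more: for the complex frequency $\xi=e_1+\operatorname{i} e_2$ we have
\begin{align*}
\Delta[e_1+\operatorname{i} e_2]=\left(\sum_{j=1}^n\xi_j^2\right)\operatorname{Id}_V=\left(1+\operatorname{i}^2\right)\operatorname{Id}_V=0,
\end{align*}
so every $v\in V$ lies in the kernel of $\Delta[\xi]$, in accordance with the infinite family of harmonic maps $x\mapsto\Re f(x_1+\operatorname{i} x_2)v$ produced in the proof of Proposition \ref{prop:FDNiffTypeC}.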
\section{EC Versus FDN}\label{sec:ECvsFDN}
We begin by proving the first two statements in Theorem \ref{thm:tools}. Throughout, $n>1$.
\begin{proposition}\label{prop:FDNiffTypeC}
Let $\mathbb{A}$ be as in \eqref{eq:A}. Then $\mathbb{A}$ has FDN if and only if $\mathbb{A}$ is $\mathbb{C}$--elliptic.
\end{proposition}
\begin{proof}
From Theorem \ref{thm:Ka}, we have that if $\mathbb{A}$ is $\mathbb{C}$--elliptic, then $\ker\mathbb{A}$ consists of polynomials of fixed maximal degree. Suppose now that $\mathbb{A}$ is not $\mathbb{C}$--elliptic, so that there exist non--zero $\xi\in\mathbb{C}^n$, $v\in V+\operatorname{i} V$ such that $\mathbb{A}[\xi]v=0$. We define $u_f(x)=f(x\cdot\xi)v$, for holomorphic $f\colon\mathbb{C}\rightarrow\mathbb{C}$. It can be shown by direct real differentiation of real and imaginary parts and use of the Cauchy--Riemann equations for $f$ that $\operatorname{D}\! u_f(x)=(\partial_1 f)(x\cdot\xi)v\otimes\xi$. Since $\partial_1 f$ is itself holomorphic, inductively we get that $\operatorname{D}\!^l u_f(x)=(\partial^l_1f)(x\cdot\xi)v\otimes^l\xi$. We make the simple observation that there exists a linear map $A\in\mathscr{L}(V\otimes^k\mathbb{R}^n,W)$ such that $\mathbb{A} u=A(\operatorname{D}\!^k u)$, so that by standard properties of the Fourier transform we get $\mathbb{A}[\eta]w=A(w\otimes^k\eta)$ for $\eta\in\mathbb{R}^n$, $w\in V$. It is then easy to see that $\mathbb{A} u_f(x)=(\partial_1^k f)(x\cdot\xi)A(v\otimes^k\xi)=0$. In particular, $\Re u_f,\Im u_f\in\ker\mathbb{A}$, so $\mathbb{A}$ has infinite dimensional null--space.
\end{proof}
In light of this result, we will henceforth use FDN and $\mathbb{C}$--ellipticity interchangeably. We next proceed to an instrumental ingredient for proving sufficiency of FDN for the embedding theorem.
\begin{lemma}\label{lem:FDNimpliesEC}
Let $\mathbb{A}$ be as in \eqref{eq:A}. If $\mathbb{A}$ has FDN, then $\mathbb{A}$ is cancelling.
\end{lemma}
\begin{proof}
We use Lemma \ref{lem:canc}. Let $u\in\operatorname{C}^\infty(\mathbb{R}^n,V)$ be such that $K:=\operatorname{spt}\mathbb{A} u$ is compact. Consider an open ball $\operatorname{B}$ containing $K$. Cover the complement of $\operatorname{B}$ with an increasing chain of pairwise overlapping open balls $B_j$ such that $\operatorname{B}^c\subset \bigcup_j B_j\subset K^c$. In particular, we have $\mathbb{A} u=0$ in each $B_j$, so by Theorem \ref{thm:Ka}, $u$ must be a polynomial of degree at most $d(\mathbb{A})$ in each $B_j$. Since the pairs of balls overlap on a set of positive measure, we get that $u$ equals a polynomial $P$ in $\operatorname{B}^c$ such that $\mathbb{A} P=0$ in $\mathbb{R}^n$. To conclude, we elaborate on the notation introduced in the proof of Proposition \ref{prop:FDNiffTypeC}. Put $m:=\dim W$, so that we can write $(A\mathscr{V})_{l}=A^l\cdot\mathscr{V}$ for fixed $A^l\in V\otimes^k\mathbb{R}^n$, $l=1\ldots m$, and all $\mathscr{V}\in V\otimes^k\mathbb{R}^n$. For $l=1\ldots m$, we integrate by parts to get
\begin{align*}
\int_{\operatorname{B}}(\mathbb{A} u)_l\operatorname{d}\! x&=\int_{\operatorname{B}}A^l\cdot \operatorname{D}\!^k u\operatorname{d}\! x=
\int_{\partial\operatorname{B}}A^l\cdot(\operatorname{D}\!^{k-1}u\otimes\nu) \operatorname{d}\!\mathcal{H}^{n-1}\\
&=\int_{\partial\operatorname{B}}A^l\cdot(\operatorname{D}\!^{k-1}P\otimes \nu)\operatorname{d}\!\mathcal{H}^{n-1}=\int_{\operatorname{B}}A^l\cdot\operatorname{D}\!^k P\operatorname{d}\! x=\int_{\operatorname{B}}(\mathbb{A} P)_l\operatorname{d}\! x=0,
\end{align*}
where $\nu$ denotes the unit normal to $\partial\operatorname{B}$. The proof is complete.
\end{proof}
The converse of Lemma \ref{lem:FDNimpliesEC}, however, is not true in general. In what follows, we complete the comparison of the FDN condition and \textsc{Van Schaftingen}'s EC condition. We write $N:=\dim V$. The picture is as follows: for $N=k=1$, ellipticity alone implies FDN (rendering these cases rather uninteresting), whereas in high dimensions, there are EC operators that are not FDN. Somewhat surprisingly, there are also a few instances in which ellipticity and $\mathbb{C}$--ellipticity differ, but EC implies FDN. We give the details below.
\begin{lemma}\label{lem:EimpliesFDN}
Let $\mathbb{A}$ as in \eqref{eq:A} be elliptic, $N=k=1$. Then $\mathbb{A}$ has FDN.
\end{lemma}
\begin{proof}
Since $N=1$, it is clear that $\mathbb{A}$ is $\mathbb{F}$--elliptic, $\mathbb{F}\in\{\mathbb{R},\mathbb{C}\}$, if and only if the polynomials $(\mathbb{A} [\xi])_l$, $l=1\ldots m$, have no common non--trivial zeroes in $\mathbb{F}$. Since we also assume $k=1$, we have $\mathbb{A}[\xi]=A\xi$ for some $A\in\mathbb{R}^{m\times n}$, so the polynomials in question are real linear forms, and a real linear system has a non--trivial complex zero if and only if it has a non--trivial real zero (in fact, $\mathbb{A}$ is elliptic if and only if $\operatorname{rank} A=n$).
\end{proof}
If $n\geq3$ (or if the order $k$ is large enough), EC turns out to be insufficient for FDN, even for scalar fields or for first order operators.
\begin{counterexample}[EC does \emph{not} imply FDN]\label{ex:EC>FDN}
Consider the operators
\begin{align*}
\mathbb{A}_{k,n} u&:=\nabla^{k-1}\left(\partial_1 u_1-\partial_2 u_2, \partial_2 u_1+\partial_1 u_2, \partial_j u_i\right)_{(i,j)\notin\{1,2\}\times\{1,2\}} \text{ for }N\geq2\\
\mathbb{B}_{k,n}u&:=\nabla^{k-2}\left(\partial^2_1 u+ \partial^2_2 u, \partial^2_j u\right)_{j=3\ldots n}\text{ for }N=1,k\geq2.
\end{align*}
If $n\geq3$ or $k\geq2$, then $\mathbb{A}_{k,n}$ is elliptic and cancelling, but has infinite dimensional null--space. The same is true of $\mathbb{B}_{k,n}$ if $n\geq3$ or $k\geq3$.
\end{counterexample}
\begin{proof}
The failure of FDN is clear: simply take
\begin{align*}
u_\mathbb{A}(x)&:=\left(\Re f\left(x_1+\operatorname{i} x_2\right), \Im f\left(x_1 + \operatorname{i} x_2\right), 0,\ldots,0\right)^\mathsf{T}\\
u_\mathbb{B}(x)&:=g(x_1,x_2)
\end{align*}
for holomorphic $f$ and (scalar) harmonic $g$. We next show that $\mathbb{A}_{k,n}=\nabla^{k-1}\mathbb{A}_{1,n}$ is elliptic if $n,N\geq2$. We can reduce to ellipticity of $\mathbb{A}_{1,n}$, since for non--zero $\xi$, we have that $0=\mathbb{A}_{k,n}[\xi]v=(\mathbb{A}_{1,n}[\xi]v)\otimes^{k-1}\xi$, so $\mathbb{A}_{1,n}[\xi]v=0$. Let $1\leq j\leq n$ be such that $\xi_j\neq0$. If $j\geq3$, we clearly get $v=0$. If $1\leq j\leq2$, we get that $v_i=0$ for $3\leq i\leq N$. The remaining equations are $\xi_1v_1-\xi_2v_2=0=\xi_2v_1+\xi_1v_2$, with determinant $\xi_1^2+\xi_2^2>0$, so $v_1=0=v_2$. It remains to check that, under our assumptions, $\mathbb{A}_{k,n}$ is cancelling. The case $k>1$ is easier, since the composition of operators $\mathbb{L}_1\circ\mathbb{L}_2$ is cancelling if $\mathbb{L}_1$ is. This is simply due to the fact that $\textrm{im}(\mathbb{L}_1\circ\mathbb{L}_2)[\xi]=\mathbb{L}_1[\xi](\textrm{im}\mathbb{L}_2[\xi])\subseteq\textrm{im}\mathbb{L}_1[\xi]$. If $k=1$ and $n\geq3$ we can make a straightforward computation. Write $(w_l)_{l=1\ldots Nn-2}:=\mathbb{A}_{1,n}[\xi]v$. For $w\in\bigcap_{\xi\neq0}\mathbb{A}_{1,n}[\xi](V)$, we can essentially test with different values of $\xi\neq0$. By choosing $\xi$ to have exactly one non--zero entry, we obtain that $w_l=0$ for $3\leq l\leq Nn-2$. Incidentally, when testing with $\xi$ such that $\xi_1=0=\xi_2$, we also obtain $w_1=0=w_2$, so all properties are checked for $\mathbb{A}_{k,n}$. Ellipticity of $\mathbb{B}_{k,n}$ is obvious, whereas cancellation is established analogously.
\end{proof}
The two specific cases that are not covered by Lemma \ref{lem:EimpliesFDN} and Counterexample \ref{ex:EC>FDN} reveal that the classes EC and FDN can coincide even if they are strictly smaller than the class of elliptic operators.
\begin{lemma}\label{lem:EC=FDN}
Let $n=2$ and let $\mathbb{A}$ as in \eqref{eq:A} be elliptic but not $\mathbb{C}$--elliptic. If any of the following hold,
\begin{enumerate}
\item\label{it:ECimpliesFDN_a} $N=1$, $k=2$,
\item\label{it:ECimpliesFDN_b} $N\geq2$, $k=1$,
\end{enumerate}
then $\mathbb{A}$ is not cancelling.
\end{lemma}
\begin{proof}
Suppose \ref{it:ECimpliesFDN_a}. Since $N=1$ and $\mathbb{A}$ is not $\mathbb{C}$--elliptic, the homogeneous, quadratic, scalar polynomials $(\mathbb{A}[\xi])_l$, $l=1\ldots m$, must have a common complex root. This root cannot be real, as $\mathbb{A}$ is real--elliptic. It follows that $(\mathbb{A}[\xi])_l$ are all multiples of the same quadratic polynomial $P\colon\mathbb{R}^2\rightarrow\mathbb{R}$, so that $\mathbb{A}[\xi]v=vP(\xi)w_0$ for all $v\in V\simeq\mathbb{R}$ and some $w_0\in W\setminus\{0\}$. It is clear then that $\mathbb{A}[\xi](V)=\mathbb{R} w_0$ for all $\xi\neq0$. We next assume \ref{it:ECimpliesFDN_b}. Since $\mathbb{A}$ is elliptic but not $\mathbb{C}$--elliptic, expanding $\mathbb{A}[\xi+\operatorname{i}\eta](v+\operatorname{i} w)=0$ into real and imaginary parts yields linearly independent $\xi,\eta\in\mathbb{R}^2$ and $v,w\in\mathbb{R}^N$, not both zero, such that $\mathbb{A}[\xi]v=\mathbb{A}[\eta]w$ and $\mathbb{A}[\xi]w=-\mathbb{A}[\eta]v$. We also have that any $\zeta\in\mathbb{R}^2$ can be written as $\zeta=a\xi+b\eta$. We put $v_\zeta:=a v+b w$. It follows that
\begin{align*}
\mathbb{A}[\zeta]v_\zeta=\mathbb{A}[a\xi+b\eta](a v+b w)=(a^2+b^2)\mathbb{A}[\xi]v,
\end{align*}
so that $\bigcap_{\zeta\in\mathbb{R}^2\setminus\{0\}}\mathbb{A}[\zeta](V)\ni\mathbb{A}[\xi]v\neq0$.
\end{proof}
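As a concrete instance of case \ref{it:ECimpliesFDN_b}, take $\mathbb{A}=\mathcal{E}^D$ with $n=N=2$, $\xi=e_1$, $\eta=e_2$, $v=(1,0)^\mathsf{T}$ and $w=(0,-1)^\mathsf{T}$. A direct computation gives
\begin{align*}
\mathcal{E}^D[e_1]v=\mathcal{E}^D[e_2]w=\frac{1}{2}\left(\begin{array}{cc}1 & 0\\ 0 & -1\end{array}\right),\qquad
\mathcal{E}^D[e_1]w=-\mathcal{E}^D[e_2]v=-\frac{1}{2}\left(\begin{array}{cc}0 & 1\\ 1 & 0\end{array}\right),
\end{align*}
so the relations in the proof hold, confirming that $\mathcal{E}^D$ is not cancelling in two dimensions, as already noted in \ref{it:delbar} of Section \ref{sec:examples}.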
We remark that we can extend the proof above by taking $w_\zeta:=b v-a w$, which yields $\mathbb{A}[\zeta]w_\zeta=-(a^2+b^2)\mathbb{A}[\xi]w$, and obtain $\bigcap_{\zeta\neq0}\mathbb{A}[\zeta](V)\supseteq\operatorname{span}\{\mathbb{A}[\xi]v,\mathbb{A}[\xi]w\}$. Therefore, if $n=2$, $k=1$, and $\mathbb{A}$ is elliptic but not cancelling, then $\dim\bigcap_{\zeta\neq0}\mathbb{A}[\zeta](V)\geq2$. We do not know of any elliptic, non--cancelling, first order operator in higher dimensions for which this intersection is one--dimensional.
\subsection{Insufficiency of EC}\label{sec:EC>emb}
We next give examples of first order EC operators and domains $\Omega\subset\mathbb{R}^n$ for which the Sobolev--type embedding fails. This follows from Counterexample \ref{ex:EC>FDN} above and from the next lemma, which recasts a strengthened version of the strict inclusion of (weighted) Bergman spaces in the language of non--FDN operators.
\begin{lemma}\label{lem:EC>emb}
Let $k=1$ and $\mathbb{A}$ as in \eqref{eq:A} be elliptic but \emph{not} have FDN, so there exist linearly independent $\eta_1,\eta_2\in\mathbb{R}^n$ such that $\mathbb{A}[\eta_1+\operatorname{i}\eta_2]$ has non--trivial kernel in $V+\operatorname{i} V$. Assume that $\eta_1,\eta_2$ are orthonormal. If any of the following holds:
\begin{enumerate}
\item\label{itm:cylinder} $\Omega:=\operatorname{B}_{\operatorname{span}\{\eta_1,\eta_2\}}\times[0,1]^{n-2}$,
\item\label{itm:ball} $\Omega:=\operatorname{B}$,
\end{enumerate}
then there exists smooth $u\in\operatorname{L}^1\setminus\bigcup_{p>1}\operatorname{L}^p(\Omega,V)$ such that $\mathbb{A} u=0$.
\end{lemma}
\begin{proof}
We write $\xi=\eta_1+\operatorname{i}\eta_2$, and write $D$ for the unit disc in $\operatorname{span}\{\eta_1,\eta_2\}$. We stress that each $\eta_j$ must be non--zero by ellipticity of $\mathbb{A}$, so $D$ is indeed a non--degenerate disc. We also know from the proof of Proposition \ref{prop:FDNiffTypeC} that there exists a non--zero $v\in V+\operatorname{i} V$ such that $\mathbb{A}[\xi]v=0$, and one can show by direct computation that for any holomorphic function $f$ we can define $u_f(x):=f(x\cdot\xi)v$, for which $\mathbb{A}\Re u_f=0=\mathbb{A}\Im u_f$. We have that
\begin{align*}
\int_{\Omega}|u_f(x)|^p\operatorname{d}\! x&=\int_D\int_{(\eta+\{\eta_1,\eta_2\}^\perp)\cap\Omega}|f(\eta\cdot\xi)|^p|v|^p\operatorname{d}\!\mathcal{H}^{n-2}\operatorname{d}\!\mathcal{H}^2(\eta)\\
&=|v|^p\int_D |f(\eta\cdot\xi)|^p\mathcal{H}^{n-2}\left((\eta+\{\eta_1,\eta_2\}^\perp)\cap\Omega\right)\operatorname{d}\!\mathcal{H}^2(\eta).
\end{align*}
We now make the case distinction. Assume \ref{itm:cylinder} holds, so
\begin{align*}
\int_{\Omega}|u_f(x)|^p\operatorname{d}\! x&=|v|^p\int_D |f(\eta\cdot\xi)|^p\operatorname{d}\!\mathcal{H}^2(\eta)=|v|^p\int_{\mathbb{D}}|f(z)|^p\operatorname{d}\!\mathcal{L}^2(z).
\end{align*}
Assume \ref{itm:ball}, so
\begin{align*}
\int_{\Omega}|u_f(x)|^p\operatorname{d}\! x&=c(n)|v|^p\int_D |f(\eta\cdot\xi)|^p(1-|\eta|^2)^\frac{n-2}{2}\operatorname{d}\!\mathcal{H}^2(\eta)\\
&=c(n)|v|^p\int_{\mathbb{D}}|f(z)|^p(1-|z|^2)^\frac{n-2}{2}\operatorname{d}\!\mathcal{L}^2(z),
\end{align*}
where $c(n)$ denotes the volume of the $(n-2)$--dimensional ball. By Lemma \ref{lem:baire} below, we can choose $f\in A^1_{\alpha}(\mathbb{D})\setminus\bigcup_{p>1}A^p_\alpha(\mathbb{D})$ for $\alpha=0$ and $\alpha=(n-2)/2$ respectively, so that both $\Re u_f$ and $\Im u_f$ are in $\operatorname{L}^1(\Omega,V)$, but at least one of them is not in $\operatorname{L}^p(\Omega,V)$ for any $p>1$. This proves the claim.
\end{proof}
The following Lemma can also be proved by direct computation, but we prefer to give an abstract argument for the sake of brevity.
\begin{lemma}\label{lem:baire}
For all $1\leq p<\infty$, $\alpha\geq0$ the set $A^p_\alpha(\mathbb{D})\setminus\bigcup_{q>p}A^q_\alpha(\mathbb{D})$ is non--empty.
\end{lemma}
\begin{proof}
We abbreviate $A^p:=A_\alpha^p(\mathbb{D})$. The proof relies on the strict inclusion $A^q\subsetneq A^p$ for $1\leq p<q<\infty$ proved in \cite[Cor.~68]{ZZ} and a Baire category argument. Assume that the result is false, so that by H\"older's Inequality we can find a sequence $q_j\downarrow p$ such that $A^p=\bigcup_j A^{q_j}$. For natural $l$, we define the sets $F_l^j:=\{f\in A^{q_j}\colon\|f\|_{A^{q_j}}^{q_j}\leq l\}$, which we claim to be closed in $A^p$. Let $f_m\in F^j_l$ converge to $f$ in $A^p$. Passing to a pointwise $\mathcal{L}^2$--almost everywhere convergent, not relabelled subsequence, Fatou's Lemma gives
\begin{align*}
\int_{\mathbb{D}}|f|^{q_j}w_\alpha\operatorname{d}\!\mathcal{L}^2\leq\liminf_{m\rightarrow\infty}\int_{\mathbb{D}}|f_m|^{q_j}w_\alpha\operatorname{d}\!\mathcal{L}^2\leq l,
\end{align*}
so that indeed $f\in F_l^j$. Since $A^{q_j}$ is a proper subspace of $A^p$, it follows that the sets $F^j_l$ are nowhere dense in $A^p$. It remains to notice that then $A^p=\bigcup_{j,l}F^j_l$, which contradicts completeness of $A^p$ by Baire's Theorem.
\end{proof}
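Alternatively, the direct computation alluded to above can be carried out on an explicit candidate. Using principal branches (well--defined, since $\Re(1-z)>0$ on $\mathbb{D}$), set
\begin{align*}
f(z):=(1-z)^{-\frac{2+\alpha}{p}}\left(\log\dfrac{e}{1-z}\right)^{-\frac{2}{p}}.
\end{align*}
Integrating in polar coordinates centred at $z=1$, the logarithmic factor renders the borderline power $|1-z|^{-(2+\alpha)}$ integrable against $w_\alpha$, so that $f\in A^p_\alpha(\mathbb{D})$, whereas for $q>p$ the power $|1-z|^{-(2+\alpha)q/p}$ produces a divergent integral regardless of the logarithm, so that $f\notin A^q_\alpha(\mathbb{D})$.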
\subsection{Comparison to the \textsc{Bourgain}--\textsc{Brezis} condition}
We recall here the assumptions on $\mathbb{A}$ (sufficient for EC) under which a general inequality of the type \eqref{eq:VS} was first proved in \cite{BB07}, in the case $k=1$ and $V=\mathbb{R}^n$. In their notation, we write $(\mathbb{A} u)_s=\langle L^{(s)},\nabla u\rangle$ for matrices $L^{(s)}\in\mathbb{R}^{n\times n}$, $s=1\ldots m$. It is shown in \cite[Thm.~25]{BB07} that if an operator $\mathbb{A}$ is elliptic and such that $\det L^{(s)}=0$ for $s=1\ldots m$, then \eqref{eq:VS} holds. It is clear (either by \cite[Thm.~1.3]{VS} or by direct computation) that such operators are cancelling. By Lemma \ref{lem:EC=FDN}, if $n=2$, we have that such $\mathbb{A}$ also has FDN, and thus satisfies \eqref{eq:ourembedding}. However, if $n\geq3$, we show that $\mathbb{A}_{1,n}$ as in Counterexample \ref{ex:EC>FDN} with $N=n$ satisfies the \textsc{Bourgain}--\textsc{Brezis} condition, but does not have FDN. We explicitly write down the matrices $L^{(s)}$ if $n=3$, the general case being a simple exercise:
\begin{align*}
&\left(\begin{array}{ccc}1 & 0 & 0 \\ 0 & -1 & 0\\ 0 & 0&0\end{array}\right),
\left(\begin{array}{ccc}0 & 1 & 0 \\ 1 & 0 & 0\\ 0 & 0&0\end{array}\right),
\left(\begin{array}{ccc}0 & 0 & 0 \\ 0 & 0 & 0\\ 1 & 0&0\end{array}\right),
\left(\begin{array}{ccc}0 & 0 & 0 \\ 0 & 0 & 0\\ 0 & 1&0\end{array}\right),\\
&\left(\begin{array}{ccc}0 & 0 & 0 \\ 0 & 0 & 0\\ 0 & 0&1\end{array}\right),
\left(\begin{array}{ccc}0 & 0 & 1 \\ 0 & 0 & 0\\ 0 & 0&0\end{array}\right),
\left(\begin{array}{ccc}0 & 0 & 0 \\ 0 & 0 & 1\\ 0 & 0&0\end{array}\right).
\end{align*}
By the reasoning in Section \ref{sec:EC>emb}, with $\mathbb{A}=\mathbb{A}_{1,n}$, we have that $\dot{\operatorname{W}}^{\mathbb{A},1}(\mathbb{R}^n)\hookrightarrow\operatorname{L}^{n/(n-1)}(\mathbb{R}^n)$, but there are maps in $\operatorname{W}^{\mathbb{A},1}(\operatorname{B})$ that have no higher integrability.
\section{The Sobolev--type Embedding on Domains}\label{sec:proof}
\subsection{Jones--type Extension}
In this section we complete the proof of Theorem \ref{thm:tools} with the following generalization:
\begin{theorem}\label{thm:extension}
Let $\mathbb{A}$ as in \eqref{eq:A} have FDN, $1\leq p \leq\infty$, $\Omega\subset\mathbb{R}^n$ be a star--shaped domain with respect to a ball. Then there exists a bounded, linear extension operator $E_\Omega\colon\operatorname{W}^{\mathbb{A},p}(\Omega)\rightarrow\operatorname{V}^{\mathbb{A},p}(\mathbb{R}^n)$.
\end{theorem}
To prove this result we use \textsc{Jones}' method of extension developed in the celebrated paper \cite{Jones}. Recall that \textsc{Jones}' original idea was to decompose a small neighbourhood of $\partial\Omega$ into small cubes and assign suitable polynomials of degree at most $k-1$ to each cube. Inspired by \cite[Sec.~4.1-2]{BDG}, we assign polynomials in $\ker\mathbb{A}$ on such cubes, as explained below. With this crucial modification, the streamlined proof that we include below mostly follows the same lines as in \cite[Sec.~2-3]{Jones}, where all the details we omit can be found. What deserves some special attention is a Poincar\'e--type inequality, which is interesting in its own right. We present it below and mention that it is a generalization of the results in \cite[Sec.~3]{BDG}. We extend the notation presented in Theorem \ref{thm:Ka} by $\pi_\Omega u:=\Pi \mathcal{P} u$, where $\Pi$ denotes the $\operatorname{L}^2$--orthogonal projection of $\mathbb{R}_d[x]^V$ onto $\ker\mathbb{A}$.
\begin{proposition}[Poincar\'e--type inequality]\label{prop:poinc}
Let $\mathbb{A}$ as in \eqref{eq:A} have FDN, $1\leq p\leq\infty$, $0\leq l<k$, and $\Omega\subset\mathbb{R}^n$ be a star--shaped domain with respect to a ball. Then there exists $c>0$ such that
\begin{align*}
\|\nabla^l(u-\pi_\Omega u)\|_{p,\Omega}\leq c(\operatorname{diam}\Omega)^{k-l}\|\mathbb{A} u\|_{p,\Omega}
\end{align*}
for all $u\in\operatorname{C}^\infty(\bar{\Omega},V)$.
\end{proposition}
\begin{proof}
We start with $\|\nabla^l(u-\pi_\Omega u)\|_{p,\Omega}\leq\|\nabla^l(u-\mathcal{P} u)\|_{p,\Omega}+\|\nabla^l(\mathcal{P}u-\pi_\Omega u)\|_{p,\Omega}$, and estimate both terms.
We have by the growth conditions on $K$ from Theorem \ref{thm:Ka} that
\begin{align*}
\|\nabla^l(u-\mathcal{P} u)\|_{p,\Omega}&=\left(\int_{\Omega}\left\vert\int_{\Omega}\nabla^l_x K(x-y)\mathbb{A} u(y)\operatorname{d}\! y\right\vert^p\operatorname{d}\! x\right)^{\frac{1}{p}}\\
&\lesssim\left(\int_{\Omega}\left(\int_{\Omega}\dfrac{|\mathbb{A} u(y)|}{|x-y|^{n+l-k}}\operatorname{d}\! y\right)^p\operatorname{d}\! x\right)^{\frac{1}{p}},
\end{align*}
and we obtain the estimate by standard boundedness of Riesz potentials (see Theorem \ref{thm:anal_harm}\ref{itm:riesz_domains} for the precise scaling if $n+l-k>0$; the case $n+l-k\leq0$ follows by H\"older's Inequality). We then note that $P\mapsto\|P-\Pi P\|_{p,\Omega}$ and $P\mapsto\|\mathbb{A} P\|_{p,\Omega}$ respectively define a semi--norm and a norm on the finite dimensional vector space $\mathbb{R}_d[x]^V/\ker\mathbb{A}$, so that $\|\nabla^l(\mathcal{P}u-\pi_\Omega u)\|_{p,\Omega}\lesssim\|\mathbb{A}\mathcal{P}u\|_{p,\Omega}$, with a domain dependent constant. We recall from the original proof of Theorem \ref{thm:Ka} that $\mathcal{P}u$ is the averaged Taylor polynomial
\begin{align*}
\mathcal{P}u(x)=\int_{\Omega}\sum_{|\alpha|\leq d} \frac{\partial^\alpha_y\left((y-x)^\alpha w(y)\right)}{\alpha!}u(y)\operatorname{d}\! y=\int_{\Omega}\sum_{|\alpha|\leq d}\frac{(x-y)^{\alpha}}{\alpha!} w(y)\partial^\alpha u(y)\operatorname{d}\! y,
\end{align*}
where the weight $w$ is a smooth map supported in the ball with respect to which $\Omega$ is star--shaped such that $\int w=1$. One can show by direct computation that averaged Taylor polynomials ``commute'' with derivatives, in the sense that
\begin{align*}
\mathbb{A}\mathcal{P}u=\int_{\Omega} \sum_{|\beta|\leq d-k} \frac{\partial^\beta_y\left((y-\cdot)^\beta w(y)\right)}{\beta!}\mathbb{A} u(y)\operatorname{d}\! y.
\end{align*}
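To illustrate the direct computation behind this identity in the simplest case, apply a single partial derivative $\partial_i$ to the second representation of $\mathcal{P}u$: differentiating under the integral and shifting the summation index by the multi--index $e_i$ gives
\begin{align*}
\partial_{x_i}\mathcal{P}u(x)=\int_{\Omega}\sum_{\substack{|\alpha|\leq d\\ \alpha_i\geq 1}}\frac{(x-y)^{\alpha-e_i}}{(\alpha-e_i)!}w(y)\partial^\alpha u(y)\operatorname{d}\! y=\int_{\Omega}\sum_{|\beta|\leq d-1}\frac{(x-y)^{\beta}}{\beta!}w(y)\partial^\beta(\partial_i u)(y)\operatorname{d}\! y,
\end{align*}
that is, $\partial_i$ intertwines the degree--$d$ averaged Taylor polynomial with the degree--$(d-1)$ one applied to $\partial_i u$. Iterating this over the $k$--th order derivatives appearing in $\mathbb{A}$ and integrating by parts as before yields the displayed formula.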
It is then obvious that $\|\mathbb{A}\mathcal{P}u\|_{p,\Omega}\lesssim\|\mathbb{A} u\|_{p,\Omega}$. The precise dependence of the constant on the domain follows by standard scaling arguments.
\end{proof}
We next introduce the framework required to prove Theorem \ref{thm:extension}. We use the same Whitney coverings as in \cite{Jones}, which we recall for the reader's convenience. First recall the Decomposition Lemma introduced in \cite{Whitney}: any proper open subset $\Omega\subset\mathbb{R}^n$ can be covered with a countable collection $\mathcal{W}_1:=\{S_j\}$ of closed dyadic cubes satisfying
\begin{enumerate}
\item[$(\mathrm{D}_1)$] $\ell(S_j)/4\leq \ell(S_l)\leq 4 \ell(S_j)$ if $S_j\cap S_l\neq\emptyset$,
\item[$(\mathrm{D}_2)$] $\operatorname{int} S_j\cap\operatorname{int} S_l=\emptyset$ if $j\neq l$,
\item[$(\mathrm{D}_3)$] $\ell(S_j)\leq\mathrm{dist}(S_j,\partial\Omega)\leq 4\sqrt{n}\ell(S_j)$ for all $j$,
\end{enumerate}
where $\ell(Q)$ denotes the side--length of a cube $Q$. We henceforth assume that $\Omega$ is as in the statement of Theorem \ref{thm:extension}, so in particular $\Omega$ is an $(\varepsilon,\delta)$--domain. We also consider a Whitney decomposition $\mathcal{W}_2:=\{Q_l\}$ of $\mathbb{R}^n\setminus\bar{\Omega}$, and further define $\mathcal{W}_3:=\{Q\in\mathcal{W}_2\colon \ell(Q)\leq\varepsilon\delta/(16n)\}$. We reflect each cube $Q\in\mathcal{W}_3$ to a non--unique interior cube $Q^*\in\mathcal{W}_1$ such that
\begin{enumerate}
\item[$(\mathrm{R}_1)$] $\ell(Q)\leq\ell(Q^*)\leq4\ell(Q)$,
\item[$(\mathrm{R}_2)$] $\mathrm{dist}(Q,Q^*)\leq C\ell(Q)$,
\end{enumerate}
where above and in the following $C$ denotes a constant depending on $k,p,n,\varepsilon,\delta$ only; additional dependencies will be specified. The non--uniqueness causes no issues, as one can show that
\begin{enumerate}
\item[$(\mathrm{R}_3)$] For any two choices $S_1,S_2$ of $Q^*$, we have $\mathrm{dist}(S_1,S_2)\leq C\ell(Q)$,
\item[$(\mathrm{R}_4)$] For any $S\in\mathcal{W}_1$, there are at most $C$ cubes $Q\in\mathcal{W}_3$ such that $S=Q^*$,
\item[$(\mathrm{R}_5)$] For any adjacent $Q_1,Q_2\in\mathcal{W}_3$, we have $\mathrm{dist}(Q_1^*,Q_2^*)\leq C\ell(Q_1)$.
\end{enumerate}
For details on these basic properties of the reflection see \cite[Lem.~2.4-7]{Jones}. We conclude the presentation of the decomposition with the following simple modification of \cite[Lem.~2.8]{Jones}:
\begin{lemma}
For any adjacent cubes $Q_1,Q_2\in\mathcal{W}_3$, there is a chain $\mathcal{C}(Q^*_1,Q^*_2):=\{Q_1^*=:S_1,S_2,\ldots, S_m:=Q^*_2\}$ of cubes $S_j\in\mathcal{W}_1$, i.e., such that $S_j$ and $S_{j+1}$ touch on an $(n-1)$--dimensional hyper--surface for all $j$, and $m\leq C$.
\end{lemma}
We proceed to define the extension operator
\begin{align*}
E_\Omega u:=
\begin{cases}
u&\text{in }\Omega\\
\sum_{Q\in\mathcal{W}_3}\varphi_Q \pi_{Q^*}u&\text{in }\mathbb{R}^n\setminus\bar{\Omega},
\end{cases}
\end{align*}
where $\{\varphi_Q\}_{Q\in\mathcal{W}_3}\subset\operatorname{C}^\infty(\mathbb{R}^n)$ is a partition of unity such that for all $Q\in\mathcal{W}_3$ we have
\begin{enumerate}
\item[$(\mathrm{P}_1)$] $0\leq\varphi_Q\leq1$ and $\sum_{Q\in\mathcal{W}_3}\varphi_Q=1$ in $\bigcup\mathcal{W}_3$,
\item[$(\mathrm{P}_2)$] $\operatorname{spt}\varphi_Q\subset17/16Q$, where $\lambda Q$ denotes the homothety of $Q$ by $\lambda$ about its centre,
\item[$(\mathrm{P}_3)$] $|\nabla^l\varphi_Q|\leq C\ell(Q)^{-l}$ for all $0\leq l\leq k$.
\end{enumerate}
Our proof mostly follows the lines of the original proof. We first prove an estimate on chains in $\mathcal{W}_1$, then suitably bound the norms of the derivatives in the exterior domains, and we conclude by showing that the extension has weak derivatives in full--space. We warn the reader that in the remainder of this section we may use the properties of the decomposition, reflection and partition of unity without mention.
\begin{lemma}[{\cite[Lem.~3.1]{Jones}}]\label{lem:chain}
Let $\mathcal{C}:=\{S_1,\ldots S_m\}\subset\mathcal{W}_1$ be a chain. Then for $0\leq l<k$ we have
\begin{align*}
\|\nabla^l(\pi_{S_1}u-\pi_{S_m}u)\|_{p,S_1}\leq C(m)\ell(S_1)^{k-l}\|\mathbb{A} u\|_{p,\cup\mathcal{C}}
\end{align*}
for all $u\in\operatorname{C}^\infty(\bar{\Omega},V)$.
\end{lemma}
\begin{proof}
We remark that $\operatorname{L}^p$--norms of polynomials of degree at most $d$ on adjacent cubes in $\mathcal{W}_1$ are comparable (see, e.g., \cite[Lem.~2.1]{Jones}). We get
\begin{align*}
\mathrm{LHS}&\leq \sum_{j=1}^{m-1}\|\nabla^l(\pi_{S_{j+1}}u-\pi_{S_j}u)\|_{p,S_1}\\
&\leq C(m) \sum_{j=1}^{m-1}\left(\|\nabla^l(\pi_{S_{j+1}}u-\pi_{S_j\cup S_{j+1}}u)\|_{p,S_{j+1}}+\|\nabla^l(\pi_{S_j\cup S_{j+1}}u-\pi_{S_j}u)\|_{p,S_j}\right)\\
&\leq C(m)\sum_{j=1}^{m-1}\left(\|\nabla^l(\pi_{S_{j+1}}u-u)\|_{p,S_{j+1}}+2\|\nabla^l(u-\pi_{S_j\cup S_{j+1}}u)\|_{p,S_j\cup S_{j+1}}\right.\\
&\left.+\|\nabla^l(u-\pi_{S_j}u)\|_{p,S_j}\right)
\end{align*}
and we can use the Poincar\'e--type inequality, Proposition \ref{prop:poinc}, to conclude.
\end{proof}
\begin{lemma}[{\cite[Prop.~3.4]{Jones}}]\label{lem:localbounds}
For $1\leq p\leq\infty$, we have $\|E_\Omega u\|_{\operatorname{V}^{\mathbb{A},p}(\mathbb{R}^n\setminus\bar{\Omega})}\leq C \|u\|_{\operatorname{W}^{\mathbb{A},p}(\Omega)}$ for all $u\in\operatorname{C}^\infty(\bar{\Omega},V)$.
\end{lemma}
\begin{proof}
We estimate on each cube in $\mathcal{W}_2$, distinguishing between small and large cubes. We also distinguish between $\mathbb{A}$ and the derivatives of order less than $k$. Let $Q_0\in\mathcal{W}_3$. Then, since the $\varphi_Q$ sum to one on $Q_0$ and $\mathbb{A}\pi_{Q_0^*}u\equiv0$, we have
\begin{align*}
\|\mathbb{A} E_\Omega u\|_{p,Q_0}&=\left\|\mathbb{A} \sum_{Q\in\mathcal{W}_3}\varphi_Q( \pi_{Q^*}u-\pi_{Q_0^*}u)\right\|_{p,Q_0}\leq\left\|\sum_{\emptyset\neq Q_0\cap Q\in\mathcal{W}_3}\mathbb{A}(\varphi_Q( \pi_{Q^*}u-\pi_{Q_0^*}u))\right\|_{p,Q_0}\\
&\leq C\sum_{\emptyset\neq Q_0\cap Q\in\mathcal{W}_3}\sum_{j=0}^{k-1}\||\nabla^{k-j}\varphi_Q| |\nabla^j(\pi_{Q^*}u-\pi_{Q_0^*}u)|\|_{p,Q_0}\\
&\leq C\sum_{\emptyset\neq Q_0\cap Q\in\mathcal{W}_3}\sum_{j=0}^{k-1}\ell(Q_0)^{j-k}\|\nabla^j(\pi_{Q^*}u-\pi_{Q_0^*}u)\|_{p,Q_0^*}\\
&\leq C\sum_{\emptyset\neq Q_0\cap Q\in\mathcal{W}_3}\|\mathbb{A} u\|_{p,\cup\mathcal{C}(Q_0^*,Q^*)},
\end{align*}
where the last inequality follows from Lemma \ref{lem:chain}. With a similar reasoning we obtain, for $0\leq l\leq k-1$, that
\begin{align*}
\|\nabla^l E_\Omega u\|_{p,Q_0}\leq C\left(\|\nabla^l u\|_{p,Q_0^*}+\ell(Q_0)^{k-l}\sum_{\emptyset\neq Q_0\cap Q\in\mathcal{W}_3}\|\mathbb{A} u\|_{p,\cup\mathcal{C}(Q_0^*,Q^*)}\right).
\end{align*}
We move on to the case $Q_0\in\mathcal{W}_2\setminus\mathcal{W}_3$, so that if $Q\cap Q_0\neq\emptyset$ for some $Q\in\mathcal{W}_3$, then $\ell(Q)\geq\ell(Q_0)/4\geq\varepsilon\delta/(64n)$ is bounded below by a positive constant. Let $0\leq l\leq k-1$, so that
\begin{align*}
\|\nabla^l E_\Omega u\|_{p,Q_0}&\leq\sum_{\emptyset\neq Q_0\cap Q\in\mathcal{W}_3}\|\nabla^l(\varphi_Q \pi_{Q^*}u)\|_{p,Q_0}\leq C\sum_{\emptyset\neq Q_0\cap Q\in\mathcal{W}_3}\sum_{j=0}^l \ell(Q_0)^{j-l}\|\nabla^{j} \pi_{Q^*}u\|_{p,Q_0}\\
&\leq C\sum_{\emptyset\neq Q_0\cap Q\in\mathcal{W}_3}\sum_{j=0}^l \ell(Q_0)^{j-l}\|\nabla^{j} \pi_{Q^*}u\|_{p,Q^*}\\
&\leq C\sum_{\emptyset\neq Q_0\cap Q\in\mathcal{W}_3}\sum_{j=0}^l \ell(Q_0)^{j-l}(\|\nabla^{j}( \pi_{Q^*}u-u)\|_{p,Q^*}+\|\nabla^{j}u\|_{p,Q^*})\\
&\leq C\sum_{\emptyset\neq Q_0\cap Q\in\mathcal{W}_3}\left(\|u\|_{\operatorname{V}^{l,p}(Q^*,V)}+\ell(Q_0)^{k-l} \|\mathbb{A} u\|_{p,Q^*}\right).
\end{align*}
As above, we similarly show that $\|\mathbb{A} E_\Omega u\|_{p,Q_0}\leq C\sum_{\emptyset\neq Q_0\cap Q\in\mathcal{W}_3} \|u\|_{\operatorname{V}^{\mathbb{A},p}(Q^*)}$. There is no loss in assuming that $\ell(Q_0)\leq1$ for any $Q_0\in\mathcal{W}_2$, so that we can collect the estimates to obtain
\begin{align*}
\|E_\Omega u\|_{\operatorname{V}^{\mathbb{A},p}(Q_0)}\leq C \sum_{\emptyset\neq Q_0\cap Q\in\mathcal{W}_3} \|u\|_{\operatorname{V}^{\mathbb{A},p}(\mathcal{C}(Q_0^*,Q^*))}.
\end{align*}
It remains to use local finiteness of the partition of unity (see, e.g., \cite[Eqn.~(3.1-4)]{Jones}) and Lemma \ref{lem:sob_variants} to conclude.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:extension}]
It remains to show that $E_\Omega u$ has weak derivatives in $\mathbb{R}^n$, for which it suffices to show that $E_\Omega$ maps $\operatorname{V}^{k,\infty}(\bar{\Omega},V)$ functions to $\operatorname{V}^{k,\infty}(\mathbb{R}^n,V)$ functions. This we do in two steps. First, we show that the obvious candidate $(\nabla^l u)\chi_{\bar{\Omega}}+(\nabla^l E_\Omega u)\chi_{\mathbb{R}^n\setminus\bar{\Omega}}$ is bounded for all $0\leq l\leq k$. We need only prove this for $l=k$, the other cases being dealt with in Lemma \ref{lem:localbounds} for $p=\infty$. As before, we first take $Q_0\in\mathcal{W}_3$, where
\begin{align*}
|\nabla^k E_\Omega u|&\leq |\nabla^k\pi_{Q_0^*}u|+\sum_{\emptyset\neq Q_0\cap Q\in\mathcal{W}_3}|\nabla^k(\varphi_Q(\pi_{Q^*} u-\pi_{Q_0^*}u))|\\
&\leq C\left( |\nabla^k\pi_{Q_0^*}u|+\sum_{\emptyset\neq Q_0\cap Q\in\mathcal{W}_3}\|\nabla^k u\|_{\infty,\mathcal{C}(Q_0^*,Q^*)}\right).
\end{align*}
Clearly, $q\mapsto\|\nabla^k q\|_{\infty,Q_0^*}$ is a norm on $\mathbb{R}_d[x]^V/\mathbb{R}_{k-1}[x]^V$, whereas $q\mapsto\|\nabla^k \Pi q\|_{\infty,Q_0^*}$ is a semi--norm. We therefore get that $\|\nabla^k\pi_{Q_0^*}u\|_{\infty,Q_0^*}\leq C \|\nabla^k\mathcal{P}_{Q_0^*}u\|_{\infty,Q_0^*}\leq C\|\nabla^k u\|_{\infty,Q_0^*}$, where the latter inequality is given by stability of averaged Taylor polynomials. Now consider the other case, namely $Q_0\in\mathcal{W}_2\setminus\mathcal{W}_3$, and recall that then $\ell(Q_0)$ is bounded below by a positive constant. We have
\begin{align*}
|\nabla^k E_\Omega u|&\leq \sum_{\emptyset\neq Q_0\cap Q\in\mathcal{W}_3}|\nabla^k(\varphi_Q\pi_{Q^*}u)|\leq C\sum_{\emptyset\neq Q_0\cap Q\in\mathcal{W}_3}\sum_{j=0}^k\ell(Q)^{j-k}|\nabla^j\pi_{Q^*} u|\\
&\leq C\sum_{\emptyset\neq Q_0\cap Q\in\mathcal{W}_3}\sum_{j=0}^k\ell(Q_0)^{j-k}|\nabla^j\pi_{Q^*} u|\leq C\sum_{\emptyset\neq Q_0\cap Q\in\mathcal{W}_3}\sum_{j=0}^k|\nabla^j\pi_{Q^*} u|,
\end{align*}
so we can conclude as in the previous step. The second and final step is to show that $\nabla^l E_\Omega u$ is continuous for $0\leq l < k$. The proof of this fact can be found in \cite[Lem.~3.5]{Jones}.
\end{proof}
\subsection{Proofs of the main results}
We now begin the proof of Theorem \ref{thm:main}. It is clear that \ref{it:main_b} implies \ref{it:main_c} and that \ref{it:main_d} implies \ref{it:main_e}. We first prove that \ref{it:main_a} implies \ref{it:main_b} in full generality.
\begin{proof}[Proof of Theorem \ref{thm:main_k} (sufficiency of FDN)] Since $\mathbb{A}$ has FDN, by Theorem \ref{thm:tools}, there exists a bounded, linear extension operator $E_{\operatorname{B}} \colon\operatorname{W}^{\mathbb{A},1}(\operatorname{B})\rightarrow\operatorname{V}^{\mathbb{A},1}(\mathbb{R}^n)$. A close inspection of the proof of Theorem \ref{thm:extension} reveals that $E_{\operatorname{B}}$ maps restrictions to $\operatorname{B}$ of $\operatorname{C}^\infty(\mathbb{R}^n,V)$--functions into $\operatorname{C}^\infty_c(\tilde{\operatorname{B}},V)$ for a larger ball $\tilde{\operatorname{B}}\Supset\operatorname{B}$, which depends on $\operatorname{B}$ only. We write $p:=n/(n-k+s)$ and use H\"older's Inequality to get that
\begin{align*}
\|u\|_{{\operatorname{B}}^{s,p}_q(\operatorname{B},V)}&\leq\|E_{\operatorname{B}}u\|_{{\operatorname{B}}^{s,p}_q(\mathbb{R}^n,V)}=\|E_{\operatorname{B}}u\|_{\operatorname{L}^p(\tilde{\operatorname{B}},V)}+\|E_{\operatorname{B}}u\|_{\dot{\operatorname{B}}^{s,p}_q(\mathbb{R}^n,V)}\\
&\lesssim\|E_{\operatorname{B}}u\|_{\operatorname{L}^{\frac{n}{n-1}}(\tilde{\operatorname{B}},V)}+\|E_{\operatorname{B}}u\|_{\dot{\operatorname{B}}^{s,p}_q(\mathbb{R}^n,V)}\\
&\lesssim\|\nabla^{k-1}E_{\operatorname{B}}u\|_{\operatorname{L}^{\frac{n}{n-1}}(\tilde{\operatorname{B}},V)}+\|E_{\operatorname{B}}u\|_{\dot{\operatorname{B}}^{s,p}_q(\mathbb{R}^n,V)},
\end{align*}
where the last estimate follows from Poincar\'e's Inequality with zero boundary values. We conclude by \cite[Thm.~1.3,~Thm.~8.4]{VS} and boundedness of $E_{\operatorname{B}}$.
\end{proof}
We will complete the proof of Theorem \ref{thm:main_k} at the end of this section. Returning to Theorem \ref{thm:main}, to see that \ref{it:main_b} implies \ref{it:main_d}, we prove the following:
\begin{theorem}\label{thm:compactness}
Let $\mathbb{A}$ be as in \eqref{eq:A} with $k=1$. Suppose that $\operatorname{W}^{\mathbb{A},1}(\operatorname{B})\hookrightarrow\operatorname{L}^{\frac{n}{n-1}}(\operatorname{B},V)$. Then $\operatorname{W}^{\mathbb{A},1}(\operatorname{B})\hookrightarrow\hookrightarrow\operatorname{L}^{q}(\operatorname{B},V)$ for all $1\leq q<\frac{n}{n-1}$.
\end{theorem}
The proof of Theorem \ref{thm:compactness} relies on the Riesz--Kolmogorov criterion and the following Nikolski\u{\i}--type estimate:
\begin{lemma}[Nikolski\u{\i}--type Estimate]\label{lem:Besov}
Let $\mathbb{A}$ be an elliptic operator of the form \eqref{eq:A}, $k=1$. Fix $R>0$. Then for every $0<s<1$ there exists a constant $c=c(s,R)>0$ such that if $u\in\operatorname{W}^{\mathbb{A},1}(\mathbb{R}^{n})$ vanishes identically outside $\operatorname{B}(0,R)$, then there holds
\begin{align*}
\int_{\mathbb{R}^{n}}|u(x+y)-u(x)|^{p}\operatorname{d}\! x \leq c\|\mathbb{A} u\|_{\operatorname{L}^{1}(\operatorname{B}(0,R),W)}^{p}|y|^{sp}
\end{align*}
whenever $p<n/(n-1+s)$.
\end{lemma}
Note that by \textsc{Ornstein}'s Non--Inequality, $s=1$ is not allowed in the lemma. A more general version, showing in addition that ellipticity is also necessary for the estimate, can be found in \cite[Prop.~8.22]{VS}. We include an elementary proof.
\begin{proof}
Fix $0<s<1$. By smooth approximation (see \cite{BDG}), it is no loss of generality to assume that $u\in\operatorname{C}_{c}^{\infty}(\mathbb{R}^{n},V)$ and $\operatorname{spt}(u)\subset\operatorname{B}(0,R)$. Let $x\in\mathbb{R}^{n}$ be arbitrary but fixed and note that there exists a constant $c=c(s)>0$ such that for all $z,z',z''\in\mathbb{R}^{n}$ with $z\neq z',z\neq z''$ there holds
\begin{align}\label{eq:potentialest}
\left\vert \frac{1}{|z-z'|^{n-1}}-\frac{1}{|z-z''|^{n-1}}\right\vert \leq c |z'-z''|^{s}\Big(\frac{1}{|z-z'|^{n-1+s}}+\frac{1}{|z-z''|^{n-1+s}} \Big).
\end{align}
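For completeness, we sketch the elementary verification of \eqref{eq:potentialest}. Assume without loss of generality that $|z-z'|\leq|z-z''|$. If $|z'-z''|\geq\tfrac{1}{2}|z-z'|$, we bound both terms on the left--hand side separately, e.g.
\begin{align*}
\frac{1}{|z-z'|^{n-1}}=\frac{|z-z'|^{s}}{|z-z'|^{n-1+s}}\leq\frac{(2|z'-z''|)^{s}}{|z-z'|^{n-1+s}}.
\end{align*}
If instead $|z'-z''|<\tfrac{1}{2}|z-z'|$, every point $\zeta$ on the segment $[z',z'']$ satisfies $|z-\zeta|\geq\tfrac{1}{2}|z-z'|$, so the mean value theorem applied to $\zeta\mapsto|z-\zeta|^{-(n-1)}$ gives
\begin{align*}
\left\vert \frac{1}{|z-z'|^{n-1}}-\frac{1}{|z-z''|^{n-1}}\right\vert\lesssim\frac{|z'-z''|}{|z-z'|^{n}}=\frac{|z'-z''|^{s}}{|z-z'|^{n-1+s}}\left(\frac{|z'-z''|}{|z-z'|}\right)^{1-s}\lesssim\frac{|z'-z''|^{s}}{|z-z'|^{n-1+s}}.
\end{align*}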
Since $\mathbb{A}$ is elliptic, the representation formula \eqref{eq:representation} applies; moreover, since the convolution kernel $K$ is smooth away from the origin with $|K|\lesssim|\cdot|^{1-n}$ and $|\nabla K|\lesssim|\cdot|^{-n}$, the kernel difference $|K(x+y-z)-K(x-z)|$ obeys an estimate analogous to \eqref{eq:potentialest}. Hence
\begin{align*}
\int_{\mathbb{R}^{n}}|u(x+y)-u(x)|^{p}\operatorname{d}\! x &\lesssim \int_{\mathbb{R}^{n}}\left(\int_{\operatorname{B}(0,R)}|K(x+y-z)-K(x-z)|\,|\mathbb{A} u(z)|\operatorname{d}\! z\right)^{p}\operatorname{d}\! x\\
& \lesssim |y|^{sp}\int_{\mathbb{R}^{n}}\left(\int_{\operatorname{B}(0,R)}\frac{|\mathbb{A} u(z)|}{|x+y-z|^{n-1+s}}+\frac{|\mathbb{A} u(z)|}{|x-z|^{n-1+s}}\operatorname{d}\! z\right)^{p}\operatorname{d}\! x,
\end{align*}
and we may assume $|y|\leq R$, for otherwise the claim follows from $\|u\|_{\operatorname{L}^{p}(\mathbb{R}^{n},V)}\lesssim\|\mathbb{A} u\|_{\operatorname{L}^{1}(\mathbb{R}^{n},W)}$ (again by the representation formula and a local application of Young's inequality as below) together with $1\leq(|y|/R)^{sp}$. For $|y|\leq R$, the left--hand side vanishes outside $\operatorname{B}(0,2R)$, so the outer integrals may be restricted to $\operatorname{B}(0,2R)$; there $x-z\in\operatorname{B}(0,3R)$ for $z\in\operatorname{B}(0,R)$, whence Young's convolution inequality yields
\begin{align*}
\| (\mathbbm{1}_{\operatorname{B}(0,R)}|\mathbb{A} u|)*(\mathbbm{1}_{\operatorname{B}(0,3R)}\tfrac{1}{|\cdot|^{n-1+s}})\|_{\operatorname{L}^{p}(\mathbb{R}^{n})}\leq \|\mathbb{A} u\|_{\operatorname{L}^{1}(\mathbb{R}^{n},W)}\|\tfrac{1}{|\cdot|^{n-1+s}}\|_{\operatorname{L}^{p}(\operatorname{B}(0,3R))},
\end{align*}
and we conclude with the observation that $\|\tfrac{1}{|\cdot|^{n-1+s}}\|_{\operatorname{L}^{p}(\operatorname{B}(0,3R))}<\infty$ if and only if $p<n/(n-1+s)$. The proof is complete.
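In polar coordinates, this finiteness claim is the elementary computation
\begin{align*}
\int_{\operatorname{B}(0,3R)}\frac{\operatorname{d}\! x}{|x|^{(n-1+s)p}}=c(n)\int_{0}^{3R}r^{n-1-(n-1+s)p}\operatorname{d}\! r<\infty\iff (n-1+s)p<n.
\end{align*}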
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:compactness}]
Recall the Riesz--Kolmogorov Theorem on relatively compact subsets of $\operatorname{L}^{p}$--spaces \cite[Thm.~4.26]{BrezisFA}: for $\Omega\subset\mathbb{R}^{n}$ open and $1\leq p<\infty$, a subset $\mathcal{F}\subset\operatorname{L}^{p}(\Omega,V)$ is relatively compact in $\operatorname{L}^{p}(\Omega,V)$ if and only if (i) $\mathcal{F}$ is a bounded set in $\operatorname{L}^{p}(\Omega,V)$ and (ii) for all $\varepsilon>0$ there exists $\delta>0$ such that for all $f\in\mathcal{F}$ and all $y\in\mathbb{R}^{n}$ with $|y|<\delta$ there holds
\begin{align}
\|\overline{f}(\cdot+y)-\overline{f}(\cdot)\|_{\operatorname{L}^{p}(\mathbb{R}^{n},V)}<\varepsilon,
\end{align}
where $\overline{f}$ is the trivial extension of $f\in\mathcal{F}$ to $\mathbb{R}^{n}$.
Let $1\leq q<1^{*}$ and $\mathcal{F}$ be the unit ball in $\operatorname{W}^{\mathbb{A},1}(\operatorname{B})$. The embedding $\operatorname{W}^{\mathbb{A},1}(\operatorname{B})\hookrightarrow\operatorname{L}^{1^{*}}(\operatorname{B},V)$ implies that $\mathcal{F}$ is bounded in $\operatorname{L}^{q}(\operatorname{B},V)$ which shows condition (i) of the Riesz--Kolmogorov criterion. As to (ii), let $\varepsilon>0$ be arbitrary. Given $\varrho>0$ sufficiently small (to be determined later on), let $\widetilde{\rho}_{\varrho}\colon [0,1]\to[0,1]$ be the Lipschitz function given by
\begin{align*}
\widetilde{\rho}_{\varrho}(t):=\begin{cases} 1&\;\text{if}\;0\leq t\leq 1-2\varrho,\\
-\frac{1}{\varrho}t+\frac{1-\varrho}{\varrho}&\;\text{if}\;1-2\varrho<t<1-\varrho,\\
0&\;\text{if}\;1-\varrho<t\leq 1,
\end{cases}
\end{align*}
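One checks directly that the two branches match at the breakpoints, so that $\widetilde{\rho}_{\varrho}$ is indeed continuous (and Lipschitz with constant $1/\varrho$):
\begin{align*}
-\frac{1}{\varrho}(1-2\varrho)+\frac{1-\varrho}{\varrho}=\frac{\varrho}{\varrho}=1,\qquad
-\frac{1}{\varrho}(1-\varrho)+\frac{1-\varrho}{\varrho}=0.
\end{align*}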
and put $\rho_{\varrho}(x):=\widetilde{\rho}_{\varrho}(|x|)$, $x\in\mathbb{R}^{n}$, and finally set, for given $f\in\mathcal{F}$, $f_{\varrho}:=\rho_{\varrho}f$. Denoting $\operatorname{B}_{t}:=\operatorname{B}(0,t)$ for $t>0$, we note that if $|y|<\varrho$, then $f(\cdot+y)-f(\cdot)$ and $f_{\varrho}(\cdot+y)-f_{\varrho}(\cdot)$ coincide on $\operatorname{B}_{1-3\varrho}$. Let $f\in\mathcal{F}$ be arbitrary. We split
\begin{align*}
\int_{\mathbb{R}^{n}}|\overline{f}(x+y)-\overline{f}(x)|^{q}\operatorname{d}\! x = \left(\int_{\mathbb{R}^{n}\setminus\operatorname{B}_{1-3\varrho}}+\int_{\operatorname{B}_{1-3\varrho}}\right)|\overline{f}(x+y)-\overline{f}(x)|^{q}\operatorname{d}\! x =:\mathbf{I}_{\varrho}+\mathbf{II}_{\varrho},
\end{align*}
with an obvious definition of $\mathbf{I}_{\varrho}$ and $\mathbf{II}_{\varrho}$.
Ad $\mathbf{I}_{\varrho}$. As $|y|<\varrho$, if $x\in\mathbb{R}^{n}\setminus\operatorname{B}_{1-3\varrho}$, then $x+y\in\mathbb{R}^{n}\setminus\operatorname{B}_{1-4\varrho}$. Therefore, we obtain with a constant $c>0$ independent of $f\in\mathcal{F}$
\begin{align*}
\mathbf{I}_{\varrho} \leq c\int_{\operatorname{B}_{1}\setminus\operatorname{B}_{1-4\varrho}}|f(x)|^{q}\operatorname{d}\! x & \leq c\left(\int_{\operatorname{B}}|f|^{\frac{n}{n-1}}\operatorname{d}\! x\right)^{\frac{(n-1)q}{n}}\mathscr{L}^{n}(\operatorname{B}_{1}\setminus\operatorname{B}_{1-4\varrho})^{\frac{n-q(n-1)}{n}}\\
& \leq c\mathscr{L}^{n}(\operatorname{B}_{1}\setminus\operatorname{B}_{1-4\varrho})^{\frac{n-q(n-1)}{n}}
\end{align*}
and we may hence record that there exists $\delta_{1}>0$ such that if $0<\varrho<\delta_{1}$, then $\mathbf{I}_{\varrho}<\varepsilon/3$.
Ad $\mathbf{II}_{\varrho}$. Firstly, since $1\leq q<n/(n-1)$, we find and fix $0<s<1$ such that $q<n/(n-1+s)$. By Lemma \ref{lem:nec_ell}, $\operatorname{W}^{\mathbb{A},1}(\operatorname{B})\hookrightarrow\operatorname{L}^{1^{*}}(\operatorname{B},V)$ implies that $\mathbb{A}$ is elliptic, so that we are in a position to apply Lemma~\ref{lem:Besov}. Since $f(\cdot+y)-f(\cdot)$ equals $f_{\varrho}(\cdot+y)-f_{\varrho}(\cdot)$ on $\operatorname{B}_{1-3\varrho}$ and all $f_{\varrho}$ are compactly supported in a fixed ball $\operatorname{B}_{R}$, we find with a constant $c>0$ independent of $f\in\mathcal{F}$
\begin{align}\label{eq:kolmogorovest}
\begin{split}
\mathbf{II}_{\varrho} & = \int_{\operatorname{B}_{1-3\varrho}}|f_{\varrho}(x+y)-f_{\varrho}(x)|^{q}\operatorname{d}\! x \leq \int_{\mathbb{R}^{n}}|f_{\varrho}(x+y)-f_{\varrho}(x)|^{q}\operatorname{d}\! x \\
& \leq c\|\mathbb{A} f_{\varrho}\|_{\operatorname{L}^{1}(\operatorname{B}_{R},W)}^{q}|y|^{sq}\\
& \leq c\big(\|\rho_{\varrho}\mathbb{A} f\|_{\operatorname{L}^{1}(\operatorname{B}_{R},W)}^{q}+\|f\otimes_{\mathbb{A}}\operatorname{D}\!\rho_{\varrho}\|_{\operatorname{L}^{1}(\operatorname{B}_{R},W)}^{q}\big)|y|^{sq}\\
& \leq c\big(\|\mathbb{A} f\|_{\operatorname{L}^{1}(\operatorname{B}_{R},W)}^{q}+\|f\otimes_{\mathbb{A}}\operatorname{D}\!\rho_{\varrho}\|_{\operatorname{L}^{1}(\operatorname{B}_{R},W)}^{q}\big)|y|^{sq}.
\end{split}
\end{align}
Pick $\delta_{2}>0$ such that if $|y|<\delta_{2}$, then $c\sup_{f\in\mathcal{F}}\|\mathbb{A} f\|_{\operatorname{L}^{1}(\operatorname{B}_{R},W)}^{q}|y|^{sq}<\varepsilon/3$. Finally, we note that $|\operatorname{D}\!\rho_{\varrho}|\leq 4/\varrho$ by definition of $\rho_{\varrho}$, and that $\operatorname{D}\!\rho_{\varrho}$ is supported in $\operatorname{B}_{1-\varrho}\setminus\operatorname{B}_{1-2\varrho}$, so that
\begin{align}
\left(\int_{\mathbb{R}^{n}}|\operatorname{D}\!\rho_{\varrho}|^{n}\operatorname{d}\! x\right)^{\frac{1}{n}} \leq \frac{c}{\varrho}\big( (1-\varrho)^{n}-(1-2\varrho)^{n}\big)^{\frac{1}{n}}\leq c\,\varrho^{\frac{1}{n}-1}=:C(\varrho).
\end{align}
Hence, by H\"older's Inequality and $\operatorname{W}^{\mathbb{A},1}(\operatorname{B})\hookrightarrow\operatorname{L}^{1^{*}}(\operatorname{B},V)$,
\begin{align*}
\|f\otimes_{\mathbb{A}}\operatorname{D}\!\rho_{\varrho}\|_{\operatorname{L}^{1}(\operatorname{B}_{R},W)}^{q}|y|^{sq} & \leq c\left(\int_{\mathbb{R}^{n}}|\operatorname{D}\!\rho_{\varrho}|^{n}\operatorname{d}\! x\right)^{\frac{q}{n}}\|f\|_{\operatorname{L}^{1^{*}}(\operatorname{B},V)}^{q}|y|^{sq} \\ & \leq c\,C(\varrho)^{q}\sup_{f\in\mathcal{F}}\|f\|_{\operatorname{W}^{\mathbb{A},1}(\operatorname{B})}^{q}|y|^{sq},
\end{align*}
and from here it is evident that there exists $\delta_{3}>0$, depending on $\varrho$, such that if $|y|<\delta_{3}$, then $\|f\otimes_{\mathbb{A}}\operatorname{D}\!\rho_{\varrho}\|_{\operatorname{L}^{1}(\operatorname{B}_{R},W)}^{q}|y|^{sq}<\varepsilon/3$ and so, by \eqref{eq:kolmogorovest}, $\mathbf{II}_{\varrho}<2\varepsilon/3$ for all $f\in\mathcal{F}$. Now fix $0<\varrho<\delta_{1}$ and let $\delta:=\min\{\varrho,\delta_{2},\delta_{3}\}$. Collecting estimates, we see that (ii) is satisfied and thus $\operatorname{W}^{\mathbb{A},1}(\operatorname{B})\hookrightarrow\hookrightarrow\operatorname{L}^{q}(\operatorname{B},V)$.
\end{proof}
With an inexpensive modification of the proof of Theorem \ref{thm:compactness}, one can show that \ref{it:main_c} implies that $\operatorname{W}^{\mathbb{A},1}(\operatorname{B})\hookrightarrow\hookrightarrow\operatorname{L}^q(\operatorname{B},V)$ for all $1\leq q<p$, which trivially then implies \ref{it:main_e}.
\begin{proof}[Proof of Theorem \ref{thm:main}]
It remains to see that \ref{it:main_e} implies \ref{it:main_a}, which is now a simple consequence of the Equivalence Lemma \ref{lem:equivalencelemma}. We choose $E_{1}=\operatorname{W}^{\mathbb{A},1}(\operatorname{B})$, $E_{2}=\operatorname{L}^{1}(\operatorname{B},W)$, $E_{3}=\operatorname{L}^{1}(\operatorname{B},V)$, and $A:=\mathbb{A}\in\mathscr{L}(\operatorname{W}^{\mathbb{A},1}(\operatorname{B}),\operatorname{L}^{1}(\operatorname{B},W))$, whereas $B:=\iota$ is the embedding operator $\iota\colon\operatorname{W}^{\mathbb{A},1}(\operatorname{B})\hookrightarrow\hookrightarrow \operatorname{L}^{1}(\operatorname{B},V)$. It is then clear that $\|u\|_{\operatorname{W}^{\mathbb{A},1}(\operatorname{B})}= \|u\|_*$, so the equivalence lemma yields that $\mathbb{A}$ has finite dimensional null--space.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:main_k} (necessity of FDN)]
Assume that the embedding holds. By standard embeddings of Besov spaces, we have that $\operatorname{W}^{\mathbb{A},1}(\operatorname{B})\hookrightarrow\operatorname{W}^{k-1,p}(\operatorname{B},V)$ for some $p>1$. If $k=1$, we use the implication \ref{it:main_c}$\Rightarrow$\ref{it:main_a} of Theorem \ref{thm:main} to see that $\mathbb{A}$ has FDN. Otherwise, we give the following simple argument: assume that $\mathbb{A}$ does not have FDN, so that the maps $u_j(x):=\exp(jx\cdot\xi)v$ lie in $\ker\mathbb{A}$ for some non--zero $\xi\in\mathbb{C}^n$ and $v\in V\otimes\mathbb{C}$. We traced this example back to \cite{Smith}, but it was likely known before. Since $\mathbb{A} u_j=0$, the assumed embedding and H\"older's Inequality give
\begin{align*}
j^{k-1}\left(\int_{\operatorname{B}}|\exp(jx\cdot\xi)|^p\operatorname{d}\! x\right)^\frac{1}{p}&\lesssim \|u_j\|_{\operatorname{W}^{k-1,p}(\operatorname{B},V)}\lesssim\|u_j\|_{\operatorname{L}^1(\operatorname{B},V)}\\
&\lesssim\left(\int_{\operatorname{B}}|\exp(jx\cdot\xi)|^p\operatorname{d}\! x\right)^\frac{1}{p},
\end{align*}
which yields $j^{k-1}\lesssim 1$ for all $j$ and hence a contradiction, since $k\geq2$. Here constants depend on $\operatorname{diam}\operatorname{B},p,n$ only.
\end{proof}
\section{Appendix}
\subsection{Miscellaneous background}
We quote the following relevant facts without proof:
\begin{lemma}[{\cite[Prop.~6.1]{VS}}]\label{lem:canc}
Let $\mathbb{A}$ as in \eqref{eq:A} be elliptic. Then $\mathbb{A}$ is cancelling if and only if we have that
\begin{align*}
\int_{\mathbb{R}^n}\mathbb{A} u\operatorname{d}\! x=0
\end{align*}
for all $u\in\operatorname{C}^\infty(\mathbb{R}^n,V)$ such that the support of $\mathbb{A} u$ is compact.
\end{lemma}
\begin{lemma}[Peetre--Tartar Equivalence Lemma, {\cite[Lem.~11.1]{Tartar}}]\label{lem:equivalencelemma}
Let $E_{1}$ be a Banach space and let $E_{2},E_{3}$ be two normed spaces (with corresponding norms $\|\cdot\|_{i}$, $i\in\{1,2,3\}$) and let $A\in\mathscr{L}(E_{1},E_{2})$ and $B\in\mathscr{L}(E_{1},E_{3})$ be two bounded linear operators such that $B$ is compact and the norms $\|\cdot\|_{1}$ and $\|\cdot\|_{*}:=\|A\cdot\|_{2}+\|B\cdot\|_{3}$ are equivalent on $E_{1}$. Then $\dim(\ker A)<\infty$.
\end{lemma}
\begin{theorem}[{\cite[Thm.~4]{Kalamajska}}]\label{thm:Ka}
Let $\mathbb{A}$ as in \eqref{eq:A} be $\mathbb{C}$--elliptic, and $\Omega\subset\mathbb{R}^n$ be a star--shaped domain with respect to a ball. Then there exist an integer $d:=d(\mathbb{A})$, a linear map $\mathcal{P}\in\mathscr{L}(\operatorname{C}^\infty(\bar{\Omega},V),\mathbb{R}_d[x]^V)$ and a smooth map $K\in\operatorname{C}^\infty(\mathbb{R}^n\setminus\{0\},\mathscr{L}(W,V))$ such that $|\partial^\alpha K|\sim|\cdot|^{k-n-|\alpha|}$ and
\begin{align*}
u(x)=\mathcal{P}u(x)+\int_{\Omega}K(x-y)\mathbb{A} u(y)\operatorname{d}\! y
\end{align*}
for all $x\in\Omega$ and $u\in\operatorname{C}^\infty(\bar{\Omega},V)$. Therefore $\ker\mathbb{A}\subseteq\mathbb{R}_d[x]^V$.
\end{theorem}
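A standard example illustrating the theorem (not needed in the sequel) is the symmetric gradient $\mathbb{A}u:=\tfrac{1}{2}(\operatorname{D}\! u+\operatorname{D}\! u^{\top})$ on $V=\mathbb{R}^n$: its null--space consists precisely of the rigid motions,
\begin{align*}
\ker\mathbb{A}=\left\{x\mapsto \Pi x+b\colon \Pi\in\mathbb{R}^{n\times n}\text{ skew--symmetric},\ b\in\mathbb{R}^{n}\right\},
\end{align*}
so that one may take $d(\mathbb{A})=1$ in this case.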
\subsection{Other facts about $\operatorname{W}^{\mathbb{A},p}$}
We collect some complementary results that explain, e.g., our choice of definition for the $\mathbb{A}$--Sobolev spaces and of extension technique for $p=1$.
\begin{lemma}\label{lem:sob_variants}
Let $\mathbb{A}$ as in \eqref{eq:A} have FDN. Then $\operatorname{W}^{\mathbb{A},p}(\operatorname{B},V)\simeq\operatorname{V}^{\mathbb{A},p}(\operatorname{B},V)$, for $1\leq p \leq\infty$.
\end{lemma}
\begin{proof}
One embedding is clear. Let $u\in\operatorname{W}^{\mathbb{A},p}(\operatorname{B},V)$. We recall from Theorem \ref{thm:Ka} that $u$ can be represented as $u(x)=\mathcal{P}u(x)+\int_{\operatorname{B}}K(x-y)\mathbb{A} u(y)\operatorname{d}\! y$, where $\mathcal{P}u$ is a polynomial of degree at most $d(\mathbb{A})$ and $|\partial^\alpha K|\sim|\cdot|^{k-n-|\alpha|}$. Let $1\leq l\leq k-1$. Then
\begin{align*}
\|\nabla^l u\|_{\operatorname{L}^p}\leq \|\nabla^l(u-\mathcal{P}u)\|_{\operatorname{L}^p}+\|\nabla^l\mathcal{P}u\|_{\operatorname{L}^p}.
\end{align*}
The first term can easily be controlled by the $\operatorname{L}^p$--norm of $\mathbb{A} u$ by Theorem \ref{thm:anal_harm}\ref{itm:riesz_domains}. The latter term defines a semi--norm on the space of polynomials of degree at most $d(\mathbb{A})$, so it can be controlled by the $\operatorname{L}^p$--norm. We get
\begin{align*}
\|\nabla^l u\|_{\operatorname{L}^p}\lesssim \|\mathbb{A} u\|_{\operatorname{L}^p}+\|\mathcal{P}u\|_{\operatorname{L}^p}\lesssim \|\mathbb{A} u\|_{\operatorname{L}^p}+\|\mathcal{P}u-u\|_{\operatorname{L}^p}+\|u\|_{\operatorname{L}^p}\lesssim \|\mathbb{A} u\|_{\operatorname{L}^p}+\|u\|_{\operatorname{L}^p},
\end{align*}
which concludes the proof. Here constants depend on the domain.
\end{proof}
\begin{lemma}\label{lem:extp>1}
Let $\mathbb{A}$ as in \eqref{eq:A} have FDN, $1< p <\infty$, and $\Omega\subset\mathbb{R}^n$ be a star--shaped domain with respect to a ball. Then there exists a bounded, linear extension operator $E_\Omega\colon\operatorname{W}^{\mathbb{A},p}(\Omega)\rightarrow\operatorname{V}^{k,p}(\mathbb{R}^n,V)$.
\end{lemma}
\begin{proof}
We use the extension suggested in \cite{Kalamajska94}, namely, in the notation of Theorem \ref{thm:Ka},
\begin{align*}
E_\Omega u(x):=\eta(x)\left(\mathcal{P}u(x)+\int_{\Omega}K(x-y)\mathbb{A} u (y)\operatorname{d}\! y\right)
\end{align*}
for $u\in\operatorname{C}^\infty(\bar{\Omega},V)$ and $x\in\mathbb{R}^n$. Here $\eta\in\operatorname{C}^\infty_c(\mathbb{R}^n)$ is a smooth cut--off that equals $1$ in a neighbourhood of $\bar{\Omega}$. We abbreviate $\mathcal{K}u:=K*(\mathbb{A} u\chi_\Omega)$. Let $0\leq l\leq k$, and let $\operatorname{B}$ be a ball containing the support of $\eta$. Then, with domain dependent constants,
\begin{align*}
\|\nabla^l E_\Omega u\|_{p,\operatorname{B}}\lesssim \sum_{j=0}^l \|\nabla^j(\mathcal{P}u+\mathcal{K}u)\|_{p,\operatorname{B}}\leq \|\mathcal{P}u\|_{\operatorname{V}^{l,p}(\operatorname{B},V)}+\sum_{j=0}^l \|\nabla^j\mathcal{K}u\|_{p,\operatorname{B}}.
\end{align*}
We note that $\|\cdot\|_{\operatorname{V}^{l,p}(\operatorname{B},V)}$ and $\|\cdot\|_{\operatorname{L}^p(\Omega,V)}$ both define norms on $\mathbb{R}_d[x]^V$, hence they are equivalent. We also remark that $\nabla^j\mathcal{K}u=(\nabla^j K)*(\mathbb{A} u\chi_\Omega)$, so that $\|\nabla^j\mathcal{K}u\|_{p,\operatorname{B}}\leq\|\nabla^j\mathcal{K}u\|_{p,\mathbb{R}^n}\lesssim\|\mathbb{A} u\|_{p,\Omega}$, where we use the growth bounds on the derivatives of $K$ and boundedness of Riesz potentials, and, in the case $j=l=k$, of singular integrals (Theorem \ref{thm:anal_harm}). Collecting, we get
\begin{align*}
\|\nabla^l E_\Omega u\|_{p,\operatorname{B}}\lesssim\|\mathcal{P}u\|_{p,\Omega}+\|\mathbb{A} u\|_{p,\Omega}\leq\|\mathcal{P}u+\mathcal{K}u\|_{p,\Omega}+\|\mathcal{K}u\|_{p,\Omega}+\|\mathbb{A} u\|_{p,\Omega}\lesssim\|u\|_{\operatorname{W}^{\mathbb{A},p}(\Omega)},
\end{align*}
where the last inequality is obtained from the representation formula and, again, boundedness of Riesz potentials.
\end{proof}
\begin{lemma}\label{lem:nec_ell}
Let $\mathbb{A}$ be as in \eqref{eq:A}, $k=1$. If $\operatorname{W}^{\mathbb{A},1}(\operatorname{B})\hookrightarrow\operatorname{L}^p(\operatorname{B},V)$ for some $p>1$, then $\mathbb{A}$ is elliptic.
\end{lemma}
\begin{proof}
The proof is similar to that of Lemma \ref{lem:EC>emb}. Suppose that $\mathbb{A}$ is not elliptic, so that there exist $\xi\in S^{n-1}$ and $v\in V\setminus\{0\}$ such that $\mathbb{A}[\xi]v=0$. We put $u(x)=f(x\cdot\xi)v$ for some measurable $f\colon(-1,1)\rightarrow\mathbb{R}$. Slicing the integral orthogonally to $\xi$, we have for $1\leq q<\infty$ that
\begin{align*}
\int_{\operatorname{B}} |u|^q\operatorname{d}\! x=|v|^q\int_{-1}^{1}|f(y)|^q\,\mathcal{H}^{n-1}(\{x\in\operatorname{B}\colon x\cdot\xi=y\})\operatorname{d}\! y
=c(n)|v|^q\int_{-1}^{1}|f(y)|^q(1-y^2)^{\frac{n-1}{2}}\operatorname{d}\! y,
\end{align*}
so that
\begin{align*}
c(n,q)|v|^{q}\int_{-1/2}^{1/2}|f(y)|^q\operatorname{d}\! y\leq \|u\|^{q}_{\operatorname{L}^q(\operatorname{B},V)}\leq C(n,q)|v|^{q}\int_{-1}^{1}|f(y)|^q\operatorname{d}\! y.
\end{align*}
We now let $f\in\operatorname{L}^1(-1,1)$ such that $\int_{-1/2}^{1/2}|f|^p=\infty$, let $\operatorname{C}^\infty_c(-1,1)\ni\varphi_j\rightarrow f$ in $\operatorname{L}^1$, and put $u_j(x)=\varphi_j(x\cdot\xi)v$ for $x\in\operatorname{B}$. It is then clear that $\mathbb{A} u_j=0$ in the classical sense, and the assumed estimate applied to $u_j$ gives
\begin{align*}
\int_{-1/2}^{1/2}|\varphi_j(y)|^p\operatorname{d}\! y\lesssim\int_{-1}^{1}|\varphi_j(y)|\operatorname{d}\! y,
\end{align*}
which contradicts the choice of $f$.
\end{proof}
\begin{lemma}\label{lem:embimpliesEC}
Let $\mathbb{A}$ be as in \eqref{eq:A}. If $\operatorname{W}^{\mathbb{A},1}(\operatorname{B})\hookrightarrow\operatorname{W}^{k-1,n/(n-1)}(\operatorname{B},V)$, then $\mathbb{A}$ is elliptic and cancelling.
\end{lemma}
\begin{proof}
Necessity of ellipticity follows via Lemma \ref{lem:nec_ell} for $k=1$, or by simplifying the arguments for necessity of $\mathbb{C}$--ellipticity in the proof of Theorem \ref{thm:main_k} for $k>1$. We leave the details to the interested reader.
We next show that our assumed embedding implies $\dot{\operatorname{W}}^{\mathbb{A},1}(\mathbb{R}^n)\hookrightarrow\dot{\operatorname{W}}^{k-1,n/(n-1)}(\mathbb{R}^n,V)$ by a scaling argument and use the necessity part of \cite[Thm.~1.3]{VS}. Let $u\in\operatorname{C}^\infty_c(\mathbb{R}^n,V)$ be such that $\operatorname{spt} u\subset\operatorname{B}_r:=\operatorname{B}(0,r)$. Then $u_r(x):=u(rx)$ for $x\in\mathbb{R}^n$ is also a test function, with $\operatorname{spt} u_r\subset\operatorname{B}:=\operatorname{B}(0,1)$. We estimate, with constants independent of $r$:
\begin{align*}
\|\operatorname{D}\!^{k-1} u\|_{\operatorname{L}^{\frac{n}{n-1}}}&=\left(\int_{\operatorname{B}_r}|\operatorname{D}\!^{k-1}u(x)|^{\frac{n}{n-1}}\operatorname{d}\! x\right)^{\frac{n-1}{n}}=\left(\int_{\operatorname{B}}r^{\frac{n(k-1)}{n-1}}|\operatorname{D}\!^{k-1}u_r(y)|^{\frac{n}{n-1}}r^n\operatorname{d}\! y\right)^{\frac{n-1}{n}}\\
&=r^{n-k}\left(\int_{\operatorname{B}}|\operatorname{D}\!^{k-1}u_r(y)|^{\frac{n}{n-1}}\operatorname{d}\! y\right)^{\frac{n-1}{n}}\leq cr^{n-k}\int_{\operatorname{B}}|\mathbb{A} u_r(y)|+|u_r(y)|\operatorname{d}\! y\\
&\leq c\int_{\operatorname{B}_r}|\mathbb{A} u(x)|\operatorname{d}\! x=c\|\mathbb{A} u\|_{\operatorname{L}^1},
\end{align*}
where the last inequality follows from a change of variable and the Poincar\'e--type inequality with zero boundary values (for elliptic operators)
\begin{align}\label{eq:zerotracepoinc}
\|v\|_{\operatorname{L}^1(\Omega,V)}\leq c(\operatorname{diam}\Omega)^{k}\|\mathbb{A} v\|_{\operatorname{L}^1(\Omega,W)}
\end{align}
for all $v\in\operatorname{C}^\infty_c(\Omega,V)$.
\end{proof}
The inequality \eqref{eq:zerotracepoinc} follows from the Green--type formula \eqref{eq:representation} and Theorem \ref{thm:anal_harm}\ref{itm:riesz_domains}, as does the inequality in Lemma \ref{lem:zerotraceemb} below.
\begin{lemma}\label{lem:zerotraceemb}
Let $\mathbb{A}$ as in \eqref{eq:A} be elliptic, $k=1$. Then for each $1\leq p<n/(n-1)$, there exists $c>0$ such that
\begin{align*}
\|u\|_{\operatorname{L}^p(\operatorname{B},V)}\leq c\|\mathbb{A} u\|_{\operatorname{L}^1(\operatorname{B},W)}
\end{align*}
for all $u\in\operatorname{C}^\infty_c(\operatorname{B},V)$.
\end{lemma}
\section{Convergence Analysis} \label{sec:analysis}
In this section, we present a convergence analysis of BCGD
and establish the optimal learning rate.
The optimality is defined to be the learning rate
which results in the fastest decrease in the loss
at the current parameters.
The standard $L_2$-loss will be mainly discussed.
However, we also present a convergence result for
general differentiable convex loss functions whose gradients are Lipschitz continuous on bounded domains, such as the $L_p$-loss with even $p$.
We measure the approximation error in terms of the
distance to the global optimum.
For example, when the $L_2$-loss is employed,
the error is $\mathcal{L}(\bm{W}^{\textbf{k}_k}) - \mathcal{L}(\bm{W}^*)=\|\bm{W}^{\textbf{k}_k}\bm{X} - \bm{W}^*\bm{X}\|_F^2$.
We first identify the effects of width in DLNs in gradient-based training under either the orth-identity or the balanced \cite{Arora2018convergenceDLN} initialization (Section~\ref{subsec:initialization}).
\begin{theorem} \label{thm:role of width}
Suppose the weight matrices are initialized
according to either the orth-identity or the balanced initialization, described in
Section~\ref{subsec:initialization}.
Let $n_\ell$ be the width of the $\ell$-th layer.
Then, the training process of
any gradient-based optimization methods (including GD, SGD, BCGD, BCSGD) is
independent of
the choice of $n_\ell$'s as long as they satisfy
\begin{equation} \label{role-width}
\min_{1 \le \ell < L} n_\ell \ge \max\{n_0, n_L \}.
\end{equation}
\end{theorem}
\begin{proof}
The proof can be found in Appendix~\ref{app:thm:role of width}.
\end{proof}
Theorem~\ref{thm:role of width} implies that
the width does not play any role in gradient-based training if the condition of \eqref{role-width} is met and
the weight matrices are initialized in a certain manner.
However, the same conclusion does not follow if the random initialization is employed.
This indicates that the role of width highly depends on
how the weight matrices are initialized.
With a proper initialization,
over-parameterization by the width
can be avoided.
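To make Theorem~\ref{thm:role of width} concrete, the following numpy sketch (ours, not the authors' code) runs full-batch gradient descent on two architectures of different hidden widths. As a stand-in for the orth-identity initialization of Section~\ref{subsec:initialization} --- an assumption on our part; only the zero-padded structure matters for the argument --- we initialize every layer with a zero-padded identity, after which the two loss trajectories coincide.

```python
import numpy as np

def init_padded_identity(widths):
    # Zero-padded identity weights: our stand-in (an assumption) for the
    # orth-identity initialization described in the paper.
    Ws = []
    for n_in, n_out in zip(widths[:-1], widths[1:]):
        W = np.zeros((n_out, n_in))
        d = min(n_out, n_in)
        W[:d, :d] = np.eye(d)
        Ws.append(W)
    return Ws

def gd_losses(widths, X, Y, eta=0.02, steps=100):
    # Full-batch GD on L(W) = ||W_L ... W_1 X - Y||_F^2
    # (the constant factor in the gradient is absorbed into eta).
    Ws = init_padded_identity(widths)
    losses = []
    for _ in range(steps):
        P = X
        for M in Ws:                      # P = W_L ... W_1 X
            P = M @ P
        losses.append(np.linalg.norm(P - Y) ** 2)
        grads = []
        for l in range(len(Ws)):
            A = np.eye(widths[-1])
            for M in Ws[l + 1:][::-1]:    # A = W_L ... W_{l+2}
                A = A @ M
            B = X
            for M in Ws[:l]:              # B = W_l ... W_1 X
                B = M @ B
            grads.append(A.T @ (P - Y) @ B.T)
        Ws = [W - eta * G for W, G in zip(Ws, grads)]
    return np.array(losses)

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 8)) / np.sqrt(8)
Y = rng.standard_normal((2, 8))
narrow = gd_losses([3, 3, 3, 2], X, Y)    # hidden widths = max{n_0, n_L}
wide = gd_losses([3, 7, 9, 2], X, Y)      # strictly wider hidden layers
print(np.allclose(narrow, wide))          # identical loss trajectories
```

The zero blocks and the extra identity entries of the wider network only ever multiply exact zeros, so the two trajectories agree to the last bit, not merely approximately.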
\subsection{Convergence of BCGD}
We first focus on the standard $L_2$ loss function
and present a general convergence analysis of BCGD.
We do not make any assumptions other than
$\text{range}(\bm{Y}\bm{X}^\dagger) \subset \text{range}(\bm{W}_L^{(0)})$.
We follow the convention of
$0\times \infty = \frac{1}{\infty} = 0 \times \frac{1}{0} = 0$.
\begin{theorem} \label{thm:convg-l2}
Let $\ell(z;b) = (z-b)^2/2$.
Suppose
all columns of $\bm{W}_{L}^{(0)}$
are initialized to be in a subspace $K$ in $\mathbb{R}^{n_L}$ such that
$\text{range}(\bm{Y}\bm{X}^\dagger) \subset K$.
Then, the $k$-th sweep (the $kL$-th iteration) of
BCGD \eqref{def:bcgd}
with the learning rates of
\begin{equation} \label{LR-l2-loss-exact}
\eta^{\textbf{k}_{(s,\ell-1)}}_\ell =
\frac{\eta}{\|\bm{W}_{L:(\mathfrak{i}(\ell)+1)}^{\textbf{k}_{(s,\ell-1)}}\|^2
\|\bm{W}_{(\mathfrak{i}(\ell)-1):1}^{\textbf{k}_{(s,\ell-1)}}\bm{X}
\|^2}, \qquad 0 < \eta < 2,
\end{equation}
where
$\mathfrak{i}(\ell) = \ell$ if the ascending BCGD is employed
and
$\mathfrak{i}(\ell) = L-\ell+1$ if the descending BCGD is employed,
satisfies
\begin{equation} \label{Rate-l2-loss-exact}
\mathcal{L}(\bm{W}^{\textbf{k}_{k}}) - \mathcal{L}(\bm{W}^*)
\le
\left(\mathcal{L}(\bm{W}^{\textbf{k}_0}) - \mathcal{L}(\bm{W}^*)\right)
\prod_{s=0}^{k-1}
\prod_{\ell=1}^L
\left(\gamma^{\textbf{k}_{(s,\ell-1)}}\right)^2,
\end{equation}
where $\bm{W}^* = \bm{Y}\bm{X}^\dagger$,
$r_x= \text{rank}(\bm{X})$,
$r= \dim(K)$, and
\begin{align*}
\gamma^{\textbf{k}_{(s,\ell-1)}}
=
\max\left\{1-\frac{\eta}{\kappa_{r}^2(\bm{W}_{L:(\mathfrak{i}(\ell)+1)}^{\textbf{k}_{(s,\ell-1)}})\kappa^2_{r_x}(\bm{W}_{(\mathfrak{i}(\ell)-1):1}^{\textbf{k}_{(s,\ell-1)}}\bm{X})}, \eta -1 \right\}.
\end{align*}
Furthermore, the optimal learning rate is
\begin{equation} \label{LR-l2-Optimal}
\eta^{\textbf{k}_{(s,\ell-1)}}_\text{opt}
= \frac{
\left\|\frac{\partial \mathcal{L}}{\partial \bm{W}_{\mathfrak{i}(\ell)}}\big|_{\bm{\theta}=\bm{\theta}^{\textbf{k}_{(s,\ell-1)}}}\right\|_F^2}{\left\|\bm{W}_{L:(\mathfrak{i}(\ell)+1)}^{\textbf{k}_{(s,\ell-1)}}\frac{\partial \mathcal{L}}{\partial \bm{W}_{\mathfrak{i}(\ell)}}\big|_{\bm{\theta}=\bm{\theta}^{\textbf{k}_{(s,\ell-1)}}}\bm{W}_{(\mathfrak{i}(\ell)-1):1}^{\textbf{k}_{(s,\ell-1)}}\bm{X}\right\|_F^2},
\end{equation}
and with the optimal learning rate of \eqref{LR-l2-Optimal}, we obtain
\begin{align*}
\mathcal{L}(\bm{W}^{\textbf{k}_{k}})
&=\mathcal{L}(\bm{W}^{\textbf{k}_{0}})
-
\sum_{s=0}^{k-1}\sum_{\ell=1}^{L}
\frac{
\left\|\frac{\partial \mathcal{L}}{\partial \bm{W}_{\mathfrak{i}(\ell)}}\big|_{\bm{\theta}=\bm{\theta}^{\textbf{k}_{(s,\ell-1)}}}\right\|_F^4}{\left\|\bm{W}_{L:(\mathfrak{i}(\ell)+1)}^{\textbf{k}_{(s,\ell-1)}}\frac{\partial \mathcal{L}}{\partial \bm{W}_\ell}\big|_{\bm{\theta}=\bm{\theta}^{\textbf{k}_{(s,\ell-1)}}}\bm{W}_{(\mathfrak{i}(\ell)-1):1}^{\textbf{k}_{(s,\ell-1)}}\bm{X}\right\|_F^2}.
\end{align*}
\end{theorem}
\begin{proof}
The proof can be found in Appendix~\ref{app:thm:convg-l2}.
\end{proof}
The optimality of \eqref{LR-l2-Optimal}
should be understood
in the sense that
it gives the smallest loss
for the next iterate.
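This optimality is easy to check numerically: holding all other blocks fixed, the $L_2$-loss is a quadratic polynomial in the step size along the block gradient, and \eqref{LR-l2-Optimal} is its exact vertex. A minimal sketch for a two-layer network (dimensions chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n0, n1, n2 = 12, 4, 5, 3
X = rng.standard_normal((n0, m))
Y = rng.standard_normal((n2, m))
W1 = rng.standard_normal((n1, n0))
W2 = rng.standard_normal((n2, n1))

def loss(W2_, W1_):
    return 0.5 * np.linalg.norm(W2_ @ W1_ @ X - Y) ** 2

# Block update of W1: A is the product above the block, B the product below.
A, B = W2, X
Delta = W2 @ W1 @ X - Y
G = A.T @ Delta @ B.T                               # dL/dW1
eta_opt = np.linalg.norm(G) ** 2 / np.linalg.norm(A @ G @ B) ** 2

# loss(W1 - t G) is quadratic in t with vertex at t = eta_opt,
# which sits exactly at the midpoint of the sampled grid below.
etas = np.linspace(0.0, 2.0 * eta_opt, 201)
vals = [loss(W2, W1 - t * G) for t in etas]
print(int(np.argmin(vals)))                         # 100, i.e. t = eta_opt
```

The key identity is $\langle\Delta, \bm{A}G\bm{B}\rangle_F = \|G\|_F^2$ for $G=\bm{A}^T\Delta\bm{B}^T$, which is what makes the vertex of the quadratic equal to \eqref{LR-l2-Optimal}.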
The assumption that all columns of $\bm{W}_L^{(0)}$ lie in a subspace $K$ with $\text{range}(\bm{Y}\bm{X}^\dagger) \subset K$
is automatically satisfied if $n_{L-1} \ge n_L$ and
$\bm{W}_L^{(0)}$ is a full rank matrix.
Also, since $\text{range}(\bm{W}^{(0)}_L)$ affects the rate of convergence
through $\kappa_{r}(\bm{W}_{L:(\mathfrak{i}(\ell)+1)}^{\textbf{k}_{(k,\ell-1)}})$,
a faster convergence is expected
if $\text{range}(\bm{W}^{(0)}_L) = \text{range}(\bm{Y}\bm{X}^\dagger)$.
If $n_L > n_{L-1} \ge n_0$,
the choice of $\bm{W}^{(0)}_L \approxeq \bm{Q}$
satisfies this, where
$\bm{Q}$ has orthonormal columns and $\text{range}(\bm{Q}) = \text{range}(\bm{Y}\bm{X}^T)$.
We remark that in many practical applications, the number of training data is typically larger than both the input and the output dimensions, i.e., $m > \max\{n_0, n_L\}$. Also, the input dimension is greater than the output dimension, i.e., $n_0 > n_L$.
For example, the MNIST handwritten digit dataset
contains $60,000$ training data whose input and output dimensions are $784$
and $10$, respectively.
Theorem~\ref{thm:convg-l2} indicates
that
as long as $n_\ell \ge \min\{r_x, r\}$, the approximation error is strictly decreasing after a single sweep of BCGD
if, for some $\ell$, both
$\kappa_{r}(\bm{W}_{L:(\mathfrak{i}(\ell)+1)}^{\textbf{k}_{(s,\ell-1)}})$
and
$\kappa_{r_x}(\bm{W}_{(\mathfrak{i}(\ell)-1):1}^{\textbf{k}_{(s,\ell-1)}}\bm{X})$
are finite.
Also, our analysis shows
the ineffectiveness of training a network which has a layer whose width is less than
$\max\{r_x, r\}$.
This is because
if $n_\ell < \max\{r_x, r\}$,
either $\sigma_{r}(\bm{W}_{L:(\mathfrak{i}(\ell)+1)}^{\textbf{k}_{(s,\ell-1)}})$
or
$\sigma_{r_x}(\bm{W}_{(\mathfrak{i}(\ell)-1):1}^{\textbf{k}_{(s,\ell-1)}}\bm{X})$
is zero and thus, $\gamma^{\textbf{k}_{(k,\ell-1)}} = 1$.
This indicates that,
in order to obtain faster convergence,
one should employ a network whose architecture satisfies $n_\ell \ge \max\{r_x, r\}$ for all $1 \le \ell < L$.
Also, if $\bm{W}_1^{(0)}$
is initialized so that all of its rows are in $\text{range}(\bm{X})$,
one can expect to find the least norm solution.
In order for an iteration of BCGD to strictly
decrease the approximation error,
it is important to guarantee the condition of
\begin{equation} \label{condition-non-singular}
\sigma_{r}^2(\bm{W}_{L:(\mathfrak{i}(\ell)+1)}^{\textbf{k}_{(s,\ell-1)}})
\sigma_{r_x}^2(\bm{W}_{(\mathfrak{i}(\ell)-1):1}^{\textbf{k}_{(s,\ell-1)}}\bm{X}) > 0.
\end{equation}
In what follows, we show that
if the initial approximation is sufficiently close to
the global optimum under the orth-identity initialization (Section~\ref{subsec:initialization}),
then the layer-wise training (BCGD) is guaranteed to converge to the global optimum at a linear rate.
\begin{theorem} \label{thm-l2-identity}
Under the same conditions of Theorem~\ref{thm:convg-l2},
let $\bm{X}$ be a full-row rank matrix
and $n_\ell \ge \max\{n_0, n_L\}$ for all $1 \le \ell < L$.
Suppose the weight matrices are initialized from the orth-identity initialization (Section~\ref{subsec:initialization}) and
the initial distance $\|\bm{W}^{\textbf{k}_{0}} - \bm{W}^*\|_F$
is at most $\tilde{\sigma}_{\min}/c$, where
$\tilde{\sigma}_{\min} = \sigma_{\min}(\bm{W}^*\bm{X})/\|\bm{X}\|$,
\begin{equation} \label{def:c-min}
c=1 + \kappa^2(\bm{X})\left(\frac{1+\sqrt{1+4h(L)\tilde{\sigma}_{\min}/\kappa^2(\bm{X})}}{2h(L)\tilde{\sigma}_{\min}}\right), \qquad
h(L) = \frac{LR_L(1-R_L)^{2L-2}}{(1+R_L)^{3L-1}},
\end{equation}
and $R_L=\frac{2}{(5L-3)+\sqrt{(5L-3)^2-4L}}$.
Then, with the learning rates of \eqref{LR-l2-loss-exact},
the $k$-th sweep of BCGD satisfies
\begin{align*}
\mathcal{L}(\bm{W}^{\textbf{k}_k}) - \mathcal{L}(\bm{W}^*)
\le
\left(\mathcal{L}(\bm{W}^{\textbf{k}_0}) - \mathcal{L}(\bm{W}^*)\right)
(\gamma^{2L})^{k},
\end{align*}
where
$\gamma = 1 - \frac{\eta}{5\kappa^2(\bm{X})}$
and $0 < \eta \le 1$.
\end{theorem}
\begin{proof}
By Lemma~\ref{lemma-min-sing-value}, the proof readily follows from Theorem~\ref{thm:convg-l2}.
\end{proof}
\begin{lemma} \label{lemma-min-sing-value}
Under the same conditions of Theorem~\ref{thm-l2-identity},
we have
\begin{align*}
\gamma^{\textbf{k}_{(k,\ell-1)}} < 1 - \frac{\eta}{\kappa^2(\bm{X})}
\left(\frac{1-R_L}{1+R_L}\right)^{2(L-1)} \le
\gamma = 1 - \frac{\eta}{5\kappa^2(\bm{X})}.
\end{align*}
\end{lemma}
\begin{proof}
The proof can be found in Appendix~\ref{app:lemma-min-sing-value}.
\end{proof}
We remark that the rate of convergence for a single sweep is $\gamma^{2L}$.
When the speed of convergence is measured against the number of sweeps, this implies that the deeper the network, the faster the convergence.
Thus, if
the depth of a linear network is sufficiently large,
the global optimum can be reached (within machine accuracy) by the layer-wise training (BCGD) after updating each weight matrix only once.
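As an illustration (a sketch under our own choice of data and padded-identity initialization, not the authors' implementation), one can run ascending BCGD sweeps with the learning rates \eqref{LR-l2-loss-exact} at $\eta=1$, using spectral norms for $\|\cdot\|$, and observe the sweep-wise monotone decrease of the loss:

```python
import numpy as np

def loss(Ws, X, Y):
    P = X
    for M in Ws:
        P = M @ P
    return 0.5 * np.linalg.norm(P - Y) ** 2

def bcgd_sweep(Ws, X, Y, eta=1.0):
    # One ascending sweep of BCGD; step sizes eta / (||A||^2 ||B||^2)
    # with spectral norms, as in (LR-l2-loss-exact).
    for l in range(len(Ws)):
        A = np.eye(Y.shape[0])
        for M in Ws[l + 1:][::-1]:        # A = W_L ... W_{l+2}
            A = A @ M
        B = X
        for M in Ws[:l]:                  # B = W_l ... W_1 X
            B = M @ B
        Delta = A @ Ws[l] @ B - Y
        G = A.T @ Delta @ B.T             # block gradient
        lr = eta / (np.linalg.norm(A, 2) ** 2 * np.linalg.norm(B, 2) ** 2)
        Ws[l] = Ws[l] - lr * G
    return Ws

rng = np.random.default_rng(2)
X = rng.standard_normal((3, 10))
Y = rng.standard_normal((2, 10))
Ws = [np.vstack([np.eye(3), np.zeros((1, 3))]),   # W_1: 4 x 3
      np.eye(4),                                  # W_2: 4 x 4
      np.hstack([np.eye(2), np.zeros((2, 2))])]   # W_3: 2 x 4
hist = [loss(Ws, X, Y)]
for _ in range(30):
    Ws = bcgd_sweep(Ws, X, Y)
    hist.append(loss(Ws, X, Y))
print(all(b <= a + 1e-9 for a, b in zip(hist, hist[1:])))  # monotone
```

Since $\|\bm{A}G\bm{B}\|_F \le \|\bm{A}\|\,\|\bm{B}\|\,\|G\|_F$, the step above never exceeds the per-block optimal step \eqref{LR-l2-Optimal}, so each block update cannot increase the loss.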
We note that the work of \cite{Arora2018convergenceDLN}
has a similar initialization condition.
Theorem~\ref{thm-l2-identity} relies on the assumption that
the initial approximation is sufficiently close to the global optimum $\bm{W}^*\bm{X}$
in terms of $\bm{X}$, $\sigma_{\min}(\bm{W}^*\bm{X})$ and the depth $L$.
As a special case of $d_{\text{out}} = 1$,
a similar result can be obtained without this restriction.
\begin{theorem} \label{thm:l2-dout1}
Under the same conditions of Theorem~\ref{thm:convg-l2},
let $n_L = 1$,
$n_\ell \ge n_0$ for all $1 \le \ell < L$,
and let $\bm{X}$ be a full row-rank matrix.
Suppose
the weight matrices are initialized
from the orth-identity initialization (Section~\ref{subsec:initialization}),
and
the global minimizer satisfies
$\bm{W}^* \ne \bm{W}^{\textbf{k}_{(0,\ell-1)}}\left(\bm{I}_{n_0} - \|\bm{X}\|^2(\bm{XX}^T)^{-1}/\eta\right)$ for all $1 \le \ell \le L$,
and the depth $L$ is chosen to satisfy
$$
L \ge \frac{\log \left(\frac{\sigma_{\min}(\bm{W}^*\bm{X})}{c\|(\bm{W}^{\textbf{k}_0} - \bm{W}^*)\bm{X}\|_F}\right)}{\log(1-\eta/\kappa^2(\bm{X}))},
$$
where $c$ is defined in \eqref{def:c-min} and $0<\eta \le 1$.
Then, the $k$-th sweep of descending BCGD
with the learning rate of \eqref{LR-l2-loss-exact}
satisfies
\begin{equation}
\begin{split}
\mathcal{L}(\bm{W}^{\textbf{k}_k}) - \mathcal{L}(\bm{W}^*)
&<
\left(\mathcal{L}(\bm{W}^{\textbf{k}_0}) - \mathcal{L}(\bm{W}^*)\right)
\left(1 -\frac{\eta}{\kappa^2(\bm{X})} \right)^{2(L+k-1)}(\gamma^{2(L-1)})^{k-1},
\end{split}
\end{equation}
where $\gamma = 1 - \frac{\eta}{5\kappa^2(\bm{X})}$.
\end{theorem}
\begin{proof}
The proof can be found in Appendix~\ref{app:thm:l2-dout1}.
\end{proof}
\subsection{Convergence of BCGD for general convex loss function} \label{convg-gen-loss}
We present a general convergence analysis of the layer-wise training (BCGD) for convex differentiable loss functions.
For general loss functions, let $\bm{W}^*$ be the solution to
$\min_{\bm{W}}\mathcal{L}(\bm{W})$.
For a matrix $A$, the matrix $L_{p,q}$ norm is defined by
$$
\|A\|_{p,q} = \left(\sum_{j=1}^n \left(\sum_{i=1}^m |a_{ij}|^p\right)^{q/p}\right)^{1/q}, \quad p, q \ge 1,
$$
and the max norm is $\|A\|_{\max} = \max_{i,j} |a_{ij}|$.
\begin{theorem} \label{thm:convg-convex}
Suppose $\ell(z;b)$ is convex and twice differentiable (as a function of $z$),
and that its second derivative satisfies
$
|\ell''(z;b)| \le \mathcal{C}(z).
$
If the learning rates satisfy
\begin{equation} \label{LR-convex}
0 < \eta_\ell^{\textbf{k}_{(k,\ell-1)}} \le \frac{1}{\|\mathcal{C}(\Delta^{\textbf{k}_{(k,\ell-1)}})\|_{\max} \|\bm{W}_{L:(\mathfrak{i}(\ell)+1)}^{\textbf{k}_{(k,\ell-1)}}\|^2\|\bm{W}_{(\mathfrak{i}(\ell)-1):1}^{\textbf{k}_{(k,\ell-1)}}\bm{X}\|^2},
\end{equation}
where $\mathcal{C}$ is applied element-wise
and $\Delta^{\textbf{k}_{(k,\ell-1)}} = \bm{W}^{\textbf{k}_{(k,\ell-1)}}\bm{X} - \bm{Y}$,
the $(Lk+\ell)$-th iteration of BCGD satisfies
\begin{align}
\mathcal{L}(\bm{\theta}^{\textbf{k}_{(k,\ell)}})
&\le
\mathcal{L}(\bm{\theta}^{\textbf{k}_{(k,\ell-1)}})
-\frac{\eta_\ell^{\textbf{k}_{(k,\ell-1)}}}{2}
\| \mathcal{J}^{\textbf{k}_{(k,\ell-1)}}\|_F^2,
\end{align}
where
$
\mathcal{J}^{\textbf{k}_{(k,\ell-1)}}=\frac{\partial \mathcal{L}(\bm{\theta}) }{\partial \bm{W}_{\mathfrak{i}(\ell)}}\big|_{\bm{\theta} = \bm{\theta}^{\textbf{k}_{(k,\ell-1)}}}=
(\bm{W}_{(\mathfrak{i}(\ell)-1):1}^{\textbf{k}_{(k,\ell-1)}}\bm{X}\bm{J}^{\textbf{k}_{(k,\ell-1)}}\bm{W}_{L:(\mathfrak{i}(\ell)+1)}^{\textbf{k}_{(k,\ell-1)}})^T.
$
Furthermore,
\begin{itemize}
\item The (near) optimal learning rate is
\begin{equation} \label{LR-gen-Opt}
\eta_\text{opt}^{\textbf{k}_{(k,\ell-1)}} =
\frac{\| \mathcal{J}^{\textbf{k}_{(k,\ell-1)}}\|_F^2}
{\|\mathcal{C}(\Delta^{\textbf{k}_{(k,\ell-1)}})\|_{\max}
\|\bm{W}_{L:(\mathfrak{i}(\ell)+1)}^{\textbf{k}_{(k,\ell-1)}}
\mathcal{J}^{\textbf{k}_{(k,\ell-1)}}
\bm{W}_{(\mathfrak{i}(\ell)-1):1}^{\textbf{k}_{(k,\ell-1)}}\bm{X}\|_F^2}.
\end{equation}
\item For each $\ell$,
$\lim_{k \to \infty} \eta_\ell^{\textbf{k}_{(k,\ell-1)}}\| \mathcal{J}^{\textbf{k}_{(k,\ell-1)}}\|_F^2 = 0$.
\item
$\frac{1}{kL}\sum_{s=0}^{k-1}\sum_{\ell=1}^L
\eta_\ell^{\textbf{k}_{(k,\ell-1)}}\| \mathcal{J}^{\textbf{k}_{(k,\ell-1)}}\|_F^2 = \mathcal{O}(\frac{1}{kL})$.
\item If $0 < \inf_k \eta_\ell^{\textbf{k}_{(k,\ell-1)}}
\le \sup_k \eta_\ell^{\textbf{k}_{(k,\ell-1)}} \le 1$,
we have
$$\lim_{k \to \infty} \|\eta_\ell^{\textbf{k}_{(k,\ell-1)}} \mathcal{J}^{\textbf{k}_{(k,\ell-1)}}\|_F^2 = 0,
\qquad \lim_{k \to \infty} \| \mathcal{J}^{\textbf{k}_{(k,\ell-1)}}\|_F^2 = 0.$$
Therefore,
$\{\bm{W}_\ell^{(k)}\}_{\ell=1}^L \overset{k\to\infty}{\to} \{\hat{\bm{W}}_\ell\}_{\ell=1}^L$
and
$\{\hat{\bm{W}}_\ell\}_{\ell=1}^L$ is a stationary point.
If $\hat{\bm{W}}_{L:1}$ is a local minimum,
then it is the global minimum.
\end{itemize}
\end{theorem}
\begin{proof}
The proof can be found in Appendix~\ref{app:thm:convg-convex}.
\end{proof}
Theorem~\ref{thm:convg-convex} shows that as long as the learning rates satisfying \eqref{LR-convex} are bounded below away from 0 and above by 1 for all but finitely many $k$,
the BCGD finds a stationary point at the rate of $\mathcal{O}(1/kL)$ where $k$ is the number of sweeps and $L$ is the depth of DLN.
Also,
since the loss $\ell$ is known a priori,
the (near) optimal learning rate can be applied directly in practice.
For example, when the $p$-norm is used for the loss, i.e., $\ell(z;b) = |z-b|^p/p$ where $1 < p < \infty$ and $p$ is even,
the (near) optimal learning rate is
\begin{equation} \label{LR-lp-Optimal}
\eta_\text{opt}^{\textbf{k}_{(k,\ell-1)}} = \frac{\| \mathcal{J}^{\textbf{k}_{(k,\ell-1)}}\|_F^2}
{(p-1)\|\Delta^{\textbf{k}_{(k,\ell-1)}}\|_{\max}^{p-2}
\|\bm{W}_{L:(\mathfrak{i}(\ell)+1)}^{\textbf{k}_{(k,\ell-1)}}
\mathcal{J}^{\textbf{k}_{(k,\ell-1)}}
\bm{W}_{(\mathfrak{i}(\ell)-1):1}^{\textbf{k}_{(k,\ell-1)}}\bm{X}\|_F^2}.
\end{equation}
Note that when $p=2$, the above is identical to the optimal learning rate of \eqref{LR-l2-Optimal}.
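The following hedged sketch (dimensions and scales chosen arbitrarily) takes one block step of size \eqref{LR-lp-Optimal} for $p=4$ and checks that the $L_p$-loss decreases; the curvature factor $(p-1)\|\Delta\|_{\max}^{p-2}$ in the denominator keeps the step conservative.

```python
import numpy as np

p = 4
rng = np.random.default_rng(3)
X = rng.standard_normal((3, 8))
Y = rng.standard_normal((2, 8))
W1 = 0.5 * rng.standard_normal((4, 3))
W2 = 0.5 * rng.standard_normal((2, 4))

def loss(W2_, W1_):
    return np.sum(np.abs(W2_ @ W1_ @ X - Y) ** p) / p

A, B = W2, X                       # products above / below the block W1
Delta = W2 @ W1 @ X - Y
Jt = Delta ** (p - 1)              # elementwise l'(z;b) = (z-b)^{p-1}, p even
G = A.T @ Jt @ B.T                 # dL/dW1
eta = np.linalg.norm(G) ** 2 / (
    (p - 1) * np.max(np.abs(Delta)) ** (p - 2)
    * np.linalg.norm(A @ G @ B) ** 2)
print(loss(W2, W1 - eta * G) < loss(W2, W1))
```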
\subsection{Convergence of BCSGD}
In this subsection, a convergence analysis of BCSGD \eqref{def:bcsgd} is presented
with the standard $L_2$-loss.
We first describe the block coordinate stochastic gradient descent (BCSGD) as follows.
At the $(Lk+\ell)$-th iteration,
an index $i_{Lk+\ell}$ is randomly chosen from $\{1,\cdots,m\}$
and the $\mathfrak{i}(\ell)$-th layer weight matrix is updated
according to
\begin{equation} \label{def:bcsgd}
\begin{split}
\bm{W}_{\mathfrak{i}(\ell)}^{(k+1)} =
\bm{W}_{\mathfrak{i}(\ell)}^{(k)}
- \eta_{\mathfrak{i}(\ell)}^{\textbf{k}_{(k,\ell-1)}} \frac{\partial \mathcal{L}_{i_{Lk+\ell}}(\bm{\theta}) }{\partial \bm{W}_{\mathfrak{i}(\ell)}}\bigg|_{\bm{\theta} = \bm{\theta}^{\textbf{k}_{(k,\ell-1)}}},
\end{split}
\end{equation}
where
$\textbf{k}_{(k,\ell)} = \textbf{k}_{(k,\ell-1)} + \bm{e}_{\mathfrak{i}(\ell)}$.
Again,
$\mathfrak{i}(\ell) = \ell$ if the ascending (bottom to top) ordering is employed
and $\mathfrak{i}(\ell) = L-\ell+1$ if the descending (top to bottom) ordering is employed.
Given a discrete random variable $i \sim \bm{\pi}$ on $[m]$,
we denote the expectation with respect to $i$
conditioned on all other previous random variables
by $\mathbf{E}_{i}$.
\begin{theorem} \label{thm:convg-l2-loss-BCSGD}
Let $\{\bm{W}_\ell^{(0)}\}_{\ell=1}^L$ be the initial weight matrices.
At the $(Lk+\ell)$-th iteration,
a data point $x_{i_{Lk+\ell}}$ is chosen independently at random,
where $i_{Lk+\ell}$ is a random variable whose probability distribution $\bm{\pi}^{\textbf{k}_{(k,\ell-1)}}$ is defined by
\begin{equation}
\bm{\pi}^{\textbf{k}_{(k,\ell-1)}}(i) = \frac{\|(\bm{W}_{(\mathfrak{i}(\ell)-1):1}^{\textbf{k}_{(k,\ell-1)}}x_{i})^T\bm{W}_{(\mathfrak{i}(\ell)-1):1}^{\textbf{k}_{(k,\ell-1)}}\bm{X}\|^2}{\|\bm{W}_{(\mathfrak{i}(\ell)-1):1}^{\textbf{k}_{(k,\ell-1)}}\bm{X}\|_F^4}, \qquad
1 \le i \le m.
\end{equation}
Then, the approximation by BCSGD \eqref{def:bcsgd} with the learning rates of
\begin{equation}
\eta_{{i_{Lk+\ell}}}^{\textbf{k}_{(k,\ell-1)}} = \frac{\sigma_{\min}^2(\bm{W}_{(\mathfrak{i}(\ell)-1):1}^{\textbf{k}_{(k,\ell-1)}}\bm{X})}{\sigma^2_{\max}(\bm{W}_{L:(\mathfrak{i}(\ell)+1)}^{\textbf{k}_{(k,\ell-1)}})}
\frac{\eta}{\|(\bm{W}_{(\mathfrak{i}(\ell)-1):1}^{\textbf{k}_{(k,\ell-1)}}x_{{i_{Lk+\ell}}})^T\bm{W}_{(\mathfrak{i}(\ell)-1):1}^{\textbf{k}_{(k,\ell-1)}}\bm{X}\|^2},
\end{equation}
for $0 < \eta < 2$,
satisfies
\begin{align*}
\mathbf{E}_{{i_{Lk+\ell}}}[\|{\Delta}^{\textbf{k}_{(k,\ell)}}\|_F^2]
&\le
\gamma_{\text{upp}}^{\textbf{k}_{(k,\ell-1)}}\|{\Delta}^{\textbf{k}_{(k,\ell-1)}}\|_F^2
+
\frac{\eta^2\mathcal{L}(\bm{W}^*)}{\tilde{\kappa}^4(\bm{W}_{(\mathfrak{i}(\ell)-1):1}^{\textbf{k}_{(k,\ell-1)}}\bm{X})},
\\
\mathbf{E}_{{i_{Lk+\ell}}}[\|{\Delta}^{\textbf{k}_{(k,\ell)}}\|_F^2]
&\ge
\gamma_{\text{low}}^{\textbf{k}_{(k,\ell-1)}} \|\Delta^{\textbf{k}_{(k,\ell-1)}}\|^2_F
+
\frac{\eta^2\mathcal{L}(\bm{W}^*)}{\kappa^4(\bm{W}_{L:(\mathfrak{i}(\ell)+1)}^{\textbf{k}_{(k,\ell-1)}})\tilde{\kappa}^4(\bm{W}_{(\mathfrak{i}(\ell)-1):1}^{\textbf{k}_{(k,\ell-1)}}\bm{X})},
\end{align*}
where
$\bm{W}^* = \bm{Y}\bm{X}^\dagger$,
${\Delta}^{\textbf{k}_{(k,\ell)}} = \bm{W}_{L:1}^{\textbf{k}_{(k,\ell)}}\bm{X} - \bm{W}^*\bm{X}$,
\begin{align*}
\gamma_{\text{upp}}^{\textbf{k}_{(k,\ell-1)}} &=1 - \frac{1 - \left(1-\frac{\eta}{\kappa^2(\bm{W}_{L:(\mathfrak{i}(\ell)+1)}^{\textbf{k}_{k}})}\right)^2 }{\tilde{\kappa}^4(\bm{W}_{(\mathfrak{i}(\ell)-1):1}^{\textbf{k}_{k}}\bm{X})},
\\
\gamma_{\text{low}}^{\textbf{k}_{(k,\ell-1)}}
&=
1 - \frac{1 - \left(1 - \frac{\eta}{\kappa^2(\bm{W}_{(\mathfrak{i}(\ell)-1):1}^{\textbf{k}_{k}}\bm{X})}\right)^2 }{\tilde{\kappa}^4(\bm{W}_{(\mathfrak{i}(\ell)-1):1}^{\textbf{k}_{k}}\bm{X})/\kappa^4(\bm{W}_{(\mathfrak{i}(\ell)-1):1}^{\textbf{k}_{k}}\bm{X})}.
\end{align*}
\end{theorem}
\begin{proof}
The proof can be found in Appendix~\ref{app:thm:convg-l2-loss-BCSGD}.
\end{proof}
Under the assumption that $\kappa^4(\bm{W}_{L:(\mathfrak{i}(\ell)+1)}^{\textbf{k}_{(k,\ell-1)}})\tilde{\kappa}^4(\bm{W}_{(\mathfrak{i}(\ell)-1):1}^{\textbf{k}_{(k,\ell-1)}}\bm{X})$ is uniformly bounded above by $M_{\text{upp}}$
and that $\gamma_{\text{low}}^{\textbf{k}_{(k,\ell-1)}}$ is uniformly bounded below away from zero by $\gamma_{\text{low}} > 0$,
one can conclude that
$$
\mathbf{E}[\|{\Delta}^{\textbf{k}_{k}}\|_F^2]
\ge \gamma_{\text{low}}^{Lk} \|{\Delta}^{\textbf{k}_{0}}\|_F^2 + \frac{\eta^2 \mathcal{L}(\bm{W}^*)(1 - \gamma_{\text{low}}^{Lk})}{M_{\text{upp}}(1-\gamma_{\text{low}}^L)} \to \frac{\eta^2 \mathcal{L}(\bm{W}^*)}{M_{\text{upp}}(1-\gamma_{\text{low}}^L)} \quad \text{as } \quad k \to \infty.
$$
Similarly, under the assumption that $\tilde{\kappa}^4(\bm{W}_{(\mathfrak{i}(\ell)-1):1}^{\textbf{k}_{(k,\ell-1)}}\bm{X})$ is uniformly bounded below by $M_{\text{low}}$,
and that $\gamma_{\text{upp}}^{\textbf{k}_{(k,\ell-1)}}$ is uniformly bounded above by $\gamma_{\text{upp}} < 1$,
we have
$$
\mathbf{E}[\|{\Delta}^{\textbf{k}_{k}}\|_F^2]
\le \gamma_{\text{upp}}^{Lk} \|{\Delta}^{\textbf{k}_{0}}\|_F^2 + \frac{\eta^2 \mathcal{L}(\bm{W}^*)(1 - \gamma_{\text{upp}}^{Lk})}{M_{\text{low}}(1-\gamma_{\text{upp}}^L)} \to \frac{\eta^2 \mathcal{L}(\bm{W}^*)}{M_{\text{low}}(1-\gamma_{\text{upp}}^L)} \quad \text{as } \quad k \to \infty.
$$
This indicates that unlike the BCGD,
if a randomly chosen datum is used to update a weight matrix,
an extra term, which is proportional to $\mathcal{L}(\bm{W}^*)$, is introduced
in both upper and lower bounds of the expected error.
Therefore, BCSGD will not achieve the global optimum unless $\mathcal{L}(\bm{W}^*) = 0$.
However, the expected loss attained by BCSGD will be within a distance proportional to $\mathcal{L}(\bm{W}^*)$ of $\mathcal{L}(\bm{W}^*)$.
In practice, $\mathcal{L}(\bm{W}^*)$ will almost never be zero.
This indicates that the stochasticity introduced by the random selection of mini-batch (of size 1) results in an implicit regularization effect, which avoids over-fitting.
We defer further characterization of BCSGD to future work.
Remark: The proposed stochastic gradient descent in Theorem~\ref{thm:convg-l2-loss-BCSGD} can be viewed as a generalized version of the sampling used in \cite{strohmer2009randomized,needell2010randomized,leventhal2010randomized,zouzias2013randomized}.
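A toy illustration of this noise floor (ours, and deliberately simplified: $L=1$ with uniform single-sample selection rather than the importance sampling of Theorem~\ref{thm:convg-l2-loss-BCSGD}): stochastic updates drive the loss down to a neighborhood of $\mathcal{L}(\bm{W}^*)$, but never below it.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n0, nL = 50, 3, 2
X = rng.standard_normal((n0, m))
W_true = rng.standard_normal((nL, n0))
Y = W_true @ X + 0.1 * rng.standard_normal((nL, m))   # inconsistent system
W_star = Y @ np.linalg.pinv(X)
L_star = np.linalg.norm(W_star @ X - Y) ** 2          # global minimum > 0

W = np.zeros((nL, n0))
lr = 0.01
for t in range(5000):
    i = rng.integers(m)                               # uniform sampling
    x, y = X[:, i:i + 1], Y[:, i:i + 1]
    W -= lr * (W @ x - y) @ x.T                       # single-sample step
final = np.linalg.norm(W @ X - Y) ** 2
print(L_star <= final < 0.1 * np.linalg.norm(Y) ** 2)
```

The loss decreases by orders of magnitude from its initial value but hovers strictly above $\mathcal{L}(\bm{W}^*)$, consistent with the extra term in the bounds above.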
\section{Least square solution} \label{app:lsq-sol}
Without the rank constraint,
the solution of \eqref{def:depth1-prob} is
\begin{equation} \label{def:glob-minimizer}
\bm{W}^*_\text{gen} = \bm{Y}\bm{X}^\dagger + \bm{M}(\bm{X}\bm{X}^\dagger - \bm{I}_{n_0}), \quad \forall \bm{M} \in \mathbb{R}^{n_L \times n_0},
\end{equation}
where $\bm{I}_{n}$ is the identity matrix of size $n\times n$ and $\bm{X}^\dagger$ is the Moore--Penrose pseudo-inverse of $\bm{X}$.
Assuming $\bm{X}$ is a full row rank matrix,
we have $\bm{W}^* = \bm{Y}\bm{X}^\dagger$,
which allows an explicit formula
$\bm{W}^*_{LSQ}= \bm{Y}\bm{X}^T(\bm{X}\bm{X}^T)^{-1}$.
If $\bm{X}$ is not a full row rank matrix,
\eqref{def:depth1-prob} allows infinitely many solutions.
In this case, the least norm solution is often sought and
it is $\bm{W}^* = \bm{Y}\bm{X}^\dagger$.
Also, for any $\bm{W}$, the following holds:
\begin{align*}
\mathcal{L}(\bm{W}) = \|\bm{W}\bm{X} - \bm{Y}\|_F^2
= \|\bm{W}\bm{X} - \bm{W}^*\bm{X}\|_F^2 + \mathcal{L}(\bm{W}^*).
\end{align*}
Thus, minimizing the $L_2$-loss is equivalent to minimizing $\|\bm{W}\bm{X} - \bm{W}^*\bm{X}\|_F^2$.
Furthermore, for whitened data,
the least norm solution is simply
$\bm{W}^* = \bm{Y}\bm{X}^T$.
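These identities are easy to verify numerically; the following sketch (arbitrary dimensions) checks the explicit full-row-rank formula and the Pythagorean decomposition above:

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.standard_normal((4, 20))              # full row rank a.s.
Y = rng.standard_normal((3, 20))

W_star = Y @ np.linalg.pinv(X)                # least-norm solution Y X^+
W_lsq = Y @ X.T @ np.linalg.inv(X @ X.T)      # explicit full-row-rank formula
assert np.allclose(W_star, W_lsq)

# Pythagorean identity: L(W) = ||(W - W*) X||_F^2 + L(W*) for any W,
# since the residual W* X - Y is orthogonal to the row space of X.
W = rng.standard_normal((3, 4))
lhs = np.linalg.norm(W @ X - Y) ** 2
rhs = np.linalg.norm((W - W_star) @ X) ** 2 + np.linalg.norm(W_star @ X - Y) ** 2
assert np.isclose(lhs, rhs)
```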
With the rank constraint, we consider two cases.
If $\text{rank}(\bm{Y}\bm{X}^\dagger) \le n^*$,
the rank constraint plays no role in the minimization. Thus, the global minimizer
is \eqref{def:glob-minimizer}.
Let us consider the case of $\text{rank}(\bm{Y}\bm{X}^\dagger) > n^*$.
Let $r_x = \text{rank}(\bm{X})$,
and $\bm{X} = U_x\Sigma_xV_x^T$ be a compact singular value decomposition (SVD) of $\bm{X}$
where only $r_x$ left-singular vectors and
$r_x$ right-singular vectors corresponding to the non-zero singular values are calculated.
Then $\bm{X}^\dagger = V_x\Sigma_x^{-1}U_x^T$,
and it can be checked that $\text{rank}(\bm{Y}V_x) = r^*= \text{rank}(\bm{Y}\bm{X}^\dagger)$.
Let $\bm{Y}V_x = \hat{U}_y\hat{\Sigma}_y\hat{V}_y^T$
be a compact SVD of $\bm{Y}V_x$.
It then can be shown that
the problem \eqref{def:depth1-prob} is equivalent to
$$
\min_{\bm{Z}} \|\bm{Z} - \bm{Y}V_x\|_F, \quad \text{subject to} \quad
\text{rank}(\bm{Z}) \le n^*.
$$
To be more precise, if $\bm{Z}^*$ is a solution (the best $n^*$-rank approximation to $\bm{Y}V_x$) to the above,
$\bm{W}^* = \bm{Z}^*\Sigma_x^{-1} U_x^T$ is a solution of \eqref{def:depth1-prob}, which can be explicitly written as
\begin{equation} \label{lsq-sol-rank-defi}
\bm{W}^* = \hat{U}_y \Sigma^* \hat{V}_y^T \Sigma_x^{-1} U_x^T,
\quad \Sigma^*= \begin{bmatrix} \bm{D}_{s} & \bm{0} \\ \bm{0} & \bm{0} \end{bmatrix},
\end{equation}
where $s = \min \{ n^*, r^*\}$ and $\bm{D}_{s}$ is the principal submatrix consisting of the first $s$ rows and columns of $\hat{\Sigma}_y$.
We remark that in general, \eqref{lsq-sol-rank-defi} and the best $n^*$-rank approximation to $\bm{Y}\bm{X}^\dagger$ are not the same.
\section{Gradient of the loss}
For the reader's convenience, we present here the calculation of the gradient.
First, let us define the matrix $\bm{J} \in \mathbb{R}^{m \times d_{\text{out}}}$,
\begin{align*}
\bm{J}^{(k)} = [J_{ij}^{(k)}], \qquad
J_{ij}^{(k)} = \ell'( \mathcal{N}^L_j(\textbf{x}^i;\bm{\theta}^{(k)}); \textbf{y}^i_j), \qquad
1 \le i \le m, 1 \le j \le d_{\text{out}}.
\end{align*}
Note that if $\ell(a,b) = (a-b)^2/2$,
$
\bm{J}^{(k)} =(\bm{W}_{L:1}^{(k)}\bm{X} - \bm{Y} )^T.
$
\begin{lemma}
Let $\bm{\theta} = \{\bm{W}_\ell\}_{\ell=1}^L$
and $\mathcal{N}^L(\textbf{x};\bm{\theta}) = \bm{W}_L\bm{W}_{L-1}\cdots \bm{W}_1\textbf{x}$,
where $\bm{W}_\ell \in \mathbb{R}^{n_\ell \times n_{\ell-1}}$ for $1\le \ell \le L$.
Then
\begin{align*}
\frac{\partial \mathcal{L}(\bm{\theta}) }{\partial \bm{W}_\ell}
=
(\bm{W}_{L}\cdots \bm{W}_{\ell+1})^T\bm{J}^T (\bm{W}_{\ell-1}\cdots \bm{W}_1 \bm{X})^T.
\end{align*}
\end{lemma}
\begin{proof}
Let us consider the case of $L=2$.
Let $\bm{\theta} = \{\bm{W}_2, \bm{W}_1\}$, i.e.,
$\mathcal{N}^2(\textbf{x}) = \bm{W}_2\bm{W}_1\textbf{x}$, where
$\bm{W}_1 \in \mathbb{R}^{n \times d_{\text{in}}}$,
and $\bm{W}_2 \in \mathbb{R}^{ d_{\text{out}} \times n}$.
For a matrix $\bm{M}$, let us denote the $j$-th row of $\bm{M}$ by $\bm{M}_{(j,:)}$
and the $i$-th column of $\bm{M}$ by
$\bm{M}_{(:,i)}$.
Since $L=2$, the loss function is
$\mathcal{L}(\bm{\theta}) =
\sum_{j=1}^{d_{\text{out}}}
\sum_{i=1}^m
\ell((\bm{W}_2)_{(j,:)}\bm{W}_1\textbf{x}^i; \textbf{y}^i_j)$.
The direct calculation shows that
$
\frac{\partial \mathcal{L}(\bm{\theta}) }{\partial (\bm{W}_1)_{(t,:)}^T}
= \bm{X} \bm{J}(\bm{W}_2)_{(:,t)}$,
$\frac{\partial \mathcal{L}(\bm{\theta}) }{\partial (\bm{W}_2)_{(t,:)}^T}
= \bm{W}_1\bm{X}\bm{J}_{(:,t)}$,
which gives
$$
\frac{\partial \mathcal{L}(\bm{\theta}) }{\partial (\bm{W}_1)^T}
= \bm{X} \bm{J}\bm{W}_2, \qquad
\frac{\partial \mathcal{L}(\bm{\theta}) }{\partial (\bm{W}_2)^T}
= \bm{W}_1\bm{X}\bm{J}.
$$
For general $L$, it readily follows from the case of $L=2$
by letting
$\bm{X} \to \bm{W}_{\ell-1}\cdots \bm{W}_1 \bm{X}$,
$\bm{W}_1 \to \bm{W}_\ell$,
and
$\bm{W}_2 \to \bm{W}_{L}\cdots \bm{W}_{\ell+1}$.
\end{proof}
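The closed form of the lemma can be sanity-checked against a centered finite difference (our sketch; note that the loss is quadratic in any single entry of a fixed layer, so the centered difference is exact up to rounding):

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.standard_normal((3, 7))
Y = rng.standard_normal((2, 7))
Ws = [rng.standard_normal(s) for s in [(4, 3), (5, 4), (2, 5)]]

def loss(Ws_):
    P = X
    for M in Ws_:
        P = M @ P
    return 0.5 * np.linalg.norm(P - Y) ** 2

def grad(Ws_, l):
    # (W_L ... W_{l+2})^T J^T (W_l ... W_1 X)^T, with J^T = Delta for L2-loss
    A = np.eye(Y.shape[0])
    for M in Ws_[l + 1:][::-1]:
        A = A @ M
    B = X
    for M in Ws_[:l]:
        B = M @ B
    P = X
    for M in Ws_:
        P = M @ P
    return A.T @ (P - Y) @ B.T

l, (i, j), h = 1, (2, 3), 1e-6       # check dL/d(W_2)_{ij}
G = grad(Ws, l)
Wp = [M.copy() for M in Ws]; Wp[l][i, j] += h
Wm = [M.copy() for M in Ws]; Wm[l][i, j] -= h
fd = (loss(Wp) - loss(Wm)) / (2 * h)
print(abs(fd - G[i, j]) < 1e-4 * max(1.0, abs(G[i, j])))
```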
\section{Proof of Theorem \ref{thm:role of width}}
\label{app:thm:role of width}
For a matrix $\bm{A}$ of size $m\times n$
and a matrix $\bm{B}$ of size $k \times s$
where $m \ge k, n \ge s$,
we say $\bm{A}$ is equivalent to $\bm{B}$ up to zero-valued padding
if
$$
\bm{A} = \begin{bmatrix}
\bm{B} & \bm{0} \\
\bm{0} & \bm{0}
\end{bmatrix},
$$
and write $\bm{A} \approxeq \bm{B}$.
\begin{lemma} \label{lemma:width-zero-padding}
Suppose $\bm{W}_1^{\textbf{k}_{(k,\ell-1)}} \approxeq \tilde{\bm{W}}_1^{\textbf{k}_{(k,\ell-1)}} \in \mathbb{R}^{\max\{n_0, n_L\} \times n_0}$,
$\bm{W}_L^{\textbf{k}_{(k,\ell-1)}} \approxeq \tilde{\bm{W}}_L^{\textbf{k}_{(k,\ell-1)}} \in \mathbb{R}^{n_L\times \max\{n_0, n_L\}}$
and
$\bm{W}_j^{\textbf{k}_{(k,\ell-1)}} \approxeq \tilde{\bm{W}}_j^{\textbf{k}_{(k,\ell-1)}} \in \mathbb{R}^{\max\{n_0, n_L\} \times \max\{n_0, n_L\}}$ for all $1 < j < L$.
Then,
\begin{align*}
\bm{W}_\ell^{\textbf{k}_{(k,\ell)}} \approxeq
\tilde{\bm{W}}_\ell^{\textbf{k}_{(k,\ell)}} \in
\begin{cases}
\mathbb{R}^{\max\{n_0, n_L\} \times n_0}, & \text{if $\ell = 1$}, \\
\mathbb{R}^{\max\{n_0, n_L\} \times \max\{n_0, n_L\}}, & \text{if $1 < \ell < L$}, \\
\mathbb{R}^{n_L \times \max\{n_0, n_L\}}, & \text{if $\ell = L$}
\end{cases}.
\end{align*}
\end{lemma}
\begin{proof}
Let $d_{\max} = \max\{n_0,n_L\}$.
Note that if
$\bm{W}_1 \approxeq \tilde{\bm{W}}_1 \in \mathbb{R}^{d_{\max} \times n_0}$,
$\bm{W}_L \approxeq \tilde{\bm{W}}_L \in \mathbb{R}^{n_L \times d_{\max}}$,
and
$\bm{W}_j \approxeq \tilde{\bm{W}}_j \in \mathbb{R}^{d_{\max} \times d_{\max}}$ for $1 < j < L$,
since $n_j \ge d_{\max}$ for $1 < j < L$,
we have
$\bm{W}_{L:(j+1)} \approxeq \tilde{\bm{W}}_{L:(j+1)}$ and
$\bm{W}_{(j-1):1} \approxeq \tilde{\bm{W}}_{(j-1):1}$
for any $1 < j < L$.
Specifically,
\begin{align*}
\bm{W}_{L:(j+1)} =
\begin{bmatrix}
\tilde{\bm{W}}_{L:(j+1)} & \bm{0}
\end{bmatrix},
\qquad
\bm{W}_{(j-1):1} =
\begin{bmatrix}
\tilde{\bm{W}}_{(j-1):1} \\
\bm{0}
\end{bmatrix}.
\end{align*}
It then follows from the gradient descent update
\begin{align*}
\bm{W}_{\mathfrak{i}(\ell)}^{\textbf{k}_{(s,\ell)}} &= \bm{W}_{\mathfrak{i}(\ell)}^{\textbf{k}_{(s,\ell-1)}} - \eta (\bm{W}_{L:({\mathfrak{i}(\ell)}+1)}^{\textbf{k}_{(s,\ell-1)}})^T\Delta^{\textbf{k}_{(s,\ell-1)}}\bm{XX}^T(\bm{W}_{({\mathfrak{i}(\ell)}-1):1}^{\textbf{k}_{(s,\ell-1)}})^T,
\end{align*}
where ${\mathfrak{i}(\ell)} = \ell$ if the ascending BCGD is employed
and ${\mathfrak{i}(\ell)} = L-\ell+1$ if the descending BCGD is employed,
that
\begin{align*}
&(\bm{W}_{L:({\mathfrak{i}(\ell)}+1)}^{\textbf{k}_{(s,\ell-1)}})^T\Delta^{\textbf{k}_{(s,\ell-1)}}\bm{XX}^T(\bm{W}_{({\mathfrak{i}(\ell)}-1):1}^{\textbf{k}_{(s,\ell-1)}})^T \\
&\approxeq
(\tilde{\bm{W}}_{L:({\mathfrak{i}(\ell)}+1)}^{\textbf{k}_{(s,\ell-1)}})^T
\Delta^{\textbf{k}_{(s,\ell-1)}}\bm{XX}^T
(\tilde{\bm{W}}_{({\mathfrak{i}(\ell)}-1):1}^{\textbf{k}_{(s,\ell-1)}})^T
\in \mathbb{R}^{d_{\max} \times d_{\max}},
\end{align*}
and
\begin{align*}
(\bm{W}_{L:{\mathfrak{i}(1)}}^{\textbf{k}_{(s,0)}})^T\Delta^{\textbf{k}_{(s,0)}}\bm{XX}^T
&\approxeq
(\tilde{\bm{W}}_{L:{\mathfrak{i}(1)}}^{\textbf{k}_{(s,0)}})^T
\Delta^{\textbf{k}_{(s,0)}}\bm{XX}^T
\in \mathbb{R}^{d_{\max} \times n_0},
\\
\Delta^{\textbf{k}_{(s,L-1)}}\bm{XX}^T(\bm{W}_{({\mathfrak{i}(L)}-1):1}^{\textbf{k}_{(s,L-1)}})^T
&\approxeq
\Delta^{\textbf{k}_{(s,L-1)}}\bm{XX}^T(\tilde{\bm{W}}_{({\mathfrak{i}(L)}-1):1}^{\textbf{k}_{(s,L-1)}})^T
\in \mathbb{R}^{n_L \times d_{\max}}.
\end{align*}
Combining the above with the assumption $\bm{W}_j^{\textbf{k}_{(k,\ell-1)}} \approxeq \tilde{\bm{W}}_j^{\textbf{k}_{(k,\ell-1)}}$
completes the proof.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:role of width}]
If the initial weight matrices satisfy
$$
\bm{W}^{(0)}_j \approxeq \tilde{\bm{W}}_j^{(0)}
\in
\begin{cases}
\mathbb{R}^{\max\{n_0, n_L\} \times n_0}, & \text{if $j = 1$}, \\
\mathbb{R}^{\max\{n_0, n_L\} \times \max\{n_0, n_L\}}, & \text{if $1 < j < L$}, \\
\mathbb{R}^{n_L \times \max\{n_0, n_L\}}, & \text{if $j = L$},
\end{cases},
$$
it follows from Lemma~\ref{lemma:width-zero-padding} that
for any $s$ and $j$, there exists
$\tilde{\bm{W}}_j^{(s)}$ such that
$$
\bm{W}^{(s)}_j \approxeq \tilde{\bm{W}}_j^{(s)}
\in
\begin{cases}
\mathbb{R}^{\max\{n_0, n_L\} \times n_0}, & \text{if $j = 1$}, \\
\mathbb{R}^{\max\{n_0, n_L\} \times \max\{n_0, n_L\}}, & \text{if $1 < j < L$}, \\
\mathbb{R}^{n_L \times \max\{n_0, n_L\}}, & \text{if $j = L$},
\end{cases},
$$
which completes the proof for the balanced initialization.
We now turn to the identity initialization.
Suppose $\min\{m,n\} > k = s$, i.e., $\bm{B}$ is square of size $k$.
We then write $\bm{A} \approxeq_1 \bm{B}$ if
$\bm{A} \approxeq \tilde{\bm{B}}$ where
$\tilde{\bm{B}}$ is a square matrix of size $\min\{m,n\}$ such that
$$
\tilde{\bm{B}} = \begin{bmatrix}
\bm{B} & \bm{0} \\
\bm{0} & \bm{I}_{\min\{m,n\} - k}
\end{bmatrix}.
$$
Let $\bm{W}_j$ be a matrix of size $n_j \times n_{j-1}$
and $n_j \ge \max\{n_0, n_L\}$ for all $1\le j \le L$.
Suppose
\begin{equation} \label{cond-identity-init}
\begin{split}
\bm{W}_j &\approxeq \tilde{\bm{W}}_j
\in
\begin{cases}
\mathbb{R}^{\max\{n_0, n_L\} \times n_0}, & \text{if $j = 1$}, \\
\mathbb{R}^{n_L \times \max\{n_0, n_L\}}, & \text{if $j = L$},
\end{cases}, \\
\bm{W}_j &\approxeq_1 \tilde{\bm{W}}_j
\in
\mathbb{R}^{\max\{n_0, n_L\} \times \max\{n_0, n_L\}},
\text{if $1 < j < L$}.
\end{split}
\end{equation}
Let $
\bm{W}_L = \begin{bmatrix}
\tilde{\bm{W}}_L & \bm{0}
\end{bmatrix},
$
where
$\tilde{\bm{W}}_L \in \mathbb{R}^{n_L \times \max\{n_0, n_L\}}$.
Then,
\begin{align*}
\bm{W}_{L:(j+1)} &=
\begin{bmatrix}
\tilde{\bm{W}}_L & \bm{0}
\end{bmatrix}
\begin{bmatrix}
\hat{\bm{B}}_{(L-1):(j+1)} & \bm{0} \\
\bm{0} & \bm{0}
\end{bmatrix},
\\
\hat{\bm{B}}_{(L-1):(j+1)} &= \begin{bmatrix}
\tilde{\bm{W}}_{(L-1):(j+1)} & \bm{0} \\
\bm{0} & \bm{I}_{n_{\min}^{L-1}(j+1) - r}
\end{bmatrix},
\end{align*}
where $n_{\min}^{i}(j) = \min_{j-1 \le \ell \le i} n_\ell$
for $1\le j \le i+1$.
Thus, $\bm{W}_{L:(j+1)} \approxeq \tilde{\bm{W}}_{L:(j+1)}$.
Similarly, $\bm{W}_{(j-1):1} \approxeq \tilde{\bm{W}}_{(j-1):1}$.
It then follows from a similar argument used in Lemma~\ref{lemma:width-zero-padding}
that
if the initial weight matrices satisfy \eqref{cond-identity-init},
then the weight matrices updated by any gradient based optimization
also
satisfy \eqref{cond-identity-init}.
This completes the proof
for the identity initialization.
\end{proof}
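The conclusion can be observed numerically: gradient descent run on a zero-padded linear network and on its reduced counterpart produces iterates that stay equal up to zero padding. A minimal NumPy sketch for a two-layer case, with small sizes and a step size that are our illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(5)

# Reduced net: 2 -> 3 -> 3, so max{n_0, n_L} = 3; the wide net pads the
# hidden width to 5 with zeros. All sizes and the step are our choices.
n0, d, nL, nwide, m = 2, 3, 3, 5, 6
X = rng.standard_normal((n0, m))
Y = rng.standard_normal((nL, m))
B1 = rng.standard_normal((d, n0))
B2 = rng.standard_normal((nL, d))

def pad(B, shape):
    A = np.zeros(shape)
    A[:B.shape[0], :B.shape[1]] = B
    return A

W1, W2 = pad(B1, (nwide, n0)), pad(B2, (nL, nwide))

eta = 5e-3
for _ in range(40):   # plain (full) gradient descent on both nets
    Rs = B2 @ B1 @ X - Y
    Rw = W2 @ W1 @ X - Y
    B1, B2 = B1 - eta * B2.T @ Rs @ X.T, B2 - eta * Rs @ (B1 @ X).T
    W1, W2 = W1 - eta * W2.T @ Rw @ X.T, W2 - eta * Rw @ (W1 @ X).T

# The wide iterates never leave the zero-padded subspace.
assert np.allclose(W1, pad(B1, (nwide, n0)))
assert np.allclose(W2, pad(B2, (nL, nwide)))
print("padding is preserved along the whole trajectory")
```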
\section{Proof of Theorem~\ref{thm:convg-l2}}
\label{app:thm:convg-l2}
\begin{proof}
For notational convenience, for $j > i$, let
$$
\bm{W}_j \bm{W}_{j-1} \cdots \bm{W}_{i} = \bm{W}_{j:i}.
$$
By definition,
it follows from the update rule that
\begin{align*}
\bm{W}_\ell^{(k+1)}
=
\bm{W}_\ell^{(k)}
- \eta_\ell^{\textbf{k}_{(k,\ell-1)}}
(\bm{W}_{(\ell-1):1}^{(k+1)}
\bm{X} \bm{J}^{\textbf{k}_{(k,\ell-1)}}\bm{W}_{L:(\ell+1)}^{(k)})^T.
\end{align*}
Multiplying by $\bm{W}_{(\ell-1):1}^{(k+1)}\bm{X}$ from the right
and by $\bm{W}_{L:(\ell+1)}^{(k)}$ from the left,
and subtracting $\bm{W}^*\bm{X}$
on both sides,
we obtain
\begin{multline*}
(\bm{W}_{L:(\ell+1)}^{(k)}\bm{W}_{\ell:1}^{(k+1)} - \bm{W}^*)\bm{X} \\
=
(\bm{W}_{L:\ell}^{(k)}\bm{W}_{(\ell-1):1}^{(k+1)} - \bm{W}^*)\bm{X}
- \eta_\ell^{\textbf{k}_{(k,\ell-1)}}
\bm{A}_\ell^{(k)}
(\bm{J}^{\textbf{k}_{(k,\ell-1)}})^T
\bm{B}_\ell^{(k+1)},
\end{multline*}
where
\begin{align*}
\bm{A}_\ell^{(k)} &= \bm{W}_{L:(\ell+1)}^{(k)}
(\bm{W}_{L:(\ell+1)}^{(k)})^T \in \mathbb{R}^{d_{\text{out}} \times d_{\text{out}}},
\\
\bm{B}_\ell^{(k)} &= \bm{X}^T(\bm{W}_{(\ell-1):1}^{(k)})^T
\bm{W}_{(\ell-1):1}^{(k)}\bm{X}
\in \mathbb{R}^{m \times m}.
\end{align*}
Since $\ell(a;b) = (a-b)^2/2$,
we have
\begin{align*}
\bm{X}\bm{J}^{\textbf{k}_{(k,\ell-1)}} &=
\bm{X}(\bm{W}^{(k)}_{L:\ell}\bm{W}^{(k+1)}_{(\ell-1):1}\bm{X} - \bm{Y})^T
\\
&=\bm{X}(\bm{W}^{(k)}_{L:\ell}\bm{W}^{(k+1)}_{(\ell-1):1}\bm{X} - \bm{Y}\bm{X}^\dagger\bm{X} + \bm{Y}\bm{X}^\dagger\bm{X} - \bm{Y})^T
\\
&=(\bm{W}^{(k)}_{L:\ell}\bm{W}^{(k+1)}_{(\ell-1):1}\bm{X}\bm{X}^T - \bm{Y}\bm{X}^\dagger\bm{X}\bm{X}^T + \bm{Y}(\bm{X}^\dagger\bm{X}\bm{X}^T - \bm{X}^T))^T
\\
&=
((\bm{W}^{(k)}_{L:\ell}\bm{W}^{(k+1)}_{(\ell-1):1} - \bm{W}^*)\bm{X}\bm{X}^T)^T
=
\bm{X}(\Delta^{\textbf{k}_{(k,\ell-1)}})^T,
\end{align*}
where $\bm{X}^\dagger\bm{X}\bm{X}^T = (\bm{X}^\dagger \bm{X})^T\bm{X}^T = (\bm{X}\bm{X}^\dagger \bm{X})^T = \bm{X}^T$
is used in the 4th equality.
Let
$$
\Delta^{\textbf{k}_{(k,\ell)}} := \bm{W}_{L:(\ell+1)}^{(k)}\bm{W}_{\ell:1}^{(k+1)}\bm{X}-\bm{W}^*\bm{X} \in \mathbb{R}^{d_{\text{out}} \times m}.
$$
Then we have
\begin{align*}
\Delta^{\textbf{k}_{(k,\ell)}} =
\Delta^{\textbf{k}_{(k,\ell-1)}}
- \eta^{\textbf{k}_{(k,\ell-1)}}_\ell
\bm{A}_\ell^{(k)}
\Delta^{\textbf{k}_{(k,\ell-1)}}
\bm{B}_\ell^{(k+1)}.
\end{align*}
Since $\bm{A}_\ell^{(k)}$ and $\bm{B}_\ell^{(k)}$ are symmetric positive semi-definite,
they admit the eigendecompositions
\begin{align*}
(\bm{U}_{\ell}^{(k)})^{T}\bm{A}_\ell^{(k)}\bm{U}_{\ell}^{(k)} &=
\bm{D}^{(k)}_{A,\ell} = \text{diag}(\lambda_{\ell, i}^{(k)}),
\quad 1 \le i \le d_{\text{out}}, \\
(\bm{V}_{\ell}^{(k)})^{T}\bm{B}_\ell^{(k)}\bm{V}_{\ell}^{(k)} &= \bm{D}^{(k)}_{B,\ell}=\text{diag}(\mu^{(k)}_{\ell, j}), \quad
1 \le j \le m,
\end{align*}
where $\bm{V}_{\ell}^{(k)}$ and $\bm{U}_{\ell}^{(k)}$ are orthogonal matrices, $\lambda_{\ell, 1}^{(k)} \ge \cdots \ge \lambda_{\ell, d_\text{out}}^{(k)}$,
and $\mu^{(k)}_{\ell, 1} \ge \cdots \ge \mu^{(k)}_{\ell, m}$.
We remark that
$\mu^{(k)}_{\ell, d_\text{in}+1} = \cdots = \mu^{(k)}_{\ell, m} = 0$
if $d_\text{in} = n_0 < m$.
Thus, we have
\begin{equation} \label{eqn-thm:convg-l2}
\Delta^{\textbf{k}_{(k,\ell)}} =
\Delta^{\textbf{k}_{(k,\ell-1)}}
- \eta^{\textbf{k}_{(k,\ell-1)}}_\ell
\bm{U}_{\ell}^{(k)} \bm{D}^{(k)}_{A,\ell} (\bm{U}_{\ell}^{(k)})^{T}
\Delta^{\textbf{k}_{(k,\ell-1)}}
{\bm{V}}_{\ell}^{(k+1)} \bm{D}^{(k+1)}_{B,\ell} ({\bm{V}}_{\ell}^{(k+1)})^{T}.
\end{equation}
Let
$
\tilde{\Delta}^{\textbf{k}_{(k,t,\ell)}} =
(\bm{U}_{\ell}^{(k)})^{T}\Delta^{\textbf{k}_{(k,t)}}{\bm{V}}_{\ell}^{(k+1)}.
$
Then, \eqref{eqn-thm:convg-l2} becomes
\begin{align*}
\tilde{\Delta}^{\textbf{k}_{(k,\ell,\ell)}} =
\tilde{\Delta}^{\textbf{k}_{(k,\ell-1,\ell)}}
- \eta^{\textbf{k}_{(k,\ell-1)}}_\ell
\bm{D}^{(k)}_{A,\ell}
\tilde{\Delta}^{\textbf{k}_{(k,\ell-1,\ell)}}
\bm{D}^{(k+1)}_{B,\ell}.
\end{align*}
Then, the $(i,j)$-entry of $\tilde{\Delta}^{\textbf{k}_{(k,\ell,\ell)}}$
is
\begin{align*}
(\tilde{\Delta}^{\textbf{k}_{(k,\ell,\ell)}})_{ij}
=
\left(1
-\eta^{\textbf{k}_{(k,\ell-1)}}_\ell \lambda_{\ell, i}^{(k)}\mu_{\ell, j}^{(k+1)}\right)(\tilde{\Delta}^{\textbf{k}_{(k,\ell-1,\ell)}})_{ij},
\quad 1 \le i \le d_\text{out}, \ 1 \le j \le m,
\end{align*}
and we have
\begin{align*}
\|\tilde{\Delta}^{\textbf{k}_{(k,\ell,\ell)}}\|_F^2
&= \sum_{i,j} \left(1
-\eta^{\textbf{k}_{(k,\ell-1)}}_\ell \lambda_{\ell, i}^{(k)}\mu_{\ell, j}^{(k+1)}\right)^2
(\tilde{\Delta}^{\textbf{k}_{(k,\ell-1,\ell)}})_{ij}^2 = \mathcal{F}(\eta^{\textbf{k}_{(k,\ell-1)}}_\ell).
\end{align*}
We then choose the learning rate minimizing $\mathcal{F}(\eta^{\textbf{k}_{(k,\ell-1)}}_\ell)$, namely
\begin{equation} \label{app:L2-Optimal-LR}
\eta^{\textbf{k}_{(k,\ell-1)}}_\text{opt}
= \frac{
\left\|\frac{\partial \mathcal{L}}{\partial \bm{W}_\ell}\big|_{\bm{\theta}=\bm{\theta}^{\textbf{k}_{(k,\ell-1)}}}\right\|_F^2}{\left\|\bm{W}_{L:(\mathfrak{i}(\ell)+1)}^{\textbf{k}_{(k,\ell-1)}}\frac{\partial \mathcal{L}}{\partial \bm{W}_\ell}\big|_{\bm{\theta}=\bm{\theta}^{\textbf{k}_{(k,\ell-1)}}}\bm{W}_{(\mathfrak{i}(\ell)-1):1}^{\textbf{k}_{(k,\ell-1)}}\bm{X}\right\|_F^2}.
\end{equation}
Thus, with the optimal learning rate of \eqref{app:L2-Optimal-LR}, we obtain
\begin{align*}
\|{\Delta}^{\textbf{k}_{(k,\ell)}}\|_F^2
&=\|{\Delta}^{\textbf{k}_{(k,\ell-1)}}\|_F^2
-\eta^{\textbf{k}_{(k,\ell-1)}}_\text{opt}
\left\|\frac{\partial \mathcal{L}}{\partial \bm{W}_\ell}\big|_{\bm{\theta}=\bm{\theta}^{\textbf{k}_{(k,\ell-1)}}}\right\|_F^2
\\
&=\|{\Delta}^{\textbf{k}_{(k,\ell-1)}}\|_F^2
-\frac{
\left\|\frac{\partial \mathcal{L}}{\partial \bm{W}_\ell}\big|_{\bm{\theta}=\bm{\theta}^{\textbf{k}_{(k,\ell-1)}}}\right\|_F^4}{\left\|\bm{W}_{L:(\mathfrak{i}(\ell)+1)}^{\textbf{k}_{(k,\ell-1)}}\frac{\partial \mathcal{L}}{\partial \bm{W}_\ell}\big|_{\bm{\theta}=\bm{\theta}^{\textbf{k}_{(k,\ell-1)}}}\bm{W}_{(\mathfrak{i}(\ell)-1):1}^{\textbf{k}_{(k,\ell-1)}}\bm{X}\right\|_F^2}.
\end{align*}
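As a sanity check of \eqref{app:L2-Optimal-LR}: the loss restricted to a single block step is an exact quadratic in the learning rate, so the formula should beat any other step size. A minimal NumPy sketch for a three-layer case ($L=3$, updated block $\ell=2$; all sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Three-layer net W = W3 W2 W1 with the squared loss 0.5 ||W X - Y||_F^2;
# we update block 2 and hold the others fixed. Sizes are illustrative.
n0, n1, n2, nL, m = 3, 6, 6, 2, 12
X = rng.standard_normal((n0, m))
Y = rng.standard_normal((nL, m))
W1 = rng.standard_normal((n1, n0))
W2 = rng.standard_normal((n2, n1))
W3 = rng.standard_normal((nL, n2))

resid = W3 @ W2 @ W1 @ X - Y
grad2 = W3.T @ resid @ (W1 @ X).T      # dL/dW2

# Optimal step for block 2: ||grad||_F^2 / ||W3 grad (W1 X)||_F^2.
eta_opt = np.sum(grad2 ** 2) / np.sum((W3 @ grad2 @ (W1 @ X)) ** 2)

def loss(eta):
    return 0.5 * np.sum((W3 @ (W2 - eta * grad2) @ W1 @ X - Y) ** 2)

# The restricted loss is an exact quadratic in eta, so eta_opt minimizes it.
assert loss(eta_opt) <= min(loss(0.5 * eta_opt), loss(2.0 * eta_opt))
print("eta_opt =", eta_opt)
```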
For a matrix $\bm{M}$, the $j$-th column
and the $i$-th row of $\bm{M}$ are denoted by $(\bm{M})^j$ and $(\bm{M})_i$, respectively.
We note that all rows of $\Delta^{\textbf{k}_{(k,\ell-1)}}$
are in $\text{range}(\bm{X}^T)$
and $\text{span}\{ (\bm{V}_{\ell}^{(k+1)})^j; 1\le j \le r_x \} = \text{range}(\bm{X}^T)$, where $r_x = \text{rank}(\bm{X})$.
We remark that if $\mu_{\ell,j}^{(k+1)} = 0$ for some $j \le r_x$,
we choose the corresponding $(\bm{V}_{\ell}^{(k+1)})^j$
so that $\text{range}(\bm{X}^T) = \text{span}\{ (\bm{V}_{\ell}^{(k+1)})^j; 1\le j \le r_x \}$ holds.
Thus,
$(\Delta^{\textbf{k}_{(k,\ell)}}{\bm{V}}_{\ell}^{(k+1)})^j = 0$ for $j > r_x$.
This gives that the $(i,j)$-entry of $\tilde{\Delta}^{\textbf{k}_{(k,\ell,\ell)}}$
is equal to
\begin{align*}
(\tilde{\Delta}^{\textbf{k}_{(k,\ell,\ell)}})_{ij}
=
\left(1
-\eta^{\textbf{k}_{(k,\ell-1)}}_\ell \lambda_{\ell, i}^{(k)}\mu_{\ell, j}^{(k+1)}\right)(\tilde{\Delta}^{\textbf{k}_{(k,\ell-1,\ell)}})_{ij},
\quad 1 \le i \le d_\text{out}, \ 1 \le j \le r_x,
\end{align*}
and zero otherwise.
Suppose that $(\bm{W}_L^{(0)})^j \in K $ for all $1\le j \le n_{L-1}$
where
$\text{range}(\bm{Y}\bm{X}^\dagger) \subset K \subset \mathbb{R}^{n_L}$.
It can then be checked that
$(\bm{W}_L^{(k)})^j \in K $ for all $k$ and $j$,
and thus
$(\Delta^{\textbf{k}_{(k,\ell-1)}})^j \in K$.
Also, by an argument similar to the one above, we have
$$
\text{span}\{(\bm{U}^{(k)}_\ell)^j | j=1,\cdots, r\} = K,\qquad r = \dim K.
$$
Thus,
$((\bm{U}^{(k)}_\ell)^T\Delta^{\textbf{k}_{(k,\ell)}})_i = 0$ for $i > r$
and we have
\begin{equation} \label{eqn2-thm:convg-l2}
(\tilde{\Delta}^{\textbf{k}_{(k,\ell,\ell)}})_{ij}
=
\left(1
-\eta^{\textbf{k}_{(k,\ell-1)}}_\ell \lambda_{\ell, i}^{(k)}\mu_{\ell, j}^{(k+1)}\right)(\tilde{\Delta}^{\textbf{k}_{(k,\ell-1,\ell)}})_{ij},
\quad 1 \le i \le r, 1 \le j \le r_x,
\end{equation}
and zero otherwise.
If the learning rate $\eta^{\textbf{k}_{(k,\ell-1)}}_\ell$
is chosen to satisfy
\begin{equation} \label{eqn3-thm:convg-l2}
0 < \eta^{\textbf{k}_{(k,\ell-1)}}_\ell <
\frac{2}{\max_{i,j} \left(\lambda_{\ell, i}^{(k)}\mu_{\ell, j}^{(k+1)}\right)}
=
\frac{2}{
\sigma_{\max}^2(\bm{W}_{L:(\mathfrak{i}(\ell)+1)}^{\textbf{k}_{(k,\ell-1)}})
\sigma_{\max}^2(\bm{W}_{(\mathfrak{i}(\ell)-1):1}^{\textbf{k}_{(k,\ell-1)}}\bm{X})},
\end{equation}
we have
\begin{align*}
((\tilde{\Delta}^{\textbf{k}_{(k,\ell)}})_{ij})^2
\le
((\tilde{\Delta}^{\textbf{k}_{(k,\ell-1)}})_{ij})^2
(\gamma^{\textbf{k}_{(k,\ell-1)}})^2,
\end{align*}
where $\gamma^{\textbf{k}_{(k,\ell-1)}} =\max\{\gamma^{\textbf{k}_{(k,\ell-1)}}_1, \gamma^{\textbf{k}_{(k,\ell-1)}}_2 \}$,
\begin{align*}
\gamma^{\textbf{k}_{(k,\ell-1)}}_1
&=
1-\eta^{\textbf{k}_{(k,\ell-1)}}_\ell
\sigma_{r}^2(\bm{W}_{L:(\mathfrak{i}(\ell)+1)}^{\textbf{k}_{(k,\ell-1)}})
\sigma_{r}^2(\bm{W}_{(\mathfrak{i}(\ell)-1):1}^{\textbf{k}_{(k,\ell-1)}}\bm{X})
, \\
\gamma^{\textbf{k}_{(k,\ell-1)}}_2 &=
\eta^{\textbf{k}_{(k,\ell-1)}}_\ell
\|\bm{W}_{L:(\mathfrak{i}(\ell)+1)}^{\textbf{k}_{(k,\ell-1)}}\|^2
\|\bm{W}_{(\mathfrak{i}(\ell)-1):1}^{\textbf{k}_{(k,\ell-1)}}\bm{X}\|^2-1
.
\end{align*}
Note that from the relation $\|\bm{M}\|_F^2 = \text{Tr}(\bm{M}\bm{M}^T)$ and the orthogonality of $\bm{U}_{\ell}^{(k)}$ and $\bm{V}_{\ell}^{(k+1)}$, we have
\begin{align*}
\|\tilde{\Delta}^{\textbf{k}_{(k,\ell,\ell)}}\|_F^2
&= \text{Tr}((\bm{U}_{\ell}^{(k)})^{T}\Delta^{\textbf{k}_{(k,\ell)}}\bm{V}_{\ell}^{(k+1)}
(\bm{V}_{\ell}^{(k+1)})^T
(\Delta^{\textbf{k}_{(k,\ell)}})^T\bm{U}_{\ell}^{(k)})
\\
&= \text{Tr}((\bm{U}_{\ell}^{(k)})^{T}\Delta^{\textbf{k}_{(k,\ell)}}
(\Delta^{\textbf{k}_{(k,\ell)}})^T\bm{U}_{\ell}^{(k)}) \\
&=\text{Tr}(\Delta^{\textbf{k}_{(k,\ell)}}
(\Delta^{\textbf{k}_{(k,\ell)}})^T\bm{U}_{\ell}^{(k)}(\bm{U}_{\ell}^{(k)})^{T})
\\
&= \text{Tr}(\Delta^{\textbf{k}_{(k,\ell)}}
(\Delta^{\textbf{k}_{(k,\ell)}})^T)
= \|\Delta^{\textbf{k}_{(k,\ell)}}\|_F^2.
\end{align*}
Therefore,
\begin{align*}
\|\tilde{\Delta}^{\textbf{k}_{(k,\ell,\ell)}}\|_F^2
\le \|\tilde{\Delta}^{\textbf{k}_{(k,\ell-1,\ell)}}\|_F^2
(\gamma^{\textbf{k}_{(k,\ell-1)}})^2 \iff
\|\Delta^{\textbf{k}_{(k,\ell)}}\|_F^2
\le \|\Delta^{\textbf{k}_{(k,\ell-1)}}\|_F^2
(\gamma^{\textbf{k}_{(k,\ell-1)}})^2.
\end{align*}
By recursively applying the above, we obtain
\begin{align*}
\|\Delta^{\textbf{k}_{k}}\|_F^2
\le
\|\Delta^{\textbf{k}_{0}}\|_F^2
\prod_{s=0}^{k-1} \left(\prod_{\ell=1}^L
(\gamma^{\textbf{k}_{(s,\ell-1)}})^2\right),
\end{align*}
which completes the proof.
\end{proof}
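The monotone contraction of $\|\Delta\|_F$ at every block update can be observed numerically by running BCGD with the exact per-block line search; a minimal NumPy sketch (layer widths, data sizes, and the sweep count are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

# Exact per-block line search: ||Delta||_F should be nonincreasing at
# every single block update. Sizes and sweep count are our choices.
n = [4, 6, 6, 3]                       # n_0, n_1, n_2, n_L
L, m = len(n) - 1, 8
X = rng.standard_normal((n[0], m))
Y = rng.standard_normal((n[-1], m))
W = [0.5 * rng.standard_normal((n[j + 1], n[j])) for j in range(L)]
P = np.linalg.pinv(X) @ X              # projection X^+ X, so W* X = Y P

def chain(mats, dim):
    """Product M_k ... M_1 of [M_1, ..., M_k]; identity of size dim if empty."""
    out = np.eye(dim)
    for M in mats:
        out = M @ out
    return out

def resid():
    return chain(W, n[0]) @ X - Y @ P  # Delta = (W_{L:1} - W*) X

norms = [np.linalg.norm(resid())]
for sweep in range(20):                # ascending sweeps over the blocks
    for l in range(L):
        A = chain(W[l + 1:], n[l + 1])       # W_{L:(l+2)}
        C = chain(W[:l], n[0]) @ X           # W_{l:1} X
        G = A.T @ resid() @ C.T              # block gradient for W_{l+1}
        denom = np.sum((A @ G @ C) ** 2)
        if denom > 0:                        # optimal step ||G||_F^2 / denom
            W[l] = W[l] - (np.sum(G ** 2) / denom) * G
        norms.append(np.linalg.norm(resid()))
print("residual:", norms[0], "->", norms[-1])
```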
\section{Proof of Lemma~\ref{lemma-min-sing-value}}
\label{app:lemma-min-sing-value}
\begin{proof}
Suppose $\|\bm{W}^{\textbf{k}_0} - \bm{W}^*\|_F \le \tilde{\sigma}_{\min} - c/\|\bm{X}\|$,
where $\tilde{\sigma}_{\min} = \sigma_{\min}(\bm{W}^*\bm{X})/\|\bm{X}\|$
and $c > 0$ will be chosen later.
It then follows from the assumption that
$$
\|\bm{W}^{\textbf{k}_0}\bm{X} - \bm{W}^*\bm{X}\|_F \le
\|\bm{W}^{\textbf{k}_0} - \bm{W}^*\|_F \|\bm{X}\|
\le \sigma_{\min}(\bm{W}^*\bm{X}) - c.
$$
Then for any $\bm{W}$ satisfying $\|\bm{W}\bm{X} - \bm{W}^*\bm{X}\|_F \le \|\bm{W}^{\textbf{k}_0}\bm{X} - \bm{W}^*\bm{X}\|_F$,
we have
\begin{align*}
\sigma_{\min}(\bm{W}\bm{X}) &\ge \sigma_{\min}(\bm{W}^*\bm{X}) - \sigma_{\max}(\bm{W}\bm{X} - \bm{W}^*\bm{X}) \\
&\ge
\sigma_{\min}(\bm{W}^*\bm{X}) - \|\bm{W}\bm{X} - \bm{W}^*\bm{X}\|_F
\ge c > 0.
\end{align*}
From Theorem~\ref{thm:convg-l2}, since $\|\bm{W}^{\textbf{k}_j}\bm{X} - \bm{W}^*\bm{X}\|_F \le\|\bm{W}^{\textbf{k}_0}\bm{X} - \bm{W}^*\bm{X}\|_F$ for any $j$,
we obtain $\sigma_{\min}(\bm{W}^{\textbf{k}_j}\bm{X}) \ge c > 0$.
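The perturbation step above is Weyl's inequality for singular values combined with $\sigma_{\max}(\cdot) \le \|\cdot\|_F$; a quick numerical spot check (the random sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)

A = rng.standard_normal((5, 7))        # plays the role of W* X
E = 0.1 * rng.standard_normal((5, 7))  # plays the role of W X - W* X

s_min = np.linalg.svd(A, compute_uv=False).min()
s_min_pert = np.linalg.svd(A + E, compute_uv=False).min()

# Weyl: sigma_min(A+E) >= sigma_min(A) - sigma_max(E) >= sigma_min(A) - ||E||_F.
assert s_min_pert >= s_min - np.linalg.svd(E, compute_uv=False).max() - 1e-12
assert s_min_pert >= s_min - np.linalg.norm(E) - 1e-12  # Frobenius norm
print("Weyl bound holds")
```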
For notational convenience, let
$A = \bm{W}^{\textbf{k}_{(k,\ell-1)}}_{L:(\mathfrak{i}(\ell)+1)}$,
$B = \bm{W}^{\textbf{k}_{(k,\ell-1)}}_{\mathfrak{i}(\ell)}$,
and
$C = \bm{W}^{\textbf{k}_{(k,\ell-1)}}_{(\mathfrak{i}(\ell)-1):1}\bm{X}$.
Then, $\bm{W}^{\textbf{k}_{(k,\ell-1)}}\bm{X} = ABC$.
Note that $\sigma_{s}(ABC) = \sigma_{s}(C^TB^TA^T)$.
It then follows from
\begin{equation} \label{app:sv-lowbd}
\begin{split}
0 < c \le \sigma_{\min}(ABC) \le \sigma_{s}(ABC) &= \max_{S:\dim (S) = s} \min_{x \in S, \|x\|=1} \|ABCx\|
\\
&\le \|AB\|\max_{S:\dim (S) = s} \min_{x \in S, \|x\|=1} \|Cx\| \\
&= \|AB\|\sigma_{s}(C), \quad 1 \le s \le \min\{n_0, n_L\},
\end{split}
\end{equation}
that $\sigma_{s}(C) > \frac{c}{\|AB\|}$.
Similarly, $\sigma_{s}(A) > \frac{c}{\|BC\|}$.
Note that it follows from Theorem~\ref{thm:role of width} that
for any $s$ and $\ell$,
\begin{equation} \label{matrix-update-balanced}
\begin{split}
\bm{W}_j^{(s)} &\approxeq \tilde{\bm{W}}_j^{(s)}
\in
\begin{cases}
\mathbb{R}^{\max\{n_0, n_L\} \times n_0}, & \text{if $j = 1$}, \\
\mathbb{R}^{n_L \times \max\{n_0, n_L\}}, & \text{if $j = L$},
\end{cases}, \\
\bm{W}_j^{(s)} &\approxeq_1 \tilde{\bm{W}}_j^{(s)}
\in
\mathbb{R}^{\max\{n_0, n_L\} \times \max\{n_0, n_L\}},
\text{if $1 < j < L$}.
\end{split}
\end{equation}
Then, for any $\textbf{k}=(k_1,\cdots,k_L)$ and $\ell \in \{1,\cdots,L-1\}$, we have
\begin{align*}
\bm{W}_{L:(\ell+1)}^{\textbf{k}}
\approxeq \tilde{\bm{W}}_L^{(k_L)}\cdots
\tilde{\bm{W}}_{\ell+1}^{(k_{\ell+1})},
\quad
\bm{W}_{(\ell-1):1}^{\textbf{k}}\bm{X}
\approxeq \tilde{\bm{W}}_{\ell-1}^{(k_{\ell-1})}\cdots
\tilde{\bm{W}}_{1}^{(k_{1})}\bm{X}.
\end{align*}
Since $n_\ell \ge \max\{n_0, n_L\}$, we have
$$
\sigma_{\min}(\bm{W}_{L:(\mathfrak{i}(\ell)+1)}^{\textbf{k}_{(s,\ell-1)}})
\ge \prod_{j = \mathfrak{i}(\ell)+1}^L \sigma_{\min}(\tilde{\bm{W}}_{j}^{\textbf{k}_{(s,\ell-1)}}).
$$
Similarly,
$\sigma_{\min}(\bm{W}_{(\mathfrak{i}(\ell)-1):1}^{\textbf{k}_{(s,\ell-1)}}\bm{X})
\ge
\sigma_{\min}(\bm{X})
\prod_{j=1}^{\mathfrak{i}(\ell)-1} \sigma_{\min}({\bm{W}}_{j}^{\textbf{k}_{(s,\ell-1)}}).
$
From \eqref{app:sv-lowbd}, we have
\begin{equation} \label{norm-lower-bound}
\begin{split}
\|\bm{W}_{L:(\mathfrak{i}(\ell)+1)}^{\textbf{k}_{(s,\ell-1)}}\|
&\ge \sigma_{s}(\bm{W}_{L:(\mathfrak{i}(\ell)+1)}^{\textbf{k}_{(s,\ell-1)}})
\ge \frac{c}{\|\bm{W}_{\mathfrak{i}(\ell):1}^{\textbf{k}_{(s,\ell-1)}}\bm{X}\|}, \\
\|\bm{W}_{(\mathfrak{i}(\ell)-1):1}^{\textbf{k}_{(s,\ell-1)}}\bm{X}\|
&\ge \sigma_{s}(\bm{W}_{(\mathfrak{i}(\ell)-1):1}^{\textbf{k}_{(s,\ell-1)}}\bm{X})
\ge
\frac{c}{\|\bm{W}_{L:(\mathfrak{i}(\ell)+1)}^{\textbf{k}_{(s,\ell-1)}}\|}.
\end{split}
\end{equation}
Let
\begin{equation}
\mathcal{R}(\bm{\theta}^{\textbf{k}_{(s,\ell-1)}})
= \max_{1 \le j \le \ell}
\|\bm{W}_{\mathfrak{i}(j)}^{\textbf{k}_{(s,\ell-1)}} - \bm{W}_{\mathfrak{i}(j)}^{(0)}\|,
\end{equation}
and
\begin{equation}
\mathcal{R}(sL + \ell)
= \max\left\{ \max_{0 \le i < s} \mathcal{R}(\bm{\theta}^{\textbf{k}_{(i,L-1)}}), \mathcal{R}(\bm{\theta}^{\textbf{k}_{(s,\ell-1)}}) \right\}.
\end{equation}
We claim that there exists $0 < R < 1$ such that
\begin{align*}
\mathcal{R}(k) \le R \quad \text{for all } k,
\end{align*}
and prove the claim by induction on the number of iterations of the BCGD.
Since $\mathcal{R}(0) = 0$, the base case holds trivially.
Suppose $\mathcal{R}(sL+\ell-1) \le R$.
We want to show that $\mathcal{R}(sL+\ell) \le R$.
Note that since
$\bm{W}_{\mathfrak{i}(j)}^{\textbf{k}_{(s,\ell)}} = \bm{W}_{\mathfrak{i}(j)}^{\textbf{k}_{(s,\ell-1)}}$
for $j \ne \ell$,
it suffices to consider $\bm{W}_{\mathfrak{i}(\ell)}^{\textbf{k}_{(s,\ell)}}$.
Suppose the learning rates satisfy \eqref{LR-l2-loss-exact}.
It follows from the BCGD updates
\begin{align*}
\bm{W}_{\mathfrak{i}(\ell)}^{\textbf{k}_{(s,\ell)}}
&=
\bm{W}_{\mathfrak{i}(\ell)}^{\textbf{k}_{(s,\ell-1)}}
-\eta_\ell^{\textbf{k}_{(s,\ell-1)}}
(\bm{W}_{L:(\mathfrak{i}(\ell)+1)}^{\textbf{k}_{(s,\ell-1)}})^T
\Delta^{\textbf{k}_{(s,\ell-1)}}
(\bm{W}_{(\mathfrak{i}(\ell)-1):1}^{\textbf{k}_{(s,\ell-1)}}\bm{X})^T,
\end{align*}
that
\begin{align*}
&\|\bm{W}_{\mathfrak{i}(\ell)}^{\textbf{k}_{(s,\ell)}} - \bm{W}^{(0)}_{\mathfrak{i}(\ell)}\|
\\
&\le
\|\bm{W}_{\mathfrak{i}(\ell)}^{\textbf{k}_{(s,\ell-1)}}- \bm{W}^{(0)}_{\mathfrak{i}(\ell)}\|
+\eta_\ell^{\textbf{k}_{(s,\ell-1)}}
\|\bm{W}_{L:(\mathfrak{i}(\ell)+1)}^{\textbf{k}_{(s,\ell-1)}}\|
\|\bm{W}_{(\mathfrak{i}(\ell)-1):1}^{\textbf{k}_{(s,\ell-1)}}\bm{X}\|
\|\Delta^{\textbf{k}_{(s,\ell-1)}}\| \\
&\le
\|\bm{W}_{\mathfrak{i}(\ell)}^{\textbf{k}_{(s,\ell-1)}}- \bm{W}^{(0)}_{\mathfrak{i}(\ell)}\|
+
\frac{\eta\|\Delta^{\textbf{k}_{(s,\ell-1)}}\|_F}{\|\bm{W}_{L:(\mathfrak{i}(\ell)+1)}^{\textbf{k}_{(s,\ell-1)}}\|
\|\bm{W}_{(\mathfrak{i}(\ell)-1):1}^{\textbf{k}_{(s,\ell-1)}}\bm{X}\|}.
\end{align*}
Using \eqref{norm-lower-bound}, we obtain
\begin{equation} \label{main-eqn-01}
\|\bm{W}_{\mathfrak{i}(\ell)}^{\textbf{k}_{(s,\ell)}}-\bm{W}^{(0)}_{\mathfrak{i}(\ell)}\|
\le
\|\bm{W}_{\mathfrak{i}(\ell)}^{\textbf{k}_{(s,\ell-1)}}-\bm{W}^{(0)}_{\mathfrak{i}(\ell)}\|
+
\eta\frac{
\|\bm{W}_{L:\mathfrak{i}(\ell)}^{\textbf{k}_{(s,\ell-1)}}\|
\|\bm{W}_{\mathfrak{i}(\ell):1}^{\textbf{k}_{(s,\ell-1)}}\bm{X}\|
\|\Delta^{\textbf{k}_{(s,\ell-1)}}\|_F}{c^2}.
\end{equation}
Also, note that by the induction hypothesis and \eqref{matrix-update-balanced}, we have
$\sigma_{\max}(\bm{W}_{j}^{\textbf{k}_{(s,\ell-1)}}) < 1 + R$ and
\begin{equation} \label{ineq-min-sing}
\begin{split}
R &> \|\bm{W}_{j}^{\textbf{k}_{(s,\ell-1)}} - \bm{W}_{j}^{(0)}\|
\ge \|(\bm{W}_{j}^{\textbf{k}_{(s,\ell-1)}} - \bm{W}_{j}^{(0)})z\| \\
&\ge \|\bm{W}_{j}^{(0)}z\| - \|\bm{W}_{j}^{\textbf{k}_{(s,\ell-1)}}z\|
=
1-\sigma_{\min}(\tilde{\bm{W}}_{j}^{\textbf{k}_{(s,\ell-1)}}) \\
\implies& \sigma_{\min}(\tilde{\bm{W}}_{j}^{\textbf{k}_{(s,\ell-1)}}) > 1 - R.
\end{split}
\end{equation}
where $\|z\|=1$.
Here, we set $z$ to be the right singular vector of $\bm{W}_j^{\textbf{k}_{(s,\ell-1)}}$
which corresponds to $\sigma_{\min}(\tilde{\bm{W}}_j^{\textbf{k}_{(s,\ell-1)}})$.
Then, $z$ has zero-values from ($\max\{n_0,n_L\} +1$)-th to $n_{j-1}$-th entries.
Recall that $\bm{W}_j^{(0)}$ is equivalent to an orthogonal matrix up to zero-valued padding.
This allows us to conclude $\|\bm{W}_{j}^{(0)}z\| = 1$, which makes the last equality of \eqref{ineq-min-sing} hold.
Thus, we have
\begin{align*}
\sigma_{\min}(\bm{W}_{L:(\mathfrak{i}(\ell)+1)}^{\textbf{k}_{(s,\ell-1)}})\sigma_{\min}(\bm{W}_{(\mathfrak{i}(\ell)-1):1}^{\textbf{k}_{(s,\ell-1)}}\bm{X})
&\ge \sigma_{\min}(\bm{X})(1-R)^{L-1},
\\
\sigma_{\max}(\bm{W}_{L:(\mathfrak{i}(\ell)+1)}^{\textbf{k}_{(s,\ell-1)}})\sigma_{\max}(\bm{W}_{(\mathfrak{i}(\ell)-1):1}^{\textbf{k}_{(s,\ell-1)}}\bm{X})
&\le \sigma_{\max}(\bm{X})(1+R)^{L-1}.
\end{align*}
It then follows from \eqref{Rate-l2-loss-exact} that
\begin{equation} \label{rate-bound}
\gamma^{\textbf{k}_{(k,j-1)}} =1 -\frac{\eta}{\kappa^2(\bm{W}_{L:(\mathfrak{i}(j)+1)}^{\textbf{k}_{(k,j-1)}})\kappa^2(\bm{W}_{(\mathfrak{i}(j)-1):1}^{\textbf{k}_{(k,j-1)}}\bm{X})}< \gamma := 1 - \frac{\eta}{\kappa^2(\bm{X})}
\left(\frac{1-R}{1+R}\right)^{2(L-1)},
\end{equation}
for $0 \le k < s$ with $1 \le j \le L$,
and for $k=s$ with $1 \le j < \ell$.
From \eqref{main-eqn-01}, \eqref{rate-bound} and Theorem~\ref{thm:convg-l2}, we obtain
\begin{align*}
\|\bm{W}_{\mathfrak{i}(\ell)}^{(s+1)}-\bm{W}^{(0)}_{\mathfrak{i}(\ell)}\|
\le
\|\bm{W}_{\mathfrak{i}(\ell)}^{(s)}-\bm{W}^{(0)}_{\mathfrak{i}(\ell)}\|
+
\frac{\eta(1+R)^{L+1}}{c^2}\|\bm{X}\|\|\Delta^{\textbf{k}_{0}}\|_F
\gamma^{sL+\ell-1}.
\end{align*}
The recursive relation with respect to $s$ gives
\begin{align*}
\|\bm{W}_{\mathfrak{i}(\ell)}^{(s+1)}-\bm{W}^{(0)}_{\mathfrak{i}(\ell)}\|
&\le \sum_{t=0}^{s} \frac{(1+R)^{L+1}}{c^2}\eta \|\bm{X}\|\|\Delta^{\textbf{k}_{0}}\|_F
\gamma^{tL+\ell-1}
\\
&\le
\frac{(1+R)^{L+1}}{c^2}\eta \|\bm{X}\|\|\Delta^{\textbf{k}_{0}}\|_F\frac{1}{1-\gamma^{L}}
\\
&\le
\frac{(1+R)^{L+1}}{c^2}\eta \|\bm{X}\|\|\Delta^{\textbf{k}_{0}}\|_F\frac{1}{L\frac{\eta}{\kappa^2_{r_x}(\bm{X})}
\left(\frac{1-R}{1+R}\right)^{2(L-1)}} \\
&\le \frac{\|\bm{X}\|^2\|\bm{W}^{\textbf{k}_{0}}-\bm{W}^*\|_F\kappa^2(\bm{X})}{c^2}
\frac{(1+R)^{L+1}}{L\left(\frac{1-R}{1+R}\right)^{2(L-1)}}.
\end{align*}
Let $\tilde{c} = c/\|\bm{X}\|$.
If $R = R_L:=\frac{(5L-3)-\sqrt{(5L-3)^2-4L}}{2L}$
and
\begin{equation} \label{assumption-c}
\tilde{c} \ge \kappa^2(\bm{X})\left(\frac{-1+\sqrt{1+4h(L)\tilde{\sigma}_{\min}/\kappa^2(\bm{X})}}{2h(L)}\right),
\end{equation}
where $h(L) = \frac{LR_L(1-R_L)^{2L-2}}{(1+R_L)^{3L-1}}$,
we have
$$
\|\bm{W}_{\mathfrak{i}(\ell)}^{(s+1)}-\bm{W}^{(0)}_{\mathfrak{i}(\ell)}\|\le
\frac{\|\bm{W}^{\textbf{k}_{0}}-\bm{W}^*\|_F\kappa^2(\bm{X})}{\tilde{c}^2}
\frac{(1+R)^{L+1}}{L\left(\frac{1-R}{1+R}\right)^{2(L-1)}} \le R.
$$
This can be checked as follows.
First, we note that the maximum of $x\frac{\left(\frac{1-x}{1+x}\right)^{2(L-1)}}{(1+x)^{L+1}}$
over $0 < x <1$ is attained at $x=R_L$.
It also follows from the assumption $\|\bm{W}^{\textbf{k}_{0}}-\bm{W}^*\|_F \le \tilde{\sigma}_{\min} -\tilde{c}$ that
\begin{align*}
&\tilde{c} \ge \kappa^2(\bm{X})\left(\frac{-1+\sqrt{1+4h(L)\tilde{\sigma}_{\min}/\kappa^2(\bm{X})}}{2h(L)}\right) \\
&\iff
\frac{2h(L)}{\kappa^2(\bm{X})}\tilde{c} + 1 \ge \sqrt{1+4h(L)\tilde{\sigma}_{\min}/\kappa^2(\bm{X})}
\\
&\implies \frac{(\tilde{\sigma}_{\min} -\tilde{c})\kappa^2(\bm{X})}{\tilde{c}^2}
\le h(L)=LR\frac{(1-R)^{2(L-1)}}{(1+R)^{3L-1}}
\\
&\implies \frac{\|\bm{W}^{\textbf{k}_{0}}-\bm{W}^*\|_F\kappa^2(\bm{X})}{\tilde{c}^2}
\le LR\frac{\left(\frac{1-R}{1+R}\right)^{2(L-1)}}{(1+R)^{L+1}}
\\
&\iff
\frac{\|\bm{W}^{\textbf{k}_{0}}-\bm{W}^*\|_F\kappa^2(\bm{X})}{\tilde{c}^2}
\frac{(1+R)^{L+1}}{L\left(\frac{1-R}{1+R}\right)^{2(L-1)}} \le R.
\end{align*}
Hence, $\|\bm{W}_{\mathfrak{i}(\ell)}^{(s+1)} - \bm{W}^{(0)}_{\mathfrak{i}(\ell)}\| < R$.
Thus, by induction, we conclude that
$\mathcal{R}(k) < R$ for all $k$.
By letting $\tilde{c} = \kappa^2(\bm{X})\left(\frac{-1+\sqrt{1+4h(L)\tilde{\sigma}_{\min}/\kappa^2(\bm{X})}}{2h(L)}\right)$,
the assumption on $\|\bm{W}^{\textbf{k}_0}-\bm{W}^*\|_F$ becomes
\begin{align*}
\|\bm{W}^{\textbf{k}_0}-\bm{W}^*\|_F &\le
\tilde{\sigma}_{\min}
- \kappa^2(\bm{X})\left(\frac{-1+\sqrt{1+4h(L)\tilde{\sigma}_{\min}/\kappa^2(\bm{X})}}{2h(L)}\right) \\
&= \frac{1}{2h(L)}\left(2h(L)\tilde{\sigma}_{\min} +\kappa^2(\bm{X})-\kappa(\bm{X})\sqrt{\kappa^2(\bm{X})+4h(L)\tilde{\sigma}_{\min}}\right)
\\
&=\frac{1}{2h(L)}\cdot \frac{(2h(L)\tilde{\sigma}_{\min} +\kappa^2(\bm{X}))^2-\kappa^2(\bm{X})(\kappa^2(\bm{X})+4h(L)\tilde{\sigma}_{\min})}{2h(L)\tilde{\sigma}_{\min} +\kappa^2(\bm{X})+\kappa(\bm{X})\sqrt{\kappa^2(\bm{X})+4h(L)\tilde{\sigma}_{\min}}}
\\
&=
\frac{2h(L)\tilde{\sigma}_{\min}^2}{2h(L)\tilde{\sigma}_{\min} +\kappa^2(\bm{X})\left(1+\sqrt{1+4h(L)\tilde{\sigma}_{\min}/\kappa^2(\bm{X})}\right)}
\\
&= \frac{\tilde{\sigma}_{\min}}{1 + \kappa^2(\bm{X})\left(\frac{1+\sqrt{1+4h(L)\tilde{\sigma}_{\min}/\kappa^2(\bm{X})}}{2h(L)\tilde{\sigma}_{\min}}\right)}.
\end{align*}
Therefore, under the above assumption on $\|\bm{W}^{\textbf{k}_0}-\bm{W}^*\|_F$,
we have
\begin{align*}
\gamma^{\textbf{k}_{(k,\ell-1)}} < \gamma_L := 1 - \frac{\eta}{\kappa^2(\bm{X})}
\left(\frac{1-R_L}{1+R_L}\right)^{2(L-1)}.
\end{align*}
Furthermore, it follows from
\begin{multline*}
LR_L = \frac{5L-3}{2}\left(1 - \sqrt{1-\frac{4L}{(5L-3)^2}}\right)
\\
= \frac{\frac{2L}{5L-3}}{1 + \sqrt{1-\frac{4L}{(5L-3)^2}}}
= \frac{2}{5-3/L}\cdot \frac{1}{1+\sqrt{1-\frac{4L}{(5L-3)^2}}},
\end{multline*}
that $\lim_{L\to \infty} LR_L = \frac{1}{5}$ and $\lim_{L\to \infty} R_L = 0$.
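Both limits, and the monotonicity invoked next, are easy to confirm numerically; a small stdlib-only sketch:

```python
import math

def R(L):
    """R_L = ((5L - 3) - sqrt((5L - 3)^2 - 4L)) / (2L), as in the proof."""
    return ((5 * L - 3) - math.sqrt((5 * L - 3) ** 2 - 4 * L)) / (2 * L)

# L * R_L decreases toward 1/5 and R_L decreases toward 0.
vals = [L * R(L) for L in range(2, 2001)]
assert all(a >= b for a, b in zip(vals, vals[1:]))
assert abs(2000 * R(2000) - 0.2) < 1e-3 and R(2000) < 1e-3

# The rate factor ((1 - R_L)/(1 + R_L))^{2(L-1)} stays above 1/5.
assert all(((1 - R(L)) / (1 + R(L))) ** (2 * (L - 1)) >= 0.2
           for L in range(2, 2001))
print("checks passed")
```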
Also, since $LR_L$ and $R_L$ are decreasing functions of $L$,
and since $\log\frac{1-x}{1+x} = -2\,\mathrm{artanh}(x) \ge -\frac{2x}{1-x^2}$ for $0 < x < 1$,
we have, for $L \ge 2$,
\begin{align*}
\left(\frac{1-R_L}{1+R_L}\right)^{2(L-1)} \ge \exp\left(-\frac{4LR_L}{1-R_L^2}\right)
\ge \exp\left(-\frac{8R_2}{1-R_2^2}\right) \ge \frac{1}{5}.
\end{align*}
Hence, we can conclude that
\begin{align*}
\gamma_L = 1 - \frac{\eta}{\kappa^2(\bm{X})}\left(\frac{1-R_L}{1+R_L}\right)^{2(L-1)}
\le \gamma = 1 - \frac{\eta}{5\kappa^2(\bm{X})},
\end{align*}
which completes the proof.
\end{proof}
\section{Proof of Theorem~\ref{thm:l2-dout1}}
\label{app:thm:l2-dout1}
\begin{proof}
Since $n_\ell \ge \max\{n_0, n_L\}$
and the initial weight matrices are from the orth-identity initialization (Section~\ref{subsec:initialization}),
it follows from Theorem~\ref{thm:role of width}
that $\bm{W}_{(\ell-1):1}^{(0)} \approxeq_1 \tilde{\bm{W}}^{(0)}_{(\ell-1):1} \in \mathbb{R}^{\max\{n_0,n_L\}\times n_0}$
and
$(\bm{W}_{(\ell-1):1}^{(0)})^T\bm{W}_{(\ell-1):1}^{(0)} = \bm{I}_{n_0}$.
Thus,
$$
\sigma_{\max}(\bm{W}_{(\ell-1):1}^{(0)}) = 1 = \sigma_{\min}(\bm{W}_{(\ell-1):1}^{(0)}).
$$
Note that since $\bm{X}$ is a full row-rank matrix, $\bm{X}\bm{X}^T$ is invertible.
In what follows, we will show that $\|\bm{W}_{L:(L-\ell+1)}^{(1)}\|$ can vanish only
if
\begin{equation} \label{condition-non-zero}
\bm{W}^* = \bm{Y}\bm{X}^\dagger =
\bm{W}^{\textbf{k}_{(0,\ell-1)}}\left(\bm{I}_{n_0} - \|\bm{X}\|^2(\bm{XX}^T)^{-1}/\eta \right).
\end{equation}
Suppose that $\bm{W}^*$ does not satisfy \eqref{condition-non-zero} for any $\ell$.
For $\ell = 1$, we have $\eta_1^{\textbf{k}_{(0,0)}} = \eta/\|\bm{X}\|^2$ since $(\bm{W}_{(L-1):1}^{(0)})^T\bm{W}_{(L-1):1}^{(0)} = \bm{I}_{n_0}$.
Suppose $\bm{W}_L^{(1)} = \bm{0}$
and let $\Delta^{\textbf{k}_{(0,\ell-1)}}_{W} = \bm{W}^{\textbf{k}_{(0,\ell-1)}} - \bm{W}^*$.
Then,
\begin{align*}
\bm{0} = \bm{W}_L^{(1)} &= \bm{W}_L^{(0)} - \eta_1^{\textbf{k}_{(0,0)}}
(\bm{W}_{(L-1):1}^{(0)}\bm{XX}^T(\Delta_W^{\textbf{k}_{(0,0)}})^T)^T, \\
\bm{W}_L^{(0)}&= \eta_1^{\textbf{k}_{(0,0)}}
\Delta_W^{\textbf{k}_{(0,0)}}
\bm{XX}^T
(\bm{W}_{(L-1):1}^{(0)})^T,
\\
\bm{W}^{\textbf{k}_{(0,0)}}&=
\eta_1^{\textbf{k}_{(0,0)}}
(\bm{W}^{\textbf{k}_{(0,0)}}-\bm{W}^*)
\bm{XX}^T
(\bm{W}_{(L-1):1}^{(0)})^T\bm{W}_{(L-1):1}^{(0)}, \\
\bm{W}^* &= \bm{W}^{\textbf{k}_{(0,0)}}\left(\bm{I}_{n_0} - (\eta_1^{\textbf{k}_{(0,0)}}\bm{XX}^T)^{-1} \right),
\end{align*}
which contradicts the assumption that $\bm{W}^*$ does not satisfy \eqref{condition-non-zero}.
Hence, $\bm{W}_L^{(1)} \ne \bm{0}$.
Now, suppose $\|\bm{W}_{L:(L-\ell+2)}^{(1)}\| \ne 0$;
we want to show that $\|\bm{W}_{L:(L-\ell+1)}^{(1)}\| \ne 0$.
Suppose not, i.e., $\bm{W}_{L:(L-\ell+1)}^{(1)} = \bm{0}$.
Then, we have
\begin{align*}
\bm{W}_{L-\ell+1}^{(1)} &= \bm{W}_{L-\ell+1}^{(0)} - \eta_\ell^{\textbf{k}_{(0,\ell-1)}}
(\bm{W}_{(L-\ell):1}^{(0)}\bm{XX}^T(\Delta_W^{\textbf{k}_{(0,\ell-1)}})^T\bm{W}_{L:(L-\ell+2)}^{(1)}
)^T, \\
\bm{0} = \bm{W}_{L:(L-\ell+1)}^{(1)} &= \bm{W}_{L:(L-\ell+2)}^{(1)}\bm{W}_{L-\ell+1}^{(0)} \\
&\qquad\quad- \eta_\ell^{\textbf{k}_{(0,\ell-1)}}
\|\bm{W}_{L:(L-\ell+2)}^{(1)}\|^2\Delta_W^{\textbf{k}_{(0,\ell-1)}}
\bm{XX}^T(\bm{W}_{(L-\ell):1}^{(0)})^T, \\
\bm{W}_{L:(L-\ell+2)}^{(1)}\bm{W}_{L-\ell+1}^{(0)}&= \eta_1^{\textbf{k}_{(0,0)}}
\Delta_W^{\textbf{k}_{(0,\ell-1)}}
\bm{XX}^T
(\bm{W}_{(L-\ell):1}^{(0)})^T,
\\
\bm{W}^{\textbf{k}_{(0,\ell-1)}}&= \eta_1^{\textbf{k}_{(0,0)}}
\Delta_W^{\textbf{k}_{(0,\ell-1)}}
\bm{XX}^T
(\bm{W}_{(L-\ell):1}^{(0)})^T\bm{W}_{(L-\ell):1}^{(0)},
\\
\bm{W}^* &= \bm{W}^{\textbf{k}_{(0,\ell-1)}}\left(\bm{I}_{n_0} - (\eta_1^{\textbf{k}_{(0,0)}}\bm{XX}^T)^{-1} \right),
\end{align*}
which contradicts the assumption on $\bm{W}^*$.
Hence, $\bm{W}_{L:(L-\ell+1)}^{(1)} \ne \bm{0}$.
By induction, we conclude that
$\bm{W}_{L:(L-\ell+1)}^{(1)} \ne \bm{0}$ for all $\ell$.
Thus, it follows from Theorem~\ref{thm:convg-l2}
that
\begin{equation}
\begin{split}
\|\bm{W}^{\textbf{k}_{1}}\bm{X} - \bm{W}^*\bm{X}\|_F
&<
\|\bm{W}^{\textbf{k}_{0}}\bm{X} - \bm{W}^*\bm{X}\|_F
\left(1 -\frac{\eta}{\kappa^2(\bm{X})} \right)^{L}.
\end{split}
\end{equation}
Since $L$ is chosen to satisfy
$$
\|\bm{W}^{\textbf{k}_{1}}\bm{X} - \bm{W}^*\bm{X}\|_F \le
\|\bm{W}^{\textbf{k}_{0}}\bm{X} - \bm{W}^*\bm{X}\|_F
\left(1 -\frac{\eta}{\kappa^2(\bm{X})} \right)^{L}
\le \frac{\sigma_{\min}(\bm{W}^*\bm{X})}{c},
$$
where $c$ is defined in \eqref{def:c-min},
it follows from Lemma~\ref{lemma-min-sing-value}
and Theorem~\ref{thm-l2-identity}
that
$\|\bm{W}_{L:j}^{(s)}\| \ne 0$ for all $j$ and $s$,
and
$$
\|\bm{W}^{\textbf{k}_{s}}\bm{X} - \bm{W}^*\bm{X}\|_F \le
\|\bm{W}^{\textbf{k}_{1}}\bm{X} - \bm{W}^*\bm{X}\|_F(\gamma^{L-1})^{s-1}\left(1 -\frac{\eta}{\kappa^2(\bm{X})} \right)^{s-1}.
$$
Note that the factor $\left(1 -\frac{\eta}{\kappa^2(\bm{X})} \right)^{s-1}$ comes from the fact that
$\|\bm{W}_{L:2}^{(s)}\| \ne 0$ for all $1 \le s$.
Hence, we have
$$
\|\bm{W}^{\textbf{k}_{s}}\bm{X} - \bm{W}^*\bm{X}\|_F \le
\|\bm{W}^{\textbf{k}_{0}}\bm{X} - \bm{W}^*\bm{X}\|_F(\gamma^{L-1})^{s-1}\left(1 -\frac{\eta}{\kappa^2(\bm{X})} \right)^{L+s-1},
$$
which completes the proof.
\end{proof}
\section{Proof of Theorem~\ref{thm:convg-convex}}
\label{app:thm:convg-convex}
\begin{proof}
For notational convenience, let $\ell(z) = \ell(z;b)$.
Since $\ell(\cdot)$ is convex and differentiable
with $|\ell'(z)-\ell'(x)|\le C_\text{Lip}|z-x|$,
the descent lemma gives
\begin{align*}
\ell(z) \le \ell(x) + \ell'(x)(z-x) + \frac{C_\text{Lip}}{2}(z-x)^2.
\end{align*}
Let
$\bm{W}^{\textbf{k}_{(k,\ell)}} = \bm{W}_{L:(L-\ell+1)}^{(k+1)}\bm{W}_{(L-\ell):1}^{(k)}$,
$\hat{\textbf{y}}_{(k,\ell)}^i =\bm{W}^{\textbf{k}_{(k,\ell)}}\textbf{x}^i$
and
$$\hat{\textbf{Y}}_{(k,\ell)}=[\hat{\textbf{y}}_{(k,\ell)}^1,\cdots,\hat{\textbf{y}}_{(k,\ell)}^m]= \bm{W}^{\textbf{k}_{(k,\ell)}}\bm{X}.$$
Then, we have
\begin{equation} \label{thm:loss-ineq}
\begin{split}
&\ell((\hat{\textbf{y}}_{(k,\ell)}^i)_j; \textbf{y}_j^i)
\\
&\le
\ell((\hat{\textbf{y}}_{(k,\ell-1)}^i)_j; \textbf{y}_j^i)
+\ell'((\hat{\textbf{y}}_{(k,\ell-1)}^i)_j; \textbf{y}_j^i)
((\hat{\textbf{y}}_{(k,\ell)}^i)_j - (\hat{\textbf{y}}_{(k,\ell-1)}^i)_j)
\\
&\qquad+ \frac{C_\text{Lip}}{2}((\hat{\textbf{y}}_{(k,\ell)}^i)_j - (\hat{\textbf{y}}_{(k,\ell-1)}^i)_j)^2.
\end{split}
\end{equation}
For notational convenience, for $j > i$, let
$$
\bm{W}_j \bm{W}_{j-1} \cdots \bm{W}_{i} = \bm{W}_{j:i}.
$$
It follows from the BCGD update rule that
\begin{align*}
\bm{W}_{L-\ell+1}^{(k+1)}
=
\bm{W}_{L-\ell+1}^{(k)}
- \eta_\ell^{\textbf{k}_{(k,\ell-1)}}
(\bm{W}_{(L-\ell):1}^{(k)}
\bm{X} \bm{J}^{\textbf{k}_{(k,\ell-1)}}\bm{W}_{L:(L-\ell+2)}^{(k+1)})^T.
\end{align*}
Multiplying both sides by $\bm{W}_{(L-\ell):1}^{(k)}\bm{X}$ on the right
and by $\bm{W}_{L:(L-\ell+2)}^{(k+1)}$ on the left,
we obtain
\begin{multline*}
\bm{W}_{L:(L-\ell+1)}^{(k+1)}\bm{W}_{(L-\ell):1}^{(k)}\bm{X} \\
=
\bm{W}_{L:(L-\ell+2)}^{(k+1)}\bm{W}_{(L-\ell+1):1}^{(k)}\bm{X}
- \eta_\ell^{\textbf{k}_{(k,\ell-1)}}
\bm{A}_\ell^{(k+1)}
(\bm{J}^{\textbf{k}_{(k,\ell-1)}})^T
\bm{B}_\ell^{(k)},
\end{multline*}
where
\begin{align*}
\bm{A}_\ell^{(k)} &= \bm{W}_{L:(L-\ell+2)}^{(k)}
(\bm{W}_{L:(L-\ell+2)}^{(k)})^T \in \mathbb{R}^{d_{\text{out}} \times d_{\text{out}}},
\\
\bm{B}_\ell^{(k)} &= (\bm{W}_{(L-\ell):1}^{(k)}\bm{X})^T
\bm{W}_{(L-\ell):1}^{(k)}\bm{X}
\in \mathbb{R}^{m \times m}.
\end{align*}
Thus, we have
\begin{align*}
\hat{\textbf{Y}}_{(k,\ell)} - \hat{\textbf{Y}}_{(k,\ell-1)}
&= (\bm{W}^{\textbf{k}_{(k,\ell)}} - \bm{W}^{\textbf{k}_{(k,\ell-1)}})\bm{X}
= - \eta_\ell^{\textbf{k}_{(k,\ell-1)}}
\bm{A}_\ell^{(k+1)}
(\bm{J}^{\textbf{k}_{(k,\ell-1)}})^T
\bm{B}_\ell^{(k)},
\end{align*}
where $\hat{\textbf{Y}}_{(k,\ell)} = \bm{W}^{\textbf{k}_{(k,\ell)}}\bm{X}$.
Let
$$
\mu_{\max}^{(k,\ell-1)} =
\sigma_{\max}^2(\bm{W}_{L:(L-\ell+2)}^{(k+1)})
\sigma_{\max}^2(\bm{W}_{(L-\ell):1}^{(k)}\bm{X}).
$$
Also, let $\Delta \mathcal{L}^{\textbf{k}_{(k,\ell)}} = \mathcal{L}(\bm{\theta}^{\textbf{k}_{(k,\ell)}})
-
\mathcal{L}(\bm{\theta}^{*})$,
$\Delta^{\textbf{k}_{(k,\ell)}} = (\bm{W}^{\textbf{k}_{(k,\ell)}}-\bm{W}^*)\bm{X}$,
and
\begin{align*}
\mathcal{J}^{\textbf{k}_{(k,\ell-1)}} &=
(\bm{W}_{(L-\ell):1}^{(k)}\bm{X}\bm{J}^{\textbf{k}_{(k,\ell-1)}}\bm{W}_{L:(L-\ell+2)}^{(k+1)})^T, \\
\tilde{\mathcal{J}}^{\textbf{k}_{(k,\ell-1)}}&=\bm{W}_{L:(L-\ell+2)}^{(k+1)}
\mathcal{J}^{\textbf{k}_{(k,\ell-1)}}
\bm{W}_{(L-\ell):1}^{(k)}\bm{X}
= \bm{A}_\ell^{(k+1)}
(\bm{J}^{\textbf{k}_{(k,\ell-1)}})^T
\bm{B}_\ell^{(k)}.
\end{align*}
Let $\mathcal{L}(\hat{\textbf{Y}}_{(k,\ell)}) = \mathcal{L}(\bm{\theta}^{\textbf{k}_{(k,\ell)}})$.
Combining this with \eqref{thm:loss-ineq}, we obtain
\begin{align*}
\mathcal{L}(\hat{\textbf{Y}}_{(k,\ell)})
&\le
\mathcal{L}(\hat{\textbf{Y}}_{(k,\ell-1)})
-\eta_\ell^{\textbf{k}_{(k,\ell-1)}}
\langle (\bm{J}^{\textbf{k}_{(k,\ell-1)}})^T, \bm{A}_\ell^{(k+1)}
(\bm{J}^{\textbf{k}_{(k,\ell-1)}})^T
\bm{B}_\ell^{(k)}\rangle_F
\\
&\qquad+\frac{C_\text{Lip}}{2}(\eta_\ell^{\textbf{k}_{(k,\ell-1)}})^2
\|\bm{A}_\ell^{(k+1)}
(\bm{J}^{\textbf{k}_{(k,\ell-1)}})^T
\bm{B}_\ell^{(k)}\|_F^2 \\
&=\mathcal{L}(\hat{\textbf{Y}}_{(k,\ell-1)})
-\eta_\ell^{\textbf{k}_{(k,\ell-1)}}
\| \mathcal{J}^{\textbf{k}_{(k,\ell-1)}}\|_F^2
\\
&\qquad+\frac{C_\text{Lip}}{2}(\eta_\ell^{\textbf{k}_{(k,\ell-1)}})^2\|\bm{A}_\ell^{(k+1)}
(\bm{J}^{\textbf{k}_{(k,\ell-1)}})^T
\bm{B}_\ell^{(k)}\|_F^2.
\end{align*}
It can then be checked that the learning rate which minimizes the above upper bound is
\begin{equation}
\eta_\text{opt}^{\textbf{k}_{(k,\ell-1)}} =
\frac{\| \mathcal{J}^{\textbf{k}_{(k,\ell-1)}}\|_F^2}
{C_\text{Lip}
\|\bm{W}_{L:(L-\ell+2)}^{(k+1)}
\mathcal{J}^{\textbf{k}_{(k,\ell-1)}}
\bm{W}_{(L-\ell):1}^{(k)}\bm{X}\|_F^2}.
\end{equation}
Also, it follows from
\begin{equation} \label{gen-loss-grad-ineq}
\begin{split}
\|\tilde{\mathcal{J}}^{\textbf{k}_{(k,\ell-1)}}\|_F^2
&\le \sigma_{\max}^2(\bm{W}_{L:(L-\ell+2)}^{(k+1)})\sigma_{\max}^2(\bm{W}_{(L-\ell):1}^{(k)}\bm{X})
\|{\mathcal{J}}^{\textbf{k}_{(k,\ell-1)}}\|_F^2
\\
&= \mu_{\max}^{(k,\ell-1)}\|{\mathcal{J}}^{\textbf{k}_{(k,\ell-1)}}\|_F^2
\end{split}
\end{equation}
that
\begin{align*}
&\mathcal{L}(\hat{\textbf{Y}}_{(k,\ell)})
\\
&\le \mathcal{L}(\hat{\textbf{Y}}_{(k,\ell-1)})
-\eta_\ell^{\textbf{k}_{(k,\ell-1)}}
\| \mathcal{J}^{\textbf{k}_{(k,\ell-1)}}\|_F^2
+\frac{C_\text{Lip}}{2}(\eta_\ell^{\textbf{k}_{(k,\ell-1)}})^2\|\bm{A}_\ell^{(k+1)}
(\bm{J}^{\textbf{k}_{(k,\ell-1)}})^T
\bm{B}_\ell^{(k)}\|_F^2
\\
&\le\mathcal{L}(\hat{\textbf{Y}}_{(k,\ell-1)})
-\eta_\ell^{\textbf{k}_{(k,\ell-1)}}
\| \mathcal{J}^{\textbf{k}_{(k,\ell-1)}}\|_F^2+\frac{C_\text{Lip}}{2}(\eta_\ell^{\textbf{k}_{(k,\ell-1)}})^2
\mu_{\max}^{(k,\ell-1)}
\|\mathcal{J}^{\textbf{k}_{(k,\ell-1)}}\|_F^2
\\
&=
\mathcal{L}(\hat{\textbf{Y}}_{(k,\ell-1)})
-(1-\frac{C_\text{Lip}}{2}\eta_\ell^{\textbf{k}_{(k,\ell-1)}}
\mu_{\max}^{(k,\ell-1)})
\eta_\ell^{\textbf{k}_{(k,\ell-1)}}
\| \mathcal{J}^{\textbf{k}_{(k,\ell-1)}}\|_F^2.
\end{align*}
If $
0< \eta_\ell^{\textbf{k}_{(k,\ell-1)}} < \frac{2}{C_\text{Lip}\mu_{\max}^{(k,\ell-1)}},
$
the loss function is strictly decreasing unless $\|\mathcal{J}^{\textbf{k}_{(k,\ell-1)}}\|_F=0$.
Suppose $
0< \eta_\ell^{\textbf{k}_{(k,\ell-1)}} \le \frac{1}{C_\text{Lip}\mu_{\max}^{(k,\ell-1)}}.
$
Then, since
$
-(1- \frac{C_\text{Lip}}{2}\eta_\ell^{\textbf{k}_{(k,\ell-1)}}\mu_{\max}^{(k,\ell-1)} )
\le -\frac{1}{2},
$
we have
\begin{align*}
\mathcal{L}(\hat{\textbf{Y}}_{(k,\ell)})
&\le
\mathcal{L}(\hat{\textbf{Y}}_{(k,\ell-1)})
-\frac{\eta_\ell^{\textbf{k}_{(k,\ell-1)}}}{2}
\| \mathcal{J}^{\textbf{k}_{(k,\ell-1)}}\|_F^2.
\end{align*}
By summing up the above, we have
\begin{align*}
\sum_{s=0}^{k-1} \sum_{\ell = 1}^L \frac{\eta_\ell^{\textbf{k}_{(s,\ell-1)}}}{2}
\| \mathcal{J}^{\textbf{k}_{(s,\ell-1)}}\|_F^2
&\le
\sum_{s=0}^{k-1} \sum_{\ell = 1}^L
\left(
\mathcal{L}(\hat{\textbf{Y}}_{(s,\ell-1)}) - \mathcal{L}(\hat{\textbf{Y}}_{(s,\ell)}) \right)
\le \mathcal{L}(\hat{\textbf{Y}}_{(0,0)}) < \infty.
\end{align*}
Therefore,
$\lim_{k \to \infty} \eta_\ell^{\textbf{k}_{(k,\ell)}}
\| \mathcal{J}^{\textbf{k}_{(k,\ell)}}\|_F^2 = 0$ for any $0 \le \ell < L$.
Also, it follows from the above that
$$
\frac{1}{k}\sum_{s=0}^{k-1}
\eta_\ell^{\textbf{k}_{(s,\ell)}}
\| \mathcal{J}^{\textbf{k}_{(s,\ell)}}\|_F^2
\le
\frac{1}{k}\sum_{s=0}^{k-1} \sum_{\ell = 1}^L
\eta_\ell^{\textbf{k}_{(s,\ell)}}
\| \mathcal{J}^{\textbf{k}_{(s,\ell)}}\|_F^2
\le \frac{2}{k}\mathcal{L}(\hat{\textbf{Y}}_{(0,0)}) = \mathcal{O}\left(\frac{1}{k}\right).
$$
Furthermore, for all $0 \le \ell < L$,
if $0<\inf_k \eta_\ell^{\textbf{k}_{(k,\ell)}} \le \sup_k \eta_\ell^{\textbf{k}_{(k,\ell)}} \le 1$,
we conclude that $\lim_{k \to \infty}
\| \mathcal{J}^{\textbf{k}_{(k,\ell)}}\|_F^2 = 0$
and
$\lim_{k \to \infty}
\|\eta_\ell^{\textbf{k}_{(k,\ell)}} \mathcal{J}^{\textbf{k}_{(k,\ell)}}\|_F^2 = 0$.
For each $\ell$,
$\lim_{k\to \infty} \bm{W}_\ell^{(k)} = \bm{W}_\ell^*$.
That is, the BCGD finds a critical point.
Since all local minima are global (see \cite{Laurent2018deep}),
$\{\bm{W}_\ell^*\}_{\ell=1}^L$ is a global minimizer.
\end{proof}
\section{Proof of Theorem~\ref{thm:convg-l2-loss-BCSGD}}
\label{app:thm:convg-l2-loss-BCSGD}
\begin{proof}
For notational convenience, for $j > i$, let
$$
\bm{W}_{j:i}:=\bm{W}_j \bm{W}_{j-1} \cdots \bm{W}_{i}.
$$
By definition,
it follows from the update rule that
\begin{align*}
\bm{W}_{\ell_k}^{(\textbf{k}_k(\ell_k)+1)}
=
\bm{W}_{\ell_k}^{(\textbf{k}_k(\ell_k))}
- \eta^{\textbf{k}_{k}}_{\ell_k}
(\bm{W}^{\textbf{k}_k}_{(\ell_k-1):1}
\bm{X}_{:,i_k}\bm{J}_{i_k,:}^{\textbf{k}_k}
\bm{W}^{\textbf{k}_k}_{L:(\ell_k+1)}
)^T,
\end{align*}
where $i_k$ is a randomly chosen index from $[m]$
and $\ell_k$ is an index from $[L]$.
Multiplying both sides by $\bm{W}_{(\ell_k-1):1}^{\textbf{k}_{k}}\bm{X}$ on the right and by $\bm{W}_{L:(\ell_k+1)}^{\textbf{k}_{k}}$ on the left,
and subtracting $\bm{W}^*\bm{X}$ from both sides,
we obtain
\begin{align*}
(\bm{W}^{\textbf{k}_{k+1}}- \bm{W}^*)\bm{X} =
(\bm{W}^{\textbf{k}_{k}} - \bm{W}^*)\bm{X}
- \eta^{\textbf{k}_{k}}_{\ell_k}
\bm{A}_{\ell_k}^{\textbf{k}_{k}}
(\bm{X}_{:i_k}\bm{J}_{i_k:}^{\textbf{k}_k})^T
\tilde{\bm{B}}_{\ell_k}^{\textbf{k}_{k}}\bm{X}
\end{align*}
where
\begin{align*}
\bm{A}_{\ell_k}^{\textbf{k}_{k}} &= \bm{W}_{L:(\ell_k+1)}^{\textbf{k}_{k}}
(\bm{W}_{L:(\ell_k+1)}^{\textbf{k}_{k}})^T \in \mathbb{R}^{d_{\text{out}} \times d_{\text{out}}},
\\
\tilde{\bm{B}}_{\ell_k}^{\textbf{k}_{k}} &= (\bm{W}_{(\ell_k-1):1}^{\textbf{k}_{k}})^T
\bm{W}_{(\ell_k-1):1}^{\textbf{k}_{k}}
\in \mathbb{R}^{d_{\text{in}} \times d_{\text{in}}}.
\end{align*}
Since $\bm{A}_{\ell_k}^{\textbf{k}_{k}}$ is symmetric,
it is diagonalizable. Thus,
\begin{align*}
(\bm{U}_{\ell_k}^{\textbf{k}_{k}})^{T}\bm{A}_{\ell_k}^{\textbf{k}_{k}}\bm{U}_{\ell_k}^{\textbf{k}_{k}} &=
\bm{D}^{\textbf{k}_{k}}_{A,\ell_k} = \text{diag}(\lambda_{\ell_k, i}^{\textbf{k}_{k}}),
\quad 1 \le i \le d_{\text{out}},
\end{align*}
where $\bm{U}_{\ell_k}^{\textbf{k}_{k}}$ is orthogonal.
Let
$\Delta_W^{\textbf{k}_{k}} := \bm{W}^{\textbf{k}_{k}}-\bm{W}^*$
and
$
\Delta^{\textbf{k}_{k}} := \Delta_W^{\textbf{k}_{k}}\bm{X}.
$
Since $\ell(a;b)=(a-b)^2/2$,
we have
\begin{align*}
\Delta^{\textbf{k}_{k+1}} &=
\Delta^{\textbf{k}_{k}}
- \eta^{\textbf{k}_{k}}_{\ell_k}
\bm{A}_{\ell_k}^{\textbf{k}_{k}}
\left(\Delta_W^{\textbf{k}_{k}}
(\bm{X}_{:i_k}\bm{X}_{:i_k}^T)
-(\textbf{y}^{i_k} - \bm{W}^*\textbf{x}^{i_k})\bm{X}_{:i_k}^T \right)
\tilde{\bm{B}}_{\ell_k}^{\textbf{k}_{k}}\bm{X}.
\end{align*}
Let
$\mathcal{E} = \bm{Y} - \bm{W}^*\bm{X}$ and
$\mathcal{E}_{:,i_k}:= \textbf{y}^{i_k} - \bm{W}^*\textbf{x}^{i_k}$.
Then
\begin{multline*}
(\bm{U}_{\ell_k}^{\textbf{k}_{k}})^T\Delta^{\textbf{k}_{k+1}}
\\
=
(\bm{U}_{\ell_k}^{\textbf{k}_{k}})^T\Delta^{\textbf{k}_{k}}
- \eta^{\textbf{k}_{k}}_{\ell_k}
\bm{D}_{A,\ell_k}^{\textbf{k}_k} (\bm{U}_{\ell_k}^{\textbf{k}_{k}})^{T}
\left(\Delta_W^{\textbf{k}_{k}}
(\bm{X}_{:i_k}\bm{X}_{:i_k}^T)
-\mathcal{E}_{:,i_k}\bm{X}_{:i_k}^T \right)
\tilde{\bm{B}}^{\textbf{k}_k}_{\ell_k}\bm{X}.
\end{multline*}
Let $\tilde{\Delta}^{\textbf{k}_{s,t,\ell}}= (\bm{U}_{\ell}^{\textbf{k}_{s}})^T \Delta^{\textbf{k}_{t}}$.
Then
\begin{equation} \label{eqn-BCSGD-err}
\tilde{\Delta}^{\textbf{k}_{k,k+1,\ell_k}} =
\tilde{\Delta}^{\textbf{k}_{k,k,\ell_k}}
- \eta^{\textbf{k}_{k}}_{\ell_k}
\bm{D}_{A,\ell_k}^{\textbf{k}_k} (\bm{U}_{\ell_k}^{\textbf{k}_{k}})^T
\left(\Delta_W^{\textbf{k}_{k}}
(\bm{X}_{:i_k}\bm{X}_{:i_k}^T)
-\mathcal{E}_{:,i_k}\bm{X}_{:i_k}^T \right)
\tilde{\bm{B}}^{\textbf{k}_k}_{\ell_k}\bm{X}.
\end{equation}
Let $\bm{u}_{\ell_k, j}^{\textbf{k}_k}$ be the $j$-th column of
$\bm{U}_{\ell_k}^{\textbf{k}_{k}}$.
The $j$-th row of \eqref{eqn-BCSGD-err} satisfies
\begin{align*}
&\|(\tilde{\Delta}^{\textbf{k}_{k+1}})_{j:}\|^2 \\
&=
\| (\bm{u}_{\ell_k, j}^{\textbf{k}_k})^T\Delta^{\textbf{k}_{k}}
- \eta^{\textbf{k}_{k}}_{\ell_k}
\lambda_{\ell_k, j}^{\textbf{k}_k}
(\bm{u}_{\ell_k, j}^{\textbf{k}_k})^T
\left(\Delta_W^{\textbf{k}_{k}}
(\bm{X}_{:i_k}\bm{X}_{:i_k}^T)
-\mathcal{E}_{:,i_k}\bm{X}_{:i_k}^T \right)
\tilde{\bm{B}}^{\textbf{k}_k}_{\ell_k}\bm{X}\|^2 \\
&=
\|(\tilde{\Delta}^{\textbf{k}_{k}})_{j:}\|^2
+ \left(\eta^{\textbf{k}_{k}}_{\ell_k}
\lambda_{\ell_k, j}^{\textbf{k}_k} \right)^2
\|
(\bm{u}_{\ell_k, j}^{\textbf{k}_k})^T
(\Delta_W^{\textbf{k}_{k}}
\bm{X}_{:i_k}
-\mathcal{E}_{:,i_k})
\bm{X}_{:i_k}^T\tilde{\bm{B}}^{\textbf{k}_k}_{\ell_k}\bm{X}\|^2
\\
&\qquad-2\eta^{\textbf{k}_{k}}_{\ell_k}
\lambda_{\ell_k, j}^{\textbf{k}_k}
(\bm{u}_{\ell_k, j}^{\textbf{k}_k})^T
(\Delta_W^{\textbf{k}_{k}}
\bm{X}_{:i_k}
-\mathcal{E}_{:,i_k})
\bm{X}_{:i_k}^T\tilde{\bm{B}}^{\textbf{k}_k}_{\ell_k}\bm{X}
(\Delta^{\textbf{k}_{k}})^T\bm{u}_{\ell_k, j}^{\textbf{k}_k}.
\end{align*}
Note that
\begin{align*}
\|(\bm{u}_{\ell_k, j}^{\textbf{k}_k})^T
(\Delta_W^{\textbf{k}_{k}}\bm{X}_{:i_k}-\mathcal{E}_{:,i_k})
\bm{X}_{:i_k}^T\tilde{\bm{B}}^{\textbf{k}_k}_{\ell_k}\bm{X}\|^2
= \|\bm{X}_{:i_k}^T\tilde{\bm{B}}^{\textbf{k}_k}_{\ell_k}\bm{X}\|^2\bm{Q},
\end{align*}
where
$$
\bm{Q} = \|(\bm{u}_{\ell_k, j}^{\textbf{k}_k})^T\Delta_W^{\textbf{k}_{k}}
\bm{X}_{:i_k}\|^2 + \|(\bm{u}_{\ell_k,j}^{\textbf{k}_k})^T\mathcal{E}_{:,i_k}\|^2
-2
(\bm{u}_{\ell_k, j}^{\textbf{k}_k})^T\Delta_W^{\textbf{k}_{k}}
\bm{X}_{:i_k}(\mathcal{E}_{:,i_k})^T
\bm{u}_{\ell_k, j}^{\textbf{k}_k}.
$$
Thus,
\begin{align*}
&\|(\tilde{\Delta}^{\textbf{k}_{k+1}})_{j:}\|^2 \\
&=
\|(\tilde{\Delta}^{\textbf{k}_{k}})_{j:}\|^2
-2\eta^{\textbf{k}_{k}}_{\ell_k}
\lambda_{\ell_k, j}^{\textbf{k}_k}
(\bm{u}_{\ell_k, j}^{\textbf{k}_k})^T
\Delta_W^{\textbf{k}_{k}}
(\bm{X}_{:i_k}\bm{X}_{:i_k}^T) \tilde{\bm{B}}^{\textbf{k}_k}_{\ell_k}\bm{X}
(\Delta^{\textbf{k}_{k}})^T
\bm{u}_{\ell_k, j}^{\textbf{k}_k}
\\
&\qquad
+ (\eta^{\textbf{k}_{k}}_{\ell_k}
\lambda_{\ell_k, j}^{\textbf{k}_k} )^2\|\bm{X}_{:i_k}^T\tilde{\bm{B}}^{\textbf{k}_k}_{\ell_k}\bm{X}\|^2\bm{Q}
\\
&\qquad +2\eta^{\textbf{k}_{k}}_{\ell_k}
\lambda_{\ell_k, j}^{\textbf{k}_k}
(\bm{u}_{\ell_k, j}^{\textbf{k}_k})^T
\Delta_W^{\textbf{k}_{k}}
\tilde{\bm{B}}^{\textbf{k}_k}_{\ell_k}\bm{X}
\bm{X}_{:i_k}(\mathcal{E}_{:,i_k})^T
\bm{u}_{\ell_k, j}^{\textbf{k}_k}.
\end{align*}
Let
$$
\bm{B}_{\ell_k}^{\textbf{k}_{k}} =\bm{X}^T (\bm{W}_{(\ell_k-1):1}^{\textbf{k}_{k}})^T
\bm{W}_{(\ell_k-1):1}^{\textbf{k}_{k}}\bm{X}.
$$
Let us reparameterize the learning rate as
$\eta^{\textbf{k}_{k}}_{\ell_k} = \tilde{\eta}_{\ell_k}^{\textbf{k}_{k}}/\|\bm{X}_{:,i_k}^T\tilde{\bm{B}}^{\textbf{k}_k}_{\ell_k}\bm{X}\|^2$
and define a discrete probability distribution $\bm{\pi}$ on $[m]$ to be
$\bm{\pi}(i) = \|\bm{X}_{:i}^T\tilde{\bm{B}}^{\textbf{k}_k}_{\ell_k}\bm{X}\|^2/\|\bm{X}^T\tilde{\bm{B}}^{\textbf{k}_k}_{\ell_k}\bm{X}\|_F^2$
for $1 \le i \le m$.
Since
$$
\bm{X}\mathcal{E}^T = \bm{X}(\bm{Y}(\bm{I}-\bm{X}^\dagger\bm{X})^T)^T
= \bm{X}(\bm{I}-\bm{X}^\dagger\bm{X})\bm{Y}^T
= (\bm{X}- \bm{X}\bm{X}^\dagger\bm{X})\bm{Y}^T= 0,
$$
we have $\mathbf{E}_{i_k}\left[\frac{\bm{X}_{:i_k}(\mathcal{E}_{:,i_k})^T}{\|\bm{X}_{:i_k}^T\tilde{\bm{B}}^{\textbf{k}_k}_{\ell_k}\bm{X}\|^2}\right] = \frac{\bm{X}\mathcal{E}^T}{\|\bm{X}^T\tilde{\bm{B}}^{\textbf{k}_k}_{\ell_k}\bm{X}\|_F^2} = 0$.
By taking the expectation $\mathbf{E}_{i_k}$ with respect to $i_k \sim \bm{\pi}$,
we obtain
\begin{align*}
&\mathbf{E}_{i_k}[\|(\tilde{\Delta}^{\textbf{k}_{k+1}})_{j:}\|^2]
\\
&=
\|(\tilde{\Delta}^{\textbf{k}_{k}})_{j:}\|^2
-2\tilde{\eta}_{\ell_k}^{\textbf{k}_{k}}
\lambda_{\ell_k, j}^{\textbf{k}_k}
(\bm{u}_{\ell_k, j}^{\textbf{k}_k})^T
\Delta_W^{\textbf{k}_{k}}
\mathbf{E}_{i_k}\left[\frac{\bm{X}_{:i_k}\bm{X}_{:i_k}^T}{\|\bm{X}_{:i_k}^T\tilde{\bm{B}}^{\textbf{k}_k}_{\ell_k}\bm{X}\|^2}\right] \tilde{\bm{B}}^{\textbf{k}_k}_{\ell_k}\bm{X}(\Delta^{\textbf{k}_{k}})^T
\bm{u}_{\ell_k, j}^{\textbf{k}_k}
\\
&\qquad
+(\tilde{\eta}_{\ell_k}^{\textbf{k}_{k}}
\lambda_{\ell_k, j}^{\textbf{k}_k})^2
\left((\bm{u}_{\ell_k, j}^{\textbf{k}_k})^T
\Delta_W^{\textbf{k}_{k}}
\mathbf{E}_{i_k}\left[\frac{\bm{X}_{:i_k}\bm{X}_{:i_k}^T}{\|\bm{X}_{:i_k}^T\tilde{\bm{B}}^{\textbf{k}_k}_{\ell_k}\bm{X}\|^2}\right] (\Delta_W^{\textbf{k}_{k}})^T
\bm{u}_{\ell_k, j}^{\textbf{k}_k}
\right)
\\
&\qquad\qquad
+(\tilde{\eta}_{\ell_k}^{\textbf{k}_{k}}
\lambda_{\ell_k, j}^{\textbf{k}_k})^2
\left((\bm{u}_{\ell_k, j}^{\textbf{k}_k})^T
\mathbf{E}_{i_k}\left[\frac{\mathcal{E}_{:,i_k}(\mathcal{E}_{:,i_k})^T}{\|\bm{X}_{:i_k}^T\tilde{\bm{B}}^{\textbf{k}_k}_{\ell_k}\bm{X}\|^2}\right]
\bm{u}_{\ell_k, j}^{\textbf{k}_k}
\right)
\\
&=
\|(\tilde{\Delta}^{\textbf{k}_{k}})_{j:}\|^2
-2\frac{\tilde{\eta}_{\ell_k}^{\textbf{k}_{k}} \lambda_{\ell_k,j}^{\textbf{k}_k}}{\|\bm{W}_{(\ell_k-1):1}^{\textbf{k}_{k}}\bm{X}\|_F^4}
\|(\bm{W}_{(\ell_k-1):1}^{\textbf{k}_{k}}\bm{X})(\Delta^{\textbf{k}_{k}})^T\bm{u}_{\ell_k, j}^{\textbf{k}_k}\|^2
\\
&\qquad
+\frac{(\tilde{\eta}_{\ell_k}^{\textbf{k}_{k}}
\lambda_{\ell_k,j}^{\textbf{k}_k})^2}{\|\bm{W}_{(\ell_k-1):1}^{\textbf{k}_{k}}\bm{X}\|_F^4}
\left( \|(\Delta^{\textbf{k}_{k}})^T\bm{u}_{\ell_k, j}^{\textbf{k}_k}\|^2
+\|\mathcal{E}^T\bm{u}_{\ell_k, j}^{\textbf{k}_k}\|^2
\right)
\\
&=
\|(\tilde{\Delta}^{\textbf{k}_{k}})_{j:}\|^2
+\frac{(\tilde{\eta}_{\ell_k}^{\textbf{k}_{k}}
\lambda_{\ell_k,j}^{\textbf{k}_k})^2}{\|\bm{W}_{(\ell_k-1):1}^{\textbf{k}_{k}}\bm{X}\|_F^4}
\|(\bm{u}_{\ell_k, j}^{\textbf{k}_k})^T\mathcal{E}\|^2
\\
&\qquad
-\frac{\tilde{\eta}_{\ell_k}^{\textbf{k}_{k}}\lambda_{\ell_k,j}^{\textbf{k}_k}}{\|\bm{W}_{(\ell_k-1):1}^{\textbf{k}_{k}}\bm{X}\|_F^4}
(\bm{u}_{\ell_k, j}^{\textbf{k}_k})^T
\Delta^{\textbf{k}_{k}}
\left(
-\tilde{\eta}_{\ell_k}^{\textbf{k}_{k}}\lambda_{\ell_k,j}^{\textbf{k}_k}\bm{I}
+2
\bm{B}^{\textbf{k}_k}_{\ell_k}\right)(\Delta^{\textbf{k}_{k}})^T
\bm{u}_{\ell_k, j}^{\textbf{k}_k}.
\end{align*}
Suppose
$$
0 < \tilde{\eta}_{\ell_k}^{\textbf{k}_{k}} < \frac{2 \lambda_{\min}(\bm{B}_{\ell_k}^{\textbf{k}_k})}{ \lambda_{\max}(\bm{A}_{\ell_k}^{\textbf{k}_k})}
$$
and let
$\bm{M}^{\textbf{k}_k}_{\ell_k,j}:=-\tilde{\eta}_{\ell_k}^{\textbf{k}_{k}}\lambda_{\ell_k,j}^{\textbf{k}_k}\bm{I}
+2\bm{B}^{\textbf{k}_k}_{\ell_k}.
$
Then, since
$\lambda_{\min}(\bm{M}^{\textbf{k}_k}_{\ell_k, j}) = 2\lambda_{\min}(\bm{B}_{\ell_k}^{\textbf{k}_k}) - \tilde{\eta}_{\ell_k}^{\textbf{k}_{k}}\lambda_{\ell_k,j}^{\textbf{k}_k}
> 0$,
$\bm{M}^{\textbf{k}_k}_{\ell_k,j}$ is a positive definite symmetric matrix for all $j$.
Thus,
\begin{align*}
&\mathbf{E}_{i_k}[\|(\tilde{\Delta}^{\textbf{k}_{k+1}})_{j:}\|^2]
\\
&\le
\|(\tilde{\Delta}^{\textbf{k}_{k}})_{j:}\|^2
-\frac{\tilde{\eta}_{\ell_k}^{\textbf{k}_{k}} \lambda_{\ell_k,j}^{\textbf{k}_k}}{\|\bm{W}_{(\ell_k-1):1}^{\textbf{k}_{k}}\bm{X}\|_F^4}
\lambda_{\min}(\bm{M}^{\textbf{k}_k}_{\ell_k, j})
\|(\bm{u}_{\ell_k, j}^{\textbf{k}_k})^T
\Delta^{\textbf{k}_{k}}\|^2
\\
&\qquad+\frac{(\tilde{\eta}_{\ell_k}^{\textbf{k}_{k}}\lambda_{\ell_k,j}^{\textbf{k}_k})^2}{\|\bm{W}_{(\ell_k-1):1}^{\textbf{k}_{k}}\bm{X}\|_F^4}
\|(\bm{u}_{\ell_k, j}^{\textbf{k}_k})^T\mathcal{E}\|^2
\\
&\le \left(1 - \frac{\tilde{\eta}_{\ell_k}^{\textbf{k}_{k}} \lambda_{\ell_k,j}^{\textbf{k}_k}
\lambda_{\min}(\bm{M}^{\textbf{k}_k}_{\ell_k, j})
}{\|\bm{W}_{(\ell_k-1):1}^{\textbf{k}_{k}}\bm{X}\|_F^4}\right)
\|(\bm{u}_{\ell_k, j}^{\textbf{k}_k})^T\Delta^{\textbf{k}_{k}}\|^2
+\frac{(\tilde{\eta}_{\ell_k}^{\textbf{k}_{k}}\lambda_{\ell_k,j}^{\textbf{k}_k})^2}{\|\bm{W}_{(\ell_k-1):1}^{\textbf{k}_{k}}\bm{X}\|_F^4} \|(\bm{u}_{\ell_k, j}^{\textbf{k}_k})^T
\mathcal{E}\|^2,
\end{align*}
and similarly, we have
\begin{align*}
&\mathbf{E}_{i_k}[\|(\tilde{\Delta}^{\textbf{k}_{k+1}})_{j:}\|^2]
\\
&\ge \left(1 - \frac{\tilde{\eta}_{\ell_k}^{\textbf{k}_{k}} \lambda_{\ell_k,j}^{\textbf{k}_k}
\lambda_{\max}(\bm{M}^{\textbf{k}_k}_{\ell_k, j})
}{\|\bm{W}_{(\ell_k-1):1}^{\textbf{k}_{k}}\bm{X}\|_F^4}\right)
\|(\bm{u}_{\ell_k, j}^{\textbf{k}_k})^T\Delta^{\textbf{k}_{k}}\|^2
+\frac{(\tilde{\eta}_{\ell_k}^{\textbf{k}_{k}}\lambda_{\ell_k,j}^{\textbf{k}_k})^2}{\|\bm{W}_{(\ell_k-1):1}^{\textbf{k}_{k}}\bm{X}\|_F^4} \|(\bm{u}_{\ell_k,j}^{\textbf{k}_k})^T
\mathcal{E}\|^2.
\end{align*}
Since
\begin{align*}
-\tilde{\eta}_{\ell_k}^{\textbf{k}_{k}} \lambda_{\ell_k,j}^{\textbf{k}_k}
\lambda_{\min}(\bm{M}^{\textbf{k}_k}_{\ell_k, j})
&=
-\tilde{\eta}_{\ell_k}^{\textbf{k}_{k}} \lambda_{\ell_k,j}^{\textbf{k}_k}
(2\lambda_{\min}(\bm{B}_{\ell_k}^{\textbf{k}_k}) - \tilde{\eta}_{\ell_k}^{\textbf{k}_{k}}\lambda_{\ell_k,j}^{\textbf{k}_k}
)
\\
&=
\left((\tilde{\eta}_{\ell_k}^{\textbf{k}_{k}}\lambda_{\ell_k,j}^{\textbf{k}_k})
-\lambda_{\min}(\bm{B}_{\ell_k}^{\textbf{k}_k})
\right)^2
-\lambda^2_{\min}(\bm{B}_{\ell_k}^{\textbf{k}_k}),
\end{align*}
if we set
$
\tilde{\eta}_{\ell_k}^{\textbf{k}_{k}} = \eta \frac{\lambda_{\min}(\bm{B}_{\ell_k}^{\textbf{k}_k})}{\lambda_{\max}(\bm{A}_{\ell_k}^{\textbf{k}_k})},
$
where $0 < \eta < 2$,
we have
$$
-\tilde{\eta}_{\ell_k}^{\textbf{k}_{k}} \lambda_{\ell_k,j}^{\textbf{k}_k}
\lambda_{\min}(\bm{M}^{\textbf{k}_k}_{\ell_k, j})
\le -\lambda_{\min}^2(\bm{B}^{\textbf{k}_k}_{\ell_k})
\left(1 - (1-\eta/\kappa(\bm{A}^{\textbf{k}_k}_{\ell_k}))^2 \right) := -\gamma^{\textbf{k}_k}_{\ell_k}.
$$
Thus, we obtain
\begin{multline*}
\mathbf{E}_{i_k}[\|(\tilde{\Delta}^{\textbf{k}_{k+1}})_{j:}\|^2]
\\
\le
\left(1 - \frac{\gamma^{\textbf{k}_k}_{\ell_k}}{\|\bm{W}_{(\ell_k-1):1}^{\textbf{k}_{k}}\bm{X}\|_F^4}\right)\|(\tilde{\Delta}^{\textbf{k}_{k}})_{j:}\|^2
+
\frac{\eta^2\lambda^2_{\min}(\bm{B}_{\ell_k}^{\textbf{k}_k})\|(\bm{u}_{\ell_k, j}^{\textbf{k}_k})^T\mathcal{E}\|^2}{\|\bm{W}_{(\ell_k-1):1}^{\textbf{k}_{k}}\bm{X}\|_F^4}.
\end{multline*}
By summing up with respect to $j$, we have
\begin{align*}
\mathbf{E}_{i_k}[\|{\Delta}^{\textbf{k}_{k+1}}\|_F^2]
&\le
\left(1 - \frac{\left(1 - (1-\eta/\kappa^2(\bm{W}_{L:(\ell_k+1)}^{\textbf{k}_{k}}))^2 \right)}{\tilde{\kappa}^4(\bm{W}_{(\ell_k-1):1}^{\textbf{k}_{k}}\bm{X})}\right)\|{\Delta}^{\textbf{k}_{k}}\|_F^2
+
\frac{\eta^2\|\mathcal{E}\|_F^2}{\tilde{\kappa}^4(\bm{W}_{(\ell_k-1):1}^{\textbf{k}_{k}}\bm{X})},
\end{align*}
where $\tilde{\kappa}(\cdot)$ is the scaled condition number defined to be
$
\tilde{\kappa}(\bm{X}) = \frac{\|\bm{X}\|_F}{|\sigma_{\min}(\bm{X})|}.
$
Similarly, since
\begin{align*}
\tilde{\eta}_{\ell_k}^{\textbf{k}_{k}}\lambda_{\ell_k,j}^{\textbf{k}_k}\lambda_{\max}(\bm{M}^{\textbf{k}_k}_{\ell_k, j})
&=
2\lambda_{\max}(\bm{B}_{\ell_k}^{\textbf{k}_k})(\tilde{\eta}_{\ell_k}^{\textbf{k}_{k}}\lambda_{\ell_k,j}^{\textbf{k}_k}) - (\tilde{\eta}_{\ell_k}^{\textbf{k}_{k}}\lambda_{\ell_k,j}^{\textbf{k}_k})^2
\\
&=
\lambda^2_{\max}(\bm{B}_{\ell_k}^{\textbf{k}_k}) -
\left((\tilde{\eta}_{\ell_k}^{\textbf{k}_{k}}\lambda_{\ell_k,j}^{\textbf{k}_k}) - \lambda_{\max}(\bm{B}_{\ell_k}^{\textbf{k}_k})\right)^2
\\
&=
\lambda^2_{\max}(\bm{B}_{\ell_k}^{\textbf{k}_k}) -
\left(\eta \lambda_{\ell_k,j}^{\textbf{k}_k}\frac{\lambda_{\min}(\bm{B}_{\ell_k}^{\textbf{k}_k})}{\lambda_{\max}(\bm{A}_{\ell_k}^{\textbf{k}_k})} - \lambda_{\max}(\bm{B}_{\ell_k}^{\textbf{k}_k})\right)^2
\\
&\le
\lambda^2_{\max}(\bm{B}_{\ell_k}^{\textbf{k}_k})
\left(1 - \left(1 - \frac{\eta}{\kappa(\bm{B}_{\ell_k}^{\textbf{k}_k})}\right)^2 \right),
\end{align*}
we have
\begin{align*}
\mathbf{E}_{i_k}[\|{\Delta}^{\textbf{k}_{k+1}}\|_F^2]
\ge
r\|{\Delta}^{\textbf{k}_{k}}\|^2_F
+
\frac{\eta^2\|\mathcal{E}\|_F^2}{\kappa^4(\bm{W}_{L:(\ell_k+1)}^{\textbf{k}_{k}})\tilde{\kappa}^4(\bm{W}_{(\ell_k-1):1}^{\textbf{k}_{k}}\bm{X})},
\end{align*}
where
$$
r= 1 - \frac{\lambda^2_{\max}(\bm{B}_{\ell_k}^{\textbf{k}_k})
\left(1 - \left(1 - \frac{\eta}{\kappa(\bm{B}_{\ell_k}^{\textbf{k}_k})}\right)^2 \right)}{\|\bm{W}_{(\ell_k-1):1}^{\textbf{k}_{k}}\bm{X}\|_F^4}.
$$
\end{proof}
\section{Conclusion} \label{sec:conclusion}
In this paper, we studied a layer-wise training for deep linear networks using the block coordinate gradient descent (BCGD).
We established a convergence analysis
and found the optimal learning rate which results in
the fastest decrease in the loss for the next iterate.
More importantly, the optimal learning rate can be applied directly in practice, as it requires no prior knowledge.
Also, we identified the effects of depth, width, and initialization
in the training process.
Firstly, we showed that when the orthogonal-like initialization is employed
and the width of the intermediate layers is greater than or equal to both the input and output dimensions,
the width plays no role in gradient-based training.
Secondly, under some assumptions,
we proved that the deeper the network is,
the faster the convergence is guaranteed (when the speed is measured against the number of sweeps).
In an extreme case, the global optimum (within machine accuracy) is achieved after updating each weight matrix only once.
Thirdly, we empirically demonstrated that
adding more layers
could drastically accelerate convergence
compared to a single-layer network,
even when the computational cost is considered.
Lastly, we established a convergence analysis of the block coordinate stochastic gradient descent (BCSGD).
Our analysis indicates that BCSGD cannot reach the global optimum; however, the loss converges to a neighborhood of the global optimum. This can be understood as an implicit regularization, which avoids over-fitting.
Numerical examples were provided to justify our theoretical findings and demonstrate the performance of the layer-wise training by BCGD.
\section{Numerical Examples}
\label{sec:example}
We provide numerical examples to demonstrate the performance of layer-wise training by BCGD and justify our theoretical findings.
We employ three different initialization schemes, described in Section~\ref{subsec:initialization}.
In all examples, the network architectures satisfy the condition $n_\ell \ge \max\{d_\text{in},d_\text{out}\}$ unless otherwise stated. According to Theorem~\ref{thm:role of width}, when either the orth-identity or the balanced initialization is employed, we simply set $n_\ell = \max\{n_0,n_L\}$ for all $1\le \ell < L$.
The approximation error is measured by the normalized distance to the global optimum, i.e.,
$\frac{1}{m}\mathcal{L}(\bm{W}^{\textbf{k}_k}) - \frac{1}{m}\mathcal{L}(\bm{W}^*)$.
When the $L_2$-loss is employed, the error after the $k$-th sweep is $\frac{1}{m}\left[\|\bm{W}^{(k)}\bm{X} - \bm{Y}\|_F^2 - \|\bm{W}^*\bm{X}-\bm{Y}\|_F^2\right]$.
For the convenience of visualization,
errors below $10^{-10}$ are reported as $10^{-10}$.
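As a concrete illustration of this error metric (a sketch of ours, not the paper's code; the problem sizes and all variable names are our own choices), the global optimum $\bm{W}^* = \bm{Y}\bm{X}^\dagger$ and the normalized error can be computed as follows.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, m = 8, 3, 50
X = rng.normal(0.0, 1.0 / np.sqrt(d_in), size=(d_in, m))
Y = rng.uniform(-1.0, 2.0, size=(d_out, m))

# Global optimum of the L2 loss over all linear maps: W* = Y X^+.
W_star = Y @ np.linalg.pinv(X)

def normalized_error(W):
    """(1/m) [ ||W X - Y||_F^2 - ||W* X - Y||_F^2 ], as in the text."""
    return (np.linalg.norm(W @ X - Y) ** 2
            - np.linalg.norm(W_star @ X - Y) ** 2) / m

print(normalized_error(W_star))  # 0.0 at the global optimum
```

For a factorized network, the end-to-end product $\bm{W}_{L:1}$ is passed in place of \texttt{W\_star}.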
We note that the speed of convergence can be measured by either the number of sweeps
or the number of iterations.
Note also that updating each weight matrix once in a deep network
will require more time than doing so in a shallow network.
In what follows, we employ the layer-wise training by BCGD
for deep linear neural networks.
The learning rate is chosen to be (near) optimal according to \eqref{LR-gen-Opt}.
We emphasize that the (near) optimal learning rate of \eqref{LR-gen-Opt}
does not require any prior knowledge and can be completely determined by the loss function, the current weight matrices, and the input data matrix.
This allows us to avoid a cumbersome grid search over the learning rate.
When the $L_2$-loss is employed, the optimal learning rate of \eqref{LR-l2-Optimal}
is identical to the one of \eqref{LR-gen-Opt}.
\subsection{Random Data Experiments} \label{subsec:Random}
Unless otherwise stated, we generate the input data matrix $\bm{X} \in \mathbb{R}^{d_\text{in}\times m}$ whose entries are i.i.d. samples from
a Gaussian distribution $N(0,1/n_0)$
and the output data matrix $\bm{Y} \in \mathbb{R}^{d_\text{out}\times m}$ whose entries are i.i.d. samples from a uniform distribution on $(-1,2)$.
The number of training data is set to $m = 600$.
\subsubsection{Small Condition number} \label{subsec:smallC}
On the left of Figure~\ref{fig:OrhtInit-Cond2}, the approximation errors are plotted with respect to the number of sweeps of the descending BCGD at different depths $L$.
The input and output dimensions are
$d_{\text{in}} = n_0 = 128$ and $d_\text{out} = n_L = 10$, respectively.
The width of the $\ell$-th layer is $n_\ell = 128 = \max\{n_0,n_L\}$
and the orth-identity initialization (Section~\ref{subsec:initialization}) is employed.
We see that faster convergence is obtained as the depth grows.
In the extreme case of depth $L=400$, the global optimum is achieved after updating each weight matrix only once.
These results are expected from Theorem~\ref{thm:convg-l2}.
To fairly compare the effects of depth in the acceleration of convergence, the approximation errors need to be plotted with respect to the number of iterations.
On the right of Figure~\ref{fig:OrhtInit-Cond2},
the errors are shown with respect to the number of iterations.
We now see that training a depth 1 network multiple times results in the fastest decrease in the loss. This implies that, for faster convergence, it is better in this case to train a depth 1 network $L$ times
than to train a depth $L$ network once.
We remark that the condition number of the input data matrix was $2.6614$.
In this case, we do not have any advantages of using deep networks over a depth 1 network.
\begin{figure}[htbp]
\centerline{
\includegraphics[height=4.8cm]{figures/CondX2p6614_din128_dout10_W128_Ntrain600_X_Normal_Y_Uni_LossERRvsSweeps_OrthID_desBCGD_OptLR.pdf}
\includegraphics[height=4.8cm]{figures/CondX2p6614_din128_dout10_W128_Ntrain600_X_Normal_Y_Uni_LossERRvsITERs_OrthID_desBCGD_OptLR_marker.pdf}}
\caption{The approximation errors with respect to the number of (left) sweeps and (right) iterations of the descending BCGD with the optimal learning rate \eqref{LR-l2-Optimal} at different depths
$L=1, 10, 50, 100, 200, 400$.
The width is set to $\max\{n_0,n_L\} = 128$
and the orth-identity initialization is employed.
When the depth is 400, the global optimum is achieved
after updating each weight matrix only once.
However, when the errors are compared against the number of iterations,
updating a single layer $L$ times results in
faster loss decay
than updating an $L$-layer network once.
}
\label{fig:OrhtInit-Cond2}
\end{figure}
\subsubsection{Big Condition number} \label{subsec:bigC}
We now consider an input data matrix $\bm{X}$ with a rather large condition number.
To do this, we first generate $\bm{X}$ as above
and conduct the singular value decomposition.
We then assign randomly generated numbers from $10^{-5} + \mathcal{U}(0,1)$
to the singular values.
In our experiment, the condition number of $\bm{X}$ was 236.
The output data matrix $\bm{Y}$ is generated in the same way as before.
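The construction just described can be sketched as follows (our own code; the seed and sizes are arbitrary, so the resulting condition number will differ from the 236 reported above).

```python
import numpy as np

rng = np.random.default_rng(2)
d_in, m = 128, 600
X = rng.normal(0.0, 1.0 / np.sqrt(d_in), size=(d_in, m))

# Overwrite the singular values of X with draws from 1e-5 + U(0, 1),
# keeping the singular vectors, to obtain an ill-conditioned input matrix.
U, _, Vt = np.linalg.svd(X, full_matrices=False)
s_new = np.sort(1e-5 + rng.uniform(0.0, 1.0, size=min(d_in, m)))[::-1]
X_ill = U @ np.diag(s_new) @ Vt

cond = s_new[0] / s_new[-1]      # condition number of the new matrix
```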
In Figure~\ref{fig:OrhtInit-Cond200},
the approximation errors are plotted with respect to the number of (left) sweeps and (right) iterations of the descending BCGD at different depths $L=1,3,5,7,9,11$.
When the speed of convergence is measured against the number of sweeps, we see that the deeper the network is, the faster it converges.
When the amount of computation is considered, unlike the case where $\bm{X}$ has a good condition number,
we now see that the errors by deep linear networks
decay drastically faster than those by a shallow network of depth 1.
This demonstrates that over-parameterization by the depth
can indeed accelerate convergence, even when the computational cost
is considered.
We note that from Theorem~\ref{thm:role of width}, the width plays no role in gradient-based training, as the width of intermediate layers is $\max\{d_\text{in},d_\text{out}\}$.
Furthermore, the optimal learning rate is employed and adding more layers does not increase any representational power.
Therefore, this acceleration is solely contributed by the depth
and this clearly demonstrates the benefit of using deep networks.
We also observe that
the error decrease per iteration does not grow proportionally to the depth. In this case, depth 5 or 7 performs best among the tested depths.
\begin{figure}[htbp]
\centerline{
\includegraphics[height=4.8cm]{figures/CondX200_din128_dout10_DoddW128_Ntrain600_X_Normal_Y_Uni_LossERRvsSweeps_OrthID_desBCGD_OptLR_marker.pdf}
\includegraphics[height=4.8cm]{figures/CondX200_din128_dout10_DoddW128_Ntrain600_X_Normal_Y_Uni_LossERRvsITERs_OrthID_desBCGD_OptLR_marker.pdf}}
\caption{The approximation errors with respect to the number of (left) sweeps and (right) iterations of the descending BCGD with the optimal learning rate \eqref{LR-l2-Optimal} at different depths.
The width is set to $\max\{n_0,n_L\} = 128$
and the orth-identity initialization is employed.
The condition number of the input data matrix is $236$.
In terms of the number of sweeps,
the deeper the network is, the faster the convergence.
In terms of the number of iterations (i.e., when the computational cost is considered),
unlike Figure~\ref{fig:OrhtInit-Cond2} where $\text{cond}(\bm{X}) \approx 2$,
the use of deep networks
drastically accelerates the convergence of the loss
compared to a depth-1 network.
}
\label{fig:OrhtInit-Cond200}
\end{figure}
\subsubsection{Comparison with GD}
Next, we compare the performance of
BCGD and the standard gradient descent (GD)
on the same two tasks of Sections~\ref{subsec:smallC}
and~\ref{subsec:bigC}.
By trial and error, we choose constant learning rates for GD that lead to the fastest convergence.
We also tried the learning rate
$\eta = \frac{n_L}{3L \|X\|^2}$ from \cite{Du2019width};
however, we observed that it makes GD diverge within a few iterations.
Despite the fact that GD updates all the weight matrices
in a single iteration, while BCGD updates only a single matrix, we compare the performance
with respect to the number of iterations
to emphasize the performance of BCGD.
Figure~\ref{fig:OrhtInit-Cond2-GD}
shows the approximation errors by GD and BCGD.
The results for the task with a small (large) condition number are presented on the left (right).
In both cases, we observe that GD converges linearly
when the learning rate is chosen properly.
However, choosing an appropriate learning rate
requires time-consuming fine tuning,
and GD is highly sensitive
to the learning rate.
For example, on the left of Figure~\ref{fig:OrhtInit-Cond2-GD},
GD with a learning rate of $10^{-5}$
converges linearly,
whereas GD with a learning rate of $2\times 10^{-5}$
produces highly oscillatory errors.
Compared to the results of BCGD,
and considering that BCGD updates only a single matrix per iteration,
it is clear that BCGD converges significantly faster than GD, especially when the model matrix has a large condition number.
Furthermore, BCGD does not require any effort
to find a proper learning rate.
This clearly demonstrates the superior performance of BCGD
over GD in these cases.
\begin{figure}[htbp]
\centerline{
\includegraphics[height=4.8cm]{figures/CondX2p7284_din128_dout10_W128_Ntrain600_X_Normal_Y_Uni_L400_LossERRvsITERs_OrthID_GD}
\includegraphics[height=4.8cm]{figures/CondX236_din128_dout10_W128_Ntrain600_L5_LossERRvsITERs_OrthID_GD}
}
\caption{The approximation errors of BCGD and GD with respect to the number of iterations.
Learning rates for GD are found by trial and error.
(Left)
The model matrix has a condition number of $2.6$;
the depth and width of the DLN are 400 and 128, respectively.
(Right) The model matrix has a condition number of $236$;
the depth and width of the DLN are 5 and 128, respectively.
In all cases, the orth-identity initialization is employed.
We note that
BCGD updates only a single matrix per iteration,
while GD updates all the weight matrices per iteration.
}
\label{fig:OrhtInit-Cond2-GD}
\end{figure}
\subsubsection{Effect of Width}
From now on, the speed of convergence is measured only against the number of iterations.
Next, we show the ineffectiveness of training a network that has a layer whose width is less than $\max\{d_\text{in},d_\text{out}\}$.
Figure~\ref{fig:ineffectiveness} shows the approximation errors
with respect to the number of iterations of the descending BCGD.
The input and output dimensions are
$d_{\text{in}} = 128$ and $d_\text{out} = 20$, respectively.
Two deep linear networks of depth $L=100$ are compared.
One has the architecture (Arch 1) of
$n_\ell = 20$ for all $1 \le \ell < L$.
The other has the architecture (Arch 2) of $n_\ell = 128$ for all $1 \le \ell < L$, but $n_{50}=20$.
Note that at the $k$-th iteration with $k \equiv L-\ell+1 \pmod{L}$,
the $(L-\ell+1)$-th layer weight matrix is the only matrix updated.
For the network of Arch 1, we see that the errors decrease mostly
after updating the first-layer weight matrix.
The errors before and after updating the first layer
are marked with circles ($\circ$).
For the network of Arch 2, we see that the errors decrease mostly
while updating the 50th through the 1st layer weight matrices.
The errors before and after updating the 50th and the 1st layer matrices
are marked with asterisks ($\ast$).
This is expected from Theorem~\ref{thm:convg-l2},
as
either $\sigma_{\min}(\bm{W}_{L:(\mathfrak{i}(\ell)+1)}^{\textbf{k}_{(s,\ell-1)}})$
or
$\sigma_{\min}(\bm{W}_{(\mathfrak{i}(\ell)-1):1}^{\textbf{k}_{(s,\ell-1)}}\bm{X})$
is zero,
due to the network architecture.
Precisely, Arch 1 results in $\sigma_{\min}(\bm{W}_{(L-\ell):1}^{\textbf{k}_{(s,\ell-1)}}\bm{X}) = 0$
for all $s$ and $1 \le \ell < L$,
and Arch 2 results in
$\sigma_{\min}(\bm{W}_{(L-\ell):1}^{\textbf{k}_{(s,\ell-1)}}\bm{X}) = 0$
for all $s$ and $1 \le \ell \le 50$.
For reference, the results by the network architecture (Arch 3) of $n_\ell = 128$ for all $\ell$ are shown as the dotted line.
Among the three architectures, the network of Arch 3 converges fastest.
This demonstrates the ineffectiveness of training a deep linear network that has a layer whose width is less than $\max\{n_0,n_L\}$.
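The zero singular values behind this behavior follow from a simple rank bound, which can be checked directly (a small numpy sketch, not part of the experiments; the dimensions mirror the bottleneck of Arch 2):

```python
import numpy as np

rng = np.random.default_rng(0)
wide, narrow = 128, 20

# Any product of weight matrices that passes through a width-20 layer
# has rank at most 20, even if the product itself is 128 x 128.
A = rng.standard_normal((wide, narrow))   # layer above the bottleneck
B = rng.standard_normal((narrow, wide))   # layer below the bottleneck
P = A @ B

sv = np.linalg.svd(P, compute_uv=False)   # sorted in descending order
numerical_rank = int(np.sum(sv > 1e-8 * sv[0]))
# numerical_rank <= narrow, so the 128th singular value of P is zero.
```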
\begin{figure}[htbp]
\centerline{
\includegraphics[height=4.8cm]{figures/din128_dout20_D100_Ntrain600_X_Normal_Y_Uni_LossERRvsSweeps_OrthID_desBCGD_OptLR.pdf}}
\caption{
The approximation errors with respect to the number of iterations of the descending BCGD
by three different network architectures.
The results of the network of Arch 1 $(n_0=128, n_j=20)$
are shown as the dashed line,
those of the network of Arch 2
$(n_j=128, n_{50}=n_L=20)$
as the solid line,
and those of the network of Arch 3
$(n_j=128, n_L=20)$
as the dotted line.
This demonstrates the ineffectiveness of training a network that has a layer whose width is less than $\max\{n_0,n_L\}$.
}
\label{fig:ineffectiveness}
\end{figure}
\subsubsection{Ascending versus Descending}
We now compare the performance of layer-wise training by BCGD
with two update orderings (top to bottom and bottom to top).
Figure~\ref{fig:asc_vs_des}
shows the approximation errors with respect to the number of iterations of both the ascending and descending BCGD
at three different initialization schemes (Section~\ref{subsec:initialization}).
We employ the DLNs of depth $L=50$
and set the width of the $\ell$-th layer to $n_\ell = \max\{n_0,n_L\}$.
On the left, the input and output dimensions are
$d_{\text{in}} = 50$ and $d_\text{out} = 300$, respectively.
It can be seen that
for the orth-identity initialization,
the errors by the ascending BCGD
decay faster than those by the descending BCGD.
For the balanced initialization,
the opposite is observed.
For the random initialization, the errors by both the ascending and descending orderings behave similarly.
We see that the ascending BCGD with the orth-identity initialization
results in the fastest convergence overall.
On the right, the input and output dimensions are
$d_{\text{in}} = 300$ and $d_\text{out} = 50$, respectively.
It can be seen that
for the balanced and the random initialization,
the errors by the ascending BCGD
decay faster than those by the descending BCGD.
For the orth-identity initialization,
the opposite is observed.
In this case, the descending BCGD with the orth-identity initialization
results in the fastest convergence overall.
In all cases, we observe that the orth-identity initialization outperforms the other initialization schemes, regardless of the update ordering.
Also, we find that when the orth-identity initialization is employed,
the ascending BCGD performs better than the descending BCGD if the output dimension is larger than the input dimension,
and vice versa.
\begin{figure}[htbp]
\centerline{
\includegraphics[height=4.8cm]{figures/CondX1p7290_din50_dout300_Depth50W300_Ntrain600_X_Normal_Y_Uni_ERRvsITERs_InitALL_BCGD_OptLR_marker.pdf}
\includegraphics[height=4.8cm]{figures/CondX5p6119_din300_dout50_Depth50W300_Ntrain600_X_Normal_Y_Uni_ERRvsITERs_InitALL_BCGD_OptLR_marker.pdf}}
\caption{The approximation errors with respect to the number of iterations of both the ascending and descending BCGD
by three different initialization schemes. The depth is $L=50$
and the training is done over $600$ data points.
(Left) $n_0 = 50, n_j = 300$ for $0 < j \le L$.
(Right) $n_j = 300, n_L = 50$ for $0\le j < L$.
When $n_0=50, n_j=300$, the ascending BCGD with the orth-identity initialization results in the fastest convergence.
When $n_j=300, n_L=50$, the descending BCGD with the orth-identity initialization results in the fastest convergence.
}
\label{fig:asc_vs_des}
\end{figure}
\subsection{Real Data Experiments}
We employ
the ``Gas Sensor Array Drift at Different Concentrations'' dataset from the UCI Machine Learning Repository
\cite{Vergara2012Chemical,Rodriguez2014Calibration}.
Specifically, we use the dataset's
``Ethanol'' problem, a scalar regression task with 2565 examples, each comprising 128 features
(one of the largest numeric regression tasks in the repository).
The input and output data sets are normalized to have zero mean and unit variance.
After the normalization, the condition number of the input data matrix is 70,980.
We note that this is the same data set used in \cite{Arora2018optAccelerationDLN}.
The width of intermediate layers is set to $\max\{d_\text{in},d_\text{out}\}$
and the identity initialization (Section~\ref{subsec:initialization})
is employed.
On the left of Figure~\ref{fig:UCI-data}, we show the errors by the descending BCGD with respect to the number of iterations at five different depths $L=1, 2, 3, 4, 5$.
We use the optimal learning rate \eqref{LR-l2-Optimal},
which does not require any prior knowledge.
We clearly see that
the over-parameterization by depth significantly accelerates convergence.
We remark that \cite{Arora2018optAccelerationDLN} considered the same problem with a different optimization method, choosing the learning rate by a grid search;
there, similar implicit acceleration was demonstrated only for the
$L_4$-loss, not the $L_2$-loss.
In our experiment, by exploiting layer-wise training and the optimal learning rate,
we demonstrate implicit acceleration for the $L_2$-loss.
On the right of Figure~\ref{fig:UCI-data}, we show the results for the $L_4$-loss, i.e.,
\begin{align*}
\frac{1}{m}\left[\|\bm{W}^{(k)}\bm{X} - \bm{Y}\|_{4,4}^4 - \|\bm{W}^*\bm{X}-\bm{Y}\|_{4,4}^4\right].
\end{align*}
The near-optimal learning rate \eqref{LR-lp-Optimal} is employed.
We observe that updating a single layer multiple times results in faster error convergence than updating multiple layers once each.
In this case, there is no advantage to using deep networks.
For reference, we also plot, as the dashed line, the best error reported in \cite{Arora2018optAccelerationDLN} after 1,000,000 iterations.
Unlike the conclusion of \cite{Arora2018optAccelerationDLN},
we find that the depth leads to acceleration for the $L_2$-loss,
but not for the $L_4$-loss.
\begin{figure}[htbp]
\centerline{
\includegraphics[height=4.8cm]{figures/UCIdata_LallW128_LossERR_pinv_vsITERs_ID_desBCGD_OptLR_marker.pdf}
\includegraphics[height=4.8cm]{figures/UCIdata_L4Loss_LallW128_LossERR_pinv_vsITERs_ID_desBCGD_nearOptLR_marker.pdf}
}
\caption{The distances to the global optimum by (left) $L_2$-loss and (right) $L_4$-loss with respect to the number of iterations.
The network is trained over the UCI Machine Learning Repository's dataset of 2565 examples.
The condition number of $\bm{X}$ is 70,980.
The identity initialization is employed.
The width is set to $n_\ell = 128$.
At all depths,
the errors of deep linear networks
decay faster than those of the depth-1 network.
}
\label{fig:UCI-data}
\end{figure}
We now train DLNs on the MNIST handwritten digit classification dataset.
For an input image, its corresponding output vector contains a 1 in the index for the correct class and zeros elsewhere.
The input and output dimensions are $d_\text{in}=784$ and $d_\text{out}=10$, respectively.
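The one-hot output matrix can be formed as follows (a minimal sketch with made-up labels):

```python
import numpy as np

labels = np.array([3, 0, 9, 1])           # hypothetical digit labels
d_out = 10

# Output matrix Y of shape (d_out, m): column i holds a 1 at the row
# indexed by the correct class of example i, and zeros elsewhere.
Y = np.zeros((d_out, labels.size))
Y[labels, np.arange(labels.size)] = 1.0
```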
In order to strictly compare the effect of depth, we employ the identity initialization to completely remove the randomness from the initialization.
Also, we set the width to $784 = \max\{d_\text{in},d_\text{out}\}$ according to Theorem~\ref{thm:role of width}.
The networks are trained over the entire MNIST training dataset of 60,000 samples.
The input data matrix $\bm{X}$ is not full rank.
Figure~\ref{fig:mnist} shows the distances to the global optimum by $L_2$-loss with respect to the number of iterations of the descending BCGD at ten different depths $L=1,\cdots,10$.
Thus, the speed of convergence is effectively measured against the amount of computation.
We observe accelerated convergence for networks of even depth, but not of odd depth.
The results of DLNs of odd depth are so similar that the lines overlap.
In this case, the depth-2 network performs best.
We suspect a connection between the parity of the depth
and the acceleration in convergence; we defer further investigation to future work.
\begin{figure}[htbp]
\centerline{
\includegraphics[height=4.8cm]{figures/MNIST_L1t4W784_LossERR_pinv_vsITERs_ID_desBCGD_OptLR.pdf}
\includegraphics[height=4.8cm]{figures/MNIST_L1t8W784_LossERR_pinv_vsITERs_ID_desBCGD_OptLR.pdf}
}
\caption{(to be viewed in color) The distances to the global optimum by $L_2$-loss with respect to the number of iterations of the descending BCGD.
The identity initialization is employed.
The network is trained over the MNIST training dataset of 60,000 samples.
The width of intermediate layers is $n_\ell = 784$.
The results of DLNs of odd depth are so similar that the lines overlap.
Accelerated convergence is observed for DLNs of even depth.
}
\label{fig:mnist}
\end{figure}
Lastly, we compare the performance of BCGD
to GD on the same real data sets.
Again, the learning rates for GD are chosen
by trial and error.
Figure~\ref{fig:realdata-GD}
shows the error trajectories
for the same learning tasks.
On the left and right, the results for the UCI and the MNIST datasets are presented, respectively.
In all cases, we see that
GD with a well-chosen learning rate converges linearly,
yet BCGD still converges faster.
We emphasize that GD updates all the weight matrices per iteration,
while BCGD updates only a single matrix per iteration.
\begin{figure}[htbp]
\centerline{
\includegraphics[height=4.8cm]{figures/UCI_conv_Depth4_Width128_Id_GD.pdf}
\includegraphics[height=4.8cm]{figures/MNIST_L2W784_LossERR_pinv_vsITERs_ID_GD.pdf}
}
\caption{(to be viewed in color) The distances to the global optimum by $L_2$-loss with respect to the number of iterations.
The identity initialization is employed.
(Left) The UCI dataset with $L=4$ and $n_\ell = 128$.
(Right) The MNIST dataset with $L=2$ and $n_\ell = 784$.
}
\label{fig:realdata-GD}
\end{figure}
\section{Introduction}
Deep learning has drawn a lot of attention from both academia and industry
due to its tremendous empirical success in various applications \cite{krizhevsky2012imagenet,hinton2012deep,silver2016mastering,wu2016google}.
One of the key components in the success of deep learning is the intriguing ability of gradient-based optimization methods.
Despite the non-convex and non-smooth nature of the loss function,
it somehow finds a local (or global) minimum that performs well in practice.
Mathematical analysis of this phenomenon has been undertaken:
several theoretical works show that,
under the assumption of over-parameterization,
more precisely, very wide networks,
the (stochastic) gradient descent algorithm finds a global minimum
\cite{allen2018convergence,du2018gradient-DNN,du2018gradient-shallow,zou2018stochastic,oymak2019towards}.
This theoretical progress has its own importance; however,
it does not directly help practitioners obtain better training results.
This is mainly because there are still many parameters to be determined a priori: the learning rate, the depth of the network, the width of the intermediate layers, and the optimization algorithm with its own internal parameters, to name just a few.
The learning rates from existing theoretical works are not applicable in practice. For example, when a fully-connected ReLU network of depth 10 is trained over 1,000 training data points,
the theoretically guaranteed learning rate is
either
$\eta \approx \frac{1}{1000^2\cdot 2^{10}}\approx 10^{-9}$ \cite{du2018gradient-DNN}
or
$\eta \approx \frac{1}{1000^4\cdot 10^{2}} \approx 10^{-14}$ \cite{allen2018convergence}.
Thus,
practitioners typically choose these aforementioned parameters by either a grid search
or trial and error.
Despite their expressive power, deep neural networks are not easy to train.
It is widely known that the deeper the network is, the harder it is to train \cite{Srivastava2015training}.
The empirical success of deep learning relies heavily on
numerous engineering tricks used in the training process.
These include, but are not limited to, dropout \cite{Srivastava2014Dropout},
dropconnect \cite{Wan2013DropConnect}, batch normalization \cite{Ioffe2015BN}, weight normalization \cite{Salimans2016WN},
pre-training \cite{Dahl2011PreTrain}, and data augmentation \cite{Cirecsan2012DataAugmentation}.
Although these techniques have been shown to be effective in many machine learning applications, they lack rigorous justification,
which hinders a thorough mathematical understanding of the training process of deep learning.
Layer-wise training is an alternative to standard end-to-end back-propagation, especially for training deep neural networks.
The underlying principle
is to train only a few layers (or a single layer) at a time,
rather than all layers simultaneously.
This approach is not new and has been proposed in several different contexts.
One stream of layer-wise training
is adaptive training: at each stage, only a few layers (or a single layer) are trained;
once training is done, new layers are added,
and by fixing all previously trained layers for the rest of the training,
only the newly added layers are trained;
this procedure is repeated.
Works in this direction include \cite{Fahlman1990cascade,Lengelle1996training,Kulkarni2017LayerWiseTrain,Belilovsky2018LayerWiseTrain,Marquez2018DeepCascade,Malach2018PCADL,Mosca2017Boosting,Huang2017Boosting}.
Another stream of layer-wise training is the block coordinate descent (BCD) method \cite{Zhang2017BCDtwo,Zeng2018BCDanalysis,Carreira2014BCDtwo,Askari2018BCDtwo,Gu2018BCDtwo,Lau2018BCDthree,Taylor2016BCTthree}.
BCD is a Gauss--Seidel type of gradient-free method
that trains each layer at a time, in a sequential order,
by freezing all other layers;
thus, all layers are updated once in every sweep of training.
This paper is concerned with layer-wise training in this line of approach.
In \cite{Hinton2006ReduceDimNN,Bengio2007LayerWiseTrain}, layer-wise training is employed as a pre-training strategy.
A deep linear network (DLN) is a neural network that uses linear activation functions.
Although DLNs are not a popular choice in practice, they are an active research subject,
as they form a class of simplified models for understanding deep neural networks with non-linear activation functions \cite{Saxe2013orthInit,Hardt2016identityMatters,Arora2018optAccelerationDLN,Arora2018convergenceDLN,Bartlett2019gdIdentityDLN}.
A DLN has trivial representation power (a product of weight matrices);
however,
its training process is not trivial at all.
The loss surface of DLNs has been studied in \cite{Lu2017depth,Kawaguchi2016deep,Laurent2018deep},
where it is shown that although the loss surface is not convex, there are no spurious local minima.
The works \cite{Arora2018convergenceDLN, Du2019width} studied the convergence of gradient descent for DLNs under various settings.
\cite{Arora2018convergenceDLN} showed that, under some assumptions, gradient descent finds a global optimum.
The learning rate from that analysis, however, is not applicable in practice, as it requires prior knowledge of the global minimizer:
the theoretically guaranteed learning rate of \cite{Arora2018convergenceDLN} must satisfy
$\eta \le \frac{c^{(4L-2)/L}}{6144L^3 \|\bm{W}^*\|_F^{(6L-4)/L}}$,
where $\bm{W}^*$ is the global minimizer, $c$ is a constant related to the initial error, and $L$ is the depth.
\cite{Du2019width} showed that, under the assumptions of Gaussian random initialization and severely wide networks, gradient descent finds a global optimum.
The learning rate from the analysis of \cite{Du2019width} does not require any prior knowledge of $\bm{W}^*$ and can be applied in practice.
However, we found that it leads to divergence of GD in all of our tests.
Moreover, the theoretically guaranteed width of \cite{Du2019width} is too large to be used:
for example, if the condition number of the (full-rank) input data matrix is $100$,
the width should be at least $(100^2)^3=10^{12}$.
In this paper, we study a layer-wise training for DLNs
using a block coordinate gradient descent (BCGD) \cite{Tseng2009BCGD-sepa,Tseng2009BCGD-linC}.
Similar to BCD, the BCGD trains each layer at a time
in a sequential order
by freezing all other layers at their last updated values.
However, a key difference is the use of gradient descent
in every update.
\textit{We aim to identify the effects of depth, width, and initialization in the training process}
through the lens of DLNs.
We first establish a general convergence analysis and derive the optimal learning rate, which leads to the fastest decrease in the loss at the next iterate.
More importantly, the optimal learning rate can be applied directly in practice:
neither trial and error nor a grid search for tuning parameters is required.
To illustrate the performance of BCGD with the optimal learning rate,
we consider a learning task of fitting 600 data (see Section~\ref{subsec:Random} for details)
and plot the training loss trajectories by BCGD and GD
in Figure~\ref{fig:motivation}.
Despite the fact that
BCGD updates only a single matrix per iteration,
while GD updates all the weight matrices per iteration,
we clearly see that BCGD converges drastically faster than GD.
The learning rates for GD are found by trial-and-error.
\begin{figure}[htbp]
\centerline{
\includegraphics[height=6cm]{figures/CondX2p7284_din128_dout10_W128_Ntrain600_X_Normal_Y_Uni_L100_LossERRvsITERs_OrthID_GD.pdf}}
\caption{The $L_2$-distance to the optimum with respect to the number of iterations.
The input dimension is 128 and
the output dimension is 10.
A 100-layer linear network of width 128
with the orthogonal initialization is employed.
BCGD uses the optimal learning rate \eqref{LR-l2-Optimal}
and GD uses learning rates from trial-and-error.
}
\label{fig:motivation}
\end{figure}
Next, we show that when the orthogonal-like initialization is employed,
as long as the width of intermediate layers is greater than or equal to both the input and output dimensions, \textit{the width plays no role in any gradient-based training}.
Also, we rigorously prove that when (i) the orthogonal-like initialization is used and (ii) the initial loss is sufficiently small,
then, whenever the depth is sufficiently large,
convergence to the global optimum (within machine accuracy) is guaranteed after updating
each weight matrix only once.
Furthermore, we found that
a well-chosen depth can result in a significant acceleration in convergence
compared to a single layer,
even when the computational cost is considered.
This clearly demonstrates the benefit of using deep networks (over-parameterization via depth).
Similar behavior was empirically reported in \cite{Arora2018optAccelerationDLN} as implicit acceleration.
Lastly, we establish a convergence analysis of the block coordinate stochastic gradient descent (BCSGD).
Our analysis indicates that BCSGD cannot reach the global optimum exactly; however, the converged loss stays close to the global optimum. This can be understood as an implicit regularization, which avoids over-fitting.
The rest of the paper is organized as follows.
In Section~\ref{sec:setup}, we present
the mathematical setup and introduce the block coordinate (stochastic) gradient descent.
We then present a general convergence analysis and the optimal learning rate in Section~\ref{sec:analysis}.
In Section~\ref{sec:example}, several numerical examples using both synthetic and real data sets are presented to demonstrate the effectiveness of the layer-wise training by BCGD and justify our theoretical findings.
\section*{Acknowledgments}
The author would like to thank Dr. Paul Dupuis for helpful discussions in the early stages of this work,
Dr. Mark Ainsworth for his helpful comments and suggestions on both analysis and examples,
and
Dr. Nadav Cohen for sharing code for numerical experiments.
\section{Setup and Preliminaries} \label{sec:setup}
Let $\mathcal{N}^L:\mathbb{R}^{d_{\text{in}}} \to \mathbb{R}^{d_{\text{out}}}$ be a feed-forward linear neural network with $L$ layers and having $n_\ell$ neurons in the $\ell$-th layer.
We denote the weight matrix in the $\ell$-th layer
by $\bm{W}_\ell \in \mathbb{R}^{n_\ell \times n_{\ell-1}}$.
Here $n_0 = d_{\text{in}}$ and $n_L = d_{\text{out}}$.
Let $\bm{\theta} = \{\bm{W}_\ell\}_{\ell=1}^L$
be the set of all weight matrices.
Then the $L$-layer linear neural network can be written as
\begin{align*}
\mathcal{N}^L(\textbf{x};\bm{\theta}) = \bm{W}_{L}\bm{W}_{L-1}\cdots \bm{W}_{1}\textbf{x}.
\end{align*}
Given a set of training data
$\mathcal{T}=\{(\textbf{x}^i, \textbf{y}^i)\}_{i=1}^m$,
the goal is to learn the parameters $\{\bm{W}_j\}_{j=1}^L$
which minimize the loss function $\mathcal{L}(\bm{\theta})$ defined by
\begin{equation} \label{def:loss}
\mathcal{L}(\bm{\theta}) =
\sum_{i=1}^m
\mathcal{L}_i(\bm{\theta}), \qquad
\mathcal{L}_i(\bm{\theta}) =
\sum_{j=1}^{d_{\text{out}}}
\ell(\mathcal{N}^L_j(\textbf{x}^i; \bm{\theta}); \textbf{y}^i_j).
\end{equation}
Here $\ell(a;b)$ is a metric which measures the discrepancy between the prediction and the output data.
For example, the choice of $\ell(a;b) = (a-b)^p/p$ results in
the standard $L_p$-loss function.
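As an illustration, the loss \eqref{def:loss} for a linear network with $\ell(a;b)=(a-b)^p/p$ and even $p$ can be sketched as follows (the bottom-to-top ordering of the weight list is a convention of this sketch):

```python
import numpy as np

def lp_loss(weights, X, Y, p=2):
    """L_p-loss of a linear network: sum over examples and output
    coordinates of (prediction - target)^p / p, for even p.
    `weights` lists W_1, ..., W_L from bottom to top."""
    W = np.eye(X.shape[0])
    for Wl in weights:            # accumulate W_{L:1} = W_L ... W_1
        W = Wl @ W
    return float(np.sum((W @ X - Y) ** p) / p)
```

For $p=2$ this reduces to $\frac{1}{2}\|\bm{W}_{L:1}\bm{X}-\bm{Y}\|_F^2$.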
For a matrix $\bm{A} \in \mathbb{R}^{m\times n}$, the spectral norm, the condition number
and
the scaled condition number
are defined to be
$$
\|\bm{A}\| = \max_{\|x\|_2=1} \|\bm{A}\textbf{x}\|_2, \qquad
\kappa_r(\bm{A})=\frac{\sigma_{\max}(\bm{A})}{\sigma_{r}(\bm{A})},
\qquad
\tilde{\kappa}(\bm{A}) = \frac{\|\bm{A}\|_F}{\sigma_{\min}(\bm{A})},
$$
respectively.
Here $\|\cdot\|_2$ is the Euclidean norm,
$\|\cdot\|_F$ is the Frobenius norm,
$\sigma_{\max}(\cdot)$ is the largest singular value,
and
$\sigma_{r}(\cdot)$ is
the $r$-th largest singular value.
Also,
we denote
the $\min\{m,n\}$-th largest singular value
by $\sigma_{\min}(\cdot)$.
When $r = \min\{m,n\}$, we simply write the condition number
as $\kappa(\cdot)$.
\subsection{Global minimum of $L_2$ loss}
Since this paper is mainly concerned with the standard $L_2$-loss, we discuss here its global minimum, which depends on the network architecture being used.
Let $\bm{X} =[\textbf{x}^1,\cdots,\textbf{x}^m] \in \mathbb{R}^{n_0 \times m}$ be the input data matrix
and $\bm{Y} = [\textbf{y}^1, \cdots, \textbf{y}^m] \in \mathbb{R}^{n_L \times m}$ be the output data matrix.
Then, the problem of minimizing the $L_2$-loss function is
\begin{equation} \label{def:depthL-prob}
\min_{\bm{W}_j\in \mathbb{R}^{n_j\times n_{j-1}}, 1\le j\le L} \|\bm{W}_{L:1}\bm{X} - \bm{Y}\|_F^2, \quad \text{where} \quad
\bm{W}_{L:1}:=\bm{W}_L\cdots \bm{W}_1.
\end{equation}
This problem is closely related to
\begin{equation} \label{def:depth1-prob}
\min_{\bm{W}\in \mathbb{R}^{n_L\times n_0}} \|\bm{W}\bm{X} - \bm{Y}\|_F^2, \quad \text{subject to} \quad \text{rank}(\bm{W}) \le \min \{n_0,\cdots, n_L\}.
\end{equation}
Since the rank of $\bm{W}_{L:1}$ is at most $n^*:=\min \{n_0,\cdots, n_L\}$,
the minimized losses from \eqref{def:depthL-prob} and \eqref{def:depth1-prob} should be the same.
Thus, if $\{\bm{W}_\ell^{*}\}_{\ell=1}^L$ is a solution of \eqref{def:depthL-prob},
$\bm{W}^*_{L:1}$ should be a global minimizer of \eqref{def:depth1-prob}.
Therefore, a global minimizer of \eqref{def:depthL-prob} and its corresponding minimized loss can be understood through \eqref{def:depth1-prob}.
In \ref{app:lsq-sol}, we briefly discuss the solutions of \eqref{def:depth1-prob}.
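For intuition, when the rank constraint in \eqref{def:depth1-prob} is inactive (e.g., all intermediate widths are at least $\min\{n_0,n_L\}$), the problem is ordinary least squares, and $\bm{Y}\bm{X}^{+}$, with $\bm{X}^{+}$ the Moore--Penrose pseudoinverse, is a global minimizer. A quick numerical check of the first-order optimality condition:

```python
import numpy as np

rng = np.random.default_rng(0)
n0, nL, m = 8, 3, 50

X = rng.standard_normal((n0, m))   # full row rank almost surely
Y = rng.standard_normal((nL, m))

# Unconstrained least-squares minimizer of ||W X - Y||_F^2.
W_star = Y @ np.linalg.pinv(X)

# First-order optimality: the residual is orthogonal to the rows
# of X, i.e. (W* X - Y) X^T = 0.
grad = (W_star @ X - Y) @ X.T
```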
\subsection{Block Coordinate Gradient Descent} \label{subsec:bcgd}
In this paper, we consider
the block coordinate gradient descent (BCGD).
The method commences with an initialization
$\bm{\theta}^{\textbf{k}_{0}} = \{\bm{W}_{\ell}^{(0)}\}_{\ell=1}^L$.
Let $\textbf{k} = (\text{k}_1,\cdots,\text{k}_L)$ be a multi-index,
where each $\text{k}_\ell$ indicates the number of updates of the $\ell$-th layer weight matrix $\bm{W}_\ell$.
After the $k$-th iteration,
we obtain a multi-index $\textbf{k}^{(k)} = (\text{k}_1^{(k)},\cdots, \text{k}_L^{(k)})$
and its corresponding parameters are $\bm{\theta}^{\textbf{k}^{(k)}} = \{\bm{W}_\ell^{(\text{k}_\ell^{(k)})}\}_{\ell=1}^L$.
Given $\textbf{k}^{(k)} = (\text{k}_1^{(k)},\cdots,\text{k}_L^{(k)})$,
let
$$
\bm{W}_{i:j}^{\textbf{k}^{(k)}} := \bm{W}_i^{(\text{k}_i^{(k)})}\bm{W}_{i-1}^{(\text{k}_{i-1}^{(k)})}\cdots \bm{W}_j^{(\text{k}_j^{(k)})}, \qquad 1\le j < i \le L.
$$
If $\text{k}_j^{(k)}=k$ for all $j$,
we write $\bm{W}_{i:j}^{(k)} := \bm{W}_i^{(k)}\bm{W}_{i-1}^{(k)}\cdots \bm{W}_j^{(k)}$
for $1\le j < i \le L$.
For notational completeness, we set $\bm{W}_{i:j} = \bm{I}$ whenever $i<j$.
Also, we simply write $\bm{W}_{L:1}^{\textbf{k}}$
as $\bm{W}^{\textbf{k}}$.
At the $(Lk+\ell)$-th iteration of BCGD,
the $\mathfrak{i}(\ell)$-th layer weight matrix is updated
according to
\begin{equation} \label{def:bcgd}
\begin{split}
\bm{W}_{\mathfrak{i}(\ell)}^{(k+1)} =
\bm{W}_{\mathfrak{i}(\ell)}^{(k)}
- \eta_{\mathfrak{i}(\ell)}^{\textbf{k}_{(k,\ell-1)}} \frac{\partial \mathcal{L}(\bm{\theta}) }{\partial \bm{W}_{\mathfrak{i}(\ell)}}\bigg|_{\bm{\theta} = \bm{\theta}^{\textbf{k}_{(k,\ell-1)}}},
\end{split}
\end{equation}
where $\textbf{k}_{(k,\ell)} = \textbf{k}_{(k,\ell-1)} + \bm{e}_{\mathfrak{i}(\ell)}$, $\bm{e}_{j} = (0,\cdots, 0, \overset{j \text{-th}}{1}, 0, \cdots, 0)$, and
$$
\textbf{k}_{(k,0)} = \textbf{k}_{k} = (k,\cdots,k), \quad
\textbf{k}_{(k,L)} = \textbf{k}_{k+1} =(k+1,\cdots,k+1).
$$
Here
$\mathfrak{i}(\ell) = \ell$ if the ascending (bottom to top) ordering is employed
and $\mathfrak{i}(\ell) = L-\ell+1$ if the descending (top to bottom) ordering is employed.
We refer to the BCGD with the bottom-to-top (top-to-bottom) ordering
as the ascending (descending) BCGD.
Given a linear neural network of depth $L$,
a single sweep of the ascending (descending) BCGD
consists of $L$ iterations running from the first (last) layer to the last (first) layer.
That is, after a single sweep, all weight matrices have been updated exactly once, in the order $\bm{W}_1$ to $\bm{W}_L$ ($\bm{W}_L$ to $\bm{W}_1$).
When $L=1$, the BCGD is identical to GD.
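A single sweep of the descending BCGD \eqref{def:bcgd} for the $L_2$-loss can be sketched as follows (a minimal numpy illustration; the constant learning rate is a stand-in for the optimal rate \eqref{LR-l2-Optimal}, and the gradient uses $\partial \mathcal{L}/\partial \bm{W}_\ell = \bm{W}_{L:\ell+1}^T(\bm{W}_{L:1}\bm{X}-\bm{Y})(\bm{W}_{\ell-1:1}\bm{X})^T$ for $\mathcal{L}=\frac12\|\bm{W}_{L:1}\bm{X}-\bm{Y}\|_F^2$):

```python
import numpy as np

def descending_bcgd_sweep(weights, X, Y, lr=1e-3):
    """One sweep of descending BCGD for (1/2)||W_L...W_1 X - Y||_F^2.
    `weights` lists [W_1, ..., W_L]; each matrix is updated once,
    from W_L down to W_1, with all others frozen at their latest
    values. `lr` is a placeholder constant learning rate."""
    L = len(weights)
    for ell in range(L, 0, -1):
        above = np.eye(Y.shape[0])
        for W in reversed(weights[ell:]):     # W_L ... W_{ell+1}
            above = above @ W
        below = X
        for W in weights[:ell - 1]:           # W_{ell-1} ... W_1 X
            below = W @ below
        residual = above @ weights[ell - 1] @ below - Y
        grad = above.T @ residual @ below.T
        weights[ell - 1] = weights[ell - 1] - lr * grad
    return weights
```

With $L=1$ this reduces to a single gradient descent step, consistent with the remark above.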
\subsection{Initialization}
\label{subsec:initialization}
Any gradient-based optimization starts with an initialization
$\bm{\theta}^{\textbf{k}_0} = \{\bm{W}_{\ell}^{(0)}\}_{\ell=1}^L$,
where $\textbf{k}_{0} = (0,\cdots,0)$.
Here we present three initialization schemes for training DLNs.
Let $\bm{A}$ be a matrix of size $m\times n$
and $\bm{B}$ be of size $k \times s$
where $m \ge k, n \ge s$.
We say $\bm{A}$ is equivalent to $\bm{B}$ up to zero-valued padding
if
$$
\bm{A} = \begin{bmatrix}
\bm{B} & \bm{0} \\
\bm{0} & \bm{0}
\end{bmatrix},
$$
and write $\bm{A} \approxeq \bm{B}$.
Suppose $\min\{m,n\} > k=s$.
We then write $\bm{A} \approxeq_1 \bm{B}$ if
$\bm{A} \approxeq \tilde{\bm{B}}$ where
$\tilde{\bm{B}}$ is a square matrix of size $\min\{m,n\}$ such that
$$
\tilde{\bm{B}} = \begin{bmatrix}
\bm{B} & \bm{0} \\
\bm{0} & \bm{I}_{\min\{m,n\} - k}
\end{bmatrix}.
$$
Here $\bm{I}_n$ is the identity matrix of size $n$.
We consider the following weight initialization schemes.
\begin{itemize}
\item Orthogonal Initialization \cite{Saxe2013orthInit}: $\bm{W}_j^{(0)} \approxeq \bm{Q}^j_{\min\{n_j,n_{j-1}\}}$ for all $1 \le j \le L$,
where $\bm{Q}^j_{n}$ denotes an orthogonal matrix of size
$n$, drawn independently for each layer $j$.
\begin{itemize}
\item Orth-Identity Initialization:
$\bm{W}_j^{(0)} \approxeq_1 \bm{Q}^j_{\min\{n_j,n_{j-1},\max\{n_0,n_L\}\}}$ for $1 \le j \le L$.
This is a special case of orthogonal initialization
that is proposed in the present work.
\item Identity Initialization \cite{Hardt2016identityMatters,Bartlett2019gdIdentityDLN}: $\bm{W}_j^{(0)} \approxeq \bm{I}_{\min\{n_j,n_{j-1}\}}$ for $1 \le j \le L$.
\end{itemize}
\item Balanced Initialization \cite{Arora2018convergenceDLN}:
Given a randomly drawn matrix $\bm{W}^{(0)} \in \mathbb{R}^{n_L\times n_0}$,
let us take a singular value decomposition $\bm{W}^{(0)} = \bm{U\Sigma V}^T$,
where $\bm{U} \in \mathbb{R}^{n_L \times \min\{n_0,n_L\}}$ and
$\bm{V} \in \mathbb{R}^{n_0 \times \min\{n_0,n_L\}}$
have orthonormal columns and
$\bm{\Sigma} \in \mathbb{R}^{\min\{n_0,n_L\} \times \min\{n_0,n_L\}}$
is diagonal.
Set
$\bm{W}^{(0)}_L \approxeq \bm{U\Sigma}^{1/L}$,
$\bm{W}^{(0)}_j \approxeq \bm{\Sigma}^{1/L}$ for $1< j < L$,
$\bm{W}^{(0)}_1 \approxeq \bm{\Sigma}^{1/L}\bm{V}^T$.
\item Random Initialization: $(\bm{W}_j^{(0)})_{ik} \sim N(0,\sigma_j^2)$ for all $1\le j \le L$.
Often $\sigma_j^2$ is chosen to be $1/n_{j-1}$, so that
the expected squared norm of each row is 1.
\end{itemize}
The orth-identity initialization
can be viewed as a hybrid of the orthogonal and the identity initialization schemes.
This paper is primarily concerned with the orth-identity initialization.
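The initialization schemes above can be sketched in NumPy as follows. The `pad` helper implements the zero-padding relation $\approxeq$; for the balanced scheme the hidden widths are assumed to be at least $\min\{n_0,n_L\}$ so that the padded shapes are well defined. All function names are ours, for illustration only.

```python
import numpy as np

def pad(B, m, n):
    """Embed B in the top-left corner of an m x n zero matrix (A equiv B
    up to zero padding)."""
    A = np.zeros((m, n))
    A[:B.shape[0], :B.shape[1]] = B
    return A

def orth_identity_init(dims, rng):
    """Orth-identity initialization: each W_j carries an orthogonal block of
    size min{n_j, n_{j-1}, max{n_0, n_L}}, completed by an identity block up
    to size min{n_j, n_{j-1}} and then by zeros up to shape n_j x n_{j-1}."""
    n0, nL = dims[0], dims[-1]
    W = []
    for j in range(1, len(dims)):
        m, n = dims[j], dims[j - 1]
        k = min(m, n, max(n0, nL))
        Q = np.linalg.qr(rng.standard_normal((k, k)))[0]  # random orthogonal
        B = np.eye(min(m, n))
        B[:k, :k] = Q
        W.append(pad(B, m, n))
    return W

def balanced_init(dims, rng):
    """Balanced initialization: factor a random end-to-end matrix W^(0)
    through its thin SVD so that every layer carries Sigma^(1/L)."""
    n0, nL, L = dims[0], dims[-1], len(dims) - 1
    U, s, Vt = np.linalg.svd(rng.standard_normal((nL, n0)), full_matrices=False)
    S = np.diag(s ** (1.0 / L))
    if L == 1:
        return [U @ np.diag(s) @ Vt]
    mats = [S @ Vt] + [S] * (L - 2) + [U @ S]
    return [pad(M, dims[j + 1], dims[j]) for j, M in enumerate(mats)]
```

Note that the balanced layers satisfy $\bm{W}_{j+1}^T\bm{W}_{j+1} = \bm{W}_j\bm{W}_j^T$, which is the balancedness property of \cite{Arora2018convergenceDLN}.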
\section{Introduction and overview} \label{sec: intro}
Let $\dot{\mathfrak{g}}$ be any symmetrizable Kac-Moody Lie algebra. Pick a collection $z_1,\dots,z_N$ of distinct points in the complex plane. The quadratic Hamiltonians of the quantum \emph{Gaudin model} for these data are the elements
\begin{equation} \label{quad Ham intro}
\mathcal{H}_i \coloneqq \sum_{\substack{j=1\\j\neq i}}^N \frac{\Xi_{ij}}{z_i-z_j}
,\qquad i=1,\dots,N,
\end{equation}
of the (suitably completed) tensor product $U(\dot{\mathfrak{g}})^{\ox N}$, where the notation $\Xi_{ij}$ means $\Xi$ acting in tensor factors $i$ and $j$.
Here $\Xi = \sum_{\alpha} \Xi_{(\alpha)}$ is the (possibly infinite) sum over all root spaces of $\dot{\mathfrak{g}}$ of the canonical elements $\Xi_{(\alpha)}\in \dot{\mathfrak{g}}_\alpha \ox \dot{\mathfrak{g}}_{-\alpha}$ defined by the standard bilinear form on $\dot{\mathfrak{g}}$ \cite[Chapter 2]{KacBook}. The action of $\Xi$ is well-defined on tensor products of highest-weight $\dot{\mathfrak{g}}$-modules. Let $L_{\lambda}$ denote the irreducible $\dot{\mathfrak{g}}$-module of highest weight $\lambda\in \mathfrak{h}^* = \dot{\mathfrak{g}}_0^*$, and pick a collection $\lambda_{1},\dots,\lambda_N$ of weights. Then in particular $\mathcal{H}_i$ are well-defined as linear maps in $\End(\bigotimes_{i=1}^N L_{\lambda_i})$. These maps commute amongst themselves. The \emph{Bethe ansatz} is a technique for finding their joint eigenvectors and eigenvalues. One constructs a vector $\psi$ called the \emph{weight function} or \emph{Schechtman-Varchenko vector}, which depends on variables called \emph{Bethe roots}. Provided these variables obey certain \emph{Bethe ansatz equations}, $\psi$ is a joint eigenvector of the $\mathcal{H}_i$, with certain explicit eigenvalues. Let us stress that this statement is known to hold for arbitrary symmetrizable Kac-Moody algebras $\dot{\mathfrak{g}}$. Indeed, it follows from results in \cite{SV,RV}, as we recall in an appendix.
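Although this paper concerns affine $\dot{\mathfrak{g}}$, the commutativity of the quadratic Hamiltonians is easy to illustrate numerically in the simplest finite-type case. The following sketch is our own toy example, not part of the construction above: $\dot{\mathfrak{g}} = \mathfrak{sl}_2$ with $N=3$ two-dimensional irreducibles, and dual bases taken with respect to the trace form $(x|y) = \tr(xy)$, so that $\Xi = e\otimes f + f\otimes e + \tfrac{1}{2}h\otimes h$. It checks that the $\mathcal{H}_i$ of \eqref{quad Ham intro} commute and sum to zero.

```python
import numpy as np

# sl2 Chevalley basis in the two-dimensional irreducible representation
e = np.array([[0., 1.], [0., 0.]])
f = np.array([[0., 0.], [1., 0.]])
h = np.array([[1., 0.], [0., -1.]])

def site(op, i, N=3):
    """Embed a one-site operator at tensor factor i of (C^2)^{tensor N}."""
    mats = [np.eye(2)] * N
    mats[i] = op
    out = mats[0]
    for M in mats[1:]:
        out = np.kron(out, M)
    return out

def Xi(i, j):
    """Split Casimir Xi_{ij} = e_i f_j + f_i e_j + (1/2) h_i h_j,
    built from dual bases for the trace form (x|y) = tr(xy)."""
    return (site(e, i) @ site(f, j) + site(f, i) @ site(e, j)
            + 0.5 * site(h, i) @ site(h, j))

z = [0.0, 1.0, 2.5]   # distinct marked points (arbitrary choice)

def H(i):
    """Quadratic Gaudin Hamiltonian at site i."""
    return sum(Xi(i, j) / (z[i] - z[j]) for j in range(3) if j != i)
```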
In the special case where $\dot{\mathfrak{g}}$ is of finite type, much more is known. Namely, in that case the quadratic Gaudin Hamiltonians $\mathcal{H}_i$ belong to a commutative subalgebra $\mathcal B \subset U(\dot{\mathfrak{g}})^{\otimes N}$ called the \emph{Gaudin} \cite{Fopers} or \emph{Bethe} \cite{MTV1} subalgebra. The Schechtman-Varchenko vector is a joint eigenvector for this commutative algebra $\mathcal B$ \cite{FFR}, and the eigenvalues are encoded as functions on a space of \emph{opers} (see below for the definition). In fact there is even a stronger result that the image of $\mathcal B$ in $\End(\bigotimes_{i=1}^N L_{\lambda_i})$ can be identified with the algebra of functions on a certain space of monodromy-free opers whose singularities are at the marked points $z_i$ and whose residues at these singularities are given by the highest weights $\lambda_i$ -- see \cite{MTVschubert} in type A and \cite{RybnikovProof} in all finite types.
Now suppose $\dot{\mathfrak{g}}$ is of untwisted affine type. Two natural questions arise \cite{FFsolitons}:
\begin{enumerate}
\item Are there higher Gaudin Hamiltonians? \emph{i.e.} are the quadratic Hamiltonians above part of some larger commutative subalgebra of (a suitable completion of) $U(\dot{\mathfrak{g}})^{\ox N}$, such that $\psi$ is still a common eigenvector?
\item If yes, then what parameterizes the eigenvalues of these higher Hamiltonians?
\end{enumerate}
In this paper we shall provisionally assume that the answer to the first question is yes, and give a conjectural answer to the second. Namely, we introduce a notion of meromorphic affine opers on $\mathbb{P}^1$ (affine opers have been defined previously in \cite{MR1896178, Fopersontheprojectiveline}), and then the main result of the paper is that:
\begin{enumerate}[(i)]
\item There is a notion of quasi-canonical form for affine opers which is the direct generalisation of the canonical form in finite type, and yet
\item The functions on the space of affine opers turn out to be of a very different character than in the finite case. Namely, they are given by hypergeometric integrals, over cycles of a certain twisted homology defined by the levels of the modules at the marked points.
\end{enumerate}
We conjecture that these hypergeometric integrals give the eigenvalues of (``local'') higher affine Gaudin Hamiltonians. This observation in turn allows us to make a conjecture about the form of the higher affine Gaudin Hamiltonians themselves.
\bigskip
To explain these statements, let us begin by recalling the situation in finite types.
Consider first $\dot{\mathfrak{g}}$ of finite type of rank $\ell$. The spectrum of the Gaudin algebra for $\dot{\mathfrak{g}}$ is described by ${{}^L\!\g}$-opers, where ${{}^L\!\g}$ is the Langlands dual of $\dot{\mathfrak{g}}$, \emph{i.e.} the Kac-Moody algebra with transposed Cartan matrix. Let ${{}^L\!\g} = {}^L\mathfrak{n}_- \oplus {}^L\mathfrak{h} \oplus {}^L\mathfrak{n}_+$ be a Cartan decomposition and $\bar p_{-1} \coloneqq \sum_{i=1}^\ell \check f_i\in {}^L\mathfrak{n}_-$ the corresponding principal nilpotent element ($\check f_i$ are Chevalley generators). A \emph{Miura ${{}^L\!\g}$-oper} is a connection of the form
\begin{subequations}
\begin{equation} d + \left(\bar p_{-1} + u(z)\right) dz \end{equation}
where $u(z)$ is a meromorphic function valued in ${}^L\mathfrak{h} = \mathfrak{h}^*$. Let $\alpha_i\in \mathfrak{h}^*$, $i=1,\dots,\ell$ be the simple roots of $\dot{\mathfrak{g}}$; they are also the simple coroots of ${{}^L\!\g}$. For the Gaudin model with regular singularities as described above, $u(z)$ generically takes the form
\begin{equation} u(z) = - \sum_{i=1}^N \frac{\lambda_i}{z-z_i} + \sum_{j=1}^m \frac{\alpha_{c(j)}}{z-w_j},\label{u} \end{equation}
\label{mop}\end{subequations}
where $w_1,\dots, w_m$ are the $m\in \mathbb{Z}_{\geq 0}$ Bethe roots, with ``colours'' $\{c(j)\}_{j=1}^m \subset \{1,\dots,\ell\}$.
An ${{}^L\!\g}$-\emph{oper} is a gauge equivalence class of connections of the form
\begin{equation} d + (\bar p_{-1} + b(z)) dz, \nonumber\end{equation}
where $b(z)$ is a meromorphic function valued in ${}^L\b_+ = {}^L\mathfrak{h} \oplus {}^L\mathfrak{n}_+$, under the gauge action of the unipotent subgroup ${}^L\!N = \exp({}^L\mathfrak{n}_+)$. So in particular each Miura ${{}^L\!\g}$-oper defines an underlying ${{}^L\!\g}$-oper, namely the equivalence class to which it belongs. It is known that each ${{}^L\!\g}$-oper has a \emph{unique} representative of the form
\begin{equation} d + \left( \bar p_{-1} + \sum_{k\in \bar E} \bar v_k(z)\bar p_k \right) dz. \label{canform}\end{equation}
Here the sum is over the (finite) set\footnote{In exactly one case, that of type $D_{2n}$, $\bar E$ is a multiset.} $\bar E$ of exponents of ${{}^L\!\g}$. For each exponent $k\in \bar E$, $\bar p_k\in {}^L\mathfrak{n}_+$ is a certain nonzero element of grade $k$ in the principal gradation of ${{}^L\!\g}$. Its coefficient $\bar v_k(z)$ is a meromorphic function valued in $\mathbb{C}$.
Since this representative is unique, these functions $\{\bar v_k(z)\}_{k\in \bar E}$ are good coordinates on the space of ${{}^L\!\g}$-opers. On the underlying ${{}^L\!\g}$-oper of the Miura ${{}^L\!\g}$-oper in \eqref{mop}, the functions $\{\bar v_k(z)\}_{k\in \bar E}$ will generically have poles at all the poles of $u(z)$. The Bethe equations are precisely the equations needed to ensure they in fact \emph{only} have poles at the marked points $\{z_i\}_{i=1}^N$ and \emph{not} at the Bethe roots $\{w_j\}_{j=1}^m$. Suppose the Bethe equations hold. Then the Schechtman-Varchenko vector $\psi$ obeys $S_k(z) \psi = \bar v_k(z)\psi $ for all $k\in \bar E$, where $\{S_k(z)\}_{k\in \bar E}$ are certain generating functions of the Gaudin algebra.
\bigskip
Let us now turn to affine types and try to follow the steps above as closely as possible. Suppose $\dot{\mathfrak{g}}$ is an untwisted affine Kac-Moody algebra with Cartan matrix of rank $\ell$. Let ${{}^L\!\g}$ be its Langlands dual, with Cartan decomposition ${{}^L\!\g} ={}^L\mathfrak{n}_- \oplus {}^L\mathfrak{h} \oplus {}^L\mathfrak{n}_+$. Define a \emph{Miura ${{}^L\!\g}$-oper} to be a connection of the form
\begin{equation} d + \left( p_{-1} + u(z) \right) dz \label{amop} \end{equation}
where $u(z)$ is again a meromorphic function valued in ${}^L\mathfrak{h}=\mathfrak{h}^*$, and where now $p_{-1} \coloneqq \sum_{i=0}^{\ell} \check f_i\in {}^L\mathfrak{n}_-.$
The Cartan subalgebra ${}^L\mathfrak{h}=\mathfrak{h}^*$ of ${{}^L\!\g}$ is now of dimension $\ell+2$. As a basis, we may choose the simple roots $\alpha_i$, $i=0,1,\dots, \ell$, of $\dot{\mathfrak{g}}$ (which are the simple coroots of ${{}^L\!\g}$) together with a choice of derivation element. It is natural to choose the derivation corresponding to the principal gradation of ${{}^L\!\g}$. So let us pick a derivation element $\rho \in {}^L\mathfrak{h}$ such that $[\rho, \check e_i] = \check e_i$, $[\rho,\check f_i] = -\check f_i$ for each $i=0,1,\dots,\ell$.
By analogy with \eqref{mop} one can expect that for the Gaudin model with regular singularities at the marked points $\{z_i\}_{i=1}^N$ the relevant Miura ${{}^L\!\g}$-opers are those with $u(z)$ just as in \eqref{u} except that now the ``colours'' of the Bethe roots $\{c(j)\}_{j=1}^m \subset \{0,1,\dots,\ell\}$ can include $0$. We can write $u(z)$ in our basis as
\begin{equation} u(z) = \sum_{i=0}^\ell u_i(z) \alpha_i - \frac{\varphi(z)}{h^\vee}\rho \nonumber\end{equation}
where $\{u_i(z)\}_{i=0}^\ell$ and $\varphi(z)$ are meromorphic functions valued in $\mathbb{C}$. (It proves convenient to include the factor of one over the Coxeter number $h^\vee$ of ${{}^L\!\g}$.) The function $\varphi(z)$ depends only on the levels $k_i$ of the $\dot{\mathfrak{g}}$-modules $L_{\lambda_i}$, \emph{i.e.} the values of the central element of $\dot{\mathfrak{g}}$ on these modules:
\begin{equation} \label{twist function intro}
\varphi(z) = \sum_{i=1}^N \frac{k_i}{z-z_i}.
\end{equation}
Now define an ${{}^L\!\g}$-\emph{oper} to be a gauge equivalence class of connections of the form
\begin{equation} d + (p_{-1} + b(z)) dz, \nonumber\end{equation}
where $b(z)$ is a meromorphic function valued in ${}^L\b_+ = {}^L\mathfrak{h} \oplus {}^L\mathfrak{n}_+$, under the gauge action of the subgroup ${}^L\!N_+ = \exp({}^L\mathfrak{n}_+)$. This subgroup is no longer unipotent, but it is still easy to make sense of gauge transformations grade-by-grade in the principal gradation. See \S\ref{sec: def oper} below. In this way we shall show that each ${{}^L\!\g}$-oper has a representative of the form
\begin{equation} d + \Bigg( p_{-1} - \frac{\varphi(z)}{h^\vee} \rho + \sum_{r\in E} v_r(z) p_r \Bigg) dz. \label{acf}\end{equation}
Here $E$ denotes the set of positive exponents of ${{}^L\!\g}$, which is now an infinite set\footnote{Once again, $E$ is in fact a multiset in the case of type $\null^1 D_{2n}$.}. For each $r\in \pm E$, $p_r$ is a certain nonzero element of grade $r$ in the principal gradation of ${{}^L\!\g}$, and its coefficient $v_r(z)$ is a meromorphic function valued in $\mathbb{C}$.
In particular, the underlying ${{}^L\!\g}$-oper of the Miura ${{}^L\!\g}$-oper in \eqref{amop} has a representative of this form.
\emph{However}, in stark contrast to the case of finite type algebras above, the representative \eqref{acf} is not unique, because there is a residual gauge freedom. This freedom is generated by gauge transformations of the form $\exp( \sum_{r\in E_{\geq 2}} g_r(z) p_r )$, where $g_r(z)$ are meromorphic functions valued in $\mathbb{C}$ and $E_{\geq 2}$ is the set of positive exponents of ${{}^L\!\g}$ excluding $1$. Such a transformation preserves the form of the connection \eqref{acf} and the function $\varphi(z)$ while sending, for each\footnote{There is a subtlety for $r=1$; see Corollary \ref{cor: v1} below.} $r\in E_{\geq 2}$,
\begin{equation} v_r(z) \longmapsto v_r(z) - g'_r(z) + \frac{r \varphi(z)}{h^\vee} g_r(z). \label{vr}\end{equation}
Consequently, these $v_r(z)$ are not themselves well-defined functions on the space of ${{}^L\!\g}$-opers, and one should not expect them to parameterize eigenvalues of Gaudin Hamiltonians. Rather, one should take appropriate integrals of them. Indeed, consider the multivalued (for generic $k_i$) function on $\mathbb{C} \setminus\{z_1,\dots,z_N\}$ defined as
\begin{equation*}
\mathcal P(z) \coloneqq \prod_{i=1}^N (z-z_i)^{k_i}.
\end{equation*}
If we multiply $v_r(z)$ by $\mathcal P(z)^{-r/h^\vee}$ then its transformation property \eqref{vr} can equivalently be written as
\begin{equation*}
\mathcal P(z)^{-r/h^\vee} v_r(z) \longmapsto \mathcal P(z)^{-r/h^\vee} v_r(z) - \partial_z \big( \mathcal P(z)^{-r/h^\vee} g_r(z) \big).
\end{equation*}
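Indeed, this rewriting is the elementary computation, using $\partial_z \log \mathcal P(z) = \sum_{i=1}^N \frac{k_i}{z-z_i} = \varphi(z)$,
\begin{equation*}
\partial_z \big( \mathcal P(z)^{-r/h^\vee} g_r(z) \big) = \mathcal P(z)^{-r/h^\vee} \bigg( g'_r(z) - \frac{r \varphi(z)}{h^\vee} g_r(z) \bigg),
\end{equation*}
whose right-hand side is precisely $\mathcal P(z)^{-r/h^\vee}$ times minus the shift of $v_r(z)$ in \eqref{vr}.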
We now see that in order to get gauge-invariant quantities we should consider integrals
\begin{equation} \label{ci}
I^\gamma_r \coloneqq \int_\gamma \mathcal P(z)^{-r/h^\vee} v_r(z) dz
\end{equation}
over any cycle $\gamma$ along which $\mathcal P^{-r/h^\vee}$ has a single-valued branch. The prototypical example of such a cycle is a Pochhammer contour, drawn below around two distinct points $z_i$ and $z_j$, $i, j = 1, \ldots, N$:
\begin{center}
\begin{tikzpicture}[scale=.6]
\filldraw (0,0) node [below right=-.5mm]{\scriptsize $z_i$} circle (2pt);
\filldraw (4,0) node [below right=-.5mm]{\scriptsize $z_j$} circle (2pt);
\draw[-stealth', postaction={decorate,decoration={markings,mark=at position .4 with {\arrow{stealth'}}}}] (2,0) .. controls (-2,-2) and (-2,1.25) .. (2,1.25) node[above]{$\gamma$};
\draw[postaction={decorate,decoration={markings,mark=at position .8 with {\arrow{stealth'}}}}] (2,1.25) .. controls (6,1.25) and (6,-2) .. (2,0);
\draw[postaction={decorate,decoration={markings,mark=at position .8 with {\arrow{stealth'}}}}] (2,0) .. controls (-2,2) and (-3,-1.5) .. (2,-1.5);
\draw (2,-1.5) .. controls (7,-1.5) and (6,2) .. (2,0);
\end{tikzpicture}
\end{center}
Another way of formulating the above, described in more detail in \S\ref{sec: twisted homology}, is to note that the transformation property \eqref{vr} says that the $1$-form $v_r(z) dz$ is really an element of some suitably defined twisted cohomology, and \eqref{ci} represents its integral over the class of a cycle $\gamma$ in the dual twisted homology. (For an introduction to twisted homology and local systems see \emph{e.g.} \cite{EFK}. Note, though, that the local system underlying the twisted homology described above is conceptually distinct from the ``usual'' local system associated to Gaudin models, namely the local system defined by the KZ connection.)
Next one should ask about the role of the Bethe equations. Consider the underlying ${{}^L\!\g}$-oper of the Miura ${{}^L\!\g}$-oper \eqref{amop} with $u(z)$ as in \eqref{u}. We shall show that the Bethe equations are precisely the equations needed to ensure that there exists a choice of gauge in which the functions $v_r(z)$ only have poles at the marked points $\{z_i\}_{i=1}^N$ and not at the Bethe roots $\{w_j\}_{j=1}^m$. The Bethe equations thus ensure that the integrands $\mathcal P(z)^{-r/h^\vee} v_r(z) dz$ in \eqref{ci} have no residues at the Bethe roots. Thus, in particular, if the Bethe equations hold then the integrals \eqref{ci} do not depend on the position of the chosen contour $\gamma$ relative to these Bethe roots.
The form of the functions \eqref{ci} on the space of ${{}^L\!\g}$-opers leads us to conjecture the existence of a collection $\{ \mathcal S_r(z) \}_{r \in E}$ of meromorphic functions valued in (a suitable completion of) $U(\dot{\mathfrak{g}})^{\otimes N}$, whose properties are listed in Conjecture \ref{conj: higher Ham}. These ensure, in particular, that the corresponding integrals
\begin{equation} \label{op ci}
\hat Q^\gamma_r \coloneqq \int_\gamma \mathcal P(z)^{-r/h^\vee} \mathcal S_r(z) dz
\end{equation}
mutually commute in the quotient of (the completion of) $U(\dot{\mathfrak{g}})^{\otimes N}$ in which the central elements act by the levels $k_i$. Moreover, we conjecture that the Schechtman-Varchenko vector $\psi$ is a simultaneous eigenvector of the $\hat Q^\gamma_r$ with eigenvalues given by \eqref{ci}. That is, $\hat Q^\gamma_r \psi = I^\gamma_r \psi$ for any choice of contour $\gamma$ as above and any $r \in E$.
A first non-trivial check of these conjectures is to show that in the case $\dot{\mathfrak{g}}' = \widehat{\mathfrak{sl}}_M$, for $M \geq 3$, there are commuting \emph{cubic} Hamiltonians fitting this pattern. In a forthcoming paper \cite{LVY2}, we explicitly construct such cubic Hamiltonians and prove that they commute; we also check that $\psi$ is an eigenvector with the expected eigenvalues as in \eqref{ci}, at least for $0$ and $1$ Bethe roots.
Our conjecture on the general form of the ``local'' higher affine Gaudin Hamiltonians in \eqref{op ci} is motivated by the recent construction of local integrals of motion in \emph{classical} affine Gaudin models. Specifically, it was shown in \cite{V17} that classical affine Gaudin models provide a unifying framework for describing a broad family of classical integrable field theories. One of the defining features of such theories is that the Poisson bracket of their Lax matrix is characterised by a certain rational function, called the \emph{twist function} $\varphi(z)$. We restrict attention in this article to those with twist function of the form \eqref{twist function intro}. It was subsequently shown in \cite{LMV17}, in the case when $\dot{\mathfrak{g}}$ is the untwisted affine Kac-Moody algebra associated with a semisimple Lie algebra of classical type, how to associate an infinite set $\{ Q^x_r \}_{r \in E}$ of local integrals of motion in such a theory to each zero $x$ of the twist function. These local charges were obtained by generalising the original procedure of \cite{Evans:1999mj} for classical principal chiral models on compact Lie groups of classical type, which had later also been extended to various other classical integrable field theories in \cite{Evans:2000hx,Evans:2000qx,Evans:2005zd}, see also \cite{Evans:2001sz}.
As we argue in \S\ref{sec: classical lim}, the integral over the contour $\gamma$ in \eqref{op ci} localises in the classical limit to critical points of the function $\mathcal{P}(z)$, in other words to zeroes of the twist function $\varphi(z)$. In this sense, the operators \eqref{op ci} provide natural quantisations of the local integrals of motion $Q^x_r$ in the classical affine Gaudin model.
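Concretely, the critical points of $\mathcal P$ solve $\partial_z \log \mathcal P(z) = \varphi(z) = 0$, so the localisation points are the $N-1$ roots of the numerator polynomial of $\varphi$. A small numerical sketch (the marked points and levels below are illustrative choices):

```python
import numpy as np

zs = [0.0, 1.0, 3.0]   # marked points (illustrative)
ks = [0.5, 1.0, 2.0]   # levels k_i

# varphi(z) = sum_i k_i/(z - z_i) = Num(z) / prod_i (z - z_i); the classical
# localisation points are the zeros of the numerator polynomial Num(z).
num = np.zeros(len(zs))
for ki, zi in zip(ks, zs):
    others = np.array([1.0])
    for zj in zs:
        if zj != zi:
            others = np.convolve(others, [1.0, -zj])  # prod_{j != i} (z - z_j)
    num = num + ki * others
crit = np.roots(num)   # zeros of the twist function
```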
Let us finally note that the appearance of hypergeometric integrals, as in \eqref{ci}, is very suggestive in relation to recent work on the massive ODE/IM correspondence for the Fateev model \cite{Lukyanov:2013wra, Bazhanov:2013cua}.
\bigskip
The paper is organised as follows.
In \S\ref{sec: affine algebra}, to set the notation we recall the definition of an affine Kac-Moody algebra $\dot{\mathfrak{g}}$ and its Langlands dual ${{}^L\!\g}$, focusing on the latter for the purpose of this paper. In particular, we recall the definition and main properties of its principal subalgebra.
In \S\ref{sec: opers} we introduce the space of meromorphic ${{}^L\!\g}$-opers on $\mathbb{P}^1$, working in a fixed global coordinate on $\mathbb{C} \subset \mathbb{P}^1$. The main result of this section is Theorem \ref{thm: quasi-canonical form} which describes the quasi-canonical form of an ${{}^L\!\g}$-oper $[\nabla]$. This allows us to describe gauge invariant functions on the space of ${{}^L\!\g}$-opers as hypergeometric integrals of the form \eqref{ci} in Corollary \ref{cor: opint}.
In \S\ref{sec: Miura opers} we introduce a class of Miura ${{}^L\!\g}$-opers with simple poles at the marked points $z_i$, $i = 1, \ldots, N$ with residues $\lambda_i \in \mathfrak{h}^\ast$, and additional simple poles at the Bethe roots $w_j$, $j = 1, \ldots, m$. The ${{}^L\!\g}$-oper $[\nabla]$ underlying such a Miura ${{}^L\!\g}$-oper $\nabla$ is shown to be regular at each of the Bethe roots $w_j$ if and only if the Bethe equations hold. Moreover, we show that the eigenvalues of the quadratic Gaudin Hamiltonians \eqref{quad Ham intro} on the tensor product $\bigotimes_{i=1}^N L_{\lambda_i}$ appear as the residues at the $z_i$ in the coefficient of $p_1$ in any quasi-canonical form of the ${{}^L\!\g}$-oper $[\nabla]$.
Based on the description of functions on the space of ${{}^L\!\g}$-opers from Corollary \ref{cor: opint}, in \S\ref{sec: main conj} we formulate our main conjecture about the form of the higher Gaudin Hamiltonians of an affine Gaudin model associated with the affine Kac-Moody algebra $\dot{\mathfrak{g}}$. See Conjecture \ref{conj: higher Ham}.
In \S\ref{sec: coord} we give a coordinate-independent definition of meromorphic ${{}^L\!\g}$-opers on an arbitrary Riemann surface $\Sigma$. In particular, we compare and contrast the description of the space of ${{}^L\!\g}$-opers in the cases when ${{}^L\!\g}$ is of finite and affine type.
Specialising the discussion of \S\ref{sec: coord} to the case $\Sigma = \mathbb{P}^1$, \S\ref{sec: twisted homology} is devoted to a coordinate-independent description of the functions on the space of ${{}^L\!\g}$-opers from Corollary \ref{cor: opint}.
In \S\ref{sec: discussion} we discuss various connections between the present work and the literature. In particular, we compare our main Theorem \ref{thm: quasi-canonical form} with the procedure of Drinfel'd and Sokolov \cite{DS} for constructing classical integrals of motion of generalised (m)KdV. We also mention connections with the (massive) ODE/IM correspondence. We provide motivation for Conjecture \ref{conj: higher Ham} by relating the form of the classical limit of the higher Gaudin Hamiltonians with the existing hierarchy of classical integrals of motion in classical affine Gaudin models.
Finally, in appendix \ref{sec: hyp arr} we briefly review the work of Schechtman and Varchenko \cite{SV} on the diagonalisation of the quadratic Gaudin Hamiltonians for an arbitrary Kac-Moody algebra $\dot{\mathfrak{g}}$ by the Bethe ansatz.
\subsubsection*{Acknowledgements}
CY is grateful to E. Mukhin for interesting discussions.
The authors thank M. Magro for interesting discussions.
This work is partially supported by the French Agence Nationale de la Recherche (ANR) under grant ANR-15-CE31-0006 DefIS.
\section{The affine algebra ${{}^L\!\g}$} \label{sec: affine algebra}
\subsection{Cartan data and defining relations} \label{sec: Cartan data}
Let $\dot{\mathfrak{g}} \coloneqq \dot{\mathfrak{g}}(A)$ be an untwisted affine Kac-Moody algebra with indecomposable Cartan matrix $A \coloneqq (A_{ij})_{i,j=0}^\ell$, and let ${{}^L\!\g} \coloneqq \dot{\mathfrak{g}}({}^t\!A)$ be its Langlands dual, namely the affine Kac-Moody algebra associated with the transposed Cartan matrix.
We have the Cartan decomposition
\begin{equation} \dot{\mathfrak{g}} = \mathfrak{n}_- \oplus \mathfrak{h} \oplus \mathfrak{n}_+\nonumber\end{equation}
where $\mathfrak{h}$ is a complex vector space of dimension $\dim \mathfrak{h} = \ell + 2$. The sets of simple roots $\{ \alpha_i \}_{i=0}^\ell$ and simple coroots $\{ \check \alpha_i \}_{i=0}^\ell$ of $\dot{\mathfrak{g}}$ are by definition linearly independent subsets of $\mathfrak{h}^\ast$ and $\mathfrak{h}$, respectively, such that $A_{ij} = \langle\alpha_j, \check\alpha_i\rangle$ for $i, j \in I \coloneqq \{ 0,\ldots, \ell \}$. Here $\langle\cdot,\cdot\rangle: \mathfrak{h}^\ast \times \mathfrak{h} \to \mathbb{C}$ denotes the canonical pairing.
In the Cartan decomposition of ${{}^L\!\g}$,
\begin{equation} {{}^L\!\g} = {}^L\mathfrak{n}_- \oplus {}^L\mathfrak{h} \oplus {}^L\mathfrak{n}_+,\nonumber\end{equation}
we may identify ${}^L\mathfrak{h} = \mathfrak{h}^*$. Then $\{ \alpha_i \}_{i=0}^\ell$ is a set of simple \emph{coroots} of ${{}^L\!\g}$, and $\{ \check\alpha_i \}_{i=0}^\ell$ a set of simple \emph{roots} of ${{}^L\!\g}$.
In terms of the Chevalley generators $\check e_i$, $i \in I$, of ${}^L\mathfrak{n}_+$ and $\check f_i$, $i \in I$, of ${}^L\mathfrak{n}_-$, the defining relations of ${{}^L\!\g}$ are given by
\begin{subequations} \label{KM relations}
\begin{alignat}{2}
\label{KM rel a} [x, \check e_i] &= \langle x,\check \alpha_i\rangle \check e_i, &\qquad
[x, \check f_i] &= - \langle x,\check\alpha_i\rangle \check f_i, \\
\label{KM rel b} [x, x'] &= 0, &\qquad
[\check e_i, \check f_j] &= \alpha_i \delta_{i,j},
\end{alignat}
for any $x, x' \in {}^L\mathfrak{h}$, together with the Serre relations
\begin{equation} \label{KM rel c}
(\text{ad}\, \check e_i)^{1- A_{ji}} \check e_j = 0, \qquad (\text{ad}\, \check f_i)^{1- A_{ji}} \check f_j = 0.
\end{equation}
\end{subequations}
\begin{remark}
We shall be mostly concerned with the Lie algebra ${{}^L\!\g}$ rather than $\dot{\mathfrak{g}}$. Nevertheless, since we have in mind applications to the Gaudin model for $\dot{\mathfrak{g}}$, we prefer to keep the notation adapted to $\dot{\mathfrak{g}}$, at the cost of the somewhat non-standard appearance of these relations \eqref{KM relations} and others below.
\end{remark}
Let $a_i$ (resp. $\check a_i$), $i\in I$, be the unique positive relatively prime integers such that $A\, \null^t \!(a_0, \ldots, a_\ell) = 0$ (resp. $\null^t\!A\,\null^t\!(\check a_0, \ldots, \check a_\ell) =0$).
Define
\begin{equation} h\coloneqq \sum_{i=0}^\ell a_i,\qquad h^\vee \coloneqq \sum_{i=0}^\ell \check a_i.\nonumber\end{equation}
Then $h$ is the Coxeter number of $\dot{\mathfrak{g}}$ (and the dual Coxeter number of ${{}^L\!\g}$) while $h^\vee$ is the Coxeter number of ${{}^L\!\g}$ (and the dual Coxeter number of $\dot{\mathfrak{g}}$). Define also
\begin{equation} \delta \coloneqq \sum_{i=0}^\ell a_i \alpha_i,\qquad \mathsf k \coloneqq \sum_{i=0}^\ell \check a_i \check \alpha_i. \nonumber\end{equation}
Then $\delta$ spans the centre of ${{}^L\!\g}$ while $\mathsf k$ spans the centre of $\dot{\mathfrak{g}}$. Denote by $\dot{\mathfrak{g}}' = [\dot{\mathfrak{g}},\dot{\mathfrak{g}}]$ and ${}^L\!\g'\coloneqq [{{}^L\!\g},{{}^L\!\g}]$ the derived subalgebras of $\dot{\mathfrak{g}}$ and ${{}^L\!\g}$, respectively.
We shall suppose that $\check a_0 = 1$ and $a_0 = 1$.
\begin{remark} \label{rem: not A2l}
One has $\check a_0 = 1$ and $a_0 = 1$ for all affine Kac-Moody algebras except for type $\null^2\!A_{2k}$. In type $\null^2\!A_{2k}$ one can choose to take either $\check a_0 = 1$ and $a_0 = 2$ or vice versa $\check a_0 = 2$ and $a_0 = 1$.
The Cartan matrices in these two descriptions are transposes of one another so that in this case ${{}^L\!\g}$ and $\dot{\mathfrak{g}}$ are both twisted, of type $\null^2\!A_{2k}$. Since we have in mind applications to the Gaudin model for an untwisted affine Kac-Moody algebra $\dot{\mathfrak{g}}$, we shall not consider this case.
\end{remark}
Recall that given any $d\in \mathfrak{h}$ such that $\langle \delta, d\rangle \neq 0$, $\{\check\alpha_i\}_{i=0}^\ell \cup \{d\}$ forms a basis of $\mathfrak{h}$; and similarly, given any $\Lambda\in {}^L\mathfrak{h}$ such that $\langle \Lambda, \mathsf k\rangle \neq 0$, $\{\alpha_i\}_{i=0}^\ell \cup \{ \Lambda\}$ provides a basis for ${}^L\mathfrak{h}$. We call such elements $d$ and $\Lambda$ \emph{derivation elements} of $\mathfrak{h}$ and ${}^L\mathfrak{h}$, respectively.
Let $\mathsf d\in \mathfrak{h}$ be a derivation element of $\dot{\mathfrak{g}}$ such that
\begin{equation*}
\langle \alpha_i , \mathsf d \rangle = \delta_{i,0},\qquad i\in I.
\end{equation*}
Such a $\mathsf d$ is unique up to the addition of a multiple of $\mathsf k$. Having made such a choice we define a non-degenerate symmetric bilinear form $(\cdot | \cdot) : \mathfrak{h} \times \mathfrak{h} \to \mathbb{C}$ on $\mathfrak{h}$ by
\begin{equation} \label{bilinear form def}
(\check\alpha_i | x) = a_i \check a_i^{-1} \langle \alpha_i, x \rangle, \qquad
(\mathsf d | \mathsf d) = 0
\end{equation}
for any $i \in I$ and $x \in \mathfrak{h}$. It extends uniquely to an invariant symmetric bilinear form on the whole of $\dot{\mathfrak{g}}$ \cite[Proposition 2.2]{KacBook}, which we also denote $(\cdot | \cdot) : \dot{\mathfrak{g}} \times \dot{\mathfrak{g}} \to \mathbb{C}$. It also induces a linear isomorphism $\nu: \mathfrak{h}\SimTo {}^L\mathfrak{h}$, and hence we have a non-degenerate symmetric bilinear form $(\nu^{-1}(\cdot) |\nu^{-1}(\cdot) ): {}^L\mathfrak{h} \times {}^L\mathfrak{h} \to \mathbb{C}$ on ${}^L\mathfrak{h}$, which henceforth we shall also denote by $(\cdot|\cdot)$. The latter then extends uniquely to an invariant symmetric bilinear form $(\cdot | \cdot) : {{}^L\!\g} \times {{}^L\!\g} \to \mathbb{C}$ on the whole of ${{}^L\!\g}$.
There exists a unique set $\{\Lambda_i\}_{i=0}^\ell \subset {}^L\mathfrak{h}$ of derivation elements of ${}^L\mathfrak{h}$, the fundamental coweights of ${{}^L\!\g}$ (and the fundamental weights of $\dot{\mathfrak{g}}$) relative to our choice of $\mathsf d$, such that
\begin{equation}
\langle \Lambda_i, \mathsf d \rangle = 0\qquad\text{and}\qquad
\langle \Lambda_i, \check\alpha_j \rangle = \delta_{i,j},\qquad i,j\in I.\label{def: Lambda}
\end{equation}
Likewise, there exists a unique set $\{ \check\Lambda_i \}_{i=0}^\ell \subset \mathfrak{h}$ of derivation elements of $\mathfrak{h}$, the fundamental coweights of $\dot{\mathfrak{g}}$ (and the fundamental weights of ${{}^L\!\g}$) such that
\begin{equation}
\langle \Lambda_0, \check\Lambda_i \rangle = 0\qquad\text{and}\qquad
\langle \alpha_i, \check\Lambda_j \rangle = \delta_{i,j},\qquad i,j\in I.\label{def: co Lambda}
\end{equation}
In particular, we have $\check\Lambda_0 = \mathsf d$.
\subsection{Principal gradation} \label{sec: principal grad}
Let $\check Q \coloneqq \bigoplus_{i=0}^\ell \mathbb{Z} \check \alpha_i$ be the root lattice of ${{}^L\!\g}$. We have the root space decomposition
\begin{equation}{{}^L\!\g} = \bigoplus_{\check\alpha\in \check Q} {{}^L\!\g}_{\check\alpha}, \nonumber\end{equation}
where ${{}^L\!\g}_{\check\alpha} \coloneqq \{ x \in {{}^L\!\g} \,|\, [h, x] = \langle h,\check\alpha\rangle x \;\text{for all} \; h \in {}^L\mathfrak{h}\}$. In particular, for the origin of the root lattice $0 \in \check Q$ we have ${{}^L\!\g}_0 = {}^L\mathfrak{h}$.
The \emph{height} of a root $\check\alpha = \sum_{i=0}^\ell r_i \check\alpha_i \in \check Q$ is $\hgt(\check \alpha) \coloneqq \sum_{i=0}^\ell r_i$. The \emph{principal gradation} of ${{}^L\!\g}$ is the $\mathbb{Z}$-gradation defined by
\begin{equation*}
{{}^L\!\g} = \bigoplus_{n \in \mathbb{Z}} {{}^L\!\g}_n,\qquad {{}^L\!\g}_n \coloneqq \bigoplus_{\substack{\check\alpha\in\check Q\\ \hgt (\check\alpha) = n}} {{}^L\!\g}_{\check \alpha}.
\end{equation*}
Equivalently, the principal gradation is the $\mathbb{Z}$-gradation defined by
\begin{equation} \deg(\check e_i) = 1,\qquad \deg(\check f_i) = -1, \qquad i\in I,\nonumber\end{equation}
and $\deg({}^L\mathfrak{h}) = 0$.
In particular ${{}^L\!\g}_{0} = {}^L\mathfrak{h}$, so that the notation ${{}^L\!\g}_0$, where the subscript $0$ could stand for either $0 \in \check Q$ or $0 \in \mathbb{Z}$, is unambiguous.
Let $\rho \in {}^L\mathfrak{h}$ be the unique derivation element of ${}^L\mathfrak{h}$ such that
\begin{equation*}
\langle \rho, \check \alpha_i \rangle = 1, \qquad (\rho | \rho) = 0,
\end{equation*}
for every $i \in I$. By the first property we have $\langle \rho, \mathsf k \rangle = h^\vee$.
The $\ad$-eigenspaces of $\rho$ are the subspaces ${{}^L\!\g}_n$, $n\in \mathbb{Z}$. Indeed, we have
\begin{equation*}
[\rho, \check e_i] = \check e_i,\qquad [\rho,\check f_i] = -\check f_i, \qquad i\in I.
\end{equation*}
\subsection{Principal subalgebra and exponents}\label{sec: princ sub}
\def\mathcal L{\mathcal L}
Define $p_{-1}$, the \emph{cyclic element} of ${{}^L\!\g}$, as
\begin{equation}
p_{-1} \coloneqq \sum_{i=0}^\ell \check f_i.\label{def: pm1}
\end{equation}
It belongs to the $(-1)^{\rm st}$-grade of the derived subalgebra ${}^L\!\g'=[{{}^L\!\g},{{}^L\!\g}]$.
There is a realization of ${}^L\!\g'$ as the central extension of a certain twisted loop algebra $\mathcal L$, in such a way that the power of the formal loop variable $t$ measures the grade in the principal gradation. (Equivalently, the derivation element $\rho\in {{}^L\!\g}$ is realized as $t \del_t$.) By studying this realization, one establishes some important facts about the adjoint action of $p_{-1}$. Here we shall merely recall these facts; for more details see \cite[Chapter 14]{KacBook}.
Let
\begin{equation} \pi : {}^L\!\g' \longrightarrow \mathcal L \cong {}^L\!\g'/\mathbb{C}\delta\nonumber\end{equation}
be the canonical projection. The twisted loop algebra $\mathcal L$ is the direct sum of the image and the kernel of the adjoint action of $\pi(p_{-1})$:
\begin{subequations}\label{Ld}
\begin{equation}\mathcal L = \ker(\ad_{\pi(p_{-1})}) \oplus \textup{im}(\ad_{\pi(p_{-1})}),\end{equation}
and this decomposition respects the principal gradation, \emph{i.e.} for each $n\in \mathbb{Z}$,
\begin{equation} \mathcal L_n = \ker(\ad_{\pi(p_{-1})})_n \oplus \textup{im}(\ad_{\pi(p_{-1})})_n, \end{equation}
\end{subequations}
where $\mathcal L_n \coloneqq \pi({}^L\!\g'_n)$.
The graded subspaces $\textup{im}(\ad_{\pi(p_{-1})})_n$ are all of dimension $\ell$ and moreover
\begin{equation*}
\ad_{\pi(p_{-1})} : \textup{im}(\ad_{\pi(p_{-1})})_n \overset{\sim}\longrightarrow \textup{im}(\ad_{\pi(p_{-1})})_{n-1}
\end{equation*}
is a linear isomorphism for each $n$.
The graded subspaces $\ker(\ad_{\pi(p_{-1})})_n$ have dimensions encoded by the exponents. Indeed, the multiset of \emph{exponents} of ${{}^L\!\g}$ is by definition the multiset consisting of each integer $n$ with multiplicity $\dim(\ker(\ad_{\pi(p_{-1})})_n)$. One has $\dim(\ker(\ad_{\pi(p_{-1})})_n)= \dim(\ker(\ad_{\pi(p_{-1})})_{-n})$ and $\dim(\ker(\ad_{\pi(p_{-1})})_0)=0$. So the multiset of exponents is of the form $\pm E$, where we denote by $E$ the multiset of strictly positive exponents. The kernel $\ker(\ad_{\pi(p_{-1})})$ forms an abelian Lie subalgebra of the twisted loop algebra $\mathcal L$, called the \emph{principal subalgebra}.
We need the ``lift'' to ${{}^L\!\g}$ of the decomposition \eqref{Ld}. For each $n\in \mathbb{Z}_{\neq 0}$, we have ${{}^L\!\g}_n = {}^L\!\g'_n$, the map $\pi|_{{}^L\!\g'_n} : {}^L\!\g'_n \xrightarrow\sim \mathcal L_n$ is a linear isomorphism, and one defines
\begin{equation} \mathfrak{a}_n \coloneqq (\pi|_{{}^L\!\g'_n})^{-1}\left( \ker(\ad_{\pi(p_{-1})})_n\right),\qquad
\mathfrak{c}_n \coloneqq (\pi|_{{}^L\!\g'_n})^{-1}\left( \textup{im}(\ad_{\pi(p_{-1})})_n\right).\nonumber\end{equation}
Meanwhile the subspaces $\mathfrak{a}_0$ and $\mathfrak{c}_0$ of ${{}^L\!\g}_0={}^L\mathfrak{h}$ are defined as
\begin{equation}
\mathfrak{a}_0 \coloneqq \mathbb{C}\delta \oplus \mathbb{C}\rho,\qquad
\mathfrak{c}_0 \coloneqq \ad_{p_{-1}}( \mathfrak{c}_{1} )
.\nonumber\end{equation}
Then for each $n\in \mathbb{Z}$ we have the direct sum decomposition
\begin{equation} \label{lg n decompA}
{{}^L\!\g}_n = \mathfrak{a}_n \oplus \mathfrak{c}_n.
\end{equation}
Let $\mathfrak{a} = \bigoplus_{n\in \mathbb{Z}} \mathfrak{a}_n$ and $\mathfrak{c} = \bigoplus_{n\in \mathbb{Z}} \mathfrak{c}_n$, so ${{}^L\!\g} = \mathfrak{a} \oplus \mathfrak{c}$.
One has $\dim(\mathfrak{c}_n)=\ell$ for each $n\in \mathbb{Z}$ and the linear map
\begin{equation*}
\ad_{p_{-1}} : \mathfrak{c}_{n} \overset{\sim}\longrightarrow \mathfrak{c}_{n-1}
\end{equation*}
is an isomorphism for every $n\in \mathbb{Z}$.
The subspace $\mathfrak{a}$ is a Lie subalgebra, the \emph{principal subalgebra of ${{}^L\!\g}$}. It is the central extension, by a one-dimensional centre $\mathbb{C}\delta$, of the principal subalgebra $\ker(\ad_{\pi(p_{-1})})$ of $\mathcal L$, equipped with a derivation element $\rho$. Indeed, we may pick a basis $\{p_n\}_{n\in \pm E} \cup \{ \delta, \rho\}$ of $\mathfrak{a}$ where for each exponent $n\in \pm E$, $p_n\in \mathfrak{a}_n$.
This basis can be so chosen that the non-trivial Lie algebra relations of $\mathfrak{a}$ are given by
\begin{equation} \label{Lie alg a com rel}
[p_m, p_n] = m \delta_{m+n,0} \, \delta, \qquad [\rho, p_n] = n\, p_n, \qquad m, n \in \pm E.
\end{equation}
The restriction to $\mathfrak{a}$ of the bilinear form $(\cdot | \cdot)$ on ${{}^L\!\g}$ is non-degenerate, with the non-trivial pairings given by
\begin{equation*}
(\delta | \rho) = (\rho | \delta) = h^\vee, \qquad (p_m | p_n) = h^\vee \delta_{m+n,0}, \qquad m, n \in \pm E.
\end{equation*}
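As a consistency check, note that these pairings are compatible with the relations \eqref{Lie alg a com rel} and the invariance of $(\cdot|\cdot)$: since $[p_1, p_{-1}] = \delta$ and $[p_{-1}, \rho] = p_{-1}$, we have
\begin{equation*}
(\delta | \rho) = \big( [p_1, p_{-1}] \,\big|\, \rho \big) = \big( p_1 \,\big|\, [p_{-1}, \rho] \big) = (p_1 | p_{-1}) = h^\vee.
\end{equation*}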
\begin{remark}\label{rem: pirem}$ $
\begin{enumerate}[(a)]
\item $\pm 1$ are always exponents with multiplicity 1. We keep $p_{-1}$ as in \eqref{def: pm1} and set
\begin{equation} p_1 = \sum_{i=0}^\ell a_i \check e_i.\nonumber\end{equation}
\item The pattern of exponents is periodic with period $rh^\vee$, where ${}^r\!X_{\mathsf N}$ is the type of ${{}^L\!\g}$ in Kac's notation. For a table of the patterns of exponents in all types see e.g. \cite[Chapter 14]{KacBook} or \cite[\S 5]{DS}.
\item The exponents of $\dot{\mathfrak{g}}$ and ${{}^L\!\g}$ are the same \cite[Corollary 14.3]{KacBook}, which is important for Conjecture \ref{conj: higher Ham} below. (Consequently, if ${}^sY_{\mathsf M}$ is the type of $\dot{\mathfrak{g}}$ then the pattern of exponents is also periodic with period $s h$, which need not equal $rh^\vee$. For us $s=1$ since $\dot{\mathfrak{g}}$ is untwisted.)
\item In all types except ${}^1\!D_{2k}$, the multiset $E$ of positive exponents is actually a set, \emph{i.e.} $\dim(\mathfrak{a}_n)\in \{0,1\}$ for all $n\in \mathbb{Z}_{\neq 0}$. In such cases, for each $j\in E$ the basis element $p_j\in \mathfrak{a}_j$ is unique up to rescaling. Exceptionally, in type $\null^1 \!D_{2k}$ one has $\dim(\mathfrak{a}_{2k-1 + (4k-2) n})=2$ for every $n\in\mathbb{Z}$. For each $n \geq 0$ one must therefore pick two basis vectors, each one labelled by one of the two distinct copies of $2k-1 + (4k-2) n$ in $E$.\label{rem: p freedom} (The basis vectors for $n\leq 0$ are then fixed by the form of the bilinear form above.)
\item
The action of the $\mathbb{C}$-linear map $\ad_{p_{-1}} : {{}^L\!\g} \to {{}^L\!\g}$ on the subspaces ${{}^L\!\g}_n$, $n \in \mathbb{Z}$, of the principal gradation of ${{}^L\!\g}$ can be summarised in the following diagram
\begin{equation*}
\begin{tikzpicture}[bij/.style={above,sloped,inner sep=0.6pt}]
\matrix (m) [matrix of math nodes, row sep=.8em, column sep=2.5em,text height=1.5ex, text depth=0.25ex]
{
& & & \mathbb{C} \rho & \mathfrak{a}_{-1} & \mathfrak{a}_{-2} & \cdots\\
\cdots & \mathfrak{c}_2 & \mathfrak{c}_1 & \mathfrak{c}_0 & \mathfrak{c}_{-1} & \mathfrak{c}_{-2} & \cdots\\
\cdots & \mathfrak{a}_2 & \mathfrak{a}_1 & \mathbb{C} \delta & & & \\
};
\path[->] (m-2-1) edge node[bij]{$\sim$} (m-2-2);
\path[->] (m-2-2) edge node[bij]{$\sim$} (m-2-3);
\path[->] (m-2-3) edge node[bij]{$\sim$} (m-2-4);
\path[->] (m-3-3) edge (m-3-4);
\path[->] (m-2-4) edge node[bij]{$\sim$} (m-2-5);
\path[->] (m-1-4) edge (m-1-5);
\path[->] (m-2-5) edge node[bij]{$\sim$} (m-2-6);
\path[->] (m-2-6) edge node[bij]{$\sim$} (m-2-7);
\node at ($1/2*(m-1-4)+ 1/2*(m-2-4)$) {$\oplus$};
\node at ($1/2*(m-2-2)+ 1/2*(m-3-2)$) {$\oplus$};
\node at ($1/2*(m-2-3)+ 1/2*(m-3-3)$) {$\oplus$};
\node at ($1/2*(m-2-4)+ 1/2*(m-3-4)$) {$\oplus$};
\node at ($1/2*(m-1-5)+ 1/2*(m-2-5)$) {$\oplus$};
\node at ($1/2*(m-1-6)+ 1/2*(m-2-6)$) {$\oplus$};
\end{tikzpicture}
\end{equation*}
where each column corresponds to a subspace ${{}^L\!\g}_n$ decomposed as in \eqref{lg n decompA}.
\item \label{rem: Bn vs im p-1}
Recall the decomposition \eqref{Ld} of the subquotient $\mathcal L$. In fact $\mathfrak{c}_n = \textup{im}(\ad_{p_{-1}})_n$ and $\mathfrak{a}_n = \ker(\ad_{p_{-1}})_n$ for every $n\in \mathbb{Z}$, with precisely the following exceptions: $\mathfrak{c}_0 \neq ( \textup{im} \ad_{p_{-1}} )_0$ and $\mathfrak{c}_{-1} \neq ( \textup{im} \ad_{p_{-1}} )_{-1}$ since $\delta = [p_1, p_{-1}]$ and $p_{-1} = [p_{-1}, \rho]$ both belong to the image of $\ad_{p_{-1}} : {{}^L\!\g} \to {{}^L\!\g}$; $\mathfrak{a}_1 \neq ( \ker \ad_{p_{-1}} )_1$ since $\ad_{p_{-1}} : {{}^L\!\g}_1 \hookrightarrow {{}^L\!\g}_0$ is injective; and $\mathfrak{a}_0 \neq ( \ker \ad_{p_{-1}} )_0$ since $[p_{-1}, \rho] = p_{-1} \neq 0$, so that $\rho \notin \ker(\ad_{p_{-1}})$. \qedhere
\end{enumerate}
\end{remark}
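For instance, when ${{}^L\!\g}$ is of type ${}^1\!A_1$ we have $\ell = 1$, $h^\vee = 2$ and hence $r h^\vee = 2$, and $E = \{1, 3, 5, \dots\}$ consists of all positive odd integers, each with multiplicity one. The principal subalgebra $\mathfrak{a}$ then has basis $\{ p_n \}_{n \in 2\mathbb{Z}+1} \cup \{ \delta, \rho \}$: it is an infinite-dimensional Heisenberg algebra with relations $[p_m, p_n] = m\, \delta_{m+n,0}\, \delta$, equipped with the derivation element $\rho$.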
\section{${{}^L\!\g}$-opers and quasi-canonical form} \label{sec: opers}
\subsection{Inverse limits} \label{sec: inverse limits}
Recall the subalgebras ${}^L\mathfrak{h}$ and ${}^L\mathfrak{n}_+$ of ${{}^L\!\g}$ from \S\ref{sec: Cartan data}. We introduce also the Borel subalgebra ${}^L\b_+ \coloneqq {}^L\mathfrak{h} \oplus {}^L\mathfrak{n}_+ \subset {{}^L\!\g}$. These can be described in terms of the principal gradation of ${{}^L\!\g}$ as ${}^L\mathfrak{n}_+ = \bigoplus_{n > 0} {{}^L\!\g}_n$ and ${}^L\b_+ = \bigoplus_{n \geq 0} {{}^L\!\g}_n$.
Moreover, there is a natural descending $\mathbb{Z}_{> 0}$-filtration on ${}^L\mathfrak{n}_+$ (and ${}^L\b_+$) by Lie ideals
\begin{equation*}
{}^L\mathfrak{n}_k = \bigoplus_{n \geq k} {{}^L\!\g}_n, \qquad k \in \mathbb{Z}_{> 0}.
\end{equation*}
Since ${}^L\mathfrak{n}_k \subset {}^L\!\g'$ for each $k \in \mathbb{Z}_{> 0}$, these ideals also define a descending $\mathbb{Z}_{>0}$-filtration on the derived subalgebra ${}^L\b'_+ \coloneqq {}^L\b_+ \cap {}^L\!\g'$.
Let $\mathcal{M}$ be the field of meromorphic functions on $\mathbb{P}^1 \coloneqq \mathbb{C} \cup \{ \infty \}$. For any Lie subalgebra $\mathfrak{p} \subset {{}^L\!\g}$ we introduce the Lie algebra $\mathfrak{p}(\mathcal{M}) \coloneqq \mathfrak{p} \otimes \mathcal{M}$ of $\mathfrak{p}$-valued meromorphic functions on $\mathbb{P}^1$.
The Lie algebras ${}^L\mathfrak{n}_k(\mathcal{M})$, $k \in \mathbb{Z}_{> 0}$ endow ${}^L\mathfrak{n}_+(\mathcal{M})$ with a descending $\mathbb{Z}_{>0}$-filtration by ideals such that the quotient Lie algebras ${}^L\mathfrak{n}_+(\mathcal{M}) / {}^L\mathfrak{n}_k(\mathcal{M})$, $k \in \mathbb{Z}_{>0}$ are nilpotent. Consider the Lie algebra defined as the inverse limit
\begin{equation*}
{}^L \hat\mathfrak{n}_+(\mathcal{M}) \coloneqq \varprojlim {}^L\mathfrak{n}_+(\mathcal{M}) / {}^L\mathfrak{n}_k(\mathcal{M}).
\end{equation*}
By definition, its elements are infinite sums $\sum_{n > 0} y_n$, with $y_n \in {{}^L\!\g}_n (\mathcal{M})$, which truncate to finite sums when working in the quotient ${}^L\mathfrak{n}_+(\mathcal{M}) / {}^L\mathfrak{n}_k(\mathcal{M})$ for any $k \in \mathbb{Z}_{> 0}$.
\begin{remark}
It should be stressed that for a given element $\sum_{n > 0} y_n$ of ${}^L \hat\mathfrak{n}_+(\mathcal{M})$, the orders of the poles of the ${{}^L\!\g}_n$-valued meromorphic functions $y_n$ are allowed to increase without bound as $n$ increases. Thus ${}^L \hat\mathfrak{n}_+(\mathcal{M})$ is strictly larger than ${}^L \hat\mathfrak{n}_+ \ox \mathcal{M}$, where ${}^L \hat\mathfrak{n}_+ \coloneqq \varprojlim {}^L\mathfrak{n}_+ /{}^L\mathfrak{n}_k$ is the completion of ${}^L\mathfrak{n}_+$.
\end{remark}
We also have the inverse limits
\begin{alignat*}{3} {}^L \hat\b_+(\mathcal{M}) &\coloneqq &{}^L\mathfrak{h}(\mathcal{M}) &\oplus {}^L \hat\mathfrak{n}_+(\mathcal{M}) &&= \varprojlim {}^L\b_+(\mathcal{M}) / {}^L\mathfrak{n}_k(\mathcal{M}),\\
{}^L \hat\g(\mathcal{M}) &\coloneqq {}^L\mathfrak{n}_-(\mathcal{M}) \oplus {}&{}^L\mathfrak{h}(\mathcal{M}) &\oplus {}^L \hat\mathfrak{n}_+(\mathcal{M}) &&= \varprojlim {{}^L\!\g}(\mathcal{M}) / {}^L\mathfrak{n}_k(\mathcal{M}).
\end{alignat*}
The latter is an inverse limit of vector spaces only, since the ${}^L\mathfrak{n}_k(\mathcal{M})$ are not Lie ideals in ${{}^L\!\g}(\mathcal{M})$. Nonetheless, ${}^L \hat\g(\mathcal{M})$ is a Lie algebra, with ${{}^L\!\g}(\mathcal{M})$ as a subalgebra.\footnote{Given any two elements $x= \sum_{n>-N} x_n$, $y=\sum_{n>-M} y_n$ of the vector space ${}^L \hat\g(\mathcal{M})$, with each $x_n,y_n\in {{}^L\!\g}_n(\mathcal{M})$, their Lie bracket $[x,y] = \sum_{k>-N-M} \sum_{\substack{n>-N,m>-M\\ n+m = k}} [x_n,y_m]$ is a well-defined element of ${}^L \hat\g(\mathcal{M})$ since the inner sum is finite for each $k$. This bracket obeys the Jacobi identity and agrees with the usual bracket on ${{}^L\!\g}(\mathcal{M})\subset {}^L \hat\g(\mathcal{M})$.}
\subsection{The group ${}^L \! \hat N_+(\mathcal{M})$} \label{sec: group lhN}
For every $k \in \mathbb{Z}_{> 0}$, the Baker-Campbell-Hausdorff formula endows the vector space ${}^L\mathfrak{n}_+(\mathcal{M})/ {}^L\mathfrak{n}_k(\mathcal{M})$ with the structure of a group. Specifically, we denote this group by
\begin{equation*}
\exp \big( {}^L\mathfrak{n}_+(\mathcal{M}) / {}^L\mathfrak{n}_k(\mathcal{M}) \big) \coloneqq \{ \exp(m) \,|\, m \in {}^L\mathfrak{n}_+(\mathcal{M}) / {}^L\mathfrak{n}_k(\mathcal{M}) \},
\end{equation*}
whose elements are denoted formally as exponentials of elements in ${}^L\mathfrak{n}_+(\mathcal{M}) / {}^L\mathfrak{n}_k(\mathcal{M})$. The group operation is then defined as
\begin{equation} \label{BCH formula}
\exp(x) \exp(y) \coloneqq \exp(x \bullet y) = \exp(x+y+ \mbox{\small $\frac{1}{2}$} [x,y] + \ldots)
\end{equation}
for all $x, y \in {}^L\mathfrak{n}_+(\mathcal{M})/ {}^L\mathfrak{n}_k(\mathcal{M})$. Here $x \bullet y$ is given by the Baker-Campbell-Hausdorff formula, whose first few terms are shown in the exponent on the right hand side of \eqref{BCH formula}. The sum is finite because ${}^L\mathfrak{n}_+(\mathcal{M}) / {}^L\mathfrak{n}_k(\mathcal{M})$ is nilpotent.
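For instance, in the quotient ${}^L\mathfrak{n}_+(\mathcal{M}) / {}^L\mathfrak{n}_3(\mathcal{M})$ every element has components only in degrees $1$ and $2$ of the principal gradation, so any nested bracket such as $[x, [x, y]]$ has degree at least $3$ and vanishes. The product \eqref{BCH formula} therefore truncates to
\begin{equation*}
\exp(x) \exp(y) = \exp\big( x + y + \mbox{\small $\frac{1}{2}$} [x,y] \big)
\end{equation*}
exactly.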
Now the formal exponential map
$\exp : {}^L\mathfrak{n}_+(\mathcal{M}) / {}^L\mathfrak{n}_k(\mathcal{M}) \SimTo \exp\big( {}^L\mathfrak{n}_+(\mathcal{M}) / {}^L\mathfrak{n}_k(\mathcal{M}) \big)$
is a bijection by definition,
and there are canonical group homomorphisms $\pi^m_k$ making the following diagram commutative:
\begin{equation*}
\begin{tikzpicture}[bij/.style={below,sloped,inner sep=2pt}]
\matrix (m) [matrix of math nodes, row sep=2.7em, column sep=2.5em,text height=1.5ex, text depth=0.25ex]
{
\exp \big( {}^L\mathfrak{n}_+(\mathcal{M}) / {}^L\mathfrak{n}_m(\mathcal{M}) \big) & \exp \big( {}^L\mathfrak{n}_+(\mathcal{M}) / {}^L\mathfrak{n}_k(\mathcal{M}) \big)\\
{}^L\mathfrak{n}_+(\mathcal{M}) / {}^L\mathfrak{n}_m(\mathcal{M}) & {}^L\mathfrak{n}_+(\mathcal{M}) / {}^L\mathfrak{n}_k(\mathcal{M})\\
};
\path[->>] (m-1-1) edge node[above]{$\pi^m_k$} (m-1-2);
\path[<-] (m-1-2) edge node[bij]{$\sim$} node[left=2.5mm]{\raisebox{-2.5mm}{$\exp$}} (m-2-2);
\path[<-] (m-1-1) edge node[bij]{$\sim$} node[left=2.5mm]{\raisebox{-2.5mm}{$\exp$}} (m-2-1);
\path[->>] (m-2-1) edge (m-2-2);
\end{tikzpicture}
\end{equation*}
for all $m \geq k > 0$. We define a group ${}^L \! \hat N_+(\mathcal{M})$ as the corresponding inverse limit
\begin{equation} \label{pro-unipotent}
{}^L \! \hat N_+(\mathcal{M}) \coloneqq \varprojlim \exp \big( {}^L\mathfrak{n}_+(\mathcal{M}) / {}^L\mathfrak{n}_k(\mathcal{M}) \big).
\end{equation}
The above commutative diagram defines an exponential map $\exp : {}^L \hat\mathfrak{n}_+(\mathcal{M}) \to {}^L \! \hat N_+(\mathcal{M})$.
\subsection{Definition of an ${{}^L\!\g}$-oper} \label{sec: def oper}
Now, and until \S\ref{sec: coord} below, we shall pick and fix a global coordinate $z$ on $\mathbb{C} \subset \mathbb{P}^1$. Thus, for any $f\in {}^L \hat\b_+(\mathcal{M})$ its holomorphic de Rham differential is $d f = dz \del_z f$.
Define $\op_{{{}^L\!\g}}(\mathbb{P}^1)$ to be the affine space of connections of the form
\begin{equation} \nabla = d + p_{-1} dz + b dz, \qquad b\in {}^L \hat\b_+(\mathcal{M}). \label{affine space}\end{equation}
\begin{remark}
This is an affine space over ${}^L \hat\b_+(\mathcal{M})$. For the moment, in calling it a space of connections we mean merely that it admits an action of the group ${}^L \! \hat N_+(\mathcal{M})$ by gauge transformations, as we shall now describe. In \S\ref{sec: coord} we will discuss its behaviour under coordinate transformations.
\end{remark}
Define the adjoint action of the group ${}^L \! \hat N_+(\mathcal{M})$ on the vector space ${}^L \hat\g(\mathcal{M})$ as follows. Let $g = \exp(m) \in {}^L \! \hat N_+(\mathcal{M})$ with $m = \sum_{n > 0} m_n \in {}^L \hat\mathfrak{n}_+(\mathcal{M})$.
\begin{subequations} \label{gauge transf}
For any $u \in {}^L \hat\g(\mathcal{M})$, which we write as $u=\sum_{n \geq M} u_n$ for some $M \in \mathbb{Z}$, we define the adjoint action of $g$ on $u$ as
\begin{equation} \label{gauge transf c}
g u g^{-1} \coloneqq \sum_{k \geq 0} \frac{1}{k!} \ad_m^k u = \sum_{n \geq M} u_n + \sum_{n \geq M} \sum_{r > 0} [m_r, u_n] + \frac{1}{2} \sum_{n \geq M} \sum_{r,s > 0} \big[ m_s, [m_r, u_n] \big] + \ldots
\end{equation}
where the dots represent terms involving an increasing number of $m_n$'s with $n \geq 1$. Since $\deg m_n = n$ in the principal gradation of ${{}^L\!\g}$, it follows that for each $k \in \mathbb{Z}_{> 0}$ there are only finitely many terms of degree less than $k$ in the expression on the right hand side. Therefore the sum on the right hand side of \eqref{gauge transf c} is a well-defined element of ${}^L \hat\g(\mathcal{M})$.
\begin{lemma} \label{lem: ga}The definition \eqref{gauge transf c} defines an action of the group ${}^L \! \hat N_+(\mathcal{M})$ on ${}^L \hat\g(\mathcal{M})$.\end{lemma}
\begin{proof}
By the Baker-Campbell-Hausdorff formula we have
\begin{equation*}
\bigg( \sum_{k \geq 0} \frac{1}{k!} \ad_m^k \bigg) \bigg( \sum_{\ell \geq 0} \frac{1}{\ell!} \ad_n^\ell \bigg) u = \sum_{k \geq 0} \frac{1}{k!} \ad_{m \bullet n}^k u,
\end{equation*}
for any $m, n \in {}^L \hat\mathfrak{n}_+(\mathcal{M})$ and $u \in {}^L \hat\g(\mathcal{M})$.
\end{proof}
Now we define also
\begin{equation} \label{gauge transf a}
(dg) g^{-1} \coloneqq \sum_{k \geq 1} \frac{1}{k!} \ad_m^{k-1} dm
= \sum_{n > 0} d m_n + \frac{1}{2} \sum_{n,r > 0} [m_r, d m_n] + \ldots,
\end{equation}
\end{subequations}
which is a well-defined sum in ${}^L \hat\mathfrak{n}_+(\mathcal{M})dz$.
\begin{lemma} \label{lem: dga} For any $g,h\in {}^L \! \hat N_+(\mathcal{M})$, we have
\begin{equation} d(gh) (gh)^{-1} = g \left((dh) h^{-1}\right) g^{-1} + (dg) g^{-1}. \nonumber\end{equation}
\end{lemma}
\begin{proof} By direct calculation from the definitions \eqref{gauge transf} one verifies that
\begin{equation} d( gyg^{-1}) = \left[ dg g^{-1}, gyg^{-1}\right] + g (dy) g^{-1},\nonumber\end{equation}
for any $y \in {}^L \hat\g(\mathcal{M})$ and any $g\in {}^L \! \hat N_+(\mathcal{M})$.
By Lemma \ref{lem: ga}, we have $(gh) y (gh)^{-1} = g (h y h^{-1}) g^{-1}$, and on applying $d$ to both sides we obtain
\begin{equation} \left[- d(gh) (gh)^{-1} + g \left((dh) h^{-1}\right) g^{-1} + (dg) g^{-1}, x \right] = 0\nonumber\end{equation}
where $x=gyg^{-1}$ is arbitrary. Since the centre of ${}^L \hat\mathfrak{n}_+(\mathcal{M})$ is trivial, the result follows.
\end{proof}
Observe that $g p_{-1} g^{-1} -p_{-1} \in {}^L \hat\b_+(\mathcal{M})$, so that if $u \in {}^L \hat\g(\mathcal{M})$ is of the form $u=p_{-1} + b$ with $ b\in {}^L \hat\b_+(\mathcal{M})$ then so is $gug^{-1}$. Hence, from Lemmas \ref{lem: ga} and \ref{lem: dga}, we have the following.
\begin{proposition}
We have an action of ${}^L \! \hat N_+(\mathcal{M})$ on $\op_{{{}^L\!\g}}(\mathbb{P}^1)$ defined by
\begin{align*}
{}^L \! \hat N_+(\mathcal{M}) \times \op_{{{}^L\!\g}}(\mathbb{P}^1) &\longrightarrow \op_{{{}^L\!\g}}(\mathbb{P}^1), \\
(g, d + p_{-1} dz + b dz) &\longmapsto d + g p_{-1} g^{-1} dz - (dg) g^{-1} + g b g^{-1}dz,
\end{align*}
which we refer to as the action by \emph{gauge transformations}. If $\nabla \in \op_{{{}^L\!\g}}(\mathbb{P}^1)$ then we denote by $\nabla^g \in \op_{{{}^L\!\g}}(\mathbb{P}^1)$ its gauge transformation by an element $g \in {}^L \! \hat N_+(\mathcal{M})$. \qed\end{proposition}
Our main object of interest, the space of \emph{${{}^L\!\g}$-opers}, can now be defined as the quotient of the affine space \eqref{affine space} by this gauge action
\begin{equation*}
\Op_{{{}^L\!\g}}(\mathbb{P}^1) \coloneqq \op_{{{}^L\!\g}}(\mathbb{P}^1) \big/ {}^L \! \hat N_+(\mathcal{M}).
\end{equation*}
In fact, we shall be interested in certain affine subspaces of $\Op_{{{}^L\!\g}}(\mathbb{P}^1)$ defined as follows.
\subsection{Twist function $\varphi$} \label{sec: twist function}
Fix a choice of meromorphic function $\varphi$ on $\mathbb{P}^1$, called the \emph{twist function}. We call a derivation element $\Lambda$ of ${}^L\mathfrak{h}$ \emph{normalised} if $\langle \Lambda, \mathsf k \rangle = 1$.
Define $\op_{{{}^L\!\g}}(\mathbb{P}^1)^\varphi$ to be the affine subspace of $\op_{{{}^L\!\g}}(\mathbb{P}^1)$ consisting of connections of the form
\begin{equation} d + p_{-1} dz - \Lambda \varphi dz + b' dz, \qquad b' \in {}^L \hat\b'_+(\mathcal{M}). \nonumber\end{equation}
\begin{lemma} \label{lem: Lambda indep}
The affine subspace $\op_{{{}^L\!\g}}(\mathbb{P}^1)^\varphi$ is independent of the choice of normalised derivation element $\Lambda$, and it is stable under ${}^L \! \hat N_+(\mathcal{M})$-valued gauge transformations.
\begin{proof}
Let $\Lambda$ and $\Lambda'$ be two choices of normalised derivation element of ${}^L\mathfrak{h}$. Then we have $\langle \Lambda, \mathsf k \rangle - \langle \Lambda', \mathsf k \rangle = 0$ so that $\Lambda - \Lambda'$ is in the span of the simple roots $\alpha_i$, $i \in I$ and hence $\Lambda - \Lambda' \in {}^L\mathfrak{h} \cap {}^L\b'_+$. It follows that $\op_{{{}^L\!\g}}(\mathbb{P}^1)^\varphi$ is independent of $\Lambda$.
Since we have the direct sum of vector spaces ${{}^L\!\g} = {}^L\!\g' \oplus \mathbb{C} \Lambda$, and so in particular ${}^L\b_+ = {}^L\b'_+ \oplus \mathbb{C} \Lambda$, it follows from the definition of the action of ${}^L \! \hat N_+(\mathcal{M})$ on $\op_{{{}^L\!\g}}(\mathbb{P}^1)$ by gauge transformations that $\op_{{{}^L\!\g}}(\mathbb{P}^1)^\varphi$ is stable.
\end{proof}
\end{lemma}
Given a choice of twist function $\varphi$, we may now define the corresponding affine subspace of ${{}^L\!\g}$-opers as
\begin{equation} \label{lg opers twist}
\Op_{{{}^L\!\g}}(\mathbb{P}^1)^\varphi \coloneqq \op_{{{}^L\!\g}}(\mathbb{P}^1)^\varphi \big/ {}^L \! \hat N_+(\mathcal{M}).
\end{equation}
If $\nabla \in \op_{{{}^L\!\g}}(\mathbb{P}^1)^\varphi$ then we shall denote its class in $\Op_{{{}^L\!\g}}(\mathbb{P}^1)^\varphi$ by $[\nabla]$.
We introduce also the \emph{twisted de Rham differential} corresponding to the twist function $\varphi$. For every $f\in {}^L \hat\b_+(\mathcal{M})$,
\begin{align} \label{twisted de Rham def}
d^\varphi f &\coloneqq df - {h^\vee}^{-1}\varphi (\ad_\rho f) dz\\
&\,=dz \left( \del_z f - {h^\vee}^{-1} \varphi(\ad_\rho f) \right).\nonumber
\end{align}
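For instance, if $f = f_j p_j$ with $f_j \in \mathcal{M}$ and $p_j \in \mathfrak{a}_j$ for some $j \in E$, then $\ad_\rho f = j f$ by \eqref{Lie alg a com rel}, and hence
\begin{equation*}
d^\varphi (f_j p_j) = \Big( f_j' - \frac{j \varphi}{h^\vee} f_j \Big) p_j \, dz.
\end{equation*}
It is this combination which appears in the transformation of the coefficient functions $v_j$ of a quasi-canonical form below.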
\subsection{Quasi-canonical form of an ${{}^L\!\g}$-oper} \label{sec: quasi-can form}
Recall, from \S\ref{sec: princ sub}, the definition of the principal subalgebra $\mathfrak{a}$ of ${{}^L\!\g}$, and its basis $\{ p_j \}_{j \in \pm E} \cup \{ \delta, \rho \}$ where $E$ is the multiset of positive exponents of ${{}^L\!\g}$.
Let $\hat\mathfrak{a}(\mathcal{M})$ denote the completion of the algebra $\mathfrak{a}(\mathcal{M})$ of $\mathfrak{a}$-valued meromorphic functions on $\mathbb{P}^1$:
\begin{equation*}
\hat \mathfrak{a}(\mathcal{M}) \coloneqq \varprojlim \mathfrak{a}(\mathcal{M})/(\mathfrak{a} \cap {}^L\mathfrak{n}_k)(\mathcal{M}).
\end{equation*}
For each $n \in \mathbb{Z}_{\geq 0}$, let $\hat\mathfrak{a}_{\geq n}(\mathcal{M}) \coloneqq \varprojlim \mathfrak{a}_{\geq n}(\mathcal{M})/(\mathfrak{a}_{\geq n} \cap {}^L\mathfrak{n}_k)(\mathcal{M})$, where $\mathfrak{a}_{\geq n}\coloneqq \bigoplus_{j=n}^\8 \mathfrak{a}_j$. These are Lie subalgebras of $\hat\mathfrak{a}(\mathcal{M})$.
\begin{theorem} \label{thm: quasi-canonical form}
Every class $[\nabla] \in \Op_{{{}^L\!\g}}(\mathbb{P}^1)^\varphi$ has a representative $\nabla \in \op_{{{}^L\!\g}}(\mathbb{P}^1)^\varphi$ of the form
\begin{equation*}
\nabla = d + p_{-1} dz - {h^\vee}^{-1} \rho \, \varphi dz + a dz, \qquad a\in \hat\mathfrak{a}_{\geq 1}(\mathcal{M}).
\end{equation*}
We say that such a representative is in \emph{quasi-canonical form}. For any $g \in {}^L \! \hat N_+(\mathcal{M})$, $\nabla^g$ is still in quasi-canonical form if and only if $g = \exp(f) \in \exp( \hat\mathfrak{a}_{\geq 2}(\mathcal{M}) )$, in which case
$\nabla^g = \nabla - d^\varphi f$.
Equivalently but more explicitly, every class $[\nabla] \in \Op_{{{}^L\!\g}}(\mathbb{P}^1)^\varphi$ has a \emph{quasi-canonical} representative of the form
\begin{equation}
\nabla = d + \Bigg( p_{-1} - \frac{\varphi}{h^\vee} \rho + \sum_{j \in E} v_j p_j \Bigg) dz,\label{qcf}
\end{equation}
where $v_j$ is a meromorphic function on $\mathbb{P}^1$ for each positive exponent $j \in E$. The gauge transformations in ${}^L \! \hat N_+(\mathcal{M})$ preserving quasi-canonical form are precisely those of the form $\exp\big( \sum_{j \in E_{\geq 2}} f_j p_j \big)$ with $f_j$ meromorphic functions on $\mathbb{P}^1$. The effect of such gauge transformations on the functions $v_j$ is to send
\begin{equation}
v_j \longmapsto v_j - f'_j+ \frac{j \varphi}{h^\vee} f_j\label{vuptof}
\end{equation}
for all $j\in E_{\geq 2}$, and to leave $v_1$ invariant.
\end{theorem}
\begin{proof}
Let $[\nabla] \in \Op_{{{}^L\!\g}}(\mathbb{P}^1)^\varphi$. Since ${h^\vee}^{-1} \rho$ is a normalised derivation element of ${}^L\mathfrak{h}$, see \S\ref{sec: principal grad}, it follows using Lemma \ref{lem: Lambda indep} that there is a representative of $[\nabla]$ of the form $\nabla = d + p_{-1} dz - {h^\vee}^{-1} \rho \, \varphi dz + \sum_{n \geq 0} u_ndz \in \op_{{{}^L\!\g}}(\mathbb{P}^1)^\varphi$ for some functions $u_n \in {}^L\!\g'_n(\mathcal{M})$.
Let $g \in {}^L \! \hat N_+(\mathcal{M})$ be of the form $g = \exp(m)$ with $m = \sum_{n>0} m_n$ where $m_n \in \mathfrak{c}_n(\mathcal{M})$ for each $n > 0$. Using \eqref{gauge transf} we determine the gauge transformation of $\nabla$ by $g$ to be
\begin{equation*}
\nabla^g = d + p_{-1} dz - {h^\vee}^{-1} \rho \, \varphi dz + \sum_{n \geq 0} a_ndz
\end{equation*}
where $a_n \in{}^L\!\g'_n(\mathcal{M})$ for each $n \geq 0$ are of the form
\begin{align} \label{recursion can form}
a_ndz &= u_ndz + [m_{n+1}, p_{-1}] dz + F_n\big( \{ u_k, dm_k, m_k \}_{k<n} \big)\\
&\qquad - d^\varphi m_n + [m_n, u_0] + \mbox{\small $\frac{1}{2}$} \big[ m_n, [m_1, p_{-1}] \big] dz + \mbox{\small $\frac{1}{2}$} (1 - \delta_{n,1}) \big[ m_1, [m_n, p_{-1}] \big] dz. \notag
\end{align}
The last term on the first line of the right hand side contains all the terms involving only $m_k$ and $u_k$ with $k < n$, and the second line contains those terms involving $m_n$. Let $w_ndz$, $w_n \in {}^L\!\g'_n(\mathcal{M})$, denote the sum of all these terms, \emph{i.e.} we rewrite \eqref{recursion can form} as
\begin{equation} \label{recursion can form bis}
a_n = u_n + [m_{n+1}, p_{-1}] + w_n.
\end{equation}
We can now use \eqref{recursion can form bis} to determine $m_n \in \mathfrak{c}_n(\mathcal{M})$ recursively for all $n > 0$ by requiring that $a_n \in \mathfrak{a}_n(\mathcal{M})$ for each $n \geq 0$. Indeed, suppose $m_k$ has been determined for each $k \leq n$. Then $w_n$ is known (in fact $w_0 = 0$ for the base case) and so decomposing $u_n + w_n$ relative to the direct sum \eqref{lg n decompA} (or rather ${}^L\!\g'_0 = \mathbb{C} \delta \oplus \mathfrak{c}_0$ in the case $n=0$) we can use the injectivity of $\ad_{p_{-1}} : \mathfrak{c}_{n+1} \to \mathfrak{c}_n$ to fix $m_{n+1}$ uniquely so as to cancel the component of $u_n + w_n$ in $\mathfrak{c}_n$, thereby ensuring that $a_n \in \mathfrak{a}_n(\mathcal{M})$ for all $n > 0$ or $a_0 \in (\mathbb{C} \delta)(\mathcal{M})$. This proves $\nabla^g \in d + p_{-1} dz - {h^\vee}^{-1} \rho \, \varphi dz + (\mathbb{C} \delta \oplus \hat\mathfrak{a}_{\geq 1})(\mathcal{M})dz$.
Let us write $\nabla^g = d + p_{-1} dz - {h^\vee}^{-1} \rho \, \varphi dz + \delta \chi dz + a'dz$ with $\chi \in \mathcal{M}$ and $a' \in \hat\mathfrak{a}_{\geq 1}(\mathcal{M})$. In order to remove the term in $\delta$, we can apply a further gauge transformation by $h = \exp(- \chi p_1)$, which yields
\begin{equation*}
\nabla^{hg} = d + p_{-1} dz - {h^\vee}^{-1} \rho \, \varphi dz + a'dz + d^\varphi (\chi p_1).
\end{equation*}
The last two terms belong to $\hat\mathfrak{a}_{\geq 1}(\mathcal{M})dz$, which completes the proof of the first statement.
Finally, suppose $\nabla = d + p_{-1} dz - {h^\vee}^{-1} \rho \, \varphi dz + udz$ where $u \in \hat\mathfrak{a}_{\geq 1}(\mathcal{M})$ and let $g = \exp(m)$ for some $m \in {}^L \hat\mathfrak{n}_+(\mathcal{M})$. Then $\nabla^g = d + p_{-1} dz - {h^\vee}^{-1} \rho \, \varphi dz + vdz$ where $v = \sum_{n \geq 0} a_n \in {}^L \hat\b'_+(\mathcal{M})$ is given by \eqref{recursion can form}. We want to recursively determine the components $m_n \in {{}^L\!\g}_n (\mathcal{M})$ of $m$ so that $v \in \hat\mathfrak{a}_{\geq 1}(\mathcal{M})$. Considering first the case $n=0$ we have $u_0 = a_0 = 0$ so that \eqref{recursion can form} reduces to $[m_1, p_{-1}] = 0$, and therefore $m_1 = 0$ since $ \ad_{p_{-1}} : {{}^L\!\g}_1 \to {{}^L\!\g}_0$ is injective. In particular, for every $n \geq 0$ the last two terms on the right hand side of \eqref{recursion can form} are now absent. Suppose that having $a_k \in \mathfrak{a}_k(\mathcal{M})$ for all $k < n$ requires that $m_k \in \mathfrak{a}_k(\mathcal{M})$ for each $k \leq n$. It just remains to show that the condition $a_n \in \mathfrak{a}_n(\mathcal{M})$ also implies $m_{n+1} \in \mathfrak{a}_{n+1}(\mathcal{M})$. For this we note that all the terms contained in $F_n\big( \{ u_k, dm_k, m_k \}_{k<n} \big)$ are commutators, which vanish using the fact that $u \in \hat\mathfrak{a}_{\geq 1}(\mathcal{M})$, $m_1=0$, $m_k \in \mathfrak{a}_k(\mathcal{M})$ for $1 < k \leq n$ and $\mathfrak{a}_{\geq 1}$ is abelian. So \eqref{recursion can form} now simply reads
\begin{equation*}
a_ndz - u_ndz + d^\varphi m_n = [m_{n+1}, p_{-1}] dz.
\end{equation*}
The left hand side clearly belongs to $\mathfrak{a}_n(\mathcal{M})dz$, using the fact that $\ad_\rho m_n = n m_n$. On the other hand, the right hand side belongs instead to $\mathfrak{c}_n(\mathcal{M})dz$ since $\mathfrak{c}_n = (\textup{im} \ad_{p_{-1}})_n$ for every $n > 0$, cf. Remark \ref{rem: pirem}(\ref{rem: Bn vs im p-1}). Hence both sides vanish so that, in particular, $m_{n+1} \in \mathfrak{a}_{n+1}(\mathcal{M})$. The vanishing of the left hand side is the final statement about the form of $\nabla^g - \nabla$.
\end{proof}
Although the quasi-canonical form of an ${{}^L\!\g}$-oper $[\nabla] \in \Op_{{{}^L\!\g}}(\mathbb{P}^1)^\varphi$ is not unique, the coefficient $v_1$ of $p_1$ in any quasi-canonical form is the same. To emphasise the origin of this distinction between $v_1$ and all the remaining coefficients $v_j$, $j\in E_{\geq 2}$, the following is helpful.
\begin{proposition}
Every class $[\nabla] \in \Op_{{{}^L\!\g}}(\mathbb{P}^1)^\varphi$ has a representative of the form
\begin{equation}
\nabla = d + \Bigg( p_{-1} - \frac{\varphi}{h^\vee} \rho + v_0 \delta + \sum_{j \in E} v_j p_j \Bigg) dz,
\end{equation}
where $v_j$ is a meromorphic function on $\mathbb{P}^1$ for each $j \in \{0\} \cup E$. The gauge transformations in ${}^L \! \hat N_+(\mathcal{M})$ preserving this form are precisely those of the form $\exp\big( \sum_{j \in E} f_j p_j \big)$ with $f_j$ meromorphic functions on $\mathbb{P}^1$. The effect of such gauge transformations on the functions $v_j$ is as in \eqref{vuptof} for all $j\in E_{\geq 2}$, and now also
\begin{align}
v_0 & \longmapsto v_0 + f_1 \nonumber\\
v_1 &\longmapsto v_1 - f'_1+ \frac{\varphi}{h^\vee} f_1.\nonumber
\end{align}
\end{proposition}
\begin{proof}The proof is very similar to that of Theorem \ref{thm: quasi-canonical form}.\end{proof}
Consequently, if one works not with ${{}^L\!\g}$ but with the quotient by the centre ${{}^L\!\g}/\mathbb{C}\delta$ then the distinction between $v_1$ and the rest disappears, as follows. (We return to this point in \S\ref{sec: quad Ham} below.)
\begin{corollary}\label{cor: v1}
For an $({{}^L\!\g}/\mathbb{C}\delta)$--oper $[\nabla] \in \Op_{{{}^L\!\g}/\mathbb{C}\delta}(\mathbb{P}^1)^\varphi$, there is always a quasi-canonical representative of the form \eqref{qcf}. The gauge transformations in ${}^L \! \hat N_+(\mathcal{M})$ preserving this form are precisely those of the form $\exp\big( \sum_{j \in E} f_j p_j \big)$ with $f_j$ meromorphic functions on $\mathbb{P}^1$. The effect of such gauge transformations on the functions $v_j$ is as in \eqref{vuptof} but now for all $j\in E$ (including $1$).\qed
\end{corollary}
Returning to ${{}^L\!\g}$-opers, we have the following explicit expression for the coefficient $v_1$ in any quasi-canonical form.
\begin{proposition} \label{prop: can form u1}
The coefficient of $p_1\in \mathfrak{a}_1$ of any quasi-canonical form of an ${{}^L\!\g}$-oper $[\nabla] \in \Op_{{{}^L\!\g}}(\mathbb{P}^1)^\varphi$ is
\begin{equation*}
v_1 = {h^\vee}^{-1} \big( \mbox{\small $\frac{1}{2}$} (u_0 | u_0) + (\rho | u'_0) - {h^\vee}^{-1} \varphi (\rho | u_0) + (p_{-1} | u_1) \big),
\end{equation*}
where
\begin{equation} \nabla = d + p_{-1} dz - {h^\vee}^{-1} \rho \, \varphi dz + \sum_{n \geq 0} u_n dz \in \op_{{{}^L\!\g}}(\mathbb{P}^1)^\varphi, \nonumber\end{equation}
with $u_n \in {{}^L\!\g}'_n(\mathcal{M})$, is any representative of $[\nabla]$.
\begin{proof}
In the present case, the recursion relation \eqref{recursion can form} for $n=0$ gives $u_0 = - [m_1, p_{-1}]$. Note that here we are including in $m_1$ the term $-\chi p_1$ coming from the subsequent gauge transformation performed in the second step of the proof of Theorem \ref{thm: quasi-canonical form}. Using this, the relation \eqref{recursion can form} for $n=1$ then reads
\begin{align*}
a_1 dz &= u_1 dz + [m_2, p_{-1}] dz + \mbox{\small $\frac{1}{2}$} \big[ m_1, [m_1, p_{-1}] \big] dz + [m_1, u_0] dz - d^\varphi m_1\\
&= u_1 dz + [m_2, p_{-1}] dz - \mbox{\small $\frac{1}{2}$} \big[ m_1, [m_1, p_{-1}] \big] dz - d^\varphi m_1.
\end{align*}
By applying the linear map $(p_{-1}| \cdot)$ to both sides we find
\begin{equation*}
(p_{-1} | a_1)dz = (p_{-1} | u_1) dz + \mbox{\small $\frac{1}{2}$} (u_0| u_0) dz - (p_{-1} | d^\varphi m_1),
\end{equation*}
where to obtain the second term on the right hand side we have used again the fact that $u_0 = - [m_1, p_{-1}]$. To evaluate further the last term above, we note that
\begin{equation*}
(p_{-1} | d^\varphi m_1) = ([p_{-1}, \rho] | d^\varphi m_1) = (\rho | [d^\varphi m_1, p_{-1}]) = - (\rho | d u_0) + {h^\vee}^{-1} \varphi (\rho | u_0) dz,
\end{equation*}
where in the last step we used the definition \eqref{twisted de Rham def} of the twisted de Rham differential.
Since $(p_{-1} | p_1) = h^\vee$, we arrive at the desired expression for $v_1 = {h^\vee}^{-1} (p_{-1} | a_1)$.
\end{proof}
\end{proposition}
\begin{remark} \label{rem: can form u1}
Let $\nabla \in \op_{{{}^L\!\g}}(\mathbb{P}^1)^\varphi$ be as in the statement of Proposition \ref{prop: can form u1} and introduce $\wt u_0 \coloneqq - {h^\vee}^{-1} \rho \,\varphi + u_0 \in {{}^L\!\g}_0(\mathcal{M})$ and $\wt u_n \coloneqq u_n \in {{}^L\!\g}_n(\mathcal{M})$ for every $n > 0$. Then we have
\begin{equation} \nabla = d + p_{-1} dz + \sum_{n \geq 0} \wt u_n dz,\nonumber\end{equation}
and, using the fact that $(\rho | \rho) = 0$, cf. \S\ref{sec: principal grad}, the expression for the coefficient $v_1$ in any quasi-canonical form of $[\nabla]$ can be rewritten as
\begin{equation*}
v_1 = {h^\vee}^{-1} \big( \mbox{\small $\frac{1}{2}$} (\wt u_0 | \wt u_0) + (\rho | \wt u'_0) + (p_{-1} | \wt u_1) \big). \qedhere
\end{equation*}
\end{remark}
\subsection{Twisted homology and functions on the space of affine opers} \label{sec: twisted homology coord}
Our goal is to describe functions $\Op_{{{}^L\!\g}}(\mathbb{P}^1)^\varphi\to \mathbb{C}$ on the space of meromorphic ${{}^L\!\g}$-opers on $\mathbb{P}^1$.
Theorem \ref{thm: quasi-canonical form} shows that one well-defined map $\Op_{{{}^L\!\g}}(\mathbb{P}^1)^\varphi\to \mathcal{M}$ is given by extracting the coefficient $v_1$ of $p_1$ in any quasi-canonical form (and Proposition \ref{prop: can form u1} gives the explicit formula). We can then ``pair'' this function with any point $p\in \mathbb{P}^1$ at which $v_1$ does not have a pole, by simply evaluating it there, $v_1 \mapsto v_1(p)$.
Yet Theorem \ref{thm: quasi-canonical form} also shows that the remaining data in the oper comes in the form of functions $v_i$, $i\in E_{\geq 2}$, defined only up to the addition of certain ``twisted'' derivatives. They are thus naturally regarded as cohomology classes.
In \S\ref{sec: coord} we shall make that idea precise by showing that each of the functions $v_i$, $i\in E_{\geq 2}$, represents a cocycle in the cohomology of the de Rham complex with coefficients in a certain local system. A generalization of the usual de Rham theorem states that there is a pairing (given by integrating) between such cocycles and the cycles of the singular homology with coefficients in the dual local system. For the moment though, we are not quite in a position to invoke such results: a local system is a vector bundle with a flat connection and we cannot yet identify the correct bundle, since we have no handle on its transition functions.
Nonetheless, it is already possible to define the integrals one should take to obtain functions $\Op_{{{}^L\!\g}}(\mathbb{P}^1)^\varphi\to \mathbb{C}$, as follows.
First, let us now and for the remainder of this article restrict attention to the case when the twist function $\varphi$ has only simple poles, \emph{i.e.} we shall take it to be of the form
\begin{equation} \label{twist function}
\varphi(z) \coloneqq \sum_{i=1}^N \frac{k_i}{z - z_i},
\end{equation}
for some $k_i \in \mathbb{C}^\times$, $i=1,\ldots, N$. It has simple poles in the subset $\{ z_i \}_{i=1}^N \subset \mathbb{P}^1$.
\begin{remark}
Based on the situation in finite types, \cite{FFT,VY3}, our expectation is that introducing a pole of order $p\geq 2$ at $z_i$ in the twist function $\varphi$ (and more generally in the Miura ${{}^L\!\g}$-opers of \S\ref{sec: class of Miura opers} below) will correspond to a Gaudin model in which one assigns to the marked point $z_i$ a representation of a \emph{Takiff algebra} $\dot{\mathfrak{g}}[t]/t^p\dot{\mathfrak{g}}[t]$ over the affine Kac-Moody algebra $\dot{\mathfrak{g}}$.
\end{remark}
We denote the complement of the set of marked points $\{ z_i \}_{i=1}^N$ as
\begin{equation} \label{set X def}
X \coloneqq \mathbb{C} \setminus \{ z_i \}_{i=1}^N.
\end{equation}
Consider the multivalued holomorphic function $\mathcal{P}$ on $X$ defined by
\begin{equation}\label{def: P}
\mathcal{P}(z) \coloneqq \prod_{i=1}^N (z - z_i)^{k_i},
\end{equation}
which is related to the twist function as $\varphi(z) = \partial_z \log \mathcal{P}(z)$.
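Note that $\mathcal{P}$ is genuinely multivalued unless all the $k_i$ are integers: analytic continuation along a small positively oriented loop around $z_i$ multiplies $\mathcal{P}$ by
\begin{equation*}
\exp\bigg( \oint_{|z - z_i| = \epsilon} \varphi(z) \, dz \bigg) = e^{2\pi i k_i},
\end{equation*}
by the residue theorem. In particular, for the fractional powers $\mathcal{P}^a$, $a \in \mathbb{C}$, appearing below, a single-valued branch exists along a given contour only if the relevant monodromies cancel.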
Observe that the ambiguity in the function $v_j$, namely \eqref{vuptof}, can be expressed as
\begin{equation*}
\mathcal{P}(z)^{-j/h^\vee} v_j(z) \longmapsto \mathcal{P}(z)^{-j/h^\vee} v_j(z) - \partial_z \big( \mathcal{P}(z)^{-j/h^\vee} f_j(z) \big).
\end{equation*}
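Explicitly, since $\varphi = \partial_z \log \mathcal{P}$ we have $\partial_z \mathcal{P}^{-j/h^\vee} = - \frac{j}{h^\vee} \varphi \, \mathcal{P}^{-j/h^\vee}$, so that
\begin{equation*}
\partial_z \big( \mathcal{P}(z)^{-j/h^\vee} f_j(z) \big) = \mathcal{P}(z)^{-j/h^\vee} \Big( f'_j(z) - \frac{j}{h^\vee} \varphi(z) f_j(z) \Big),
\end{equation*}
which is $\mathcal{P}^{-j/h^\vee}$ times the quantity subtracted from $v_j$ in \eqref{vuptof}.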
We therefore obtain the following corollary of Theorem \ref{thm: quasi-canonical form}.
\begin{corollary}\label{cor: opint}
Suppose
\begin{equation} \nabla = d + p_{-1} dz - {h^\vee}^{-1} \rho \, \varphi dz + \sum_{j \in E} v_j p_j dz\nonumber\end{equation}
is a quasi-canonical form of an oper $[\nabla]\in \Op_{{{}^L\!\g}}(\mathbb{P}^1)^\varphi$.
Let $r \in E_{\geq 2}$ be an exponent greater than or equal to $2$.
Let $\gamma$ be any contour in $X = \mathbb{C} \setminus \{ z_i \}_{i=1}^N$ such that
\begin{enumerate}
\item $\gamma$ is closed;
\item there exists a single-valued branch of the function $\mathcal{P}^{-r/h^\vee}$ along $\gamma$;
\item $v_r$ has no poles (and is therefore holomorphic) along $\gamma$.
\end{enumerate}
Then the following integral is gauge-invariant, \emph{i.e.} it depends only on the oper $[\nabla]$ and is independent of the choice of quasi-canonical form:
\begin{equation} I_r^\gamma([\nabla]) \coloneqq \int_\gamma \mathcal{P}(z)^{-r/h^\vee} v_r(z)dz.\nonumber\end{equation}
This function is invariant under smooth deformations of the contour $\gamma$ which do not cross any pole of $v_r$ or any of the marked points $\{z_i\}_{i=1}^N$. \qed
\end{corollary}
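A standard way to satisfy condition (2), even when no simple loop around the marked points will do, is to take for $\gamma$ a Pochhammer-type contour. For instance, given two marked points $z_i$ and $z_j$, let $\gamma_i$ and $\gamma_j$ be loops based at a common point of $X$ encircling only $z_i$, respectively only $z_j$, and set
\begin{equation*}
\gamma = \gamma_i \gamma_j \gamma_i^{-1} \gamma_j^{-1}.
\end{equation*}
The monodromy of $\mathcal{P}^{-r/h^\vee}$ along $\gamma_i$ is multiplication by the scalar $e^{-2\pi i r k_i/h^\vee}$, and since scalars commute the total monodromy along $\gamma$ is trivial. Hence a single-valued branch of $\mathcal{P}^{-r/h^\vee}$ exists along $\gamma$ for arbitrary values of the $k_i$.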
\section{Miura ${{}^L\!\g}$-opers and the Bethe equations} \label{sec: Miura opers}
\subsection{A class of Miura ${{}^L\!\g}$-opers} \label{sec: class of Miura opers}
Define a \emph{Miura ${{}^L\!\g}$-oper} as a connection of the form
\begin{equation} \label{g-oper with twist}
\nabla \coloneqq d + p_{-1} dz + u \, dz \in \op_{{{}^L\!\g}}(\mathbb{P}^1)
\end{equation}
where $u \in {}^L \mathfrak{h}(\mathcal{M}) = \mathfrak{h}^\ast(\mathcal{M})$, using the natural identification ${}^L \mathfrak{h} = \mathfrak{h}^\ast$.
Let
$\MOp_{{{}^L\!\g}}(\mathbb{P}^1)$
denote the affine space of all Miura ${{}^L\!\g}$-opers. Given a Miura ${{}^L\!\g}$-oper $\nabla \in \MOp_{{{}^L\!\g}}(\mathbb{P}^1)$ we refer to its class $[\nabla] \in \Op_{{{}^L\!\g}}(\mathbb{P}^1)$ as the underlying ${{}^L\!\g}$-oper.
Recall the twist function $\varphi \in \mathcal{M}$ defined in \eqref{twist function}. Given any choice of normalised derivation element $\Lambda$ of ${}^L\mathfrak{h}$, cf. \S\ref{sec: def oper}, we introduce the affine subspace
\begin{equation} \label{MOp with twist}
\MOp_{{{}^L\!\g}}(\mathbb{P}^1)^\varphi \coloneqq d + p_{-1} dz - \Lambda \varphi dz + {}^L\mathfrak{h}'(\mathcal{M}) dz
\end{equation}
of $\MOp_{{{}^L\!\g}}(\mathbb{P}^1)$ where ${}^L\mathfrak{h}'$ is the span of the simple roots $\{ \alpha_i \}_{i=0}^\ell$. It follows from the first part of the proof of Lemma \ref{lem: Lambda indep} that $\MOp_{{{}^L\!\g}}(\mathbb{P}^1)^\varphi$ is independent of the choice of normalised derivation $\Lambda$.
In this paper we shall be interested in Miura ${{}^L\!\g}$-opers \eqref{g-oper with twist} where the meromorphic $\mathfrak{h}^\ast$-valued function $u \in \mathfrak{h}^\ast(\mathcal{M})$ has at most simple poles. Fix a collection of weights $\lambda_1, \ldots, \lambda_N \in \mathfrak{h}^\ast$. We shall, more specifically, be interested in the case when $u$ has a simple pole at each marked point $z_i$, $i =1, \ldots, N$, with residue $-\lambda_i \in {}^L\mathfrak{h}$. We will furthermore allow the function $u$ to have simple poles at some additional $m \in \mathbb{Z}_{\geq 0}$ marked points $w_j$, $j = 1, \ldots, m$, with residues there given by simple roots $\alpha_{c(j)}$, for some function $c : \{ 1, \ldots, m\} \to I= \{ 0,\ldots, \ell \}$. In other words, we shall consider Miura ${{}^L\!\g}$-opers of the form
\begin{equation} \label{u Miura op def}
\nabla = d + p_{-1} dz - \sum_{i=1}^N \frac{\lambda_i}{z - z_i} dz + \sum_{j=1}^m \frac{\alpha_{c(j)}}{z - w_j} dz.
\end{equation}
The residue of $\nabla$ at infinity is the weight $\lambda_\infty \coloneqq \sum_{i=1}^N \lambda_i - \sum_{j=1}^m \alpha_{c(j)} \in {}^L\mathfrak{h}$.
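Indeed, denoting by $u$ the ${}^L\mathfrak{h}$-valued coefficient in \eqref{u Miura op def}, for large $|z|$ we have
\begin{equation*}
u(z) = - \frac{1}{z} \Bigg( \sum_{i=1}^N \lambda_i - \sum_{j=1}^m \alpha_{c(j)} \Bigg) + O(z^{-2}) = - \frac{\lambda_\infty}{z} + O(z^{-2}),
\end{equation*}
so that in the local coordinate $w = 1/z$ at infinity the term $u \, dz$ reads $\lambda_\infty \frac{dw}{w} + O(1) \, dw$, with residue $\lambda_\infty$.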
Decomposing each weight $\lambda_i \in \mathfrak{h}^\ast$ with respect to the basis $\{ \alpha_j \}_{j=1}^\ell \cup \{ \rho, \delta \}$, we may write it as
\begin{equation} \label{hw lambda i}
\lambda_i = \dot{\lambda}_i + \frac{k_i}{h^\vee} \rho - \Delta_i \delta
\end{equation}
for some $\dot{\lambda}_i \in \dot\mathfrak{h}^\ast \coloneqq \textup{span}_{\mathbb{C}} \{ \alpha_j \}_{j=1}^\ell$, $k_i \coloneqq \langle \lambda_i, \mathsf k \rangle \in \mathbb{C}$ and $\Delta_i \coloneqq -\langle \lambda_i, \mathsf d \rangle \in \mathbb{C}$. Since $\dot\lambda_i$, $\delta$ and the simple roots $\alpha_{c(j)}$ all lie in ${}^L\mathfrak{h}'$, it follows that $\nabla$ belongs to the space $\MOp_{{{}^L\!\g}}(\mathbb{P}^1)^\varphi$ with the twist function $\varphi$ defined as in \eqref{twist function} in terms of the $k_i$, $i=1,\ldots, N$.
\subsection{Regular points} \label{sec: regular points}
Let $\mathcal{M}^{\rm reg}_x$ be the $\mathbb{C}$-algebra of meromorphic functions on $\mathbb{P}^1$ which are holomorphic at $x$.
We shall say that an ${{}^L\!\g}$-connection $\nabla = d + p_{-1} dz + b \, dz$ in $\op_{{{}^L\!\g}}(\mathbb{P}^1)$ is \emph{regular} at a point $x \in \mathbb{C}$ if in fact $b \in {}^L \hat\b_+(\mathcal{M}^{\rm reg}_x)$, \emph{i.e.} $b$ has no pole at $x$. Let $\op^{\rm reg}_{{{}^L\!\g}}(\mathbb{P}^1)_x$ denote the set of all such ${{}^L\!\g}$-connections. It is stabilised by the action of the subgroup ${}^L \! \hat N_+(\mathcal{M}^{\rm reg}_x) \subset {}^L \! \hat N_+(\mathcal{M})$ on $\op_{{{}^L\!\g}}(\mathbb{P}^1)$ by gauge transformations. In particular, we can define the quotient space
\begin{equation*}
\Op^{\rm reg}_{{{}^L\!\g}}(\mathbb{P}^1)_x \coloneqq \op^{\rm reg}_{{{}^L\!\g}}(\mathbb{P}^1)_x \big/ {}^L \! \hat N_+(\mathcal{M}^{\rm reg}_x).
\end{equation*}
If $x$ is not a pole of the twist function $\varphi$ we may similarly define the space $\op^{\rm reg}_{{{}^L\!\g}}(\mathbb{P}^1)^\varphi_x$ of ${{}^L\!\g}$-connections of the form $\nabla = d + p_{-1} dz - \Lambda \varphi dz + b' dz$ where $\Lambda$ is a normalised derivation element of ${}^L\mathfrak{h}$ and $b' \in {}^L \hat\b'_+(\mathcal{M}^{\rm reg}_x)$. We then also define
\begin{equation*}
\Op^{\rm reg}_{{{}^L\!\g}}(\mathbb{P}^1)^\varphi_x \coloneqq \op^{\rm reg}_{{{}^L\!\g}}(\mathbb{P}^1)^\varphi_x \big/ {}^L \! \hat N_+(\mathcal{M}^{\rm reg}_x).
\end{equation*}
\begin{lemma}
For each $x\in \mathbb{C}$ there is a canonical injection
\begin{equation} \label{Op reg to Op}
\Op^{\rm reg}_{{{}^L\!\g}}(\mathbb{P}^1)_x \lhook\joinrel\relbar\joinrel\rightarrow \Op_{{{}^L\!\g}}(\mathbb{P}^1).
\end{equation}
When $x$ is not a pole of $\varphi$ there is a canonical injection $\Op^{\rm reg}_{{{}^L\!\g}}(\mathbb{P}^1)^{\varphi}_x \hookrightarrow \Op_{{{}^L\!\g}}(\mathbb{P}^1)^{\varphi}$.
\begin{proof}
Since ${}^L \! \hat N_+(\mathcal{M}^{\rm reg}_x) \subset {}^L \! \hat N_+(\mathcal{M})$ we certainly have a well-defined canonical map $\Op^{\rm reg}_{{{}^L\!\g}}(\mathbb{P}^1)_x \to \Op_{{{}^L\!\g}}(\mathbb{P}^1)$.
Suppose that two ${{}^L\!\g}$-connections $\nabla, \nabla' \in \op^{\rm reg}_{{{}^L\!\g}}(\mathbb{P}^1)_x$, regular at $x$, define the same class $[\nabla] = [\nabla']$ in $\Op_{{{}^L\!\g}}(\mathbb{P}^1)$. We must show that they also define the same class in $\Op^{\rm reg}_{{{}^L\!\g}}(\mathbb{P}^1)_x$.
Applying the procedure in the first half of the proof of Theorem \ref{thm: quasi-canonical form} to both of the ${{}^L\!\g}$-connections $\nabla, \nabla' \in \op^{\rm reg}_{{{}^L\!\g}}(\mathbb{P}^1)_x$, with $\mathcal{M}$ there replaced by $\mathcal{M}^{\rm reg}_x$, we find that they can each be brought to a quasi-canonical form which is regular at $x$ using a gauge transformation in ${}^L \! \hat N_+(\mathcal{M}^{\rm reg}_x)$. On the other hand, by the argument in the second half of the proof of Theorem \ref{thm: quasi-canonical form} with $\mathcal{M}$ there replaced by $\mathcal{M}^{\rm reg}_x$, we also deduce that these two quasi-canonical forms are related by a gauge transformation in $\exp(\hat \mathfrak{a}_{\geq 2}(\mathcal{M}^{\rm reg}_x))$. It now follows that $\nabla$ and $\nabla'$ define the same class in $\Op^{\rm reg}_{{{}^L\!\g}}(\mathbb{P}^1)_x$.
\end{proof}
\end{lemma}
We will identify $\Op^{\rm reg}_{{{}^L\!\g}}(\mathbb{P}^1)_x$ with its image in $\Op_{{{}^L\!\g}}(\mathbb{P}^1)$ under the injection \eqref{Op reg to Op}. We then say that an ${{}^L\!\g}$-oper $[\nabla] \in \Op_{{{}^L\!\g}}(\mathbb{P}^1)$ is \emph{regular} at $x \in \mathbb{P}^1$ if it lies in $\Op^{\rm reg}_{{{}^L\!\g}}(\mathbb{P}^1)_x$. More concretely, this means that there exists a representative of the class $[\nabla]$ in $\op_{{{}^L\!\g}}^{\rm reg}(\mathbb{P}^1)_x$, \emph{i.e.} which has no pole at $x$.
Recall the set $X = \mathbb{C} \setminus \{ z_i \}_{i=1}^N$ introduced in \S\ref{sec: twisted homology coord}.
We define the space of \emph{${{}^L\!\g}$-opers regular on $X$} as
\begin{equation*}
\Op^{\rm reg}_{{{}^L\!\g}}(\mathbb{P}^1)_X \coloneqq \bigcap_{x \in X} \Op^{\rm reg}_{{{}^L\!\g}}(\mathbb{P}^1)_x \subset \Op_{{{}^L\!\g}}(\mathbb{P}^1).
\end{equation*}
Since the twist function has no poles in $X$, we may also define the space of \emph{${{}^L\!\g}$-opers with twist function $\varphi$ regular on $X$} as
\begin{equation*}
\Op^{\rm reg}_{{{}^L\!\g}}(\mathbb{P}^1)^\varphi_X \coloneqq \bigcap_{x \in X} \Op^{\rm reg}_{{{}^L\!\g}}(\mathbb{P}^1)^\varphi_x \subset \Op_{{{}^L\!\g}}(\mathbb{P}^1)^\varphi.
\end{equation*}
\subsection{Bethe equations}
\begin{proposition} \label{prop: Miura oper BAE}
Let $x \in X$ and suppose $\nabla \in \MOp_{{{}^L\!\g}}(\mathbb{P}^1)^\varphi$ has the form
\begin{equation*}
\nabla = d + \Big( p_{-1} - {h^\vee}^{-1} \varphi \, \rho + \frac{\alpha_i}{z - x} + r \Big) dz
\end{equation*}
for some simple root $\alpha_i$, $i \in I$, where $r \in {}^L\mathfrak{h}'(\mathcal{M})$ is regular at $x$. Then $[\nabla]$ is regular at $x$, \emph{i.e.} there is a representative of $[\nabla]$ which is regular at $x$, if and only if
\begin{equation} \label{affine BAE}
h^\vee \langle r(x), \check\alpha_i \rangle = \varphi(x).
\end{equation}
\begin{proof}
Suppose first that $\wt\nabla$ is a representative of $[\nabla]$ which is regular at $x$. Then the gauge transformation parameter $g \in {}^L \! \hat N_+(\mathcal{M})$ determined by following the recursive procedure of Theorem \ref{thm: quasi-canonical form} is of the form $g = \exp(-\chi \delta) \exp(\sum_{n>0} m_n)$ where $\chi \in \mathcal{M}^{\rm reg}_x$ and $m_n \in \mathfrak{c}_n(\mathcal{M}^{\rm reg}_x)$, $n > 0$ are all regular at $x$. Therefore, the quasi-canonical form $\wt\nabla^g$ of $[\nabla]$ is regular at $x$. Then, in particular, its component in $\mathfrak{a}_1$ must be regular. Yet by Proposition \ref{prop: can form u1} the latter is proportional to (note that in the notation of Proposition \ref{prop: can form u1} we have $u_0 = \frac{\alpha_i}{z - x} + r$ and $u_1 = 0$ in the present case)
\begin{equation*}
\frac{(- \alpha_i | - \alpha_i + 2 \rho)}{2(z-x)^2} dz + \frac{(\alpha_i | r(x)) - {h^\vee}^{-1} \varphi(x) (\alpha_i | \rho)}{z - x} dz + \ldots
\end{equation*}
where the dots represent terms regular at $z = x$. Recalling that $\langle \rho, \check \alpha_i \rangle = 1$ for all $i \in I$, and in view of \eqref{bilinear form def}, we see that the double pole term here vanishes and the simple pole term vanishes only if the equation \eqref{affine BAE} holds.
Conversely, suppose \eqref{affine BAE} holds. Let $g = \exp \big( \! -\frac{1}{z-x} \check e_i \big)$. For all $u \in {}^L\mathfrak{h}(\mathcal{M})$ we have
\begin{align*}
(dg) g^{-1} &= \check e_i \frac{dz}{(z-x)^2}, \qquad
g u g^{-1} = u - \frac{1}{z-x} [\check e_i, u] = u + \frac{\langle u, \check \alpha_i \rangle}{z-x} \check e_i,\\
g p_{-1} g^{-1} &= p_{-1} - \frac{1}{z-x} [\check e_i, p_{-1}] + \frac{1}{2(z-x)^2} \big[ \check e_i, [\check e_i, p_{-1}] \big] = p_{-1} - \frac{\alpha_i}{z-x} - \frac{\check e_i}{(z-x)^2}.
\end{align*}
Therefore, with $u = - {h^\vee}^{-1} \varphi\, \rho + \frac{\alpha_i}{z-x} + r$ we find
\begin{align*}
\nabla^g &= d - (dg) g^{-1} + g p_{-1} g^{-1} dz + g u g^{-1} dz\\
&= d + \Big( p_{-1} - {h^\vee}^{-1} \varphi\, \rho + r(z) + \frac{\langle r(z), \check \alpha_i \rangle - {h^\vee}^{-1} \varphi(z)}{z-x} \check e_i \Big) dz.
\end{align*}
(The coefficient of the $(z-x)^{-2}$ term is $-1-1+\langle \alpha_i,\check\alpha_i\rangle=0$.)
This is regular at $x$ by virtue of \eqref{affine BAE}, so the ${{}^L\!\g}$-oper $[\nabla] = [\nabla^g]$ is regular at $x$.
\end{proof}
\end{proposition}
\begin{remark} \label{rem: Miura oper BAE}
In the statement of Proposition \ref{prop: Miura oper BAE}, if we write $\nabla \in \op_{{{}^L\!\g}}(\mathbb{P}^1)^\varphi$ as
\begin{equation*}
\nabla = d + \Big( p_{-1} + \frac{\alpha_i}{z - x} + \wt r \Big) dz
\end{equation*}
where $\wt r \in {}^L\mathfrak{h}(\mathcal{M})$ is regular at $x$, noting that $\varphi$ is regular at $x \in X$, then the Bethe equation \eqref{affine BAE} for the regularity of $[\nabla]$ at $x$ simply reads
\begin{equation*} \langle \wt r(x), \check\alpha_i \rangle = 0.\qedhere\end{equation*}
\end{remark}
Recall the subset $X = \mathbb{C} \setminus \{ z_i \}_{i=1}^N$ of $\mathbb{P}^1$ introduced in \S\ref{sec: twisted homology coord}.
\begin{corollary} \label{cor: Bethe equations}
Let $\nabla \in \MOp_{{{}^L\!\g}}(\mathbb{P}^1)$ be of the form \eqref{u Miura op def}. We have $[\nabla] \in \Op^{\rm reg}_{{{}^L\!\g}}(\mathbb{P}^1)_X$ if and only if
\begin{equation} \label{Bethe equations}
- \sum_{i=1}^N \frac{(\lambda_i|\alpha_{c(j)})}{w_j - z_i} + \sum_{\substack{i =1\\ i \neq j}}^m \frac{(\alpha_{c(i)}|\alpha_{c(j)})}{w_j - w_i} = 0, \qquad j= 1, \ldots, m.
\end{equation}
We refer to these as the \emph{Bethe equations}.
In particular, when the Bethe equations hold then there exists a quasi-canonical representative of $[\nabla]$ in which the coefficient functions $v_i(z)$, $i\in E$, have no singularities at the Bethe roots $w_j$, $j=1,\dots,m$.
\begin{proof}
The ${{}^L\!\g}$-oper $[\nabla]$ is certainly regular away from the points $z_i$, $i =1, \ldots, N$ and $w_j$, $j=1, \ldots, m$, since the defining representative $\nabla$ in \eqref{u Miura op def} is regular there.
And by Proposition \ref{prop: Miura oper BAE}, see also Remark \ref{rem: Miura oper BAE}, the ${{}^L\!\g}$-oper $[\nabla]$ is also regular at each of the $w_j$ if and only if the $j^{\rm th}$ Bethe equation \eqref{Bethe equations} holds.
\end{proof}
\end{corollary}
Define the \emph{master function} to be
\begin{align} \label{Master function}
\Phi &\coloneqq \sum_{\substack{i, j=1\\ i < j}}^N (\lambda_i|\lambda_j) \log(z_i - z_j) - \sum_{i=1}^N \sum_{j=1}^m (\lambda_i|\alpha_{c(j)}) \log (z_i - w_j) \notag\\
&\qquad\qquad\qquad\qquad\quad + \sum_{\substack{i, j=1\\ i < j}}^m (\alpha_{c(i)}|\alpha_{c(j)}) \log (w_i - w_j).
\end{align}
It is a multivalued function on $\mathbb{C}\setminus\{z_1,\dots,z_N,w_1,\dots,w_m\}$. One sees that the Bethe equations \eqref{Bethe equations} are given by
\begin{equation*}
\frac{\partial \Phi}{\partial w_j} = 0, \qquad j = 1, \ldots, m.
\end{equation*}
Moreover it is known -- see Appendix \ref{sec: hyp arr} for a brief review -- that the eigenvalues of the quadratic Hamiltonians \eqref{quad Ham intro} are given in terms of the partial derivatives $\partial \Phi / \partial z_i$.
The following result shows that the partial derivatives of the master function can be read off from the ${{}^L\!\g}$-oper underlying the Miura ${{}^L\!\g}$-oper $\nabla$ of \eqref{u Miura op def}.
\begin{theorem} \label{thm: Miura oper p1 coeff}
Let $\nabla \in \MOp_{{{}^L\!\g}}(\mathbb{P}^1)$ be a Miura oper of the form \eqref{u Miura op def}. The coefficient of $p_1$ in any quasi-canonical form of the underlying ${{}^L\!\g}$-oper $[\nabla] \in \Op_{{{}^L\!\g}}(\mathbb{P}^1)$ is
\begin{equation*}
\frac{1}{h^\vee} \Bigg( \sum_{i=1}^N \frac{\mbox{\small $\frac{1}{2}$} (\lambda_i | \lambda_i + 2 \rho)}{(z- z_i)^2} + \sum_{i=1}^N \frac{\partial \Phi / \partial z_i}{z - z_i} + \sum_{j=1}^m \frac{\partial \Phi / \partial w_j}{z - w_j} \Bigg) dz.
\end{equation*}
\begin{proof}
Let us write the Miura ${{}^L\!\g}$-oper in \eqref{u Miura op def} as $\nabla = d + p_{-1} dz + u(z) dz$ with
\begin{equation*}
u(z) = \frac{\alpha_{c(j)}}{z - w_j} + \wt r(z), \qquad
\wt r(z) \coloneqq - \sum_{i=1}^N \frac{\lambda_i}{z - z_i} + \sum_{\substack{i=1\\ i \neq j}}^m \frac{\alpha_{c(i)}}{z - w_i}.
\end{equation*}
The result follows from a direct computation, using the expression given in Remark \ref{rem: can form u1} for the coefficient of $p_1$ in any quasi-canonical form of the ${{}^L\!\g}$-oper $[\nabla]$, with $\wt u_0 = u$ given above and $\wt u_1 = 0$. Explicitly, we find on the one hand
\begin{equation*}
\mbox{\small $\frac{1}{2}$} (u(z)|u(z)) = \frac{1}{2} \sum_{i=1}^N \frac{(\lambda_i|\lambda_i)}{(z - z_i)^2} + \frac{1}{2} \sum_{j=1}^m \frac{(\alpha_{c(j)}| \alpha_{c(j)})}{(z - w_j)^2} + \sum_{i=1}^N \frac{\partial \Phi / \partial z_i}{z - z_i} + \sum_{j=1}^m \frac{\partial \Phi / \partial w_j}{z - w_j},
\end{equation*}
where the derivatives of the master function \eqref{Master function} with respect to the variables $z_i$, $i = 1, \ldots, N$ and $w_j$, $j = 1, \ldots, m$ read
\begin{equation*}
\frac{\partial \Phi}{\partial z_i} = \sum_{\substack{j=1\\ j \neq i}}^N \frac{(\lambda_i|\lambda_j)}{z_i - z_j} - \sum_{j=1}^m \frac{(\lambda_i|\alpha_{c(j)})}{z_i - w_j}, \qquad
\frac{\partial \Phi}{\partial w_j} = - \sum_{i=1}^N \frac{(\lambda_i|\alpha_{c(j)})}{w_j - z_i} + \sum_{\substack{i =1\\ i \neq j}}^m \frac{(\alpha_{c(i)}|\alpha_{c(j)})}{w_j - w_i}.
\end{equation*}
On the other hand, we also have
\begin{equation*}
(\rho|u'(z)) = \sum_{i=1}^N \frac{(\rho|\lambda_i)}{(z - z_i)^2} - \sum_{j=1}^m \frac{(\rho| \alpha_{c(j)})}{(z - w_j)^2}.
\end{equation*}
Adding the above and using the fact that $2 (\alpha_i | \rho) = (\alpha_i | \alpha_i)$ for any simple root $\alpha_i$ (since $\langle \rho, \check\alpha_i \rangle = 1$) we obtain the result.
\end{proof}
\end{theorem}
\section{Conjectures on affine Gaudin Hamiltonians} \label{sec: main conj}
Before turning to the affine case, let us recall some features of the situation in finite types.
When $\dot{\mathfrak{g}}$ is a Kac-Moody algebra of finite type, the \emph{quantum Gaudin algebra} is a commutative subalgebra of $U(\dot{\mathfrak{g}}^{\oplus N})$ generated by the coefficients in the partial fraction decompositions of a finite collection of $U(\dot{\mathfrak{g}}^{\oplus N})$-valued meromorphic functions $S_k(z)$, indexed by the exponents $k\in\bar E$. The $S_k(z)$ have poles at the marked points $z_1,\dots,z_N$. They commute amongst themselves and with the diagonal action of $\dot{\mathfrak{g}}$. In particular, $1\in \bar E$, and the explicit form of $S_1(z)$ is
\begin{equation} \label{S1 def finite}
S_1(z) = \sum_{i=1}^N \frac{\mathcal{C}^{(i)}}{(z - z_i)^2} + \sum_{i=1}^N \frac{\mathcal{H}_i}{z - z_i},
\end{equation}
where the $\mathcal{H}_i$ are the quadratic Hamiltonians in \eqref{quad Ham intro} and where $\mathcal{C}^{(i)}$ is the copy of the Casimir element $\mathcal{C}\in U(\dot{\mathfrak{g}})^\dot{\mathfrak{g}}$ in the $i^{\rm th}$ tensor factor of $U(\dot{\mathfrak{g}}^{\oplus N})$. More generally, the pole terms of highest order in each $S_k(z)$ are $\sum_{i=1}^N \mathcal{C}_{k+1}^{(i)}\big/(z - z_i)^{k+1}$, where $\mathcal{C}_{k+1}\in U(\dot{\mathfrak{g}})^\dot{\mathfrak{g}}$ is a central element -- as indeed it must be for $S_k(z)$ to commute with the diagonal action of $\dot{\mathfrak{g}}$. Each $S_k(z)$ has degree $k+1$ as an element of $U(\dot{\mathfrak{g}}^{\oplus N})$.
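For orientation, let us recall the standard explicit form of the quadratic Hamiltonians (normalisations may differ from \eqref{quad Ham intro} by an overall factor): picking dual bases $\{ I_a \}$ and $\{ I^a \}$ of $\dot{\mathfrak{g}}$ with respect to a fixed invariant bilinear form, one has
\begin{equation*}
\mathcal{H}_i = \sum_{\substack{j=1\\ j \neq i}}^N \sum_a \frac{I_a^{(i)} I^{a\,(j)}}{z_i - z_j}, \qquad i = 1, \ldots, N,
\end{equation*}
while $\mathcal{C} = \sum_a I_a I^a$ is the quadratic Casimir, whose copies $\mathcal{C}^{(i)}$ produce the double pole terms in \eqref{S1 def finite}.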
As we sketched in the introduction, to each Miura ${{}^L\!\g}$-oper of the form \eqref{mop}, with the Bethe roots $w_i$ obeying the Bethe equations, there corresponds a joint eigenvector $\psi$ of the functions $S_k(z)$, and the joint eigenvalues are given by the coefficients $\bar v_k(z)$ of the $\bar p_k$ in the canonical form of the underlying ${{}^L\!\g}$-oper.
Now, not all these features can be precisely preserved in the affine case. Indeed, in finite types the centre $U(\dot{\mathfrak{g}})^\dot{\mathfrak{g}}$ is isomorphic (via the Harish-Chandra isomorphism) to a graded polynomial algebra $\mathbb{C}[\{\mathcal{C}_{k+1}\}_{k\in \bar E}]$ in $\rank\dot{\mathfrak{g}}$ generators of the correct degrees. But in affine types the centre is much smaller. Namely, the centre of the (completed, as in \S\ref{sec: ao} below) envelope of an affine Kac-Moody algebra is isomorphic to the graded polynomial algebra in only two generators, $\mathsf k$ and $\mathcal{C}$ (of degrees $0$ and $2$; the definition of $\mathcal{C}$ in the affine case is in \eqref{Omega Kac} below) \cite{CIcentre}. Thus, one should not expect to find meromorphic functions $S_k(z)$, indexed by the positive exponents $k\in E$, such that they commute with the diagonal action of $\dot{\mathfrak{g}}$ for each $z\in X= \mathbb{C}\setminus\{z_1,\dots,z_N\}$ \emph{and} have degrees $k+1$.\footnote{That would be impossible in any type with an even exponent, since any polynomial in $\mathsf k$ and $\mathcal{C}$ has even degree, and hence has degree $k+1$ only when $k$ is odd; in other types these considerations merely make it seem unnatural.}
This is consistent with the results in \S\ref{sec: opers} -- \S\ref{sec: Miura opers} above: we saw in Theorem \ref{thm: quasi-canonical form} that the coefficients $v_k(z)$ of the quasi-canonical form of an ${{}^L\!\g}$-oper are defined, for $k\in E_{\geq 2}$, only up to the addition of twisted derivatives. So they themselves are not good candidates for the eigenvalues of such would-be generating functions. But we also saw that there are well-defined functions on the space of opers given by integrals, as in Corollary \ref{cor: opint}. It is natural to think that these functions are the eigenvalues of higher Gaudin Hamiltonians. That in turn suggests that such Hamiltonians are \emph{themselves} given by such integrals. This is the content of Conjecture \ref{conj: higher Ham} below.
To state it, we must define an appropriate completion of $U(\dot{\mathfrak{g}}^{\oplus N})$ when $\dot{\mathfrak{g}}$ is of untwisted affine type.
\subsection{Completion of $U(\dot{\mathfrak{g}}^{\oplus N})$}
\label{sec: ao}
Let $\dot{\mathfrak{g}} = \dot{\mathfrak{g}}(A)$ be an untwisted affine Kac-Moody algebra as in \S\ref{sec: Cartan data}. Let $\dot A = (A_{ij})_{i,j=1}^\ell$ denote the Cartan matrix of finite type obtained from the Cartan matrix $A$ of affine type by removing the $0^{\rm th}$ row and column, and $\dot\dot{\mathfrak{g}}\coloneqq \dot{\mathfrak{g}}(\dot A)$ the corresponding finite-dimensional simple Lie algebra.
The Lie algebra $\dot{\mathfrak{g}}$ can be realised as the semi-direct product $\hat{\mathcal L}\dot \dot{\mathfrak{g}} \rtimes \mathbb{C} t \del_t$ of the central extension $\hat{\mathcal L}\dot \dot{\mathfrak{g}} \cong_\mathbb{C}\mathcal L \dot \dot{\mathfrak{g}} \oplus \mathbb{C} \mathsf k$
of the loop algebra $\mathcal L \dot \dot{\mathfrak{g}} \coloneqq \dot\dot{\mathfrak{g}}[t, t^{-1}]$ with derivation element $\mathsf d$ acting as the derivative $t \del_t$ in the formal loop variable $t$. In what follows we shall identify $\dot{\mathfrak{g}}$ with $\hat{\mathcal L}\dot \dot{\mathfrak{g}} \rtimes \mathbb{C} t \del_t$.
Let $\dot{\mathfrak{g}}^{\oplus N}$ denote the $N$-fold direct sum of $\dot{\mathfrak{g}}$. We denote by $X^{(i)}$ the copy of any $X \in \dot{\mathfrak{g}}$ in the $i^{\rm th}$ summand, for $i = 1, \ldots, N$.
Consider the left ideals $\mathcal I_n \coloneqq U(\dot{\mathfrak{g}}^{\oplus N}) (t^n\dot\dot{\mathfrak{g}}[t])^{\oplus N}$, for $n \in \mathbb{Z}_{\geq 0}$, of the universal enveloping algebra $U(\dot{\mathfrak{g}}^{\oplus N})$. They define a descending $\mathbb{Z}_{\geq 0}$-filtration on $U(\dot{\mathfrak{g}}^{\oplus N})$, that is to say we have $\mathcal I_0 \supset \mathcal I_1 \supset \mathcal I_2 \supset \ldots$ with $\cap_{n \geq 0} \mathcal I_n = \{ 0 \}$. Define the corresponding completion of $U(\dot{\mathfrak{g}}^{\oplus N})$ as the inverse limit
\begin{equation*}
\hat U(\dot{\mathfrak{g}}^{\oplus N}) \coloneqq \varprojlim_n U(\dot{\mathfrak{g}}^{\oplus N}) / \mathcal I_n.
\end{equation*}
By definition, an element of $\hat U(\dot{\mathfrak{g}}^{\oplus N})$ is a possibly infinite sum
\begin{equation} x = \sum_{m\geq 0} x_m\label{cex}\end{equation}
of elements in $U(\dot{\mathfrak{g}}^{\oplus N})$, with $x_m \in \mathcal I_m$ for all $m > 0$ so that only finitely many terms contribute when one works modulo any $\mathcal I_n$.
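To illustrate the definition, note for instance that a formal sum such as
\begin{equation*}
\sum_{m \geq 0} I^{(1)}_{a, -m} I^{a (1)}_m
\end{equation*}
(in the notation of \S\ref{sec: quad Ham} below, with summation over $a$ implied) defines an element of $\hat U(\dot{\mathfrak{g}}^{\oplus N})$: for $m > 0$ the $m^{\rm th}$ term lies in $\mathcal I_m$, so that modulo any given $\mathcal I_n$ only the finitely many terms with $m < n$ contribute. Infinite sums of precisely this form appear in the quadratic Casimir \eqref{Omega def} below.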
Since the $\mathcal I_n$, $n \geq 0$ are only left ideals, the quotients $U(\dot{\mathfrak{g}}^{\oplus N}) / \mathcal I_n$ are not associative algebras. However, the multiplication in $U(\dot{\mathfrak{g}}^{\oplus N})$ is continuous with respect to the linear topology whose basis of open neighbourhoods for $0$ is $\{ \mathcal I_n \}_{n \geq 0}$. So the completion $\hat U(\dot{\mathfrak{g}}^{\oplus N})$ is an associative algebra.
The tensor product $\bigotimes_{j=1}^N L_{\lambda_j}$ of irreducible $\dot{\mathfrak{g}}$-modules is \emph{smooth} as a module over $U(\dot{\mathfrak{g}}^{\oplus N})$, meaning that for every $v\in \bigotimes_{j=1}^N L_{\lambda_j}$ there exists $n\in \mathbb{Z}_{\geq 0}$ such that $\mathcal I_n v = 0$. Therefore $\bigotimes_{j=1}^N L_{\lambda_j}$ is a module over the completion $\hat U(\dot{\mathfrak{g}}^{\oplus N})$.
Let $\hat U_{\bm k}(\dot{\mathfrak{g}}^{\oplus N})$, with $\bm k \coloneqq (k_i)_{i=1}^\ell$, denote the quotient of the algebra $\hat U(\dot{\mathfrak{g}}^{\oplus N})$ by the ideal $J_{\bm k}$ generated by the elements $\mathsf k^{(i)} - k_i$, namely
\begin{equation*}
\hat U_{\bm k}(\dot{\mathfrak{g}}^{\oplus N}) \coloneqq \hat U(\dot{\mathfrak{g}}^{\oplus N}) / J_{\bm k}.
\end{equation*}
The action of $\hat U(\dot{\mathfrak{g}}^{\oplus N})$ on $\bigotimes_{j=1}^N L_{\lambda_j}$ factors through the quotient $\hat U_{\bm k}(\dot{\mathfrak{g}}^{\oplus N})$.
In particular, if we define
\begin{equation} \label{k of z}
\mathsf k(z) \coloneqq \sum_{i=1}^N \frac{\mathsf k^{(i)}}{z - z_i}
\end{equation}
then the image of $\mathsf k(z)$ in $\hat U_{\bm k}(\dot{\mathfrak{g}}^{\oplus N})$ is the twist function $\varphi(z)$ as in \eqref{twist function}, cf. \eqref{hw lambda i}.
We have the usual ascending filtration $\mathbb{C} 1 =\mathcal{F}_0 \subset \mathcal{F}_1 \subset \mathcal{F}_2\subset \dots$ of the universal enveloping algebra $U(\dot{\mathfrak{g}}^{\oplus N})$ of the Lie algebra $\dot{\mathfrak{g}}^{\oplus N}$. Every $x\in U(\dot{\mathfrak{g}}^{\oplus N})$ belongs to some filtered subspace; the \emph{degree} of $x$ is by definition the smallest $k\in \mathbb{Z}_{\geq 0}$ such that $x\in \mathcal{F}_k$.
Let us say that an element $x\in \hat U(\dot{\mathfrak{g}}^{\oplus N})$ has \emph{(finite) degree} $k\in \mathbb{Z}_{\geq 0}$ if, when $x$ is written as a sum as in \eqref{cex}, the degrees of the $x_m$ are bounded above and $k$ is their maximum.
\subsection{Conjectures}\label{sec: conjectures}
\begin{conjecture} \label{conj: higher Ham}
There exist nonzero $\hat U(\dot{\mathfrak{g}}'^{\oplus N})$-valued meromorphic functions $\mathcal S_i(z)$, $i \in E$, on $\mathbb{P}^1$ with the following properties:
\begin{enumerate}[(i)]
\item For each $i\in E$, $\mathcal S_i(z)$ has degree $i+1$.
\item For any $i, j \in E$ we have
\begin{align*}
[\mathcal S_i(z), \mathcal S_j(w)] &= \left(h^\vee \partial_z - i \mathsf k(z)\right) \mathcal A_{ij}(z, w) + \left(h^\vee\partial_w - j \mathsf k(w) \right) \mathcal B_{ij}(z, w),
\end{align*}
for some $\hat U(\dot{\mathfrak{g}}'^{\oplus N})$-valued meromorphic functions $\mathcal A_{ij}(z,w), \mathcal B_{ij}(z,w)$ on $\mathbb{P}^1 \times \mathbb{P}^1$.
\item For each $i \in E$ and each $j = 1, \ldots, N$ we have
\begin{equation*}
\left[\mathcal H_j, \mathcal S_i(z)\right] = \left(h^\vee \partial_z - i \mathsf k(z) \right)\mathcal D^j_i(z),
\end{equation*}
for some $\hat U(\dot{\mathfrak{g}}'^{\oplus N})$-valued meromorphic function $\mathcal D^j_i(z)$ on $\mathbb{P}^1$.
\item For each $i \in E$ and any $x \in \dot{\mathfrak{g}}$ we have, writing $\Delta x\coloneqq\sum_{j=1}^N x^{(j)}$,
\begin{equation*}
\left[\Delta x, \mathcal S_i(z)\right] = \left(h^\vee \partial_z - i \mathsf k(z) \right)\mathcal C^x_i(z),
\end{equation*}
for some $\hat U(\dot{\mathfrak{g}}'^{\oplus N})$-valued meromorphic function $\mathcal C^x_i(z)$ on $\mathbb{P}^1$. \qed
\end{enumerate}
\end{conjecture}
Suppose such functions $\mathcal S_i(z)$ do exist.
For any contour $\gamma$ as in Corollary \ref{cor: opint} and any $i \in E$, denote by $\hat Q^\gamma_i$ the image of
\begin{equation} \label{integrated operators} \int_\gamma \mathcal{P}(z)^{-i / h^\vee} \mathcal S_i(z) dz
\end{equation}
in the quotient $\hat U_{\bm k}(\dot{\mathfrak{g}}^{\oplus N})$. Then these $\hat Q^\gamma_i$ are commuting Hamiltonians, as follows.
\begin{corollary}
Given Conjecture \ref{conj: higher Ham}, one has
\begin{equation} [\hat Q^\gamma_i, \hat Q^{\eta}_j] = 0\nonumber\end{equation}
for any $i, j \in E$ and any pair of contours $\gamma, \eta$.
Moreover, each $\hat Q^\gamma_i$
commutes with the diagonal action of $\dot{\mathfrak{g}}$ and with the quadratic Hamiltonians $\mathcal H_j$, $j =1, \ldots, N$. \qed
\end{corollary}
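Let us sketch why the properties of Conjecture \ref{conj: higher Ham} imply this corollary, assuming $\varphi(z) = \mathcal{P}'(z)/\mathcal{P}(z)$, cf. Corollary \ref{cor: opint}. In the quotient $\hat U_{\bm k}(\dot{\mathfrak{g}}^{\oplus N})$ the image of $\mathsf k(z)$ is $\varphi(z)$, and for any meromorphic function $g(z)$ valued in $\hat U_{\bm k}(\dot{\mathfrak{g}}^{\oplus N})$ one has
\begin{equation*}
\mathcal{P}(z)^{-i/h^\vee} \big( h^\vee \partial_z - i \varphi(z) \big) g(z) = h^\vee \, \partial_z \Big( \mathcal{P}(z)^{-i/h^\vee} g(z) \Big).
\end{equation*}
Multiplying the relation in part (ii) by $\mathcal{P}(z)^{-i/h^\vee} \mathcal{P}(w)^{-j/h^\vee}$ and integrating over $\gamma \times \eta$, the integrand becomes a total derivative in $z$ plus a total derivative in $w$, each of which integrates to zero since $\mathcal{P}(z)^{-i/h^\vee}$ returns to its initial value along such contours. Parts (iii) and (iv) yield the commutativity with the $\mathcal H_j$ and with the diagonal action of $\dot{\mathfrak{g}}$ in the same way.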
\begin{conjecture} \label{conj: e-val op}
Let $\psi \in \big( \! \bigotimes_{j=1}^N L_{\lambda_j} \big)_{\lambda_\infty}$ be the Schechtman-Varchenko vector associated with the Miura ${{}^L\!\g}$-oper $\nabla \in \MOp_{{{}^L\!\g}}(\mathbb{P}^1)$ in \eqref{u Miura op def}. For every $j \in E$, let $v_j(z)$ be the coefficient of $p_j$ in any quasi-canonical form of the ${{}^L\!\g}$-oper $[\nabla]$.
If the Bethe roots $w_j$, $j = 1, \ldots, m$, satisfy the Bethe equations \eqref{Bethe equations} then
\begin{equation*}
\hat Q^\gamma_i \psi = \int_\gamma \mathcal{P}(z)^{-i / h^\vee} v_i(z) dz\; \psi
\end{equation*}
for every $i \in E$ and any choice of contour $\gamma$ as in Corollary \ref{cor: opint}. \qed
\end{conjecture}
The remainder of this section is devoted to showing that these conjectures are consistent with what is known about the quadratic Hamiltonians for affine Gaudin models.
In a separate paper \cite{LVY2} we explicitly construct $\mathcal S_2(z)$ in the case $\dot{\mathfrak{g}}' = \widehat{\mathfrak{sl}}_M$, $M\geq 3$ and show that the statements of Conjecture \ref{conj: higher Ham} hold for $i=1,2$. In these cases we also verify Conjecture \ref{conj: e-val op} for $m=0, 1$.
\subsection{Quadratic affine Gaudin Hamiltonians} \label{sec: quad Ham}
Recall $\dot{\mathfrak{g}} \cong \hat{\mathcal L}\dot \dot{\mathfrak{g}} \rtimes \mathbb{C} t \del_t$. Fix a basis $I^a$, for $a = 1, \ldots, \dim\dot\dot{\mathfrak{g}}$, of $\dot\dot{\mathfrak{g}}$. Recall the non-degenerate bilinear form $(\cdot|\cdot): \dot{\mathfrak{g}} \times \dot{\mathfrak{g}} \to \mathbb{C}$ on $\dot{\mathfrak{g}}$ from \S\ref{sec: Cartan data}. It restricts to a non-degenerate bilinear form $\dot \dot{\mathfrak{g}} \times \dot\dot{\mathfrak{g}} \to \mathbb{C}$ on $\dot\dot{\mathfrak{g}}$. Let $I_a$ be the dual basis of $\dot\dot{\mathfrak{g}}$ with respect to this restriction.
A basis of $\dot{\mathfrak{g}}$ is then given by $I^a_n \coloneqq I^a \otimes t^n$, for $a = 1, \ldots, \dim \dot\dot{\mathfrak{g}}$ and $n \in \mathbb{Z}$, together with $\mathsf k$ and $\mathsf d$. The corresponding dual basis of $\dot{\mathfrak{g}}$, with respect to $(\cdot|\cdot)$, is given by $I_{a, -n} \coloneqq I_a \otimes t^{-n}$, for $a = 1, \ldots, \dim \dot\dot{\mathfrak{g}}$ and $n \in \mathbb{Z}$, together with $\mathsf d$ and $\mathsf k$.
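Concretely, this duality can be checked against the standard invariant form on an untwisted affine algebra (cf. \cite[Chapter 7]{KacBook}), for which
\begin{equation*}
(x \otimes t^m | y \otimes t^n) = (x | y)\, \delta_{m+n, 0}, \qquad (\mathsf k | \mathsf d) = 1, \qquad (\mathsf k | \mathsf k) = (\mathsf d | \mathsf d) = 0,
\end{equation*}
with $\mathsf k$ and $\mathsf d$ orthogonal to the loop algebra. These relations give $(I^a_m | I_{b, -n}) = \delta^a_b\, \delta_{mn}$, while $\mathsf k$ pairs non-trivially only with $\mathsf d$ and vice versa, as claimed.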
In terms of these bases, the quadratic Gaudin Hamiltonians \eqref{quad Ham intro} in this untwisted affine case then take the form
\begin{equation} \label{Gaudin Ham def}
\mathcal H_i = \sum_{\substack{j=1\\ j \neq i}}^N \frac{\mathsf k^{(i)} \mathsf d^{(j)} + \mathsf d^{(i)} \mathsf k^{(j)} + \sum_{n \in \mathbb{Z}} I^{(i)}_{a, -n} I^{a(j)}_n}{z_i - z_j}\in \hat U(\dot{\mathfrak{g}}^{\oplus N}), \qquad i = 1, \ldots, N.
\end{equation}
(Here and below we employ summation convention: $I_a I^a \coloneqq \sum_{a=1}^{\dim \dot \dot{\mathfrak{g}}} I_a I^a$.)
For every $i = 1, \ldots, N$, the completed enveloping algebra $\hat U(\dot{\mathfrak{g}}^{\oplus N})$ also contains the $i^{\rm th}$ copy of the \emph{quadratic Casimir} of $\dot{\mathfrak{g}}$, namely the element of $\hat U(\dot{\mathfrak{g}}^{\oplus N})$ defined as
\begin{equation} \label{Omega def}
\mathcal{C}^{(i)} \coloneqq (\mathsf k^{(i)} + h^\vee) \mathsf d^{(i)} + \mbox{\small $\frac{1}{2}$} I^{(i)}_{a, 0} I^{a(i)}_0 + \sum_{n > 0} I^{(i)}_{a, -n} I^{a(i)}_n.
\end{equation}
\begin{proposition} \label{prop: Casimir hom grad}
Each $\mathcal{C}^{(i)}$, $i=1, \ldots, N$ is central in $\hat U(\dot{\mathfrak{g}}^{\oplus N})$.
Its action on the tensor product $\bigotimes_{j=1}^N L_{\lambda_j}$ of irreducible highest weight $\dot{\mathfrak{g}}$-modules, for any $\lambda_1, \ldots, \lambda_N \in \mathfrak{h}^\ast$, is given by multiplication by $\mbox{\small $\frac{1}{2}$} (\lambda_i | \lambda_i + 2 \rho)$.
\begin{proof}
It suffices to consider the case $N=1$, for which we can drop all superscripts labelling the copy of $\dot{\mathfrak{g}}$ in the direct sum $\dot{\mathfrak{g}}^{\oplus N}$. For the first statement we will simply show that $\mathcal{C}$ as defined in \eqref{Omega def} coincides with the quadratic Casimir for a general Kac-Moody algebra \cite[\S 2.5]{KacBook} in the affine case. The second part of the statement will then follow from \cite[Corollary 2.6]{KacBook}.
We have the Cartan subalgebra $\dot\mathfrak{h}= \mathfrak{h} \cap \dot\dot{\mathfrak{g}}$ of $\dot\dot{\mathfrak{g}}$ and the root space decomposition
\begin{equation*}
\dot\dot{\mathfrak{g}} = \dot\mathfrak{h} \oplus \bigoplus_{\alpha \in \dot\Delta} \dot\dot{\mathfrak{g}}_\alpha,
\end{equation*}
where $\dot\Delta$ denotes the root system of $\dot\dot{\mathfrak{g}}$. Let $\dot\Delta_+ \subset \dot\Delta$ denote the subset of positive roots. The corresponding root space decomposition of the untwisted affine Kac-Moody algebra $\dot{\mathfrak{g}}$ reads
\begin{equation*}
\dot{\mathfrak{g}} = \bigoplus_{\wt\alpha \in \Delta} \dot{\mathfrak{g}}_{\wt\alpha},
\end{equation*}
where $\Delta \coloneqq \{ \alpha + n \delta \, |\, \alpha \in \dot\Delta, n \in \mathbb{Z} \}$ is the root system of $\dot{\mathfrak{g}}$. We denote the subset of positive roots by $\Delta_+ \coloneqq \{ \alpha + n \delta \, |\, \alpha \in \dot\Delta, n > 0 \} \cup \dot\Delta_+$. Explicitly, $\dot{\mathfrak{g}}_{\pm \alpha+n\delta} = \dot\dot{\mathfrak{g}}_{\pm \alpha} \otimes t^n$ for every $\alpha \in \dot\Delta_+$ and $n \in \mathbb{Z}$, while $\dot{\mathfrak{g}}_{n\delta} = \dot\mathfrak{h} \otimes t^n$ for all $n \in \mathbb{Z} \setminus \{ 0 \}$ and $\dot{\mathfrak{g}}_0 = \mathfrak{h}$.
We fix a basis $e^s_{\wt\alpha}$, $s = 1, \ldots, \dim \dot{\mathfrak{g}}_{\wt\alpha}$ of the root space $\dot{\mathfrak{g}}_{\wt\alpha}$ for each $\wt\alpha \in \Delta_+$ and denote by $e^s_{-\wt\alpha}$, $s = 1, \ldots, \dim \dot{\mathfrak{g}}_{\wt\alpha}$ its dual basis in $\dot{\mathfrak{g}}_{-\wt\alpha}$. Also fix a basis $\{ u_i \}_{i=1}^{\dim \mathfrak{h}} = \{ h_i \}_{i=1}^\ell \cup \{ \mathsf k, \mathsf d \}$ of $\mathfrak{h}$, where $\{ h_i \}_{i=1}^\ell$ is a basis of $\dot\mathfrak{h}$, and let $\{ u^i \}_{i=1}^{\dim \mathfrak{h}} = \{ h^i \}_{i=1}^\ell \cup \{ \mathsf d, \mathsf k \}$ be its dual basis, where $\{ h^i \}_{i=1}^\ell$ is the basis of $\dot\mathfrak{h}$ dual to $\{ h_i \}_{i=1}^\ell$.
We may now rewrite the expression \eqref{Omega def} for the quadratic Casimir using the above dual bases of $\dot{\mathfrak{g}}$. For the second term on the right hand side of \eqref{Omega def} we have
\begin{equation*}
I_{a, 0} I^a_0 = \sum_{i=1}^\ell h_i h^i + \sum_{\alpha \in \dot\Delta_+} \big( e_\alpha e_{-\alpha} + e_{-\alpha} e_\alpha \big) = 2 \nu^{-1}(\dot\rho) + \sum_{i=1}^\ell h_i h^i + 2 \sum_{\alpha \in \dot\Delta_+} e_{-\alpha} e_\alpha,
\end{equation*}
where we dropped the superscript `$s$' on the basis elements $e^s_{\pm \alpha}$ for $\alpha \in \dot\Delta_+$ since in this case $\dim \dot{\mathfrak{g}}_{\pm \alpha} = 1$. In the second equality we used the relation $[e_\alpha, e_{-\alpha}] = \nu^{-1}(\alpha)$ and set $\dot\rho \coloneqq \mbox{\small $\frac{1}{2}$} \sum_{\alpha \in \dot\Delta_+} \alpha$.
On the other hand, the infinite sum over $n>0$ in \eqref{Omega def} can be written as
\begin{equation*}
\sum_{n > 0} I_{a, -n} I^a_n = \sum_{\wt\alpha \in \Delta_+ \setminus \dot\Delta_+} \sum_{s = 1}^{\dim \dot{\mathfrak{g}}_{\wt\alpha}} e^s_{-\wt\alpha} e^s_{\wt\alpha}.
\end{equation*}
Recall the set of fundamental coweights $\{ \check\Lambda_i \}_{i=0}^\ell$ of $\dot{\mathfrak{g}}$ defined by \eqref{def: co Lambda}. The set of fundamental coweights $\{ \check\omega_i \}_{i=1}^\ell$ of $\dot\dot{\mathfrak{g}}$ can be identified with $\check\omega_i = \check\Lambda_i - a_i \check\Lambda_0$ for each $i = 1, \ldots,\ell$. If we set $\epsilon_i \coloneqq a_i \check a_i^{-1}$ for $i = 0, \ldots, \ell$, then
\begin{equation*}
\nu^{-1}(\dot\rho) = \sum_{i=1}^\ell \epsilon_i^{-1} \check\omega_i = \sum_{i=0}^\ell \epsilon_i^{-1} (\check\Lambda_i - a_i \check\Lambda_0) = \nu^{-1}(\rho) - h^\vee \mathsf d
\end{equation*}
where in the second step we used the assumption that $a_0 = 1$, cf. Remark \ref{rem: not A2l}.
Therefore, combining all the above we can rewrite the quadratic Casimir \eqref{Omega def} as
\begin{equation} \label{Omega Kac}
\mathcal{C} = \nu^{-1}(\rho) + \mbox{\small $\frac{1}{2}$} \sum_{i=1}^{\dim \mathfrak{h}} u_i u^i + \sum_{\wt\alpha \in \Delta_+} \sum_{s = 1}^{\dim \dot{\mathfrak{g}}_{\wt\alpha}} e^s_{-\wt\alpha} e^s_{\wt\alpha},
\end{equation}
which coincides with its expression given in \cite[\S 2.5]{KacBook}, as required.
\end{proof}
\end{proposition}
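As a quick illustration of the eigenvalue, consider a highest weight vector $v_\lambda \in L_\lambda$ (say with $N = 1$). In the form \eqref{Omega Kac} the last sum annihilates $v_\lambda$, since each $e^s_{\wt\alpha}$ with $\wt\alpha \in \Delta_+$ raises the weight, while on the Cartan part
\begin{equation*}
\nu^{-1}(\rho)\, v_\lambda = (\rho | \lambda)\, v_\lambda, \qquad \mbox{\small $\frac{1}{2}$} \sum_{i=1}^{\dim \mathfrak{h}} u_i u^i\, v_\lambda = \mbox{\small $\frac{1}{2}$} (\lambda | \lambda)\, v_\lambda.
\end{equation*}
Hence $\mathcal{C}\, v_\lambda = \mbox{\small $\frac{1}{2}$} (\lambda | \lambda + 2\rho)\, v_\lambda$, and since $\mathcal{C}$ is central the same scalar is obtained on all of $L_\lambda$.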
By direct analogy with the finite-dimensional case, cf. \eqref{S1 def finite}, it is natural to introduce the $\hat U(\dot{\mathfrak{g}}^{\oplus N})$-valued meromorphic function
\begin{equation} \label{S1 def affine}
S_1(z) \coloneqq \sum_{i=1}^N \frac{\mathcal{C}^{(i)}}{(z - z_i)^2} + \sum_{i=1}^N \frac{\mathcal{H}_i}{z - z_i}.
\end{equation}
We then have the following direct generalisation of the finite-dimensional case.
\begin{theorem}
Let $\nabla \in \MOp_{{{}^L\!\g}}(\mathbb{P}^1)$ be of the form \eqref{u Miura op def}. If the set of Bethe roots $w_j$, $j = 1, \ldots, m$ satisfy the Bethe equations \eqref{Bethe equations}, then the eigenvalue of $S_1(z)$ on the subspace $\big( \! \bigotimes_{j=1}^N L_{\lambda_j} \big)_{\lambda_\infty}$ of weight $\lambda_\infty = \sum_{i=1}^N \lambda_i - \sum_{j=1}^m \alpha_{c(j)} \in \mathfrak{h}^\ast$, is given by $h^\vee$ times the coefficient of $p_1$ in any quasi-canonical form of the underlying ${{}^L\!\g}$-oper $[\nabla]$.
\begin{proof}
This follows from Theorem \ref{thm: Miura oper p1 coeff} together with Proposition \ref{prop: Casimir hom grad} and \eqref{evalue eq}.
\end{proof}
\end{theorem}
The expression \eqref{S1 def affine} can alternatively be described as follows.
For any $x \in \dot{\mathfrak{g}}$ we define the $\dot{\mathfrak{g}}^{\oplus N}$-valued meromorphic function, cf. \eqref{k of z},
\begin{equation*}
x(z) \coloneqq \sum_{i=1}^N \frac{x^{(i)}}{z - z_i}.
\end{equation*}
We then introduce the \emph{formal Lax matrix} of the Gaudin model associated with $\dot{\mathfrak{g}}$ as the element
\begin{equation} \label{formal Lax}
\mathcal L(z) \coloneqq \mathsf k \otimes \mathsf d(z) + \mathsf d \otimes \mathsf k(z) + \sum_{n \in \mathbb{Z}} I_{a, -n} \otimes I^a_n(z)
\end{equation}
of the completed tensor product $\dot{\mathfrak{g}} \,\hat\otimes\, \dot{\mathfrak{g}}^{\oplus N}$. Then the generating function \eqref{S1 def affine} for the quadratic affine Gaudin Hamiltonians can be rewritten as
\begin{equation} \label{S1 alernative}
S_1(z) = \mbox{\small $\frac{1}{2}$} \nord{\!\big( \mathcal L(z) \big| \mathcal L(z) \big)\!} - \; h^\vee \mathsf d'(z),
\end{equation}
where $\nord{\cdot}$ denotes normal ordering by mode numbers, \emph{i.e.} $\nord{I^a_m I^b_n}$ is $I_n^b I^a_m$ if $m\geq 0$ and $I^a_mI^b_n$ otherwise.
\begin{remark} This expression can be regarded as a quantisation of the generating function for the quadratic Hamiltonians of a \emph{classical} affine Gaudin model \cite{V17}. Indeed, the latter is a meromorphic function valued in a completion of the symmetric algebra $S(\dot{\mathfrak{g}}^{\oplus N})$, given explicitly by $\mbox{\small $\frac{1}{2}$} (\mathcal L(z) | \mathcal L(z))$. In fact, the above expression for $S_1(z)$ can be heuristically obtained from $\mbox{\small $\frac{1}{2}$} (\mathcal L(z) | \mathcal L(z))$ by using the commutation relations of $\dot{\mathfrak{g}}$ to rewrite each term so that all raising operators, \emph{i.e.} $I^a_n$ with $n > 0$, appear on the right. This procedure results in a meaningless infinite sum, but the term $- h^\vee \mathsf d'(z)$ can be thought of as its regularisation; see \cite[\S 2.11]{KacBook} for a similar motivation of the linear term $\nu^{-1}(\rho)$ in the expression \eqref{Omega Kac} for the quadratic Casimir of $\dot{\mathfrak{g}}$ in the proof of Proposition \ref{prop: Casimir hom grad}.
\end{remark}
Now we explain how the generating function $S_1(z)$ of the quadratic Hamiltonians fits into our conjecture on the form of the higher affine Gaudin Hamiltonians. We first reinterpret it in light of Corollary \ref{cor: v1}. Define the \emph{local Lax matrix} as the part of the formal Lax matrix \eqref{formal Lax} involving only the loop generators of $\dot{\mathfrak{g}}$, namely
\begin{equation} \label{local Lax matrix}
L(z) \coloneqq \sum_{n \in \mathbb{Z}} I_{a, -n} \otimes I^a_n(z).
\end{equation}
The expression \eqref{S1 alernative} for $S_1(z)$ can now be rewritten as follows
\begin{equation} \label{S1 with twisted der d}
S_1(z) = \mathcal S_1(z) - \, \big(h^\vee \partial_z - \mathsf k(z) \big) \mathsf d(z),
\end{equation}
where we defined
\begin{equation*}
\mathcal S_1(z) \coloneqq \mbox{\small $\frac{1}{2}$} \nord{\!\big( L(z) \big| L(z) \big)\!}.
\end{equation*}
Let $\psi \in \big( \! \bigotimes_{j=1}^N L_{\lambda_j} \big)_{\lambda_\infty}$ denote the Schechtman-Varchenko vector corresponding to the Miura ${{}^L\!\g}$-oper $\nabla \in \MOp_{{{}^L\!\g}}(\mathbb{P}^1)$ in \eqref{u Miura op def}. Recall the expression \eqref{hw lambda i} for the weights $\lambda_i \in \mathfrak{h}^\ast$, $i = 1, \ldots, N$. On $\bigotimes_{j=1}^N L_{\lambda_j}$, $\mathsf k(z)$ acts as $\varphi(z)$ as we noted above, and the action of $\mathsf d(z)$ on $\psi$ is given by
\begin{align*}
\mathsf d(z) \psi &= \sum_{i=1}^N \frac{\Delta_i - m_0}{z - z_i} \psi \eqqcolon \Delta(z) \psi,
\end{align*}
where $m_0$ is the number of Bethe roots associated to the affine simple root $\alpha_0$.
In other words, the action of \eqref{S1 with twisted der d} on the Schechtman-Varchenko vector reads
\begin{equation} \label{S1 on psi}
S_1(z) \psi = \mathcal S_1(z) \psi - h^\vee \bigg( \Delta'(z) - \frac{\varphi(z)}{h^\vee} \Delta(z) \bigg) \psi.
\end{equation}
Observe that the final term is a twisted derivative of degree 1.
Now recall from Corollary \ref{cor: v1} that if instead of working with ${{}^L\!\g}$-opers we were to consider ${{}^L\!\g}/\mathbb{C} \delta$-opers, then the coefficients of all the $p_i$'s, $i \in E$ in a quasi-canonical form would be on an equal footing since the coefficient $v_1(z)$ of $p_1$ would also only be defined up to a twisted derivative
\begin{equation*}
v_1 \longmapsto v_1 - f'_1 + \frac{\varphi}{h^\vee} f_1,
\end{equation*}
with $f_1 \in \mathcal{M}$. In particular, only its integral
\begin{equation*}
\int_\gamma \mathcal{P}(z)^{- 1 /h^\vee} v_1(z) dz
\end{equation*}
over a cycle $\gamma$ as in Corollary \ref{cor: opint} would provide a well-defined function on the space of ${{}^L\!\g}/\mathbb{C} \delta$-opers.
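Concretely, the reason only the integral of $v_1$ is well defined is that twisted derivatives integrate to zero: writing $\varphi = \mathcal{P}'/\mathcal{P}$, cf. Corollary \ref{cor: opint}, one has
\begin{equation*}
\mathcal{P}(z)^{-1/h^\vee} \bigg( \! - f_1'(z) + \frac{\varphi(z)}{h^\vee} f_1(z) \bigg) = -\, \partial_z \Big( \mathcal{P}(z)^{-1/h^\vee} f_1(z) \Big),
\end{equation*}
so that the ambiguity in $v_1$ contributes the integral of an exact differential, which vanishes along $\gamma$ because $\mathcal{P}(z)^{-1/h^\vee} f_1(z)$ returns to its initial value along such a contour.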
In exactly the same way as we conjecture the spectrum of an affine Gaudin model associated with $\dot{\mathfrak{g}}$ to be described by ${{}^L\!\g}$-opers, the space of ${{}^L\!\g}/\mathbb{C} \delta$-opers should describe the spectrum of an affine Gaudin model associated with the derived algebra $\dot{\mathfrak{g}}'$. Indeed, since the weights appearing as the residues in a Miura ${{}^L\!\g}/\mathbb{C} \delta$-oper are classes in ${}^L\mathfrak{h} / \mathbb{C}\delta = \mathfrak{h}^\ast/\mathbb{C} \delta$, \emph{i.e.} weights in $\mathfrak{h}^\ast$ defined up to an arbitrary multiple of $\delta$, we should not include the generator $\mathsf d$ on the Gaudin model side since it pairs non-trivially with the weight $\delta$. And one way to disregard the generator $\mathsf d$ from the expression \eqref{S1 with twisted der d} for $S_1(z)$ is to consider its integral
\begin{equation*}
\int_\gamma \mathcal{P}(z)^{- 1/ h^\vee} S_1(z) dz
\end{equation*}
over a contour $\gamma$ as in Corollary \ref{cor: opint}. Indeed, it follows from \eqref{S1 on psi} that the action of this operator on the Schechtman-Varchenko vector $\psi$ coincides with the action of the operator
\begin{equation*}
\int_\gamma \mathcal{P}(z)^{- 1/ h^\vee} \mathcal S_1(z) dz.
\end{equation*}
The following Lemma shows that Conjecture \ref{conj: higher Ham} holds at least for $i=1$.
\begin{lemma}
For any distinct $z, w \in X$ we have
\begin{equation*}
\big[ \mathcal S_1(z), \mathcal S_1(w) \big] = \big( h^\vee \partial_z - \mathsf k(z) \big) \mathcal A(z, w) - \big( h^\vee\partial_w - \mathsf k(w) \big) \mathcal A(z, w),
\end{equation*}
where $\mathcal A(z, w)$ is the $\hat U(\dot{\mathfrak{g}}'^{\oplus N})$-valued meromorphic function on $\mathbb{P}^1 \times \mathbb{P}^1$ given by
\begin{equation*}
\mathcal A(z, w) \coloneqq \frac{1}{z - w} \sum_{\substack{i,j=1\\ i \neq j}}^N \frac{\sum_{n \in \mathbb{Z}} n I^{(i)}_{a, -n} I^{a (j)}_n}{(z - z_i)(w - z_j)}.
\end{equation*}
Also, for each $j=1,\dots,N$,
\begin{equation*}
\big[ \mathcal H_j , \mathcal S_1(w) \big] = \big( h^\vee \del_w - \mathsf k(w)\big) \sum_{\substack{i=1\\ i \neq j}}^N \frac{\sum_{n \in \mathbb{Z}} n I^{(j)}_{a, -n} I^{a (i)}_n}{(w - z_i)(w-z_j)}.
\end{equation*}
Moreover, for any $x \in \dot{\mathfrak{g}}$ we have
$[ \Delta x, \mathcal S_1(z) ] = \big( h^\vee \partial_z - \mathsf k(z) \big) [x, \mathsf d](z)$.
\begin{proof}
Since the quadratic Gaudin Hamiltonians $\mathcal{H}_i$, $i = 1, \ldots, N$ mutually commute and the $\mathcal{C}^{(i)}$, $i = 1, \ldots, N$ are central in $\hat U(\dot{\mathfrak{g}}^{\oplus N})$ by Proposition \ref{prop: Casimir hom grad}, it follows that $[S_1(z), S_1(w)] = 0$.
Noting that
$[ \mathsf d(z), \mathcal S_1(w) ] = \mathcal A(z, w)$
one has \begin{equation} \big[h^\vee \mathsf d'(z) - \mathsf k(z) \mathsf d(z), \mathcal S_1(w)\big] = \big( h^\vee \del_z - \mathsf k(z)\big) \mathcal A(z,w). \nonumber\end{equation}
Using the relation \eqref{S1 with twisted der d} we get the first result. It also follows that
$[S_1(z), \mathcal S_1(w)] = - (h^\vee \del_w - \mathsf k(w)) \mathcal A(z,w)$, from which, taking the residue at $z=z_j$, we obtain the commutators with $\mathcal H_j$.
The last part follows similarly from the relation $[\Delta x, S_1(z)] = 0$ which is a consequence of the fact that both $\mathcal{C}^{(i)}$ and $\mathcal H_i$, for each $i =1, \ldots, N$, commute with the diagonal action of $\dot{\mathfrak{g}}$.
\end{proof}
\end{lemma}
\section{Coordinate invariance and meromorphic ${{}^L\!\g}$-opers on curves}\label{sec: coord}
Throughout \S\ref{sec: opers}--\ref{sec: Miura opers} we fixed a global coordinate $z$ on $\mathbb{C} \subset \mathbb{P}^1$ and studied meromorphic ${{}^L\!\g}$-opers in that coordinate.
Let us now consider meromorphic ${{}^L\!\g}$-opers in local charts, and discuss their behaviour under changes in coordinate. In this section only, we shall work over an arbitrary Riemann surface $\Sigma$.
When ${{}^L\!\g}$ is of finite type, an ${{}^L\!\g}$-oper on $\Sigma$ is a triple $(\mathcal F, \mathcal B, \nabla)$ where $\mathcal F$ is a principal ${}^L \! G$-bundle, $\mathcal B$ is an ${}^L \! B$-reduction and $\nabla$ is a connection on $\mathcal F$ with certain properties; see \emph{e.g.} \cite{BD91, Fre07}. Concretely, such a triple can be constructed by gluing together trivial ${}^L \! G$-bundles over coordinate patches, each equipped with a connection given by an ${{}^L\!\g}$-oper in canonical form, using the ${}^L \! B$-valued transition functions relating canonical forms in different coordinates (see equation \eqref{transition can to can} below) \cite{Fopersontheprojectiveline, Fre07}.
The abstract definition of an ${{}^L\!\g}$-oper as a triple can be generalised to the case when ${{}^L\!\g}$ is of affine type \cite{Fopersontheprojectiveline}. However, since the quasi-canonical form is not unique in this case by Theorem \ref{thm: quasi-canonical form}, and there is no naturally preferred quasi-canonical form, it is less clear how to construct such a triple explicitly. We therefore proceed differently: we first define the space of ${{}^L\!\g}$-opers over any coordinate patch as in \S\ref{sec: opers} and then glue these together to form a sheaf, the sheaf of ${{}^L\!\g}$-opers over $\Sigma$.
\subsection{The sheaf of ${{}^L\!\g}$-opers $\Op_{{{}^L\!\g}}$}
For any open subset $U \subset \Sigma$ we let $\mathcal{K}(U)$ denote the field of meromorphic functions on $U$.
We denote by $\mathcal{K}$ the sheaf $U \mapsto \mathcal{K}(U)$ of meromorphic functions on $\Sigma$. When $\Sigma = \mathbb{P}^1$, the field $\mathcal{M}$ of meromorphic functions on $\mathbb{P}^1$, introduced in \S\ref{sec: inverse limits}, is the field of global sections of $\mathcal{K}$.
For any open subset $U \subset \Sigma$, we define the Lie algebra ${}^L \hat\mathfrak{n}_+(\mathcal{K}(U))$ and the group ${}^L \! \hat N_+(\mathcal{K}(U))$ as in \S\ref{sec: def oper}. We also set ${}^L \hat\b_+(\mathcal{K}(U)) \coloneqq {}^L\mathfrak{h}(\mathcal{K}(U)) \oplus {}^L \hat\mathfrak{n}_+(\mathcal{K}(U))$.
To begin with, let us suppose that $U\subset \Sigma$ is an open subset equipped with a holomorphic coordinate $t:U \to \mathbb{C}$. Define $\op_{{{}^L\!\g}}(U)$ to be the affine space of connections of the form
\begin{equation} \label{nf}
\nabla\coloneqq d + p_{-1}dt + b dt, \quad b\in {}^L \hat\b_+(\mathcal{K}(U)).
\end{equation}
As in \S\ref{sec: def oper}, it admits an action of the group ${}^L \! \hat N_+(\mathcal{K}(U))$ by gauge transformations, and we define the space of meromorphic ${{}^L\!\g}$-opers on $U$ to be the quotient
\begin{equation} \label{def: Uop}
\Op_{{{}^L\!\g}}(U) \coloneqq \op_{{{}^L\!\g}}(U) \big/ {}^L \! \hat N_+(\mathcal{K}(U)).
\end{equation}
The proof of the following is as for Theorem \ref{thm: quasi-canonical form}.
\begin{theorem}\label{thm: qcU}
Let $U \subset \Sigma$ be open with a holomorphic coordinate $t : U \to \mathbb{C}$. Every class $[\nabla] \in \Op_{{{}^L\!\g}}(U)$ has a representative of the form
\begin{equation*}
\nabla = d + \Bigg( p_{-1} - \frac{\varphi}{h^\vee} \rho + \sum_{i\in E} v_i p_i \Bigg) dt
\end{equation*}
where $\varphi \in \mathcal{K}(U)$ and $v_i \in \mathcal{K}(U)$ for each $i\in E$. We call such a form \emph{quasi-canonical}. It is unique up to residual gauge transformations as in Theorem \ref{thm: quasi-canonical form}.
\qed
\end{theorem}
We would like to understand the behaviour of such quasi-canonical representatives under changes in local coordinate. We will come back to this in \S\ref{sec: qc form curve} below. The first problem is to formulate the definition of $\Op_{{{}^L\!\g}}(U)$ itself in a coordinate-independent fashion.
Indeed, suppose $s:U\to \mathbb{C}$ is another holomorphic coordinate on the same open set $U \subset \Sigma$, with $t= \mu(s)$. The connection in \eqref{nf} becomes
\begin{equation} \nabla = d + p_{-1} \mu'(s) ds + b \mu'(s) ds.\label{nfp}\end{equation}
This is no longer of the form \eqref{nf} in the new coordinate $s$, and in this sense the definition of $\op_{{{}^L\!\g}}(U)$ is coordinate dependent. However, it is possible to re-express $\Op_{{{}^L\!\g}}(U)$ as the quotient of a suitably larger affine space of connections $\widetilde\op_{{{}^L\!\g}}(U)\supset \op_{{{}^L\!\g}}(U)$, which itself is coordinate \emph{independent}, by some larger group of gauge transformations ${}^L \! \hat B_+(\mathcal{K}(U)) \supset {}^L \! \hat N_+(\mathcal{K}(U))$ to be defined below.
Indeed, let $\widetilde\op_{{{}^L\!\g}}(U)$ be the affine space consisting of all connections of the form
\begin{equation} \label{tnf}
\widetilde \nabla = d + \left(\sum_{i=0}^\ell \psi_i\check f_i + b\right) dt
\end{equation}
with $\psi_i$ a nonzero element of $\mathcal{K}(U)$ for each $i\in I$, and $b\in {}^L \hat\b_+(\mathcal{K}(U))$.
Observe that the definition of $\widetilde{\op}_{{{}^L\!\g}}(U)$ is independent of the choice of coordinate. (The derivative $\mu'$ in \eqref{nfp} belongs to $\mathcal{K}(U)$ since it is holomorphic and non-vanishing on $U$.)
Now we define the group ${}^L \! \hat B_+(\mathcal{K}(U))$ and its action on $\widetilde{\op}_{{{}^L\!\g}}(U)$ by gauge transformations.
First, let $P\coloneqq \bigoplus_{i=0}^\ell \mathbb{Z} \Lambda_i\subset {}^L\mathfrak{h}$ denote the lattice of integral coweights of ${{}^L\!\g}$, where $\{\Lambda_i\}_{i=0}^\ell$ are the fundamental coweights of ${{}^L\!\g}$ defined in \eqref{def: Lambda}.
Let ${}^L \! H(\mathcal{K}(U))$ denote the abelian group generated by elements of the form $\phi^\lambda$, $\phi \in \mathcal{K}(U) \setminus \{ 0 \}$, $\lambda\in P$, subject to the relations $\phi^{\lambda} \psi^{\lambda} = (\phi\psi)^\lambda$, $\phi^{\lambda+\mu} = \phi^\lambda \phi^\mu$ for all $\phi,\psi\in \mathcal{K}(U)\setminus \{ 0 \}$ and $\lambda,\mu\in P$. (Note that this definition makes sense for \emph{any} open subset $U \subset \Sigma$, but to describe the action of the group ${}^L \! H(\mathcal{K}(U))$ on $\widetilde{\op}_{{{}^L\!\g}}(U)$ we shall only need the case when $U$ is a coordinate chart.)
For each $\check\alpha\in \check Q$ in the root lattice of ${{}^L\!\g}$, we have the (adjoint) action of the group ${}^L \! H(\mathcal{K}(U))$ on the space ${{}^L\!\g}_{\check\alpha}(\mathcal{K}(U))$ of meromorphic functions on $U$ valued in the root space ${{}^L\!\g}_{\check\alpha}$, given by
\begin{equation} \label{aa}
\phi^\lambda n \phi^{-\lambda} \coloneqq \phi^{\langle \lambda, \check\alpha\rangle} n,
\end{equation}
for all $n\in {{}^L\!\g}_{\check\alpha}(\mathcal{K}(U))$, $\phi \in \mathcal{K}(U) \setminus \{ 0 \}$ and $\lambda \in P$.
Here $\langle \lambda, \check\alpha\rangle\in \mathbb{Z}$, by definition of $P$, so that $\phi^{\langle \lambda, \check \alpha \rangle} \in \mathcal{K}(U)$. Hence we get an action on the Lie algebra ${}^L \hat\mathfrak{n}_+(\mathcal{K}(U))$. Then ${}^L \! H(\mathcal{K}(U))$ acts also on the group ${}^L \! \hat N_+(\mathcal{K}(U))$, with $\phi^\lambda\exp(n)\phi^{-\lambda} \coloneqq \exp(\phi^\lambda n\phi^{-\lambda})$.
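For example, for a fundamental coweight $\lambda = \Lambda_i$ and the generator $\check f_j \in {{}^L\!\g}_{-\check\alpha_j}$ appearing in \eqref{tnf}, the action \eqref{aa} reads
\begin{equation*}
\phi^{\Lambda_i}\, \check f_j\, \phi^{-\Lambda_i} = \phi^{-\langle \Lambda_i, \check\alpha_j \rangle}\, \check f_j = \phi^{-\delta_{ij}}\, \check f_j,
\end{equation*}
using $\langle \Lambda_i, \check\alpha_j \rangle = \delta_{ij}$. This is precisely the origin of the factors $\phi^{-\langle \lambda, \check\alpha_i \rangle}$ in the gauge action \eqref{def: Haction} below.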
We may now define the desired group to be the semi-direct product
\begin{equation*}
{}^L \! \hat B_+(\mathcal{K}(U)) \coloneqq {}^L \! \hat N_+(\mathcal{K}(U)) \rtimes {}^L \! H(\mathcal{K}(U)).
\end{equation*}
That is, ${}^L \! \hat B_+(\mathcal{K}(U))$ is the group generated by elements of the form $\exp(n) \phi^\lambda$ with $n\in {}^L \hat\mathfrak{n}_+(\mathcal{K}(U))$, $\phi\in \mathcal{K}(U) \setminus \{ 0 \}$ and $\lambda\in P$, with the group product given by
\begin{equation*}
(\exp(n) \phi^{\lambda})( \exp(m) \psi^{\mu}) \coloneqq \Big( \! \exp(n) \exp\big( \phi^{\lambda} m \phi^{-\lambda}\big) \Big) \big(\phi^\lambda \psi^\mu \big),
\end{equation*}
for any $m, n\in {}^L \hat\mathfrak{n}_+(\mathcal{K}(U))$, $\phi, \psi \in \mathcal{K}(U) \setminus \{ 0 \}$ and $\lambda, \mu \in P$.
Finally, we define the gauge action of ${}^L \! H(\mathcal{K}(U))$ on connections in $\widetilde{\op}_{{{}^L\!\g}}(U)$, of the form \eqref{tnf}, by
\begin{equation} \label{def: Haction}
\phi^{\lambda} \left(d + \sum_{i=0}^\ell \psi_i\check f_idt + bdt\right) \phi^{-\lambda} \coloneqq d + \sum_{i=0}^\ell \phi^{-\langle\lambda,\check\alpha_i\rangle} \psi_i\check f_i dt - \lambda \phi^{-1} d\phi + \phi^{\lambda} b\phi^{-\lambda} dt,
\end{equation}
where, again, $ \phi^{\lambda} b\phi^{-\lambda}$ is defined by extending \eqref{aa} to ${}^L \hat\b_+(\mathcal{K}(U))$ by linearity.
\begin{lemma} Equation \eqref{def: Haction} defines an action of the group ${}^L \! H(\mathcal{K}(U))$ on the space of connections $\widetilde\op_{{{}^L\!\g}}(U)$. Combining it with the action of ${}^L \! \hat N_+(\mathcal{K}(U))$ defined as in \S\ref{sec: def oper}, we obtain a well-defined action of ${}^L \! \hat B_+(\mathcal{K}(U))$ on $\widetilde\op_{{{}^L\!\g}}(U)$. \qed
\end{lemma}
\begin{lemma}\label{lem: Uop}
The space of meromorphic ${{}^L\!\g}$-opers on a coordinate chart $U$ is equal to the quotient of $\widetilde \op_{{{}^L\!\g}}(U)$ by this gauge action of ${}^L \! \hat B_+(\mathcal{K}(U))$:
\begin{equation*}
\Op_{{{}^L\!\g}}(U) = \widetilde\op_{{{}^L\!\g}}(U) \big/ {}^L \! \hat B_+(\mathcal{K}(U)).
\end{equation*}
\end{lemma}
\begin{proof}
Let $\widetilde \nabla \in \widetilde \op_{{{}^L\!\g}}(U)$ be as in \eqref{tnf}. On inspecting \eqref{def: Haction}, we see that its ${}^L \! H(\mathcal{K}(U))$-orbit has a unique representative in $\op_{{{}^L\!\g}}(U)$, namely
$( \prod_{i=0}^\ell \psi_i^{\Lambda_i} ) \widetilde \nabla ( \prod_{i=0}^\ell \psi_i^{-\Lambda_i} )$.
\end{proof}
\begin{remark}
If we were to replace $P$ by $P\oplus \mathbb{C}\delta$ in the definition of ${}^L \! H(\mathcal{K}(U))$ then the quotient $\widetilde\op_{{{}^L\!\g}}(U) \big/ {}^L \! \hat B_+(\mathcal{K}(U))$ would be smaller than $\Op_{{{}^L\!\g}}(U)$; in fact it would be isomorphic to $\Op_{{{}^L\!\g}/\mathbb{C}\delta}(U)$.
\end{remark}
Now suppose $U \subset \Sigma$ is \emph{any} open subset, not necessarily a coordinate chart. Let $\{ U_\alpha \}_{\alpha \in A}$ be an open cover of $U$ by coordinate charts, \emph{i.e.} open subsets $U_\alpha \subset \Sigma$ for each $\alpha$ in some indexing set $A$ with holomorphic coordinates $t_\alpha : U_\alpha \to \mathbb{C}$ such that $U = \cup_{\alpha \in A} U_\alpha$. We define $\Op_{{{}^L\!\g}}(U)$ to be the set of collections $\{ [\nabla_\alpha] \in \Op_{{{}^L\!\g}}(U_\alpha) \}_{\alpha \in A}$ with the following property: for any pair of overlapping charts $U_\alpha \cap U_\beta \neq \emptyset$ and any choice of representatives $\nabla_\alpha \in \widetilde{\op}_{{{}^L\!\g}}(U_\alpha)$ and $\nabla_\beta \in \widetilde{\op}_{{{}^L\!\g}}(U_\beta)$, their restrictions $\nabla_\alpha|_{U_\alpha \cap U_\beta}$ and $\nabla_\beta|_{U_\alpha \cap U_\beta}$ define the same class in $\Op_{{{}^L\!\g}}(U_\alpha \cap U_\beta)$. That is, the pair of representatives $\nabla_\alpha \in \widetilde{\op}_{{{}^L\!\g}}(U_\alpha)$ and $\nabla_\beta \in \widetilde{\op}_{{{}^L\!\g}}(U_\beta)$ considered on the overlap $U_\alpha \cap U_\beta$ are related by a gauge transformation in ${}^L \! \hat B_+(\mathcal{K}(U_\alpha \cap U_\beta))$. Since ${}^L \! \hat B_+(\mathcal{K}(U_\alpha))$ and ${}^L \! \hat B_+(\mathcal{K}(U_\beta))$ are naturally subgroups of ${}^L \! \hat B_+(\mathcal{K}(U_\alpha \cap U_\beta))$, the above property does not depend on the choice of representatives of the ${{}^L\!\g}$-opers $[\nabla_\alpha] \in \Op_{{{}^L\!\g}}(U_\alpha)$ for each $\alpha \in A$. This defines the \emph{sheaf of ${{}^L\!\g}$-opers} $\Op_{{{}^L\!\g}}$.
\subsection{Quasi-canonical form} \label{sec: qc form curve}
Let $U \subset \Sigma$ be open and $[\nabla] \coloneqq \{ [\nabla_\alpha] \in \Op_{{{}^L\!\g}}(U_\alpha) \}_{\alpha \in A}$ be an element of $\Op_{{{}^L\!\g}}(U)$. Call $\widetilde{\nabla} \coloneqq \{ \widetilde{\nabla}_\alpha \in \op_{{{}^L\!\g}}(U_\alpha) \}_{\alpha \in A}$ a representative of $[\nabla]$ if $[\widetilde{\nabla}_\alpha] = [\nabla_\alpha]$ for each $\alpha \in A$. We shall say that this representative is in \emph{quasi-canonical form} if for each $\alpha \in A$, $\widetilde{\nabla}_\alpha$ is a quasi-canonical form as in Theorem \ref{thm: qcU} with respect to the local coordinate $t_\alpha : U_\alpha \to \mathbb{C}$, \emph{i.e.} for each $\alpha \in A$ we have
\begin{equation} \label{oper rep Ualpha}
\widetilde{\nabla}_\alpha = d + \Bigg( p_{-1} - \frac{\varphi_\alpha(t_\alpha)}{h^\vee} \rho + \sum_{i\in E} v_{\alpha, i}(t_\alpha) p_i \Bigg) dt_\alpha
\end{equation}
for some $\varphi_\alpha \in \mathcal{K}(U_\alpha)$ and $v_{\alpha, i} \in \mathcal{K}(U_\alpha)$, $i \in E$. In this section we identify the sheaves of which the collections of functions $\{ \varphi_\alpha \}_{\alpha \in A}$ and $\{ v_{\alpha, i} \}_{\alpha \in A}$, $i \in E$, define sections.
It suffices to consider an open subset $U \subset \Sigma$ equipped with a pair of holomorphic coordinates $t : U \to \mathbb{C}$ and $s : U \to \mathbb{C}$, and to determine the gauge transformation parameter in ${}^L \! \hat B_+(\mathcal{K}(U))$ relating quasi-canonical forms in each coordinate. In the above notation, $U$ corresponds to the overlap $U_\alpha \cap U_\beta$ of the open sets $U_\alpha$ and $U_\beta$ with coordinates $t = t_\alpha : U_\alpha \to \mathbb{C}$ and $s = t_\beta : U_\beta \to \mathbb{C}$, respectively. So suppose that we start with a representative of an ${{}^L\!\g}$-oper $[\nabla]\in \Op_{{{}^L\!\g}}(U)$ which is in quasi-canonical form in the $t$ coordinate, as in Theorem \ref{thm: qcU}:
\begin{equation*}
\nabla = d + p_{-1} dt - \frac{\varphi(t)}{h^\vee} \rho dt + \sum_{i\in E} v_i(t) p_i dt.
\end{equation*}
In terms of the other coordinate $s$ with $t=\mu(s)$ we have
\begin{equation*}
\nabla = d + p_{-1} \mu'(s) ds - \frac{\varphi(\mu(s))}{h^\vee} \rho \mu'(s)ds + \sum_{i\in E} v_i(\mu(s)) p_i \mu'(s) ds.
\end{equation*}
This can be brought into quasi-canonical form in the $s$ coordinate by performing a gauge transformation by $\mu'(s)^\rho\in {}^L \! H(\mathcal{K}(U))$. Indeed, one finds that
\begin{subequations}\label{ccp}
\begin{equation*}
\mu'(s)^\rho \,\nabla\, \mu'(s)^{-\rho} =
d + p_{-1} ds - \frac{\tilde\varphi(s)}{h^\vee} \rho ds + \sum_{i\in E} \tilde v_i(s) p_i ds
\end{equation*}
making use of the second relation in \eqref{Lie alg a com rel}, and where we defined
\begin{align} \label{phipt}
- \frac{1}{h^\vee} \tilde\varphi(s) &\coloneqq - \frac{1}{h^\vee} \varphi(\mu(s))\mu'(s) - \frac{\mu''(s)}{\mu'(s)},\\
\tilde v_i(s) &\coloneqq v_i(\mu(s)) \mu'(s)^{i+1}, \qquad i\in E.\label{vpt}
\end{align}
\end{subequations}
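The transformation rules \eqref{phipt}--\eqref{vpt} must compose consistently: transforming first through $t = \mu(s)$ and then through $s = \nu(r)$ has to agree with transforming directly along the composite map $t = (\mu\circ\nu)(r)$. A minimal symbolic check of this cocycle property for the coefficient of $\rho$ (sympy; the sample functions below are arbitrary):

```python
import sympy as sp

s, r = sp.symbols('s r')

# arbitrary sample data; the consistency holds for any such choices
A  = lambda x: x**3 + 1          # connection component in the t-chart
mu = lambda x: sp.exp(x)         # t = mu(s)
nu = lambda x: x + x**2 / 2      # s = nu(r)

def pull_back(comp, chart, var):
    """Component of the connection on Omega after the change of coordinate
    t = chart(var): A~ = A(chart) * chart' - chart'' / chart'."""
    d1 = sp.diff(chart(var), var)
    d2 = sp.diff(chart(var), var, 2)
    return comp(chart(var)) * d1 - d2 / d1

# two steps: t-chart -> s-chart, then s-chart -> r-chart
A_s = pull_back(A, mu, s)
A_r_two_step = pull_back(sp.Lambda(s, A_s), nu, r)

# one step, via the composite map t = (mu o nu)(r)
A_r_direct = pull_back(A, lambda x: mu(nu(x)), r)

assert sp.simplify(A_r_two_step - A_r_direct) == 0
```

Here $A = -\varphi/h^\vee$ corresponds to the coefficient of $\rho$; for the coefficients $v_i$ the analogous consistency is immediate from the chain rule.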
Here we will interpret the transformation property \eqref{phipt} as well as \eqref{vpt} in the case $i = 1$. We will come back to \eqref{vpt} for $i \in E_{\geq 2}$ in \S\ref{sec: twisted coh curve} below.
We shall need the following notation.
For any $k \in \mathbb{Z}$, let us denote by $\Omega^k \coloneqq \Omega^{\otimes k}$ the $k^{\rm th}$ tensor power\footnote{Recall that over $\mathbb{P}^1$, $\Omega^k$ is defined for all $k\in \frac{1}{2}\mathbb{Z}$. In particular $\Omega^{1/2}$ is the tautological line bundle. For our purposes with ${{}^L\!\g}$ affine we shall need only integer powers.} of the canonical line bundle
(\emph{i.e.} the cotangent bundle)
$\Omega$ over $\Sigma$.
We denote by $U \mapsto \Gamma(U,\Omega^k)$ the sheaf of meromorphic sections of $\Omega^k$.
Also let $U \mapsto \Conn(U, \Omega)$ denote the sheaf of meromorphic connections on $\Omega$.
\begin{theorem}\label{thm: ct}
Let $U \subset \Sigma$ be open and $\nabla$ be any quasi-canonical form of an ${{}^L\!\g}$-oper $[\nabla] \in \Op_{{{}^L\!\g}}(U)$.
The coefficient of $\rho$ in $\nabla$ defines a connection in $\Conn(U, \Omega)$, independent of the choice of quasi-canonical form.
\end{theorem}
\begin{proof}
The coefficient of $\rho$ in $\nabla$, or to be more precise the collection of coefficients of $\rho$ for every $\nabla_\alpha \in \op_{{{}^L\!\g}}(U_\alpha)$ where $\nabla = \{ \nabla_\alpha \}_{\alpha \in A}$ relative to a cover $\{ U_\alpha \}_{\alpha \in A}$ of $U$, is independent of the choice of quasi-canonical form $\nabla$ by Lemma \ref{lem: Lambda indep}.
In the local trivialization defined by the coordinate $t$, a meromorphic section of $\Omega^{k}$ is given by a meromorphic function $f(t)$, and a connection $\Gamma(U,\Omega^{k}) \to \Gamma(U, \Omega \ox \Omega^{k})$ is a differential operator $f(t)\mapsto f'(t) + A(t) f(t)$, specified by a meromorphic function $A(t)$, the component of the connection in this local trivialization. Here $f'(t) + A(t) f(t)$ must transform as a section of $\Omega \ox \Omega^{k}$, which is to say that
\begin{equation} \tilde f'(s) + \tilde A(s) \tilde f(s) = \left( f'(t) + A(t)f(t)\right) \mu'(s)^{k+1}.\nonumber\end{equation}
Now in fact
\begin{align*}
\tilde f'(s) + \tilde A(s) \tilde f(s)
&= \del_s\big( f(t) \mu'(s)^{k}\big) + \tilde A(s) f(t) \mu'(s)^k\\
&= f'(t) \mu'(s)^{k+1} + k f(t) \mu'(s)^{k-1} \mu''(s)+ \tilde A(s) f(t) \mu'(s)^k
\end{align*}
and we see that indeed $A$ must transform as $\tilde A(s) = A(t) \mu'(s) - k \mu''(s)/\mu'(s)$. On comparing with \eqref{phipt}, with $k = 1$ and $A = -\varphi/h^\vee$, the result follows.
\end{proof}
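The computation in the proof can also be verified symbolically. The following sketch (sympy) checks, for sample choices of $k$, $\mu$, $f$ and $A$, that with $\tilde A(s) = A(t)\mu'(s) - k\,\mu''(s)/\mu'(s)$ the combination $f'(t) + A(t)f(t)$ indeed transforms with weight $k+1$:

```python
import sympy as sp

s = sp.symbols('s')
k = 3                                    # sample tensor power of Omega
mu = sp.exp(s) + s                       # sample change of coordinate t = mu(s)
d1, d2 = sp.diff(mu, s), sp.diff(mu, s, 2)

F = sp.sin(mu)                           # f(t) evaluated at t = mu(s)
A = mu**2                                # A(t) evaluated at t = mu(s)

f_tilde = F * d1**k                      # the section in the s-trivialization
A_tilde = A * d1 - k * d2 / d1           # claimed transformation of the component

lhs = sp.diff(f_tilde, s) + A_tilde * f_tilde
# f'(t) = (d/ds F(mu(s))) / mu'(s) by the chain rule
rhs = (sp.diff(F, s) / d1 + A * F) * d1**(k + 1)
assert sp.simplify(lhs - rhs) == 0
```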
\subsection{Interlude: Comparison with finite type opers}\label{fto}
Before proceeding, it is interesting to compare the transformation properties \eqref{ccp} with those of opers of finite type. To that end, we now briefly recall the situation for finite type opers. Suppose, for this subsection only, that ${{}^L\!\g}$ is of finite type. Let $U \subset \Sigma$ be an open subset with two coordinates $t : U \to \mathbb{C}$ and $s : U \to \mathbb{C}$. Define $\op_{{{}^L\!\g}}(U)$ to be the affine space of connections of the form
\begin{equation*}
\nabla \coloneqq d + \bar p_{-1}dt + b dt, \quad b\in {}^L\b_+(\mathcal{K}(U)).
\end{equation*}
with $\bar p_{-1} \coloneqq \sum_{i=1}^\ell \check f_i$, and define $\widetilde\op_{{{}^L\!\g}}(U)$ to be the affine space of connections of the form
\begin{equation*}
\widetilde \nabla \coloneqq d + \left(\sum_{i=1}^\ell \psi_i\check f_i + b\right) dt
\end{equation*}
with $\psi_i$ a nonzero element of $\mathcal{K}(U)$ for each $i\in I\setminus\{0\}$, and with $b\in {}^L\b_+(\mathcal{K}(U))$. (In finite type ${}^L \hat\b_+={}^L\b_+$ since ${}^L\mathfrak{n}_m=\{0\}$ for all $m \geq h^\vee$.) Then the definition of the space $\Op_{{{}^L\!\g}}(U)$ of ${{}^L\!\g}$-opers on $U$ in \eqref{def: Uop} and Lemma \ref{lem: Uop} remain correct as written. Let $\bar\rho \coloneqq \sum_{i=1}^\ell \bar \Lambda_i$ be the sum of the fundamental coweights $\bar\Lambda_i$ of ${{}^L\!\g}$.
There is a unique element $\bar p_1\in {}^L\mathfrak{n}_+$ such that $\{\bar p_{-1}, 2 \bar \rho, \bar p_1\}$ form an $\mathfrak{sl}_2$-triple:
\begin{equation} \label{sl2 triple finite}
[\bar p_{-1}, \bar p_1] = -2\bar\rho, \qquad [2\bar \rho, \bar p_{\pm 1}] = \pm 2 \bar p_{\pm 1}.
\end{equation}
The analogue of Theorem \ref{thm: qcU} in finite type is the following statement:
Each gauge equivalence class $[\nabla] \in \Op_{{{}^L\!\g}}(U)$ contains a unique representative of the form
\begin{equation} \label{fcf}
\nabla = d + \bar p_{-1} dt + \sum_{i\in \bar E} \bar v_i(t) \bar p_i dt,
\end{equation}
where the (multi)set $\bar E$ of exponents is now finite and where, for each $i\in \bar E$, we have $\bar v_i \in \mathcal{K}(U)$, and the $\bar p_i\in {}^L\mathfrak{n}_+$ are elements such that
\begin{equation} \label{finite pi properties}
[\bar p_{1}, \bar p_i] = 0, \qquad [\bar \rho, \bar p_i] = i \bar p_i.
\end{equation}
In terms of the new coordinate $s$ with $t=\mu(s)$ we have
\begin{equation*}
\nabla = d + \bar p_{-1} \mu'(s) ds + \sum_{i\in \bar E} \bar v_i(\mu(s)) \bar p_i \mu'(s) ds.
\end{equation*}
Similarly to the affine case above, we may first perform a gauge transformation by $\mu'(s)^{\bar \rho}\in {}^L \! H(\mathcal{K}(U))$ to bring the $\bar p_{-1}$ term into the canonical form $\bar p_{-1} ds$, namely
\begin{equation*}
\mu'(s)^{\bar \rho} \,\nabla\, \mu'(s)^{-\bar \rho} =
d + \bar p_{-1} ds - \bar \rho \,\frac{\mu''(s)}{\mu'(s)} ds + \sum_{i\in \bar E} \bar v_i(\mu(s)) \bar p_i \mu'(s)^{i+1} ds.
\end{equation*}
However, in contrast to the affine case, we are not yet done, because it is necessary -- in order to reach the canonical form -- to remove the $\bar \rho$ term by performing a further gauge transformation by $\exp\big(\bar p_1 \frac{\mu''(s)}{2 \mu'(s)}\big)$. One finds that
\begin{multline} \label{transition can to can}
\exp\left(\bar p_1 \frac{\mu''(s)}{2 \mu'(s)}\right) \mu'(s)^{\bar\rho} \,\nabla\, \mu'(s)^{-\bar\rho} \exp\left( \! -\bar p_1 \frac{\mu''(s)}{2 \mu'(s)}\right)\\
= d + \bar p_{-1} ds + \sum_{i\in \bar E} \tilde{\bar v}_i(s) \bar p_i ds
\end{multline}
using both the relations \eqref{finite pi properties} and \eqref{sl2 triple finite}, and where we defined
\begin{subequations} \label{ccpfinite}
\begin{align}
\tilde{\bar v}_1(s) &\coloneqq \bar v_1(\mu(s)) \mu'(s)^2 - \mbox{\small $\frac{1}{2}$} (S\mu)(s),\\
\tilde{\bar v}_i(s) &\coloneqq \bar v_i(\mu(s)) \mu'(s)^{i+1}, \qquad i\in \bar E, i>1.
\end{align}
\end{subequations}
Here $S\mu$ is the \emph{Schwarzian derivative} of $\mu$,
\begin{equation*}
S\mu \coloneqq \frac{\mu'''}{\mu'} - \frac{3}{2} \left( \frac{\mu''}{\mu'} \right)^2.
\end{equation*}
Now, what \eqref{ccpfinite} shows is that each of the $\bar v_i$, $i > 1$, transforms as a section of the power $\Omega^{i+1}$ of the canonical bundle, but that $\bar v_1$ transforms as a \emph{projective connection}; see \cite{Fopersontheprojectiveline, Fre07}. Since \eqref{fcf} is the unique canonical form of the ${{}^L\!\g}$-oper in this chart, it follows that there is an isomorphism, for ${{}^L\!\g}$ of finite type,
\begin{equation} \Op_{{{}^L\!\g}}(U) \simeq \Proj(U) \times \prod_{\substack{i\in \bar E\\ i>1}} \Gamma(U,\Omega^{i+1}),\label{ft op}\end{equation}
where $\Proj(U)$ denotes the space of projective connections on $U$.
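That $\bar v_1$ consistently transforms as a projective connection across overlapping charts rests on the classical cocycle identity for the Schwarzian derivative, $S(\mu \circ \nu) = \big((S\mu) \circ \nu\big)\, (\nu')^2 + S\nu$. A quick symbolic check (sympy; the two sample coordinate changes below are arbitrary):

```python
import sympy as sp

s, r = sp.symbols('s r')

def schwarzian(f, x):
    d1, d2, d3 = (sp.diff(f, x, n) for n in (1, 2, 3))
    return d3 / d1 - sp.Rational(3, 2) * (d2 / d1)**2

mu = sp.exp(s)        # sample coordinate changes; the identity is general
nu = sp.atan(r)

lhs = schwarzian(mu.subs(s, nu), r)                               # S(mu o nu)
rhs = schwarzian(mu, s).subs(s, nu) * sp.diff(nu, r)**2 + schwarzian(nu, r)
assert sp.simplify(lhs - rhs) == 0
```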
\begin{remark}
Let us emphasise why, in the affine case, there is no analogue of the second gauge transformation by $\exp\big(\bar p_1 \frac{\mu''(s)}{2 \mu'(s)}\big)$ performed above. When ${{}^L\!\g}$ is of affine type it is the central element $\delta$ (and not $\rho$) which appears in the bracket $[p_1,p_{-1}] = \delta$. The derivation element $\rho$ is not in the derived subalgebra. Hence, as we saw in Lemma \ref{lem: Lambda indep}, there is no way to remove the term $- \varphi/h^\vee \rho \, dt$ from a quasi-canonical form via ${}^L \! \hat N_+(\mathcal{K}(U))$-valued gauge transformations. Rather, the twist function $\varphi$ forms part of the data defining the underlying ${{}^L\!\g}$-oper (and Theorem \ref{thm: ct} gives its properties under coordinate transformations).
\end{remark}
\subsection{Twisted cohomologies} \label{sec: twisted coh curve}
Now we return to the case in which ${{}^L\!\g}$ is of affine type. We would like to give a coordinate-independent description of the space of affine opers analogous to \eqref{ft op} in finite types.
Let $[\nabla] \in \Op_{{{}^L\!\g}}(U)$ be a meromorphic ${{}^L\!\g}$-oper on an open subset $U \subset \Sigma$ and let $\nabla$ be a representative in quasi-canonical form.
According to Theorem \ref{thm: ct}, the coefficient of $\rho$ in $\nabla$ defines a (trivially flat, since we are working on a curve) connection on $\Omega$ over $U$, which we denote by
\begin{equation*}
\nabla|_\rho : \Gamma(U, \Omega) \longrightarrow \Gamma(U, \Omega \otimes \Omega).
\end{equation*}
If $t : U \to \mathbb{C}$ is a coordinate on $U$ then it can be written as $f \mapsto \nabla|_\rho f = df - {h^\vee}^{-1} \varphi f dt$ for some $\varphi \in \mathcal{K}(U)$.
We therefore obtain a surjective map
\begin{equation} \label{Op to Conn}
\Op_{{{}^L\!\g}}(U) \relbar\joinrel\twoheadrightarrow \Conn(U, \Omega), \qquad [\nabla] \longmapsto \nabla|_\rho
\end{equation}
into the space of meromorphic connections on $\Omega$ over $U$. Given any $\overline{\nabla} \in \Conn(U, \Omega)$, we denote its preimage in $\Op_{{{}^L\!\g}}(U)$ under the map \eqref{Op to Conn} by $\Op_{{{}^L\!\g}}(U)^{\overline{\nabla}}$. This can be seen as a coordinate-independent version, over an open subset of an arbitrary curve $U \subset \Sigma$, of the space $\Op_{{{}^L\!\g}}(\mathbb{P}^1)^\varphi$ introduced in \S\ref{sec: twist function}.
For each $j \in \mathbb{Z}$, $\nabla|_\rho$ induces a connection on the line bundle $\Omega^j$,
\begin{equation*}
\nabla|_\rho : \Gamma(U, \Omega^j) \longrightarrow \Gamma(U, \Omega \otimes \Omega^j).
\end{equation*}
In a local coordinate $t : U \to \mathbb{C}$ it takes the form $f \mapsto \nabla|_\rho f = df - j {h^\vee}^{-1} \varphi f dt$.
The transformation property \eqref{vpt} suggests that the coefficient of $p_j$ in $\nabla$, for each $j \in E$, defines a section of $\Omega^{j+1}$. This is indeed the case for $j=1$ since the coefficient of $p_1$ in $\nabla$, \emph{i.e.} the collection of coefficients of $p_1$ for every $\nabla_\alpha \in \op_{{{}^L\!\g}}(U_\alpha)$, is independent of the choice of quasi-canonical form $\nabla$ by Proposition \ref{prop: can form u1}. However, it is not quite true for $j\in E_{\geq 2}$ since, unlike the coefficient $\varphi_\alpha$ of $\rho$ and $v_{\alpha, 1}$ of $p_1$ in \eqref{oper rep Ualpha}, the coefficient $v_{\alpha, j}$ of $p_j$, $j \in E_{\geq 2}$, in each chart $U_\alpha$ depends on the choice of quasi-canonical representative $\nabla_\alpha$ by Theorem \ref{thm: qcU}. More precisely, as formulated in Theorem \ref{thm: quasi-canonical form}, the coefficient of $p_j$ in $\nabla$ is defined only up to the addition of a term of the form $\nabla|_\rho f$, with $f \in \Gamma(U, \Omega^j)$.
Recall that a \emph{local system} is a vector bundle equipped with a flat connection. In our case, for each $j \in E$ we have the local system $(\Omega^j, \nabla|_\rho)$ consisting of the line bundle $\Omega^j$ equipped with the connection $\nabla|_\rho$.
Given any local system one has the associated de Rham complex with coefficients in that local system. In our case it is
\begin{equation*}
0 \longrightarrow \Gamma(U, \Omega^j) \xrightarrow{\nabla|_\rho} \Gamma(U, \Omega \otimes \Omega^j)
\longrightarrow 0.
\end{equation*}
The \emph{de Rham cohomology of $U$ with coefficients in $(\Omega^j, \nabla|_\rho)$}, or simply the \emph{twisted de Rham cohomology}, is then by definition the quotient
\begin{equation*}
H^1(U,\Omega^j, \nabla|_\rho) \coloneqq \Gamma(U,\Omega \otimes \Omega^j)
\big/\nabla|_\rho \Gamma(U, \Omega^j).
\end{equation*}
We now see that, for each $j \in E_{\geq 2}$, the coefficient of $p_j$ in $\nabla$ defines an element of this twisted de Rham cohomology which is independent of the choice of quasi-canonical form $\nabla$. In other words, we have the following analogue of Theorem \ref{thm: ct}.
\begin{proposition} \label{prop: coeff pj}
Let $U \subset \Sigma$ be open and $\nabla$ be any quasi-canonical form of an ${{}^L\!\g}$-oper $[\nabla] \in \Op_{{{}^L\!\g}}(U)$. For each $j\in E_{\geq 2}$, the coefficient of $p_j$ defines a class in $H^1(U, \Omega^j, \nabla|_\rho)$. The coefficient of $p_1$ defines an element of $\Gamma(U, \Omega^2)$. \qed
\end{proposition}
Combining this with Theorem \ref{thm: ct} we arrive at the following.
\begin{theorem}\label{thm: space of opers}
For any open subset $U \subset \Sigma$, the space $\Op_{{{}^L\!\g}}(U)$ fibres over $\Conn(U, \Omega)$ and we have the isomorphism
\begin{equation*}
\Op_{{{}^L\!\g}}(U)^{\overline{\nabla}} \simeq \Gamma(U,\Omega^2) \times
\prod_{j\in E_{\geq 2}} H^1(U,\Omega^j, \overline{\nabla})
\end{equation*}
for the fibre over any connection $\overline{\nabla} \in \Conn(U, \Omega)$. \qed
\end{theorem}
\begin{remark}\label{rem: sqo}
Recall Corollary \ref{cor: v1}. One can also define the sheaf $\Op_{{{}^L\!\g}/\mathbb{C}\delta}$ of $({{}^L\!\g}/\mathbb{C}\delta)$-opers over $\Sigma$, and one has the analogue of the above theorem, with \begin{equation} \Op_{{{}^L\!\g}/\mathbb{C}\delta}(U)^{\overline{\nabla}} \simeq
\prod_{j\in E} H^1(U,\Omega^j, \overline{\nabla})\nonumber\end{equation}
for every $\overline{\nabla} \in \Conn(U, \Omega)$.
\end{remark}
In the present paper, our main interest lies in ${{}^L\!\g}$-opers over $\mathbb{P}^1$. What we need is the analogous statement to Theorem \ref{thm: space of opers} for the space $\Op_{{{}^L\!\g}}^\mathrm{reg}(\mathbb{P}^1)_X^\varphi$ of global meromorphic ${{}^L\!\g}$-opers which are holomorphic on the complement $X = \mathbb{C} \setminus \{ z_i\}_{i=1}^N$ of the set of marked points, as defined in \S\ref{sec: regular points}.
For every $j \in E$, let us denote by $\Gamma_X(\mathbb{P}^1,\Omega^{j+1})$ the space of global meromorphic sections of $\Omega^{j+1}$ that are holomorphic on $X$. Let $H^1_X(\mathbb{P}^1,\Omega^j, \nabla|_\rho) \coloneqq \Gamma_X(\mathbb{P}^1,\Omega \otimes \Omega^j) \big/ \nabla|_\rho\, \Gamma_X(\mathbb{P}^1, \Omega^j)$ denote the corresponding twisted de Rham cohomology. Also let $\Conn_X(\mathbb{P}^1, \Omega)$ denote the space of global meromorphic connections on $\Omega$ which are holomorphic on $X$.
\begin{theorem}\label{thm: space of opers on CP1}
$\Op^\mathrm{reg}_{{{}^L\!\g}}(\mathbb{P}^1)_X$ fibres over $\Conn_X(\mathbb{P}^1, \Omega)$ and we have the isomorphism
\begin{equation*}
\Op^\mathrm{reg}_{{{}^L\!\g}}(\mathbb{P}^1)_X^{\overline{\nabla}} \simeq \Gamma_X(\mathbb{P}^1,\Omega^2) \times \prod_{j\in E_{\geq 2}} H^1_X(\mathbb{P}^1,\Omega^j, \overline{\nabla})
\end{equation*}
for the fibre over any $\overline{\nabla} \in \Conn_X(\mathbb{P}^1, \Omega)$. \qed
\end{theorem}
\section{Twisted homology and the integral pairing}\label{sec: twisted homology}
Recall the integrals from Corollary \ref{cor: opint}.
With Theorem \ref{thm: space of opers on CP1} in hand we can give a coordinate-independent description of functions on $\Op_{{{}^L\!\g}}^\mathrm{reg}(\mathbb{P}^1)_X^{\overline{\nabla}}$.
\subsection{Twisted homology}
Suppose $U\subset X$ is an open subset with a holomorphic coordinate $t:U\to \mathbb{C}$. In the trivialization defined by the coordinate $t$, a horizontal section of the local system $(\Omega^j, \nabla|_\rho)$, for any $j \in \mathbb{Z}$, is given by a holomorphic function $f$ on $U$ such that $df - j {h^\vee}^{-1} \varphi f dt = 0$. Recall the multivalued holomorphic function $\mathcal{P}$ on the complement $X = \mathbb{C} \setminus \{ z_i \}_{i=1}^N$ from \eqref{def: P}: concretely, $f$ is a constant multiple of a univalued branch of $\mathcal{P}^{j/h^\vee}$ over $U$.
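Concretely, since $f = \mathcal{P}^{j/h^\vee}$ being horizontal forces $d\mathcal{P} = \varphi\, \mathcal{P}\, dt$, a twist function with simple poles, $\varphi(t) = \sum_i k_i/(t - z_i)$, corresponds to $\mathcal{P}(t) = \prod_i (t - z_i)^{k_i}$ up to a constant. Horizontality can then be checked symbolically; in the following sketch (sympy) the pole positions, residues, $j$ and $h^\vee$ are all sample values:

```python
import sympy as sp

t = sp.symbols('t')
h_vee, j = 3, 2                                   # sample values
z = [0, 1, 4]                                     # sample marked points
k = [sp.Rational(1, 2), 2, sp.Rational(-1, 3)]    # sample residues of phi

phi = sum(ki / (t - zi) for ki, zi in zip(k, z))
P = sp.Mul(*[(t - zi)**ki for ki, zi in zip(k, z)])   # satisfies dP = phi P dt

f = P**sp.Rational(j, h_vee)                      # candidate horizontal section
# horizontality: f'/f = (j/h_vee) * phi
assert sp.simplify(sp.diff(f, t) / f - sp.Rational(j, h_vee) * phi) == 0
```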
Recall that a \emph{singular $p$-simplex in $X$} is a continuous map $\sigma: \Delta_p\to X$ from the standard $p$-simplex $\Delta_p$ to $X$. (We shall need only $p\in \{0,1,2\}$.)
Define a \emph{twisted $p$-simplex in $X$ of degree $j$} to be a pair $\sigma \otimes f$ consisting of a singular $p$-simplex $\sigma$ in $X$ together with a horizontal section $f$ of $(\Omega^{j}, \nabla|_\rho)$ over an open neighbourhood of $\sigma(\Delta_p)$ in $X$.
A \emph{twisted $p$-chain of degree $j$} is then a finite formal sum
\begin{equation} \gamma = \sum_k \sigma_k \otimes g_k\nonumber\end{equation}
of twisted $p$-simplices in $X$ of degree $j$.
Let $C_p(X,\Omega^j, \nabla|_\rho)$ be the (infinite-dimensional) complex vector space of twisted $p$-chains in $X$ of degree $j$, where scalar multiplication of chains is by scalar multiplication of the horizontal sections.
Recall that the usual boundary operator sends a singular $p$-simplex $\sigma$ to $\sum_{k=0}^p (-1)^k s_k$ where $s_k$ is the restriction of $\sigma$ to the $k^{\rm th}$ face of $\Delta_p$ (which is canonically identified with $\Delta_{p-1}$). In our twisted setting, the boundary operator $\partial$ is the linear map
\begin{equation} \label{twisted boundary}
\partial : C_{p}(X, \Omega^j, \nabla|_\rho) \longrightarrow C_{p-1}(X,\Omega^j, \nabla|_\rho),
\end{equation}
defined by $\sigma \otimes f \mapsto \sum_{k=0}^p (-1)^k s_k \otimes f_k$
where $f_k$ is the restriction of $f$ to an open neighbourhood of $s_k(\Delta_{p-1})$ in $X$.
The property $\partial^2 = 0$ follows from the same property in the usual setting.
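The twisted boundary operator is straightforward to implement. The following sketch (plain Python) represents a horizontal section by an opaque branch label, since restriction to a face does not change the section, and checks $\partial^2 = 0$ on a sample twisted $2$-chain:

```python
from collections import defaultdict

# A twisted p-chain of degree j: dict mapping (vertex tuple, branch label)
# to a coefficient. The label stands for a horizontal section f; restricting
# f to a face keeps the same label, as in the text.

def boundary(chain):
    out = defaultdict(int)
    for (verts, f), coeff in chain.items():
        for k in range(len(verts)):
            face = verts[:k] + verts[k + 1:]   # delete the k-th vertex
            out[(face, f)] += (-1)**k * coeff
    return {key: c for key, c in out.items() if c != 0}

# a sample twisted 2-chain built from two twisted 2-simplices
gamma = {((0, 1, 2), 'f'): 1, ((1, 2, 3), 'g'): -2}
assert boundary(boundary(gamma)) == {}
```

Here simplices are modelled combinatorially by their vertex tuples; the sign cancellation is the same as in the untwisted setting.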
The kernel of the map \eqref{twisted boundary}, which we denote by $Z_{p}(X, \Omega^j, \nabla|_\rho)$, is the space of \emph{closed} twisted $p$-chains of degree $j$. The \emph{twisted homology of $X$} is then the quotient of vector spaces
\begin{equation*}
H_p(X, \Omega^j, \nabla|_\rho) \coloneqq Z_p(X, \Omega^j, \nabla|_\rho) \big/ \partial C_{p+1}(X, \Omega^j, \nabla|_\rho),
\end{equation*}
and the elements of $H_1(X, \Omega^j, \nabla|_\rho)$ are called \emph{twisted cycles of degree $j$}.
\subsection{Twisted de Rham theorem}
Let $\omega\in \Gamma_X(\mathbb{P}^1,\Omega\ox\Omega^j)$ and let $\sigma \otimes f$ be a twisted $1$-simplex in $X$ of degree $-j$, for any $j \in E$. On an open neighbourhood $U$ of $\sigma(\Delta_1)$ we have the holomorphic $1$-form $f\omega\in \Gamma(U,\Omega)$. Define the integral of $\omega$ over $\sigma \otimes f$ to be the usual integral of $f\omega$ over the singular $1$-simplex $\sigma$:
\begin{equation*}
\int_{\sigma \otimes f} \omega \coloneqq \int_\sigma f \omega.
\end{equation*}
Extending by linearity we have the integral $\int_\gamma \omega$ over any $\gamma\in C_1(X,\Omega^{-j}, \nabla|_\rho)$.
In the same way, one defines the integral of a $0$-form $\omega\in \Gamma_X(\mathbb{P}^1,\Omega^j)$ over a twisted $0$-chain $\gamma\in C_0(X,\Omega^{-j}, \nabla|_\rho)$; and also in principle of a $2$-form over a twisted $2$-chain, although of course since we are on a curve the only meromorphic $2$-form we have to integrate is the zero $2$-form.
\begin{proposition}[Twisted Stokes's theorem]
Let $p \in \{0, 1\}$. For any $p$-form $\omega \in \Gamma_X(\mathbb{P}^1,\Omega^{\wx p} \otimes \Omega^j)$ and any twisted $(p+1)$-chain $\gamma\in C_{p+1}(X,\Omega^{-j}, \nabla|_\rho)$,
\begin{equation*}
\int_\gamma \nabla|_\rho \omega = \int_{\partial\gamma} \omega.
\end{equation*}
\end{proposition}
\begin{proof}
By linearity it suffices to consider $\gamma = \sigma \otimes f$ with $\sigma$ a singular $(p+1)$-simplex on $X$ and $f$ a horizontal section of $\Omega^{-j}$ over an open neighbourhood of $\sigma(\Delta_{p+1})$. We then have
\begin{equation*}
\int_\gamma \nabla|_\rho \omega = \int_\sigma f \nabla|_\rho \omega = \int_\sigma d (f \omega) = \int_{\partial \sigma} f\omega
= \int_{\partial (\sigma \otimes f) } \omega = \int_{\partial \gamma} \omega,
\end{equation*}
where in the second equality we used $d (f \omega) = (\nabla|_\rho f)\omega + f \nabla|_\rho \omega = f \nabla|_\rho \omega$ (since $f$ is horizontal) and in the third equality we used the usual Stokes's theorem.
\end{proof}
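The case $p = 0$ of the proposition can be checked in a concrete example. In the sketch below (sympy), we take a twist function with a simple pole, a horizontal section $f$ of $\Omega^{-j}$, and a sample $0$-form $\omega$, \emph{i.e.} a section of $\Omega^j$ with local component $w$; all specific choices are illustrative.

```python
import sympy as sp

t = sp.symbols('t', positive=True)
h_vee, j, a, b = 2, 1, 1, 4          # sample values and endpoints of sigma

phi = 1 / t                          # sample twist function, holomorphic on [a, b]
f = t**sp.Rational(-j, h_vee)        # horizontal section of Omega^{-j}
w = t**2                             # component of a sample section omega of Omega^j

# component of nabla|_rho(omega), a section of Omega (x) Omega^j
nabla_w = sp.diff(w, t) - sp.Rational(j, h_vee) * phi * w

lhs = sp.integrate(f * nabla_w, (t, a, b))       # integral over sigma (x) f
rhs = (f * w).subs(t, b) - (f * w).subs(t, a)    # integral of omega over boundary
assert sp.simplify(lhs - rhs) == 0
```

The first line of the proof's chain of equalities, $f\,\nabla|_\rho\omega = d(f\omega)$, is what makes the two sides agree.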
\begin{proposition}[Complex de Rham theorem]
There is a bilinear pairing between twisted homologies and cohomologies, given by integrating twisted forms over twisted chains:
\begin{equation*}
H_i(X, \Omega^{-j}, \nabla|_\rho) \times H^i_X(\mathbb{P}^1,\Omega^j, \nabla|_\rho) \longrightarrow \mathbb{C}, \qquad
(\gamma, \omega) \longmapsto \int_\gamma \omega,
\end{equation*}
for $i\in\{0,1\}$ and any $j \in E$.
\end{proposition}
\begin{proof} The fact that the integral pairing between forms and chains descends to a well-defined pairing between homologies and cohomologies follows from the twisted version of Stokes's theorem above.
\end{proof}
\section{Discussion} \label{sec: discussion}
\subsection{Smooth opers, Drinfel'd-Sokolov and (m)KdV} \label{sec: smooth DS}
The present paper has concerned the role of affine opers in describing the spectrum of a (conjectured) family of higher Hamiltonians for affine Gaudin models. Affine Miura opers also play a conceptually quite distinct role in mathematical physics: namely, they serve as the phase space of classical generalized mKdV theories.
In the latter context, the procedure of Theorem \ref{thm: quasi-canonical form} for putting an affine oper into quasi-canonical form essentially appears in the paper of Drinfel'd and Sokolov \cite[\S6]{DS}. Specifically, if one replaces meromorphic functions with smooth functions on the circle, and -- crucially -- if one sets the twist function $\varphi$ to zero, then our procedure in \S\ref{sec: quasi-can form} coincides with the procedure of \cite{DS} to construct the densities of Hamiltonians of the classical generalised ${{}^L\!\g}$-(m)KdV hierarchy. In what follows we elaborate on this last statement, and contrast the two settings.
In our earlier definition of ${{}^L\!\g}$-opers from \S\ref{sec: opers} and of Miura ${{}^L\!\g}$-opers from \S\ref{sec: class of Miura opers}, one can replace the algebra $\mathcal{M}$ of meromorphic functions on $\mathbb{P}^1$ by the algebra $C^\infty(S^1, \mathbb{C})$ of smooth functions on the circle. On doing so, we obtain the spaces of smooth ${{}^L\!\g}$-opers $\Op_{{{}^L\!\g}}(S^1)$ and Miura ${{}^L\!\g}$-opers $\MOp_{{{}^L\!\g}}(S^1)$ on $S^1$. Furthermore, one may also consider the spaces of smooth ${{}^L\!\g}/\mathbb{C} \delta$-opers $\Op_{{{}^L\!\g}/\mathbb{C} \delta}(S^1)$ and Miura ${{}^L\!\g}/\mathbb{C} \delta$-opers $\MOp_{{{}^L\!\g}/\mathbb{C} \delta}(S^1)$, cf. Corollary \ref{cor: v1} and \S\ref{sec: quad Ham}.
The phase space of ${{}^L\!\g}$-mKdV can then be identified with the set $\MOp_{{{}^L\!\g}/\mathbb{C} \delta}(S^1)^0$ of Miura ${{}^L\!\g}/\mathbb{C} \delta$-opers with zero twist function $\varphi = 0$. Indeed, let $\sigma$ denote the natural coordinate on the circle $S^1 = \mathbb{R}/ 2\pi\mathbb{Z}$. Then a connection $\nabla \in \MOp_{{{}^L\!\g}/\mathbb{C} \delta}(S^1)^0$ takes the form
\begin{equation} \label{mkdv}
\nabla = d + p_{-1}d\sigma + \sum_{i=1}^\ell u_i(\sigma) \alpha_i d\sigma
\end{equation}
where $u_i(\sigma) \in C^\infty(S^1, \mathbb{C})$ are smooth functions on the circle, the \emph{classical ${{}^L\!\g}$-mKdV fields}. Recalling that ${}^L\mathfrak{h}'$ is the span of the simple roots $\{ \alpha_i \}_{i=0}^\ell$, here we implicitly identify the quotient ${}^L\mathfrak{h}'/\mathbb{C} \delta$ with the span of the subset $\{ \alpha_i \}_{i=1}^\ell$.
To go from mKdV to KdV we first need some definitions. Let $\overline{{}^L\mathfrak{n}}_+$ be the finite-dimensional nilpotent Lie subalgebra of ${}^L\mathfrak{n}_+$ generated by $\check e_i$, $i=1, \ldots, \ell$. We may form the infinite-dimensional nilpotent subalgebra $\overline{{}^L\mathfrak{n}}_+(C^\infty(S^1, \mathbb{C}))$ of the Lie algebra ${}^L \hat\mathfrak{n}_+(C^\infty(S^1, \mathbb{C}))$ defined as a completion of ${}^L\mathfrak{n}_+ \otimes C^\infty(S^1, \mathbb{C})$ as in \S\ref{sec: inverse limits}. The Baker-Campbell-Hausdorff formula \eqref{BCH formula} then endows the vector space $\overline{{}^L\mathfrak{n}}_+(C^\infty(S^1, \mathbb{C}))$ with the structure of a group which we denote by $\overline{{}^L \! N}_+(C^\infty(S^1, \mathbb{C}))$. This is a subgroup of ${}^L \! \hat N_+(C^\infty(S^1, \mathbb{C}))$ defined just as in \S\ref{sec: group lhN} but with $\mathcal{M}$ replaced by $C^\infty(S^1, \mathbb{C})$.
The canonical map $\MOp_{{{}^L\!\g}/\mathbb{C}\delta}(S^1)^0 \rightarrow \Op_{{{}^L\!\g}/\mathbb{C}\delta}(S^1)^0, \nabla \mapsto [\nabla]$ factors through
\begin{equation} \label{mKdV to KdV}
\MOp_{{{}^L\!\g}/\mathbb{C}\delta}(S^1)^0 \lhook\joinrel\relbar\joinrel\rightarrow \op_{{{}^L\!\g}/\mathbb{C}\delta}(S^1)^0 \relbar\joinrel\twoheadrightarrow \mathscr M \coloneqq \op_{{{}^L\!\g}/\mathbb{C}\delta}(S^1)^0 \big/ \overline{{}^L \! N}_+(C^\infty(S^1, \mathbb{C})).
\end{equation}
The phase space of ${{}^L\!\g}$-KdV is by definition the quotient space $\mathscr M$. Consider now the connection $\nabla \in \MOp_{{{}^L\!\g}/\mathbb{C}\delta}(S^1)^0$ given in \eqref{mkdv} which we write as
\begin{equation*}
\nabla = d + \bar p_{-1}d\sigma + \check f_0 d\sigma + \sum_{i=1}^\ell u_i(\sigma) \alpha_i d\sigma \in \op_{{{}^L\!\g}/\mathbb{C}\delta}(S^1)^0
\end{equation*}
with $\bar p_{-1} = \sum_{i=1}^\ell \check f_i$.
Since $[\check e_i, \check f_0] = 0$ for $i = 1, \ldots, \ell$ it follows that $\check f_0$ is invariant under the adjoint action of $\overline{{}^L \! N}_+(C^\infty(S^1, \mathbb{C}))$ on ${}^L \hat\g(C^\infty(S^1, \mathbb{C}))$. Therefore, exactly as in the finite-dimensional setting recalled in \S\ref{fto}, the image of $\nabla$ under the map \eqref{mKdV to KdV} has a unique representative of the form
\begin{equation} \label{KdV connection}
d + p_{-1} d\sigma + \sum_{i\in \bar E} v_i(\sigma) \bar p_i d\sigma.
\end{equation}
Here $\bar p_1$ denotes the unique element in $\overline{{}^L\mathfrak{n}}_+$ such that $\{ \bar p_{-1}, 2 \rho - 2h^\vee \Lambda_0, \bar p_1 \} \subset {{}^L\!\g}/\mathbb{C} \delta$ forms an $\mathfrak{sl}_2$-triple, and the $\bar p_i$, $i \in \bar E$ span the kernel of $\ad \bar p_1 : \overline{{}^L\mathfrak{n}}_+ \to \overline{{}^L\mathfrak{n}}_+$.
The smooth functions $v_i(\sigma)\in C^\infty(S^1,\mathbb{C})$ are the \emph{classical ${{}^L\!\g}$-KdV fields}.
Now Theorem \ref{thm: quasi-canonical form}, and in particular also Corollary \ref{cor: v1}, generalises to the smooth setting. Therefore, starting from the smooth Miura ${{}^L\!\g}/\mathbb{C}\delta$-oper $\nabla$ in \eqref{mkdv} we obtain a quasi-canonical form
\begin{equation} \label{mKdV can form}
d + p_{-1} d\sigma + \sum_{i\in E} h_i(\sigma) p_i d\sigma
\end{equation}
of the underlying smooth ${{}^L\!\g}/\mathbb{C} \delta$-oper $[\nabla] \in \Op_{{{}^L\!\g}/\mathbb{C} \delta}(S^1)^0$.
Let $g \in {}^L \! \hat N_+(C^\infty(S^1, \mathbb{C}))$ denote the gauge transformation parameter sending the Miura ${{}^L\!\g}/\mathbb{C}\delta$-oper \eqref{mkdv} to the quasi-canonical form \eqref{mKdV can form}. For any $i \in E$, the \emph{$i^{\rm th}$ mKdV flow} is given by
\begin{equation} \label{mKdV flow}
\frac{\partial}{\partial t_i} \nabla_{\partial_\sigma} = \big[ (g^{-1} p_{-i} g)_+, \nabla_{\partial_\sigma} \big]
\end{equation}
where ${{}^L\!\g} \to {}^L\mathfrak{n}_+$, $X \mapsto X_+$ is the canonical projection onto ${}^L\mathfrak{n}_+$ relative to the Cartan decomposition of ${{}^L\!\g}$. Note that since $p_{-i}$ commutes with $\mathfrak{a}_{\geq 2}$ in the quotient ${{}^L\!\g}/\mathbb{C}\delta$, the expression $g^{-1} p_{-i} g$ does not depend on the ambiguity in the quasi-canonical form described in Theorem \ref{thm: quasi-canonical form}. Now the right hand side of \eqref{mKdV flow} takes values in ${}^L\mathfrak{h}'/\mathbb{C}\delta$ by \cite[Lemma 6.7]{DS} so that equation \eqref{mKdV flow} indeed defines a flow, for each $i \in E$, on the phase space $\MOp_{{{}^L\!\g}/\mathbb{C}\delta}(S^1)^0$ of ${{}^L\!\g}$-mKdV. Furthermore, these flows are mutually commuting \cite[Proposition 6.5]{DS} and define the \emph{${{}^L\!\g}$-mKdV hierarchy}. Similarly, one can also define commuting flows on the phase space $\mathscr M$ of ${{}^L\!\g}$-KdV \cite[\S 6.2]{DS}, giving rise to the \emph{${{}^L\!\g}$-KdV hierarchy}.
The smooth functions $h_i(\sigma)\in C^\infty(S^1,\mathbb{C})$ appearing in the quasi-canonical form \eqref{mKdV can form} are the \emph{densities of the ${{}^L\!\g}$-(m)KdV Hamiltonians}.
Indeed, as in Theorem \ref{thm: quasi-canonical form}, the $h_i$ are defined up to the addition of exact derivatives (now not twisted, since $\varphi = 0$). To get gauge-invariant functions we should integrate them over a cycle, and on the circle that leaves only one possibility. Thus, we obtain the following functions on the phase space $\MOp_{{{}^L\!\g}/\mathbb{C}\delta}(S^1)^0$ of ${{}^L\!\g}$-mKdV, or alternatively on the phase space $\mathscr M$ of ${{}^L\!\g}$-KdV:
\begin{equation*}
\mathscr H_i \coloneqq \int_0^{2 \pi} h_i(\sigma) d\sigma, \qquad i\in E.
\end{equation*}
These are the \emph{${{}^L\!\g}$-(m)KdV Hamiltonians}. They are conserved quantities under the flows of both the ${{}^L\!\g}$-mKdV hierarchy \eqref{mKdV flow} and the ${{}^L\!\g}$-KdV hierarchy \cite[Proposition 6.6]{DS}. Moreover, they generate these flows with respect to certain Poisson brackets on $\MOp_{{{}^L\!\g}/\mathbb{C}\delta}(S^1)^0$ and $\mathscr M$, respectively \cite[Propositions 6.11 and 6.10]{DS}.
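Indeed, the residual ambiguity $h_i(\sigma) \mapsto h_i(\sigma) + \partial_\sigma f(\sigma)$, with $f \in C^\infty(S^1, \mathbb{C})$ single-valued on the circle, manifestly drops out of these integrals:

```latex
\int_0^{2\pi} \big( h_i(\sigma) + \partial_\sigma f(\sigma) \big)\, d\sigma
  = \mathscr H_i + f(2\pi) - f(0) = \mathscr H_i.
```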
It is interesting to note that the equation \eqref{mKdV flow} also defines a flow on $\MOp_{{{}^L\!\g}/\mathbb{C}\delta}(S^1)^\varphi$ for any choice of smooth function $\varphi \in C^\infty(S^1, \mathbb{C})$ since $\rho \not\in {}^L\!\g'$. In fact, one could define ${{}^L\!\g}$-mKdV flows in the analytic setting of the present paper with non-zero twist function $\varphi$. In the case $\varphi = 0$, the ${{}^L\!\g}$-(m)KdV flows have previously been discussed in the analytic setting in \cite{MR3239138} when ${{}^L\!\g} = \widehat{\mathfrak{sl}}_N$ and in \cite{MR3297115} for ${{}^L\!\g}$ of type $\null^2 A_2$.
\subsection{${{}^L\!\g}$-(m)KdV on polygonal domains}
As recalled in \S\ref{sec: smooth DS}, the affine space of Miura ${{}^L\!\g}$-opers $\MOp_{{{}^L\!\g}}(\mathbb{P}^1)^\varphi$ considered in the present article does not quite correspond to the phase space of ${{}^L\!\g}$-mKdV. Indeed, for us the twist function $\varphi$ plays a central role in characterising the affine Gaudin model, just as in the classical case \cite{V17}.
In the present section we show that the simple class of rational Miura ${{}^L\!\g}$-opers in $\MOp_{{{}^L\!\g}}(\mathbb{P}^1)^\varphi$ introduced in \S\ref{sec: class of Miura opers} can, nevertheless, alternatively be described in terms of Miura ${{}^L\!\g}$-opers of the mKdV-type as in \eqref{mkdv}, \emph{i.e.} with zero twist function. The price to pay, however, is that one then needs to work with multivalued Miura ${{}^L\!\g}$-opers defined over a polygonal region of the complex plane (whose shape now encodes the same information as the twist function did). For this reason we prefer to keep the twist function explicit and work with the space $\MOp_{{{}^L\!\g}}(\mathbb{P}^1)^\varphi$.
Let us remark, in passing, that the situation described below is very reminiscent of the relation between the modified sinh-Gordon equation and the sinh-Gordon equation in the context of the massive ODE/IM correspondence \cite{Lukyanov:2010rn}.
Recall the class of Miura ${{}^L\!\g}$-opers of the form \eqref{u Miura op def} introduced in \S\ref{sec: class of Miura opers}. We consider the analogous class of Miura ${{}^L\!\g}/\mathbb{C}\delta$-opers of the form
\begin{subequations} \label{Miura rho explicit}
\begin{equation} \label{Miura rho explicit a}
\nabla = d + p_{-1} dz + \widetilde u(z) dz - \frac{\varphi(z)}{h^\vee} \rho\, dz,
\end{equation}
where $\widetilde u \in ({}^L\mathfrak{h}'/\mathbb{C}\delta)(\mathcal{M})$ is the rational function valued in ${}^L\mathfrak{h}'/\mathbb{C}\delta$ defined as
\begin{equation} \label{Miura rho explicit b}
\widetilde u(z) \coloneqq - \sum_{i=1}^N \frac{\dot{\lambda}_i}{z - z_i} + \sum_{i = 1}^\ell \Bigg( \sum_{j=1}^{m_i} \frac{1}{z - w^i_j} - \sum_{j=1}^{m_0} \frac{1}{z - w^0_j} \Bigg) \alpha_i.
\end{equation}
\end{subequations}
Here, by comparison with the expression \eqref{u Miura op def}, we have split the sum over the simple poles at the Bethe roots into separate sums over the collection of Bethe roots $w^i_j$, $j = 1, \ldots, m_i$ of the same colour $i \in I = \{ 0, \ldots, \ell \}$. We also implicitly identify the subspace ${}^L\mathfrak{h}'/\mathbb{C}\delta$ of $ {}^L\!\g'/\mathbb{C}\delta$ with the span of the simple roots $\{ \alpha_i \}_{i=1}^\ell$ and the subspaces ${{}^L\!\g}_n/\mathbb{C}\delta$ for $n \neq 0$ with ${{}^L\!\g}_n$, as we did in \S\ref{sec: smooth DS}.
Recall from \S\ref{sec: twisted homology coord} that $\varphi(z) = \partial_z \log \mathcal{P}(z)$. Fix a collection of cuts $C \subset \mathbb{P}^1$ between the branch points $z_i$ of the multivalued function $\mathcal{P}^{1/h^\vee}$ on the Riemann sphere $\mathbb{P}^1$ and let $\overline{\mathcal{M}}$ denote the field of meromorphic functions on $\mathbb{P}^1 \setminus C$. From now on we fix a branch of $\mathcal{P}^{1/h^\vee}$ which by abuse of notation we also denote by $\mathcal{P}^{1/h^\vee} \in \overline{\mathcal{M}}$.
By treating \eqref{Miura rho explicit} as a Miura ${{}^L\!\g}/ \mathbb{C}\delta$-oper on $\mathbb{P}^1 \setminus C$ and working over the larger field $\overline{\mathcal{M}} \supset \mathcal{M}$ one can then remove the $\rho$ term by performing a gauge transformation by $\mathcal{P}(z)^{- \rho/h^\vee} \in {}^L \! H(\overline{\mathcal{M}})$. Using the second relation in \eqref{Lie alg a com rel} with $n = -1$ we find
\begin{equation} \label{Miura oper no rho}
\widetilde \nabla \coloneqq \mathcal{P}(z)^{-\rho/h^\vee} \nabla \mathcal{P}(z)^{\rho/h^\vee} = d + p_{-1} \mathcal{P}(z)^{1/h^\vee} dz + \widetilde u(z) dz.
\end{equation}
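In more detail, and assuming, as the reference to \eqref{Lie alg a com rel} suggests, that the relevant grading relation reads $[\rho, p_{-1}] = - p_{-1}$, the two contributions of this gauge transformation work out as

```latex
\mathcal{P}(z)^{-\rho/h^\vee} \, d \, \mathcal{P}(z)^{\rho/h^\vee}
  = d + \frac{\rho}{h^\vee}\, d\log \mathcal{P}(z)
  = d + \frac{\varphi(z)}{h^\vee}\, \rho\, dz,
\qquad
\mathcal{P}(z)^{-\rho/h^\vee}\, p_{-1}\, \mathcal{P}(z)^{\rho/h^\vee}
  = \mathcal{P}(z)^{1/h^\vee}\, p_{-1},
```

so the first contribution cancels the $-\frac{\varphi(z)}{h^\vee}\rho\,dz$ term of $\nabla$, while $\widetilde u(z)$, being valued in the Cartan subalgebra, commutes with $\rho$ and passes through unchanged.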
In order to bring $\widetilde \nabla$ back to the form of a Miura ${}^L\!\g'/\mathbb{C}\delta$-oper, consider the new variable $x$ defined as the indefinite integral
\begin{equation} \label{SC mapping}
x \coloneqq \int \mathcal{P}(z)^{1/h^\vee} dz = \int \prod_{i=1}^N (z - z_i)^{k_i / h^\vee} dz,
\end{equation}
where in the second equality we used the explicit form \eqref{def: P} of the function $\mathcal{P}$.
Suppose, for simplicity, that all the $z_i$, $i = 1, \ldots, N$ are real. It is then convenient to relabel them so that they are ordered as $z_1 < z_2 < \ldots < z_N$. For generic $k_i \in \mathbb{C}$, $i = 1, \ldots, N$, a possible choice of cuts $C \subset \mathbb{P}^1$ is to take the following union of open intervals along the real axis in the $z$-plane
\begin{equation*}
C = (- \infty, z_1) \cup \bigcup_{i=2}^{N-1} (z_i, z_{i+1}) \cup (z_N, \infty).
\end{equation*}
Let $x_i \in \mathbb{P}^1$, for each $i = 1, \ldots, N$, be the image of $z_i$ under the transformation \eqref{SC mapping}, and let $x_{N+1} \in \mathbb{P}^1$ denote the image of $z = \infty$. Suppose the $x_i$, $i=1,\ldots,N+1$ are all distinct and the ordered set $(x_i)_{i=1}^N$ describes the adjacent vertices of a simple polygonal domain $P$ in the $x$-plane, where one of the $x_i$ could be infinite. In this case, the transformation $z \mapsto x$ given in \eqref{SC mapping} defines a Schwarz-Christoffel mapping. It is a biholomorphic map $\mathbb H \to P$, \emph{i.e.} a bijective holomorphic map whose inverse is also holomorphic, from the upper half $\mathbb H \coloneqq \{ z \in \mathbb{C} \,|\, \Im z \geq 0 \}$ of the $z$-plane to the polygon $P$ in the $x$-plane. Each interval $(z_i, z_{i+1})$ for $i =1, \ldots, N-1$ is sent to the straight edge from $x_i$ to $x_{i+1}$, while the semi-infinite intervals $(-\infty, z_1)$ and $(z_N, \infty)$ are sent to the edges connecting $x_{N+1}$ to $x_1$ and $x_N$, respectively. The interior angles $\alpha_i$ of $P$ at each of its vertices $x_i$, for $i =1, \ldots, N+1$, are given by
\begin{equation*}
\alpha_i = \frac{k_i + h^\vee}{h^\vee} \pi, \quad \text{for} \;\; i = 1, \ldots, N \qquad \text{and}\quad \alpha_{N+1} = - \frac{\sum_{i=1}^N k_i + h^\vee}{h^\vee} \pi.
\end{equation*}
Furthermore, the transformation $z \mapsto x$ maps the lower-half $\overline{\mathbb H} = \{ z \in \mathbb{C} \,|\, \Im z \leq 0 \}$ of the $z$-plane to the reflection $P'$ in the $x$-plane of the polygonal domain $P$ through its edge connecting the vertices $x_1$ and $x_2$. The map $z \mapsto x$ in \eqref{SC mapping} therefore sends the Riemann sphere $\mathbb{P}^1$, equipped with the global coordinate $z$ on the dense open subset $\mathbb{C} \subset \mathbb{P}^1$, to another copy of the Riemann sphere $\mathbb{P}^1$, equipped with the global coordinate $x$ on an open and dense subset identified with the interior of the domain $P \cup P'$ in the $x$-plane. The case $N=2$ is depicted in Figure \ref{fig: polygonal region}. A very similar change of coordinate to this particular example was considered in \cite{Lukyanov:2013wra} in the context of the massive ODE/IM correspondence for the Fateev model.
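Two features of this mapping are easy to check numerically. The sketch below (all numerical values of the $z_i$, $k_i$ and $h^\vee$ are made up for illustration and do not come from any particular model) verifies that $\arg(dx/dz)$ is constant along an interval $(z_i, z_{i+1})$, which is why its image is a straight edge, and that the interior angles given above sum to $(N-1)\pi$, as they must for a simple polygon with $N+1$ vertices.

```python
import numpy as np
from fractions import Fraction

# Illustrative data only: N = 2 branch points, with made-up k_i and h^vee.
z1, z2 = 0.0, 4.0
k1, k2, hvee = 0.7, 1.3, 3.0

# arg(dx/dz) is constant on (z_1, z_2), so the image of the interval is straight.
zs = np.linspace(z1 + 1e-3, z2 - 1e-3, 200)
dxdz = (zs - z1 + 0j) ** (k1 / hvee) * (zs - z2 + 0j) ** (k2 / hvee)
assert np.allclose(np.angle(dxdz), np.angle(dxdz[0]))

# Interior angles (in units of pi) of the polygon with N+1 vertices sum to N-1.
ks = [Fraction(7, 10), Fraction(13, 10)]        # k_1, k_2 as exact fractions
angles = [(k + 3) / 3 for k in ks] + [-(sum(ks) + 3) / 3]
assert sum(angles) == len(ks) - 1
```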
\begin{figure}[h]
\raisebox{11mm}{\begin{tikzpicture}[scale=.6]
\filldraw (0,0) node [below right=-.5mm]{\scriptsize $z_1$} circle (2pt);
\filldraw (4,0) node [below left=-.5mm]{\scriptsize $z_2$} circle (2pt);
\draw[blue, -stealth', postaction={decorate,decoration={markings,mark=at position .4 with {\arrow{stealth'}}}}] (2,0) .. controls (-2,-2) and (-2,1.25) .. (2,1.25) node[above]{$\gamma$};
\draw[blue, postaction={decorate,decoration={markings,mark=at position .8 with {\arrow{stealth'}}}}] (2,1.25) .. controls (6,1.25) and (6,-2) .. (2,0);
\draw[blue, postaction={decorate,decoration={markings,mark=at position .8 with {\arrow{stealth'}}}}] (2,0) .. controls (-2,2) and (-3,-1.5) .. (2,-1.5);
\draw[blue] (2,-1.5) .. controls (7,-1.5) and (6,2) .. (2,0);
\draw[gray!30!black, decorate, decoration={snake, segment length=2mm, amplitude=.5mm}] (0,0) -- (-3,0);
\draw[gray!30!black, decorate, decoration={snake, segment length=2mm, amplitude=.5mm}] (4,0) -- (7,0);
\end{tikzpicture}}
\qquad\quad
\begin{tikzpicture}
\filldraw[fill=gray!20,draw=gray!50!black] (0,0) coordinate (a) node[anchor=east]{\scriptsize $x_1$}
-- (2.5,1) coordinate (b) node[anchor=south]{\scriptsize $x_3$}
-- (3,-1) coordinate (c) node[anchor=west]{\scriptsize $x_2$}
-- (1.4, -2.3) coordinate (b2) node[anchor=north]{\scriptsize $x'_3$}
-- cycle;
\draw[gray!50!black] (a) -- (c);
\draw[blue,postaction={decorate,decoration={markings,mark=at position .5 with {\arrow{stealth'}}}}]
($.3*(b2)-.3*(c)+(c)$) -- ($.6*(b)$);
\draw[blue,postaction={decorate,decoration={markings,mark=at position .5 with {\arrow{stealth'}}}}]
($.6*(b2)$) -- node[above]{$\gamma$} ($.7*(b2)-.7*(c)+(c)$);
\draw[blue,postaction={decorate,decoration={markings,mark=at position .3 with {\arrow{stealth'}}}}]
($.7*(b)-.7*(c)+(c)$) -- ($.2*(b2)$);
\draw[blue,postaction={decorate,decoration={markings,mark=at position .2 with {\arrow{stealth'}}}}]
($.2*(b)$) -- ($.3*(b)-.3*(c)+(c)$);
\draw[blue,dotted] ($.3*(b)-.3*(c)+(c)$) -- ($.3*(b2)-.3*(c)+(c)$);
\draw[blue,dotted] ($.7*(b)-.7*(c)+(c)$) -- ($.7*(b2)-.7*(c)+(c)$);
\draw[blue,dotted] ($.2*(b)$) -- ($.2*(b2)$);
\draw[blue,dotted] ($.6*(b)$) -- ($.6*(b2)$);
\node[scale=.6, rotate=20] at ($(a)!0.5!(b)$) {|};
\node[scale=.6, rotate=110] at ($(a)!0.5!(b2)$) {|};
\node[scale=.6, rotate=100] at ($(c)!0.5!(b)$) {||};
\node[scale=.6, rotate=40] at ($(c)!0.5!(b2)$) {||};
\draw[-stealth'] (2.7,-1.7) node[right]{\small $P'$} to [bend right=10] (1.5,-1);
\draw[-stealth'] (3.1,.4) node[right]{\small $P$} to [bend right=10] (2.2,0);
\end{tikzpicture}
\caption{A Pochhammer contour $\gamma$ in the case of two marked points $z_1$, $z_2$ in the $z$-plane (left) and its image in the polygonal region $P \cup P'$ in the $x$-plane (right). The edge $x_i x_3$ is identified with $x_i x'_3$ for $i = 1, 2$.}
\label{fig: polygonal region}
\end{figure}
Coming back to the connection \eqref{Miura oper no rho}, the pullback of the meromorphic differential $\widetilde u(z) dz$ by the inverse transformation $\mathbb{P}^1 \rightarrow \mathbb{P}^1$, $x \mapsto z$ gives a multivalued differential $\widehat u(x) dx$, where $\widehat u$ is the multivalued function on $\mathbb{P}^1$ given in the interior of the domain $P \cup P'$ of the $x$-plane by $\widehat u(x) \coloneqq \mathcal{P}(z(x))^{-1/h^\vee} \widetilde u(z(x))$. Here we have used the fact that $dx/dz = \mathcal{P}(z)^{1/h^\vee}$.
Therefore \eqref{Miura oper no rho} can now be re-expressed as a multivalued Miura ${}^L\!\g'/\mathbb{C}\delta$-oper on $\mathbb{P}^1$ which is meromorphic on $\mathbb{P}^1 \setminus \{ x_i \}_{i=1}^{N+1}$ and given in the interior of the polygonal domain $P \cup P'$ of the $x$-plane by
\begin{equation} \label{polygonal mKdV}
\widetilde \nabla = d + p_{-1} dx + \sum_{i=1}^\ell \widehat u_i(x) \alpha_i dx.
\end{equation}
Here we wrote $\widehat u(x) = \sum_{i=1}^\ell \widehat u_i(x) \alpha_i$ in the basis of simple roots $\{ \alpha_i \}_{i=1}^\ell$. Comparing the above expression \eqref{polygonal mKdV} with the connection \eqref{mkdv}, it is tempting to regard the $\widehat u_i(x)$ for $i = 1, \ldots, \ell$ as classical ${{}^L\!\g}$-mKdV fields on the interior of the polygonal domain $P \cup P'$ in the $x$-plane.
Just as in \S\ref{sec: smooth DS}, one could also consider bringing the Miura ${{}^L\!\g}$-oper \eqref{polygonal mKdV} to a form analogous to \eqref{KdV connection} in the smooth setting and define classical ${{}^L\!\g}$-KdV fields $\widehat v_r(x)$, $r \in \bar E$ on the interior of the polygonal domain $P \cup P'$ as the coefficients of the $\bar p_r$, $r \in \bar E$, \emph{i.e.}
\begin{equation} \label{polygonal KdV}
d + p_{-1} dx + \sum_{r\in \bar E} \widehat v_r(x) \bar p_r dx.
\end{equation}
Suppose that the collection of Bethe roots $w^i_j$, $j = 1, \ldots, m_i$ for $i \in \{ 1, \ldots, \ell \}$ satisfy the Bethe equations \eqref{Bethe equations}, or more explicitly
\begin{equation*}
- \sum_{k=1}^N \frac{(\lambda_k|\alpha_i)}{w^i_j - z_k} + \sum_{i' \in I} \sum_{\substack{k =1\\ (i',k) \neq (i,j)}}^{m_{i'}} \frac{(\alpha_{i'}|\alpha_i)}{w^i_j - w^{i'}_k} = 0, \qquad j= 1, \ldots, m_i,
\end{equation*}
for every $i \in \{ 1, \ldots, \ell \}$. Then it follows from Proposition \ref{prop: Miura oper BAE} and the explicit form \eqref{Miura rho explicit} of the connection $\nabla$ we started with in the $z$-plane, that the $\widehat v_r(x)$, $r \in \bar E$ are holomorphic at the images $x(w^i_j)$ of these Bethe roots under the Schwarz-Christoffel transformation \eqref{SC mapping}. However, each $\widehat v_r(x)$, $r \in \bar E$ will generically still be singular at the images $x(w^0_j)$, $j = 1, \ldots, m_0$, of the Bethe roots of ``colour'' 0. (By contrast, let us stress that there exists a quasi-canonical form of the affine ${{}^L\!\g}$-oper in which \emph{all} Bethe roots are erased, as in Corollary \ref{cor: Bethe equations}.)
Multivalued ${{}^L\!\g}$-opers of the form \eqref{polygonal KdV} but with a certain irregular singularity were conjectured in \cite{FFsolitons} to describe the spectrum of quantum $\dot{\mathfrak{g}}$-KdV. Specifically, it was shown that in the case $\dot{\mathfrak{g}} = \widehat{\mathfrak{sl}}_2$ such ${{}^L\!\g}$-opers are equivalent to the Schr\"odinger operators with `monster' potentials used to describe the spectrum of both local and non-local integrals of motion in quantum KdV theory via the ODE/IM correspondence \cite{MR1733841,BLZ4, BLZ5,DDT}.
In the ODE/IM setting, it was recently shown in \cite{Frenkel:2016gxg} and \cite{MRV1, MRV2} that a certain $Q\widetilde{Q}$-system can be extracted from both sides of the `KdV-oper' correspondence proposed in \cite{FFsolitons}, providing strong evidence in support of the conjecture.
By contrast, the proposal of the present work is a direct approach to establishing a correspondence between the spectra of quantum Gaudin models of affine type and opers of Langlands-dual affine type. It relies on the idea that, in close parallel with the well-established correspondence in finite types, the spectrum can be obtained from a (quasi-)canonical form of the (affine) oper.
\subsection{Classical limit of higher Gaudin Hamiltonians} \label{sec: classical lim}
One of the motivations for Conjectures \ref{conj: higher Ham} and \ref{conj: e-val op} comes from the structure of higher local integrals of motion in \emph{classical} affine Gaudin models \cite{V17}, constructed in \cite{LMV17} when the underlying finite-dimensional simple Lie algebra $\overline{\dot{\mathfrak{g}}}$, cf. \S\ref{sec: ao}, is of classical type.
Specifically, to every exponent $i \in E$ and every zero $x$ of the twist function, \emph{i.e.} an $x \in \mathbb{C}$ such that $\varphi(x) = 0$, is assigned an element $Q^x_i$ of a certain completion $\hat S_{\bm k}(\dot{\mathfrak{g}}'^{\oplus N})$ of the quotient of the symmetric algebra of $\dot{\mathfrak{g}}'^{\oplus N}$ by the ideal generated by the elements $\mathsf k^{(j)} - k_j$, $j = 1, \ldots, N$. These were obtained by generalising the approach of \cite{Evans:1999mj}, where certain $\overline{\dot{\mathfrak{g}}}$-invariant homogeneous polynomials $P_i : \overline{\dot{\mathfrak{g}}}^{\times (i+1)} \to \mathbb{C}$ of degree $i+1$ were constructed for each $i \in E$. Extending these to $\dot{\mathfrak{g}}'/\mathbb{C} \mathsf k \cong_\mathbb{C} \mathcal L \overline{\dot{\mathfrak{g}}}$ as $P_i(x_m, \ldots, y_n) \coloneqq P_i(x, \ldots, y) \delta_{m + \ldots + n, 0}$ for any $x, \ldots, y \in \overline{\dot{\mathfrak{g}}}$ and $m, \ldots, n \in \mathbb{Z}$, they can be applied to the first tensor factor of the local Lax matrix \eqref{local Lax matrix}. The resulting $\hat S_{\bm k}(\dot{\mathfrak{g}}'^{\oplus N})$-valued meromorphic functions on $\mathbb{P}^1$ are then evaluated at any zero $x$ of the twist function to produce the charges $Q^x_i$, $i \in E$. The collection of these charges was shown in \cite{LMV17} to form a Poisson commutative subalgebra of $\hat S_{\bm k}(\dot{\mathfrak{g}}'^{\oplus N})$, \emph{i.e.} $\{ Q^x_i, Q^{x'}_j \} = 0$ for every $i, j \in E$ and any pair of zeroes $x, x'$ of the twist function.
We also expect the operators $\mathcal S_j(z)$, $j \in E$ in Conjecture \ref{conj: higher Ham} to be built from the local Lax matrix $L(z)$ in a similar way. Furthermore, reintroducing Planck's constant $\hbar$ to take the classical limit, we expect the dependence of the integral \eqref{integrated operators} on $\hbar$ to come in the form
\begin{equation*}
\int_{\gamma} \mathcal{P}(z)^{-i / \hbar h^\vee} \mathcal S_i(z) dz.
\end{equation*}
In the classical limit $\hbar \to 0$ such an integral localises, by the steepest descent method, at the critical points of the function $\mathcal{P}(z)$. But these are precisely the zeroes of the twist function $\varphi(z) = \partial_z \log \mathcal{P}(z)$. Moreover, for generic $z_i$, $i = 1, \ldots, N$ the number of zeroes of the twist function is $N-1$, which coincides with the number of linearly independent cycles from Corollary \ref{cor: opint}.
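The counting of zeroes is elementary to check: clearing denominators in $\varphi(z) = \sum_{i=1}^N k_i/(z - z_i)$ leaves a polynomial of degree $N-1$ for generic levels $k_i$. A minimal numerical sketch (the marked points and levels below are made up, chosen only to be generic):

```python
import numpy as np
from numpy.polynomial import Polynomial

# Illustrative data only: N = 3 marked points z_i and generic levels k_i.
zi = [0.0, 1.0, 2.5]
ki = [0.8, -1.7, 2.1]

# Numerator of phi(z) = sum_i k_i/(z - z_i) after clearing denominators:
# a polynomial of degree N-1 whenever sum_i k_i != 0.
num = Polynomial([0.0])
for i in range(len(zi)):
    term = Polynomial([ki[i]])
    for j in range(len(zi)):
        if j != i:
            term *= Polynomial([-zi[j], 1.0])
    num += term
roots = num.roots()
assert len(roots) == len(zi) - 1          # N - 1 zeroes of the twist function

# Each root is indeed a zero of phi.
phi = lambda z: sum(k / (z - z0) for k, z0 in zip(ki, zi))
assert all(abs(phi(r)) < 1e-9 for r in roots)
```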
We thus expect that the higher affine Gaudin Hamiltonians $\hat Q^\gamma_i$ of Conjecture \ref{conj: higher Ham} provide a quantisation of the local integrals of motion $Q^x_i$ for classical affine Gaudin models in \cite{LMV17}.
Strictly speaking, the classical field theories correspond to classical affine Gaudin models with reality conditions and various other generalisations \cite{V17}. To understand the corresponding quantum field theories in the present framework, one would need to extend the constructions above to, in particular, the cyclotomic case and the case of irregular singularities. In finite types, such generalisations of quantum Gaudin models were studied in \cite{VY16,VY17a} and \cite{FFT, VY17} respectively.
\subsection{Two-point case} One natural arena in which to test the conjectures in \S\ref{sec: conjectures} is the special case of $N=2$ marked points and $\dot{\mathfrak{g}}'= \widehat{\mathfrak{sl}}_2$. As was noted in \cite[\S6.4]{FFsolitons}, in that case the GKO coset construction \cite{GKO} means that one already has candidates for the higher affine Gaudin Hamiltonians, namely the Integrals of Motion of quantum KdV acting in multiplicity spaces. With that in mind, it is interesting to note that Bethe equations of a two-point Gaudin model of type $\widehat{\mathfrak{sl}}_2$ appeared in \cite[\S7.3]{FJM} as limits of Bethe equations for quantum toroidal algebras.
The frictional force experienced by a quantized flux line moving in a conventional superconductor arises primarily from induced vortex electric fields coupling to charge excitations \emph{within} the vortex core. This was first captured by Bardeen and Stephen,\cite{Bardeen:1965p151} who treated the vortex core as a cylinder of normal metal embedded in a superconducting background. Their theory is applicable to conventional superconductors for two reasons: the vortex cores are large and support a nearly continuous spectrum of single-particle states; and $s$-wave pairing symmetry results in a low density of extended states surrounding the vortex cores. In cuprate superconductors the opposite situation holds: small vortex cores contain at most a few discrete states,\cite{MaggioAprile:1995p3014,Soininen:1994iu} with a continuum of low lying states \emph{outside} the vortex cores due to the nodes in the $d$-wave energy gap.\cite{hardy93,Scalapino:1995p741,Ding:1996p3019} Bardeen--Stephen theory is therefore unlikely to apply in its original form, but how it should be extended to the cuprates is not at all obvious.
The dissipation associated with a moving flux line is parameterized by a vortex viscosity, $\eta$, giving the linear coefficient of friction per unit length of flux line. Vortex viscosity, like flux-flow resistivity, is usually thought of as a static property of a type-II superconductor. However, $\eta$ should more generally be regarded as a frequency-dependent response function,\cite{Choi:1994ga} in a manner similar to the extension of electrical conductivity to high frequencies. We will show that $\eta$ has a very strong frequency dependence in ortho-II \ybco{6.52}, and that this frequency dependence carries the fingerprints of the microscopic processes responsible for the vortex dissipation, namely the charge dynamics of $d$-wave quasiparticles in the superconducting state outside the vortex cores.
As well as being interesting in its own right,\cite{Bardeen:1965p151,Nozieres:1966p667,Larkin:1976p3015} vortex viscosity has additional significance in the cuprates: low superfluid density \cite{Uemura:1989p962} makes these materials prone to phase disordering by vortex--antivortex fluctuations,\cite{Emery:1995p364,franz01,franz02,herbut02, herbut02a,herbut05,franz06,Tesanovic:2008p2290} with vortex viscosity an important parameter in theoretical models of these effects.\cite{Geshkenbein:1998p3010,Ioffe:2002p717,Lee:2003p7,Melikyan:2005p3011,Nikolic:2006p3012,Bilbro:2011p3009} There is also the possibility that the viscous response contains dynamical signatures that could identify whether vortex fluctuations are occurring in the pseudogap regime; this would provide information complementary to other experiments that may be probing local pairing and phase-disordered superconductivity.\cite{Corson:1999p716,Xu:2000p609,Wang:2005p2400,Wang:2006p185,Bilbro:2011p2722}
In this paper we report a comprehensive study of the vortex dynamics of high purity, ortho-II-ordered \ybco{6.52}.
Remarkably, the observed $\eta(\omega,T)$ mimics the behaviour of the zero-field microwave conductivity, $\sigma_\mathrm{qp}(\omega,T)$, in which it has been established that the dominant contribution comes from the charge dynamics of nodal $d$-wave quasiparticles.\cite{Bonn:1992p3021,HIRSCHFELD:1994p570,Hosseini:1999p383,Turner:2003p331,Harris:2006p388} Our data therefore suggest that bulk $d$-wave quasiparticles \emph{outside} the vortex cores are the primary source of frictional force on vortices in this material, in marked contrast to the situation in conventional $s$-wave materials. One consequence is that $\eta(T)$, like $\sigma_\mathrm{qp}(T)$, has a characteristic peak at intermediate temperatures, due to a competition between quasiparticle lifetime and quasiparticle density. This leads to low temperature upturns in the flux-flow resistivity, $\rho_\mathrm{ff}(T) \propto 1/\eta(T)$, which, on closer inspection, are seen to follow a $\log(1/T)$ form, similar to the resistivity observed in the pseudogap regime of the underdoped cuprates.\cite{Ando:1995p148, Boebinger:1996p147}
The paper is organized as follows. We start with the standard dynamical model of a flux line, and discuss the observability of the vortex Hall effect in microwave measurements. Vortex pinning is then introduced into the dynamical model, through a redefinition of the vortex viscosity. Next, we consider a more general situation, in which the vortex viscosity itself is frequency dependent, and show that this leads to a dynamical contribution to the effective pinning constant. We then present detailed measurements of the surface impedance of a high quality single crystal of ortho-II \ybco{6.52} as a function of field, temperature and frequency. From these data we obtain the vortex viscosity, pinning constant, depinning frequency and flux-flow resistivity. As well as supplying new insights into the origin of the various forces experienced by the vortices, the experiments provide a stringent test of the use of single-vortex dynamical models in the interpretation of high frequency measurements.
\section{Vortex dynamics}
\label{Sec:vortex_dynamics}
\subsection{Hall effect in a conventional metal}
The vortex Hall effect has useful parallels with that of a normal metal, so we begin by considering the magnetoconductivity of a metal in which the carriers have density $n$, mass $m$ and charge $q$. If scattering is treated in the relaxation-time approximation, the steady-state force equation is
\begin{equation}
q \left(\mathbf{E} + \mathbf{v} \times \mathbf{B}\right) - \frac{m \mathbf{v}}{\tau} = 0\;,
\label{Eq:normalmetal}
\end{equation}
where $\mathbf{v}$ is the carrier drift velocity and $\tau$ the relaxation time. We let the magnetic field $\mathbf{B} = \left(0,0,B_z \right)$. The resistivity tensor that relates electric field, $\mathbf{E}$, to the current density, $\mathbf{j} = n q \mathbf{v}$, is
\begin{equation}
\bm \rho = \rho_0 \left(\begin{array}{ccc}1 & -\omega_c \tau & 0 \\\omega_c \tau & 1& 0\\ 0 & 0 & 1\end{array}\right)\;,
\end{equation}
where $\omega_c = q B/m$ is the cyclotron frequency and \mbox{$\rho_0 = m/n q^2 \tau$} the resistivity. The Hall angle, $\theta_H$, measures the deflection of the charge currents by the magnetic field: $\tan(\theta_H) = \omega_c \tau$. The magnetoconductivity tensor is
\begin{equation}
\bm \sigma = \bm \rho^{-1} = \frac{\sigma_0}{1\!+\! \left(\omega_c \tau\right)^2} \left(\begin{array}{ccc}1 & \omega_c \tau & 0 \\-\omega_c \tau & 1 & 0\\ 0 & 0 & 1\! +\! \left(\omega_c \tau\right)^2\end{array}\right)\;,
\label{Eq:magnetoconductivity}
\end{equation}
where $\sigma_0 = 1/\rho_0$. Note that $\sigma_{xx}$ and $\sigma_{yy}$ depend on $\omega_c$, while the diagonal components of $\bm \rho$ do not. Under conditions of constant current bias $\mathbf{j}$, the power dissipation per unit volume is
\begin{equation}
\mathbf{j}^\top\bm{\rho}\,\mathbf{j} = \rho_0 j^2\;,
\end{equation}
independent of $\omega_c$. On the other hand, if a constant electric field $\mathbf{E}$ is applied transverse to $\mathbf{B}$, the power dissipation per unit volume is
\begin{equation}
\mathbf{E}^\top\bm{\sigma}\,\mathbf{E} = \frac{\sigma_0}{1 + \left(\omega_c \tau\right)^2} E^2\;,
\end{equation}
which \emph{is} a function of $\omega_c$. Whether a longitudinal transport experiment is sensitive to the Hall effect therefore depends crucially on whether constant current or constant electric field is applied.
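The tensor algebra above is straightforward to verify symbolically. The following sketch (with $w$ standing for $\omega_c \tau$) checks the inversion leading to Eq.~\eqref{Eq:magnetoconductivity} and the two dissipation formulas:

```python
import sympy as sp

rho0, w = sp.symbols('rho_0 w', positive=True)   # w stands for omega_c * tau
rho = rho0 * sp.Matrix([[1, -w, 0], [w, 1, 0], [0, 0, 1]])
sigma = rho.inv()
expected = (1 / rho0) / (1 + w**2) * sp.Matrix([[1, w, 0], [-w, 1, 0], [0, 0, 1 + w**2]])
assert (sigma - expected).applyfunc(sp.simplify) == sp.zeros(3, 3)

# Constant-current dissipation is independent of w...
jx, jy = sp.symbols('j_x j_y', real=True)
j = sp.Matrix([jx, jy, 0])
power_j = sp.simplify((j.T * rho * j)[0])
assert sp.simplify(power_j - rho0 * (jx**2 + jy**2)) == 0

# ...while constant-field transverse dissipation is suppressed by 1 + w^2.
Ex, Ey = sp.symbols('E_x E_y', real=True)
E = sp.Matrix([Ex, Ey, 0])
power_E = sp.simplify((E.T * sigma * E)[0])
assert sp.simplify(power_E - (1 / rho0) * (Ex**2 + Ey**2) / (1 + w**2)) == 0
```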
\subsection{Vortex Hall effect}
The starting point for much work on vortex dynamics is the vortex equation of motion\cite{Nozieres:1966p667,Vinen:1967,JIGittleman:1968p172,KOPNIN:1976ua,KOPNIN:1991va,Hsu:1993ih,Vinokur:1993to,Choi:1994ga,BLATTER:1994p494,Parks:1995p189,GOLOSOVSKY:1996p1}
\begin{equation}
\eta \mathbf{v}_v + \alpha_H \mathbf{v}_v \times \mathbf{\hat z} = \Phi_0 \mathbf{j} \times \mathbf{\hat z}\;,
\label{Eq:vortex}
\end{equation}
where $\Phi_0 = h/2 e$ is the superconducting flux quantum, $\mathbf{j}$ the applied transport current density, $\mathbf{v}_v$ the vortex velocity, $\eta$ the vortex viscosity, $\alpha_H$ the Hall coefficient, and we assume that magnetic field is applied along the direction $\mathbf{\hat z}$. (Pinning effects are not included at this point: we show in Sec.~\ref{Sec:complex_viscosity_model} how pinning can be folded into a complex generalization of the vortex viscosity. Vortex inertia is also ignored, as it is negligible in the microwave frequency range. Similarly, thermal flux creep is not included in models of microwave-frequency dynamics, as the creep rate is expected to be much lower than the measurement frequency.\cite{GOLOSOVSKY:1996p1})
By requiring that the vortex dynamics be consistent with the magnetoconductivity of the electron fluid, Blatter \emph{et al.}\ argue that the viscosity and Hall coefficient must have the form:\cite{BLATTER:1994p494}
\begin{eqnarray}
\eta & = & \eta_0 \frac{1}{1 + \left(\omega_c \tau\right)^2}\label{Eq:viscosity}\\
\alpha_H & = & \eta_0 \frac{\omega_c \tau}{1 + \left(\omega_c \tau\right)^2}\;,\label{Eq:HallCoefficient}
\end{eqnarray}
where $\eta_0$ is the bare viscosity. The vortex Hall angle, $\theta_H$, is given by $\tan(\theta_H) = \alpha_H/\eta = \omega_c \tau$. Here $\omega_c = qB/m$ is the cyclotron frequency corresponding to the relevant magnetic field scale in the vortex core and $\tau$ is the relaxation time of the charge carriers responsible for damping the vortex motion. Similar results are obtained from microscopic calculations.\cite{KOPNIN:1976vi,KOPNIN:1976ua,KOPNIN:1991va} Note that $\eta$ and $\alpha_H$ have a form similar to that of $\sigma_{xx}$ and $\sigma_{xy}$ in the magnetoconductivity tensor of a normal metal, Eq.~\ref{Eq:magnetoconductivity}.
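A small symbolic check (with $w$ standing for $\omega_c \tau$) of two identities implied by Eqs.~\eqref{Eq:viscosity} and \eqref{Eq:HallCoefficient}: the Hall angle relation, and the identity $\eta^2 + \alpha_H^2 = \eta_0\, \eta$ that underlies the cancellation encountered below.

```python
import sympy as sp

eta0, w = sp.symbols('eta_0 w', positive=True)   # w stands for omega_c * tau
eta = eta0 / (1 + w**2)
alphaH = eta0 * w / (1 + w**2)

assert sp.simplify(alphaH / eta - w) == 0                   # tan(theta_H) = w
assert sp.simplify(eta**2 + alphaH**2 - eta0 * eta) == 0    # eta^2 + alpha_H^2 = eta_0 * eta
```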
From the vortex equation of motion, Eq.~\ref{Eq:vortex}, we can read off the vortex viscosity tensor,
\begin{equation}
\bm \eta = \left(\begin{array}{cc}\eta & \alpha_H \\-\alpha_H & \eta\end{array}\right)\;,
\end{equation}
defined so that
\begin{equation}
{\bm \eta}\, \mathbf{v}_v = \Phi_0 \left(\mathbf{j} \times \mathbf{\hat z}\right)_t\;,
\end{equation}
where $(...)_t$ denotes the component of the vector transverse to the magnetic field and $\mathbf{v}_v$ now refers to the transverse component of vortex velocity. To obtain the effective resistivity, we solve for the vortex velocity
\begin{equation}
\mathbf{v}_v = \Phi_0 \bm \eta^{-1}\, (\mathbf{j} \times \mathbf{\hat z})_t
\end{equation}
and use the Josephson relation for moving vortices,\cite{Josephson:1965bo}
\begin{equation}
\mathbf{E} = \mathbf{B} \times \mathbf{v}_v\;,
\label{Eq:Josephson}
\end{equation}
to obtain the average electric field:
\begin{eqnarray}
\mathbf{E} & = & B \mathbf{\hat z} \times \mathbf{v}_v\\
& = & B \Phi_0 \mathbf{\hat z}\times(\bm \eta^{-1}\, \mathbf{j}_t) \times \mathbf{\hat z}\\
& = & B \Phi_0 \bm \eta^{-1}\, \mathbf{j}_t\;.
\end{eqnarray}
(Here it is understood that the cross product of $\mathbf{\hat z}$ with a 2D transverse vector is a $\pi/2$ rotation in the transverse plane.) The vortex resistivity is then
\begin{equation}
\bm \rho_v = B \Phi_0 \bm \eta^{-1} = \frac{B \Phi_0}{\eta^2 + \alpha_H^2} \left(\begin{array}{cc}\eta & -\alpha_H \\\alpha_H & \eta\end{array}\right)\;.
\end{equation}
Under conditions of constant current density $\mathbf{j}$, the power dissipation per unit volume is
\begin{equation}
\mathbf{j}^\top\bm{\rho}_v\,\mathbf{j} = \Phi_0 B j^2 \frac{\eta}{\eta^2 + \alpha_H^2}\;,
\end{equation}
equivalent to that for an effective vortex viscosity \mbox{$\eta^\ast = \eta + \alpha_H^2/\eta$}. However, when we substitute for the field dependences of $\eta$ and $\alpha_H$, given by Eqs.~\ref{Eq:viscosity} and \ref{Eq:HallCoefficient}, we obtain a cancellation:
\begin{equation}
\eta^\ast = \eta_0 \left( \frac{1}{1 + \left(\omega_c \tau\right)^2} + \frac{\left(\omega_c \tau\right)^2}{1 + \left(\omega_c \tau\right)^2} \right) = \eta_0\;.
\end{equation}
That is, the relevant viscosity is the \emph{bare viscosity}, independent of the vortex Hall angle. As pointed out by Golosovsky \emph{et al.},\cite{GOLOSOVSKY:1996p1} both the direction of the vortex motion and the magnitude of the viscosity are changed but, if the system is driven by an external source of constant current, the effects cancel and the effective viscosity is the same as if $\theta_H =0$. The situation we have in the microwave experiments is indeed one of constant $\mathbf{j}$: the superconducting sample has a surface impedance in the m$\Omega$ range, tiny compared to the characteristic impedance of free space. The sample is placed into the microwave resonator at a magnetic field antinode (electric field node), and the microwave $H$ field imposes a constant surface current density.
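This cancellation is easy to verify numerically. The sketch below (with an arbitrary illustrative $\eta_0$) builds the vortex resistivity tensor, computes the dissipation at constant current density, and recovers $\eta^\ast = \eta_0$ for any value of $\omega_c\tau$:

```python
import numpy as np

PHI0 = 2.067833848e-15   # flux quantum, Wb

def vortex_resistivity(eta, alpha_H, B):
    """rho_v = B * Phi0 * (inverse of the vortex viscosity tensor)."""
    eta_tensor = np.array([[eta, alpha_H], [-alpha_H, eta]])
    return B * PHI0 * np.linalg.inv(eta_tensor)

def effective_viscosity(eta0, x, B=5.0):
    """Viscosity inferred from dissipation at constant current, x = omega_c*tau."""
    eta = eta0 / (1 + x**2)
    alpha_H = eta0 * x / (1 + x**2)
    rho = vortex_resistivity(eta, alpha_H, B)
    j = np.array([1.0, 0.0])      # unit current density along x
    dissipation = j @ rho @ j     # = B*Phi0*j^2 * eta / (eta^2 + alpha_H^2)
    return B * PHI0 / dissipation

# The Hall-angle dependence cancels: eta* = eta0 for any omega_c*tau
eta_stars = [effective_viscosity(1.0e-6, x) for x in (0.0, 0.3, 2.0)]
```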
\subsection{Pinning and complex vortex viscosity}
\label{Sec:complex_viscosity_model}
In any real superconductor, local material imperfections lead to pinning, preventing the free flow of flux lines.\cite{Larkin:1979ta} Pinning effects can be particularly strong in the cuprate superconductors, especially at low temperatures, making flux-flow resistivity difficult to measure. One approach is to use a DC current in excess of the critical current to push the vortices into a state of free flux-flow.\cite{Kim:1965vl,Kunchur:1993ie} However, this is a nonlinear method and must be applied and interpreted carefully. An alternative approach, which has been used extensively,\cite{JIGittleman:1968p172,Owliaei:1992tt, Pambianchi:1993jv, DCMorgan1993, Morgan:1994p2404, Revenaz:1994fg, GOLOSOVSKY:1994p169, Parks:1995p189, Powell:1996tb, Belk:1997hn, Ghosh:1997p170, Hanaguri:1999fn, Silva:2000ia, Tsuchiya:2001p200, MATSUDA:2002p2718, Silva:2004bh, BMorgan2005, Pompeo:2008cd, Narduzzo:2008io, Ikebe:2009it, Pompeo:2008p2717, Zhou2009} is to probe the \emph{reversible} vortex motion --- the \emph{linear} response of the vortices to a high frequency driving force. An AC current shakes the flux lines harmonically about their equilibrium positions and, when the measurements are carried out in a manner that is sensitive to both magnitude and phase, allows dissipative and reactive forces to be resolved separately. The technique permits a clean determination of the viscous and elastic parameters and, since these make contributions of comparable magnitude in the GHz range, is ideally carried out at microwave frequencies.
The usual way to include pinning and elastic forces in the vortex equation of motion is through a pinning force of the form $F_p = - \alpha_p x$, where $\alpha_p$ is the effective pinning constant,\cite{JIGittleman:1968p172} and $x$ is the displacement of the vortex from equilibrium. This harmonic approximation should work well in the linear-response regime, in which the displacement of the vortex is small compared to the inter-vortex spacing (typically 1~\AA\ vs.\ 100~\AA\ in our experiments). However, a concern now arises over whether we can continue to describe the dynamics of the system in terms of a single, \emph{average} vortex: in contrast to viscous and electromagnetic forces, which originate from interactions with the electron fluid on a microscopic scale, the elastic forces on a vortex arise from random material imperfections, potentially giving rise to a broad distribution of pinning constants. Statistical averages over such a distribution do not necessarily correspond to the behaviour of an \emph{average} vortex.
Nevertheless, there are several situations in which the distribution of local pinning constants should be narrowly defined, and therefore the single-vortex approach valid. At high fields, in the collective-pinning regime, the density of flux lines is much greater than the density of pinning sites: vortices interact predominantly with one another, and only indirectly with the pinning sites, smoothing out point-to-point variations in local pinning constant. In addition to this, there are two other favourable situations, specific to high frequency experiments. First, as we will show below, the vortex viscosity can have a substantial imaginary component, which acts as an additional contribution to the effective pinning constant and can even dominate over the elastic component in the microwave frequency range. This dynamical contribution to pinning arises from interactions with the electron fluid and can be assumed to be the same everywhere in the sample. Secondly, in a high frequency experiment, the only vortices visible to the microwaves are close to the sample surface. (Microwave cavity perturbation is a power-absorption technique, so the relevant length scale is \emph{half} the RF penetration depth.) In clean materials, the interaction with the sample surface can become the dominant elastic force for these near-surface vortices.\cite{Bean:1964wr} This type of pinning is predominantly electromagnetic, arising from the interaction of the vortex with its image vortex. In our geometry this provides an intrinsic pinning mechanism that acts along the \emph{entire length} of the flux lines, rather than at particular points.
We therefore proceed with the single-vortex approach, focussing on a one-dimensional model of vortex motion that includes viscous, pinning and Lorentz forces, but ignores the vortex Hall effect, for reasons discussed in the previous section. In the simplest version of this model we have
\begin{equation}
\eta v + \alpha_p x = \Phi_0 j(t)\;.
\end{equation}
Here the vortex velocity, $v$, is the time derivative of the vortex displacement, $x$. Using a phasor representation for time-harmonic quantities, in which the transport current density is $j(t) = \mbox{Re}\{J_0 \exp(\mathrm{i} \omega t)\}$, we have
\begin{equation}
\left(\eta + \frac{\alpha_p}{\mathrm{i} \omega} \right)\tilde v \mathrm{e}^{\mathrm{i} \omega t} = \Phi_0 J_0 \mathrm{e}^{\mathrm{i} \omega t}\;,
\label{Eq:phasorforce}
\end{equation}
where $\tilde v$ is the phasor vortex velocity. We see that the inclusion of pinning can be incorporated into a redefined, \emph{complex} viscosity, $\tilde \eta = \eta + \alpha_p/\mathrm{i} \omega$.
In addition to the pinning term, there is the possibility that the bare viscosity itself has frequency dependence --- something that indeed occurs in ortho-II \ybco{6.52}. In this case, $\tilde \eta = \eta(\omega) + \alpha_p/\mathrm{i} \omega$ and, because we are dealing with a physical response function, the bare viscosity must have real and imaginary parts, $\eta(\omega) = \eta^\prime(\omega) - \mathrm{i} \eta^{\prime\prime}(\omega)$. Causality requires that these be related by Kramers--Kronig relations, \emph{e.g.},
\begin{equation}
\eta^{\prime\prime}(\omega) = \frac{2 \omega}{\pi} \mathcal{P}\int_0^\infty \frac{\eta^\prime(\omega^\prime)}{\omega^2 - {\omega^\prime}^2}\mathrm{d} \omega^\prime\;,
\end{equation}
where $\mathcal{P}$ denotes the principal part of the integral. The main physical effect is that in systems with a strong frequency dependence of $\eta^\prime$ (\emph{i.e.}, in systems with long-lived charge excitations) the apparent pinning constant will depend on frequency:
\begin{equation}
\alpha_\mathrm{eff}(\omega) = \alpha_p + \omega \eta^{\prime\prime}(\omega)\;.
\end{equation}
As an example, the model we will use below to describe the viscosity of ortho-II \ybco{6.52},
\begin{equation}
\eta^\prime(\omega) = \eta_0 + \eta_1 \frac{1}{1 + \omega^2/\Gamma^2}\;,
\label{Eq:model_viscosity}
\end{equation}
must, by causality, be accompanied by a pinning constant of the form
\begin{equation}
\alpha_\mathrm{eff}(\omega) = \alpha_p + \eta_1 \frac{\omega^2/\Gamma}{1 + \omega^2/\Gamma^2}\;.
\label{Eq:model_pinning}
\end{equation}
We will see that this model provides an excellent description of the data.
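The pairing of Eqs.~\ref{Eq:model_viscosity} and \ref{Eq:model_pinning} can be checked by noting that both are generated by a single causal, Drude-like response, $\eta(\omega) = \eta_0 + \eta_1/(1 + \mathrm{i}\omega/\Gamma)$. A short numerical sketch (parameter values are purely illustrative):

```python
import math

def complex_viscosity(omega, eta0, eta1, Gamma):
    """Causal Drude-like viscosity whose real and (minus) imaginary parts
    reproduce the model viscosity and pinning-constant forms above."""
    return eta0 + eta1 / (1 + 1j * omega / Gamma)

eta0, eta1, Gamma = 2.0e-7, 1.2e-6, 2 * math.pi * 5e9   # illustrative values
omega = 2 * math.pi * 9.12e9
eta = complex_viscosity(omega, eta0, eta1, Gamma)
eta_p, eta_pp = eta.real, -eta.imag    # convention: eta = eta' - i*eta''
pinning_shift = omega * eta_pp         # dynamical contribution to alpha_eff
```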
Solving Eq.~\ref{Eq:phasorforce} for the phasor vortex velocity, we have
\begin{equation}
\tilde v = \frac{\Phi_0}{\eta(\omega) + \frac{\alpha_p}{\mathrm{i} \omega}} J_0 = \frac{\Phi_0}{\tilde\eta(\omega)} J_0\;.
\end{equation}
By the Josephson relation, Eq.~\ref{Eq:Josephson}, the electric field associated with the vortex motion is
\begin{equation}
\tilde E = \frac{B \Phi_0}{\tilde\eta(\omega)} J_0\;,
\end{equation}
implying a complex, effective vortex resistivity
\begin{equation}
\tilde \rho_v = \frac{B \Phi_0}{\tilde\eta(\omega)}\;.
\label{Eq:vortex_resistivity}
\end{equation}
\subsection{Vortex electric fields}
The interaction of the vortex with the surrounding electron fluid (including states in the vortex core) arises from the coupling of charge excitations to the electric field induced when the vortex moves.\cite{Bardeen:1965p151} This electric field can be obtained from the London acceleration equation\cite{London:1935uf}
\begin{equation}
\mathbf{E} = \frac{\partial(\Lambda \mathbf{j}_v)}{\partial t}\;,
\end{equation}
where $\Lambda$ is the London parameter and, in our case, $\mathbf{j}_v$ is the supercurrent density circulating around the vortex. The time rate of change of $\mathbf{j}_v$ arises solely from the motion of the vortex:
\begin{equation}
\frac{\partial \mathbf{j}_v}{\partial t} = - (\mathbf{v}_v\cdot\mathbf{\nabla}) \mathbf{j}_v\;.
\end{equation}
\begin{figure}[t]
\centering
\includegraphics[width = \columnwidth]{ZhouFig1}
\caption{(color online). Supercurrent screening profiles and induced electric fields for vortices with cylindrical symmetry. The panels on the left show the situation in an idealized Bardeen--Stephen vortex:\cite{Bardeen:1965p151} the supercurrent profile (upper panel) has the form $j(r) \propto 1/r$ outside the core radius $a$, and is zero inside. For vortex motion in the positive $y$ direction, the resulting electric field (lower panel) is uniform inside the vortex core (solid circle, $r = a$) and has dipolar form outside. The panels on the right depict a more realistic situation in which the supercurrent profile varies smoothly with radius.\cite{Caroli1964} The configuration shown (upper panel) uses the approximate form $j(r) \propto \tanh{(r/a)}/\sqrt{r^2 + a^2}$. As with the Bardeen--Stephen vortex, the electric field is also relatively uniform near $r = 0$, with a dipolar form for $r > a$. The principal differences are a much smoother variation with position and a less intense electric field in the core region $r < a$.}
\label{fig:electricfields}
\end{figure}
We will illustrate this in the particular case of a vortex with cylindrical symmetry, moving in the positive $y$ direction at speed $v$. The screening supercurrent density will be azimuthal, and its magnitude will depend only on distance from the centre of the vortex: \emph{i.e.}, $\mathbf{j}_v = j(r) \mathbf{\hat \theta}$. In this case we can show that the Cartesian components of electric field are:
\begin{eqnarray}
E_x & = & \Lambda v \left[\cos^2 \theta \frac{j(r)}{r} + \sin^2 \theta \frac{\partial j}{\partial r} \right]\;,\\
E_y & = & \Lambda v \sin \theta \cos \theta \left[ \frac{j(r)}{r} - \frac{\partial j}{\partial r} \right]\;.
\end{eqnarray}
Electric field plots are shown in Fig.~\ref{fig:electricfields} for two cases: a Bardeen--Stephen-like vortex, for which $j(r) \propto 1/r$ outside the core radius $a$ and is zero within; and a more realistic vortex core, in which the supercurrent density varies smoothly through $r = a$, falling linearly to zero as $r \to 0$. For the latter case, we take the approximate form
\begin{equation}
j(r) \propto \frac{\tanh{(r/a)}}{\sqrt{r^2 + a^2}}\;.
\end{equation}
We see that the qualitative behaviour of the electric fields is similar in both cases: roughly uniform inside the vortex core, with a dipole form outside. The use of accurately calculated current profiles that break cylindrical symmetry\cite{Ichioka:1999p359} will not substantially change this picture, since the form of the current density ($j(r) \sim 1/r, a < r < \lambda$) is tightly restricted by the vortex topology. As a result, the electric field profile will always be similar to that shown on the right-hand side of Fig.~\ref{fig:electricfields}.
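These limits can be verified with a minimal numerical sketch of the angular expressions for $E_x$ and $E_y$ (the overall prefactor, and hence the units, is suppressed; the smooth profile is the approximate form quoted above):

```python
import math

def j_smooth(r, a=1.0):
    """Approximate vortex supercurrent profile: tanh(r/a)/sqrt(r^2 + a^2)."""
    return math.tanh(r / a) / math.sqrt(r**2 + a**2)

def efield_factors(r, theta, j, dr=1e-7):
    """Angular structure of the electric field induced by vortex motion
    along +y, with the overall prefactor dropped."""
    jr = j(r) / r
    djdr = (j(r + dr) - j(r - dr)) / (2 * dr)   # numerical dj/dr
    Ex = math.cos(theta)**2 * jr + math.sin(theta)**2 * djdr
    Ey = math.sin(theta) * math.cos(theta) * (jr - djdr)
    return Ex, Ey

# Deep inside the core (r << a): nearly uniform field, Ex ~ 1/a^2 and Ey ~ 0
Ex_in, Ey_in = efield_factors(1e-3, 0.7, j_smooth)
# Far outside (r >> a): dipolar pattern, (Ex, Ey) ~ (cos 2theta, sin 2theta)/r^2
Ex_out, Ey_out = efield_factors(50.0, 0.7, j_smooth)
```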
For the purpose of vortex-dynamics experiments, the important point is that a moving vortex acts as a local concentration of electric field, and that the reaction force experienced by the vortex is the result of the electric field interacting (resistively and reactively) with conducting degrees of freedom in the vicinity of the vortex core. For the case of ortho-II \ybco{6.52}, a large part of this response appears to be due to bulk $d$-wave quasiparticles \emph{outside} the vortex cores.
\subsection{Extraction of vortex parameters}
\label{Sec:vortex_parameters}
For measurements made at microwave frequencies, in addition to the complex vortex resistivity, $\tilde \rho_v$, given in Eq.~\ref{Eq:vortex_resistivity}, we must also take into account the finite impedance, $\tilde \rho_s$, of the superconducting medium in which the vortices are embedded. The electrodynamics of this problem have been solved by Coffey and Clem,\cite{COFFEY:1991p156} and Brandt,\cite{BRANDT:1991p149} who find the following simple, additive form to be appropriate in the limit of low temperature and weak field ($B \ll B_{c2}$):
\begin{equation}
\tilde\rho_{\rm{eff}} \approx \tilde\rho_s + \tilde \rho_v = \tilde \rho_s + \frac{B \Phi_0}{\tilde\eta(\omega)}\;.
\label{eq:rff_extract}
\end{equation}
Here the effective complex resistivity, $\tilde \rho_\mathrm{eff}$, is the experimentally accessible quantity in a microwave experiment, being directly related to the complex surface impedance, $Z_s = R_s + \mathrm{i} X_s$, by the local electrodynamic relation, $\tilde \rho = Z_s^2/\mathrm{i} \omega \mu_0$.
\footnote{The local electrodynamic relation, $\rho = Z_s^2/\mathrm{i} \omega \mu_0$, is obtained by solving Maxwell's equations for phasor fields (Amp\`ere and Faraday laws) at the interface between vacuum and a conductor with local electrodynamics, \mbox{$\mathbf{E}(\mathbf{r}) = \rho \mathbf{J}(\mathbf{r})$}. The surface impedance $Z_s$ is defined as the ratio of the tangential components of electric and magnetic field at the interface.}
To a good approximation, the background contribution from the superconducting medium, $\tilde \rho_s$, can be obtained from a measurement in zero field. To the extent that nodal quasiparticles in a $d$-wave superconductor give rise to a nonlinear Meissner effect,\cite{YIP:1992p3020} there will be some weak field dependence of $\rho_s$. However, this effect is known to be much weaker in the YBa$_{2}$Cu$_{3}$O$_{6+y}$ system than theoretically expected,\cite{Bidinosti:1999p2720,Sonier:2007p1185} and is negligible in the current context. In terms of the measured quantities, we then have
\begin{eqnarray}
\tilde \rho_\mathrm{eff} & = & \frac{Z_s^2(B,T)}{\mathrm{i} \omega \mu_0}\;,\\
\tilde \rho_s & \approx & \frac{Z_s^2(B = 0,T)}{\mathrm{i} \omega \mu_0}\;.
\end{eqnarray}
The vortex contribution is isolated by taking the difference, $\tilde \rho_v = \tilde \rho_\mathrm{eff} - \tilde\rho_s$, and from this we obtain the rest of the parameters in the vortex-dynamics model, in the following way:
\begin{eqnarray}
\tilde \eta & = & \frac{B \Phi_0}{\tilde \rho_v}\;,\\
\eta^\prime & = & B \Phi_0 \mbox{Re}\{\tilde \rho_v^{-1}\}\label{Eq:eta_prime}\;,\\
\alpha_\mathrm{eff} & = & \omega B \Phi_0 \mbox{Im}\{- \tilde \rho_v^{-1}\}\label{Eq:alpha}\;,\\
\rho_{\rm{ff}} & \equiv & \lim_{\omega \to 0} \frac{B \Phi_0}{\eta(\omega)} = \lim_{\omega \to 0} \frac{1}{\mbox{Re}\{\tilde\rho_v^{-1}\}}\;.
\end{eqnarray}
It should be pointed out that an analysis of this sort is only possible if both real and imaginary parts of the surface impedance are measured. In our experiment these are obtained at the same time, on the same sample.
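The extraction chain can be summarized in code. The sketch below is a round-trip test on synthetic data (all parameter values, including the purely inductive background with $\lambda = 150$~nm, are illustrative assumptions): it builds a surface impedance from a known viscosity and pinning constant, then recovers them via the relations above.

```python
import numpy as np

PHI0 = 2.067833848e-15   # flux quantum, Wb
MU0 = 4e-7 * np.pi

def vortex_parameters(Zs_field, Zs_zero, omega, B):
    """Extract eta' and alpha_eff from complex surface impedance in/out of field."""
    rho_eff = Zs_field**2 / (1j * omega * MU0)
    rho_s = Zs_zero**2 / (1j * omega * MU0)
    rho_v = rho_eff - rho_s
    eta_prime = B * PHI0 * (1.0 / rho_v).real
    alpha_eff = omega * B * PHI0 * (-1.0 / rho_v).imag
    return eta_prime, alpha_eff

# Synthetic data: known eta, alpha_p, and a purely inductive background
omega, B = 2 * np.pi * 2.64e9, 5.0
eta_in, alpha_in = 1.0e-6, 5.0e4
rho_v = B * PHI0 / (eta_in + alpha_in / (1j * omega))
rho_s = 1j * omega * MU0 * (150e-9)**2          # rho_s = i*omega*mu0*lambda^2
Zs_zero = np.sqrt(1j * omega * MU0 * rho_s)      # principal branch: Re(Zs) >= 0
Zs_field = np.sqrt(1j * omega * MU0 * (rho_s + rho_v))
eta_out, alpha_out = vortex_parameters(Zs_field, Zs_zero, omega, B)
```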
\section{Experimental methods}
\subsection{YBa$_{\bm 2}$Cu$_{\bm 3}$O$_{\bm{6.52}}$ sample preparation}
Single crystals of high purity YBa$_2$Cu$_3$O$_{6+y}$ were grown using a self-flux method in a chemically inert BaZrO$_3$ crucible.\cite{Liang:2012va} Oxygen concentration was set to \ybco{6.52}\ by annealing in flowing oxygen at 748~$^\circ$C, followed by a homogenization anneal at 570~$^\circ$C for 10~days in which the crystal was sealed inside a quartz ampoule with a large volume of ceramic at the same oxygen content. The sample was mechanically detwinned at 200~$^\circ$C under uniaxial stress, without changing the oxygen content. Ortho-II ordering, in which the oxygen content of the CuO chains alternates between full and empty, was achieved by annealing at 85~$^\circ$C for one day then 50~$^\circ$C for one week. Note that the highest degree of ortho-II ordering is obtained when the oxygen content is slightly more than needed for filling every other chain, hence the oxygen doping \ybco{6.52}.
\begin{figure}[t]
\centering
\includegraphics[width = \columnwidth]{ZhouFig2}
\caption{(color online). Sample geometry in the microwave experiment. The sample is a platelet single crystal, thin in the $c$~direction. The sample is cooled through $T_c$ in a strong, static magnetic field, $B$, that is applied perpendicular to the CuO$_2$ planes and sets up the vortex lattice. In this picture, vortex cores are represented schematically by dots on the upper surface of the sample; a number of vortex-lattice unit cells are shown in the centre of the diagram. A weak microwave field, $H_\mathrm{rf}$, is applied parallel to $B$. This induces a microwave transport current density, $J_\mathrm{rf}$, which, due to strong demagnetizing effects in this geometry, is concentrated near the edges of the crystal.}
\label{fig:geometry}
\end{figure}
\subsection{Surface impedance measurements}
There are several key technical requirements for making accurate microwave measurements of vortex dynamics. High sensitivity is needed, as the resistive dissipation of sub-mm, high quality single crystals is typically very small. In addition, the technique must measure both real and imaginary parts of the impedance, to allow an unambiguous separation of viscous and reactive effects. We have used cavity perturbation of \mbox{high-$Q$} TiO$_2$ (rutile) dielectric resonators, operating in TE$_{0np}$ modes,\cite{Huttema:2006p344} to carry out measurements at four microwave frequencies ($\omega/2 \pi = 2.64$, 4.51, 9.12 and 13.97~GHz). In contrast to microwave measurements in zero field, for which the resonator is typically a superconducting cavity, the TiO$_2$ dielectric resonator is housed within a normal metal (copper) enclosure, enabling it to be used in an applied magnetic field. The low loss tangent and high dielectric constant of rutile allow quality factors of $10^6$ to $10^7$ to be achieved. In addition, the compact size of the resonator gives a much higher filling factor than cavity resonators operating at the same frequencies. The good mechanical stability of dielectric resonators and the absence of weak superconducting links, combined with a high $Q$ and filling factor, result in a system that has comparable or better surface-impedance resolution than a superconducting cavity system \emph{and} is capable of operating in high magnetic fields.
A single crystal of ortho-II \ybco{6.52}\ with dimensions $a \times b \times c = 0.54$~mm$\times 0.63$~mm$\times 10$~$\mu$m was mounted on the end of a sapphire hot finger\cite{Sridhar:1988p495} and introduced into the resonator through a hole bored along the axis of the rutile cylinder. The sample was mounted so that the static magnetic field, $B$, was applied along the $c$-direction of the crystal, as shown in Fig.~\ref{fig:geometry}. The microwave field, $H_\mathrm{rf}$, was also applied along the $c$-direction, in order to induce $a$--$b$ plane screening currents. All measured quantities are $a$--$b$ averages. This geometry has a high demagnetizing factor but offers the advantage of avoiding screening-current loops that close along the $c$-direction, something that is particularly important for electrically anisotropic materials such as \ybco{6.52}. The sapphire hot finger allows the temperature of the sample to be regulated separately from that of the resonator, which was kept fixed at 4.2~K. In our system the hot finger was mounted on a moveable, pumped helium pot, giving a base temperature of 1.1~K. The introduction of the sample into the resonator causes a change in resonant frequency, $f_0$, and bandwidth, $f_B$. Subsequent changes of sample temperature and applied field lead to further changes in $f_0$ and $f_B$. The surface impedance of the sample, $Z_s = R_s + \mathrm{i} X_s$, is obtained using the cavity perturbation relation\cite{altshuler1963,Huttema:2006p344}
\begin{equation}
R_s(B,T)+\mathrm{i}\Delta X_s(B, T)\! =\! \Gamma\!\left(\!\!\frac{\Delta f_B(B,T)}{2}- \mathrm{i}\Delta f_0(B,T)\!\!\right)\;.
\label{eq:CP}
\end{equation}
Here $T$ is the sample temperature, $\Gamma$ is an empirically determined scaling factor, $R_s(B,T)$ is the absolute surface resistance, $\Delta X_s(B, T)$ is the shift in surface reactance with respect to zero field and a reference temperature $T_0$, $\Delta f_B(B,T)$ is the shift in resonator bandwidth on introducing the sample into the empty resonator in an applied field $B$, and $\Delta f_0(B,T)$ is the shift in resonant frequency with respect to $B=0$ and $T = T_0$. Note that the absolute surface reactance cannot be inferred from a measurement of the frequency shift on inserting the sample into the resonator, as the microwave skin depth is much smaller than the effective size of the sample. Instead, absolute zero-field reactance is set using published penetration depth data.\cite{PeregBarnea:2004p761} All measurements reported here were made with the sample in a field-cooled state, in order that the sample magnetization be close to its equilibrium value. Microwave power levels were regulated so that the microwave $H$ field in the resonator was held constant during the course of the experiment, eliminating contributions that might arise from power dependence of the resonator frequency and quality factor. Field- and temperature-dependent background measurements were made on the empty resonator and sapphire sample holder and used to apply a small correction to the sample signal.
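In code, Eq.~\ref{eq:CP} is a one-line conversion (the numerical values of $\Gamma$ and of the frequency shifts below are purely illustrative):

```python
def surface_impedance(delta_fB, delta_f0, Gamma):
    """Cavity perturbation: Rs + i*dXs = Gamma * (delta_fB/2 - i*delta_f0)."""
    Rs = Gamma * delta_fB / 2.0
    dXs = -Gamma * delta_f0
    return Rs, dXs

# e.g. a 1 kHz bandwidth broadening and a -0.5 kHz frequency shift
Rs, dXs = surface_impedance(1.0e3, -0.5e3, 2.0e-5)   # Gamma in Ohm/Hz (hypothetical)
```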
An estimate of the amplitude of the vortex motion can be obtained from the microwave power level and the pinning force constant. In the following, we use worst-case values for the various quantities (\emph{e.g.}, \mbox{low temperature $Q$}, high temperature $\alpha_p$) to obtain an upper bound on the range of motion. The input power to the resonator itself is typically $P_\mathrm{in} = 1$~nW or lower. (This includes a correction for the insertion loss of the cryogenic microwave cables and the fact that the resonator is operated at weak coupling.) For a quality factor $Q = 10^6$ (characteristic of low temperature operation) and a resonant frequency of 2.64~GHz, the stored energy in the resonator is \mbox{$E = P_\mathrm{in} Q/\omega = 6 \times 10^{-14}$~J.} For an effective resonator volume of 0.5~cm$^3$ (taking into account the concentrating effect of the dielectric resonator) this corresponds to a peak energy density $U = 1.2 \times 10^{-7}$~J/m$^3$. From this we obtain the strength of the microwave magnetic field at the centre of the resonator, \mbox{$H_\mathrm{rf} = (U/\mu_0)^{1/2} = 0.3$~A/m.} We assume that demagnetizing effects enhance this magnetic field by a factor of sample width/sample thickness,\cite{Prozorov:2000tj} giving $H_\mathrm{edge} = 60 H_\mathrm{rf} = 18$~A/m. For a skin depth $\delta = 0.2$~$\mu$m, this corresponds to a current density $J_\mathrm{rf} = H_\mathrm{edge}/\delta = 10^8$~A/m$^2$. The force per unit length on the individual vortices is \mbox{$F_\ell = \Phi_0 J_\mathrm{rf} = 2 \times 10^{-7}$~N/m.} For a pinning constant $\alpha_p = 10^3$~N/m$^2$ (characteristic of high temperatures) we obtain a maximum vortex displacement \mbox{$x_\mathrm{max} = F_\ell/\alpha_p = 2$~\AA.} We emphasize that this is an upper bound, based on worst-case assumptions, and that we have sufficient signal-to-noise ratio to operate at input powers several orders of magnitude lower. 
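The worst-case chain of estimates above can be reproduced directly (all inputs as quoted in the text):

```python
import math

MU0 = 4e-7 * math.pi
PHI0 = 2.067833848e-15

P_in, Q, f = 1e-9, 1e6, 2.64e9
E = P_in * Q / (2 * math.pi * f)   # stored energy, ~6e-14 J
U = E / 0.5e-6                     # effective volume 0.5 cm^3 -> peak energy density
H_rf = math.sqrt(U / MU0)          # ~0.3 A/m at the centre of the resonator
H_edge = 60 * H_rf                 # demagnetizing enhancement, width/thickness ~ 60
J_rf = H_edge / 0.2e-6             # skin depth 0.2 um -> ~1e8 A/m^2
F_ell = PHI0 * J_rf                # force per unit length, ~2e-7 N/m
x_max = F_ell / 1e3                # alpha_p = 1e3 N/m^2 -> ~2 Angstrom
```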
In every case, data were checked for power dependence in order to avoid nonlinearities.
\begin{figure}[t]
\centering
\includegraphics[width= \columnwidth]{ZhouFig3}
\caption{(color online). Surface impedance $Z_s = R_s + \mathrm{i} X_s$ at $\omega/2 \pi = 2.64$, 4.51, 9.12, and 13.97~GHz, for $B = 0$, 0.75, 1, 2, 3, 4, 5, 6 and 7~T (from bottom to top). Left-hand plots show surface resistance, $R_s$, on a logarithmic scale. Right-hand plots show surface reactance, $X_s$, on a linear scale. In each case the field is applied at a temperature $T > T_c$, and held constant during the temperature sweep.}
\label{fig:Zs}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width= \columnwidth]{ZhouFig4}
\caption{(color online). Real part of the frequency-dependent vortex viscosity, $\eta^\prime(T,B)$, at frequencies $\omega/2 \pi = 2.64$, 4.51, 9.12, and 13.97~GHz, and for magnetic fields from 0.75 to 7~T. $T_m$ denotes the vortex-lattice melting temperature at each field, obtained from Ref.~\onlinecite{Ramshaw2012}. $T_d$ denotes the dynamical cross-over temperature, defined to be the point at which the frequency variation of $\eta^\prime$ becomes less than 20\% in our measurement range.}
\label{fig:viscosity}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width= 0.85 \columnwidth]{ZhouFig5}
\caption{(color online). Zero-field quasiparticle conductivity $\sigma_\mathrm{qp}(\omega,T)$ for ortho-II \ybco{6.52}. Data are the geometric mean of $a$- and $b$-axis microwave conductivity data from Ref.~\onlinecite{Harris:2006p388}. The strong rise in $\sigma_\mathrm{qp}(T)$ on cooling is a result of a rapid decrease in inelastic scattering of $d$-wave quasiparticles in the superconducting state. Low temperature peaks in $\sigma_\mathrm{qp}(T)$ arise from a competition between increasing quasiparticle lifetime and decreasing quasiparticle density on cooling.}
\label{fig:conductivity}
\end{figure}
\section{Results and Discussion}
\subsection{Surface impedance}
Surface impedance data, $Z_s = R_s + \mathrm{i} X_s$, are presented in Fig.~\ref{fig:Zs}, at each of the measurement frequencies ($\omega/2 \pi = 2.64$, 4.51, 9.12 and 13.97~GHz) and for magnetic fields ranging from 0 to 7 T. In zero field, the superconducting transition is preceded by some rounding due to superconducting fluctuations, but there is a sharp downturn in $Z_s(T)$ at $T_c = 59$~K. This downturn softens as magnetic field is applied, but remains visible in $X_s(T)$ as a slight kink, even at higher fields. This is due to the onset of pinning as the vortex lattice freezes. In surface resistance, the system remains strongly dissipative to much lower temperatures, with a substantial decrease in $R_s(T)$ only occurring below the vortex-lattice melting transition.
\begin{figure}[t]
\centering
\includegraphics[width= 0.85 \columnwidth]{ZhouFig6}
\caption{(color online). Field--temperature phase diagram showing the vortex-lattice melting line, $B_m(T)$, from Ref.~\onlinecite{Ramshaw2012}, and the dynamical crossover $B_d(T)$ derived from the frequency dependent vortex viscosity. $B_{c2}(T)$ is the BCS upper critical field, plotted assuming $B_{c2}(T \to 0) = 40$~T.\cite{Ramshaw2012}}
\label{fig:BTphasediagram}
\end{figure}
\subsection{Vortex viscosity}
The real part of the frequency-dependent vortex viscosity is obtained from the microwave surface impedance using Eq.~\ref{Eq:eta_prime} and is plotted in Fig.~\ref{fig:viscosity} as a function of temperature, with each panel showing results for a different magnetic field. The qualitative behaviour of $\eta^\prime(\omega,T)$ is the same in each case: $\eta^\prime(T)$ rises strongly on cooling, with a peak in the 8 to 20~K temperature range. The peak is highest for the 2.64~GHz data, and decreases in magnitude with increasing frequency up to 13.97~GHz. The strong frequency variation of $\eta^\prime$ at low temperatures indicates the existence of long-lived charge excitations.\footnote{Note that vortex dissipation is due to the coupling of induced vortex electric fields to charge excitations in the vicinity of the vortex cores: $\eta^\prime(\omega)$ therefore reflects the dynamics of these excitations, rather than the dynamics of the vortices themselves.} In fact, the frequency and temperature dependence of $\eta^\prime$ is strikingly similar to that of the zero-field microwave conductivity, $\sigma_\mathrm{qp}(\omega,T)$, plotted in Fig.~\ref{fig:conductivity} using data from Ref.~\onlinecite{Harris:2006p388}. In the case of the zero-field conductivity, the peaks in $\sigma_\mathrm{qp}(T)$ are due to the competing effects of a quasiparticle lifetime that increases rapidly on cooling below $T_c$, and a decreasing normal fraction as quasiparticles condense into the ground state. The width of the low frequency quasiparticle spectrum, $\sigma_\mathrm{qp}(\omega)$, provides a measure of the average quasiparticle relaxation rate, with narrow widths in the low GHz range indicating long transport lifetimes and mean free paths of several $\mu$m.\cite{Hosseini:1999p383,Turner:2003p331,Harris:2006p388} Interestingly, the peak temperatures in $\eta^\prime(T)$ and the peak widths in $\eta^\prime(\omega)$ are very similar to those in $\sigma_\mathrm{qp}(\omega,T)$. 
Taken together, these observations strongly suggest that $d$-wave quasiparticles \emph{outside} the vortex cores provide the dominant mechanism for vortex viscosity in this material. This is completely different from the situation in conventional superconductors, in which the normal-metal cores are responsible for the vortex viscosity.
While the qualitative similarities between $\eta^\prime(\omega,T)$ and $\sigma_\mathrm{qp}(\omega,T)$ suggest that $d$-wave quasiparticles are the underlying mechanism, an important consistency check is provided by testing whether the zero-field quasiparticle conductivity is of sufficient magnitude to be responsible for the observed viscosity. There are several ways this could be done, but a conceptually clear method is to express the viscosity in terms of a length scale, $\ell_\eta$, that represents the area, $\pi\ell_\eta^2$, over which the vortex electric fields would need to couple to the electrical conductivity in order to give rise to the observed viscosity. In a conventional superconductor the relevant conductivity is the normal-state conductivity and the length scale is the vortex core size, $\xi$. Bardeen--Stephen theory gives $\eta = \sigma \Phi_0^2/2 \pi \xi^2$, and therefore \mbox{$\ell_\eta = \xi = \Phi_0 \sqrt{\sigma/2 \pi \eta}$}. We can now apply a similar analysis to ortho-II \ybco{6.52}. We perform the comparison at $T = 8$~K, the temperature of the peak in $\eta^\prime(T)$. The low field viscosity at this temperature, and at 2.6~GHz, is $1.45 \times 10^{-6}$~Nsm$^{-2}$. Instead of a normal-state conductivity, we use the zero-field quasiparticle conductivity, as our assertion is that the viscous drag arises from quasiparticles \emph{outside} the core. The $a$--$b$ averaged zero-field conductivity\cite{Harris:2006p388} at 8~K and 2.6~GHz is $\approx 4.5 \times 10^7$~$\Omega^{-1}$m$^{-1}$. From this we obtain $\ell_\eta = 48$~\AA. This is of the same order as the vortex core size in \ybco{6.52},\cite{Sonier:2007p1185, Ramshaw2012} establishing \emph{quantitative} consistency between the observed vortex viscosity and a mechanism based on the electrical conductivity of bulk $d$-wave quasiparticles.
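The corresponding arithmetic is compact enough to state explicitly (inputs are the measured values quoted above):

```python
import math

PHI0 = 2.067833848e-15   # flux quantum, Wb

# Bardeen--Stephen-style length scale: l_eta = Phi0 * sqrt(sigma / (2*pi*eta))
eta = 1.45e-6       # N s m^-2, low-field viscosity at 8 K and 2.6 GHz
sigma_qp = 4.5e7    # S/m, a-b averaged zero-field quasiparticle conductivity
l_eta = PHI0 * math.sqrt(sigma_qp / (2 * math.pi * eta))
# l_eta comes out at roughly 5 nm (tens of Angstroms), of order the core size
```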
As magnetic field increases, $\eta^\prime(\omega,T)$ undergoes smooth changes in shape, but its low temperature form and overall magnitude are not strongly affected. The most noticeable change with increasing field is the emergence of a band of temperature, immediately below $T_c$, in which $\eta^\prime(\omega,T)$ has very weak frequency dependence, indicating that the viscous dissipation is being caused by the vortices coupling to charge excitations whose frequency spectrum extends well beyond the microwave range (\emph{i.e.}, excitations that relax more rapidly than on microwave timescales). To demarcate this regime we define a temperature, $T_d$, above which the frequency variation of $\eta^\prime$ is less than 20\%. This signifies a qualitative change in the relaxation dynamics, which we explore further below, using fits to complex viscosity spectra. We will see that below $T_d$, the strongly frequency-dependent part of $\eta^\prime$ rides on top of a broad background. This indicates that the long-lived excitations coexist with more rapidly relaxing ones. $T_d$ is marked on Fig.~\ref{fig:viscosity} along with the vortex-lattice melting temperature, $T_m$, from Ramshaw \emph{et al.}\cite{Ramshaw2012} There is no sharp change in viscosity at the melting transition, just the beginning of a gradual downturn in $\eta^\prime(T)$ that takes place over a roughly 10~K range between $T_m$ and $T_d$. The melting curve $B_m(T)$ and dynamical crossover $B_d(T)$ are also plotted in a $B$--$T$ phase diagram in Fig.~\ref{fig:BTphasediagram}.
\begin{figure}[t]
\centering
\includegraphics[width= 0.85 \columnwidth]{ZhouFig7}
\caption{(color online). Temperature dependence of the effective pinning constant, $\alpha_\mathrm{eff}(T)$, at 2.64 GHz, for $B = 0.75$, 1, 2, 3, 4, 5, 6 and 7~T (from right to left). Dashed lines are guides to the eye and denote $\alpha(T) = \alpha_0 \exp(- T/T_0)$, with $T_0 = 11$~K. $T_m$ and $T_d$ indicate the melting and dynamical crossover temperatures at each field.}
\label{fig:alpha264}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width= 0.85 \columnwidth]{ZhouFig8}
\caption{(color online). Frequency-dependent pinning constant, $\alpha_\mathrm{eff}(T)$, for $B = 4$~T (upper panel) and 7~T (lower panel). Solid curves show data at $\omega/2 \pi = 2.64$, 4.51, 9.12, and 13.97~GHz (from bottom to top). The dashed curves show the DC limit, $\alpha_p(T)$, obtained from fits to Eqs.~\ref{Eq:model_viscosity} and \ref{Eq:model_pinning}. $T_m$ and $T_d$ indicate the melting and dynamical crossover temperatures.}
\label{fig:alphaB}
\end{figure}
\subsection{Pinning constant}
The effective pinning constant at microwave frequencies, $\alpha_\mathrm{eff}$, is extracted using Eq.~\ref{Eq:alpha}. The 2.64~GHz data are plotted in Fig.~\ref{fig:alpha264} on a semi-log plot, for each of the magnetic fields. (We will later see that, over most of the temperature range, the 2.64~GHz traces are close to the static limit of $\alpha_\mathrm{eff}(\omega)$, which, according to the discussion in Sec.~\ref{Sec:complex_viscosity_model}, is the elastic pinning constant $\alpha_p$.) The pinning constant drops rapidly with increasing temperature, following an approximately exponential temperature dependence, $\alpha_\mathrm{eff}(T) \approx \alpha_0 \exp(- T/T_0)$, with $\alpha_0 = 2 \times 10^5$~N/m$^2$ and $T_0 = 11$~K. Similar exponential behaviour has been reported in measurements of pinning constant\cite{GOLOSOVSKY:1994p169,BMorgan2005} and critical current density\cite{Senoussi:1988jf,Shi:1994cn} on optimally doped \ybco{7-\delta}: in that material, a typical value of $\alpha_0$ is 3~to~$4 \times 10^5$~N/m$^2$ (Refs.~\onlinecite{GOLOSOVSKY:1994p169,BMorgan2005}), with $T_0$ in the range 20 to 25~K (Refs.~\onlinecite{GOLOSOVSKY:1994p169,Shi:1994cn,BMorgan2005}). An elegant theory of this behaviour has been developed by Feigel'man and Vinokur,\cite{Feigelman:1990gp} in which small-amplitude thermal motion of the vortex lattice softens the apparent pinning potential: the specific form of the exponential temperature dependence arises from vortex lattice Debye--Waller factors.
The vortex-lattice melting transition is clearly visible in the pinning-constant data, with $\alpha_\mathrm{eff}(T)$ dropping by an order of magnitude on passing through $T_m(B)$. Above the melting transition the pinning constant remains finite, even when we take into account dynamical effects: we will argue below that this is peculiar to surface impedance measurements and is due to surface pinning. At higher temperatures, above $T_d$, $\alpha_\mathrm{eff}(T)$ reverts back to an exponential trend, with a value of $T_0$ similar to that at low temperature.
The frequency dependence of $\alpha_\mathrm{eff}(T)$ is shown in Fig.~\ref{fig:alphaB}, for fields of 4 and 7~T. As we will see in more detail in Sec.~\ref{Sec:complex_viscosity}, $\alpha_\mathrm{eff}$ has substantial frequency dependence at almost all temperatures, but in Fig.~\ref{fig:alphaB} this is most apparent \emph{above} the melting temperature, due to the low level of static pinning in the vortex-liquid regime. We will show that the frequency dependence of $\alpha_\mathrm{eff}$ is due to dynamical effects arising from the vortex viscosity.
\begin{figure}[t]
\centering
\includegraphics[width= 0.85 \columnwidth]{ZhouFig9}
\caption{(color online). The depinning frequency, $\omega_p/2 \pi \equiv \alpha_p/\eta_\mathrm{dc}$, for $B = 0.75$, 1, 2, 3, 4, 5, 6 and 7~T. Here $\alpha_p$ and $\eta_\mathrm{dc} \equiv \eta_0 + \eta_1$ are the DC limits of the pinning and viscous force constants obtained from the fitting procedure.}
\label{fig:omegap}
\end{figure}
\subsection{Depinning frequency}
In the conventional Gittleman--Rosenblum model of vortex dynamics, the viscosity and pinning constant are frequency independent.\cite{JIGittleman:1968p172} There is then a well-defined depinning frequency, $\omega_p = \alpha_p/\eta$, at which the viscous and elastic forces are equal. In our measurements, viscosity and pinning constant are strongly frequency dependent, and a substantial part of the reactive force experienced by the vortex is due to dynamical effects arising from the viscosity, rather than elasticity due to pinning. The idea of a depinning frequency is therefore not well defined. Nevertheless, we can make an estimate from the DC limits of the pinning and viscous force constants, obtained from the fits to the complex vortex viscosity presented in the next section: $\omega_p \equiv \alpha_p/\eta_\mathrm{dc}$. Data for $\omega_p$ are plotted in Fig.~\ref{fig:omegap}. Over most of the temperature range, $\omega_p/2 \pi$ is in the low GHz range, indicating that viscous effects are predominant at our measurement frequencies. At low temperatures, however, $\omega_p/2 \pi$ increases to 10 to 25~GHz (depending on field) due to the exponential increase in $\alpha_p(T)$ on cooling. In this regime it is essential that vortex dynamics be probed with a technique that accurately measures both real and imaginary parts of the surface impedance.
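The low-temperature scale of $\omega_p$ can be sanity-checked with a two-line estimate. This sketch uses rounded, illustrative inputs rather than the fitted values: the low-temperature pinning prefactor $\alpha_0 \approx 2\times10^5$~N/m$^2$ quoted earlier and a viscosity of order $10^{-6}$~Nsm$^{-2}$.

```python
import math

def depinning_freq_ghz(alpha_p, eta_dc):
    """Depinning frequency omega_p/2pi = alpha_p / (2*pi*eta_dc), in GHz."""
    return alpha_p / (2.0 * math.pi * eta_dc) / 1e9

alpha_p = 2e5     # N/m^2, rounded low-temperature pinning constant
eta_dc = 1.45e-6  # N s m^-2, representative DC-limit viscosity
print(depinning_freq_ghz(alpha_p, eta_dc))  # ~22 GHz, in the quoted 10-25 GHz range
```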
\subsection{Complex vortex viscosity}
\label{Sec:complex_viscosity}
One of our initial motivations for carrying out multiple-frequency microwave measurements on ortho-II \ybco{6.52} was as a test of the validity of the single-vortex dynamical model outlined in Sec.~\ref{Sec:vortex_dynamics}. Single-vortex theories are often treated with skepticism: there is a suspicion that they appear to work only because they have as many adjustable parameters ($\alpha_p$ and $\eta$) as there are degrees of freedom in the data (real and imaginary parts of $\tilde \rho_v$). Multiple-frequency data therefore offer the prospect of a stringent test. A complication arises if the parameters in the vortex model have intrinsic frequency dependence, as appears to be the case in ortho-II \ybco{6.52}. Testing the vortex model then becomes a more subtle process: as discussed in Sec.~\ref{Sec:complex_viscosity_model}, causality means that the dissipative and reactive parts of the dynamical response must obey Kramers--Kronig relations.
On purely empirical grounds, an observation of coexisting fast and slow relaxation processes in the viscosity suggests that we represent $\eta^\prime(\omega)$ by a two-component spectrum. The simplest possibility is that given by Eq.~\ref{Eq:model_viscosity}, in which a Drude-like Lorentzian spectrum, of magnitude $\eta_1$ and width $\Gamma$, rides on top of a broad background of magnitude $\eta_0$. By causality, there must be an associated imaginary component, $\eta^{\prime\prime}(\omega)$. This combines with the elastic contribution, $\alpha_p$, to form the effective pinning constant $\alpha_\mathrm{eff}(\omega)$, given by Eq.~\ref{Eq:model_pinning}. Using this model, we have carried out simultaneous fits to the measured frequency dependence of $\eta^\prime$ and $\alpha_\mathrm{eff}$, at \emph{all} fields and temperatures. Representative results, in 10~K temperature steps up to 50~K, are shown in Figs.~\ref{fig:visc_fits1}, \ref{fig:visc_fits4} and \ref{fig:visc_fits7}, at fields of 1, 4 and 7~T, respectively.
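A minimal causal realization of such a two-component spectrum takes $\tilde\eta(\omega) = \eta_0 + \eta_1/(1 - i\omega/\Gamma)$, whose real part is a Lorentzian of height $\eta_1$ and width $\Gamma$ on a background $\eta_0$, and whose imaginary part contributes a dynamical term $\omega\,\eta^{\prime\prime}(\omega)$ to the effective pinning constant. The sketch below is an illustrative parameterization consistent with this description, not necessarily the exact form of Eqs.~\ref{Eq:model_viscosity} and \ref{Eq:model_pinning}.

```python
def eta_complex(omega, eta0, eta1, gamma):
    """Drude-like complex viscosity: broad background eta0 plus a Lorentzian
    of weight eta1 and relaxation rate gamma (omega, gamma angular)."""
    x = omega / gamma
    re = eta0 + eta1 / (1.0 + x * x)   # dissipative part, eta'
    im = eta1 * x / (1.0 + x * x)      # reactive part, eta''
    return re, im

def alpha_eff(omega, alpha_p, eta0, eta1, gamma):
    """Effective pinning constant: elastic alpha_p plus dynamical omega*eta''."""
    _, im = eta_complex(omega, eta0, eta1, gamma)
    return alpha_p + omega * im
```

In the limits, $\eta^\prime(0) = \eta_0 + \eta_1$ (the DC viscosity) and $\eta^\prime(\omega \gg \Gamma) \to \eta_0$, while $\alpha_\mathrm{eff}$ rises from $\alpha_p$ toward $\alpha_p + \eta_1\Gamma$, reproducing the qualitative frequency trends in the figures.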
At all but the lowest temperatures and fields, the ability of the model to capture the observed dynamics is outstanding. (For the $T = 10$~K, $B = 1$~T data, the discrepancy in $\alpha_\mathrm{eff}(\omega)$ is consistent with proximity to a cyclotron resonance, something that is very interesting in its own right and will be investigated in detail in future measurements.) Importantly, the multiple frequency data now overconstrain the model. Moreover, although these are four-parameter fits, only two of the parameters ($\eta_1$ and $\Gamma$) relate to frequency dependence: $\eta_0$ and $\alpha_p$ are additive offsets. It is therefore impressive that, at all but the highest temperatures, the frequency dependence of $\eta^\prime$ essentially \emph{predicts} that of $\alpha_\mathrm{eff}$, and \emph{vice versa}. We draw two conclusions from the goodness of the fits: that the measurements themselves are accurate; and that the single-vortex model provides a very good representation of the microwave dynamics.
\begin{figure}[t]
\centering
\includegraphics[width= \columnwidth]{ZhouFig10}
\caption{(color online). Simultaneous fits to $\eta^\prime(\omega)$ (left panels) and $\alpha_\mathrm{eff}(\omega)$ (right panels) at $B = 1$~T, using the procedure described in Sec.~\ref{Sec:complex_viscosity}. }
\label{fig:visc_fits1}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width= \columnwidth]{ZhouFig11}
\caption{(color online). Simultaneous fits to $\eta^\prime(\omega)$ (left panels) and $\alpha_\mathrm{eff}(\omega)$ (right panels) at $B = 4$~T, using the procedure described in Sec.~\ref{Sec:complex_viscosity}. }
\label{fig:visc_fits4}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width= \columnwidth]{ZhouFig12}
\caption{(color online). Simultaneous fits to $\eta^\prime(\omega)$ (left panels) and $\alpha_\mathrm{eff}(\omega)$ (right panels) at $B = 7$~T, using the procedure described in Sec.~\ref{Sec:complex_viscosity}. }
\label{fig:visc_fits7}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width= \columnwidth]{ZhouFig13}
\caption{(color online). Real part of the frequency-dependent vortex viscosity, $\eta^\prime(T,B)$, plotted along with the fit parameters $\eta_0$ and $\eta_1$ from Eqs.~\ref{Eq:model_viscosity} and \ref{Eq:model_pinning}. Long dashes show the fit parameter $\eta_0(T)$; short dashes the DC limit, $\eta_\mathrm{dc}(T) = \eta_0(T) + \eta_1(T)$. As in Fig.~\ref{fig:viscosity}, $T_m$ denotes the vortex-lattice melting temperature\cite{Ramshaw2012}; $T_d$ gives the dynamical cross-over temperature, above which the frequency dependence of $\eta^\prime$ is very weak.}
\label{fig:viscosity_params}
\end{figure}
\begin{figure}[tbh]
\centering
\includegraphics[width= 0.85 \columnwidth]{ZhouFig14}
\caption{(color online). The relaxation rate of the viscous dynamics, $\Gamma(T)$, for $B = 1$, 4 and 7~T, obtained from fitting to the complex viscosity spectra.}
\label{fig:Gamma}
\end{figure}
The need for a two-component spectrum is a clear indication that fast and slowly relaxing excitations coexist in the vortex state. At first sight this is not surprising, since quasiparticles in $d$-wave superconductors are known to have strongly energy-dependent relaxation rates as a consequence of the nodal quasiparticle spectrum.\cite{HIRSCHFELD:1994p570} However, the fast-and-slow dichotomy observed in $\eta^\prime(\omega,T)$, especially at higher fields, is too pronounced to arise from thermally averaging over a distribution of relaxation rates dependent on energy alone.
An alternative explanation, more in keeping with the disconnected nature of the broad and narrow parts of $\eta^\prime(\omega,T)$, is that the fast and slow processes are occurring in \emph{spatially distinct} regions. Since the narrow component of $\eta^\prime(\omega,T)$ appears to be characteristic of $d$-wave quasiparticle physics, the slow processes are most naturally associated with regions \emph{outside} the cores. By extension, we would then associate fast relaxation with processes involving the vortex cores. A picture of spatially separate relaxation mechanisms also helps to make sense of the sudden disappearance of slow relaxation above $T_d$ seen in Fig.~\ref{fig:viscosity}. As can be seen in the field--temperature phase diagram in Fig.~\ref{fig:BTphasediagram}, $B_d(T)$ lies in the vortex-liquid regime, above the vortex-lattice melting line $B_m(T)$,\cite{Ramshaw2012} showing that, on its own, melting of the vortex-lattice does not eliminate the long-lived excitations. Instead, $B_d(T)$ is likely a dynamical crossover at which thermally fluctuating vortices begin to move on a timescale similar to that of the microwave measurement frequency.
The parameters from the fits to Eqs.~\ref{Eq:model_viscosity} and \ref{Eq:model_pinning} are: the elastic pinning constant, $\alpha_p$; the viscosity terms $\eta_0$ and $\eta_1$; and the viscosity relaxation rate, $\Gamma$. $\alpha_p(T)$ is plotted in Fig.~\ref{fig:alphaB} as the DC limit of $\alpha_\mathrm{eff}(\omega)$: note that $\alpha_p$ fairly closely follows the 2.64~GHz pinning constant data. We see that it remains finite above the melting temperature. This is at odds with bulk measurements, for which the melting transition marks the resistive onset.\cite{FISHER:1991p681,Charalambous:1992gf,Kwok:1992ug,Safar:1992ep,MACKENZIE:1993p197,Ramshaw2012} A likely reason for this is that \emph{surface pinning} plays an important role in the microwave measurements: the loss of shear stiffness experienced at the melting transition has a profound effect on the ability of point-like defects to pin the vortex lattice; for planar defects such as the sample surface, shear stiffness is irrelevant. This observation also serves as a warning that microwave techniques are not necessarily a good probe of bulk pinning. Nevertheless, by narrowing the distribution of elastic forces experienced by vortices visible to the microwave fields, surface pinning likely works to our advantage, improving the applicability of the single-vortex model. The dynamical component of $\alpha_\mathrm{eff}$ also helps in this respect, as it arises from interactions with the electron fluid rather than from randomly distributed defects.
The viscosity terms $\eta_0(T)$ and $\eta_1(T)$ are shown in Fig.~\ref{fig:viscosity_params}, with the latter quantity appearing as part of $\eta_\mathrm{dc} \equiv \eta_0 + \eta_1$. $\eta_0(T)$ closely tracks the 13.97~GHz viscosity, and is very similar in magnitude to the viscosity inferred by Parks \emph{et al.} from terahertz measurements on \ybco{7-\delta} ($T_c = 85$ to 88~K), suggesting that the $\eta_0$ component of the viscosity extends over a very broad frequency range, up to terahertz frequencies.
The final fit parameter from Eqs.~\ref{Eq:model_viscosity} and \ref{Eq:model_pinning}, $\Gamma(T)$, is plotted separately, in Fig.~\ref{fig:Gamma}. At the lowest temperatures, $\Gamma/2\pi$ is of the order of several~GHz. This is comparable to the width of zero-field quasiparticle conductivity spectra in ortho-II \ybco{6.50},\cite{Turner:2003p331, Harris:2006p388} providing one of the pieces of evidence linking vortex dissipation to the $d$-wave quasiparticles. $\Gamma(T)$ initially grows linearly with temperature, also in accord with the behaviour inferred from zero-field measurements, in which the form of the temperature dependence of the scattering in Ortho-II \ybco{6.52} is taken as being indicative of weak-limit scattering of nodal quasiparticles.\cite{Turner:2003p331, Harris:2006p388} It is interesting that $\Gamma(T)$ is comparable to the quasiparticle relaxation rate in zero field, as it indicates that the vortex lattice does not contribute strongly to quasiparticle scattering. Vortices are large on the scale of the Fermi wavelength and would be expected to act as a source of small-angle scattering: such processes are known to be ineffective at relaxing charge currents in $d$-wave superconductors.\cite{Durst:2000p963} In contrast, small-angle scattering should have a strong effect on the rate of \emph{intra-nodal} transitions, and be very effective at randomizing the quasiparticle group velocity.\cite{Durst:2000p963} As a result, quasiparticles should become diffusively confined in the vortex lattice, even while electrical transport measurements, such as the ones presented here, indicate mean free paths much larger than the inter-vortex spacing. The vortex lattice should instead be responsible for new quasiparticle effects, such as pair-breaking induced by Doppler shifting of quasiparticle energies.\cite{VOLOVIK:1993p201} These should grow in importance with $B$, and may be responsible for the increase in $\eta_0(T)$ at higher fields. 
On passing through the melting temperature, $\Gamma(T)$ starts to increase rapidly, then appears to plateau above the dynamical crossover with $\Gamma/2\pi \approx 20$~GHz in the higher-field data sets. This frequency scale is substantially smaller than that expected for quasiparticle scattering in this temperature range, and is perhaps connected to superconducting fluctuations.
\begin{figure}[t]
\centering
\includegraphics[width= \columnwidth]{ZhouFig15}
\caption{(color online). Frequency-dependent flux-flow resistivity, $\rho_\mathrm{ff}(\omega) \equiv B \Phi_0/\eta^\prime(\omega)$, for $\omega/2 \pi = 2.64$, 4.51, 9.12, and 13.97~GHz, on semi-logarithmic axes, for $B = 0$, 0.75, 1, 2, 3, 4, 5, 6 and 7~T (from bottom to top). At the lower frequencies, the low temperature behaviour follows the form $\rho_\mathrm{ff}(T) = \rho_0 + A \log(1/T)$. At 9.12 and 13.97~GHz, the low temperature divergence appears to be cut off by finite frequency effects.}
\label{fig:rho_ff}
\end{figure}
\subsection{Flux-flow resistivity}
Finally, we obtain an estimate of the flux-flow resistivity, $\rho_\mathrm{ff}$, from the vortex-dynamics data. As mentioned in the introduction, we are faced with a trade-off when measuring $\rho_\mathrm{ff}$ in any real material, due to the presence of pinning: we must either use large currents that depin the vortices (for cuprates, an increasingly difficult prospect as temperature is lowered\cite{Kunchur:1993ie}); or measure the linear response at high frequencies and work back towards the static limit. Having taken the latter approach, we define a frequency-dependent flux-flow resistivity, $\rho_\mathrm{ff}(\omega) \equiv B \Phi_0/\eta^\prime(\omega)$, which is plotted in Fig.~\ref{fig:rho_ff}. At all fields and frequencies, $\rho_\mathrm{ff}(T)$ shows an initial drop on cooling through $T_c$, a broad minimum in the 8~to~20~K range, and an upturn at low temperatures --- this is a reflection of the peaked structure in $\eta^\prime(T)$. The most remarkable aspect of the data is that, at the lower frequencies, the low temperature upturns appear to follow a $\log(1/T)$ form. This is reminiscent of the behaviour of the DC resistivity in the pseudogap regime,\cite{Ando:1995p148, Boebinger:1996p147} raising the interesting possibility that the two effects are connected.
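The conversion from viscosity to flux-flow resistivity is a one-liner, sketched here with illustrative numbers: a representative viscosity of order $10^{-6}$~Nsm$^{-2}$ rather than a fitted value.

```python
PHI0 = 2.067833848e-15  # magnetic flux quantum, Wb

def rho_ff(B, eta_prime):
    """Frequency-dependent flux-flow resistivity rho_ff = B*Phi0/eta' (ohm m)."""
    return B * PHI0 / eta_prime

print(rho_ff(7.0, 1.45e-6))  # ~1e-8 ohm m (~1 micro-ohm cm) at B = 7 T
```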
\section{Conclusions}
In summary, this work builds on a number of technical advances in microwave spectroscopy of the vortex state: the use of high $Q$ dielectric resonators, allowing high sensitivity measurements on small single crystals; the ability to simultaneously measure real and imaginary parts of the impedance at each frequency, allowing a clean separation of viscous and elastic effects; measurement over a wide range of microwave frequencies; and measurements as a function of magnetic field. Applied to a high-purity single crystal of ortho-II \ybco{6.52}, these developments come together to reveal the vortex dynamics in unprecedented detail, uncovering a close connection between the vortex viscosity $\eta^\prime(\omega,T)$ and the zero-field dynamics of the $d$-wave quasiparticles. The vortex viscosity is revealed to have strong frequency dependence and, when treated as a complex-valued quantity, is consistent with the tight constraints causality places on physical response functions. This gives us a great deal of confidence in both the experimental technique, and the use of single-vortex models of microwave-frequency vortex dynamics.
\section{Acknowledgements}
The authors thank W.~A.~Atkinson, J.~S.~Dodge, M.~P.~Kennett, B.~Morgan, J.~E. Sonier, Z.~Te\u{s}anovi\'c and J.~R.~Waldram for useful discussions. Research support for the experiments was provided by the Natural Science and Engineering Research Council of Canada (NSERC) and the Canadian Foundation for Innovation. Research support for sample preparation was provided by NSERC and the Canadian Institute for Advanced Research.\\
\label{intro}
This paper discusses an algebraic combinatorics problem that arises in the theory of cluster algebras but whose study is also spurred on by geometric models appearing in string theory.
The engineering of four-dimensional quantum field theories in string theory has developed a great deal beyond $N=4$ supersymmetric Yang-Mills. Doubly-periodic bipartite planar graphs, also known as {\bf brane tilings}, are of special interest to physicists since perfect matchings on such graphs provide information regarding the geometry of certain toric varieties which are Calabi-Yau $3$-folds. Such varieties are the moduli spaces of certain quiver representations modulo relations as given by the Jacobian of a quiver with potential. In the language of theoretical physics, the AdS/CFT correspondence \cite{AdS1,AdS2,AdS3} associates an $N=1$ superconformal quantum field theory known as a supersymmetric quiver gauge theory to a toric Sasaki-Einstein $5$-manifold.
In this paper, we focus on a specific example of such a theory, which is associated to the cone over the del Pezzo surface of degree $6$ ($\mathbb{CP}^2$ blown up at three points) which we refer to as $\mathbf{dP_3}$ in this paper following the conventions of the physics literature \cite{BP,feng}. As previously studied, via the AdS/CFT correspondence, the corresponding quiver gauge theory is built using a highly symmetric six-vertex quiver (illustrated in Figure \ref{fig:quiv_brane} (Middle)) and the potential
\begin{eqnarray} \label{eq:pot} W &=& A_{16}A_{64}A_{42}A_{25}A_{53}A_{31}
~+~ A_{14}A_{45}A_{51} ~+~ A_{23}A_{36}A_{62} \\ \nonumber &-& A_{16}A_{62}A_{25}A_{51}
~-~ A_{36}A_{64}A_{45}A_{53} ~-~ A_{14}A_{42}A_{23}A_{31}.\end{eqnarray}
About a decade ago, using methods from dimer theory first developed by mathematicians and physicists alike in the context of statistical mechanics \cite{crystal,KOS}, physicists described how to associate a brane tiling to such a quiver gauge theory \cite{brane_dimer,Vafa}. See \cite{GK} for a more recent mathematical treatment. For the $dP_3$ case, this brane tiling is illustrated in Figure \ref{fig:quiv_brane} (Right). We refer to this as the $dP_3$ brane tiling, or $\mathcal{T}$ for short.
\vspace{1em}
Given a quiver, e.g. the one illustrated in Figure \ref{fig:quiv_brane} (Middle), one may define a {\bf cluster algebra} following the construction of Fomin and Zelevinsky \cite{FZ}. (See Section \ref{Sec:Prelim} for details.) In general, a cluster algebra will have an infinite number of generators, called {\bf cluster variables}, which are grouped into overlapping subsets called {\bf clusters}. Cluster variables do not freely generate a cluster algebra, rather they satisfy certain binomial exchange relations which are deduced from the data of the quiver defining that given cluster algebra.
A cluster algebra is known as {\bf mutation-finite} if there is a finite list of quivers each of which would define that cluster algebra. A cluster algebra is known as {\bf mutation-infinite} otherwise. Combinatorial formulas are known for many mutation-finite cluster algebras \cite{MSW} since most such cluster algebras come from surfaces (see \cite{FeShTu}). For mutation-infinite cluster algebras, providing a combinatorial formula for every single cluster variable of that algebra is quite a challenge. However, for some specific mutation-infinite cluster algebras, there has been significant work which provides combinatorial formulas for cluster variables lying in certain subsets \cite{BMPW, DiF, glick, speyer}.
Our work yields further progress towards this combinatorial goal for a rich example which helps to illustrate new features that were unseen in previous descriptions of subsets of cluster variables for mutation-infinite cluster algebras. In particular, we describe a three-parameter family of subgraphs of $\mathcal{T}$ with the property that a certain weighted enumeration of {\bf dimers} (also known as {\bf perfect matchings}) yields most of the Laurent expansions of the {\bf toric cluster variables}, which are a subset of the generators of the associated $dP_3$-cluster algebra. This is stated more precisely as Theorem \ref{thm:main} in terms of the language developed in our paper. We present our combinatorial formula and methods in full detail with an intention to generalize these methods to further examples in future work.
\vspace{1em}
We now say a little about the history of this problem before discussing the organization of this paper. Even before the connections between the mathematics and physics of brane tilings were well-known, mathematicians had been studying perfect matchings of $\mathcal{T}$ since the turn of the millennium, for example Jim Propp and Ben Wieland \cite[Problem 15]{EnumPropp} and Mihai Ciucu \cite[Section 7]{perfect}. (More precisely they were studying tilings of the dual graph of $\mathcal{T}$ which consists of regular hexagons, squares, and triangles.) In the modern language, Propp conjectured (proven by unpublished work of Wieland and published work by Ciucu) that a certain one-parameter family of subgraphs of $\mathcal{T}$ called {\bf Aztec Dragons}, as an analogue of {\bf Aztec Diamonds} \cite{Elkies}, had the property that the number of perfect matchings of each subgraph was a power of two.
While leading an REU (Research Experience for Undergraduates) at the University of Minnesota in 2012, the second author proposed a problem motivated by Sergey Fomin and Andrei Zelevinsky's theory of cluster algebras \cite{FZ} and the above mentioned theoretical physics. Definitions of cluster algebras and mutation appear in Section \ref{Sec:Mutations}. The goal was to obtain combinatorial formulas for the Laurent expansions of cluster variables obtained by certain sequences of mutations of quivers of interest to string theorists such as those associated to reflexive polygons \cite{hanany_polygons}. As part of this REU, Sicong Zhang was inspired by a paper by Cyndie Cottrell and Benjamin Young \cite{CY} and proved that by weighting the perfect matchings of Aztec Dragons in the appropriate way, it followed that the resulting partition functions (which are Laurent polynomials in this case) agreed with the corresponding cluster variables \cite{zhang}.
In the subsequent summer, 2013 REU students Megan Leoni, Seth Neel, and Paxton Turner worked with the second author to provide a combinatorial interpretation for a two-parameter family of mutation sequences by extending to a family of subgraphs beyond Aztec Dragons \cite{LMNT}. These were referred to as NE (Northeast) Aztec Castles and SW (Southwest) Aztec Castles.
Simultaneously, in Indiana, an alternative motivation for generalizing Aztec Dragons was being studied by Ciucu and the first author.
In the same vein as the Aztec Diamonds and the Aztec Dragons, James Propp also introduced Aztec Dungeons on a different lattice (the lattice corresponding to the affine Coxeter group $G_2$) and conjectured an explicit tiling formula for these regions, which is a power of $13$ or twice a power of $13$. This conjecture was later proven by Ciucu (see \cite[Section 8]{perfect}). Inspired by the Aztec Dungeons, Matt Blum considered a hexagonal counterpart called \textbf{Hexagonal Dungeons}. Blum conjectured a striking pattern for the number of tilings of a hexagonal dungeon of side-lengths $a,2a,b,a,2a,b$ in cyclic order ($b\geq 2a$), namely $13^{2a^2}14^{\lfloor a^2/2\rfloor}$ (see \cite[Problem 25]{EnumPropp}). The first author and Ciucu proved and generalized Blum's conjecture \cite{lai'} by enumerating tilings of two families of regions restricted in a six-sided contour. This proof inspired a number of similar regions on different lattices, including {\bf Dragon Regions}, denoted by $DR^{(1)}$ and $DR^{(2)}$ in \cite{LaiNewDungeon}. These Dragon Regions generalized Propp's Aztec Dragons \cite{perfect,EnumPropp,CY} and the NE/SW-Aztec Castles of \cite{LMNT}.
The present work provides a vivid picture of the cluster algebra associated to the $dP_3$ quiver and is a culmination of the above work on one-parameter, two-parameter, unweighted families of subgraphs of the $dP_3$ brane tiling $\mathcal{T}$. We describe a general construction that yields the Dragon regions of \cite{LaiNewDungeon} such that the Laurent polynomials obtained from the weighted enumeration of perfect matchings on such subgraphs of $\mathcal{T}$ are exactly a three-parameter family of cluster variables in the $dP_3$-cluster algebra. For lack of a better name, we have decided to refer to the three-parameter family of subgraphs of $\mathcal{T}$ constructed in this paper as (general) {\bf Aztec Castles}. The work of the first author in \cite{LaiNewDungeon} can be considered as a special case of the unweighted version of the main result, Theorem \ref{thm:main}, of the present paper.
We prove in Section \ref{sec:gentoric} that this {\bf three-parameter family of cluster variables} is indeed the {\bf set of toric cluster variables}. Though our work begins with a description of certain mutation sequences known as {\bf generalized $\tau$-mutation sequences}, we show, see Lemma \ref{lem:gentoric}, that for all toric cluster variables $X$, there exists a cluster reachable via a generalized $\tau$-mutation sequence which contains $X$. Consequently, the aforementioned three-dimensional parameterization is indeed a parameterization of the entire set of toric cluster variables.
The combinatorial interpretation developed in this paper is in a similar spirit to David Speyer's {\bf crosses-and-wrenches} graphical interpretation for solutions to the Octahedron Recurrence \cite{speyer}, Philippe Di Francesco's dimer solutions to {\bf T-Systems} \cite{DiF}, or Bousquet-M\'elou--Propp--West's {\bf Pinecone} graphs \cite{BMPW}. However as we highlight, both throughout the paper and especially in Section \ref{sec:open}, the $dP_3$ cluster algebra and Aztec Castles provide several new combinatorial features that were not seen in these earlier examples, and yet, due to the symmetry of the $dP_3$ quiver, this example is ideal for detailed analysis.
\vspace{1em}
In Section 2, we begin with the relevant background material on cluster algebras and then provide a geometric viewpoint on certain cluster mutations (toric mutations) of the $dP_3$ cluster algebra. Section 3 presents our first theorem (Theorem \ref{thm:explicit}) which is an explicit algebraic formula for the Laurent expansion of any cluster variable reachable from toric mutations of the $dP_3$ quiver. This provides a three-parameter family which extends the one-parameter and two-parameter families of cluster variables discussed in \cite{zhang} and \cite{LMNT}, respectively. In Section 4, we illustrate how to construct Aztec Castles via six-tuples which is motivated by the constructions of \cite{LaiNewDungeon} and \cite{LMNT}. Section 5 discusses why it is sufficient to consider a certain three-parameter family of six-tuples and provides a complete illustration of all of the possible shapes of Aztec Castles. This leads us to our main theorem (Theorem \ref{thm:main}) which yields a combinatorial interpretation in terms of the $dP_3$ brane tiling for most cluster variables reachable from toric mutations. There is an issue regarding self-intersecting contours that prevents us from getting a combinatorial interpretation for all reachable cluster variables in terms of perfect matchings even though the algebraic formula of Theorem \ref{thm:explicit} still applies in such cases. Sections 6 and 7 provide the proof of our main theorem, while Section 8 includes some examples illustrating the proof. We finish with open problems and directions for further research in Section 9.
\vspace{1em}
\section{From Mutations To Alcove Walks} \label{Sec:Mutations}
We begin by reviewing the definition of quiver mutations and cluster mutations. This leads us to study a special subcollection of mutation sequences known as toric mutation sequences. This special subcollection includes the $\tau$-mutation sequences from \cite{LMNT} as special cases. We provide a geometric interpretation that allows us to examine toric mutation sequences essentially as alcove-walks on the $\mathbb{Z}^3$ lattice, which extends the visualization of $\tau$-mutation sequences as $\mathbb{Z}^2$-alcove walks from \cite{LMNT}. We exploit this identification later in this paper to obtain explicit algebraic formulas, Theorem \ref{thm:explicit}, and combinatorial interpretations, Theorem \ref{thm:main}, for the resulting cluster variables.
\subsection{Quiver and Cluster Mutations}
\label{subsec:quivbrane}
A \textbf{quiver} $Q$ is a finite directed graph with a set of vertices $V$ and a set of edges $E$ connecting them, whose directions are denoted by arrows. For our purposes, $Q$ may have multiple edges connecting two vertices but may not contain any loops or $2$-cycles. We can relate a cluster algebra with initial seed $\{x_{1},x_{2},\ldots,x_{n}\}$ to $Q$ by associating a cluster variable $x_{i}$ to every vertex labeled $i$ in $Q$, where $|V| = n$. The cluster is the set of the cluster variables at the vertices.
\begin{definition}[\textbf{Quiver Mutation}] Mutating at a vertex $i$ in $Q$ is denoted by $\mu_{i}$ and corresponds to the following actions on the quiver:
\begin{itemize}
\item For every 2-path through $i$ (e.g. $j \rightarrow i \rightarrow k$), add an edge from $j$ to $k$.
\item Reverse the directions of the arrows incident to $i$.
\item Delete any 2-cycles created from the previous two steps.
\end{itemize}
\end{definition}
\noindent When we mutate at a vertex $i$, the cluster variable at this vertex is updated and all other cluster variables remain unchanged \cite{FZ}. The action of $\mu_{i}$ on the cluster leads to the following binomial exchange relation:
\begin{equation*}
\label{eq: exchange relation}
x'_{i}x_{i} = \prod_{i \rightarrow j \; \mathrm{in} \; Q}x_{j}^{a_{i \rightarrow j}} + \prod_{j \rightarrow i \; \mathrm{in} \; Q}x_{j}^{b_{j \rightarrow i}}
\end{equation*}
where $x_i'$ is the new cluster variable at vertex $i$, $a_{i \rightarrow j}$ denotes the number of edges from $i$ to $j$, and $b_{j \rightarrow i}$ denotes the number of edges from $j$ to $i$.
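The quiver and exchange-relation rules above are completely algorithmic. As an illustration (a minimal sketch, not the $dP_3$-specific arrow data: the exchange matrix below encodes a toy path quiver $1 \to 2 \to 3$), one can store a quiver as a skew-symmetric integer matrix $B$ with $B_{ij}$ arrows $i \to j$, and check that mutation at a vertex is an involution, using exact rational arithmetic in place of symbolic cluster variables:

```python
from fractions import Fraction

def mutate(B, x, k):
    """Mutate the exchange matrix B and cluster x at vertex k (0-indexed).

    B[i][j] > 0 means B[i][j] arrows i -> j; B is skew-symmetric.
    Returns a new (B', x') pair; the inputs are left untouched."""
    n = len(B)
    Bp = [[-B[i][j] if k in (i, j)
           else B[i][j] + (abs(B[i][k]) * B[k][j] + B[i][k] * abs(B[k][j])) // 2
           for j in range(n)] for i in range(n)]
    # Binomial exchange relation:
    # x_k' x_k = (product over arrows out of k) + (product over arrows into k).
    out_term = Fraction(1)
    in_term = Fraction(1)
    for j in range(n):
        if B[k][j] > 0:
            out_term *= x[j] ** B[k][j]
        elif B[k][j] < 0:
            in_term *= x[j] ** (-B[k][j])
    xp = list(x)
    xp[k] = (out_term + in_term) / x[k]
    return Bp, xp

# Toy path quiver 1 -> 2 -> 3, with a numerical specialization of the cluster.
B0 = [[0, 1, 0], [-1, 0, 1], [0, -1, 0]]
x0 = [Fraction(2), Fraction(3), Fraction(5)]
B1, x1 = mutate(B0, x0, 1)   # mutate at the middle vertex
B2, x2 = mutate(B1, x1, 1)   # mutating again undoes the step
```

Running `mutate` twice at the same vertex returns the original pair, matching the fact that each $\mu_i$ is an involution.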
\subsection{The Del Pezzo 3 Quiver and its Brane Tiling} \label{Sec:Prelim}
With quiver and cluster mutation in mind, we introduce our main character, the quiver $Q$ associated to the third del Pezzo surface ($\mathbf{dP_3}$), illustrated in Figure \ref{fig:quiv_brane} with its associated brane tiling \cite{brane_dimer,Vafa,FHMSVW,HV}. We focus on one of the four possible toric phases of this quiver. In particular, $Q$ is typically referred to as Model 1, as it is in Figure 27 of \cite{franco_eager}. (However, for completeness, we mention that these quivers were first described earlier in the physics literature in \cite{BP,feng}.)
Its {\bf Toric Diagram} is the convex hull of the vertices $\{(-1,1), (0,1), (1,0), (1,-1), (0,-1), (-1,0), (0,0)\}$. See Figure \ref{fig:quiv_brane} (Left).
\begin{figure}\centering
\includegraphics[width=12cm]{Branetiling1.pdf}
\caption{\small The $dP_3$ toric diagram, its quiver $Q$, and its associated brane tiling $\mathcal{T}$.}
\label{fig:quiv_brane}
\end{figure}
In a toric supersymmetric gauge theory, a formal linear combination of closed cycles of the quiver where each edge in the unit cell of the brane tiling appears exactly twice, once for clockwise (positive) orientation and once for counter-clockwise (negative) orientation, is
known as a \textbf{superpotential}. Using a pair $(Q,W)$ where $Q$ is a quiver that can be drawn on a torus and $W$ is a related superpotential, we can uniquely build a $2$-dimensional cell complex using potential terms as $2$-faces, quiver arrows as $1$-faces, and quiver vertices as $0$-faces. We construct this cell complex on a torus (unfolded on its universal cover the Euclidean plane), and then take its planar dual to get the {\bf brane tiling} associated to $(Q,W)$.
Proceeding in this way for the case of the $dP_3$ quiver with the superpotential $W$ given in (\ref{eq:pot}), the associated brane tiling is illustrated on the right-hand-side of Figure \ref{fig:quiv_brane}. We denote the $dP_3$ brane tiling as $\mathcal{T}$.
\subsection{Toric Mutations} \label{Sec:Toric}
We say that a vertex of a quiver $Q$ is {\bf toric} if it has both in-degree and out-degree $2$. A {\bf toric mutation} is a mutation at a toric vertex. Beginning with the $dP_3$ quiver introduced in Section \ref{Sec:Prelim}, mutation at any vertex is a toric mutation. We call this initial quiver Model 1. After any such mutation, up to graph isomorphism, we have a quiver as illustrated in the top-right of Figure \ref{fig:dP3QuiverModels} (the graphics are made with \cite{sage}).
In a Model 2 quiver, four out of the six vertices are toric at this point. Two of these toric vertices come as an antipodal pair on the equator of the octahedron. Mutation at one of them leads back to the original Model 1 quiver and mutation at the antipode leads to a Model 1 quiver where some of the vertices have been permuted. The remaining two toric vertices lie at the poles of the octahedron. Mutation at either of those toric vertices leads to a Model 3 quiver or the reverse of a Model 3 quiver. See the bottom-left of Figure \ref{fig:dP3QuiverModels}. By abuse of notation we will refer to both of these as Model 3 quivers.
In a Model 3 quiver, there is a unique vertex with in-degree and out-degree $3$. All other vertices in a Model 3 quiver are toric. Three of these toric vertices are incident to a double-arrow. Mutation at those three vertices leads to a Model 4 quiver, illustrated in the bottom-right of Figure \ref{fig:dP3QuiverModels}. The remaining two toric vertices yield Model 2 quivers when mutated. One of these Model 2 quivers is the quiver previously visited, since mutation is an involution.
Finally, in a Model 4 quiver, there are three toric vertices, which are graph-isomorphic to one another. Mutation at any of them leads to a Model 3 quiver. To summarize all of the adjacencies, we borrow the following figure from \cite[Figure 27]{franco_eager}. See Figure \ref{fig:ModelConnections}. We now focus on special examples of toric mutation sequences before returning to the general case.
\begin{figure}
\includegraphics[width=2.5in]{Model1.png}
\includegraphics[width=3.1in]{Model2.png}
\includegraphics[width=2.5in]{Model3.png}
\includegraphics[width=2.5in]{Model4.png}
\caption{Models 1, 2, 3, and 4 of the $dP_3$ quiver. Two quivers are considered to be the same model if they are equivalent under (i) graph isomorphism and (ii) reversal of all edges.}
\label{fig:dP3QuiverModels}
\end{figure}
\begin{figure}\centering
\includegraphics[width=12cm]{Fig3.pdf}
\caption{Adjacencies between the different models.}
\label{fig:ModelConnections}
\end{figure}
\subsection{Generalized $\tau$-mutation Sequences}
\label{sec:tau}
In this subsection, we define a class of mutation sequences on $Q$, which we refer to as generalized $\tau$-mutation sequences. This extends the definition from \cite{LMNT}, which defined $\tau_1$, $\tau_2$, and $\tau_3$\footnote{Note: the notations for $\tau_i$ and $\tau_i'$-mutations are reversed as compared with \cite{LMNT} since in this paper, the sequences involving combinations of mutations and permutations are more central.}.
\begin{definition}
\label{def:tau0}
Define the following pairs of mutations on $Q$.
\begin{itemize}
\centering
\item[] $\tau_{1}'=\mu_{1} \circ \mu_{2}$
\item[] $\tau_{2}'=\mu_{3} \circ \mu_{4}$
\item[] $\tau_{3}'=\mu_{5} \circ \mu_{6}$
\item[] $\tau_{4}'= \mu_{1} \circ \mu_{4} \circ \mu_1 \circ \mu_5 \circ \mu_1$
\item[] $\tau_{5}'= \mu_{2} \circ \mu_{3} \circ \mu_2 \circ \mu_6 \circ \mu_2$
\end{itemize}
\end{definition}
Since antipodal vertices share no common edges, we observe that $\mu_{2i - 1}$ and $\mu_{2i}$ commute for $i \in \{1,2,3\}$.
Furthermore, for such $i$, the action of $\tau_{i}'$ on the quiver exchanges the labels on vertices $2i-1$ and $2i$. This motivates us to define
the following actions on cluster seeds which are slight variants of the $\tau'$-mutations.
\begin{definition}
\label{def:tau}
Define the following actions on $Q$.
\begin{itemize}
\centering
\item[] $\tau_{1}=\mu_{1} \circ \mu_{2} \circ (12)$
\item[] $\tau_{2}=\mu_{3} \circ \mu_{4} \circ (34)$
\item[] $\tau_{3}=\mu_{5} \circ \mu_{6} \circ (56)$
\item[] $\tau_{4}= \mu_{1} \circ \mu_{4} \circ \mu_1 \circ \mu_5 \circ \mu_1 \circ (145)$
\item[] $\tau_{5}= \mu_{2} \circ \mu_{3} \circ \mu_2 \circ \mu_6 \circ \mu_2 \circ (236)$.
\end{itemize}
Here, in each case, we apply a graph automorphism of $Q$ and the corresponding permutation to the labeled seed after the sequence of mutations.
\end{definition}
One can then check that on the level of quivers and labeled seeds (i.e. ordered clusters), we have the following identities:
For all $i,j$ such that $1 \leq i \neq j \leq 3$:
\begin{equation}
\label{eq: tau_relations}
\begin{split}
\tau_1(Q) = \tau_2(Q) = \tau_3(Q) = \tau_4(Q) = \tau_5(Q) &= Q \\
(\tau_{i})^{2} \{x_1,x_2\dots, x_6\} = (\tau_{4})^{2} \{x_1,x_2\dots, x_6\} = (\tau_{5})^{2} \{x_1,x_2\dots, x_6\}&= \{x_1,x_2\dots, x_6\} \\
(\tau_{i}\tau_{j})^{3} \{x_1,x_2\dots, x_6\}&= \{x_1,x_2\dots, x_6\}, \\
\tau_i \tau_4 \{x_1,x_2\dots, x_6\} &= \tau_4 \tau_i \{x_1,x_2\dots, x_6\}, \\
\tau_i \tau_5 \{x_1,x_2\dots, x_6\} &= \tau_5 \tau_i \{x_1,x_2\dots, x_6\}.
\end{split}
\end{equation}
Lastly, letting $\tau_4\circ \tau_5$ act on the labeled seed $\{x_1,x_2\dots, x_6\}$, we see that this sequence has infinite order. From these relations, it follows that $\langle \tau_1,\tau_2,\dots, \tau_5\rangle$ generate a subgroup\footnote{We later show, see Remark \ref{Rem:unique}, that in fact there are no other relations and $\langle \tau_1,\tau_2,\dots, \tau_5\rangle$ generate the full reflection group of this type. We do not require this equality for the remainder of Section \ref{Sec:Mutations}.} of the reflection group of type $\tilde{A}_2 \times I_\infty$ where $\tilde{A}_2$ is the affine symmetric group on $\{0,1,2\}$ and $I_\infty$ is the infinite dihedral group.
We define a \textbf{generalized} $\mathbf{\tau}$-\textbf{mutation sequence} $S$ to be a mutation sequence of the form $\tau_{a_1} \tau_{a_2} \ldots \tau_{a_k}$ with the $a_i$'s in $\{1,2,3,4,5\}$.
\begin{definition} [\bf Ordered Cluster from a generalized $\tau$-mutation sequence]
Starting with an initial cluster of $[x_1,x_2,x_3,x_4,x_5,x_6]$, we let $Z^S = [z^S_1, z^S_2, z^S_3, z^S_4, z^S_5, z^S_6]$ denote the ordered cluster resulting from the generalized $\tau$-mutation sequence $S$.
\end{definition}
\subsection{Viewing Generalized $\tau$-mutation Sequences as Prism Walks} \label{sec:walks}
Before saying more about the cluster variables arising from a generalized $\tau$-mutation sequence, we develop a two-parameter, and later three-parameter, coordinate system motivated by the relations satisfied by the $\tau_i$'s. In particular, we have the following.
\begin{remark} Since we intertwine the permutations with the mutations when applying a generalized $\tau$-mutation sequence, all of the $\tau_i$'s fix quiver $Q$. It follows that $Z^S$ is well-defined up to the $\tilde{A}_2 \times I_\infty$ relations of (\ref{eq: tau_relations}).
\end{remark}
Let $L^\Delta$ denote the square lattice triangulated with $45^\circ$--$45^\circ$--$90^\circ$ triangles, as in Figure \ref{fig:sqlattice}. Note that $L^\Delta$ is isomorphic to the affine $\widetilde{A}_2$ Coxeter lattice, which we let $\tau_1$, $\tau_2$, and $\tau_3$ act on as simple reflections, equivalently as steps in an alcove walk as in \cite{Rou} (also compare with \cite[Figure 5]{LMNT}).
We let the triangle $\{(0,-1), (-1,0), (0,0)\}$ be the initial alcove and use the convention
that $\tau_1$ (resp. $\tau_2$, $\tau_3$) corresponds to a flip across the horizontal (resp. vertical, diagonal) edge of the initial alcove. For the remaining alcoves, we obtain the correspondence between edges and $\tau_i$'s by noting that around each vertex in $L^\Delta$, the assignments alternate between a given $\tau_i$ and $\tau_j$ (for $i,j \in \{1,2,3\}$). Using two sides of the triangle in the initial alcove allows us to uniquely extend the assignment to all other edges.
\begin{figure}
\centering
\setlength{\unitlength}{3947sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\scalebox{2.0}{\scalebox{0.5}{
\begin{picture}(0,0)%
\includegraphics{GreggLattice.pdf}%
\end{picture}%
\begin{picture}(5005,3569)(826,-3337)
\put(2506,-1539){\makebox(0,0)[lb]{\smash{{\SetFigFont{10}{12.0}{\familydefault}{\mddefault}{\updefault}{ $\tau_3$}%
}}}}
\put(3339,-1531){\makebox(0,0)[lb]{\smash{{\SetFigFont{10}{12.0}{\familydefault}{\mddefault}{\updefault}{ $\tau_2$}%
}}}}
\put(2476,-991){\makebox(0,0)[lb]{\smash{{\SetFigFont{10}{12.0}{\familydefault}{\mddefault}{\updefault}{ $\tau_1$}%
}}}}
\put(3834,-1539){\makebox(0,0)[lb]{\smash{{\SetFigFont{10}{12.0}{\familydefault}{\mddefault}{\updefault}{ $\tau_1$}%
}}}}
\put(3583,-984){\makebox(0,0)[lb]{\smash{{\SetFigFont{10}{12.0}{\familydefault}{\mddefault}{\updefault}{ $\tau_2$}%
}}}}
\put(3834,-601){\makebox(0,0)[lb]{\smash{{\SetFigFont{10}{12.0}{\familydefault}{\mddefault}{\updefault}{ $\tau_3$}%
}}}}
\put(4276,-631){\makebox(0,0)[lb]{\smash{{\SetFigFont{10}{12.0}{\familydefault}{\mddefault}{\updefault}{ $\tau_2$}%
}}}}
\put(4794,-609){\makebox(0,0)[lb]{\smash{{\SetFigFont{10}{12.0}{\familydefault}{\mddefault}{\updefault}{ $\tau_1$}%
}}}}
\put(3313,-616){\makebox(0,0)[lb]{\smash{{\SetFigFont{10}{12.0}{\familydefault}{\mddefault}{\updefault}{ $\tau_1$}%
}}}}
\put(2896,-624){\makebox(0,0)[lb]{\smash{{\SetFigFont{10}{12.0}{\familydefault}{\mddefault}{\updefault}{ $\tau_2$}%
}}}}
\put(2386,-594){\makebox(0,0)[lb]{\smash{{\SetFigFont{10}{12.0}{\familydefault}{\mddefault}{\updefault}{ $\tau_3$}%
}}}}
\put(2716,-61){\makebox(0,0)[lb]{\smash{{\SetFigFont{10}{12.0}{\familydefault}{\mddefault}{\updefault}{ $\tau_3$}%
}}}}
\end{picture}
}}
\caption{The lattice $L^\Delta$ with the initial alcove labeled as $(0,-1), (-1,0)$, and $(0,0)$. The three $\tau$-mutations $\tau_1$, $\tau_2$, $\tau_3$ correspond to the directions of the triangular flips.}
\label{fig:sqlattice}
\end{figure}
Using (\ref{eq: tau_relations}), any generalized $\tau$-mutation sequence $S$ can be written as $S=S_1S_2$, where $S_1$ consists entirely of $\tau_1$'s, $\tau_2$'s, and $\tau_3$'s (i.e. a $\tau$-mutation sequence as defined in \cite{LMNT}), and $S_2$ is an alternating product of $\tau_4$ and $\tau_5$. For such an $S_1$, we associate an {\bf alcove walk} by starting in the initial alcove for $S_1 = \emptyset$ and then for each $\tau_i$ in $S_1$ (from left-to-right), we apply the associated reflection, which yields one of the three neighboring alcoves. The following remark is easy to verify and will be useful for the descriptions of triangular flips we use below.
\begin{remark} \label{rem:Delta} The lattice $L^\Delta$ consists of two orientations of triangles, a NE-pointing triangle, $[(i,j), (i-1,j+1), (i,j+1)]$, and a SW-pointing one, $[(i,j),(i-1,j+1),(i-1,j)]$. Further, the image of $[(i_1,j_1),(i_2,j_2),(i_3,j_3)]$ under the map $\alpha : \mathbb{Z}^2 \to \mathbb{Z}/3\mathbb{Z}$ defined as $(I,J) \mapsto I - J \mod 3$ is a permutation of $[1,2,3]$. See Figure \ref{fig:3levels}. In fact, $L^\Delta$ is completely and disjointly tiled by the set of NE-pointing triangles whose NE vertex $(I,J)$ satisfies $\alpha(I,J) = 3$. Such triangles are translates of the triangle shown shaded.
\end{remark}
\begin{remark} \label{rem:order123} For convenience, consider the case that $|S_1|$ is even, so that $T_{S_1} = [(i,j), (i-1,j+1), (i,j+1)]$ is a NE-pointing triangle, as in Remark \ref{rem:Delta}. Without loss of generality, we also focus on the case $i - j \equiv 1 \mod 3$. If we apply triangular flips in the three possible directions, we obtain adjacent triangles $T_{S_1\tau_1}$, $T_{S_1\tau_2}$, and $T_{S_1\tau_3}$ where we have exchanged one vertex for a new one: $(i,j) \leftrightarrow (i-1,j+2)$, $(i-1,j+1) \leftrightarrow (i+1,j)$, or $(i,j+1) \leftrightarrow (i-1,j)$, keeping the order otherwise the same. By a case-by-case analysis, we see that the values of $\alpha(T_{S_1\tau_r})$ continue to be $[1,2,3]$ in that order; see Figures \ref{fig:3levels} and \ref{fig:flips}.
\end{remark}
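The flips of Remarks \ref{rem:Delta} and \ref{rem:order123} admit a compact computational description: storing an alcove as an ordered triple (sorted by the statistic $\alpha$), each $\tau_r$ replaces the $r$-th vertex $v$ by $e_1 + e_2 - v$, where $e_1, e_2$ are the other two vertices. The sketch below uses this parallelogram rule (our rephrasing of the vertex exchanges listed in Remark \ref{rem:order123}, not notation from the paper) to run alcove walks:

```python
# An alcove is a list of three (i, j) pairs ordered so that the r-th
# vertex (r = 1, 2, 3) satisfies i - j = r (mod 3).
INITIAL_ALCOVE = [(0, -1), (-1, 0), (0, 0)]

def flip(T, r):
    """Apply tau_r: reflect the r-th vertex across the opposite edge."""
    v = T[r - 1]
    e1, e2 = (T[m] for m in range(3) if m != r - 1)
    new = (e1[0] + e2[0] - v[0], e1[1] + e2[1] - v[1])
    return [new if m == r - 1 else T[m] for m in range(3)]

def walk(word, T=INITIAL_ALCOVE):
    """Run an alcove walk for a word such as [1, 2, 1] (tau_1 tau_2 tau_1)."""
    for r in word:
        T = flip(T, r)
    return T
```

One can check directly that each $\tau_r$ is an involution on alcoves, that a word alternating $\tau_1$ and $\tau_2$ returns to the initial alcove after six steps (the order-$3$ relation on the product), and that the $\alpha$-ordering of the vertices is preserved along any walk.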
\begin{figure}
\includegraphics[width=3in]{graphic1.pdf}
\caption{Levels of $i-j \mod 3$ in Lattice $L^\Delta$ illustrated.}
\label{fig:3levels}
\end{figure}
\begin{figure}
\begin{picture}(0,0)%
\includegraphics{Page7.pdf}%
\end{picture}%
\setlength{\unitlength}{3947sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(6286,1954)(813,-1579)
\put(6171,-629){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\rmdefault}{\mddefault}{\itdefault}{$\tau_3$}%
}}}}
\put(1819,-834){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\rmdefault}{\mddefault}{\itdefault}{$\tau_1$}%
}}}}
\put(4144,-531){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\rmdefault}{\mddefault}{\itdefault}{$\tau_2$}%
}}}}
\end{picture}
\caption{The three possible triangular flips illustrated.} \label{fig:flips}
\end{figure}
With coordinates now described for $\tau$-mutation sequences $S_1$ consisting of $\tau_1$'s, $\tau_2$'s and $\tau_3$'s, we now consider a generalized $\tau$-mutation sequence $S=S_1S_2$, where $S_2$ consists of $\tau_4$'s and $\tau_5$'s. For such a generalized $\tau$-mutation sequence we associate a corresponding triangular prism in $\mathbb{Z}^3$. By the relations (\ref{eq: tau_relations}) and the equivalence $L^\Delta \times \mathbb{Z} \cong \mathbb{Z}^3$, the following definition is well-defined. In Section \ref{sec:subgraphs}, these prisms will be used to define a corresponding $6$-tuple of graphs, which will serve as our combinatorial interpretation for cluster variables.
\begin{definition} [\bf Prism $\Delta^S$ from a generalized $\tau$-mutation sequence $S$] \label{def:prism}
Factor $S$ as $S_1S_2$ as mentioned above, and let the triple
$[(i_1,j_1),(i_2,j_2),(i_3,j_3)]$ denote the triangle of $L^\Delta$ reached after applying the alcove walk associated to $S_1$ starting from the initial alcove. We order this triple so that the vertices satisfy $i_r - j_r \equiv r \mod 3$. Then
$$\Delta^{S} = \Delta^{S_1S_2} = [(i_1,j_1,k_1),(i_1,j_1,k_2),(i_2,j_2,k_2),(i_2,j_2,k_1),(i_3,j_3,k_1),(i_3,j_3,k_2)]$$
in $\mathbb{Z}^3 \cong L^\Delta \times \mathbb{Z}$ where we let $\{k_1,k_2\} = \{|S_2|, |S_2|+1\}$ (resp. $\{-|S_2|,-|S_2|+1\}$) if $S_2$ starts with $\tau_5$ (resp. $\tau_4$). Here the correspondence between these sets depends on the parity of $|S_2|$: we let $k_1 = \pm |S_2|$ if $|S_2|$ is odd and $k_2 = \pm |S_2|$ otherwise.
\end{definition}
In particular, notice that by Definition \ref{def:prism}, $$\Delta^\emptyset = [(0,-1,1), (0,-1,0), (-1,0,0), (-1,0,1), (0,0,1), (0,0,0)].$$
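Definition \ref{def:prism} is mechanical enough to transcribe directly. The sketch below (hypothetical function and argument names; the input triple is assumed already ordered as in the definition) reproduces $\Delta^\emptyset$ above:

```python
def prism(triple, s2_first=None, s2_len=0):
    """Build the six-tuple Delta^S from the ordered alcove reached by S_1.

    triple   -- [(i1, j1), (i2, j2), (i3, j3)] with i_r - j_r = r (mod 3)
    s2_first -- 5 if S_2 starts with tau_5, 4 if with tau_4, None if S_2 is empty
    s2_len   -- |S_2|
    """
    # {k1, k2} = {s, s + 1} with s = +|S_2| (tau_5 start) or -|S_2| (tau_4 start);
    # k1 takes the value +-|S_2| exactly when |S_2| is odd.
    s = -s2_len if s2_first == 4 else s2_len
    k1, k2 = (s, s + 1) if s2_len % 2 == 1 else (s + 1, s)
    (i1, j1), (i2, j2), (i3, j3) = triple
    return [(i1, j1, k1), (i1, j1, k2), (i2, j2, k2),
            (i2, j2, k1), (i3, j3, k1), (i3, j3, k2)]
```

For the empty sequence both sign conventions give $\{k_1, k_2\} = \{1, 0\}$, recovering the displayed $\Delta^\emptyset$; a single $\tau_4$ shifts the prism down one level, while $\tau_5$'s shift it up.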
Since clusters are well-defined up to the relations (\ref{eq: tau_relations}), it follows that we can associate the ordered cluster $Z^S = [z_1^S,z_2^S,\dots, z_6^S]$, obtained from the generalized $\tau$-mutation sequence $S$, to the prism
$\Delta^S$.
Since both of these lists are ordered, this induces a mapping from the set of lattice points of $\mathbb{Z}^3$ to the set of cluster variables reachable via a generalized $\tau$-mutation sequence.
Accordingly, we use the notation $z_i^{j,k}$ to denote the cluster variable corresponding to the lattice point $(i,j,k)$, noting that we have not yet shown that this mapping is an injection.
We will show injectivity in Section \ref{sec:explicit} but our arguments in Section \ref{sec:gentoric} do not require it.
\begin{figure}
\includegraphics[width=5in]{ModelIPage1.pdf}
\caption{As in Remark \ref{rem:Delta}, we have two possible orientations of a triangle, i.e. alcove, in $L^\Delta$. We let these be shorthand for
the corresponding triangular prisms in $L^\Delta \times \mathbb{Z} \cong \mathbb{Z}^3$ reached by applying a $\tau$-mutation sequence $S_1$ involving
only $\tau_1$'s, $\tau_2$'s, and $\tau_3$'s.}
\label{fig:ModelI}
\end{figure}
\begin{figure}
\includegraphics[width=5.5in]{Page31.pdf}
\caption{Illustrating the prism as it is transformed by mutations $\mu_1$ and $\mu_2$, resulting in the image of $\tau_1' = \mu_1\circ\mu_2$. Applying permutation $(12)$ yields a triangular prism of the above form.}
\label{fig:ModelII}
\end{figure}
\begin{figure}
\includegraphics[width=5.5in]{Page41.pdf}
\caption{We illustrate the $\mathbb{Z}^3$-transformations induced by the mutation sequence $\tau_4' = \mu_1\mu_4\mu_1\mu_5\mu_1$. Applying permutation $(145)$ to this result
yields a triangular prism of the standard form but with a decreased third coordinate.}
\label{fig:ModelIII}
\end{figure}
\subsection{General Toric Mutation Sequences geometrically} \label{sec:gentoric}
As described at the end of the last section, the set of clusters that are reachable via a generalized $\tau$-mutation sequence can be modeled as prisms in the $\mathbb{Z}^3$ lattice. In particular, the effect of a single $\tau_i$ corresponds to a glide reflection or translation on one of the two prisms illustrated in Figure \ref{fig:ModelI}. Using these two types of transformations, we can move the initial prism, $[(0,-1,1), (0,-1,0), (-1,0,0), (-1,0,1), (0,0,1), (0,0,0)]$, to any other isometric prism in $\mathbb{Z}^3$ up to reflection or translation. The natural ensuing question is: what can be said about the set of clusters reachable via other toric mutation sequences? We answer this question with the following result.
\begin{lemma} \label{lem:gentoric}
For the cluster algebra associated to the $dP_3$ Quiver, the set of toric cluster variables, i.e. those reachable via a toric mutation sequence, coincides with the set of cluster variables that are reachable by generalized $\tau$-mutation sequences. Consequently, we obtain a parameterization of such toric cluster variables by $\mathbb{Z}^3$.
\end{lemma}
\begin{proof} We prove this result by breaking up $\tau$-mutation sequences into their individual components. From the above analysis, i.e. as induced by the relations (\ref{eq: tau_relations}), we know how each of the $\tau_i$'s transforms one cluster to another in terms of the $\mathbb{Z}^3$-parameterization. However, since most of the individual mutations appearing in a $\tau_i$ appear therein uniquely, we may immediately deduce in most cases how an individual mutation transforms one cluster variable to another in terms of this parameterization as well. To start with, in Figure \ref{fig:ModelII}, we illustrate how the components of $\tau_1$ act on $\mathbb{Z}^3$. By symmetry, the components of $\tau_2$ and $\tau_3$ induce rotated versions of these moves.
Next we present Figure \ref{fig:ModelIII}, which illustrates the intermediate steps that together yield the vertical translation induced by $\tau_4$. The validity of the second (resp. third) configuration in this figure follows from the above description of how the completed mutation sequence $\tau_1$ (resp. $\tau_4$) affects a cluster. Similarly, if we run the $\tau_4$ mutation sequence backwards, the validity of the fifth and fourth configurations follows analogously. By similar logic, the sequence $\tau_5$ would induce the opposite composition of moves.
\begin{figure}
\includegraphics[width=5in]{Fig10.pdf}
\caption{The toric mutation sequence $\mu_1\mu_4\mu_3$ does not correspond to a product of $\tau_i$'s but illustrates a toric mutation between Models 3 and 4.}
\label{fig:ModelsGeom}
\end{figure}
By comparing these figures with the description of toric mutations in Section \ref{Sec:Toric}, we observe that we can obtain $\mathbb{Z}^3$-coordinates for all cluster variables obtained from a toric mutation sequence that passes through quivers of Model 1, Model 2, or Model 3 type. In particular, up to geometric rotations or reflections, any single toric mutation starting from (1) a Model 1 quiver corresponds to the local transformation illustrated on the left-hand-side of Figure \ref{fig:ModelII}; (2) a Model 2 quiver corresponds either to the local transformation illustrated on the right-hand-side of Figure \ref{fig:ModelII} or the diagonal arrows of Figure \ref{fig:ModelIII}; (3) a Model 3 quiver corresponds to either the middle arrow of Figure \ref{fig:ModelIII}, the diagonal arrows on either side in Figure \ref{fig:ModelIII}, or is handled in the ensuing paragraph.
To complete this picture to include all toric mutation sequences, even those that visit a Model 4 type quiver, we include Figure \ref{fig:ModelsGeom} that shows the $\mathbb{Z}^3$-transformations induced by the toric mutations between Model 3 and Model 4. In particular, the content of Figure \ref{fig:ModelsGeom} is that if we start with the initial cluster $\{x_1,x_2,\dots, x_6\}$ and mutate by $\mu_1$, $\mu_4$, then $\mu_3$ we obtain a new cluster whose third cluster variable agrees with the second cluster variable in the cluster after the generalized $\tau$-mutation sequence $\tau_2\tau_1$. The resulting quiver is of Model 4 type and the resulting cluster variables are still parameterized by $\mathbb{Z}^3$. Any toric mutation of a Model 4 quiver leads back to a Model 3 one, and as described in Section \ref{Sec:Toric}, all toric mutations between Model 3 and Model 4 are equivalent. Thus up to symmetry, we have illustrated all possible single step toric mutations among any of the models of the $dP_3$ quiver and hence for any toric mutation sequence rather than just the generalized $\tau$-mutation sequences.
\end{proof}
\begin{remark} \label{rem:RG} In Section 4 of \cite{FHU}, Franco, Hanany, and Uranga discussed certain mutation sequences, called {\bf duality cascades} in their language, of the $dP_3$ quiver that are significant for geometric and physical reasons. After comparing their work to ours, we realized that their cascades are essentially the $\tau_i$'s defined above. It is of interest that our combinatorial motivation, i.e. we looked for a family of mutation sequences whose combinations would satisfy Coxeter relations, aligned with their geometric objectives of understanding the Renormalization Group (RG) flow. The second author has started investigating other examples such as $dP_2$ and $Y_{p,q}$'s with Franco to see how the use of fractional branes and beta functions could lead to other families of mutation sequences satisfying Coxeter relations.
\end{remark}
\begin{remark} \label{rem:zono} In Sections 3.2.2 and 8.3 of \cite{franco_eager}, Eager and Franco discuss a possible coordinate system for working with mutation sequences of quivers from brane tilings. They are motivated by previous work in tilting theory \cite{BH} and the multi-dimensional octahedron recurrence \cite{HK,HS,speyer}, and sketch examples of coordinates for $dP_2$ and $dP_3$. In their coordinate system, they describe certain duality cascades that act as translations of a zonotope.
As described in Section \ref{sec:walks}, the importance of the generalized $\tau$-mutation sequences is that out of the space of all possible toric mutation sequences, the generalized $\tau$-mutation sequences are the ones that map a cluster that looks like a prism (which is an example of a zonotope) to a translation, or a glide reflection, of the same prism. Hence, up to a coordinate change, our generalized $\tau$-mutation sequences of the present paper should agree with the duality cascades described in \cite{franco_eager}. With Eager, the second author is currently exploring the possibility of explicitly defining these coordinate systems and zonotopes for other examples.
\end{remark}
\begin{remark} Unpublished work by the researchers Andr\'e Henriques, David Speyer, and Dylan Thurston on the multi-dimensional octahedron recurrence continues the point of view from \cite{HK,HS,speyer} and would provide an alternative construction for cluster variable coordinates that would be a variant of the ones discussed in this section. We thank David Speyer for enlightening conversations on this topic. More comments comparing our approaches to theirs are included in Section \ref{sec:open}.
\end{remark}
\section{Explicit formula for cluster variables} \label{sec:explicit}
By Lemma \ref{lem:gentoric}, we have a surjection between the lattice points of $\mathbb{Z}^3$ and the cluster variables reachable from a general toric mutation sequence. Moreover, a general toric mutation sequence $S$, applied to the initial cluster, reaches a cluster $Z^S$ that either may be modeled as a prism $\Delta^S$ in $\mathbb{Z}^3$ or is at most three mutation steps away from such a cluster. In this section, we continue to use the notation $z_i^{j,k}$ to denote the cluster variable corresponding to lattice point $(i,j,k)$. With this notation in mind, we come to our first main result.
\begin{theorem} \label{thm:explicit}
Let $(i,j,k) \in \mathbb{Z}^3$ and $z_i^{j,k}$ be the associated cluster variable (reachable by a toric mutation sequence) as described above.
Define $A= \frac{x_3x_5+x_4x_6}{x_1x_2}$, $B=\frac{x_1x_6+x_2x_5}{x_3x_4}$, $C=\frac{x_1x_3+x_2x_4}{x_5x_6}$, $D=\frac{x_1x_3x_6+x_2x_3x_5+x_2x_4x_6}{x_1x_4x_5}$, and
$E = \frac{x_2x_4x_5 + x_1x_3x_5 + x_1x_4x_6}{x_2x_3x_6}$.
Then $z_i^{j,k}$ is given by the Laurent polynomial
{\large $$x_r ~~A^{\lfloor \frac{(i^2+ij+j^2+1) + i + 2j}{3}\rfloor} ~
B^{\lfloor \frac{(i^2+ij+j^2+1) + 2i + j}{3}\rfloor}~
C^{\lfloor \frac{i^2+ij+j^2+1}{3}\rfloor}~
D^{\lfloor \frac{(k-1)^2}{4}\rfloor}~
E^{\lfloor \frac{k^2}{4}\rfloor}$$} where
$r = 1$ if $2(i-j) + 3k \equiv 5$, ~ $r = 2$ if $2(i-j) + 3k \equiv 2$, ~ $r = 3$ if $2(i-j) + 3k \equiv 4$,
$r = 4$ if $2(i-j) + 3k \equiv 1$, ~ $r = 5$ if $2(i-j) + 3k \equiv 3$, ~ $r = 6$ if $2(i-j) + 3k \equiv 0$ working modulo $6$. In particular, the variable $x_r$ is uniquely determined by the values of $(i-j)$ modulo $3$ and $k$ modulo $2$.
\end{theorem}
\begin{remark}
This nontrivial correspondence between the values of $r$ and $2(i-j)+3k \mod 6$ comes from our cyclic ordering
of the Model 1 $dP_3$ quiver in counter-clockwise order given in Figure \ref{fig:quiv_brane}. In particular, as we rotate from vertex $r$ to $r'$ in clockwise order, the corresponding value of $2(i-j)+3k \mod 6$ increases by $1$ (circularly).
\end{remark}
\begin{remark} \label{Rem:unique}
As an application of Theorem \ref{thm:explicit}, observe that $z_i^{j,k} \not = z_{i'}^{j',k'}$ unless $i=i'$, $j=j'$, and $k=k'$. This follows since each of these algebraic expressions is distinct for different $(i,j,k)\in \mathbb{Z}^3$.
\end{remark}
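For readers who wish to experiment, the exponents and the index $r$ of Theorem \ref{thm:explicit} are easy to tabulate; note that Python's floor division agrees with $\lfloor \cdot \rfloor$ for negative arguments, which matters here. The sketch below (hypothetical helper name) checks that the six vertices of the initial prism $\Delta^\emptyset$ recover the initial cluster variables, and illustrates the distinctness claim of Remark \ref{Rem:unique} on a small box (a sanity check, not a proof):

```python
def cluster_data(i, j, k):
    """Return (r, expA, expB, expC, expD, expE) for z_i^{j,k}.

    Transcribes the Laurent monomial data of the explicit formula:
    z_i^{j,k} = x_r * A^expA * B^expB * C^expC * D^expD * E^expE."""
    c = i * i + i * j + j * j + 1        # c(i, j) = i^2 + ij + j^2 + 1
    a = c + i + 2 * j                    # numerator of the A-exponent
    b = c + 2 * i + j                    # numerator of the B-exponent
    r = {5: 1, 2: 2, 4: 3, 1: 4, 3: 5, 0: 6}[(2 * (i - j) + 3 * k) % 6]
    return r, a // 3, b // 3, c // 3, (k - 1) ** 2 // 4, k * k // 4
```

For instance, $(0,0,0)$ yields $r = 6$ with all exponents zero, i.e. $z_0^{0,0} = x_6$, matching the sixth entry of $\Delta^\emptyset$.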
\noindent Before proving the result we note the following identities that will aid us in the proof.
\begin{lemma} \label{lem:ABCDE}
Let $c(i,j) = i^2 + ij + j^2 + 1$, $a(i,j) = c(i,j) + i + 2j$, and $b(i,j) = c(i,j) + 2i+j$. Then we have the identities
{\footnotesize \begin{eqnarray} \label{eq:thirdA} \bigg\lfloor \frac{a(i,j)}{3}\bigg\rfloor + \bigg\lfloor \frac{a(i-1,j+2)}{3}\bigg\rfloor &=& \bigg\lfloor \frac{a(i-1,j+1)}{3}\bigg\rfloor + \bigg\lfloor \frac{a(i,j+1)}{3}\bigg\rfloor + \chi(i - j \equiv 1 \mod 3), \\
\label{eq:thirdB} \bigg\lfloor \frac{b(i,j)}{3}\bigg\rfloor + \bigg\lfloor \frac{b(i-1,j+2)}{3}\bigg\rfloor &=& \bigg\lfloor \frac{b(i-1,j+1)}{3}\bigg\rfloor + \bigg\lfloor \frac{b(i,j+1)}{3}\bigg\rfloor + \chi(i - j \equiv 2 \mod 3), \\
\label{eq:thirdC} \bigg\lfloor \frac{c(i,j)}{3}\bigg\rfloor + \bigg\lfloor \frac{c(i-1,j+2)}{3}\bigg\rfloor &=& \bigg\lfloor \frac{c(i-1,j+1)}{3}\bigg\rfloor + \bigg\lfloor \frac{c(i,j+1)}{3}\bigg\rfloor + \chi(i - j \equiv 0 \mod 3), \\
\label{eq:fourthD} \bigg\lfloor \frac{k^2}{4}\bigg\rfloor + \bigg\lfloor \frac{(k-2)^2}{4}\bigg\rfloor &=&
\bigg\lfloor \frac{(k-1)^2}{4}\bigg\rfloor + \bigg\lfloor \frac{(k-1)^2}{4}\bigg\rfloor + \chi(k \mathrm{~is~even}), \mathrm{~and} \\
\label{eq:fourthE}\bigg\lfloor \frac{(k+1)^2}{4}\bigg\rfloor + \bigg\lfloor \frac{(k-1)^2}{4}\bigg\rfloor &=&
\bigg\lfloor \frac{k^2}{4}\bigg\rfloor + \bigg\lfloor \frac{k^2}{4}\bigg\rfloor + \chi(k \mathrm{~is~odd})
\end{eqnarray} }
where $\chi(S)$ equals $1$ when statement $S$ is true and equals $0$ otherwise. The identities
{\footnotesize \begin{eqnarray} \label{eq:thirdAA} \bigg\lfloor \frac{b(i-1,j+1)}{3}\bigg\rfloor + \bigg\lfloor \frac{b(i+1,j)}{3}\bigg\rfloor &=& \bigg\lfloor \frac{b(i,j)}{3}\bigg\rfloor + \bigg\lfloor \frac{b(i,j+1)}{3}\bigg\rfloor + \chi(i - j \equiv 1 \mod 3) \\
\label{eq:thirdAAA} \bigg\lfloor \frac{c(i,j+1)}{3}\bigg\rfloor + \bigg\lfloor \frac{c(i-1,j)}{3}\bigg\rfloor &=& \bigg\lfloor \frac{c(i,j)}{3}\bigg\rfloor + \bigg\lfloor \frac{c(i-1,j+1)}{3}\bigg\rfloor + \chi(i - j \equiv 1 \mod 3) \end{eqnarray} }
\vspace{-1em}\noindent and analogous versions for $i-j \equiv 2$ or $0 \mod 3$ also hold if all instances of $b(i,j)$ (resp. $c(i,j)$) are switched with $c(i,j)$ or $a(i,j)$ (resp. $a(i,j)$ or $b(i,j)$).
\end{lemma}
\begin{proof}
First, we note that $a(i,j) \equiv \begin{cases} 1 \mod 3 \mathrm{~if~}i-j \equiv 0 \mathrm{~or~} 2\mod 3 \\ 0 \mod 3 \mathrm{~if~}i - j \equiv 1 \mod 3\end{cases}.$
Similarly, we see that $b(i,j) \equiv \begin{cases} 1 \mod 3 \mathrm{~if~}i-j \equiv 0 \mathrm{~or~} 1\mod 3 \\ 0 \mod 3 \mathrm{~if~}i - j \equiv 2 \mod 3\end{cases}$
and $c(i,j) \equiv \begin{cases} 1 \mod 3 \mathrm{~if~}i-j \equiv 0 \mod 3 \\ 2 \mod 3 \mathrm{~if~}i-j \equiv 1 \mathrm{~or~} 2\mod 3\end{cases}.$
It is easy to verify also that
\begin{equation}
\label{eq:IdA} a(i,j) + a(i-1,j+2) - a(i-1,j+1) - a(i,j+1) = 1
\end{equation}
and we get identical equalities for $b(i,j)$ and $c(i,j)$.
Subtracting the floor functions appearing on the RHS of (\ref{eq:thirdA}) from those on the LHS, we obtain
{\small \begin{eqnarray*}\frac{a(i,j)}{3} + \frac{a(i-1,j+2)}{3} &-& \frac{a(i-1,j+1)-1}{3} - \frac{a(i,j+1)-1}{3} \mathrm{~~when~}i - j \equiv 1 \mathrm{~mod~3}, \\
\frac{a(i,j)-1}{3} + \frac{a(i-1,j+2)-1}{3} &-& \frac{a(i-1,j+1)-1}{3} - \frac{a(i,j+1)}{3} \mathrm{~~when~}i - j \equiv 2 \mathrm{~mod~3,~and} \\
\frac{a(i,j)-1}{3} + \frac{a(i-1,j+2)-1}{3} &-& \frac{a(i-1,j+1)}{3} - \frac{a(i,j+1)-1}{3} \mathrm{~~when~}i - j \equiv 0 \mathrm{~mod~3}.
\end{eqnarray*}}
Using identity (\ref{eq:IdA}), the result is $\chi(i-j \equiv 1 \mod 3)$ exactly as desired.
The proof of identity (\ref{eq:thirdB}) is essentially identical. In the case of (\ref{eq:thirdC}), we see
{\small \begin{eqnarray*}\frac{c(i,j)-2}{3} + \frac{c(i-1,j+2)-2}{3} &-& \frac{c(i-1,j+1)-2}{3} - \frac{c(i,j+1)-1}{3} \mathrm{~~when~}i - j \equiv 1 \mathrm{~mod~3,} \\
\frac{c(i,j)-2}{3} + \frac{c(i-1,j+2)-2}{3} &-& \frac{c(i-1,j+1)-1}{3} - \frac{c(i,j+1)-2}{3} \mathrm{~~when~}i - j \equiv 2 \mathrm{~mod~3,~and} \\
\frac{c(i,j)-1}{3} + \frac{c(i-1,j+2)-1}{3} &-& \frac{c(i-1,j+1)-2}{3} - \frac{c(i,j+1)-2}{3} \mathrm{~~when~}i - j \equiv 0 \mathrm{~mod~3},
\end{eqnarray*}}
\noindent resulting in $\chi(i-j \equiv 0 \mod 3)$.
We next consider the identities (\ref{eq:fourthD}) and (\ref{eq:fourthE}). Notice that these identities agree up to a shift of $k$ by $1$ so it suffices to prove (\ref{eq:fourthE}).
We have $k^2 \mod 4 \equiv \begin{cases} 1 \mathrm{~if~k~is~odd} \\ 0 \mathrm{~if~k~is~even}\end{cases}$ and the reverse is true for $(k\pm 1)^2 \mod 4$.
Thus if $k$ is odd, subtracting the floor functions appearing on the RHS from those on the LHS, we obtain
$$\frac{k^2+2k+1}{4} + \frac{k^2-2k+1}{4} - \frac{k^2-1}{4} - \frac{k^2-1}{4} = 1$$
and obtain $$\frac{k^2+2k}{4} + \frac{k^2-2k}{4} - \frac{k^2}{4} - \frac{k^2}{4} = 0$$ if $k$ is even.
Hence, this difference is exactly $\chi(k \mathrm{~is~odd})$ as desired.
Identities (\ref{eq:thirdAA}), (\ref{eq:thirdAAA}), and their analogues for $b(i,j)$ and $c(i,j)$ are proved by the same method as the proofs of
(\ref{eq:thirdA})-(\ref{eq:thirdC}).
\end{proof}
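Since the floor-function identities above are easy to mistype, the following informal Python sketch double-checks all seven of them over a window of integers (Python's floor division \texttt{//} agrees with $\lfloor \cdot \rfloor$ even for negative integers):

```python
# Numerical check of the identities of Lemma lem:ABCDE.
def c(i, j): return i * i + i * j + j * j + 1
def a(i, j): return c(i, j) + i + 2 * j
def b(i, j): return c(i, j) + 2 * i + j

for i in range(-10, 11):
    for j in range(-10, 11):
        # identities (thirdA)-(thirdC)
        assert a(i, j) // 3 + a(i - 1, j + 2) // 3 == \
            a(i - 1, j + 1) // 3 + a(i, j + 1) // 3 + ((i - j) % 3 == 1)
        assert b(i, j) // 3 + b(i - 1, j + 2) // 3 == \
            b(i - 1, j + 1) // 3 + b(i, j + 1) // 3 + ((i - j) % 3 == 2)
        assert c(i, j) // 3 + c(i - 1, j + 2) // 3 == \
            c(i - 1, j + 1) // 3 + c(i, j + 1) // 3 + ((i - j) % 3 == 0)
        # identities (thirdAA) and (thirdAAA)
        assert b(i - 1, j + 1) // 3 + b(i + 1, j) // 3 == \
            b(i, j) // 3 + b(i, j + 1) // 3 + ((i - j) % 3 == 1)
        assert c(i, j + 1) // 3 + c(i - 1, j) // 3 == \
            c(i, j) // 3 + c(i - 1, j + 1) // 3 + ((i - j) % 3 == 1)

for k in range(-10, 11):
    # identities (fourthD) and (fourthE)
    assert k ** 2 // 4 + (k - 2) ** 2 // 4 == \
        2 * ((k - 1) ** 2 // 4) + (k % 2 == 0)
    assert (k + 1) ** 2 // 4 + (k - 1) ** 2 // 4 == \
        2 * (k ** 2 // 4) + (k % 2 == 1)
```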
With this lemma in hand we now prove Theorem \ref{thm:explicit}. Our proof extends the arguments appearing in the proof of Theorem 2.1 of \cite{LaiNewDungeon} and in Section 2 of \cite{LMNT}.
\begin{proof}[Proof of Theorem \ref{thm:explicit}]
We begin with the base cases for $(i,j) = (0,-1), (-1,0),$ or $(0,0)$ and $k=0$ or $1$.
We see that $(\lfloor \frac{a(i,j)}{3} \rfloor, \lfloor \frac{b(i,j)}{3} \rfloor, \lfloor \frac{c(i,j)}{3} \rfloor, \lfloor \frac{(k-1)^2}{4} \rfloor, \lfloor \frac{k^2}{4} \rfloor)= (0,0,0,0,0)$ for all six of these cases and hence $$[z_0^{-1,1}, z_0^{-1,0}, z_{-1}^{0,0}, z_{-1}^{0,1}, z_0^{0,1}, z_0^{0,0}] = Z^\emptyset = [x_1,x_2,\dots, x_6]$$ as desired.
We prove the explicit formula in general by showing that recurrences induced by mutations are also satisfied by the relevant products of $x_r$, $A$, $B$, $C$, $D$, and $E$. For brevity, let $\mathcal{A}_i^{j} = A^{\lfloor \frac{a(i,j)}{3}\rfloor}
B^{\lfloor \frac{b(i,j)}{3}\rfloor}
C^{\lfloor \frac{c(i,j)}{3}\rfloor}$ and $\mathcal{D}^k = D^{\lfloor \frac{(k-1)^2}{4}\rfloor}E^{\lfloor \frac{k^2}{4}\rfloor}$.
Let $S=S_1S_2$ be a generalized $\tau$-mutation sequence, factored as in Definition \ref{def:prism}. We assume that $S_1$ and $S_2$ are both of even length and work in the case $i-j \equiv 1 \mod 3$. By Remark \ref{rem:order123}, we can assume in this case that we have
$Z^{S} = [z_{i}^{j,k+1}, z_{i}^{j,k}, z_{i-1}^{j+1,k}, z_{i-1}^{j+1,k+1}, z_{i}^{j+1,k+1}, z_{i}^{j+1,k}]$. Induction and the statement of Theorem \ref{thm:explicit} yields
$$Z^{S} = [x_1 \mathcal{A}_i^j\mathcal{D}^{k+1}, ~x_2 \mathcal{A}_i^j\mathcal{D}^{k}, ~x_3 \mathcal{A}_{i-1}^{j+1}\mathcal{D}^{k}, ~x_4 \mathcal{A}_{i-1}^{j+1}\mathcal{D}^{k+1}, ~x_5 \mathcal{A}_i^{j+1}\mathcal{D}^{k+1}, ~x_6 \mathcal{A}_i^{j+1}\mathcal{D}^{k}].$$
Since we are assuming that $S_2$ is of even length, we may assume also that $k$ is even.
The cases where $i - j \equiv 0$ or $2 \mod 3$, $S_1$ is of odd length, or $S_2$ is of odd length can be handled by similar logic. We make comments below pointing out how to change the proofs in these other cases.
\vspace{0.5em}
By the definition of mutation by $\mu_1$, $\mu_4$, or $\mu_5$, we get recurrences relating cluster variables $z_i^{j,k}$'s together:
{\footnotesize
\begin{eqnarray}
\label{eq:Rec1} z_{i-1}^{j+2,k} z_i^{j,k+1} &=& z_{i-1}^{j+1,k} z_i^{j+1,k+1} + z_{i-1}^{j+1,k+1} z_i^{j+1,k} = (x_3 x_5 + x_4 x_6)~ \mathcal{A}_{i-1}^{j+1} ~\mathcal{A}_i^{j+1} \mathcal{D}^k \mathcal{D}^{k+1}, \\
\label{eq:Rec2} z_{i+1}^{j,k} z_{i-1}^{j+1,k+1} &=& z_{i}^{j,k} z_i^{j+1,k+1} + z_{i}^{j,k+1} z_i^{j+1,k} = (x_2x_5 + x_1x_6)~ \mathcal{A}_{i}^{j} ~\mathcal{A}_i^{j+1} \mathcal{D}^k \mathcal{D}^{k+1}, \mathrm{~and~}\\
\label{eq:Rec3} z_{i-1}^{j,k} z_i^{j+1,k+1} &=& z_i^{j,k+1} z_{i-1}^{j+1,k} + z_i^{j,k} z_{i-1}^{j+1,k+1} = (x_1x_3 + x_2x_4)~ \mathcal{A}_i^{j} ~\mathcal{A}_{i-1}^{j+1} \mathcal{D}^k \mathcal{D}^{k+1}.
\end{eqnarray}
}
\vspace{-1em}
Inductively, one of the factors on the LHS is already of the desired form (e.g. $z_{i}^{j, k+1} = x_1\mathcal{A}_i^j \mathcal{D}^{k+1}$,
$ z_{i-1}^{j+1,k+1} = x_4 \mathcal{A}_{i-1}^{j+1}\mathcal{D}^{k+1}$, or
$z_{i}^{j+1,k+1} = x_5 \mathcal{A}_i^{j+1}\mathcal{D}^{k+1}$, matching the ordered cluster $Z^S$ above) and to verify that the inductive step continues to hold as we mutate by $\mu_1$, $\mu_4$, or $\mu_5$, it suffices to verify that
\begin{eqnarray*}
z_{i-1}^{j+2,k} z_i^{j,k+1} = x_1 x_2 \mathcal{A}_{i-1}^{j+2} ~\mathcal{A}_i^{j} \mathcal{D}^k \mathcal{D}^{k+1} &=& (x_3 x_5 + x_4 x_6)~ \mathcal{A}_{i-1}^{j+1} ~\mathcal{A}_i^{j+1} \mathcal{D}^k \mathcal{D}^{k+1}, \\
z_{i+1}^{j,k} z_{i-1}^{j+1,k+1} = x_3x_4 \mathcal{A}_{i+1}^{j} ~\mathcal{A}_{i-1}^{j+1} \mathcal{D}^k \mathcal{D}^{k+1} &=& (x_2x_5 + x_1x_6)~ \mathcal{A}_{i}^{j} ~\mathcal{A}_i^{j+1} \mathcal{D}^k \mathcal{D}^{k+1}, \mathrm{~and~}\\
z_{i-1}^{j,k} z_i^{j+1,k+1} = x_5x_6 \mathcal{A}_{i-1}^{j} ~\mathcal{A}_i^{j+1} \mathcal{D}^k \mathcal{D}^{k+1} &=& (x_1x_3 + x_2x_4)~ \mathcal{A}_i^{j} ~\mathcal{A}_{i-1}^{j+1} \mathcal{D}^k \mathcal{D}^{k+1}.
\end{eqnarray*}
By cross-multiplying, we reduce the problem to showing
$$\frac{\mathcal{A}_{i-1}^{j+2} ~\mathcal{A}_i^{j}}{\mathcal{A}_{i-1}^{j+1} ~\mathcal{A}_i^{j+1}} = \frac{x_3 x_5 + x_4 x_6}{x_1 x_2}, ~~
\frac{\mathcal{A}_{i+1}^{j} ~\mathcal{A}_{i-1}^{j+1}}{\mathcal{A}_{i}^{j} ~\mathcal{A}_i^{j+1}} = \frac{x_2 x_5 + x_1 x_6}{x_3 x_4}, ~~
\frac{\mathcal{A}_{i-1}^{j} ~\mathcal{A}_i^{j+1}}{\mathcal{A}_i^{j} ~\mathcal{A}_{i-1}^{j+1} } = \frac{x_1 x_3 + x_2 x_4}{x_5 x_6}.$$
Continuing to assume that $i-j \equiv 1 \mod 3$, and using the identities of Lemma \ref{lem:ABCDE}, we see that the LHS's equal $A^1B^0C^0$, $A^0B^1C^0$, and $A^0B^0C^1$, respectively. This agrees with the RHS's by definition. The proofs for $i-j \equiv 2$ or $0 \mod 3$ are handled similarly using the identities of Lemma \ref{lem:ABCDE} but with cyclic permutations of the initial cluster variables appearing in Equations (\ref{eq:Rec1})-(\ref{eq:Rec3}).
Mutations by $\mu_2$, $\mu_3$, or $\mu_6$ are analogous except with $z_{i-1}^{j+2,k+1} z_i^{j,k}$,
$z_{i+1}^{j,k+1} z_{i-1}^{j+1,k}$, or $z_{i-1}^{j,k+1} z_i^{j+1,k}$ on the LHS instead. Doing these in pairs, together with a transposition, yields the $\tau$-mutation sequences $\tau_1$, $\tau_2$, and $\tau_3$.
If $S_1$ is of odd length, then as in Remark \ref{rem:Delta}, we use the SW-pointed triangular prism instead and this just reverses the direction of the triangular flips, meaning that the factor on the LHS that is inductively known is the one with coordinate $k$ rather than $(k+1)$. However, the proof is otherwise unaffected.
On the other hand, the $\tau_4$- and $\tau_5$-mutation sequences are more complicated. Assuming that $i - j \equiv 1 \mod 3$ and $k$ is even, $\tau_4 = \mu_1 \circ \mu_4 \circ \mu_1 \circ \mu_5 \circ \mu_1 \circ (145)$ corresponds to the
sequence of recurrences:
\begin{eqnarray}
\label{eq:kRec1}
z_{i-1}^{j+2,k}z_i^{j,k+1} &=& z_{i-1}^{j+1,k} z_i^{j+1,k+1} + z_{i-1}^{j+1,k+1} z_i^{j+1,k} \sim (\ref{eq:Rec1})\\
\label{eq:kRec2}
z_{i-1}^{j+1,k+1} z_{i}^{j+1,k-1} &=& z_{i-1}^{j+2,k} z_i^{j,k} + z_{i-1}^{j+1,k} z_i^{j+1,k} \\
\label{eq:kRec3}
z_{i-1}^{j+2,k} z_{i+1}^{j,k} &=& z_{i}^{j+1,k-1} z_i^{j+1,k+1} + (z_{i}^{j+1,k})^2 \\
\label{eq:kRec4}
z_{i}^{j+1,k+1} z_{i}^{j,k-1} &=& z_{i+1}^{j,k} z_{i-1}^{j+1,k} + z_{i}^{j,k} z_i^{j+1,k} \\
\label{eq:kRec5}
z_{i-1}^{j+1,k-1}z_{i+1}^{j,k} &=& z_{i}^{j,k} z_i^{j+1,k-1} + z_{i}^{j,k-1} z_i^{j+1,k} \sim (\ref{eq:Rec2})
\end{eqnarray}
Notice that in the last recurrence, because of the cyclic permutation, it is as if we are mutating by $\mu_4$ rather than $\mu_1$, which is why this recurrence looks like (\ref{eq:Rec2}) instead of (\ref{eq:Rec1}). Furthermore, focusing on the $(i,j)$-coordinates, the three terms in (\ref{eq:kRec2}) are a rearrangement of the terms in (\ref{eq:Rec1}) while the three terms in (\ref{eq:kRec4}) are a rearrangement of the terms in (\ref{eq:Rec2}). (The situation for $\tau_5$ is analogous and left to the reader.)
Equations (\ref{eq:kRec1}) and (\ref{eq:kRec5}) were handled above, thus we now use recurrence (\ref{eq:kRec2}) to show that the inductive hypothesis continues even as $k$ decreases. Note that in Equation (\ref{eq:kRec2}), all cluster variables are known to have the desired explicit form except for
$z_i^{j+1,k-1}$. Hence, just as above, it suffices to show that
$$x_4 x_5 ~ \mathcal{A}_{i-1}^{j+1} ~\mathcal{A}_i^{j+1} \mathcal{D}^{k+1} \mathcal{D}^{k-1} =
x_2 x_2 ~ \mathcal{A}_{i-1}^{j+2} ~\mathcal{A}_i^{j} \mathcal{D}^k \mathcal{D}^{k}
+ x_3 x_6 ~ \mathcal{A}_{i-1}^{j+1} ~\mathcal{A}_i^{j+1} \mathcal{D}^k \mathcal{D}^{k}.
$$
Dividing by $x_4x_5\mathcal{A}_{i-1}^{j+1} ~\mathcal{A}_i^{j+1} \mathcal{D}^k \mathcal{D}^{k}
$ yields
$$\frac{\mathcal{D}^{k+1} \mathcal{D}^{k-1} }{\mathcal{D}^k \mathcal{D}^{k}} =
\frac{x_2 x_2}{x_4x_5} ~ \frac{\mathcal{A}_{i-1}^{j+2} ~\mathcal{A}_i^{j} }{\mathcal{A}_{i-1}^{j+1} ~\mathcal{A}_i^{j+1} }
+ \frac{x_3 x_6}{x_4x_5} = \frac{x_2^2 A + x_3x_6}{x_4x_5}.$$
Then recurrences (\ref{eq:fourthD}) and (\ref{eq:fourthE}) of Lemma \ref{lem:ABCDE} yield
$D^1E^0$ on the LHS since we assumed that $k$ was even.
By algebraic manipulations, we see that
{\footnotesize $$D = \frac{x_2^2 A + x_3x_6}{x_4x_5} = \frac{x_3^2 B + x_2 x_6}{x_1x_5} = \frac{x_6^2 C + x_2x_3}{x_1x_4}, \mathrm{~and~}
E = \frac{x_1^2 A + x_4x_5}{x_3x_6} = \frac{x_4^2 B + x_1 x_5}{x_2x_6} = \frac{x_5^2 C + x_1x_4}{x_2x_3}$$}
\noindent and so we have the desired equality.
The other expressions for $D$ and $E$ allow us to prove the result for other values of $i-j \mod 3$ and when $k$ is odd instead of even.
The recurrence (\ref{eq:kRec3}) allows us to proceed with the induction as well. Assuming that we know the explicit formula holds for five out of these six cluster variables, it suffices to check
$$x_2 x_3 ~ \mathcal{A}_{i-1}^{j+2} ~\mathcal{A}_{i+1}^{j} \mathcal{D}^{k} \mathcal{D}^{k} =
x_5 x_5 ~ \mathcal{A}_{i}^{j+1} ~\mathcal{A}_i^{j+1} \mathcal{D}^{k-1} \mathcal{D}^{k+1}
+ x_6 x_6 ~ \mathcal{A}_{i}^{j+1} ~\mathcal{A}_i^{j+1} \mathcal{D}^k \mathcal{D}^{k}
$$
After dividing through and multiplying top and bottom of the LHS by
$\mathcal{A}_i^j\mathcal{A}_{i-1}^{j+1}$, we get
$$x_2 x_3 ~
\frac{\mathcal{A}_{i-1}^{j+2} \mathcal{A}_i^j}{
\mathcal{A}_{i-1}^{j+1}\mathcal{A}_{i}^{j+1}} ~
\frac{\mathcal{A}_{i+1}^{j}\mathcal{A}_{i-1}^{j+1}}{
\mathcal{A}_i^{j} \mathcal{A}_i^{j+1}} =
x_5 x_5 ~ \frac{\mathcal{D}^{k-1} \mathcal{D}^{k+1}}
{\mathcal{D}^k \mathcal{D}^{k}}
+ x_6 x_6,
$$
which equals
$$ x_2 x_3 A B = x_5^2 D + x_6^2$$
when $i-j \equiv 1 \mod 3$ and $k$ is even, and this is easily shown.
Lastly, recurrence (\ref{eq:kRec4}) is a variant of recurrence (\ref{eq:kRec2}) and the inductive hypothesis continues by an analogous verification: i.e.
$$x_5 x_1 \mathcal{A}_i^{j+1} \mathcal{A}_{i}^j \mathcal{D}^{k+1} \mathcal{D}^{k-1} =
x_3 x_3 \mathcal{A}_{i+1}^{j} \mathcal{A}_{i-1}^{j+1} \mathcal{D}^k \mathcal{D}^k
+ x_2 x_6 \mathcal{A}_{i}^{j} \mathcal{A}_{i}^{j+1} \mathcal{D}^k \mathcal{D}^k$$
which, by cross-multiplying, reduces to
$$x_5 x_1 \frac{\mathcal{D}^{k+1} \mathcal{D}^{k-1}}{\mathcal{D}^k \mathcal{D}^k } =
x_3 x_3 \frac{\mathcal{A}_{i+1}^{j} \mathcal{A}_{i-1}^{j+1}}{\mathcal{A}_i^{j+1} \mathcal{A}_{i}^j}
+ x_2 x_6 $$
and the algebraic fact that
$x_5x_1D = x_3^2 B + x_2 x_6$ (which is true when $i-j \equiv 1 \mod 3$ and $k$ is even).
We have demonstrated the validity of this formula for all cluster variables reachable by a generalized $\tau$-mutation sequence. However, because of Lemma \ref{lem:gentoric}, the set of such cluster variables is the same as the set of cluster variables reachable by a toric mutation sequence. Hence our proof is complete.
\end{proof}
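The purely algebraic identities for $D$ and $E$ used in the proof, together with the two auxiliary equalities invoked for recurrences (\ref{eq:kRec3}) and (\ref{eq:kRec4}), can be confirmed for generic rational values of $x_1, \dots, x_6$. An informal Python sketch, with $A$, $B$, $C$ defined by the exchange-relation formulas reproduced from the proof above (the sample values are arbitrary nonzero rationals):

```python
from fractions import Fraction

# Arbitrary nonzero rational sample values for x_1, ..., x_6.
x1, x2, x3, x4, x5, x6 = (Fraction(v) for v in (2, 3, 5, 7, 11, 13))

# A, B, C via the exchange relations appearing in the proof.
A = (x3 * x5 + x4 * x6) / (x1 * x2)
B = (x2 * x5 + x1 * x6) / (x3 * x4)
C = (x1 * x3 + x2 * x4) / (x5 * x6)

# The three expressions for D agree, and likewise for E.
D = (x2 ** 2 * A + x3 * x6) / (x4 * x5)
assert D == (x3 ** 2 * B + x2 * x6) / (x1 * x5)
assert D == (x6 ** 2 * C + x2 * x3) / (x1 * x4)

E = (x1 ** 2 * A + x4 * x5) / (x3 * x6)
assert E == (x4 ** 2 * B + x1 * x5) / (x2 * x6)
assert E == (x5 ** 2 * C + x1 * x4) / (x2 * x3)

# Auxiliary equalities used for recurrences (kRec3) and (kRec4).
assert x2 * x3 * A * B == x5 ** 2 * D + x6 ** 2
assert x1 * x5 * D == x3 ** 2 * B + x2 * x6
```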
\begin{remark} We later will give combinatorial formulas for $z_i^{j,k}$ which will provide alternative interpretations of the recurrences (\ref{eq:Rec1})-(\ref{eq:kRec5}) in terms of a technique known as Kuo's Graphical Condensation \cite{kuo1}.
\end{remark}
\section{Contours} \label{Sec:Contours}
In this section we describe a method for constructing subgraphs of the brane tiling $\mathcal{T}$ corresponding to the $dP_3$ Quiver. This construction is a variant and extension of the Dragon Regions defined in \cite{LaiNewDungeon}, as well as generalizing the Aztec Castles from \cite{LMNT}.
Given a $6$-tuple $(a,b,c,d,e,f) \in \mathbb{Z}^6$, we consider a {\bf (six-sided) contour} whose side-lengths are $a,b, \dots , f$ in clockwise order (starting from the Northeast corner). See the right-hand-side of Figure \ref{fig:hexagon}. In the case of a negative entry we draw the contour in the opposite direction for the associated side. By abuse of notation, we will refer to such entries as {\bf lengths} even when they are negative. Several qualitatively different contours are illustrated in Figure \ref{fig:contours}. Let $\mathcal{C}(a,b,c,d,e,f)$ denote the corresponding (six-sided) contour, which we abbreviate as {\bf contour} in the remainder of the paper.
\begin{figure}\centering
\includegraphics[width=12cm]{Branetiling.pdf}
\caption{Illustrating the contour $\mathcal{C}(a,b,c,d,e,f)$ in the case that all entries are positive.}
\label{fig:hexagon}
\end{figure}
\begin{figure}
\centering
\scalebox{0.9}{\includegraphics[keepaspectratio=true, width=150 mm]{Fig12.pdf}}
\caption{\small From left to right, the cases where $(a,b,c,d,e,f) = $ (1) $(+, -, + , +, -, +)$, (2) $(+, -, +, 0, -, +)$, (3) $(+, -, +, 0, -, -)$,
(4) $(+, -, + , +, -, -)$, (5) $(+, +, +, -, +, -)$, (6) $(+, -, +, -, +, -)$.}
\label{fig:contours}
\end{figure}
\begin{definition} [\textbf{Subgraphs $\widetilde{\mathcal{G}}(a,b,c,d,e,f)$ and $\mathcal{G}(a,b,c,d,e,f)$}]
\label{def:subgraphs} Suppose that the contour $\mathcal{C}(a,b,c,d,e,f)$ does not intersect itself. (See the last picture in Figure \ref{fig:contours} for an example of a contour with self-intersections.) Under this assumption, we use the contour $\mathcal{C}(a,b,c,d,e,f)$ to define two subgraphs, which are cut-out by the contour, by the following rules:
Step 1: The brane tiling $\mathcal{T}$ consists of a subdivided triangular lattice. We superimpose the contour $\mathcal{C}(a,b,c,d,e,f)$ on top of $\mathcal{T}$ so that its sides follow the lines of the triangular lattice, beginning the contour at a white vertex of degree $6$. In particular, sides $a$ and $d$ are tangent to faces $1$ and $2$, sides $b$ and $e$ are tangent to faces $5$ and $6$, and sides $c$ and $f$ are tangent to faces $3$ and $4$. We scale the contour so that a side of length $\pm 1$ traverses two edges of the brane tiling $\mathcal{T}$, and thus starts and ends at a white vertex of degree $6$ with no such white vertices in between.
Step 2: For any side of positive (resp. negative) length, we remove all black (resp. white) vertices along that side.
Step 3: A side of length zero corresponds to a single white vertex. If one of the adjacent sides is of negative length, then that white vertex is removed during step 2. On the other hand, if the side of length zero is adjacent to two sides of positive length, we keep the white vertex. However, as a special case, if we have three sides of length zero in a row, we instead remove that white vertex\footnote{The reader might wonder what happens if we have two adjacent sides of length zero (between two positive values) or four or more adjacent sides of length zero. As shown in Figures \ref{fig:decomposition} and \ref{fig:decomposition2}, for the subgraphs we care about in this paper, such a case cannot occur.}.
Step 4: We define $\widetilde{\mathcal{G}}(a,b,c,d,e,f)$ to be the resulting subgraph, which will contain a number of black vertices of valence one. After matching these up with the appropriate white vertices and continuing this process until no vertices of valence one are left, we obtain a simply-connected graph $\mathcal{G}(a,b,c,d,e,f)$ which we call the {\bf Core Subgraph}, following notation of \cite{BMPW}.
\end{definition}
\begin{remark}
In the case that $(a,b,c,d,e,f) = (+,-,+,+,-,\pm)$, these subgraphs (without face-labels) agree with the $DR^{(1)}(a,-b,c)$ dragon regions of \cite{LaiNewDungeon}. Similarly, $(a,b,c,d,e,f) = (-,+,-,-,+,\pm)$ agree with the $DR^{(2)}(-a,b,-c)$ dragon regions.
\end{remark}
\begin{remark} \label{Rem:LMNT}
In the case that $a+b+c = 0$ or $1$, negating $b$ and $e$ and then subtracting $1$ from each entry recovers the NE Aztec Castles in Section 3 of \cite{LMNT}. The SW Aztec Castles in Section 4 of \cite{LMNT} are obtained analogously, but using the opposite sign convention and a different translation.
In particular, using the notation of \cite{LMNT}, with $i, j \geq 0$,
\begin{eqnarray*}\gamma_i^j &=& \mathcal{G}(j, -i-j, i, j+1, -i-j-1, i+1), \mathrm{~and} \\
\widetilde{\gamma}_{-i}^{-j} &=& \mathcal{G}(-j+1, i+j, -i, -j, i+j+1, -i-1). \end{eqnarray*}
\end{remark}
\begin{example} \label{ex:0}
Here we provide examples of subgraphs arising from contours for each of the first five cases appearing in Figure \ref{fig:contours}.
The subgraphs illustrated are
(1) $\mathcal{G}(3, -4, 2, 2, -3, 1)$, (2) $\mathcal{G}(1, 1, 1, -4, 6, -4)$, (3) $\mathcal{G}(5,-6,4,0,-1,-1)$,
(4) $\mathcal{G}(6,-7,4,1,-2,-1)$, and (5) $\mathcal{G}(5,-8,6,0,-3,1)$, respectively.
\noindent See Figure \ref{fig:ex0}.
\end{example}
\begin{figure}
\includegraphics[width=5.3in]{Newexample1n.pdf}
\caption{The subgraphs associated to Example \ref{ex:0}.}
\label{fig:ex0}
\end{figure}
\begin{example} \label{ex:Dragons}
We can also produce Aztec Dragons \cite{perfect,CY,EnumPropp} in this notation.
Let $\sigma\mathcal{C}(a,b,c,d,e,f) = \mathcal{C}(a+1, b-1, c+1, d-1, e+1, f-1)$. We denote by
$\mathcal{C}_i^j$ the contour $\mathcal{C}(j, -i-j, i, j+1, -i-j-1, i+1)$.
For each $n \in \mathbb{Z}_{\geq 0}$,
\[D_{n + 1/2} = \mathcal{G}(\mathcal{C}_{-1}^{n+1}) = \mathcal{G}(n+1, -n, -1, n+2, -n-1, 0),\]
\[D_n = \mathcal{G}(\sigma\mathcal{C}_{0}^{n}) = \mathcal{G}(n+1, -n-1, 1, n, -n, 0).\]
See Figure \ref{fig:AztecDragons}.
As a special case of Definition \ref{def:6tuple}, we will see that the $D_{n}$'s and $D_{n+1/2}$'s arise from the $\tau$-mutation sequence $\tau_1\tau_2\tau_3\tau_1\tau_2\dots$ continued periodically.
This $\tau$-mutation sequence corresponds to a vertical translation in the square lattice $L^\Delta$.
\end{example}
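The two contour formulas in Example \ref{ex:Dragons} are simple substitutions into the definitions of $\mathcal{C}_i^j$ and $\sigma$ above; a quick informal Python check:

```python
# Expanding the Aztec Dragon contours of Example ex:Dragons.
def contour(i, j):
    """Side lengths of the contour C_i^j."""
    return (j, -i - j, i, j + 1, -i - j - 1, i + 1)

def sigma(c):
    """sigma shifts the side lengths by (+1, -1, +1, -1, +1, -1)."""
    return tuple(s + d for s, d in zip(c, (1, -1, 1, -1, 1, -1)))

for n in range(20):
    # D_{n+1/2} comes from the contour C_{-1}^{n+1}:
    assert contour(-1, n + 1) == (n + 1, -n, -1, n + 2, -n - 1, 0)
    # D_n comes from sigma applied to C_0^n:
    assert sigma(contour(0, n)) == (n + 1, -n - 1, 1, n, -n, 0)
```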
\begin{figure}
\includegraphics[width=5.3in]{Newexample2b.pdf}
\caption{The Aztec Dragon constructed as examples of $\mathcal{G}(a,b,c,d,e,f)$'s. See Example \ref{ex:Dragons}.}
\label{fig:AztecDragons}
\end{figure}
\begin{example} \label{ex:antiparallel}
Another possible contour arises when we have two sides of length zero in a row. At first glance, the two contours given below appear to be triangles with all sides of length $5$. However, these
are actually degenerate quadrilaterals where two adjacent sides happen to be anti-parallel. In particular, along one of the three sides of this contour, the pattern of including and excluding white and black vertices switches, signifying the invisible corner of an angle of $180^\circ$. Thus, when building subgraphs corresponding to a contour we note that
$\mathcal{G}(5,-5,5,0,0,0)$, $\mathcal{G}(5,-5,4,0,0,-1)$, and $\mathcal{G}(5,-5,5,3,0,-2)$ would all be different subgraphs of $\mathcal{T}$. We have illustrated
$\mathcal{G}(5,-5,3,0,0,-2)$ and $\mathcal{G}(-5,5,-2,0,0,3)$ in Figure \ref{fig:antiparallel}.
\end{example}
\begin{figure}
\includegraphics[width=5.3in]{Newexample3b.pdf}
\caption{Examples of graphs obtained from contours which are degenerate quadrilaterals, as described in Example \ref{ex:antiparallel}.}
\label{fig:antiparallel}
\end{figure}
\section{From Mutations to Subgraphs} \label{sec:subgraphs}
In this section, we come to our second main result, Theorem \ref{thm:main}, which provides a combinatorial interpretation for the Laurent polynomials $z_{i}^{j,k}$ defined in Theorem \ref{thm:explicit}. Combining this with Definition \ref{def:prism}, this yields a direct combinatorial interpretation for most cluster variables reachable by a toric mutation sequence (using Lemma \ref{lem:gentoric} and the geometry of the $\mathbb{Z}^3$ lattice described in Section \ref{sec:gentoric}).
Motivated by the definition of NE and SW Aztec Castles, see Remark \ref{Rem:LMNT}, we extend the definition to three dimensions as follows.
\begin{definition} \label{Def:LMNT}
For all $i,j \in \mathbb{Z}$, we let
$$\mathcal{C}_i^j \mathrm{~be~the~contour~} \mathcal{C}(j, -i-j, i, j+1, -i-j-1, i+1).$$ Recalling the map $\sigma$ from Example \ref{ex:Dragons}, we let $\sigma^k\mathcal{C} = \mathcal{C}(a+k, b-k, c+k, d-k, e+k, f-k)$ for $\mathcal{C} = \mathcal{C}(a,b,c,d,e,f)$.
Combining these, we let $$\mathcal{C}_{i}^{j,k} = \sigma^k \mathcal{C}_i^j = \mathcal{C}(j+k, -i-j-k, i+k, j+1-k, -i-j-1+k, i+1-k).$$
\end{definition}
\begin{remark}In particular, note that $\sigma\mathcal{C}_i^j = \mathcal{C}(j+1, -i-j-1, i+1,j, -i-j, i)$. This agrees with the fact that $\sigma \gamma_i^j$, was defined as $\gamma_i^j$ after $180^\circ$ rotation in \cite{LMNT}. Furthermore, the SW Aztec Castles of \cite{LMNT} can
be described as $\widetilde{\gamma}_{-i}^{-j}~~=~~ \sigma\mathcal{C}_{-i-1}^{-j}$ and $\sigma\widetilde{\gamma}_{-i}^{-j}~~=~~ \mathcal{C}_{-i-1}^{-j}.$
\end{remark}
Definition \ref{Def:LMNT} motivates us to define the map $\phi: \mathbb{Z}^3 \to \mathbb{Z}^6$ given by $\mathcal{C}_i^{j,k} = \mathcal{C}(a,b,c,d,e,f)$. In other words,
$$a = j + k, ~~~b = -i-j-k, ~~~ c= i+k, ~~~ d = j-k +1, ~~~ e = -i-j+k -1, ~~~ f = i - k +1.$$
\begin{lemma} \label{lem:closeup}
Out of all possible contours $\mathcal{C}(a,b,c,d,e,f)$, those which are of the form $\mathcal{C}_i^{j,k} = \mathcal{C}(j+k, -i-j-k, i+k, j+1-k, -i-j-1+k, i+1-k)$ actually comprise all contours that (i) close up; and (ii) either have a self-intersection or yield a subgraph such that the number of black vertices equals the number of white vertices.
\end{lemma}
\begin{proof}
First of all, since the plane is two-dimensional, a contour must satisfy
$$a+b = d+e \mathrm{~~~and~~~} c+d = f + a$$
if it is going to close up. These two relations also imply the equivalent third relation $b +c = e+f$. Picking two of these three relations already restricts us from $\mathbb{Z}^6$ to $\mathbb{Z}^4$ (calculating the Smith normal form can be used to ensure that these relations do not introduce any torsion elements). To reduce all the way to $\mathbb{Z}^3$, the balancing of the vertex colors yields the linearly independent relation
$$a+b+c+d+e+f=1$$ as we show now (again, no torsion is introduced by including this relation).
Firstly, note that a single triangle has three white vertices and three black vertices on the boundary, but an extra white vertex at its center.
We inductively build our contour by attaching a triangle to our shape along one or two sides. Either way, this does not alter the difference between the number of white vertices and black vertices.
However, after building the full contour (which has one extra white vertex), we then remove vertices from the boundary as instructed by Definition \ref{def:subgraphs}.
Every side with positive length $p$ forces us to remove $p$ black vertices. Every segment with total negative length $-n$ (by a segment we mean at least one, but possibly multiple consecutive sides, all of negative length with sides of positive length on either side) forces us to remove $(n+1)$ white vertices.
For the purposes of this proof, note that a segment of two or more zeros in a row between two sides of positive length also counts as a negative segment and would lead to removing exactly one white vertex. (As indicated in Definition \ref{def:subgraphs}, this can only happen with three zeros in a row and such patterns only occur in contours corresponding to initial cluster variables). On the other hand, a single side of length zero or a segment of zeroes adjacent to at least one side of negative length does not lead to the removal of any vertices of either color.
By considering all possible sign patterns without self-intersecting contours, see Figures \ref{fig:decomposition} and \ref{fig:decomposition2}, we see that there are always exactly two negative segments and hence we are removing
$2-a-b-c-d-e-f$ more white vertices than black vertices leading to a color-balanced subgraph if and only if we have $a+b+c+d+e+f=1$.
\end{proof}
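As a check on the computation in the proof, the entries of $\mathcal{C}_i^{j,k} = \phi(i,j,k)$ satisfy the two closing-up relations and the balance relation identically; a short informal Python sketch:

```python
# phi sends (i,j,k) to the side lengths of the contour C_i^{j,k}.
def phi(i, j, k):
    return (j + k, -i - j - k, i + k,
            j + 1 - k, -i - j - 1 + k, i + 1 - k)

# Every contour C_i^{j,k} closes up and is color-balanced:
for i in range(-10, 11):
    for j in range(-10, 11):
        for k in range(-10, 11):
            a, b, c, d, e, f = phi(i, j, k)
            assert a + b == d + e              # closing-up relation
            assert c + d == f + a              # closing-up relation
            assert b + c == e + f              # implied third relation
            assert a + b + c + d + e + f == 1  # color balance
```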
\subsection{Possible shapes of Aztec Castles} \label{sec:shapes}
Since we have just shown that the contours $\mathcal{C}_i^{j,k}$ are of the most general form for our combinatorial purposes (i.e. we want subgraphs which have perfect matchings), we next describe the possible shapes of these contours. In this section we work directly with the six-tuples $(a,b,c,d,e,f)$ rather than the contours $\mathcal{C}(a,b,c,d,e,f)$ themselves. By direct computation we arrive at the following description: there are $32$ possible sign-patterns (excluding those with zeroes), organized into orbits of size $1$ or $6$ under the action of the twisted rotation $\theta : (a,b,c,d,e,f) \to (-f, -a, -b, -c, -d, -e)$. When $k \geq 1$ (resp. $k \leq 0$), we get only $19$ of these sign-patterns, $6$ of which are possible regardless of $k$.
1) First Possibility: $(a,b,c,d,e,f) = (+, -, -, +, -, -)$, $(-, +, +, -, +, +)$ or a cyclic rotation of one of these.
See Figure \ref{fig:shapes2}.
2) Second Possibility: $(a,b,c,d,e,f) = (+, -, -, +, +, -)$, $(-, +, +, -, -, +)$ or a cyclic rotation. See Figure \ref{fig:shapes1}.
3) Third Possibility: $(a,b,c,d,e,f) = (+, -, -, -, +, -)$, $(-, +, +, +, -, +)$, or a cyclic rotation. See Figure \ref{fig:shapes3}.
4) Lastly, there are degenerate cases which are a combination of two of these possibilities where one or more of the sides have length zero.
5) Six-tuples of the form $(a,b,c,d,e,f) = (+, -,+, -, +,-)$ or $(-, +, -, +, -, +)$ also appear, see Figure \ref{fig:shapes4}, but these always correspond to self-intersecting contours, which we do not give a combinatorial interpretation for in this paper. This leaves a question for future work. See Problem \ref{prob:self-int} in Section \ref{sec:open}.
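This census of sign-patterns can be reproduced by brute force: enumerating the sign-patterns of $\phi(i,j,k)$ (ignoring tuples with zero entries) over a large box recovers exactly the $32$ patterns of Possibilities 1, 2, 3, and 5. A hedged Python sketch (the box radius $20$ is a heuristic choice, large enough for the bounded regions to contain lattice points):

```python
# Brute-force census of the sign-patterns of phi(i,j,k).
def phi(i, j, k):
    return (j + k, -i - j - k, i + k,
            j + 1 - k, -i - j - 1 + k, i + 1 - k)

def rotations(p):
    return {p[r:] + p[:r] for r in range(6)}

# Base patterns from Possibilities 1, 2, 3, and 5 (+1 for +, -1 for -).
base = [(1, -1, -1, 1, -1, -1), (-1, 1, 1, -1, 1, 1),
        (1, -1, -1, 1, 1, -1), (-1, 1, 1, -1, -1, 1),
        (1, -1, -1, -1, 1, -1), (-1, 1, 1, 1, -1, 1),
        (1, -1, 1, -1, 1, -1), (-1, 1, -1, 1, -1, 1)]
expected = set().union(*(rotations(p) for p in base))
assert len(expected) == 32  # orbit sizes 3+3+6+6+6+6+1+1

found = set()
N = 20
for i in range(-N, N + 1):
    for j in range(-N, N + 1):
        for k in range(-N, N + 1):
            t = phi(i, j, k)
            if 0 not in t:
                found.add(tuple(1 if s > 0 else -1 for s in t))
assert found == expected  # exactly the 32 listed patterns occur
```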
\begin{figure}
\includegraphics[width=5.5in]{Shapes2.pdf}
\caption{Six unbounded sign-unbalanced regions.}
\label{fig:shapes2}
\end{figure}
\begin{figure}
\includegraphics[width=5in]{Shapes1.pdf}
\caption{Six unbounded sign-balanced regions.}
\label{fig:shapes1}
\end{figure}
\begin{figure}
\includegraphics[width=5in]{Shapes3.pdf}
\caption{Six bounded sign-unbalanced regions.}
\label{fig:shapes3}
\end{figure}
\begin{figure}
\includegraphics[width=2in]{Shapes4.pdf}
\caption{One bounded sign-balanced region with the two types of self-intersections shown.}
\label{fig:shapes4}
\end{figure}
\begin{remark}
Figures \ref{fig:decomposition} and \ref{fig:decomposition2} illustrate how the map $\phi$ sends $(i,j,k)$ to these six-tuples. For a fixed $k$, the map $\theta$ acts as $60^{\circ}$ clockwise rotation of the $(i,j)$-plane. These two-dimensional cross-sections motivate our terminology of unbounded and bounded regions for the various sign-patterns.
\end{remark}
\begin{remark}
If $ -2 \leq k \leq 3$, these bounded regions shrink to a point and some of the unbounded regions shrink to a line as there are no non-degenerate examples of those particular sign patterns. This is why previous work \cite{LMNT}, which corresponded to the $k=0$ and $k=1$ cases, involved only unbounded regions. See Figure 5 of \cite{LMNT}. The decompositions for $k\leq -3$ and $k\geq 4$ are analogous to one another except for applying the negation map $(a,b,c,d,e,f) \to (-a,-b,-c,-d,-e,-f)$ everywhere.
\end{remark}
\begin{figure}
\includegraphics[width=5in]{Decompose.pdf}
\caption{Possible sign-patterns for a fixed $k \geq 1$, where the six lines illustrate the $(i,j)$-coordinates so that one of the elements of the $6$-tuple equals zero.}
\label{fig:decomposition}
\end{figure}
\begin{figure}
\includegraphics[width=5in]{Decompose2.pdf}
\caption{Possible sign-patterns for a fixed $k \leq 0$. Observe that the uncolored unbounded regions correspond to the same sign patterns as they did in the $k \geq 1$ case.}
\label{fig:decomposition2}
\end{figure}
\subsection{Combinatorial Interpretation of Ordered Clusters}\label{sec:comb}
In this section, we describe the weighting scheme that yields Laurent polynomials from the subgraphs defined in Section \ref{sec:subgraphs}. In summary, given a toric mutation sequence $S$, we obtain a specific prism (or a well-defined deformation of a prism) in $\mathbb{Z}^3$, which corresponds to an ordered cluster $Z^S$. Then each such lattice point $(i,j,k)$ corresponds via the map $\phi$ to a $6$-tuple $(a,b,c,d,e,f)$. Subsequently, the contour $\mathcal{C}_i^{j,k} = \mathcal{C}(a,b,c,d,e,f)$ yields a subgraph\footnote{As mentioned in Definition \ref{def:subgraphs}, this construction makes sense as long as $\mathcal{C}(a,b,c,d,e,f)$ has no self-intersections.} $\mathcal{G}(\mathcal{C}_i^{j,k}) = \mathcal{G}(a,b,c,d,e,f)$.
Finally, the weighting scheme leads to a Laurent polynomial $z(a,b,c,d,e,f)$ from $\mathcal{G}(a,b,c,d,e,f)$. The main theorem of this section is that $z(a,b,c,d,e,f) = z_{i}^{j,k}$, the cluster variable in cluster $Z^S$ reached via the toric mutation sequence $S$.
We will adopt the weighting scheme on the brane tiling utilized in \cite{zhang}, \cite{speyer}, \cite{GK}, and \cite{LMNT}. Associate the \textbf{weight} $\frac{1}{x_i x_j}$ to each edge bordering faces labeled $i$ and $j$ in the brane tiling. Let $\mathcal{M}(G)$ denote the set of perfect matchings of a subgraph $G$ of the brane tiling. We define the weight $w(M)$ of a perfect matching $M$ in the usual manner as the product of the weights of the edges included in the matching under the weighting scheme. Then we define the weight of $G$ as
\[
w(G) = \sum_{M \in \mathcal{M}(G)}w(M).
\]
We also define the \textbf{covering monomial}, $m(G)$, of any graph $G=\mathcal{G}(a,b,c,d,e,f)$, resulting from a contour without self-intersection, as follows.
\begin{definition} \label{def:covmon} First we define the covering monomial $m(\widetilde{G})$ of the graph $\widetilde{G}=\widetilde{\mathcal{G}}(a,b,c,d,e,f)$ as the product $x_1^{a_1}x_2^{a_2}x_3^{a_3}x_4^{a_4}x_5^{a_5}x_6^{a_6}$, where $a_j$ is the number of faces labeled $j$ restricted inside the contour $\mathcal{C}(a,b,c,d,e,f)$. Consider the edges in $\widetilde{G}$ that are adjacent to a valence one black vertex (these edges are not in $G$). Let $b_i$ be the number of faces labeled $i$ adjacent to such a forced edge. We now define the covering monomial, $m(G)$, of $G$ as the product $x_1^{a_1-b_1}x_2^{a_2-b_2}x_3^{a_3-b_3}x_4^{a_4-b_4}x_5^{a_5-b_5}x_6^{a_6-b_6}$.
Figure \ref{fig: cov mon} illustrates an example of the quadrilaterals included in the covering monomial of a small subgraph, outlined in red.
\end{definition}
\begin{figure}[h]
\centering
\scalebox{0.75}{\includegraphics[keepaspectratio=true, width=50 mm]{covmon2.pdf}}
\caption{\small The covering monomial of $\mathcal{G}(2,-2,1,1,-1,0) \subset \widetilde{\mathcal{G}}(2,-2,1,1,-1,0)$ outlined in red includes the blue quadrilaterals and is given by $x_{1}x_{2}x_{3}x_{4}x_{5}^{3}x_{6}^{2}$.} \label{fig: cov mon}
\end{figure}
Finally, to make notation more concise in later proofs, it will be useful to define the product of the covering monomial and weight of a subgraph $G$ as
\begin{equation*}
c(G) = w(G)m(G).
\end{equation*}
Similarly, we define $c(\widetilde{G})=w(\widetilde{G})m(\widetilde{G})$.
\begin{remark} \label{rem:cov}
We have $$c(\mathcal{G}(a,b,c,d,e,f)) = c(\widetilde{\mathcal{G}}(a,b,c,d,e,f))$$
for any contours without self-intersections. Indeed, besides containing a larger set of faces, the difference between the perfect matchings of $\mathcal{G}(a,b,c,d,e,f)$ and $\widetilde{\mathcal{G}}(a,b,c,d,e,f)$ consists of the edges forced into every perfect matching by being adjacent to a valence one black vertex. On the other hand, the contribution to the covering monomial of $\widetilde{\mathcal{G}}(a,b,c,d,e,f)$ from the faces incident to these edges balances out the weight of these edges. The equality then follows.
\end{remark}
\begin{definition} [\bf Contour $6$-tuple] \label{def:6tuple}
Given a toric mutation sequence $S$, we may obtain a corresponding six-tuple of points
$$[(i_1,j_1,k_1),(i_2,j_2,k_2),(i_3,j_3,k_3),(i_4,j_4,k_4),(i_5,j_5,k_5),(i_6,j_6,k_6)]$$
in $\mathbb{Z}^3$ starting from the initial prism
{\footnotesize $[(0,-1,1), (0,-1,0), (-1,0,0), (-1,0,1), (0,0,1), (0,0,0)]$} and using the lattice moves illustrated in Figures \ref{fig:ModelII}, \ref{fig:ModelIII}, and \ref{fig:ModelsGeom}\footnote{In particular, when $S$ is specifically a generalized $\tau$-mutation sequence, this six-tuple of points is the prism $\Delta^S$ defined in Definition \ref{def:prism}.}.
We use these six $\mathbb{Z}^3$-points to define a corresponding $6$-tuple of contours, which we denote as $\mathcal{C}^S$:
$$\mathcal{C}^{S} =
[\mathcal{C}_{i_1}^{j_1,k_1}, \mathcal{C}_{i_2}^{j_2,k_2}, \mathcal{C}_{i_3}^{j_3,k_3},
\mathcal{C}_{i_4}^{j_4,k_4}, \mathcal{C}_{i_5}^{j_5,k_5}, \mathcal{C}_{i_6}^{j_6,k_6}]$$
using $\mathcal{C}_i^{j,k}= \mathcal{C}(j+k, -i-j-k, i+k, j+1-k, -i-j-1+k, i+1-k)$ from Definition \ref{Def:LMNT}.
\end{definition}
In particular, notice that by Definition \ref{def:6tuple}, $\mathcal{C}^\emptyset = [\mathcal{C}_0^{-1,1}, \mathcal{C}_0^{-1,0}, \mathcal{C}_{-1}^{0,0}, \mathcal{C}_{-1}^{0,1}, \mathcal{C}_0^{0,1}, \mathcal{C}_0^{0,0}]$, which we abbreviate as $[C_1, C_2, C_3, C_4, C_5, C_6],$ and refer to as the initial contours.
\begin{eqnarray}
\label{contours1} C_1 = \mathcal{C}(0,0,1,-1,1,0), ~~C_2 = \mathcal{C}(-1,1,0,0,0,1), \\
\label{contours2} C_3 = \mathcal{C}(0,1,-1,1,0,0), ~~C_4 = \mathcal{C}(1,0,0,0,1,-1), \\
\label{contours3} C_5 = \mathcal{C}(1,-1,1,0,0,0), ~~C_6 = \mathcal{C}(0,0,0,1,-1,1).
\end{eqnarray}
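For concreteness, the coordinate map and the initial contours can be checked mechanically. The following Python sketch (ours, independent of the paper's accompanying code) implements the formula $\mathcal{C}_i^{j,k} = \mathcal{C}(j+k, -i-j-k, i+k, j+1-k, -i-j-1+k, i+1-k)$ from Definition \ref{Def:LMNT} and recovers (\ref{contours1})--(\ref{contours3}) from the initial prism:

```python
# Contour six-tuple of a lattice point (i,j,k), per Definition "Def:LMNT":
# C_i^{j,k} = C(j+k, -i-j-k, i+k, j+1-k, -i-j-1+k, i+1-k).
def phi(i, j, k):
    return (j + k, -i - j - k, i + k, j + 1 - k, -i - j - 1 + k, i + 1 - k)

# The initial prism of Definition "def:6tuple" yields the initial contours C_1..C_6.
initial_prism = [(0, -1, 1), (0, -1, 0), (-1, 0, 0),
                 (-1, 0, 1), (0, 0, 1), (0, 0, 0)]
initial_contours = [phi(*p) for p in initial_prism]
for C in initial_contours:
    print(C)  # (0,0,1,-1,1,0), (-1,1,0,0,0,1), ..., (0,0,0,1,-1,1)

# Note the invariant a+b+c+d+e+f = 1, immediate from the defining formula.
assert all(sum(C) == 1 for C in initial_contours)
```

The printed tuples match $C_1, \dots, C_6$ above in order.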
\begin{theorem} \label{thm:main}
Let $S$ be a toric mutation sequence. Build the $6$-tuple of contours $\mathcal{C}^S$ as in Definition \ref{def:6tuple}. Then use these six contours to build six subgraphs of $\mathcal{T}$ as in Section \ref{Sec:Contours}, i.e.
$\mathcal{G}^S_i = \mathcal{G}(a,b,c,d,e,f)$ if $\mathcal{C}^S_i = \mathcal{C}(a,b,c,d,e,f)$ for $1 \leq i \leq 6$.
If none of the contours have self-intersections, then the cluster obtained from the toric mutation sequence $S$ is
$$Z^S = [ c(\mathcal{G}^S_1), c(\mathcal{G}^S_2), c(\mathcal{G}^S_3), c(\mathcal{G}^S_4), c(\mathcal{G}^S_5),
c(\mathcal{G}^S_6)].$$
\end{theorem}
We prove this Theorem in Section \ref{Sec:proof}. Note that it is sufficient to prove the result when $S$ is a generalized $\tau$-mutation sequence, due to
Lemma \ref{lem:gentoric}.
By specializing $x_1=x_2=\dots=x_6=1$, Theorems \ref{thm:explicit} and \ref{thm:main} imply that the (unweighted) numbers of perfect matchings of our subgraphs are always products of a power of $2$ and a power of $3$. This recovers the first author's result on tilings of the Dragon Regions in \cite[Theorem 2]{LaiNewDungeon}. In other words, our main results provide a $6$-parameter refinement of Theorem 2 in \cite{LaiNewDungeon}.
\begin{example} \label{ex:1}
As an illustration of Theorem \ref{thm:main}, we consider the generalized $\tau$-mutation sequence
$S= \tau_1\tau_2\tau_3\tau_1\tau_2\tau_3 \tau_2 \tau_1 \tau_4$. Letting $S_1 = \tau_1\tau_2\tau_3\tau_1\tau_2\tau_3\tau_2\tau_1$, and applying the corresponding alcove walk, we reach the triangle
$\{(1,3),(1,2),(0,3)\}$ in $L^\Delta$, written in order using $I-J \mod 3$. We then apply $\tau_4$ to get
$$\mathcal{C}^S = [ \mathcal{C}_1^{3,-1}, \mathcal{C}_1^{3,0}, \mathcal{C}_1^{2,0}, \mathcal{C}_1^{2,-1},
\mathcal{C}_0^{3,-1}, \mathcal{C}_0^{3,0}], \mathrm{~which~equals~}$$
$$[\mathcal{C}(2, -3, 0, 5, -6, 3), \mathcal{C}(3, -4, 1, 4, -5, 2), \mathcal{C}(2, -3, 1, 3, -4, 2),$$
$$\hspace{3em}\mathcal{C}(1, -2, 0, 4, -5, 3), \mathcal{C}(2, -2, -1, 5, -5, 2), \mathcal{C}(3, -3, 0, 4, -4, 1)].$$
These six contours respectively correspond to the six subgraphs appearing in Figure \ref{fig:ex1}. The associated cluster variables are
$$\frac{(x_1x_3 + x_2x_4)^{4}(x_2x_5 + x_1x_6)^{6}(x_3x_5 + x_4x_6)^{7}(x_1x_3x_6 + x_2x_3x_5 + x_2x_4x_6)}{x_1^{7}x_2^{7}x_3^{6}x_4^{7}x_5^{5}x_6^{4}},$$
$$\frac{(x_1x_3 + x_2x_4)^{4}(x_2x_5 + x_1x_6)^{6}(x_3x_5 + x_4x_6)^{7}}{x_1^{7}x_2^{6}x_3^{6}x_4^{6}x_5^{4}x_6^{4}},$$
$$\frac{(x_1x_3 + x_2x_4)^{2}(x_2x_5 + x_1x_6)^{4}(x_3x_5 + x_4x_6)^{4}}{x_1^{4}x_2^{4}x_3^{3}x_4^{4}x_5^{2}x_6^{2}},$$
$$\frac{(x_1x_3 + x_2x_4)^{2}(x_2x_5 + x_1x_6)^{4}(x_3x_5 + x_4x_6)^{4}(x_1x_3x_6 + x_2x_3x_5 + x_2x_4x_6)}{x_1^{5}x_2^{4}x_3^{4}x_4^{4}x_5^{3}x_6^{2}},$$
$$\frac{(x_1x_3 + x_2x_4)^{3}(x_2x_5 + x_1x_6)^{4}(x_3x_5 + x_4x_6)^{5}(x_1x_3x_6 + x_2x_3x_5 + x_2x_4x_6)}{x_1^{6}x_2^{5}x_3^{4}x_4^{5}x_5^{3}x_6^{3}}, \mathrm{~and~}$$
$$\frac{(x_1x_3 + x_2x_4)^{3}(x_2x_5 + x_1x_6)^{4}(x_3x_5 + x_4x_6)^{5}}{x_1^{5}x_2^{5}x_3^{4}x_4^{4}x_5^{3}x_6^{2}}$$
containing 393216, 131072, 1024, 3072, 12288, and 4096 terms, respectively.
\end{example}
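Since the numerators above are products of three binomial factors and at most one trinomial factor, and the stated term counts show that no like-term collisions occur upon expansion, each count is simply $2^{e_1+e_2+e_3} \cdot 3^{e_4}$, where the $e_i$ are the exponents of the four factors. A quick numerical check (an illustrative sketch of ours):

```python
# Term counts for Example "ex:1": each binomial factor doubles the number of
# terms and the trinomial factor triples it, assuming no like-term collisions.
# (sum of binomial exponents, number of trinomial factors) per cluster variable:
exponents = [(4 + 6 + 7, 1), (4 + 6 + 7, 0), (2 + 4 + 4, 0),
             (2 + 4 + 4, 1), (3 + 4 + 5, 1), (3 + 4 + 5, 0)]
counts = [2**b * 3**t for b, t in exponents]
print(counts)  # [393216, 131072, 1024, 3072, 12288, 4096]
```

This agrees with the six term counts stated in the example, illustrating the power-of-$2$-times-power-of-$3$ phenomenon noted after Theorem \ref{thm:main}.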
\begin{figure}
\includegraphics[width=5in]{6contour1.pdf}
\caption{The subgraphs obtained from the generalized $\tau$-mutation sequence of Example \ref{ex:1}.}
\label{fig:ex1}
\end{figure}
\begin{example} \label{ex:2}
As another example, we consider the generalized $\tau$-mutation sequence given by $S=\tau_1\tau_2\tau_3\tau_1 \tau_3\tau_2\tau_1 \tau_4\tau_5$.
We have $S_1 = \tau_1\tau_2\tau_3\tau_1 \tau_3\tau_2\tau_1$. Following the corresponding alcove walk, we reach the triangle $\{(2,1),(1,3),(1,1)\}$ in $L^\Delta$. Applying $\tau_4$ yields \[[ \mathcal{C}_2^{1,-1}, \mathcal{C}_2^{1,0}, \mathcal{C}_1^{2,0}, \mathcal{C}_1^{2,-1},
\mathcal{C}_1^{1,-1}, \mathcal{C}_1^{1,0}],\] and $\tau_5$ afterwards implies
$$[\mathcal{C}(0, -2, 1, 3, -5, 4), \mathcal{C}(-1, -1 , 0, 4, -6, 5), \mathcal{C}(0, -1, -1, 5, -6, 4),$$
$$\hspace{3em}\mathcal{C}(1, -2, 0, 4, -5, 3), \mathcal{C}(0, -1, 0, 3, -4, 3), \mathcal{C}(-1, 0, -1, 4, -5, 4)].$$
These six contours correspond to the six subgraphs of Figure \ref{fig:ex2}, which have 3072, 27648, 27648, 3072, 96, and 864 perfect matchings, respectively. Again this matches the number of terms in the corresponding cluster variables, which we have omitted due to their size.
\end{example}
\begin{figure}
\includegraphics[width=5in]{6contour2.pdf}
\caption{The subgraphs obtained from the generalized $\tau$-mutation sequence of Example \ref{ex:2}.}
\label{fig:ex2}
\end{figure}
\section{Preparations for the Proof of Theorem \ref{thm:main}} \label{sec:directions}
In the proof of Theorem \ref{thm:explicit} in Section 3, we used three types of recurrences, namely (\ref{eq:Rec1}), (\ref{eq:kRec2}), and (\ref{eq:kRec3}). These recurrences correspond to the lattice moves in Figure \ref{fig:ModelIII}. We call them (R4), (R1), and (R2), respectively, using the notation of \cite{LaiNewDungeon}.
The $(R4)$ recurrences correspond to replacing the cluster variable $z_i^{j,k}$ with one of twelve possibilities: $z_{i-1}^{j+2, k \pm 1}$, $z_{i+1}^{j-2,k \pm 1}$, $z_{i+2}^{j-1, k\pm 1}$, $z_{i-2}^{j+1, k\pm 1}$,
$z_{i-1}^{j-1, k\pm 1}$, or $z_{i+1}^{j+1, k\pm 1}$. In terms of the $6$-tuples, this corresponds to replacing $(a,b,c,d,e,f)$ with
$$(a+1, b, c-2, d+3, e-2, f)$$ or a cyclic rotation or negation of this transformation.
The $(R1)$ recurrences correspond to replacing the cluster variable $z_i^{j,k}$ with
$z_{i+1}^{j,k\pm 2}$, $z_{i-1}^{j,k\pm 2}$, $z_{i}^{ j+1, k\pm 2}$, $z_{i}^{ j-1, k \pm 2}$, $z_{i+1}^{j-1, k \pm 2}$ or $z_{i-1}^{j+1, k\pm 2}$. In terms of $6$-tuples, the transformation is a cyclic rotation or negation of $$(a-2, b+3, c-3, d+2, e-1, f+1).$$
Finally, the $(R2)$ recurrences correspond to replacing the cluster variable $z_i^{j,k}$ with
$z_{i + 2}^{j, k}$, $z_{i - 2}^{j, k}$, $z_{i}^{j+2, k}$, $z_{i}^{j-2, k}$, $z_{i + 2}^{j-2, k}$, or $z_{i - 2}^{j+2, k}$. This transformation is a cyclic rotation or negation of $$(a-2,b+2,c,d-2,e+2,f).$$ Notice that in this case, there are only three distinct cyclic rotations.
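The thirty moves can be enumerated explicitly; the following sketch (ours) generates all cyclic rotations and negations of the three base $6$-tuples and confirms the counts $12 + 12 + 6 = 30$:

```python
# All cyclic rotations and negations of a 6-tuple.
def orbit(t):
    rotations = {t[i:] + t[:i] for i in range(6)}
    return rotations | {tuple(-x for x in r) for r in rotations}

R4 = orbit((1, 0, -2, 3, -2, 0))    # 12 distinct moves
R1 = orbit((-2, 3, -3, 2, -1, 1))   # 12 distinct moves
R2 = orbit((-2, 2, 0, -2, 2, 0))    # only 6: the tuple has period 3
print(len(R4), len(R1), len(R2), len(R4 | R1 | R2))  # 12 12 6 30
```

The three orbits are pairwise disjoint, so the union has exactly $30$ elements, matching the count of toric mutation moves discussed below.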
Comparing with the geometry in Section \ref{sec:gentoric}, any toric mutation corresponds to one of these transformations. To recover the associated binomial exchange relations, we build a (possibly degenerate) octahedron in $\mathbb{Z}^3$ using the lattice points $(i,j,k)$ and $(i',j',k')$ as its antipodes. See Figure \ref{fig:degoct}.
\vspace{2em}
\begin{figure}
\includegraphics[width=12cm]{octahedral2.pdf}
\caption{Examples of (possibly degenerate) octahedra in $\mathbb{Z}^3$ induced by the lattice points $(i,j,k)$ and $(i',j',k')$.}
\label{fig:degoct}
\end{figure}
These algebraic recurrences agree with the three-term recurrences that appear in Eric Kuo's theory of graphical condensation \cite{kuo1,kuo2}. This point of view is used in Section \ref{Sec:proof} to present the proof of Theorem \ref{thm:main}, which links the algebraic expressions of cluster variables as Laurent polynomials to a combinatorial interpretation as partition functions of perfect matchings of certain graphs. Towards this end, we introduce a way to identify six special points in the graph $\mathcal{G}(a,b,c,d,e,f)$ and show how to use them to build related subgraphs.
For the sides $a,b,c,d,e,f$, we define respectively the points $A,B,C,D,E,F$ as follows. The region restricted by the contour $\mathcal{C}(a,b,c,d,e,f)$ can be partitioned into equilateral triangles consisting of the faces $1, 4, 5$ or of the faces $2,3,6$. There are $|a|$ such triangles with bases resting on the side $a$ (see the shaded triangles in Figures \ref{fig:part1}, \ref{fig:part2}, and \ref{fig:part3}). If $a$ is positive, the point $A$ can be any of the $|a|$ big black vertices located at the center of the near side of a shaded triangle as we follow the direction of the side $a$ (see the first picture of Figure \ref{fig:part1}). If $a$ is negative, we can pick $A$ as any of the $|a|$ big white vertices (that are the tops of the shaded triangles with bases resting on the side $a$, as in the first picture of Figure \ref{fig:part3}). Similarly, we define the points $B,C,D,E,F$, based on Figures \ref{fig:part1}, \ref{fig:part2}, and \ref{fig:part3}.
\begin{figure}
\includegraphics[width=4in]{Triangulation.pdf}
\caption{How we pick the points $A,B,C,D,E,F$ in the case when $(a,b,c,d,e,f) = (+,-,+, +, -, +)$.}
\label{fig:part1}
\end{figure}
\begin{figure}
\includegraphics[width=4in]{Triangulation2.pdf}
\caption{How we pick the points $A,B,C,D,E,F$ in the case when $(a,b,c,d,e,f) = (+,+,-, +, +, -)$.}
\label{fig:part2}
\end{figure}
\begin{figure}
\includegraphics[width=5in]{Triangulation3.pdf}
\caption{How we pick the points $A,B,C,D,E,F$ in the case when $(a,b,c,d,e,f) = (-,+,+, -, +, +)$.}
\label{fig:part3}
\end{figure}
\begin{lemma} \label{claim:signtuple}
Given a subgraph $G$ of the $dP_3$ lattice corresponding to the contour\\ $\mathcal{C}(a,b,c,d,e,f)$ which contains the point $A$, the subgraph $G-\{A\}$ corresponds to the contour $\mathcal{C}(a-1, b+1, c, d, e, f+1)$ (resp. $\mathcal{C}(a+1, b-1, c, d, e, f-1)$) if the point $A$ is black (resp. white). Analogous results hold for points $B$, $C$, $D$, $E$, and $F$ up to cyclic rotations.
\end{lemma}
\begin{proof}
If $A$ is black, the removal of $A$ from $G$ yields several edges which are forced in any perfect matching of the resulting graph. By removing these forced edges, we obtain a subgraph which coincides with the graph associated to the contour $\mathcal{C}(a-1, b+1, c, d, e, f+1)$ (see the first picture in Figure \ref{fig:forced}).
If $A$ is white, the removal of $A$ also yields several forced edges. By removing these forced edges, we get the graph associated to the contour $\mathcal{C}(a+1, b-1, c, d, e, f-1)$ (see the second picture in Figure \ref{fig:forced}). The removal of the forced edges also yields the removal of the trapezoid consisting of $2|a|-1$ equilateral triangles along the side $a$.
The arguments for $B$, $C$, $D$, $E$, and $F$ can be obtained by an analogous manner, based on Figures \ref{fig:forced} and \ref{fig:forced2}. Figure \ref{fig:forced3} illustrates that the same procedure still works even in the degenerate case when some of the sides are of length $\pm 1$.
\end{proof}
\begin{figure}\centering
\includegraphics[width=4.5in]{forced.pdf}
\caption{Removal of $A$, $B$ and $C$ yields forced edges.}
\label{fig:forced}
\end{figure}
\begin{figure}
\includegraphics[width=4.5in]{forced2.pdf}
\caption{Removal of $D$, $E$ and $F$ yields forced edges.}
\label{fig:forced2}
\end{figure}
\begin{figure}
\includegraphics[width=5in]{forced3.pdf}
\caption{Removal of $A,B,C,D,E,F$ when some sides equal $\pm 1$.}
\label{fig:forced3}
\end{figure}
Kuo condensation was first used by Eric H. Kuo \cite{kuo1} to (re)prove the Aztec diamond theorem of Elkies, Kuperberg, Larsen, and Propp \cite{Elkies}. Kuo condensation can be considered as a combinatorial interpretation of Dodgson condensation (or the Jacobi-Desnanot identity; see e.g.\ \cite{Mui}, pp.~136--149) on determinants of matrices. See e.g.\ \cite{YZ}, \cite{kuo2}, \cite{Ciucu}, \cite{Ful}, \cite{speyer} for different aspects and generalizations of the method, and see e.g.\ \cite{lai'}, \cite{KW}, \cite{CF}, \cite{Trin} for recent applications of Kuo condensation.
In \cite{kuo1}, Kuo presented several different versions of Kuo condensation. For ease of reference, we list below the four versions employed in our proofs.
\begin{lemma}[Balanced Kuo Condensation; Theorem 5.1 in \cite{kuo1}]\label{Kuo1}
Let $G=(V_1,V_2,E)$ be a (weighted) planar bipartite graph with $|V_1|=|V_2|$. Assume that $p_1,p_2,p_3,p_4$ are four vertices appearing in a cyclic order on a face of $G$. Assume in addition that $p_1,p_3\in V_1$ and $p_2,p_4\in V_2$. Then
\begin{align}
w(G)w(G-\{p_1,p_2,p_3,p_4\})=&w(G-\{p_1,p_2\})w(G-\{p_3,p_4\})\notag\\
&+w(G-\{p_1,p_4\})w(G-\{p_2,p_3\}).
\end{align}
\end{lemma}
\begin{lemma}[Unbalanced Kuo Condensation; Theorem 5.2 in \cite{kuo1}]\label{Kuo2}
Let $G=(V_1,V_2,E)$ be a planar bipartite graph with $|V_1|=|V_2|+1$. Assume that $p_1,p_2,p_3,p_4$ are four vertices appearing in a cyclic order on a face of $G$. Assume in addition that $p_1,p_2,p_3\in V_1$ and $p_4\in V_2$. Then
\begin{align}
w(G-\{p_2\})w(G-\{p_1,p_3,p_4\})=&w(G-\{p_1\})w(G-\{p_2,p_3,p_4\})\notag\\
&+w(G-\{p_3\})w(G-\{p_1,p_2,p_4\}).
\end{align}
\end{lemma}
\begin{lemma}[Non-alternating Kuo Condensation; Theorem 5.3 in \cite{kuo1}]\label{Kuo3}
Let $G=(V_1,V_2,E)$ be a planar bipartite graph with $|V_1|=|V_2|$. Assume that $p_1,p_2,p_3,p_4$ are four vertices appearing in a cyclic order on a face of $G$. Assume in addition that $p_1,p_2\in V_1$ and $p_3,p_4\in V_2$. Then
\begin{align}
w(G-\{p_1,p_4\})w(G-\{p_2,p_3\})=&w(G)w(G-\{p_1,p_2,p_3,p_4\})\notag\\
&+w(G-\{p_1,p_3\})w(G-\{p_2,p_4\}).
\end{align}
\end{lemma}
\begin{lemma}[Monochromatic Kuo Condensation; Theorem 5.4 in \cite{kuo1}]\label{Kuo4}
Let $G=(V_1,V_2,E)$ be a planar bipartite graph with $|V_1|=|V_2|+2$. Assume that $p_1,p_2,p_3,p_4$ are four vertices appearing in a cyclic order on a face of $G$. Assume in addition that $p_1,p_2,p_3,p_4\in V_1$. Then
\begin{align}
w(G-\{p_1,p_3\})w(G-\{p_2,p_4\})=&w(G-\{p_1,p_2\})w(G-\{p_3,p_4\})\notag\\
&+w(G-\{p_2,p_3\})w(G-\{p_4,p_1\}).
\end{align}
\end{lemma}
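As a toy sanity check of these identities (our own example, not from \cite{kuo1}), one can verify the non-alternating version, Lemma \ref{Kuo3}, by brute force on the $2\times 3$ grid graph with unit edge weights, taking $p_1,p_2,p_3,p_4$ to be the four corners in cyclic order; the two top corners lie in $V_1$ and the two bottom corners in $V_2$, as the lemma requires:

```python
# Brute-force perfect-matching count (all edge weights 1) via branching on
# the edges covering an arbitrarily chosen vertex.
def matchings(vs, edges):
    vs = set(vs)
    if not vs:
        return 1
    v = next(iter(vs))
    return sum(matchings(vs - {a, b}, edges)
               for a, b in edges
               if v in (a, b) and a in vs and b in vs)

# 2x3 grid graph: vertices (row, col).
V = [(r, c) for r in range(2) for c in range(3)]
E = [((r, c), (r, c + 1)) for r in range(2) for c in range(2)] \
    + [((0, c), (1, c)) for c in range(3)]
p1, p2, p3, p4 = (0, 0), (0, 2), (1, 2), (1, 0)  # corners, cyclic order

def w(removed):
    return matchings(set(V) - set(removed), E)

# Lemma "Kuo3":
# w(G-{p1,p4}) w(G-{p2,p3}) = w(G) w(G-{p1,p2,p3,p4}) + w(G-{p1,p3}) w(G-{p2,p4})
lhs = w({p1, p4}) * w({p2, p3})
rhs = w(set()) * w({p1, p2, p3, p4}) + w({p1, p3}) * w({p2, p4})
print(lhs, rhs)  # 4 4
```

Here $w(G) = 3$ (the three perfect matchings of the $2\times 3$ grid), and both sides of the identity evaluate to $4$.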
\section{Proof of Theorem \ref{thm:main}} \label{Sec:proof}
We begin by verifying that $Z^\emptyset = [x_1,x_2,x_3,x_4,x_5,x_6]$ equals
$$[c(\mathcal{G}(C_1)), c(\mathcal{G}(C_2)), c(\mathcal{G}(C_3)), c(\mathcal{G}(C_4)), c(\mathcal{G}(C_5)), c(\mathcal{G}(C_6))],$$
or equivalently, by Remark \ref{rem:cov}, $$[c(\widetilde{\mathcal{G}}(C_1)), c(\widetilde{\mathcal{G}}(C_2)), c(\widetilde{\mathcal{G}}(C_3)), c(\widetilde{\mathcal{G}}(C_4)), c(\widetilde{\mathcal{G}}(C_5)), c(\widetilde{\mathcal{G}}(C_6))],$$ where the $C_i$'s are the initial contours from (\ref{contours1})-(\ref{contours3}).
Each graph $\widetilde{\mathcal{G}}(C_i)$ is a unit triangle in the brane tiling $\mathcal{T}$ with all but two vertices removed. In all six of these cases, there is a single edge remaining, which is the lone contribution to a perfect matching of $\widetilde{\mathcal{G}}(C_i)$. Thus, multiplying the covering monomial and the weight of the unique perfect matching together, we get
$$m(\widetilde{\mathcal{G}}(C_1)) = m(\widetilde{\mathcal{G}}(C_4)) = m(\widetilde{\mathcal{G}}(C_5))= x_1x_4x_5,$$
$$m(\widetilde{\mathcal{G}}(C_2)) = m(\widetilde{\mathcal{G}}(C_3)) = m(\widetilde{\mathcal{G}}(C_6)) = x_2x_3x_6,$$
$$w(\widetilde{\mathcal{G}}(C_1)) = \frac{1}{x_4x_5},~~ w(\widetilde{\mathcal{G}}(C_2)) = \frac{1}{x_3x_6},~~ w(\widetilde{\mathcal{G}}(C_3)) = \frac{1}{x_2x_6},$$
$$w(\widetilde{\mathcal{G}}(C_4)) = \frac{1}{x_1x_5},~~ w(\widetilde{\mathcal{G}}(C_5)) = \frac{1}{x_1x_4},~~ w(\widetilde{\mathcal{G}}(C_6)) = \frac{1}{x_2x_3}.$$
In all these cases $c(\mathcal{G}(C_i)) =c(\widetilde{\mathcal{G}}(C_i))= m(\widetilde{\mathcal{G}}(C_i))w(\widetilde{\mathcal{G}}(C_i)) = x_i$ as desired.
See Figure \ref{fig:C12}.
This verifies the desired combinatorial interpretation of the cluster variables $z_{i}^{j,k}$ for $(i,j,k) \in \{
(0,-1,0), (0,-1,1), (-1,0,0), (-1,0,1), (0,0,0), (0,0,1)\}$.
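The bookkeeping above can be replicated with exponent vectors; in the sketch below (ours), monomials in $x_1,\dots,x_6$ are length-6 integer tuples of exponents:

```python
# Monomials in x_1..x_6 as exponent tuples; multiplication adds exponents.
def mul(u, v):
    return tuple(a + b for a, b in zip(u, v))

def mono(**e):  # mono(x1=1, x4=1, x5=1) -> exponent tuple of x1*x4*x5
    return tuple(e.get(f"x{i}", 0) for i in range(1, 7))

# Covering monomials and matching weights of the six initial graphs.
m = [mono(x1=1, x4=1, x5=1), mono(x2=1, x3=1, x6=1), mono(x2=1, x3=1, x6=1),
     mono(x1=1, x4=1, x5=1), mono(x1=1, x4=1, x5=1), mono(x2=1, x3=1, x6=1)]
w = [mono(x4=-1, x5=-1), mono(x3=-1, x6=-1), mono(x2=-1, x6=-1),
     mono(x1=-1, x5=-1), mono(x1=-1, x4=-1), mono(x2=-1, x3=-1)]
products = [mul(m[i], w[i]) for i in range(6)]
# c(G(C_i)) = m * w is exactly x_{i+1}, i.e. the i-th standard basis vector:
assert products == [tuple(int(j == i) for j in range(6)) for i in range(6)]
print(products)
```

Each product is a standard basis vector of exponents, confirming $c(\mathcal{G}(C_i)) = x_i$.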
\begin{figure}
\includegraphics[width=1.5in]{CO-eg2.pdf} \hspace{3em}
\includegraphics[width=1.5in]{C1-eg2.pdf}
\caption{Examples of $\widetilde{\mathcal{G}}(C_1)$ and $\widetilde{\mathcal{G}}(C_2)$.}
\label{fig:C12}
\end{figure}
The proof continues by induction. As in Section \ref{sec:directions}, toric mutations correspond to $30$ possible transformations $(i,j,k) \to (i',j',k')$. It suffices to turn each of these geometric moves into an algebraic recurrence. We proceed to accomplish this as follows:
Step 1: Let $(d_A,d_B,d_C,d_D,d_E,d_F)$ denote the difference between the six-tuple given as $\phi(i,j,k) = (a,b,c,d,e,f)$ and the six-tuple $\phi(i',j',k') = (a',b',c',d',e',f')$. Based on the description in Section \ref{sec:directions}, the $30$ possibilities for $(d_A,d_B,d_C,d_D,d_E,d_F)$ are
$\pm (1, 0, -2, 3, -2, 0)$, $\pm (-2, 1, -1, 2, -3, 3)$, $\pm (2, -2, 0, 2, -2, 0)$, or one of these up to rotation, categorized as (R4), (R1), or (R2), respectively. We let $\mathcal{C}$ and $\mathcal{C}'$ denote the contours
$$\mathcal{C} = {\mathcal{C}}(a,b,c,d,e,f) \mathrm{~~and~~} \mathcal{C}' = {\mathcal{C}}(a',b',c',d',e',f').$$
Step 2: We create a new contour $\mathcal{O}$ by superimposing a shift of $\mathcal{C}'$ on top of $\mathcal{C}$ and drawing the contour obtained by taking the outer boundary. We shift according to the following rules:
(a) If $d_A \geq 2$ (resp. $d_A \leq -2$) we first shift $\mathcal{C}'$ to the left (resp. right) by $(-1,0)$ (resp. $(1,0)$), i.e. parallel to side $f$.
(b) If $d_F \geq 2$ (resp. $d_F \leq -2$), we shift $\mathcal{C}'$ (instead of, or in addition to, the shift in (a)) diagonally by $(\frac{1}{2}, -\frac{\sqrt{3}}{2})$ (resp. $(-\frac{1}{2}, \frac{\sqrt{3}}{2})$), i.e. parallel to side $a$.
(c) In the cases when $|d_A|$ and $|d_F| \geq 2$, both of these two shifts will occur. However we observe that $d_A$ and $d_F$ cannot have the same sign and thus we get a total shift of $\mathcal{C}'$ of either $(\frac{3}{2}, -\frac{\sqrt{3}}{2})$ or $(-\frac{3}{2}, \frac{\sqrt{3}}{2})$.
(d) In the remaining cases, we also shift $\mathcal{C}'$ by $(-1,0)$ (resp. $(1,0)$) if $d_A = 1$ (resp. $d_A = -1$) and $d_F = 0$. Similarly, we shift by
$(\frac{1}{2}, -\frac{\sqrt{3}}{2})$ (resp. $(-\frac{1}{2}, \frac{\sqrt{3}}{2})$) if $d_A = 0$ and $d_F = 1$ (resp. $d_F=-1$). Lastly, we do not shift
$\mathcal{C}'$ at all if $\{d_A,d_F\} = \{-1,1\}$.
Based on these rules, the local configuration around the corner where sides $f$ and $a$ meet in $\mathcal{C}$ (and sides $f'$ and $a'$ meet in $\mathcal{C}'$) are one of the possibilities illustrated in Figure \ref{fig:AA} (if $|d_A| = 2$, $|d_A|= 3$, or both $d_A = \pm 1$ and $d_F=0$), Figure \ref{fig:FF} (if $|d_F| = 2$, $|d_F|= 3$, or both $d_F = \pm 1$ and $d_A=0$), Figure \ref{fig:NN} (if neither of these happen), or Figure \ref{fig:AF} (if both of these conditions occur).
For brevity, we only illustrate the cases where $d_A \geq 0$ and $d_F \leq 0$ since allowing $d_A < 0$ or $d_F > 0$ merely switches the role of $\mathcal{C}$ and $\mathcal{C}'$. We also only illustrate the case where side lengths $a$ and $f$ are positive (except in Figure \ref{fig:NN}) since changing their signs or values does not affect the relative position of the endpoints of sides $a$, $a'$, $f$, and $f'$. This is clear in Figure \ref{fig:NN} and extends to the other cases as well.
\begin{figure}
\includegraphics[width=4.5in]{AA.pdf}
\caption{The cases with a shift of $(-1,0)$ for $\mathcal{C}'$. From Left to Right: (i) $d_F=0$, $d_A=1$, (ii) $d_F=0$, $d_A=2$, and (iii) $d_F=-1$, $d_A=2$.}
\label{fig:AA}
\end{figure}
\begin{figure}
\includegraphics[width=4.5in]{FF.pdf}
\caption{The cases with a shift of $(1/2,-\sqrt{3}/2)$ for $\mathcal{C}'$. From Left to Right: (i) $d_F=-1$, $d_A=0$, (ii) $d_F=-2$, $d_A=0$, and (iii) $d_F=-2$, $d_A=1$.}
\label{fig:FF}
\end{figure}
\begin{figure}
\includegraphics[width=4.5in]{NN.pdf}
\caption{The case with no shift. There are only two possibilities, $(d_F,d_A)=(1,-1)$ or $(d_F,d_A)=(-1,1)$. We illustrate the latter but unlike Figures \ref{fig:AA}, \ref{fig:FF}, and \ref{fig:AF}, illustrate the shape of the corner as $a$ and $f$ vary in sign. }
\label{fig:NN}
\end{figure}
\begin{figure}
\includegraphics[width=4.5in]{AF.pdf}
\caption{Lastly, we illustrate the cases with a shift of $(3/2,-\sqrt{3}/2)$ for $\mathcal{C}'$. (i) In the top-left corner, $d_F=-2$, $d_A=2$, (ii) in the top-right, $d_F=-2$, $d_A=3$, (iii)
in the bottom-left, $d_F=-3$, $d_A=3$, and (iv) in the bottom-right, $d_F=-3, d_A=2$.}
\label{fig:AF}
\end{figure}
Step 3: By inspecting these figures, we claim that the outer contour $\mathcal{O}$ either coincides with or is shifted one unit away from the contour $\mathcal{C}$ (resp. $\mathcal{C}'$) along side $a$ (resp. $a'$), as well as along $f$ (resp. $f'$). Its behavior is also completely determined by the pair $(|d_A|, |d_F|)$.
Considering instead the local configuration in the neighborhoods of the corners where $a$ and $b$ meet in $\mathcal{C}$ (as well as where $a'$ and $b'$ meet in $\mathcal{C}'$), we obtain rotations of Figures \ref{fig:AA}, \ref{fig:FF}, \ref{fig:NN}, and \ref{fig:AF} by $60^\circ$ clockwise. Based on the possible six-tuples for $(d_A,d_B,d_C,d_D,d_E,d_F)$ given in Step 1, we see that the ordered pairs $(d_F,d_A)$ lead to the possibilities for $(d_A,d_B)$ as given by the following tables\footnote{We only show the cases where $d_A \geq 0$ and $d_F \leq 0$ since the case where $d_A < 0$ or $d_F > 0$ is analogous.}:
$$\begin{array}{r|c|c|c|c|c|c}
(d_F,d_A) & (0,1) & (0,2) & (-1,2) & (-1,0) & (-2,0) & (-2,1) \\
\hline (d_A,d_B) & (1,0) & (2,-2) \mathrm{~or~} (2,-3) & (2,-3) & (0,2) & (0,1) \mathrm{~or~} (0,2) & (1,-1)
\end{array}$$
$$\begin{array}{r|c|c|c|c|c}
(d_F,d_A) & (-1,1) & (-2, 2) & (-2, 3) & (-3, 3) & (-3,2) \\
\hline (d_A,d_B) & (1,-2) & (2, 0) & (3, -2) \mathrm{~or~} (3,-3) & (3, -2) & (2,0) \mathrm{~or~} (2,-1)
\end{array}$$
Based on the rules of Step 2 and these two tables, we conclude that these two local configurations can be consistently glued together into a shape involving the three sides $\{f,a,b\}$ (resp. $\{f',a',b'\}$). Inductively, all six sides can be glued together in this way, and we see from these figures that the outer contour $\mathcal{O}$ is built by taking the longer side at each corner (when the two parallel sides do not overlap). It follows that $\mathcal{O}$ is a (six-sided) contour just as defined in the beginning of Section \ref{Sec:Contours}.
Step 4: Comparing the outer contour $\mathcal{O}$ to the contours $\mathcal{C}$ and $\mathcal{C}'$, respectively, some of the sides of $\mathcal{O}$ lie to the strict left of $\mathcal{C}$ (call these positive), others to the strict right (call these negative). We compare $\mathcal{O}$ and $\mathcal{C}'$ analogously.
As explained in Step 3, it is sufficient to look at the six ordered pairs
$(d_F,d_A)$, $(d_A,d_B)$, $(d_B,d_C)$, $(d_C,d_D)$, $(d_D,d_E)$, and $(d_E,d_F)$ to determine which sides are positive, which sides are negative, and which are neither. Based on the description in Section \ref{sec:directions} and in Step 1, we conclude that in all cases, there are exactly four sides which are positive or negative. Hence by taking the appropriate linear combination of $6$-tuples associated to sides, as appearing in Lemma \ref{claim:signtuple}, we can recover the contour $\mathcal{C}$ or the contour $\mathcal{C}'$ from $\mathcal{O}$.
Step 5: By case-by-case analysis, we see that the possibilities for $(d_A,d_B,d_C,d_D,d_E,d_F)$ discussed in Step 1 can be written as the linear combination
$$c_A (-1,1,0,0,0,1) + c_B(1,-1,1,0,0,0) + c_C(0,1,-1,1,0,0)$$ $$ \hspace{5em} + ~~ c_D(0,0,1,-1,1,0) + c_E(0,0,0,1,-1,1) + c_F(1,0,0,0,1,-1),$$
where two of these coefficients are $+1$, two are $-1$, and two are $0$.
For example, $$(1,0,-2,3,-2,0) \leftrightarrow c_A=-1, c_C=+1, c_D=-1, c_E=+1,$$
$$(-2,1,-1,2,-3,3) \leftrightarrow c_A=+1, c_D=-1, c_E=+1, c_F=-1, \mathrm{~ and}$$
$$(2,-2,0,2,-2,0) \leftrightarrow c_A=-1, c_B=+1, c_D=-1, c_E=+1.$$
The set of nonzero coefficients exactly matches up with the set of special sides identified in Step 4. Thus we may also use Lemma \ref{claim:signtuple} to switch our point of view from the contours $\mathcal{C}(a,b,c,d,e,f)$ to the actual subgraphs $\widetilde{\mathcal{G}}(a,b,c,d,e,f)$.
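The three sample decompositions above can be verified directly; in this sketch (ours), the side vectors $A,\dots,F$ are generated as successive cyclic rotations of the six-tuple for a black $A$ from Lemma \ref{claim:signtuple}:

```python
# Side vectors: removing a black A changes the contour by (-1,1,0,0,0,1);
# B,...,F are its successive cyclic rotations.
A = (-1, 1, 0, 0, 0, 1)
def rot(t, n):
    return t[-n:] + t[:-n]
B, C, D, E, F = (rot(A, n) for n in range(1, 6))
sides = [A, B, C, D, E, F]

def comb(coeffs):  # linear combination sum_X c_X * X over the six sides
    return tuple(sum(c * v[i] for c, v in zip(coeffs, sides))
                 for i in range(6))

# The three examples from Step 5 (coefficients ordered c_A,...,c_F):
assert comb((-1, 0, 1, -1, 1, 0)) == (1, 0, -2, 3, -2, 0)
assert comb((1, 0, 0, -1, 1, -1)) == (-2, 1, -1, 2, -3, 3)
assert comb((-1, 1, 0, -1, 1, 0)) == (2, -2, 0, 2, -2, 0)
print("decompositions verified")
```

In each case exactly two coefficients are $+1$, two are $-1$, and two are $0$, as asserted above.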
Consequently, we now build the graph $H$, which is defined as $H= \widetilde{\mathcal{G}}(\mathcal{O})$. We are able to use the methods of Section \ref{sec:directions} to pick four points $X$, $Y$, $W$, $Z$ (in $\{A,B,C,D,E,F\}$) on $H$ with appropriate colors based on the special sides identified in Step 4. The colors are determined by the sign of the side. Because the contour $\mathcal{O}$ always overlaps with either $\mathcal{C}$ or $\mathcal{C}'$ (or both) along each side, we get a set partition $\{S_1, S_2\}$ of the points $\{X,Y,W,Z\}$ such that the removal of the points $S_1$ from $H$ yields the graph $G = \widetilde{\mathcal{G}}(\mathcal{C})$ and the removal of the points $S_2$ from $H$ yields the graph $G' = \widetilde{\mathcal{G}}(\mathcal{C}')$.
Step 6: Based on this set partition and the color pattern of the four points involved, one of the four possible versions of Kuo Condensation applies. By construction, the left-hand-side will involve graphs $G = \widetilde{\mathcal{G}}(\mathcal{C})$ and $G' = \widetilde{\mathcal{G}}(\mathcal{C}')$. The appropriate application of Kuo condensation, i.e. Lemma \ref{Kuo1}, \ref{Kuo2}, \ref{Kuo3}, or \ref{Kuo4}, dictates the appropriate graphs on the right-hand-side accordingly.
Assume that we get the general Kuo recurrence:
\begin{align}\label{eq:kuogen1}
w(H-S_1)w(H-S_2)=w(H-S_3)w(H-S_4)+w(H-S_5)w(H-S_6),
\end{align}
where $S_1,S_2,\dotsc, S_6$ are certain subsets of $\{X,Y,W,Z\}$, and $S_{i+1}=\{X,Y,W,Z\}-S_i$, for $i=1,3,5$. By applying Lemma \ref{claim:signtuple}, we are able to obtain a contour $\mathcal{C}_i$ by adding or subtracting the appropriate six-tuples from $\mathcal{O}$ as dictated by the subset $S_i$. After removing the forced edges from the graph in the above recurrence, we get the graphs $G=G_1=\widetilde{\mathcal{G}}(\mathcal{C}_1),G'=G_2=\widetilde{\mathcal{G}}(\mathcal{C}_2),G_3=\widetilde{\mathcal{G}}(\mathcal{C}_3),G_4=\widetilde{\mathcal{G}}(\mathcal{C}_4),G_5=\widetilde{\mathcal{G}}(\mathcal{C}_5),G_6=\widetilde{\mathcal{G}}(\mathcal{C}_6)$.
It is easy to see that the removal of the point $X$ (resp. $Y,W,Z$) reduces the covering monomial of $H$ by exactly one face (see Figure \ref{fig:6-cov-mon}). In particular, this face can be determined uniquely from Figures \ref{fig:forced} and \ref{fig:forced2}. If $X$ (resp. $Y,W,Z$) is white then the face is the one inside the shaded triangle adjacent to $X$ (resp. $Y,W,Z$); if $X$ (resp. $Y,W,Z$) is black then the face is the one inside the shaded triangle adjacent to $X$ (resp. $Y,W,Z$) and the side $x$ (resp. $y,w,z$). We denote this face by $t_X$ (resp. $t_Y,t_W,t_Z$).
In particular, the face $t_X$ has label $2,5,3,1,6,4$ when $X$ is white $A,B,C,D,E,F$, respectively; and $t_X$ is labeled by $5,3,1,6,4,2$ when $X$ is black $A,B,C,D,E,F$, respectively.
\begin{figure}
\includegraphics[width=12cm]{balancesmall.pdf}
\caption{Illustrating how covering monomials are affected by the removal of points.}
\label{fig:6-cov-mon}
\end{figure}
Just like the case of $\widetilde{\mathcal{G}}(a,b,c,d,e,f)$, we define the covering monomial, $m(H)$, of $H$ as the product of the weights of all faces restricted inside the contour $\mathcal{O}$ associated to $H$. We also define the covering monomial, $m(H-S_i)$, of the graph $H-S_i$ by
\[m(H-S_i)=\frac{m(H)}{wt(S_i)},\]
where $wt(S_i)$ is the product of the weights of all faces corresponding to the vertices in $S_i$; and let $c(H-S_i)=m(H-S_i)w(H-S_i)$. By definition, we have
\begin{align}
m(H-S_1)m(H-S_2)&=m(H-S_3)m(H-S_4)=m(H-S_5)m(H-S_6)\notag\\
&=\frac{m(H)^2}{wt(t_X)wt(t_Y)wt(t_W)wt(t_Z)}.\end{align}
Thus, (\ref{eq:kuogen1}) is equivalent to
\begin{align}\label{eq:kuogen2}
c(H-S_1)c(H-S_2)=c(H-S_3)c(H-S_4)+c(H-S_5)c(H-S_6).
\end{align}
Since $G_i$ is obtained from $H-S_i$ by removing forced edges, the ratio $w(H-S_i)/w(G_i)$ is the product of the weights of all forced edges, and $m(H-S_i)/m(G_i)$ is the product of the weights of all faces adjacent to these forced edges. These two products cancel each other out, hence $c(H-S_i)=c(G_i)$. Therefore, equation (\ref{eq:kuogen2}) implies
\begin{align}\label{eq:kuogen3}
c(G)c(G')=c(G_3)c(G_4)+c(G_5)c(G_6).
\end{align}
In conclusion, for any contour without self-intersection, we have $z_{i}^{j,k} =c(\widetilde{\mathcal{G}}(\mathcal{C}_i^{j,k}))=c(\mathcal{G}(\mathcal{C}_i^{j,k})) = z(a,b,c,d,e,f)$ (using the notation of Section \ref{sec:comb}) as desired, finishing the proof of the theorem.
\section{Further Examples} \label{sec:examp}
In this section, we provide a number of graphics illustrating the methods used in Sections \ref{sec:directions} and \ref{Sec:proof} to prove Theorem \ref{thm:main}.
Firstly, we provide a sample of the graphics obtained in \cite{sage} for a variety of examples of the contours $\mathcal{C}$ and shifted $\mathcal{C}'$. In these examples, we start with $\mathcal{C}=\mathcal{C}_i^{j,k}$ and $\mathcal{C}' = \mathcal{C}_{i+d_i}^{j+d_j,k+d_k}$. We visualize the superposition and the outer contour $\mathcal{O}$ via the command {\tt SuperO(i,j,k,[$d_i$,$d_j$,$d_k$])}. The code also outputs the six-tuples $(a,b,c,d,e,f)$, $(a',b',c',d',e',f')$, the type of recurrence involved, and the associated shift.
\vspace{10em}
\begin{sageblock}
SuperO(-7,-7,-7,[1,-1,2])
\end{sageblock}
\begin{center}\includegraphics[width=4in]{Img1.png}
\end{center}
$$(-14, 21, -14, 1, 6, 1), (-13, 19, -11, -2, 8, 0), R1, (0, 0)$$
\begin{sageblock}
SuperO(-10,-10,8,[-1,2,-1])
\end{sageblock}
\begin{center}\includegraphics[width=4in]{Img2.png}
\end{center}
$$(-2, 12, -2, -17, 27, -17), (-1, 12, -4, -14, 25, -17), R4, (-1,0)$$
\begin{sageblock}
SuperO(-10,-10,8,[-1,2,1])
\end{sageblock}
\begin{center}\includegraphics[width=4in]{Img3.png}
\end{center}
$$(-2, 12, -2, -17, 27, -17), (1, 10, -2, -16, 27, -19), R4, (-\frac{3}{2},\frac{\sqrt{3}}{2})$$
\vspace{5em}
\begin{sageblock}
SuperO(-4,-10,8,[2,0,0])
\end{sageblock}
\begin{center}\includegraphics[width=4in]{Img4.png}
\end{center}
$$(-2, 6, 4, -17, 21, -11), (-2, 4, 6, -17, 19, -9), R2, (\frac{1}{2},-\frac{\sqrt{3}}{2})$$
\begin{sageblock}
SuperO(8,-9,8,[2,0,0])
\end{sageblock}
\begin{center}\includegraphics[width=4in]{Img5.png}
\end{center}
$$(-1, -7, 16, -16, 8, 1), (-1, -9, 18, -16, 6, 3), R2, (\frac{1}{2},-\frac{\sqrt{3}}{2})$$
\begin{sageblock}
SuperO(20,-6,8,[1,0,2])
\end{sageblock}
\begin{center}\includegraphics[width=4in]{Img6.png}
\end{center}
$$(2, -22, 28, -13, -7, 13), (4, -25, 31, -15, -6, 12), R1, (-1,0)$$
In Figures \ref{fig:egg} - \ref{fig:eggs}, we provide additional examples and fill in the associated contours with the corresponding subgraphs of the brane tiling $\mathcal{T}$. We illustrate our use of the four types of Kuo Condensation used to get our algebraic recurrences in Step 6 of the proof of Theorem \ref{thm:main}.
\begin{figure}
\includegraphics[width=12cm]{R1b.pdf}
\caption{Example of the recurrence (R1) using Balanced Kuo Condensation with $i=0, j=5, k=3$.}
\label{fig:egg}
\end{figure}
\begin{figure}
\includegraphics[width=12cm]{R2b.pdf}
\caption{Example of the recurrence (R2) using Balanced Kuo Condensation with $i=0, j=5,k=3$.}
\end{figure}
\begin{figure}
\includegraphics[width=12cm]{QIII8b.pdf}
\caption{Example of the recurrence (R4) using Unbalanced Kuo Condensation with $i=-5, j=3,k=1$.}
\end{figure}
\begin{figure}
\includegraphics[width=11cm]{QIV8b.pdf}
\caption{Example of (R4) using Unbalanced Kuo Condensation with $i=-3, j=-2,k=1$.}
\end{figure}
\begin{figure}
\includegraphics[width=12cm]{QI8b.pdf}
\caption{Another example of (R4) using Unbalanced Kuo Condensation with $i=1, j=3, k=1$.}
\end{figure}
\begin{figure}
\includegraphics[width=12cm]{R2new.pdf}
\caption{Example of (R2) using Non-alternating Kuo Condensation with $i=-5, j=6,k=6$.}
\end{figure}
\begin{figure}
\includegraphics[width=12cm]{R2single.pdf}
\caption{Example of (R2) using Monochromatic Kuo Condensation with $i=0, j=4,k=0$.}
\label{fig:eggs}
\end{figure}
\begin{remark}
Balanced Kuo Condensation, as in Lemma \ref{Kuo1}, is utilized to prove Lemma 3.6 of \cite{LMNT}, Proposition 2 of \cite{zhang}, and the Recurrence (R4) in \cite{LaiNewDungeon}. The proof of Theorem \ref{thm:main} shares similarities with the methods in those papers, but using all four types of Kuo Condensation theorems provided us with a greater toolbox. For instance, in \cite{LMNT}, a consequence of relying on only one of the four types of Kuo condensation was the need to restrict to certain subsequences of $\tau$-mutation sequences along a so-called canonical path, as opposed to mutations in any direction. This was handled by induction and by taking advantage of symmetries, but required a non-intuitive decomposition of the $(i,j)$-plane into twelve regions.
Analogously, the first author's previous work in \cite{LaiNewDungeon} had to rely on a large number of base cases without utilizing the additional three Kuo Condensation Theorems. We believe that more widely used application of all four of these Kuo condensation recurrences will allow more elegant attacks on other perfect matching enumeration problems that had previously appeared too daunting.
\end{remark}
\section{Conclusions and Open Questions} \label{sec:open}
In this paper, we succeeded in starting from the Model 1 $dP_3$ quiver and providing a $\mathbb{Z}^3$-parameterization of cluster variables that are reachable via sequences of toric mutations (i.e. two incoming arrows and two outgoing arrows at every vertex when it is mutated). For such cluster variables, we presented an explicit algebraic formula for their Laurent expansions in terms of this parameterization. Then for most of these cluster variables (the ones corresponding to contours without self-intersections) we also gave a combinatorial interpretation of these Laurent expansions in terms of subgraphs of the $dP_3$ brane tiling $\mathcal{T}$.
We suspect that the methods of this paper should generalize to other quivers associated to brane tilings, i.e. those that can be embedded on a torus. One interesting feature of the $dP_3$ case was the fact that the associated toric diagram and contours had six possible sides rather than the four sides that show up in the study of the octahedron recurrence \cite{speyer}, $T$-systems \cite{DiF}, Gale-Robinson Sequences and pinecones \cite{BMPW,JMZ}. This led to new phenomena such as self-intersecting contours which we wish to investigate further in future work.
On the other hand, the $dP_3$ quiver provides an example with a lot of symmetry coming from the toric diagram consisting of three pairs of anti-parallel sides. Additionally, Model 1 gives rise to a brane tiling which is {\bf isoradial}, meaning that {\bf alternating-strand diagrams} \cite{Post,Scott}, also known as {\bf Postnikov diagrams} or {\bf zig-zag paths} \cite{GK}, behave (and can be drawn as) straight lines \cite{Vafa,HV}. For less symmetric cases, the contours might not follow the lines of the brane tiling and would instead require a description directly in terms of zig-zag paths or alternating-strands.
In particular, the boundaries of the subgraphs in this paper are actually segments of zig-zag paths (as observed by R. Kenyon), and the alternating-strand diagrams would naturally oscillate along the contour lines, thereby separating the white and black vertices on the strands' two sides. However, translating lengths of these zig-zags into coordinates was not as well-behaved (see \cite{LMNT}) as the contour-coordinates of this current paper. Based on conversations with David Speyer and Dylan Thurston, they have unpublished work using zig-zag coordinates for general brane tilings to yield cluster structures.
\begin{problem} \label{prob:self-int}
How do we make sense of contours that self-intersect, and their corresponding subgraphs so that we get the desired cluster variables and Laurent polynomials? As indicated by conversations with R. Kenyon, D. Speyer, and B. Young, we suspect there is some sort of double-dimer interpretation for the Laurent expansions of such cluster variables. Initial conjectured combinatorial interpretations in this direction did not produce the correct formulas, but more recent computations by the second author and D. Speyer showed more promise.
\end{problem}
\begin{problem} We saw in Remarks \ref{rem:RG} and \ref{rem:zono} that the generalized $\tau$-mutation sequences have particular properties of note to physicists that more general toric mutation sequences lack. Mathematically, we wonder if there are other important ways that the generalized $\tau$-mutation sequences are special. We have already seen that they satisfy Coxeter relations in this example. Additionally, based on explorations by Michael Shapiro and Michael Gehktman \cite{GS}, it seems natural to ask if the generalized $\tau$-mutation sequences represent the integrable directions of motions in the discrete integrable system induced by this cluster algebra.\end{problem}
\begin{problem}
How well do the results and methods of this paper extend to other quivers and subgraph interpretations? For example, the $F_0 (\mathbb{P}^1 \times \mathbb{P}^1)$ quiver, Gale-Robinson quivers, and others arising from the Octahedron Recurrence or $T$-systems have similar-looking combinatorial interpretations using Aztec Diamonds or Pinecones. Can we construct the Aztec Diamonds and Pinecones using a parameterization and contour coordinates? In principle, this should agree with the methods in \cite{speyer}, but the transformation between the two coordinate systems and parameterization of cluster variables is still not written down.
\end{problem}
\begin{problem}
In the first author's work on Blum's conjecture with Ciucu \cite{lai'}, a number of other subgraphs of doubly-periodic tilings of the plane appear. Some of these are called Needle Families and Hexagonal Dungeons. In fact, the Hexagonal Dungeons are the dual graph for the Model 4 $dP_3$ quiver, so it is expected that the weighted enumeration of perfect matchings of these regions should indeed yield Laurent expansions of cluster variables starting with a different initial cluster.
However, one mystery is why contours have coordinates $(a,b,c,d,e,f)$ summing up to $1$ in the present paper (see Lemma \ref{lem:closeup}), but summing up to $0$ in the case of Hexagonal Dungeons. For the Needle family, this seems to be a genuinely different quiver. Can we find other quivers such that the weighted enumeration of perfect matchings of those subgraphs also yield Laurent expansions of cluster variables reachable by certain mutation sequences in the corresponding cluster algebra?
\end{problem}
\begin{problem}
In work in progress with Di Francesco, the equations of Section \ref{sec:explicit} seem well-suited for analyzing limit shapes. This analysis uses techniques similar to those used in the case of T-systems by Di Francesco and Soto-Garrido for studying arctic curves \cite{diFSG}. Several nuances appear to make this case quite interesting such as multi-dimensional time and the role of non-convex regions.
\end{problem}
\begin{problem}
In his study of Admissible $W$-graphs, Stembridge \cite{Stembridge} introduced a family of directed graphs (equivalently quivers) $\Lambda \times \Lambda$ known as a {\bf twist}. Here $\Lambda$ is an orientation of a Dynkin diagram of type $A$, $D$, or $E$. Such twists were recently studied by Galashin and Pylyavskyy in \cite{GaPy} as part of their classification of Zamolodchikov periodic quivers. Such quivers and their associated cluster algebras have the property that one can define certain sequences of mutations such that their compositions with one another satisfy the Coxeter relations of $\Lambda$, when thought of as an action on cluster variables. As pointed out to us by Pylyavskyy, the $dP_3$ quiver is an example of such a quiver, in particular this is a twist of the Weyl group Affine $\tilde{A}_2$. The relevant Coxeter relations were seen in (\ref{eq: tau_relations}). This thus motivates an exploration for combinatorial formulas for cluster variables associated to the quivers constructed as the twists of other Weyl groups or other Zamolodchikov periodic quivers.
\end{problem}
{\bf Acknowledgments:} The authors are grateful to Mihai Ciucu, Philippe Di Francesco, Richard Eager, Sebastian Franco, Michael Gehktman, Rinat Kedem, Richard Kenyon, Pasha Pylyavskyy, Michael Shapiro, David Speyer, and Dylan Thurston for a number of inspirational discussions. We are both appreciative of the hospitality of the Institute of Mathematics and its Applications (IMA) for providing support for this project. The second author was supported by NSF Grants DMS-\#1148634 and DMS-\#13692980. Much of this research was also aided by the open source mathematical software \cite{sage}. Much of this project was done during the time the first author worked at the IMA as a postdoctoral associate (2014--2016).
\bibliographystyle{alphaurl}
\newcommand{\etalchar}[1]{$^{#1}$}
\section{Introduction}
Currently, around $50 \%$ of all patients with localized malignancies undergo treatment including ionizing radiation, mostly in combination with tumor resection and/or chemotherapy \cite{Durante2,ptcog}. Conventional therapy with high energy photons is by far the most common approach, but the use of accelerated particles has grown exponentially, especially in the past decade. The well-defined, energy-dependent range with sharp distal fall-off and the limited lateral beam spread, typical of ions penetrating a medium, translate into a dose profile delivered with millimeter precision. In addition, charged particles, especially those with larger charge, present enhanced biological effectiveness compared to photons, resulting in reduced cellular repair \cite{Durante,Sch,Tin}. Thus the field of radiation oncology is evolving towards broader application of radiotherapy with ions, although several key physical and biological questions remain to be fully unraveled.
In particular, the need to account for a biologically effective dose, beyond a purely physical energy deposition, imposes an advanced characterization of the beam \cite{Kramer}.
The calculation of the effective dose distribution delivered to the patient during a treatment, indeed, requires detailed knowledge of the radiation field composition at the tumor site and in the surrounding tissue. The beam quality along its propagation in the medium is, in fact, modified by nuclear and electromagnetic interactions of the primary ions with the nuclei, atoms and molecules of the patient's body, creating a mixed radiation field composed of primary ions as well as secondary nuclear fragments of different charge and kinetic energy \cite{Durante,Kra}.
The components of such a complex field deliver different nanoscopic damage to the biological target molecules, mainly mediated by their secondary electron distributions \cite{Sci,Plan} and, in turn, by the radicals generated by these electrons \cite{Plan2,Plan3}, although alternative processes exist \cite{Sur2,Tou}. Such a nanoscopic pattern of energy deposition results in a different \textit{complexity} of molecular damage \cite{Sur}, which correlates with a different repairability and thus a different biological response.
An accurate approach for characterizing the complex radiation field produced by an ion beam is microdosimetry \cite{Ros}. There are two main points of strength in using this methodology: i) the energy deposited by radiation is measured in an area with dimensions comparable to a cell nucleus; and ii) stochastic fluctuations of energy deposition, e.g. from cell to cell, are taken into account. Microdosimetry is considered a link between the physical characteristics and the biological effectiveness of radiation, with the advantage of relying on an experimentally measurable physical quantity, and has been used in radiobiological models to describe radiation quality. One of the most relevant examples is the \textit{Microdosimetric Kinetic Model} (MKM), which was formulated in its original version in \cite{Haw,Haw3} as an elaboration of the \textit{Theory of Dual Radiation Action} (TDRA) \cite{Kel,Zai} and of the \textit{Repair--Misrepair Model} (RMR) \cite{Cur,Tob}. The MKM exploits microdosimetric spectra to calculate the energy deposited by radiation and predicts cell survival by modeling the DNA-damage repair kinetics. Today, it is one of the two radiobiological models employed clinically in particle therapy, together with the Local Effect Model (LEM) \cite{Els,Pfu}.
Although based on microdosimetry, the MKM is a purely deterministic model, as only the average number of lethal lesions induced by radiation to the DNA is considered. The model aims to provide a mathematical formulation of the kinetic evolution of double--strand breaks (DSB) in the DNA in order to calculate the cell survival fraction. Mathematically, the temporal evolution of a DSB is described by a system of two ordinary differential equations representing the average number of lethal and potentially lethal damages as a function of time. This description is accurate only as long as the lethal and potentially lethal damage distributions are Poissonian, and results in a cell survival curve that follows a linear--quadratic behaviour \cite{Haw}. However, it has been widely shown in the literature that the DNA damage distribution deviates significantly from a Poisson function under several irradiation conditions, such as high dose or high LET \cite{Haw2}. For this reason, several recent studies have focused on implementing corrections to the original MKM formulation to account for non--Poissonian behaviours of the DNA damage distribution \cite{Haw2,Haw3,Kas,Ina,Ina2,Sat}. An extensive collection of the original MKM formulation and its subsequent generalizations can be found in \cite{Bel}. Nonetheless, all MKM versions are based on the original deterministic formulation described by Hawkins \cite{Haw}.
The main goal of the present work is to develop a fully probabilistic model of DNA damage formation and its kinetic evolution based on microdosimetry. The new model, called \textit{Generalized Stochastic Microdosimetric Model} (GSM$^2$), will provide a rigorous and general mathematical description of DNA damage time--evolution without using any a priori assumption on the lesion distribution (e.g. a Poisson distribution). The model accuracy will be tested for different irradiation conditions (beam quality, dose and dose rate) and compared with MKM predictions, to prove both the validity of GSM$^2$ and its advances over the current standard.
The classical approach for mathematically modeling a complex physical system, such as the one resulting from the interaction between cells and ionizing radiation that leads to the formation of DNA lesions, is achieved with deterministic models. In these approaches, given an initial condition, the system time--evolution can be completely characterized at each state. Recent studies \cite{Smi} have shown that this approach fails mainly for three reasons: i) a precise and accurate estimation of the parameters is often not feasible; ii) it is unrealistic to account for all possible interactions as the system complexity increases; and iii) certain systems can be over-sensitive to some input parameters, typically the initial values. All the above reasons have led to the inclusion of stochasticity in the models via suitable random variables.
To model complex physical processes, such as lesion formation following radiation exposure, the standard method is to consider the \textit{macroscopic} system, so that the main focus is on the system as a whole; in this approach the main equations governing the physical or biological processes are typically deterministic and represent average values. In a \textit{microscopic} (or often nanoscopic) approach, instead, each element of the system is usually modelled using Brownian dynamics \cite{VK,Sol,Ianik}. However, the complexity of lesion formation and time-evolution makes a full Brownian-dynamics representation infeasible.
To obtain a more general and accurate description of DNA lesion formation and evolution than the one provided by a \textit{macroscopic} approach, and yet to maintain suitable mathematical tractability of the main equations, which is often missing in a \textit{microscopic} approach, a hybrid methodology, known as \textit{mesoscopic}, can be considered. This approach takes into account the stochastic nature of a system while remaining manageable from both the analytical and numerical points of view. The \textit{mesoscopic} method is based on the assumption that the process driving the system evolution is a Markov jump process \cite{Gar}. The equations of motion are described via the so--called \textit{master equation} that contains the probability density function of the whole system \cite{Gar,VK,Web}.
In GSM$^2$, we will introduce an equation, referred to as the \textit{Microdosimetric Master Equation} (MME), that governs the time evolution of the joint probability density function for lethal and sub--lethal damages inside the cell nucleus and is based on the parameters $a$, $b$ and $r$. The main innovation with respect to the existing approaches is that the proposed MME takes into account variations in both lesion formation and evolution caused by the randomness of these processes. In particular, we will use microdosimetric spectra for describing radiation quality, thus considering the stochastic nature of energy deposition.
To provide a rigorous mathematical formulation of the DNA damage kinetics, we will consider lethal and sublethal lesions inside a single cell nucleus. Potentially lethal lesions can either be repaired or not, in which case they become lethal lesions. A cell in which at least one lethal lesion has been formed is considered inactivated. A potentially lethal damage induced by radiation can undergo three main processes: (i) it can spontaneously repair at a rate $r$; (ii) it can spontaneously become a lethal damage at a rate $a$; or (iii) it can combine with another potentially lethal lesion to form a lethal lesion at rate $b$.
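The three competing channels (i)--(iii) define a continuous-time Markov jump process on the pair of potentially lethal and lethal lesion counts. As a minimal illustration of this point of view, the process can be simulated for a single domain with a standard Gillespie algorithm; the rate values and initial lesion count below are illustrative placeholders, not parameters taken from the text.

```python
import random

def simulate_domain(x0, a=0.1, b=0.01, r=1.0, seed=0):
    """Gillespie simulation of one domain.

    State: x potentially lethal lesions, y lethal lesions. Channels:
      repair:      x -> x - 1            at rate r*x        (process (i))
      conversion:  x -> x - 1, y -> y+1  at rate a*x        (process (ii))
      pairwise:    x -> x - 2, y -> y+1  at rate b*x*(x-1)  (process (iii))
    With no further irradiation x can only decrease, so x = 0 is absorbing.
    """
    rng = random.Random(seed)
    x, y, t = x0, 0, 0.0
    while x > 0:
        w_r, w_a, w_b = r * x, a * x, b * x * (x - 1)
        w_tot = w_r + w_a + w_b
        t += rng.expovariate(w_tot)      # waiting time to the next event
        u = rng.random() * w_tot         # pick a channel proportionally to its rate
        if u < w_r:
            x -= 1                       # repair
        elif u < w_r + w_a:
            x, y = x - 1, y + 1          # spontaneous conversion to lethal
        else:
            x, y = x - 2, y + 1          # pairwise interaction
    return y, t

lethal, t_end = simulate_domain(x0=20)
```

Averaging the outcome over many seeds recovers the mean-field behaviour of the deterministic MKM equations, while the run-to-run fluctuations are precisely what the master-equation approach is designed to capture.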
Starting from some probabilistic assumptions on lesion formation, we will derive a master equation that describes the time evolution of the joint probability density function of DNA lesions, both lethal and potentially lethal. The first moment of the solution will be shown to agree with the standard MKM driving equations. The main goal of this study is to overcome the Poissonian assumption on lethal lesions.
In the present work, we will further generalize the MME in two main directions. In particular, besides the damage kinetic mechanisms (i), (ii) and (iii) introduced above, we will additionally consider that: (iv) either a lethal or sub--lethal damage can be formed randomly due to the effect of the ionizing radiation at a rate $\dot{d}$; and (v) lethal lesions can move inside the cell nucleus. Case (iv) represents DNA damage formation resulting from a continuous irradiation field. In fact, together with standard lesion interactions, we will also take into account random jumps in the number of lethal and sub--lethal lesions caused by the stochastic nature of energy deposition.
Case (v), instead, accounts for the fact that we allow lesions to move between adjacent domains. Because the GSM$^2$ model considers pair--wise interactions of potentially lethal lesions, the domain size plays a crucial role. In fact, a domain that is too big implies that lesions created far away from each other can interact to form a lethal lesion. On the other hand, a domain that is too small results in a lower number of lesions per domain, so that the probability of double events can be underestimated. In the limiting case, as the domain size approaches zero, most domains contain a single lesion and interactions cannot occur \cite{Isa,Hel}. To minimize the model dependence on the domain size, we will allow interactions between lesions belonging both to the same domain and to different domains \cite{Smi}.
In summary, we will introduce a general master equation that models the joint probability distribution of DNA lethal and potentially lethal lesions inside a cell nucleus. The derived master equation will consider, besides potentially lethal lesion repair and removal due to either spontaneous death or pair--wise interaction, also the stochastic effect of energy deposition due to ionizing radiation and lesion movements between adjacent domains, providing a global description of the cell nucleus as a whole. To validate GSM$^2$, we will consider microdosimetric energy spectra obtained from Geant4 simulations \cite{G4}. We will show how different assumptions on the probability distribution of the number of damages, as well as on the model parameters, lead to significant deviations from the Poisson distribution assumed by all existing models, including the MKM. We will further compute the survival probability and compare it to the classical \textit{linear--quadratic} (LQ) model \cite{Bod,McM}.
The innovations presented in this work are several. We will develop a fully probabilistic description of the DNA damage kinetics. In particular, the joint probability distribution of the number of sub-lethal and lethal lesions will be modelled. We will further generalize the model to include inter--domain movements and continuous damage formation due to protracted dose. The resulting \textit{master equation} solution will provide the true probability distribution without any \textit{a priori} assumption on the density function, allowing the computation of several biological endpoints. The proposed approach will be able to fully describe the stochastic nature of energy deposition both in time and space, improving on the existing models where the energy deposition is averaged over both the whole cell nucleus and the cell population. In doing so, we will be able to reproduce several behaviours referred to in the literature as \textit{non--Poissonian effects}, which cannot be predicted by the MKM and its variants and are typically included in the models with ad hoc corrections \cite{Kas,Sat,Haw2,Haw3}.
Because of the flexibility and generality of GSM$^2$, analytical solutions, both for the probability density function and for the resulting survival curve, are not easy to derive. Therefore, the present study is intended as a first step of a systematic investigation of the stochastic nature of energy deposition and how it influences lesion formation. In particular, a further investigation will focus on the long--time behaviour of the \textit{master equation} and the resulting survival curve. Furthermore, the principles used in the current approach will be used to develop a fully stochastic model of inter-cellular damage formation optimized for improved radiation field characterization via a novel hybrid detector for microdosimetry \cite{Mis}.
With GSM$^2$ and its future developments, we aim to shed new light on non--Poissonian effects, obtaining a deeper mechanistic understanding that will allow us to model them more accurately.
The present paper is structured as follows: Section \ref{SEC:MKM} recalls the basic assumptions and formulation of the MKM model and its variants \cite{Sat,Ina,Ina2,Haw3,Haw2}. Section \ref{SEC:ME} then introduces the main \textit{master equation} describing the probability distributions of lethal lesions. Subsection \ref{SEC:InitD} shows in detail how microdosimetric spectra can be used to extract the energy deposition. Subsections \ref{SEC:Split}-\ref{SEC:Vox} introduce the above-mentioned generalizations of the \textit{master equation} to account for split doses and domain interconnection. Connections of the current model to the standard MKM are presented in Subsection \ref{SEC:Conn}. Further, the long--time behaviour and the survival probability resulting from GSM$^2$ are presented in Section \ref{SEC:Surv}. Finally, Section \ref{SEC:Num} presents some numerical examples aimed at highlighting specific aspects of the governing \textit{master equation}.
\section{Fundamentals on the Microdosimetric Kinetic Model and related non--Poissonian generalizations}\label{SEC:MKM}
The Microdosimetric Kinetic Model (MKM) is based on the following assumptions:
\begin{enumerate}
\item the cell nucleus can be divided into $N_d$ independent domains;
\item radiation can create two different kinds of DNA damage, referred to as type $I$ and $II$;
\item type $II$ lesions cannot be repaired, and for this reason will also be called lethal lesions. On the contrary, type $I$ lesions, also called sublethal, can either be repaired or evolve into a lethal lesion, either by spontaneous death or by interaction with another sublethal lesion;
\item the number of type $I$ and $II$ lesions in a single domain $d$ is proportional to the specific energy $z$ delivered by radiation to the site;
\item cell death occurs if at least one domain suffers at least one lethal lesion.
\end{enumerate}
In the described setting, lethal lesions represent clustered double-strand breaks that cannot be repaired whereas sublethal lesions are double-strand breaks that can be repaired.
Denoting by $\bar{x}_{d,z_d}$ and $\bar{y}_{d,z_d}$ the average numbers of type $I$ (sub--lethal) and type $II$ (lethal) lesions, respectively, induced in the domain $d$ that received a specific energy $z_d$, the following set of coupled ODEs is satisfied
\begin{equation}\label{EQN:LQM}
\begin{cases}
\frac{d}{dt} \bar{y}_{d,z_d}(t) = a \bar{x}_{d,z_d} + b \bar{x}_{d,z_d}^2\,,\\
\frac{d}{dt} \bar{x}_{d,z_d}(t) = - (a+r) \bar{x}_{d,z_d} - 2 b \bar{x}_{d,z_d}^2\,.\\
\end{cases}
\end{equation}
Assuming further that $(a+r) \bar{x}_{d,z_d} >> 2b \bar{x}_{d,z_d}^2$, equation \eqref{EQN:LQM} can be simplified as
\begin{equation}\label{EQN:LQM2}
\begin{cases}
\frac{d}{dt} \bar{y}_{d,z_d}(t) = a \bar{x}_{d,z_d} + b \bar{x}_{d,z_d}^2\,,\\
\frac{d}{dt} \bar{x}_{d,z_d}(t) = - (a+r) \bar{x}_{d,z_d}\,.\\
\end{cases}
\end{equation}
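As a quick numerical check of the simplified system \eqref{EQN:LQM2}, one can integrate it with a forward-Euler scheme and compare the long-time lethal-lesion yield with the closed form obtained by inserting $\bar{x}(t) = \bar{x}(0)\,e^{-(a+r)t}$ into the first equation and integrating over time; the parameter values below are illustrative only.

```python
def mkm_lethal_limit(x0, a, b, r, dt=1e-3, t_max=30.0):
    """Forward-Euler integration of the simplified MKM system:
         dy/dt =  a*x + b*x**2
         dx/dt = -(a + r)*x
    Returns y(t_max), approximating the t -> infinity limit."""
    x, y = x0, 0.0
    for _ in range(int(t_max / dt)):
        y += (a * x + b * x * x) * dt
        x -= (a + r) * x * dt
    return y

a, b, r, x0 = 0.2, 0.05, 1.0, 10.0
numeric = mkm_lethal_limit(x0, a, b, r)
# closed form: integral of a*x0*exp(-(a+r)t) + b*x0**2*exp(-2(a+r)t) over [0, inf)
closed = a * x0 / (a + r) + b * x0**2 / (2 * (a + r))
```

The two values agree up to the discretization error of the Euler scheme, mirroring the linear and quadratic terms in $z_d$ that appear in the survival formula below.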
For ease of notation, we will omit the subscript $(d,z_d)$ and write $\bar{x}:=\bar{x}_{d,z_d}$ and $\bar{y}:=\bar{y}_{d,z_d}$.
One of the main goals of the MKM model, is to predict the survival probability of cell nuclei when exposed to ionizing radiation, whose quality is described with a microdosimetry approach. In order to achieve this result, an additional assumption to those listed above must be made:
\begin{enumerate}[resume]
\item lethal lesions follow a Poissonian distribution.
\end{enumerate}
Under the latter assumption, the probability $S_{d,z_d}$ that a domain $d$ survives as $t \to \infty$ when receiving the specific energy $z_d$, can be computed as the probability that the random outcome of a Poisson random variable is null. Therefore, $S_{d,z_d}$ is given by
\begin{equation}\label{EQN:SurvMKMD}
S_{d,z_d} = e^{-\lim_{t \to \infty} \bar{y}_{d,z_d}(t)}\,.
\end{equation}
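Equation \eqref{EQN:SurvMKMD} is simply the probability that a Poisson random variable with mean $\lim_{t\to\infty}\bar{y}_{d,z_d}(t)$ takes the value zero, as the following short check illustrates (the mean value used here is arbitrary):

```python
import math

def poisson_pmf(k, lam):
    """P(N = k) for N ~ Poisson(lam)."""
    return math.exp(-lam) * lam**k / math.factorial(k)

ybar = 2.5                        # illustrative mean number of lethal lesions
survival = poisson_pmf(0, ybar)   # equals exp(-ybar), matching the equation above
```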
The explicit computation \cite{Haw,Man} shows that the number of lethal lesions as $t \to \infty$ can be expressed as
\begin{equation}\label{EQN:Lim1}
\lim_{t \to \infty} \bar{y}_{d,z_d}(t) = \left (\lambda + \frac{a \kappa}{a+r}\right )z_d + \frac{b \kappa^2}{2(a+r)} z_d^2\,.
\end{equation}
Combining equations \eqref{EQN:Lim1} and \eqref{EQN:SurvMKMD}, we obtain
\[
S_{d,z_d} = e^{-A z_d - B z_d^2}\,,
\]
with $A$ and $B$ some suitable constants independent of $d$ and $z_d$.
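Reading off the coefficients of $z_d$ and $z_d^2$ in \eqref{EQN:Lim1}, these constants are explicitly

```latex
\[
A = \lambda + \frac{a \kappa}{a+r}\,, \qquad B = \frac{b \kappa^2}{2(a+r)}\,.
\]
```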
The survival probability \eqref{EQN:SurvMKMD} can be extended to the whole cell nucleus ($S_n$) by averaging it over all domains as
\begin{equation}\label{EQN:SurvMKMC}
\begin{split}
S_{n,z_n} &:= \exp \left (-\sum_{d=1}^{N_d} \langle \lim_{t \to \infty} \bar{y}_{d,z_d}(t) \rangle \right ) \,.
\end{split}
\end{equation}
At last, by averaging $S_{n,z_n}$ over the entire cell population, the overall cell survival can be calculated as:
\begin{equation}\label{EQN:SurvMKMPFin}
S = \exp\left (-\alpha D - \beta D^2\right )\,,
\end{equation}
where $D$ is the macroscopic dose delivered to the entire cell population.
Details on how the survival function $S$ was derived can be found in \cite{Haw,Haw2,Haw3,Kas}.
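For intuition, the standard linear--quadratic survival law $S = e^{-\alpha D - \beta D^2}$ can be evaluated in a few lines; the sketch below (illustrative Python, with hypothetical $\alpha$ and $\beta$ values) shows that log--survival is a downward parabola in the dose.

```python
import numpy as np

alpha, beta = 0.2, 0.05        # illustrative LQ parameters (Gy^-1, Gy^-2)
D = np.linspace(0.0, 8.0, 81)  # macroscopic dose in Gy
S = np.exp(-alpha * D - beta * D**2)
# On a log scale the LQ survival curve is a downward parabola in D:
logS = np.log(S)
```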
Several generalizations \cite{HawIna2,HawIna,Ina,Ina2,Haw2,Sat,Kas} have been proposed to take into account effects due to a deviation of the lethal lesion behavior from a Poisson distribution. All of them try to correct the survival probability \eqref{EQN:SurvMKMPFin} by introducing a correction term based on overkilling effects. The overkilling effect occurs when a single particle deposits much more energy than is required to kill a cell \cite{Cha}, so that fewer cells are killed per unit of absorbed dose. The typical survival correction is of the form \cite{Haw2}
\[
S = \exp\left ( -(\alpha_0 + f(\bar{z}_d,\bar{z}_n)\beta)D - \beta D^2\right )\,,
\]
where $f(\bar{z}_d,\bar{z}_n)$ is a suitable correction term that depends on both the energy deposition on the single domain ($\bar{z}_d$) and on the cell nucleus ($\bar{z}_n$). An alternative form is given by \cite{Kas}
\[
S = \exp\left ( -(\alpha_0 + \bar{z}_d^* \beta) D - \beta D^2\right )\,,
\]
where $\bar{z}_d^*$ is a term that accounts for the overkilling effects.
We refer to \cite{Bel} for a comprehensive review of the biophysical models of DNA damage based on microdosimetric quantities.
It is worth highlighting that all corrections so far proposed for non--Poissonian effects rely on ad hoc terms derived from empirical considerations. The final goal of this study, instead, is to obtain analogous corrections based on physical considerations stemming from the stochastic nature of energy deposition, \cite{Loa}.
\section{The Generalized Stochastic Microdosimetric Model GSM$^2$}\label{SEC:ME}
As a part of this study, we investigated how the models described in Section \ref{SEC:MKM} could be developed to rely on the whole probability distribution rather than simply on its mean value. In fact, all proposed generalizations of the MKM consider deterministic driving equations for predicting the number of lethal and sub-lethal lesions. Non-Poissonian effects are often proposed as correction terms added to the survival fraction predicted by the MKM, with no formal mathematical derivation and mainly based on empirical evaluations.
The MKM formulation is based on the probability distribution of inducing a damage when a specific energy $z$ is deposited. Once the survival for a given $z$ is computed, the specific energy is averaged over the whole cell population to yield the overall expected survival probability. To the best of our knowledge, there is no systematic investigation that aims at capturing the true stochasticity of both energy deposition and lesion formation.
The main goal of the present work is thus to generalize microdosimetric based models in order to describe the full probability distribution of lethal and sub-lethal lesions. We will take advantage of assumptions $1-5$ described in Section \ref{SEC:MKM}. Regarding assumption $4$, the MKM assumes that the initial distribution of lethal lesions, given an energy deposition $z$, follows a Poisson law. We will relax this assumption by allowing a general initial distribution, which makes it possible to fully describe the stochastic nature of energy deposition. This point will be treated in detail in Section \ref{SEC:InitD}.
An additional remark on the importance of the initial distribution is necessary to fully understand the implication of the generalization we will carry out in this study. The stochasticity of energy deposition in a microscopic volume is the basic foundation of microdosimetry, and assuming every probability distribution to be Poissonian is a restrictive assumption that limits the model application.
In order to capture the real stochastic nature of energy deposition and related DNA damage formation we will provide a probabilistic reformulation of equation \eqref{EQN:LQM}. We denote by $\left (Y(t),X(t)\right )$ the system state at time $t$, where $X$ and $Y$ are two $\mathbb{N}_0-$valued random variables representing the number of lethal and sub--lethal lesions, respectively. We will consider a standard complete filtered probability space $\left (\Omega,\mathcal{F},\left (\mathcal{F}_t\right )_{t \geq 0},\mathbb{P}\right )$ that satisfies the usual assumptions of right--continuity and saturation by $\mathbb{P}-$null sets.
Let us consider two different sets $\mathcal{X}$ and $\mathcal{Y}$ denoting the number of type $II$ and type $I$ lesions, respectively. The heuristic interpretation of the coefficients in equation \eqref{EQN:LQM} is that $a$ is the rate at which a lesion of type $II$ becomes a lesion of type $I$, $r$ is the rate at which a lesion of type $II$ recovers and goes to the set $\emptyset$ (i.e. that of the healthy cells), whereas $b$ is the rate at which two lesions interact to become a single type $I$ lesion. These considerations can be mathematically expressed as
\begin{equation}\label{EQN:React}
\begin{split}
& X \xrightarrow{a} Y\,,\\
& X \xrightarrow{r} \emptyset\,,\\
& X + X \xrightarrow{b} Y\,.\\
\end{split}
\end{equation}
Thus, at a given time $t$, the probability of observing $x$ lesions of type $II$ and $y$ of type $I$ is
\[
p(t,y,x) = \mathbb{P}\left (\left (Y(t),X(t)\right ) = \left (y,x\right )\right )\,.
\]
Also,
\[
\begin{split}
&p_{t_0,y_0,x_0}(t,y,x) := p(t,y,x|t_0,y_0,x_0) =\mathbb{P}\left (\left .\left (Y(t),X(t)\right ) = \left (y,x\right )\right |\left (Y(t_0),X(t_0)\right ) = \left (y_0,x_0\right )\right )\\
\end{split}
\]
is the probability conditioned to the fact that at $t=t_0$ there were $x_0$ and $y_0$ sub--lethal and lethal lesions, respectively.
To determine the governing master equation for the above probability density $p(t,y,x)$, we need to account for all possible system changes in the infinitesimal time interval $dt$.
Thus, the following scenarios may happen:
\begin{description}\label{DES:react}
\item[(i)] at time $t$ we have exactly $(y,x)$ lesions and they remain unchanged with probability $1- ((a+r) x + b x(x-1))dt$, namely
\[
\begin{split}
&\mathbb{P}\left (\left .\left (Y(t+dt),X(t+dt)\right ) = \left (y,x\right )\right |\left (Y(t),X(t)\right ) = \left (y,x\right )\right ) =\\
&= 1- ((a+r) x + b x(x-1))dt + O(dt^2)\,;
\end{split}
\]
\item[(ii)] at time $t$ we have exactly $(y,x+1)$ lesions, and one lesion recovers with rate $(x+1)r dt$, namely
\[
\begin{split}
&\mathbb{P}\left (\left .\left (Y(t+dt),X(t+dt)\right ) = \left (y,x\right )\right |\left (Y(t),X(t)\right ) = \left (y,x+1\right )\right )= (x+1)r dt + O(dt^2)\,;
\end{split}
\]
\item[(iii)] at time $t$ we have exactly $(y-1,x+1)$ lesions, and one type $II$ lesion becomes of type $I$ with a rate $(x+1) a dt$, namely
\[
\begin{split}
&\mathbb{P}\left (\left .\left (Y(t+dt),X(t+dt)\right ) = \left (y,x\right )\right |\left (Y(t),X(t)\right ) = \left (y-1,x+1\right )\right ) = (x+1) a dt + O(dt^2)\,;
\end{split}
\]
\item[(iv)] at time $t$ we have exactly $(y-1,x+2)$ lesions, and two type $II$ lesions become one type $I$ with a rate $(x+2)(x+1)b dt$, namely
\[
\begin{split}
&\mathbb{P}\left (\left .\left (Y(t+dt),X(t+dt)\right ) = \left (y,x\right )\right |\left (Y(t),X(t)\right ) = \left (y-1,x+2\right )\right ) = (x+2)(x+1)b dt + O(dt^2)\,;
\end{split}
\]
\end{description}
Grouping the equations derived in \ref{DES:react} we obtain
\[
\begin{split}
p(t+dt,y,x) &=p(t,y,x)\left (1- ((a+r) x + b x(x-1))dt + O(dt^2)\right ) + \\
&+p(t,y,x+1)\left ((x+1)r dt + O(dt^2)\right ) + \\
&+p(t,y-1,x+1)\left ((x+1) a dt + O(dt^2)\right ) +\\
&+ p(t,y-1,x+2)\left ((x+2)(x+1)b dt + O(dt^2)\right )\,,
\end{split}
\]
Rearranging the above relation and taking the limit as $dt \to 0$, we eventually obtain the \textit{microdosimetric master equation} (MME)
\begin{equation}\label{EQN:Master}
\begin{split}
\partial_t p(t,y,x) &= - \left ((a+r) x + b x(x-1)\right )p(t,y,x)+ (x+1)r p(t,y,x+1) + \\
&+(x+1)a p(t,y-1,x+1) +(x+2)(x+1)b p(t,y-1,x+2)\,,
\end{split}
\end{equation}
where above $\partial_t$ denotes the partial derivative with respect to the first argument of $p(t,y,x)$, that is the time variable. Equation \eqref{EQN:Master} must be equipped with a suitable initial condition $p(0,y,x) = p_0(y,x)$.
We remark that the above derived MME arises solely from the probabilistic assumptions regarding lesion formation.
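Because the state space reachable from a small initial number of sub--lethal lesions is finite, the MME can also be integrated directly. The following sketch (illustrative Python; the rates and the initial lesion number are hypothetical, not fitted to data) builds the right--hand side of equation \eqref{EQN:Master} on such a truncated grid and checks that all the probability mass ends up on the absorbing states with $x=0$.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Direct integration of the MME on the finite state space reachable from a
# small initial state; rates are illustrative, not fitted to data.
a, r, b = 0.1, 1.0, 0.01
X0 = 5                          # initial sub-lethal lesions (hypothetical)
NX = NY = X0 + 1                # x only decreases; y can grow to at most X0

def mme_rhs(t, p_flat):
    p = p_flat.reshape(NY, NX)
    dp = np.zeros_like(p)
    for y in range(NY):
        for x in range(NX):
            # outflow: conversion (a), repair (r), pairwise interaction (b)
            dp[y, x] -= ((a + r) * x + b * x * (x - 1)) * p[y, x]
            if x + 1 < NX:
                dp[y, x] += r * (x + 1) * p[y, x + 1]          # repair
                if y >= 1:
                    dp[y, x] += a * (x + 1) * p[y - 1, x + 1]  # conversion
            if x + 2 < NX and y >= 1:
                dp[y, x] += b * (x + 2) * (x + 1) * p[y - 1, x + 2]
    return dp.ravel()

p0 = np.zeros((NY, NX))
p0[0, X0] = 1.0                 # deterministic initial condition p(0, 0, X0) = 1
sol = solve_ivp(mme_rhs, (0.0, 50.0), p0.ravel(), rtol=1e-8, atol=1e-10)
p_inf = sol.y[:, -1].reshape(NY, NX)
survival = p_inf[0, 0]          # long-time value of p(t, 0, 0)
```

The quantity $p(t,0,0)$ obtained at large $t$ is precisely the survival probability $S^Y$ discussed in Section \ref{SEC:Surv}.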
The MME \eqref{EQN:Master} can be written compactly as
\begin{equation}\label{EQN:Master2}
\begin{split}
\partial_t p(t,y,x) &= \left (E^{-1,2} -1\right )\left [x(x-1) b p(t,y,x)\right ] + \left (E^{-1,1} -1\right )\left [x a p(t,y,x)\right ]+\left (E^{0,1} -1\right )\left [x r p(t,y,x)\right ]=\\
&= \mathcal{E}^{-1,2}\left [x(x-1) b p(t,y,x)\right ] + \mathcal{E}^{-1,1} \left [x a p(t,y,x)\right ]+ \mathcal{E}^{0,1} \left [x r p(t,y,x)\right ]\,,\\
\end{split}
\end{equation}
where we have introduced the creation operators defined as
\[
\begin{split}
\mathcal{E}^{i,j} \left [f(t,y,x)\right ] &:= \left (E^{i,j}-1\right )\left [f(t,y,x)\right ] :=f(t,y+i,x+j) - f(t,y,x)\,.
\end{split}
\]
\subsection{Connection with the MKM}\label{SEC:Conn}
The present section aims at showing that, under certain assumptions, the mean values of the MME solution satisfy the kinetic equations \eqref{EQN:LQM}. In what follows, $\mathbb{E}$ denotes the mean value of a random variable, defined as
\[
\begin{split}
\bar{x}(t) &:= \mathbb{E}[X(t)] = \sum_{x,y \geq 0} x p(t,y,x)\,,\\
\bar{y}(t) &:= \mathbb{E}[Y(t)] = \sum_{x,y \geq 0} y p(t,y,x)\,.
\end{split}
\]
Note that, for a general function $f$, the following holds true
\begin{equation}\label{EQN:CreaOp}
\begin{split}
\sum_{x,y \geq 0} x \mathcal{E}^{i,j} \left [f(y,x) p(t,y,x)\right ] &= -j\, \mathbb{E}\left [f(Y,X)\right ]\,, \\
\sum_{x,y \geq 0} y \mathcal{E}^{i,j} \left [f(y,x) p(t,y,x)\right ] &= -i\, \mathbb{E}\left [f(Y,X)\right ]\,.
\end{split}
\end{equation}
Therefore, multiplying the MME \eqref{EQN:Master2} by $x$ and $y$, summing over all states and using \eqref{EQN:CreaOp}, we obtain
\begin{equation}\label{EQN:MeanM}
\begin{cases}
\frac{d}{dt}\mathbb{E}[Y(t)] &= b \mathbb{E}[X(t)(X(t)-1)] + a \mathbb{E}[X(t)] \,,\\
\frac{d}{dt}\mathbb{E}[X(t)] &= - 2 b \mathbb{E}[X(t)(X(t)-1)] - (a+r) \mathbb{E}[X(t)] \,.\\
\end{cases}
\end{equation}
Equations \eqref{EQN:MeanM} are still not of the form of equations \eqref{EQN:LQM2}; in particular, they depend on the second order moment $\mathbb{E}[X(t)(X(t)-1)]$. An explicit computation would show that a kinetic equation for this second order moment depends in turn on higher moments, leading to an infinite set of coupled ODEs. To solve the impasse we make a \textit{mean--field} assumption, that is we assume that
\[
\mathbb{E}[X(t)(X(t)-1)] \sim \mathbb{E}[X(t)]^2\,.
\]
Under the above \textit{mean--field assumption}, equations \eqref{EQN:MeanM} become
\begin{equation}\label{EQN:MeanM2}
\begin{cases}
\frac{d}{dt}\bar{y}(t) &= b \bar{x}^2(t) + a \bar{x}(t) \,,\\
\frac{d}{dt}\bar{x}(t) &= - 2 b \bar{x}^2(t) - (a+r) \bar{x}(t) \,,\\
\end{cases}
\end{equation}
and the original kinetic equations are in turn recovered.
A quick remark on the \textit{mean--field assumption} is in order. If $x$ is large enough, the approximation $\mathbb{E}[X(t)(X(t)-1)] \sim \mathbb{E}[X^2(t)]$ holds; the \textit{mean--field assumption} then amounts to $\mathbb{E}[X^2(t)] - \mathbb{E}[X(t)]^2 \sim 0$. The last term is nothing but the variance, and the variance of a random variable vanishes if and only if the variable is in fact deterministic. Hence, if the \textit{mean--field assumption} is realistic, the realized number of lesions does not differ much from its mean value, and the mean value is all we need to know.
On the contrary, if there is evidence that the mean value is not a realistic approximation of the realized number of lesions, the \textit{mean--field assumption} must be considered unrealistic, and knowledge of the full probability distribution is essential for a complete understanding of the system.
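As a simple numerical check of this discussion, the kinetic equations \eqref{EQN:LQM} can be integrated directly; the sketch below (Python, with purely illustrative rates and initial conditions) confirms that $\bar{x}(t) \to 0$ while $\bar{y}(t)$ saturates at a finite value.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative (not fitted) rates for the kinetic equations (EQN:LQM).
a, r, b = 0.1, 1.0, 0.01

def rhs(t, s):
    y, x = s
    dy = a * x + b * x**2             # lethal lesions are only created
    dx = -(a + r) * x - 2 * b * x**2  # sub-lethal lesions are consumed
    return [dy, dx]

sol = solve_ivp(rhs, (0.0, 50.0), [0.0, 20.0])  # start: y = 0, x = 20 lesions
y_inf, x_inf = sol.y[0, -1], sol.y[1, -1]
```

This deterministic trajectory is exactly the red mean-value curve of Figure \ref{FIG:CompM}, around which the stochastic paths fluctuate.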
\subsection{On the initial distribution for the number of lethal and sub-lethal lesions}\label{SEC:InitD}
One of the main advantages of the proposed model is that the distribution of DNA damages induced by an ionizing radiation $z$ does not need to be chosen as Poissonian. In the present section we will show how the number of induced lesions can be evaluated starting from microdosimetric spectra.
Let $f_{1;d}(z)$ be the single--event distribution of energy deposition on a domain $d$, see \cite{Ros}. The single--event distribution $f_{1;d}$ can be either computed numerically via a Monte Carlo toolkit or obtained from experimental microdosimetric measurements.
The full probability distribution of an energy deposition thus depends on the number of events that deposit energy on the cell nucleus. Given a cell nucleus composed of $N_d$ domains, the number of events $\nu$ depositing energy follows a Poisson distribution of mean $\lambda_n := \frac{z_n}{z_F}$, where $z_n$ is the mean energy deposition on the nucleus, i.e.
\[
z_n = \int_0^\infty z f(z|z_n) dz\,,
\]
and $z_F$ the first moment of the single event distribution $f_{1;d}$ defined as
\begin{equation}\label{EQN:ZF}
z_F := \int_0^\infty z f_{1;d}(z) dz\,.
\end{equation}
Then, assuming a Poisson-distributed number of events on the domain, the energy deposition distribution is given by
\[
f(z|z_n) := \sum_{\nu = 0}^\infty \frac{e^{- \frac{z_n}{z_F}}}{\nu!}\left (\frac{z_n}{z_F}\right )^\nu f_{\nu;d}(z)\,,
\]
where $f_{\nu;d}(z)$ is the energy deposition distribution resulting from $\nu$ depositions.
In particular, given that a domain $d$ registers $\nu$ energy deposition events, the resulting distribution can be computed by convolving the single--event distribution $\nu$ times, see \cite{Ros,Sat}. Therefore, the imparted energy $z$ has distribution $f_{\nu;d}$, computed iteratively as
\[
\begin{split}
f_{2;d}(z) &:= \int_0^\infty f_{1;d}(\bar{z})f_{1;d}(z-\bar{z})d\bar{z}\,,\\
&\dots\,,\\
f_{\nu;d}(z) &:= \int_0^\infty f_{1;d}(\bar{z})f_{\nu-1;d}(z-\bar{z})d\bar{z}\,.\\
\end{split}
\]
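The iterative convolution above is straightforward to implement on a discretized grid; the following sketch (Python) uses a toy single--event spectrum in place of a measured $f_{1;d}$ and verifies that the mean of the $\nu$--fold convolution is $\nu$ times the single--event frequency mean.

```python
import numpy as np

# Toy single-event spectrum f_{1;d} on a uniform z-grid (hypothetical shape,
# standing in for a measured or simulated microdosimetric spectrum).
dz = 0.02
z = np.arange(0.0, 40.0, dz)
f1 = z * np.exp(-z)
f1 /= f1.sum() * dz            # normalize to a probability density

def f_nu(nu):
    """nu-fold convolution f_{nu;d}, computed iteratively as in the text."""
    f = f1.copy()
    for _ in range(nu - 1):
        f = np.convolve(f, f1)[: len(z)] * dz
    return f

f3 = f_nu(3)
m1 = (z * f1).sum() * dz       # mean of the single-event spectrum
m3 = (z * f3).sum() * dz       # mean after three deposition events
```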
For a given energy deposition $z$, the induced number of lesions is a random variable. The standard assumption is that the distribution of $X$ given $z$ is Poisson with mean $\kappa z$; analogously, the number of lethal lesions $Y$ given $z$ is Poisson with mean $\lambda z$. Given the high flexibility of the proposed approach, the number of induced lesions given an energy deposition $z$ can instead be any random variable. It is worth stressing that the chosen distribution may vary with LET.
In the following general treatment we will denote by $p^X_z(x|\kappa z)$, resp. $p^Y_z(y|\lambda z)$, the initial random distribution for the number of sub--lethal, resp. lethal, lesions given an energy deposition $z$. We remark again that both $p^X_z(x|\kappa z)$ and $p^Y_z(y|\lambda z)$ can be any probability distribution. Specific relevant examples will be considered in the numerical implementation.
Putting all the above reasoning together, the MME \eqref{EQN:Master2} reads
\begin{equation}\label{EQN:MMEInitial}
\begin{cases}
\partial_t p(t,y,x) &= \mathcal{E}^{-1,2}\left [x(x-1) b p(t,y,x)\right ] + \mathcal{E}^{-1,1} \left [x a p(t,y,x)\right ]+ \mathcal{E}^{0,1} \left [x r p(t,y,x)\right ]\,,\\
p(0,y,x) &= p^X_0(x)p^Y_0(y)\,,
\end{cases}
\end{equation}
where the initial distribution is obtained as
\begin{equation}\label{EQN:InitialIntegral}
\begin{split}
p^X_0(x) &= \int_0^\infty p^X_z(x|\kappa z) f(z|z_n) dz \,,\\
p^Y_0(y) &= \int_0^\infty p^Y_z(y|\lambda z) f(z|z_n) dz\,.\\
\end{split}
\end{equation}
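A minimal numerical illustration of equation \eqref{EQN:InitialIntegral}: sampling $z$ from a toy specific--energy distribution and then $X$ from a Poisson law of mean $\kappa z$ yields a marginal initial distribution that is over--dispersed, hence not Poisson. All numbers below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
kappa = 2.0                    # illustrative lesions per unit specific energy

# Toy stand-in for f(z|z_n): an exponential specific-energy distribution.
z = rng.exponential(1.0, size=200_000)

# Mixture as in (EQN:InitialIntegral): X | z ~ Poisson(kappa * z),
# marginalized over z by Monte Carlo.
x = rng.poisson(kappa * z)
mean, var = x.mean(), x.var()
# A Poisson law mixed over a random z is over-dispersed: Var[X] > E[X],
# so the marginal initial distribution is not Poisson.
```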
\subsection{The protracted dose case for the Generalized Stochastic Microdosimetric Model}\label{SEC:Split}
The MME can be further generalized to consider \textit{protracted dose} irradiation. We refer to \textit{protracted dose} as a continuous dose delivery in time. By contrast, an instantaneous, impulse--like dose delivery at a fixed time is called \textit{acute dose irradiation}, whereas a series of acute irradiations at prescribed timesteps is referred to as \textit{split dose irradiation}. Existing models fail to properly describe protracted dose, being unable to fully capture the stochasticity inherent to energy deposition. Usually, strong assumptions are used to treat protracted dose \cite{Haw2}, or a split dose is used to approximate a continuous dose delivery \cite{Ina}. Nonetheless, these models cannot fully predict experimental data \cite{Ina}.
The generalization of the GSM$^2$ master equation \eqref{EQN:Master2} to a continuous dose irradiation is not trivial. In fact, at a random time $t$ the number of lesions, either lethal or sub--lethal, exhibits an upward jump of a random size that depends on the energy deposition $z$, which we recall is itself a random variable.
More formally, the possible interactions now become
\[
\begin{split}
& X \xrightarrow{a} Y\,,\\
& X \xrightarrow{r} \emptyset\,,\\
& X + X \xrightarrow{b} Y\,,\\
& X \xrightarrow{\dot{d}} X + Z_{\kappa}\,,\\
& Y \xrightarrow{\dot{d}} Y + Z_{\lambda}\,,\\
\end{split}
\]
where $Z_{\kappa}$ and $Z_{\lambda}$ are two random variables with integer--valued distributions $p_0^X$ and $p_0^Y$, respectively, defined as in equation \eqref{EQN:MMEInitial}. The parameter $\dot{d}$ represents the dose rate, see \cite{HawIna,HawIna2}, and is given by $\dot{d} := \frac{z_n}{T_{irr} z_F}$, where $z_F$ is given in equation \eqref{EQN:ZF} and $T_{irr}$ is the total irradiation time. In the infinitesimal interval $dt$, the transition probabilities become:
\begin{description}\label{DES:react2}
\item[(i)]
\[
\begin{split}
&\mathbb{P}\left (\left .\left (Y(t+dt),X(t+dt)\right ) = \left (y,x\right )\right |\left (Y(t),X(t)\right ) = \left (y,x\right )\right )=\\
&= 1 - ((a+r) x + b x(x-1) + \dot{d} (1-p_0^X(0))(1-p_0^Y(0)))dt + O(dt^2)\,;
\end{split}
\]
\item[(ii)]
\[
\begin{split}
&\mathbb{P}\left (\left .\left (Y(t+dt),X(t+dt)\right ) = \left (y,x\right )\right |\left (Y(t),X(t)\right ) = \left (y-i_y,x-i_x\right )\right ) =\\
&= \dot{d} p_0^X(i_x) p_0^Y(i_y) dt + O(dt^2)\,, \quad i_x = 1,\,\dots,x\,,i_y = 1,\,\dots,y\,,
\end{split}
\]
\end{description}
Further, reactions $(ii)$, $(iii)$ and $(iv)$ in Section \ref{SEC:ME} remain valid.
Therefore, a similar analysis to the one carried out in Section \ref{SEC:ME} leads to the following MME
\begin{equation}\label{EQN:Master2FullSplit}
\begin{split}
\partial_t p(t,y,x) &= \left (E^{-1,2} -1\right )\left [x(x-1) b p(t,y,x)\right ] + \left (E^{-1,1} -1\right )\left [x a p(t,y,x)\right ]+\\
&+\left (E^{0,1} -1\right )\left [x r p(t,y,x)\right ]+\left (\sum_{i_x=1}^x \sum_{i_y=1}^y E^{-i_y,-i_x}_{\dot{d}} - (1-p_0^X(0)) (1-p_0^Y(0))\right )\left [\dot{d} p(t,y,x)\right ]= \\
&= \mathcal{E}^{-1,2}\left [x(x-1) b p(t,y,x)\right ] + \mathcal{E}^{-1,1} \left [x a p(t,y,x)\right ]+ \\
&+\mathcal{E}^{0,1} \left [x r p(t,y,x)\right ]+\mathcal{E}^{-y,-x}_{\dot{d}}\left [ \dot{d} p(t,y,x)\right ] \,.
\end{split}
\end{equation}
The operator in the last line of the right--hand side of equation \eqref{EQN:Master2FullSplit} has been defined as
\[
\begin{split}
&\mathcal{E}^{-y,-x}_{\dot{d}} f(t,y,x) := \left (\sum_{i_x=1}^x \sum_{i_y=1}^y E^{-i_y,-i_x}_{\dot{d}} - (1-p_0^X(0))(1-p_0^Y(0))\right )f(t,y,x) =\\
&=\sum_{i_x=1}^x \sum_{i_y=1}^y p_0^X(i_x) p_0^Y(i_y)\, f(t,y-i_y,x-i_x) - (1-p_0^X(0))(1-p_0^Y(0))\, f(t,y,x)\,.
\end{split}
\]
The protracted dose is assumed to be delivered up to a finite time $T_{irr}<\infty$, beyond which no irradiation is considered and the system evolves according to \eqref{EQN:Master2}.
\subsection{The diffusive cell nucleus model for GSM$^2$}\label{SEC:Vox}
In Section \ref{SEC:ME}, we investigated the time evolution for lethal and sub--lethal lesions in the cell nucleus.
As we discussed above, one of the major weaknesses of the standard MKM and its extensions is the choice of the cell domains \cite{Smi}. In fact, too small domains translate into a null probability of double events, whereas too big domains imply that distant lesions may combine to produce a lethal lesion. To overcome this problem, the cell nucleus is split into several domains so that the time evolution in each domain can be considered independently. The following treatment aims to overcome these limitations, allowing interaction between domains as well as variability in their shape and size.
In the current Section, we will show how the MME \eqref{EQN:Master2FullSplit} can be extended to include interactions between the domains. In order to keep the treatment as clear as possible, no protracted dose will be considered. The general case of a continuous irradiation can easily be included in the following treatment via arguments analogous to the ones used in Section \ref{SEC:Split}.
Let us consider $N_d$ domains (referred to also as voxels) that can undergo one of the following possible reactions
\begin{equation}\label{EQN:ReactN}
\begin{split}
& X_i \xrightarrow{a} Y_i\,,\quad i=1,\dots,N_d\,,\\
& X_i \xrightarrow{r} \emptyset\,,\quad i=1,\dots,N_d\,,\\
& X_i + X_i \xrightarrow{b} Y_i\,,\quad i=1,\dots,N_d\,.
\end{split}
\end{equation}
A reasoning analogous to the one carried out in Section \ref{SEC:ME} leads to the following MME
\begin{equation}\label{EQN:MasteriN}
\begin{split}
\partial_t p(t,y,x) &= \sum_{i=1}^{N_d} \mathcal{E}^{-1,2}_i\left [x_i(x_i-1) b p(t,y,x)\right ] + \sum_{i=1}^{N_d} \left ( \mathcal{E}^{-1,1}_i \left [x_i a p(t,y,x)\right ]+ \mathcal{E}^{0,1}_i\left [x_i r p(t,y,x)\right ]\right ) \,.
\end{split}
\end{equation}
In equation \eqref{EQN:MasteriN}, the variables $x$ and $y$ are $N_d-$dimensional vectors with $i-$th component given by $x_i$ and $y_i$, representing the number of sub--lethal and lethal lesions, respectively, within the $i-$th domain ($i =1,\,\dots, N_d$).
\begin{Remark}
In order to keep the notation as simple as possible, in equation \eqref{EQN:ReactN} we chose the rates $a$, $b$ and $r$ independent of the domain. Similar results would be obtained with voxel-dependent rates $a_i$, $b_i$ and $r_i$, $i=1,\,\dots,\,N_d$.
\end{Remark}
Empirical evidence shows that lesions, in addition to interacting within the same voxel, may also move to a different voxel. In fact, lesion spatial movement inside a cell has been demonstrated to be significantly larger than the typical voxel size \cite{Schet}. To account for this behaviour, we will add an additional term to the MME \eqref{EQN:MasteriN}.
Besides the reactions considered in equation \eqref{EQN:ReactN}, we now further assume the following
\begin{equation}\label{EQN:ReactV}
\begin{split}
& X_i \xrightarrow{\kappa_{i;j}^X} X_j\,, \quad i,\,j=1,\dots,N_d\,,\\
& Y_i \xrightarrow{\kappa_{i;j}^Y} Y_j\,, \quad i,\,j=1,\dots,N_d\,.\\
\end{split}
\end{equation}
\begin{Remark}
We assumed possible interactions also between non adjacent domains. If the reactions described by equation \eqref{EQN:ReactV} are to be intended as lesion movements inside the cell nucleus, the most reasonable choice for the interaction rates is to set
\[
\kappa_{i;j}^X = \kappa_{i;j}^Y = 0\,,
\]
for $j \not \in \Gamma_i$, where $\Gamma_i$ is the set of domains adjacent to $i$.
\end{Remark}
Following the same process described in Section \ref{SEC:ME}, we obtain the MME
\begin{equation}\label{EQN:MasteriND}
\begin{split}
\partial_t p(t,y,x) &= \sum_{i=1}^{N_d} \mathcal{E}^{-1,2}_i\left [x_i(x_i-1) b p(t,y,x)\right ] + \sum_{i=1}^{N_d}\left ( \mathcal{E}^{-1,1}_i \left [x_i a p(t,y,x)\right ]+ \mathcal{E}^{0,1}_i\left [x_i r p(t,y,x)\right ]\right )+\\
&+ \sum_{i,j=1}^{N_d} {}^X\mathcal{E}^{-1,1}_{i,j} \left [x_i \kappa_{i;j}^X p(t,y,x)\right ] +\sum_{i,j=1}^{N_d} {}^Y\mathcal{E}^{-1,1}_{i,j} \left [y_i \kappa_{i;j}^Y p(t,y,x)\right ]\,,
\end{split}
\end{equation}
where the operators are defined as
\[
\begin{split}
{}^X\mathcal{E}^{-1,1}_{i,j} f(t,y,x) &= \left (E^{0,1}_{i}E^{0,-1}_{j} -1\right )f(t,y,x) \,,\\
{}^Y\mathcal{E}^{-1,1}_{i,j} f(t,y,x) &= \left (E^{1,0}_{i}E^{-1,0}_{j} -1\right )f(t,y,x) \,.\\
\end{split}
\]
The first two lines of equation \eqref{EQN:MasteriND} account for reactions within the same voxel, whereas the last line describes movements between adjacent domains.
Using the same approach for modeling the initial damage distribution (Section \ref{SEC:InitD}) the resulting MME reads
\begin{equation}\label{EQN:MasteriNDDoubleInit}
\begin{cases}
\partial_t p(t,y,x) &= \sum_{i=1}^{N_d} \mathcal{E}^{-1,2}_i\left [x_i(x_i-1) b p(t,y,x)\right ] + \sum_{i=1}^{N_d}\left ( \mathcal{E}^{-1,1}_i \left [x_i a p(t,y,x)\right ]+ \mathcal{E}^{0,1}_i\left [x_i r p(t,y,x)\right ]\right )+\\
&+ \sum_{i,j=1}^{N_d} {}^X\mathcal{E}^{-1,1}_{i,j} \left [x_i \kappa_{i;j}^X p(t,y,x)\right ] +\sum_{i,j=1}^{N_d} {}^Y\mathcal{E}^{-1,1}_{i,j} \left [y_i \kappa_{i;j}^Y p(t,y,x)\right ]\,,\\
p(0,y,x) &= \prod_{i=1}^{N_d} p_{0;i}^X(x_i)p_{0;i}^Y(y_i)\,,
\end{cases}
\end{equation}
where $p_{0;i}^X(x_i)p_{0;i}^Y(y_i)$ denotes the initial distribution for the voxel $i$ as computed in equations \eqref{EQN:MMEInitial}--\eqref{EQN:InitialIntegral}.
\subsection{Survival probability}\label{SEC:Surv}
Cell survival is one of the most relevant biological endpoints in radiobiology and is defined as the probability for a cell to survive radiation exposure, mostly measured by its ability to form clonogens, i.e., to retain its reproductive potential. Taking into account assumption $5$, no lethal lesion must be present in the cell nucleus after a sufficiently long time has passed from the irradiation. An estimate of cell survival can be obtained from the solution to the MME \eqref{EQN:Master2}. In this study, we will focus on a single domain, since the calculation for the entire cell is completely analogous.
The survival probability for the domain $d$, under the assumptions $1$-$5$ introduced above, is defined as
\begin{equation}\label{EQN:Surv}
S^Y := \mathbb{P}\left (\lim_{t \to \infty} Y(t) = 0\right )\,;
\end{equation}
in order to assess the survival probability, the limiting long--time distribution for the MME \eqref{EQN:Master2} must be studied.
From a heuristic perspective, since the number of sub--lethal lesions can only decrease, the points $\{(y,0)\,:\, y \in \mathbb{N}_0\}$ are absorbing states. Furthermore, the system reaches an absorbing state in finite time with probability $1$, converging towards a limiting stationary distribution. By \textit{absorbing state} we mean that once the system reaches a point $(y,0)$, it stays there and no further evolution takes place.
The high generality of the GSM$^2$ model, in particular the fact that no detailed balance is satisfied and no explicit conserved quantities can be obtained, makes a closed form for the limiting distribution not easily computable. For this reason, in the present work the survival probability is computed from the numerical solution of the corresponding master equation as
\[
S^Y = \lim_{t \to \infty} p(t,0,0)\,.
\]
In forthcoming developments, we will study in more detail the survival probability resulting from the proposed GSM$^2$ model and its explicit form. In general, it is worth mentioning that, besides the numerical approach used here and the analytical approach in which the survival probability is computed explicitly, an efficient alternative is to introduce suitable approximations in the driving equation so that a formal expansion of the survival probability can be computed \cite{Gar}.
\section{Numerical implementation}\label{SEC:Num}
To calculate a numerical solution to the MME \eqref{EQN:Master2}, the following steps are performed:
\begin{enumerate}
\item We choose the number $N_d$ of domains in which the cell nucleus is divided. As GSM$^2$ does not rely on any specific assumption for the probability distribution, the domains do not need to be assumed of equal size. For each domain, the \textit{single event} energy deposition distribution $f_{1;d}(z)$ is obtained with Geant4 \cite{G4} simulations.
\item The numbers of lethal and sub-lethal lesions are sampled from the distributions $p^X_0(x)$ and $p^Y_0(y)$ as derived in equation \eqref{EQN:InitialIntegral}. The standard assumption is that $p^X_z$, resp. $p^Y_z$, is a Poisson distribution of mean $\kappa z_d$, resp. $\lambda z_d$. Given the general setting, we will compare the results with an initial Gaussian distribution of different possible variances.
\item Given the initial number of lesions, the evolution paths are simulated via the \textit{stochastic simulation algorithm} (SSA) \cite[Chapter 13]{Wei}.
\item Steps 1-3 are repeated to obtain the Monte Carlo empirical distribution of lethal and sublethal lesions over the cell nucleus;
\item The survival probability in the single domain as well as the cell nucleus are calculated from the empirical distribution obtained in step 4;
\end{enumerate}
The previous steps can be computed independently for each domain if no interaction between domains is assumed; in the case of a dependent-voxel model, the paths for the whole nucleus must instead be estimated simultaneously, at a substantially higher computational cost. It should be noted that developing an efficient simulation algorithm is beyond the aim of the present work; we refer to \cite{Sim} for a review of possible simulation algorithms.
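Step 3 above can be sketched as a standard Gillespie simulation of the reactions \eqref{EQN:React}. In the illustrative Python below the rates are hypothetical and, for simplicity, the initial number of sub--lethal lesions is fixed instead of being sampled as in step 2.

```python
import numpy as np

rng = np.random.default_rng(0)
a, r, b = 0.1, 1.0, 0.01        # illustrative rates, not fitted to data

def ssa_path(x0, y0=0, t_end=50.0):
    """One Gillespie realization of X -> Y (a), X -> 0 (r), X + X -> Y (b)."""
    t, x, y = 0.0, x0, y0
    while t < t_end and x > 0:
        rates = np.array([a * x, r * x, b * x * (x - 1)], dtype=float)
        total = rates.sum()
        t += rng.exponential(1.0 / total)   # waiting time to next reaction
        reaction = rng.choice(3, p=rates / total)
        if reaction == 0:       # sub-lethal lesion converts to lethal
            x, y = x - 1, y + 1
        elif reaction == 1:     # sub-lethal lesion is repaired
            x -= 1
        else:                   # two sub-lethal lesions form one lethal lesion
            x, y = x - 2, y + 1
    return x, y

samples = [ssa_path(10) for _ in range(2000)]
# Empirical survival: fraction of paths ending with no lethal lesion.
surv = sum(1 for _, y in samples if y == 0) / len(samples)
```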
\subsection{The numerical solution}
\begin{figure}[thpb]
\centering
\includegraphics[width=.45\columnwidth]{SubL.png}
\includegraphics[width=.45\columnwidth]{LetL.png}\\
\caption{Sub-lethal lesions (left panel) and lethal lesions (right panel) evolution. GSM$^2$ parameters were set to $r=1$, $a=0.1$ and $b=0.01$. The red line represents the average value.}
\label{FIG:CompM}
\end{figure}
\begin{figure*}[!t]
\begin{tabular}{c|cc}
\rotatebox{90}{$t1$} & \addheight{\includegraphics[width=.45\textwidth]{mt1_b.png}} &
\addheight{\includegraphics[width=.45\textwidth]{mft1_b.png}}\\
\rotatebox{90}{$t2$} & \addheight{\includegraphics[width=.45\textwidth]{mt100_b.png}} &
\addheight{\includegraphics[width=.45\textwidth]{mft100_b.png}}\\
\rotatebox{90}{$t3$} & \addheight{\includegraphics[width=.45\textwidth]{mt150_b.png}} &
\addheight{\includegraphics[width=.45\textwidth]{mft150_b.png}}\\
\end{tabular}
\caption{Master equation solution at time $t=1$ arb. unit (top panel), $t=100$ arb. unit (middle panel) and $t=150$ arb. unit (bottom panel). GSM$^2$ parameters were set to $r=1$, $a=0.2$ and $b=0.1$.}
\label{FIG:Density}
\end{figure*}
\begin{figure*}[!t]
\begin{tabular}{cc}
\addheight{\includegraphics[width=.45\textwidth]{xdose.png}} &
\addheight{\includegraphics[width=.45\textwidth]{ydose.png}}\\
\end{tabular}
\caption{Master equation solution for acute, split and protracted doses of $100 \,Gy$. GSM$^2$ parameters were set to $r=1$, $a=0.2$ and $b=0.1$.}
\label{FIG:Dose}
\end{figure*}
\begin{figure*}[!t]
\begin{center}
\begin{tabular}{c}
\addheight{\includegraphics[width=.65\textwidth]{val1_b.png}} \\
\addheight{\includegraphics[width=.65\textwidth]{val3_b.png}} \\
\addheight{\includegraphics[width=.65\textwidth]{val4_b.png}}\\
\end{tabular}
\end{center}
\caption{Comparison of long--time lethal lesion distributions and Poisson distributions. Top panel: dose=$5$ Gy, $r=1$, $a=0.1$ and $b = 0.01$. Middle panel: dose=$100$ Gy, $r=5$, $a=0.1$ and $b = 0.01$. Bottom panel: dose=$150$ Gy, $r=5$, $a=0.2$ and $b = 0.1$.}
\label{FIG:Param}
\end{figure*}
The present Section is devoted to finding and discussing the numerical solution of the MME derived in Section \ref{SEC:ME}. In particular, the full master equation \eqref{EQN:Master2} is solved via the \textit{stochastic simulation algorithm} (SSA) \cite[Chapter 13]{Wei}, so that the density is estimated with a Monte Carlo simulation. We simulate $10^6$ events and the density function is thus reconstructed empirically.
The goal of this Section is also to highlight how a different setting affects the lesions density distribution. In particular, it will emerge how the density distribution resulting from the corresponding master equation changes for different lesion evolution parameters, initial probabilistic conditions or also irradiation conditions.
To assess the energy deposited on the domain, we used the microdosimetry approach as discussed in Section \ref{SEC:InitD}. With Geant4, we simulated microdosimetric spectra of a 20 MeV/u carbon ion beam traversing a 1.26 cm diameter sphere filled with pure propane gas with a low density ($1.08 \times 10^{-4}$~g/cm$^3$), such that the energy depositions are equivalent to those in 2 $\mu$m of tissue. This geometry reproduces a standard Tissue Equivalent Proportional Counter (TEPC) as used for example in \cite{Mis2}. Specific energies acquired with the TEPC are then converted to the domain size of interest as reported in \cite[Section 2]{Bel}.
The choice to simulate a microdosimeter has been made with the aim of remaining as consistent as possible with real experiments.
In addition, carbon ions have been chosen since existing models fail to predict relevant radiobiological endpoints under high-LET regimes.
In the calculations, we consider high doses, so that multi-event distributions as described in Section \ref{SEC:InitD} are computed for $z_n \gg 1$. This choice is due to the fact that the plotted distributions refer to a single cell nucleus domain and thus, to highlight differences at such a small scale, high doses need to be considered. At lower doses, differences between the MME solutions for a single nucleus domain for different parameters are more difficult to appreciate. Nonetheless, small differences at the domain level can translate into relevant dissimilarities at the macroscopic level.
Figure \ref{FIG:CompM} reports different path realizations for the lethal and sub--lethal evolution; the stochastic paths are also compared to the mean value, which evolves according to the MKM kinetic equations \eqref{EQN:LQM}. The plots indicate that the mean value cannot be representative of the whole distribution of path realizations.
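For reference, a mean-value (kinetic) evolution of this kind can be integrated with a simple explicit Euler scheme. The right-hand sides below are an assumed MKM-like closure (linear repair and conversion terms in $y$, a quadratic pairwise term), written only to illustrate the comparison; they are not copied from equations \eqref{EQN:LQM}.

```python
def mean_field(y0, x0, r=1.0, a=0.2, b=0.1, t_end=150.0, dt=1e-3):
    """Euler integration of illustrative mean-field equations (form assumed):
    dy/dt = -(r + a) y - 2 b y^2   (sub-lethal lesions)
    dx/dt =  a y + b y^2           (lethal lesions)"""
    y, x, t = float(y0), float(x0), 0.0
    while t < t_end:
        dy = -(r + a) * y - 2.0 * b * y * y
        dx = a * y + b * y * y
        y, x, t = y + dt * dy, x + dt * dx, t + dt
    return y, x

y_end, x_end = mean_field(20, 0)
```

Since each lethal lesion consumes at least one sub-lethal lesion in this toy closure, the long-time lethal count stays below the initial sub-lethal number.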
Figure \ref{FIG:Density} shows the master equation solution at different times. The left panels show the contour plots of the joint probability distributions of lethal and sub--lethal damages, together with their marginal distributions. The right panels are 3D representations of the density function solutions. At the starting time $t_1$, there is a high variability in the number of reparable lesions while small fluctuations are present in the number of lethal lesions. At a later time $t_3$, instead, the situation is the exact opposite, with a greater variability in the number of lethal lesions against small fluctuations in the number of sub--lethal lesions.
Figure \ref{FIG:Dose} compares lethal and sub--lethal lesion distributions for different types of irradiation conditions, namely acute dose delivery at the initial time, split doses at uniform time steps, and a protracted dose according to equation \eqref{EQN:Master2FullSplit}. A split dose at uniform times yields a lesion distribution rather similar to that of a fully stochastic protracted irradiation, while the solution differs significantly in the acute dose case. This result is caused by the non-linear effect that double events have on the lesion probability distribution.
The long time distribution of lethal lesions is compared with a Poisson distribution for different parameters and doses in Figure \ref{FIG:Param}. At lower doses and for $b$ negligible with respect to $r$, the MME solution is in fact Poissonian (top panel).
As the dose increases, the MME solution can be non-Poissonian even if $r$ dominates $b$ (middle panel). Finally, for higher doses and higher $b$, the MME solution differs significantly from a Poisson distribution (bottom panel).
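A simple, model-independent check of the Poissonian character of a simulated lethal-lesion sample is the variance-to-mean ratio (dispersion index), which equals one for a Poisson law; values far from one flag the non-Poissonian regimes discussed above. A minimal sketch, tested here on a binomial sample that is close to Poisson:

```python
import random

def dispersion_index(samples):
    """Variance-to-mean ratio of integer samples; equals 1 for a Poisson law."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return var / mean if mean > 0 else float("nan")

# Binomial(1000, 0.01) is close to Poisson(10): the index should be near 1.
rng = random.Random(0)
near_poisson = [sum(rng.random() < 0.01 for _ in range(1000)) for _ in range(500)]
```

Applied to the SSA output, an index significantly different from one indicates that a Poisson assumption for the lethal-lesion marginal is not warranted.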
\subsection{Effect of the initial law on the lethal lesions distribution and cell survival}
\begin{figure*}[!t]
\begin{tabular}{cc}
\addheight{\includegraphics[width=.45\textwidth]{CompX1.png}} &
\addheight{\includegraphics[width=.45\textwidth]{CompYt3.png}}\\
\end{tabular}
\caption{Lethal and sub-lethal lesion distributions depending on the chosen initial distribution at times $t_1=1$ arb. unit and $t_3 = 150$ arb. unit. The initial distributions $p^X_z$ and $p^Y_z$ have been chosen as a Poisson distribution with mean $\mu = \{\lambda,\kappa\}z$ or as a Gaussian distribution with mean $\mu = \{\lambda,\kappa\}z$ and variance $\sigma^2 \in \{0.5\mu,\,1.5\mu\}$. The MME parameters were set to $r=1$, $a=0.2$ and $b=0.1$.}
\label{FIG:Density2}
\end{figure*}
\begin{figure}[thpb]
\centering
\includegraphics[width=.7\columnwidth]{Surv2.png}
\caption{Cell survival function calculated for different initial conditions. The initial distributions $p^X_z$ and $p^Y_z$ have been chosen as a Poisson distribution with mean $\mu = \{\lambda,\kappa\}z$, or as a Gaussian distribution with mean $\mu = \{\lambda,\kappa\}z$ and variance $\sigma^2 \in \{0.5\mu,\,1.5\mu\}$. The MME parameters were set to $r=1$, $a=0.2$ and $b=0.1$.}
\label{FIG:Survival}
\end{figure}
The goal of the present Section is to emphasize the dependence on the initial law of the long--time lethal lesions distribution, showing that the lethal lesions marginal distribution might differ from the Poisson distribution that is typically assumed.
We considered different initial conditions for equation \eqref{EQN:InitialIntegral}. In particular, the following initial distributions were selected for $p^X_z(x|\kappa z)$ and $p^Y_z(y|\lambda z)$: i) a Poisson random variable with mean value $\mu$; and ii) a Gaussian with mean value $\mu$ and variance between 0.5$\mu$ and 1.5$\mu$. The mean value $\mu$ has been set to $\lambda$z for sub--lethal lesions and $\kappa$z for lethal lesions.
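A minimal sampler for the two families of initial laws (Poisson with mean $\mu$; Gaussian with mean $\mu$ and variance between $0.5\mu$ and $1.5\mu$) could look as follows. Rounding and clipping the Gaussian draws to non-negative integers is our illustrative choice, made because lesion numbers are discrete; it is not prescribed by the model.

```python
import math
import random

def sample_initial_lesions(mu, law="poisson", var_factor=1.0, rng=None):
    """Draw a non-negative integer lesion count.

    law="poisson": Poisson(mu), sampled by CDF inversion.
    law="gauss":   Gaussian with mean mu and variance var_factor*mu,
                   rounded and clipped at zero (illustrative choice)."""
    rng = rng or random.Random()
    if law == "poisson":
        u, k, p = rng.random(), 0, math.exp(-mu)
        cdf = p
        while u > cdf and k < 10 * int(mu) + 100:  # cap guards float round-off
            k += 1
            p *= mu / k
            cdf += p
        return k
    sigma = math.sqrt(var_factor * mu)
    return max(0, round(rng.gauss(mu, sigma)))

rng = random.Random(42)
poisson_draws = [sample_initial_lesions(10.0, "poisson", rng=rng) for _ in range(2000)]
narrow_draws = [sample_initial_lesions(10.0, "gauss", 0.5, rng) for _ in range(2000)]
```

With the same mean, the Gaussian law with $\sigma^2 = 0.5\mu$ produces a visibly narrower initial distribution than the Poisson one, which is the effect probed in Figure \ref{FIG:Density2}.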
The results are plotted in Figure \ref{FIG:Density2} and indicate that a more peaked initial distribution corresponds to a more peaked long--time distribution, meaning that the initial value can sharpen or broaden lethal and sub--lethal lesion distributions. This effect has a straightforward consequence on the resulting survival probability shown in Figure \ref{FIG:Survival}.
Survival probability is one of the most used and relevant radiobiological endpoints. Figure \ref{FIG:Survival} highlights how a different initial condition affects the resulting survival curve.
In particular, it is important to notice that, depending on the initial condition, the probability of survival rises or falls in the high-dose region. One of the major flaws of classical models, with particular reference to the linear--quadratic model, is that they significantly underestimate the probability of survival at high doses.
\section{Conclusion}
The present work represents a first step into an advanced and systematic investigation of the stochastic nature of energy deposition by particle beams, with particular focus on how it affects DNA damage. Starting from basic probabilistic assumptions, a \textit{master equation} for the probability distribution of the number of lethal and sub--lethal lesions induced by the irradiation of a cell nucleus has been derived. The new model, called the \textit{Generalized Stochastic Microdosimetric Model} (GSM$^2$), provides a simple and yet fundamental generalization of all existing models for DNA-damage prediction, being able to truly describe the stochastic nature of energy deposition. This advance results in a more general description of DNA-damage formation and time evolution in a cell nucleus for different irradiation scenarios, from which radiobiological outcomes can be assessed.
Most of the existing models assume a Poissonian distribution of lethal damage, ignoring the true space-time stochastic nature of energy deposition.
In order to overcome the limits of this assumption, \textit{ad hoc} corrections, called non-Poissonian corrections in the literature, have been introduced; however, to the best of our knowledge, an extensive survey of the complete stochasticity of the biophysical processes has never been carried out.
This work aims at highlighting how the stochastic nature of energy deposition can lead to different cell survival estimations and how non--Poissonian effects emerge naturally in the general setting developed. Remarkably, GSM$^2$ does not require any \textit{ad hoc} corrections to take into account overkill effects, as is required by all existing models.
A further investigation, deferred to a separate work, will focus on the verification and optimization of the prediction of survival curves for different systems, i.e., radiation types, irradiation conditions and cell lines. In addition, given the general nature of the proposed model, closed-form solutions for the lesion distribution and survival curve are typically difficult to obtain. However, it is fair to say that, due to the several processes involved, approximation methods provide powerful tools to estimate several quantities of interest. Among the most important approximation methods, we mention system size expansions \cite{Kur,VK,Gar,Mel} and the related small--noise asymptotic expansions \cite{Gar}. Both approaches will be investigated in future research to provide accurate estimates of several biological endpoints, such as cell survival.
Further investigation will also be devoted to developing a more efficient numerical implementation of the driving \textit{master equation}.
\section{Acknowledgments}
This work was partially supported by the INFN CSNV projects MoVe-IT and NEPTune.
\cleardoublepage
\bibliographystyle{apalike}
\section{Introduction}
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{figmay_newhi.eps}
\caption{Distribution of the selected fields, numbered as in Table \ref{tabfields}. The distribution is superimposed on the HI 21 cm brightness temperature map \citep{kal05} in the radial velocity interval -100 km/s $\leq v_{LSR} \leq$ 100 km/s. The map is in galactic coordinates, centered on $l=-60^{\circ}$.}
\label{stardistribution}%
\end{figure*}
\begin{figure*}[!ht]
\centering
\includegraphics[width=0.49\textwidth]{star1wide.eps}
\includegraphics[width=0.49\textwidth]{star1narrow.eps}
\caption{Illustration of the NaI doublet/6614 \AA\ DIB global, multi-component analysis, here for the GES star 12574905-6458511 (field 3). The NaI region is shown in the top panels and the DIB region in the bottom panels. The entire fitted spectral interval is shown at left, while the right panels display, enlarged, the NaI-D2 region (top) and the DIB region (bottom).
In each panel the red line shows the stellar spectrum (lower plot) and the fitting residuals (upper plot). The dotted lines are the models: stellar (orange), telluric (green), and interstellar components (blue). The thick blue line is the final adjustment. The radial velocities of the NaI and DIB components are kept linked (see the black vertical line). Velocities are heliocentric.}
\label{globalanalysis}%
\end{figure*}
\begin{table*}
\caption{Selected Fields with FLAMES Observations. Fields 1 to 5 are from GES. Fields 6 and 7 are part of the ESO program 079.B-0662.}
\begin{center}
\begin{tiny}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|}
\hline
Field & GIR. & UVES & $l_c$ & $b_c$ & D$_{min}$ & D$_{max}$ & A0$_{min}$ & A0$_{max}$ & Ang. & Setting(s) & Studied \\
& targ. & targ. & ($^\circ$) & ($^\circ$) & (kpc) & (kpc) & mag & mag & size ($^\circ$) & & DIBs \\
\hline
1 COROT-ANTICENTER & 57 & 7 & 212.9 & -2.0 & 0.1 & 8.6 & 0.0 & 2.5 & 0.4 & UVES5800,HR15N,HR21 & 6283,6614,8620 \\
2 COROT CENTER & 105 & 5 & 37.5 & -7.0 & 0.1 & 16.1 & 0.0 & 2.1 & 0.3 & UVES5800,HR15N,HR21 & 6283,6614 \\
3 NGC4815 & & 13 & 303.6 & -2.1 & 0.9 & 5.0 & 1.0 & 2.5 & 0.1 & UVES5800 & 6283,6614 \\
4 $\gamma$ Vel & & 25 & 262.8 & -7.7 & 0.7 & 2.3 & 0.0 & 1.0 & 0.9 & UVES5800 & 6283,6614 \\
5 OGLE BUL\_SC45 & & 12 & 1.0 & -4.0 & 1.3 & 2.8 & 0.8 & 1.2 & 0.3 & UVES5800 & 6283,6614 \\ \hline
6 OGLE BUL\_SC24(O) & 99 & & 357.3 & -3.6 & 0.5 & 10.0 & 1.6 & 3.1 & 0.4 & HR13 & 6283 \\
7 OGLE BUL\_SC3,4(W) & 106 & & 0.1 & -2.1 & 0.7 & 9.6 & 0.7 & 2.7 & 0.4 & HR13 & 6283 \\
\hline
\end{tabular}
\end{tiny}
\end{center}
\label{tabfields}
\end{table*}
About 500 diffuse interstellar bands (DIBs) have been detected in the optical domain between 4400 and 8600 {\AA} \citep{hobbs2008,2009ApJ...705...32H,mccall2013}, and their number in the infrared and ultra-violet windows is still growing \citep{joblin90,geballe11}. Identifying the carriers of those irregular features that appear in absorption in stellar spectra has been a subject of active research for many years \citep[see reviews by][and references therein]{1995ARA&A..33...19H,sarre2006,2011ApJ...727...33F,camicoxiau}. A lot of effort has been put into extracting the most precise information on the DIBs from high resolution, high signal stellar spectra and deriving their various properties, in particular their fine structure and the way they react to the radiation field (see e.g. \citealt{JD94,krelo95,gala00,tuairisg00,cox2005,welty06,2009ApJ...705...32H,vos11}).
In those spectral studies, DIBs were extracted from hot (early-type) star spectra because of their smooth, easily fitted continua. This introduces a limitation on the number of potential target stars that can be used to study DIBs. In the case of nearby stars, it also favors highly variable conditions in irradiation and in the subsequent DIB carrier destruction or ionization state changes (e.g. \citealt{vos11}).
Recently, progress has been made on the extraction of DIBs from cool (late-type) star spectra, in particular with a method using synthetic stellar models devised by \citet{chen13}. Such a technique has the advantage of enormously increasing the number of potential targets, probing average conditions in the interstellar medium (ISM) far away from the strong radiation field of UV stars, and simultaneously providing some feedback to improve both the synthetic stellar spectrum and the DIB detection (Monreal-Ibero et al., in preparation). Other methods have been applied to cool stars, i.e., using comparisons with unreddened star spectra \citep{2013ApJ...778...86K}, or statistical methods based on principal component analysis (PCA) \citep{zasowski14}.
Independently of the search for their carriers, our goal here is to study how DIBs can be used to trace the ISM at the Galactic scale, both its distribution and its kinematics (see previous works in this direction by \citealt{vanloon,vanloon14,zasowski14}). In particular, DIBs used as an interstellar (IS) tracer may potentially help to build 3D ISM maps by means of inversion methods, similar to the inversion of neutral sodium or extinction data \citep{vergely01,vergely010,2014A&A...561A..91L}. Thanks to the Gaia mission, launched 19 December 2013, parallax distances should become available for a huge number of Milky Way stars, allowing more accurate maps to be built. One of the observational advantages of DIBs over gaseous lines is their spread over a wide wavelength interval (from the optical to the IR), and, more importantly, the absence of saturation for distant or particularly opaque sight-lines.
Another strong advantage over the use of photometric extinction is the derivation of the kinematic information, i.e., the radial velocities of the IS clouds.
All of the individual IS clouds that are present along a line-of-sight (LOS) imprint a specific DIB absorption whose strength and Doppler shift reflect the IS matter content and cloud radial velocity, respectively. This is why measuring DIB equivalent widths in a single-component approach becomes inappropriate when the radial velocity interval spanned by all cloud projected motions is not negligible with respect to the DIB spectral width, i.e., it is not a valid technique for narrow DIBs and/or distant sight-lines. However, the extraction of multi-component DIBs together with their kinematics has rarely been attempted. \cite{cox2005} used the convolution of a template DIB profile and the multi-component KI absorption profile, while \cite{cordiner08} (resp. \citealt{cordiner11}) fitted separately the Milky Way and M31 (resp. M33) DIBs using Gaussian profiles. Here, we present improved fitting methods allowing for multiple DIB components. The methods are fully automated, meaning that no intervention by the user is needed during the series of fits that are launched in a unique run for a large number of spectra. More precisely, no spectral interval selection for continuum fitting is needed, and there is a total absence of manual "guesses" (most profile-fitting methods are only partly automated and require those "manual" steps). Each component has a pre-determined shape derived from high resolution spectra of hot nearby stars. The methods are suitable for any type of stars as long as their stellar parameters have been determined and their synthetic spectra can be computed.
We have applied these new fitting techniques to a series of spectra of cool target stars for which stellar atmospheric parameters and estimated distances have been determined spectroscopically.
Part of the data are from the Gaia-ESO Spectroscopic survey (GES) \citep{2012Msngr.147...25G}, a public spectroscopic survey that started in 2011 and that aims at recording VLT/FLAMES spectra of $\sim$ 100000 stars in our Galaxy down to magnitude 19, systematically covering all the major
components of the Milky Way and the selected open clusters. This survey will provide
a wealth of precise radial velocity and abundance determinations. The other data is part of an earlier program devoted to the study of the inner disk \citep{hill12,hill14}. Deducing properties of the ISM is a \textit{by-product} of these stellar-oriented programs.
Seven FLAMES fields were selected for being widely distributed in galactic longitudes, to probe very different interstellar cloud complexes, and close to the Plane, to ensure significant absorptions. They were chosen totally independently of the primary objectives (i.e., open cluster studies, bulge star properties, etc.) and of the target star properties themselves. We also gave priority to fields with targets widely distributed in distance.
Our goal is (i) to test our interstellar absorption fitting methods, (ii) to study the variation of the DIBs as a function of the distance along the LOS and show the potentiality of the DIBs for 3D mapping purposes, and (iii) to study the DIB-extinction relationship in different regions of the Milky Way (MW) disk.
Section 2 presents the data and some general properties of the selected DIBs. Section 3 describes the spectral analysis method for multi-component DIB extraction and illustrates its application. Section 4 describes the results and the observed DIB properties. In this section we compare the DIB equivalent widths with the estimated extinctions and draw LOS profiles of DIBs in the various directions. Section 5 discusses future improvements and the mapping potentialities.
\section{Data and choice of DIBs}
Of the seven fields, five are GES data.
We complemented the GES data with previously recorded spectra from two fields towards the bulge.
Along with one of the GES LOS, this allows comparisons between DIBs in directions that differ by a few degrees. Overall we tried to probe
a variety of cases to test our methods. All of the selected spectra are characterized by a good signal-to-noise (S/N) ratio, S/N $\gtrsim$ 50, which ensures good results.
Figure \ref{stardistribution} shows the distribution of the fields in the sky, superimposed on a HI 21cm emission map. The projections to the Plane are also shown in Fig \ref{gxmap}. All targets were observed with the FLAMES multi-object spectrograph at the VLT-UT2. We used both GIRAFFE ($R \simeq 17000$) and UVES ($R \simeq 47000$) observations \citep[see][for UVES]{2000SPIE.4008..534D}. The UVES spectra cover the 5822 to 6831 \AA\ spectral range which contains the \textit{classical} NaI (D2-D1 5889.9-5895.9 \AA) IS lines as well as some rather strong DIBs, such as the 6283.8 (hereafter called 6283) and 6613.6 (6614) \AA\ bands. Depending on the observed field, the GIRAFFE observations were made with the H665 (HR15N) setting (spectral range 6444-6816 \AA) which allows study of the 6614 \AA\ DIB at a lower resolution than UVES, and with the H875 (HR21) setting (spectral range 8475-8982 \AA) which includes the 8620.4 (8620) \AA\ DIB (informally known as the \textit{Gaia} DIB, since it is contained in the spectral interval of the Radial Velocity Spectrometer (RVS) on board the satellite). The additional inner disk bulge data was observed with GIRAFFE H13 setting (spectral range 6170-6309 \AA).
The GES UVES and GIRAFFE reduced spectra are issued from the dedicated pipeline \citep{sacco}, while the two OGLE field spectra were reduced by means of our dedicated GIRAFFE tool based on the ESO pipeline.
Table \ref{tabfields} lists the selected fields, the number of targets in each field, the field center coordinates, the observing modes, and the whole range of estimated stellar distances and extinctions (see the next section). The full list of target stars along with their coordinates, estimated extinction and distances can be found in the online Appendix. There are 429 target stars, from which about half (224) have been observed as part of GES. A majority of those GES target stars are within the GES-CoRoT (COnvection ROtation et Transits plan\'etaires) fields (172 stars).
We focus on the 6614 and 6283 \AA\ DIBs that are strong enough to ensure a detection in most targets. When recorded, we also analyzed the shallower 8620 \AA\ DIB.
The 6614 \AA\ DIB
is a widely studied, strong and narrow DIB and has a good correlation with E(B-V) \citep[see][etc]{sonnentrucker97,2011ApJ...727...33F,vos11,2013A&A...555A..25P,2013ApJ...774...72K}.
The broader 6283 \AA\ DIB is a strong band that has also been widely studied and is known to be significantly influenced by the radiation field (\citealt{vos11}). The 8620 \AA\ DIB
is a rather weak band that has recently been studied as part of the RAVE spectroscopic Survey \citep[see][]{2008A&A...488..969M, 2013ApJ...778...86K} and is of particular interest in the frame of Gaia. It seems to be quite well correlated with the reddening, although the number of studies is still limited.
\section{Data analysis}
\subsection{Description of the fitting method}
The principles of the fitting method are essentially the same as in \cite{chen13},
the main difference being that we allow here for multi-component DIBs, and subsequently extract kinematic information.
As the length of the LOS increases, differences in cloud radial velocities may become comparable to or larger than the DIB width, making the use of a multi-component fit necessary.
We model the observed spectrum as the product of a synthetic stellar spectrum ($S_{\lambda}$), a synthetic telluric transmission ($T_{\lambda}$), and a DIB model that is itself the product of several DIB profiles, each one representing one absorbing cloud complex. When the telluric absorption is very weak or negligible, $T_\lambda \simeq 1$. Finally, to take into account the local slope of the unnormalized spectrum, we allow for a continuum that is simply represented by a linear polynomial with A and B as the coefficients. This appears to be sufficient for our limited wavelength interval around each DIB. The model spectrum ($M$) can be therefore written as,
\begin{eqnarray}
M_{\lambda}= S_{\lambda} [V_{star}] \hspace{0.2cm}\times\hspace{0.2cm} T_{\lambda} [V_{tell}]^{\alpha_{tell}} \hspace{0.1cm}\times\nonumber \\
\hspace{0.4cm} \prod_{i}\left(DIB^{i}_{\lambda} \hspace{0.05cm}[vel^{i}]^{\hspace{0.05cm}\alpha^{i}}\right)
\hspace{0.2cm} \times\hspace{0.2cm} ([A]+[B]\times\lambda) \hspace{0.2cm}.
\end{eqnarray}
$V_{star}$ is the stellar radial velocity, $V_{tell}$ is the Earth's motion, and $vel^i$ is the interstellar cloud radial velocity. These various terms are detailed below, as well as the coefficients $\alpha_{tell}$ and $\alpha_{i}$.
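A schematic implementation of the composite model defined above, before convolution with the instrumental profile, might look as follows. The Doppler shifts are applied by resampling each tabulated profile; the function names, the linear-interpolation choice, and the synthetic test profile are ours, not taken from the actual pipeline.

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def shift(wave, flux, v_kms):
    """Doppler-shift a tabulated profile by v_kms and resample it on `wave`."""
    return np.interp(wave, wave * (1.0 + v_kms / C_KMS), flux)

def composite_model(wave, stellar, telluric, dib_profiles,
                    v_star, v_tell, dib_velocities, alpha_tell,
                    alphas, A, B):
    """Sketch of the composite model: stellar x telluric**alpha_tell x
    prod_i(DIB_i**alpha_i) x linear continuum (instrumental convolution
    with G is omitted here)."""
    m = shift(wave, stellar, v_star)
    m *= shift(wave, telluric, v_tell) ** alpha_tell
    for prof, v, a in zip(dib_profiles, dib_velocities, alphas):
        m *= shift(wave, prof, v) ** a
    return m * (A + B * wave)

# synthetic test: flat star/telluric, one Gaussian 6614 A DIB transmittance
wave = np.linspace(6604.0, 6624.0, 400)
dib = 1.0 - 0.2 * np.exp(-0.5 * ((wave - 6614.0) / 1.0) ** 2)
flat = np.ones_like(wave)
m0 = composite_model(wave, flat, flat, [dib], 0.0, 0.0, [0.0], 1.0, [0.0], 1.0, 0.0)
m1 = composite_model(wave, flat, flat, [dib], 0.0, 0.0, [0.0], 1.0, [1.0], 1.0, 0.0)
```

Setting $\alpha^{i}=0$ removes the DIB component entirely, while larger $\alpha^{i}$ deepens the absorption, mirroring the role of the optical-depth ratio in the fit.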
The computation of the stellar model $S_{\lambda}$ requires the preliminary knowledge of the stellar parameters. For each of our target stars, the effective temperature, gravity, metallicity, and micro-turbulence have been previously determined: (i) for the GES targets we use the
stellar parameters jointly determined by the GES team members \citep{smiljanic14,recioblanco14,Lanz2014}; (ii) for the additional archival data, see \citet{hill12}. Based on the stellar parameters, a synthetic stellar model was computed for each target star using an ATLAS 9 model atmosphere and the SYNTHE suite \citep{2005MSAIS...8...14K,sbo2004,sbo2005}. In the case of GES targets, this may yield a synthetic spectrum which is not exactly the same as the one of the synthetic spectral library used in GES. Similarly, inner disk spectra may be slightly different from those used in the first analysis.
However, in both cases the differences should be too small to influence the DIB determinations (see Section 5).
The synthetic telluric transmissions $T_{\lambda}$ were computed by means of the LBLRTM code (Line-By-Line Radiative Transfer Model, \citealt{lblrtm05}), using the molecular database HITRAN (HIgh-resolution TRANsmission molecular absorption; \citealt{hitran2008}). This telluric transmission model is available online through the TAPAS web-based service \citep{tapas}. Telluric lines are strong in the 6283 \AA\ spectral region and negligible for the 6614 \AA\ band. We make use of the same telluric models for the fitting of the neutral sodium lines. The coefficient $\alpha_{tell}$ is proportional to the optical depth of the telluric lines.
The models for the 6614 and 6283 \AA\ bands are empirical profiles that have been previously determined from high signal to noise spectra of nearby stars \citep{2013A&A...555A..25P}.
Since the laboratory wavelengths for the DIBs are currently unknown and their profiles are irregular, the choice of rest wavelengths that correspond to a null Doppler shift of the absorbing matter is somewhat arbitrary. Throughout this work, we use, for these first two DIBs, the wavelength values derived by \cite{hobbs2008}, who cross-calibrated the DIB profiles and interstellar KI absorption lines. We assumed that the rest wavelength corresponds to the deepest point in the profile. Because our model profiles may slightly differ from the \cite{hobbs2008} profiles, a small offset of the order of a few km/s may exist between the rest wavelengths, which we neglect here. On the other hand, it is well established that the 6614 \AA\ DIB has substructures, and that these substructures may slightly vary from one LOS to the other \citep{2002A&A...384..215G}. This results in small changes of the overall profile. In our case, the GIRAFFE and UVES spectral resolutions do not allow these subtle changes to be distinguished. We ignore the profile variability to simplify the modeling. For at least the 6614 \AA\ DIB, it has been shown that in very rare, extreme conditions for the radiation field, the DIB profile may evolve and be characterized by a redward wing \citep{oka13}. We neglect this possibility here, a reasonable assumption since our LOS do not target particularly strong infrared sources.
The model for the 8620 \AA\ DIB is also an empirical model, obtained by averaging DIB profiles from several spectra based on the \cite{chen13} data analysis. For this band the rest wavelength is chosen to be the one defined by \cite{2008A&A...488..969M}.
The three empirical DIB profiles are defined over the $\lambda\lambda$ 6609-6619 \AA, 6263-6303 \AA, and 8612-8628 \AA\ intervals, respectively. Finally, $\alpha^{i}$ is an adjustable coefficient that is the ratio between the optical depth of the absorber that produces the DIB and the optical depth of reference.
The fitting procedure adjusts to the data the convolution of the above product by the instrumental function, here represented by a Gaussian ($G$). During the adjustment of the composite stellar-DIB-telluric model, we allow Doppler shifting of the stellar model by a free quantity $V_{star}$ to take into account the stellar radial velocity, of the telluric transmission model by a free quantity $V_{tell}$ to take into account the Earth's motion, and of the DIB profile $i$ by a radial velocity $vel^{i}$ to take into account the ISM kinematics. We could evidently use the star radial velocity that comes out of the stellar spectrum analysis and is derived over a much wider wavelength range, and we could also make use of the telluric information linked to the observing conditions. However, a cross-correlation operation has actually been integrated in our code to make a first estimate of these offsets, which is convenient for handling any spectroscopic data, and to allow for their fine tuning during the adjustment.
Our derived values actually conform to the expected ones. We allow for changes of the $\alpha_{tell}$ and $\alpha^{i}$ parameters to adjust the telluric lines and DIB strengths, respectively.
The DIB equivalent width (EW) is derived in two different ways:
(i) by using the best fit DIB strength $\alpha^{i}$ and the equivalent width of the DIB model, which provides a first result we refer to as the fitted EW ($EW_f$), or (ii) by measuring the true area of the absorption band with respect to the continuum, which provides a second result that is independent of the DIB model and that we name the continuum-integrated EW ($EW_{ci}$). The $EW_{ci}$ is obtained after subtraction of the other components (stellar and telluric lines) in the normalized spectrum.
In the multiple component case, the EW for each absorbing cloud can be independently derived using the $EW_f$ method. The total amount of intervening matter corresponds to the sum of the fitted EWs of the DIB components. In contrast, the $EW_{ci}$ method does not detect the individual components, but only measures the total absorption.
The spectral interval used for the computation of the DIB EW is the same as the one of \cite{2011ApJ...727...33F}, or, in the case of the 8620 \AA\ DIB, is taken from $-7$ to $+7$ \AA\ around the DIB center.
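In practice, the continuum-integrated EW amounts to integrating $1 - F/F_c$ over the fixed window around the band. A minimal trapezoidal sketch follows; the synthetic Gaussian band used to exercise it is illustrative, not one of our DIB templates.

```python
import numpy as np

def equivalent_width(wave, norm_flux, center, half_window):
    """Continuum-integrated EW (same units as `wave`): trapezoidal
    integral of (1 - normalized flux) over [center - hw, center + hw]."""
    sel = (wave >= center - half_window) & (wave <= center + half_window)
    w, y = wave[sel], 1.0 - norm_flux[sel]
    return float(np.sum(0.5 * (y[:-1] + y[1:]) * np.diff(w)))

# illustrative Gaussian band: depth 0.2, sigma 1 A -> EW ~ 0.2*sqrt(2*pi) A
wave = np.linspace(8600.0, 8640.0, 4001)
flux = 1.0 - 0.2 * np.exp(-0.5 * ((wave - 8620.0) / 1.0) ** 2)
ew = equivalent_width(wave, flux, 8620.0, 7.0)
```

For a Gaussian band of depth $d$ and width $\sigma$, the analytic EW is $d\sigma\sqrt{2\pi}$, which the numerical integral recovers closely over the $\pm 7$ \AA\ window.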
In principle, sky emission lines disappear after background subtraction; however, residuals may remain. The spectral ranges we consider here for the DIB extraction are free of strong sky emission lines, e.g. OI at 6300 \AA\ does not overlap with the 6283 \AA\ DIB. There is an exception in the case of the red wing of the 8620 \AA\ DIB, where emission line residuals may influence the DIB fitting (see next sections). Similarly, strong stellar lines may contain features which are not accounted for by stellar atmosphere models, e.g. circum-stellar H$\alpha$ emission or interstellar permitted and forbidden emission, but they do not overlap with our selected DIBs.
\subsection{Fitting strategies and examples of adjustments}
For all multi-component adjustments, it is necessary to start with initial parameters that are as close as possible to the actual solutions, in order to avoid secondary minima and to converge more rapidly toward the final solution. Here, the initial guesses for the number of required velocity components, their radial velocities and strengths come either from interstellar NaI lines, as in the case of UVES spectra, or, in the absence of any absorption line, from a simplified decomposition of the HI emission spectrum taken from the spectral HI cube in the direction of the target star, as in the case of GIRAFFE spectra. Prior to the use of those guesses, we performed profile-fitting tests without any such initial parameters and compared with the subsequent results. We did not find any negative influence of the guesses, such as biases towards a non-realistic solution; instead, we always found the expected positive effects of fast convergence towards the primary minimum.
\subsubsection{Use of the NaI absorption lines}
The NaI lines are not only used as sources of the first \textit{guesses} of the cloud parameters; they also enter the global analysis of lines and DIBs, i.e. they are measured simultaneously with the DIB components. Their radial velocities are linked so as to remain identical throughout the adjustment, component by component. Such a method is justified by the fact that any NaI line must have a (strong or weak) DIB counterpart: from previous observations, we know that all of the detected DIBs are associated with strong neutral sodium lines. There may be a small Doppler shift between the DIB and the interstellar NaI line center due to a preferential presence of the DIB carriers in a particular phase, e.g. at the cloud periphery or in the core. However, those shifts remain small compared with the DIB widths and we neglect this effect. On the other hand, such a global fitting method is particularly tractable here because the determination of the initial guesses for the parameters can be quite precise, especially if the interstellar lines used are not saturated.
The automated global analysis procedure is developed within the \cite{igor} software environment, which allows multiple data sets to be fitted simultaneously while linking some of their parameters. Initial guess values for the radial velocities of the interstellar NaI components are preliminarily determined from the observed spectrum on the basis of the main absorption peaks.
The sodium lines are modeled by Voigt profiles with three free parameters: opacity, radial velocity, and apparent temperature. In a standard, realistic profile fitting of NaI lines, the apparent temperature (a combination of thermal broadening and turbulence) is constrained to be $T < 10000$ K, since NaI is negligible in warmer gas. Here, however, we are interested only in the first-order kinematics, and neither the actual number of clouds nor the NaI columns need to be known in detail. This is why, in order to avoid having too many interstellar components, we extend the line broadening and allow for a significantly higher apparent temperature ($T < 100000$ K). In turn, we list EWs only and omit the NaI column densities, which are too imprecise. Figure \ref{globalanalysis} shows an illustration of the global analysis of the NaI D2/D1 lines and the 6614 \AA\ DIB. In all cases the fitting results reveal a good agreement between the DIB/NaI radial velocities and the main HI 21 cm velocities. Figure \ref{globalanalysis2} is a similar illustration for the 6283 \AA\ DIB.
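As an illustration of this linked-velocity scheme, the sketch below reconstructs a minimal version of such a global fit in Python. It is hedged: the actual analysis uses the Igor Pro environment and Voigt profiles, whereas here the NaI and DIB components are simplified to Gaussians, only the D2 line is used, and all depths, widths, and synthetic spectra are invented numbers. The key point is that each cloud contributes a single shared radial velocity to both data sets.

```python
# Illustrative sketch (NOT the authors' Igor Pro procedure) of a global fit in
# which NaI and DIB absorption components share their radial velocities.
# Profile shapes are simplified to Gaussians; the paper uses Voigt profiles.
import numpy as np
from scipy.optimize import least_squares

C_KMS = 299792.458
NAI_D2, DIB_6614 = 5889.95, 6613.6   # rest wavelengths in Angstrom (illustrative)

def absorption(wl, center, depth, sigma):
    """Single Gaussian absorption component in a normalized spectrum."""
    return 1.0 - depth * np.exp(-0.5 * ((wl - center) / sigma) ** 2)

def model(params, wl_na, wl_dib, n_comp):
    """params = [v_1..v_n, NaI depths 1..n, DIB depths 1..n, sigma_na, sigma_dib]."""
    v = params[:n_comp]
    d_na = params[n_comp:2 * n_comp]
    d_dib = params[2 * n_comp:3 * n_comp]
    sig_na, sig_dib = params[-2], params[-1]
    na = np.ones_like(wl_na)
    dib = np.ones_like(wl_dib)
    for i in range(n_comp):
        # the SAME velocity v[i] shifts both the NaI line and the DIB
        na *= absorption(wl_na, NAI_D2 * (1.0 + v[i] / C_KMS), d_na[i], sig_na)
        dib *= absorption(wl_dib, DIB_6614 * (1.0 + v[i] / C_KMS), d_dib[i], sig_dib)
    return na, dib

def residuals(params, wl_na, f_na, wl_dib, f_dib, n_comp):
    na, dib = model(params, wl_na, wl_dib, n_comp)
    return np.concatenate([na - f_na, dib - f_dib])

# synthetic two-cloud spectra standing in for real data
wl_na = np.linspace(5889.0, 5891.5, 400)
wl_dib = np.linspace(6612.0, 6615.5, 400)
truth = np.array([12.0, 45.0, 0.5, 0.3, 0.10, 0.06, 0.08, 0.50])
f_na, f_dib = model(truth, wl_na, wl_dib, 2)

guess = [10.0, 40.0, 0.4, 0.2, 0.08, 0.05, 0.10, 0.40]  # e.g. from the NaI peaks
fit = least_squares(residuals, guess, args=(wl_na, f_na, wl_dib, f_dib, 2))
v_fit = sorted(fit.x[:2])
```

On these noiseless synthetic spectra the fit recovers the input cloud velocities; with real data, the NaI-based guesses play the role of `guess`.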
\begin{figure}[h]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, clip=true,width=0.9\linewidth]{star12574905_1e+05_smooth8_maskspectra16283.eps}
\caption{Same as Fig. \ref{globalanalysis} (except for the enlarged figure) for the 6283 \AA\ DIB.}
\label{globalanalysis2}%
\end{figure}
\subsubsection{Use of the HI 21 cm emission profiles}
In the second case, i.e. when no NaI lines are available and HI emission spectra \citep{kal05} are used instead, the fitting scheme is different. Since the HI emission spectrum represents the totality of the IS clouds, both in front of and beyond the target star, a global analysis based on all main HI components is inappropriate. The HI emission spectrum is therefore used to construct a table of velocity guesses ($v_{r_{HI}}$) and to provide upper and lower limits to the velocity range. The DIBs are then fitted independently of the actual HI measurement, using a hierarchical sequence of velocity prior values described below. Another significant difference in this second case is that the initial values of the cloud Doppler shifts are much less precise than in the case of the sodium lines, and the cloud velocity profiles strongly overlap (for the same gas temperature the Doppler width is about 5 times larger than for sodium). Finally, the HI map has a spatial resolution of $\sim 0.6^\circ$, larger than the FLAMES field of view. Still, the emission spectra give an appropriate starting point for the fitting and for the initial parameters of the interstellar cloud components.
However, the 6614 and 8620 \AA\ DIBs have very different widths, and only the 6614 \AA\ DIB is narrow enough that multiple components with velocity differences on the order of 10 km/s or more can be distinguished in an automated way. Figure \ref{multiDIB} shows an example of such a fit of this DIB based on the HI initial guesses. The first adjustment involves a single component and uses as a guess the smallest absolute value of the HI velocities, $v_{r_{HI}}$, which in all cases corresponds to local gas. When the single-component velocity derived from the fit is significantly different from $v_{r_{HI}}$, the second velocity component from the $v_{r_{HI}}$ table is included and a fit with those two prior values is performed, and so on. Using two components gives a significantly better fit (see the red part of the DIB).
The very broad 8620 \AA\ band does not react with enough sensitivity to changes on the order of 10-20 km/s to guide the fit towards multi-velocity solutions, at least for our present dataset. Moreover, many spectra are contaminated by sky emission residuals which make the fitting even more difficult. For those reasons, and after several negative tests, we have chosen to keep the mono-cloud procedure, i.e. we consider only the first step (see Fig. \ref{multiDIB8620}), and the prior is the velocity that corresponds to the smallest absolute value (the local value). Still, the DIB EW is derived with rather good precision, as tests made with one or more components have shown, again owing to the large width of this absorption band. Exceptionally, we use the velocity results from the 6614 \AA\ DIB fitting as the initial guesses of the 8620 \AA\ DIB fitting to avoid artificial effects of the sky emission contamination.
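The hierarchical scheme for the narrow 6614 \AA\ DIB can be sketched as follows. This is an illustrative Python reconstruction, not the production code: the DIB shapes are Gaussian, and all velocities, depths, and widths are invented. A single component is fitted first with the lowest-$|v|$ HI velocity as prior; whenever the fitted velocity drifts far from that prior, the next HI velocity from the table is added and the fit is repeated.

```python
# Hedged sketch of the hierarchical component-adding scheme described above.
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 299792.458
DIB0 = 6613.6  # rest wavelength in Angstrom (illustrative)

def dib_model(wl, *params):
    """Product of Gaussian DIB components; params = (v, depth, sigma) per component."""
    flux = np.ones_like(wl)
    for i in range(0, len(params), 3):
        v, depth, sigma = params[i:i + 3]
        center = DIB0 * (1.0 + v / C_KMS)
        flux *= 1.0 - depth * np.exp(-0.5 * ((wl - center) / sigma) ** 2)
    return flux

def fit_hierarchical(wl, flux, hi_velocities, tol_kms=5.0):
    """Add HI velocity priors one at a time until the first component is stable."""
    comps = []
    for v_prior in sorted(hi_velocities, key=abs):
        comps.append(v_prior)
        p0 = []
        for v in comps:
            p0 += [v, 0.05, 0.4]         # prior velocity + rough depth/width guesses
        popt, _ = curve_fit(dib_model, wl, flux, p0=p0)
        if abs(popt[0] - comps[0]) <= tol_kms:
            return popt.reshape(-1, 3)   # fitted velocity agrees with its prior
    return popt.reshape(-1, 3)

# synthetic spectrum with clouds at 24 and 50 km/s
wl = np.linspace(6612.0, 6616.0, 500)
flux = dib_model(wl, 24.0, 0.08, 0.4, 50.0, 0.05, 0.4)
result = fit_hierarchical(wl, flux, hi_velocities=[24.0, 40.0, 50.0])
```

In this synthetic case the single-component fit lands far from the 24 km/s prior, which triggers the addition of a second component, as in the procedure described above.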
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\linewidth]{initialguessHI.eps}\\
\includegraphics[width=0.99\linewidth]{star2583_665syntstar2583_v0range655666_665nomaskonediffcons}
\includegraphics[width=0.99\linewidth]{star2583_665syntstar2583_v0range655666_665nomaskmultidiffcons}
\caption{Illustration of the multi-component 6614 \AA\ DIB fitting: GIRAFFE field 1, GES target 06441034-0048254. The red line shows the stellar spectrum. The dotted lines are the models: stellar (orange), telluric (green), empirical DIBs (grey). The thick blue line is the optimal model adjustment. The initial guesses for the DIB velocity centroids are 24, 40, 50 km/s and based on the HI spectrum in the same direction (see top plot).
\textit{Middle:} an example of a preliminary adjustment with a unique DIB component. The DIB velocity is found to be $\sim$ 33 km/s. The large difference from the initial guess (24 km/s) demonstrates the need for the introduction of a second cloud component. \textit{Bottom:} an example of a subsequent adjustment with two DIB components. The two fitted velocities are now close to the first two HI emission peaks.}
\label{multiDIB}%
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{star2583_875syntstar2583_v0range857867_875_300414.eps}
\caption{Same as Fig. \ref{multiDIB} (middle), but for the 8620 \AA\ DIB (GES target 06441034-0048254). Note the sky emission residuals in the red wing of the DIB. For this broad DIB a single velocity component is used (see text).}
\label{multiDIB8620}%
\end{figure}
\subsection{Derivation of the DIB equivalent width and error estimate}
As previously discussed, the DIB EW can be derived in two different ways: the EW$_f$ and the EW$_{ci}$. The results and figures presented in this article all correspond to the first method. As noted above, it allows the decomposition into separate components, and it has the additional advantage of being less influenced by potential errors in the computed stellar lines. The reason why both EWs are computed in each case is that their comparison acts as a flag for the quality of the fit and reveals bad quality spectra. For all the data we present, the two EWs are found to agree within the observational and model uncertainties.
The errors on the EW have three distinct sources: errors on the stellar continuum determination, statistical noise, and errors on the stellar model:
$\sigma^2=\sigma_{cont}^2+\sigma_{S/N}^2+\sigma_{stellar}^2$.
The error on the stellar continuum placement is mainly linked to the statistical noise, and both errors are estimated jointly. In order to obtain a first, global estimate of those combined errors, we performed a preliminary study consisting of a series of simulations with varying random noise. For each simulation, we fitted the DIB and then compared all resulting EWs. For a random noise representative of the typical S/N of the spectra (S/N$\simeq$100), we obtained a typical relative error of about 5\% on the EW (more specifically, a deviation of $\sim$5 m\AA\ when the EW is 100 m\AA). This gives an estimate of the contribution of the first two error sources.
Regarding the third error, linked to the stellar model, we already know that the data-model residuals are larger than average for some specific stellar lines and depend mainly on the stellar effective temperature and metallicity \citep[see][]{chen13}. Figures \ref{residual} and \ref{residual2} in the Appendix show the stacked residuals of the DIB fitting for the 6614 \AA\ and 8620 \AA\ bands for $\sim 160$ GIRAFFE stars, and examples of the dependence on the stellar effective temperature. To study the order of magnitude of the contribution of the stellar model to the error, we extracted all of the residuals and estimated their maximum level at the center of the DIB. This corresponds to the most contaminated cases, for which a stellar line falls close to the DIB center. We then repeated the random noise simulation with this new variance instead of the measurement noise, and obtained new error estimates on the order of 13\% and 15\%, respectively, for the two DIBs. We estimate that this gives a realistic estimate of the maximum total error from the three sources.
Although our final estimate for individual spectra is based on another method, described below, this range of errors linked to the signal and the model illustrates the gain in precision we can expect in the future from improved stellar models.
Applying the above method to each individual target would be too time consuming. Instead, we use a different approximation. For the first two errors, we use the formulation $\sigma_{S/N+cont}=\sigma_{S/N} \times \frac{\Delta\lambda}{\sqrt{N}}$, where $\Delta \lambda$ is the width of the DIB and $N$ is the number of points/pixels covering this width. The signal-to-noise ratio is estimated for each spectrum from a linear fit in a clean spectral area. Secondly, to obtain the third error, $\sigma_{stellar}$, we performed two consecutive fits, without and then with masking of the strong stellar lines that fall in the DIB interval. The number of masked lines depends on the DIB and on the stellar radial velocity (it varies between one and three lines). The difference between the two calculated EWs (when stellar lines are not masked and when they are masked) gives an estimate of $\sigma_{stellar}$: $\sigma_{stellar}=\Delta EW_f = EW_f - EW_{f,masked}$. Finally, the total error is $\sigma^2 = \sigma_{S/N+cont}^2 + \sigma_{stellar}^2$. This method gives errors in agreement with those from the preliminary study described above.
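The per-spectrum error budget described above amounts to a quadratic sum, sketched below with illustrative placeholder numbers; the function and argument names are our own, not part of any pipeline.

```python
# Minimal sketch of the per-spectrum EW error budget described in the text.
# All numerical inputs below are illustrative placeholders, not measured values.
import math

def ew_error(snr, dib_width_aa, n_pixels, ew_unmasked, ew_masked):
    """Combine the noise/continuum term and the stellar-model term in quadrature."""
    # sigma_{S/N+cont} = sigma_{S/N} * Delta_lambda / sqrt(N), in Angstrom
    sigma_sn_cont = (1.0 / snr) * dib_width_aa / math.sqrt(n_pixels)
    # sigma_stellar = |EW_f - EW_{f,masked}| from the masked/unmasked fits
    sigma_stellar = abs(ew_unmasked - ew_masked)
    return math.sqrt(sigma_sn_cont ** 2 + sigma_stellar ** 2)

# illustrative numbers: S/N = 100, a 1 A wide DIB sampled by 20 pixels,
# and a 5 mA change of EW_f when strong stellar lines are masked
err = ew_error(snr=100.0, dib_width_aa=1.0, n_pixels=20,
               ew_unmasked=0.100, ew_masked=0.095)
```

With these placeholder inputs, the stellar-model term dominates, consistent with the relative sizes of the error contributions discussed above.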
For the 8620 \AA\ DIB there is an additional complication, because the red wing of the DIB region is sometimes contaminated by sky emission residuals. Correcting for this emission is beyond the scope of this work, and estimating its effects cannot be done in the same way as for the two other DIBs, because the contamination results in a \textit{bumpy} feature which changes the absorption shape and produces an unrealistic runaway shift from the true DIB radial velocity. Instead, we calculated the error as the sum of four terms: $\sigma^2=\sigma_{cont}^2+\sigma_{S/N}^2+\sigma_{stellar}^2+\sigma_{sky}^2$. The term $\sigma_{sky}$ is obtained by calculating the variance in the region of sky contamination and multiplying by the full width at half maximum (FWHM) of the DIB profile. We plan to incorporate in the future the pixel-by-pixel uncertainties provided by the pipeline.
\subsection{Distance and reddening estimates}
To estimate the distances and extinctions of the GES targets, we used the 2D Bayesian method described in \cite{Babusiaux14} \citep[see also][]{2010MNRAS.407..339B}. All our targets have 2MASS NIR photometry \citep{Cutri03} as well as V magnitudes from different sources: OGLE-II photometry \citep{Udalski02} for the bulge directions, \citet{Deleuil09} for the CoRoT fields, and \citet{Bragaglia14} for the open clusters. Here we use the V-K colour, which is more sensitive to the extinction than the J-K colour used in \citet{Babusiaux14}.
We used the \cite{Bressan12} isochrones (Parsec 1.1) with a step of 0.05 in log(Age) between [6.6, 10.13] and a step of 0.05 dex in [M/H] between [$-2.15$, 0.5]. Each isochrone point $i$, corresponding to a metallicity [M/H]$_i$, age $\tau_i$, and mass $\mathcal{M}_i$, has an associated weight $P(i)$ according to the Initial Mass Function (IMF) $\xi(\mathcal{M})$ and the Star Formation Rate (SFR) $\psi (\tau)$. We used the \cite{Chabrier01} lognormal IMF (integrated over the mass interval between isochrone points) and a constant SFR (considering that our grid is sampled in logAge, this means that the SFR-associated weight is proportional to the age), and we did not introduce any age-metallicity correlation.
We computed the probability of a star with the observed parameters $\tilde{O}$ ($\widetilde{{\mathrm{T_{eff}}}}$,$\widetilde{\log g}$,$\widetilde{{\mathrm{[Fe/H]}}}$,$\tilde{V}$,$\tilde{K}$) to have the physical parameters of the isochrone point $i$ (${\mathrm{T_{eff}}}_i$,$\log g_i$,${\mathrm{[Fe/H]}}_i$, $\tau_i$, $\mathcal{M}_i$, $V_i^0$, $K_i^0$),
\begin{equation}
P(i|\tilde{O}) \propto P(\tilde{O}|i) P(i).
\end{equation}
To compute $P(\tilde{O}|i)$, we assume Gaussian ($\mathcal{N}$) observational errors $\epsilon_O$ on the atmospheric parameters and the magnitudes.
Assuming a distance $d$ and an extinction $A_{0}$ for the isochrone point $i$, we have
\begin{equation}
P(\tilde{O}|i,d,A_{0}) \propto \prod_O \mathcal{N}(\widetilde{O}-O_i,\epsilon_O).
\end{equation}
However, the atmospheric parameters derived from spectroscopy ($\widetilde{{\mathrm{T_{eff}}}}$, $\widetilde{\log g}$, $\widetilde{{\mathrm{[Fe/H]}}}$) are not independent. For the inner disc fields we derived correlation coefficients that we applied in the above equation using a multivariate normal distribution. For the GES UVES parameters, the GES Consortium provides the individual node values, so instead of using only the recommended value we use all the individual node values (in general around 5 nodes provide parameters for the same star), which mimics the correlations we want to introduce on a star-by-star basis. For the GES GIRAFFE parameters, no information about the correlations is available.
The apparent magnitude $m_i$ derived from the isochrone $i$ is a function of the absolute magnitude $M_i^0$, the extinction $A_m$, and the distance $d$:
\begin{equation}
m_i = M_i^0 + 5 \log d -5 + A_m.
\label{eq:Pogson}
\end{equation}
We therefore derived $P(\tilde{O}|i,d,A_{0})$ on a very fine 2D grid of distances $d$ and extinctions $A_{0}$. $A_{0}$ is the absorption at 5500 \AA\ and is roughly equivalent to $A_V$ (e.g. \citealt{CBJ11}).
To derive the extinction in the different photometric bands $A_m$, we used the extinction law $E_\lambda = 10^{-0.4 k_\lambda}$ of \cite{FitzpatrickMassa07}. We used a typical red clump SED $F_\lambda^0$ from \cite{CastelliKurucz03} ATLAS9 models.
With $T_\lambda$ the photometric total instrumental transmission we have
\begin{equation}
A_m = -2.5 \log_{10}\left({\int F_\lambda T_\lambda E_\lambda^{A_{0}} d\lambda \over \int F_\lambda T_\lambda d\lambda}\right).
\end{equation}
To take the non-linearity of the above equation into account, we used a discrete table of $A_m$ as a function of $A_{0}$. No prior on the distance or the extinction is added.
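The band-extinction integral and its tabulation versus $A_0$ can be illustrated numerically as below. This is only a sketch: the SED, passband, and $k_\lambda$ used here are toy placeholders, not the ATLAS9 and \cite{FitzpatrickMassa07} inputs used in the actual computation.

```python
# Illustrative numerical version of the band-extinction integral
# A_m = -2.5 log10( int F T E^A0 dl / int F T dl ), with E = 10^(-0.4 k).
# The SED F, transmission T and k_lambda below are toy placeholders.
import numpy as np

wl = np.linspace(5000.0, 7000.0, 2000)                  # wavelength grid (Angstrom)
F = np.exp(-0.5 * ((wl - 6000.0) / 1500.0) ** 2)        # toy stellar SED
T = np.where(np.abs(wl - 5500.0) < 500.0, 1.0, 0.0)     # toy V-like passband
k = 1.0 + (5500.0 - wl) / 5500.0                        # toy k_lambda, k(5500 A) = 1

def _integrate(y):
    """Trapezoidal integral over the wavelength grid."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(wl)))

def band_extinction(a0):
    """Band-averaged extinction A_m for a monochromatic extinction A_0."""
    e = 10.0 ** (-0.4 * k * a0)
    return -2.5 * np.log10(_integrate(F * T * e) / _integrate(F * T))

# tabulate A_m(A_0) to capture the non-linearity, as described in the text
a0_grid = np.linspace(0.0, 5.0, 11)
a_m_table = np.array([band_extinction(a0) for a0 in a0_grid])
```

The tabulation captures the non-linearity: as $A_0$ grows, the band average is increasingly weighted towards the less-extinguished red edge, so $A_m$ grows slightly more slowly than linearly.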
What we seek is the distance probability $P(d,A_{0}|\tilde{O})$, which we obtain by marginalization over the isochrone points:
\begin{equation}
P(d,A_{0}|\tilde{O}) \propto \sum_i P(\tilde{O}|i,d,A_{0}) P(i).
\end{equation}
Marginalization over the extinction leads to $P(d|\tilde{O})$ and marginalization over the distance leads to $P(A_0|\tilde{O})$. The resulting distance and extinction estimates used hereafter correspond to the mode of the distribution, and the errors correspond to the 68\% highest Bayesian confidence interval (or highest density interval, HDI).
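A schematic numerical sketch of this 2D estimation is given below. It is illustrative only: the two "isochrone" points, the observational errors, and the shortcut $A_V \simeq A_0$ are toy assumptions, and real isochrone grids contain many thousands of points. The likelihood is evaluated on a $(d, A_0)$ grid, summed over the weighted isochrone points, and then marginalized along each axis in turn.

```python
# Schematic sketch of the 2D Bayesian distance/extinction estimate.
# Isochrone values and observational errors are invented toy numbers.
import numpy as np

def log_gauss(x, mu, sig):
    """Log of an (unnormalized-constant-dropped) Gaussian density."""
    return -0.5 * ((x - mu) / sig) ** 2 - np.log(sig)

# toy "isochrone" points: (Teff, logg, [Fe/H], M_V0, prior weight P(i))
iso = [(4750.0, 2.5, 0.0, 0.7, 1.0),
       (4600.0, 2.4, -0.2, 0.9, 0.6)]
# toy observables (value, error)
obs = {"teff": (4700.0, 100.0), "logg": (2.45, 0.15),
       "feh": (-0.05, 0.10), "v": (13.2, 0.03)}

d_grid = np.linspace(0.2, 5.0, 300)     # distance grid (kpc)
a0_grid = np.linspace(0.0, 3.0, 150)    # extinction grid (mag)
D, A0 = np.meshgrid(d_grid, a0_grid, indexing="ij")

post = np.zeros_like(D)
for teff, logg, feh, mv0, weight in iso:
    # apparent magnitude m = M0 + 5 log d - 5 + A_m, taking A_V ~ A0 here
    v_model = mv0 + 5.0 * np.log10(D * 1000.0) - 5.0 + A0
    lnl = (log_gauss(obs["teff"][0], teff, obs["teff"][1])
           + log_gauss(obs["logg"][0], logg, obs["logg"][1])
           + log_gauss(obs["feh"][0], feh, obs["feh"][1])
           + log_gauss(obs["v"][0], v_model, obs["v"][1]))
    post += weight * np.exp(lnl)        # sum_i P(O|i,d,A0) P(i)

p_d = post.sum(axis=1)                  # marginalize over A0 -> P(d | obs)
p_a0 = post.sum(axis=0)                 # marginalize over d  -> P(A0 | obs)
d_mode = d_grid[np.argmax(p_d)]
a0_mode = a0_grid[np.argmax(p_a0)]
```

The modes of `p_d` and `p_a0` play the role of the distance and extinction estimates; the 68\% highest-density intervals would be read off the same marginal distributions.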
\section{Results}
The first subsection discusses the measurements in the two CoRoT fields; the second discusses the measurements in the five other fields, which have fewer targets.
\subsection{CoRoT Fields}
DIBs in these sight-lines were derived following the fitting strategy described above.
All of the measured EWs, uncertainties and NaI/DIB velocities are listed in the Appendix.
For the CoRoT anti-center field, the target stars are located close to the Galactic Plane and are widely distributed in distance (from 0 to 7 kpc from the Sun). This allows us to probe not only the Local Arm, but also the expected crossings of external Galactic arms. As the distance of the target star increases, the LOS intersects more ISM; the EW of the DIB is therefore expected to increase, with abrupt increases corresponding to dense cloud crossings and \textit{plateaus} to intercloud/interarm regions.
Figure \ref{acgraphs} shows the 6614 and 8620 \AA\ DIB strengths as well as the estimated extinction $A_0$ as a function of the target distance. We do not show the 6283 \AA\ band profile due to the very limited distance range of the measurements. The DIB and $A_0$ profiles, i.e. three quantities that are derived fully independently, are in good agreement: all three show a clear increasing trend, as expected for a field of view as narrow as that of FLAMES, and the same global pattern. There is an increase between distances of 0 and 1 kpc, and a second increase beyond 2.5 kpc, up to 6 kpc. These two \textit{ramps} correspond to two distinct interstellar cloud complexes, which we identify as the Local and Perseus arms. The \textit{plateau} from 1 to 2.5 kpc likely corresponds to the gap between the two Galactic arms. In this distance range there are two groups of stars with EWs that differ by about 30\%. They seem to correspond to two different regions within the field of view, i.e. the two groups likely do not intersect the same parts of the densest clouds, which is not surprising since the targets are distributed over $\sim$30 arcmin. Better precision on distances and extinctions, as will be provided by Gaia, may help refine this point.
We note a very discrepant point in the $A_0$-distance curve (marked by a red star in Fig. \ref{acgraphs}, lower panel), with no correspondingly anomalously small DIBs. Interestingly, this target star has seismologic parameters that are in marked disagreement with the spectrophotometric determinations (R. A. Peralta, private communication), and for this reason its distance/extinction determination may be wrong. It is encouraging that our most discrepant result points to such a contradiction.
At large distances, it is not clear whether the strong increase beyond 4 kpc corresponds to the Outer Arm. Its location is in good agreement with a crossing of the internal part of the Outer Arm as it appears in the schematic Galactic map of \cite{2009PASP..121..213C} (see Fig. \ref{gxmap}). In addition, the total reddening E(B-V) from the Planck map \citep{planck13} varies between 0.5 and 0.9 over the field covered by the targets ($\sim$20 x 25 arcmin wide), and the spectrophotometric extinction (or the similar DIB-based extinction) for the most distant stars is found to reach the Planck integrated value (the maximum value is even slightly above the Planck value in the direction of the corresponding target). However, we do not clearly detect a corresponding distinct and strong shift in radial velocity (see the discussion about the kinematics below).
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{ncorotac_ewvsdist.eps}
\caption{Results for target stars in field 1 (CoRoT anti-center). \textbf{6614 \AA\ DIB EW (top), 8620 \AA\ DIB EW (middle) and extinction $A_0$ (bottom) vs. the estimated distance}. Circles show GIRAFFE observations; triangles show UVES observations. Colors in the top panel correspond to the number of IS components used to fit the IS line or band. All nearby targets (D $\leq$ 1 kpc) have only a single IS component or, for three targets, a very weak, negligible second component. Distant targets have more than one velocity component, in agreement with the crossing of at least one external arm (see text). The outlier star 06441428-0057447, marked by (*), has stellar spectroscopic parameters in strong disagreement with the stellar seismology information, which suggests that the distance and extinction are inaccurate for this target.}
\label{acgraphs}
\end{figure}
Figure \ref{diba0corotac} displays, for the three DIBs, their variation with the estimated extinction $A_0$ based on all of the target stars in the field. It can be seen from the figure that the three DIBs are linearly correlated with the extinction. Our three selected DIBs are among those that are reasonably well correlated with extinction in average conditions. However, previous studies based on early-type stars have revealed a strong dispersion around the mean relationship and, in particular, many \textit{outliers} that correspond to bright UV stars. Here we note that there are no equivalent \textit{outliers}, which is probably due to our cool target stars and our integrations over large distances. This corresponds either to a less severely modulated character of the sight-lines or to an ISM that varies in a less extreme way (see the combined results in Sect. 4.3).
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{corotac_ewvsa0.eps}
\caption{6283, 6614, and 8620 \AA\ DIB EWs as a function of the extinction for the CoRoT anti-center field targets.}%
\label{diba0corotac}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.42\linewidth,height=7cm]{dib_vs_dist_AC.eps}
\includegraphics[width=0.57\linewidth]{Layout0.eps}
\caption{\textbf{Evolution of the DIB profile with target distance.} Left: the 6614 \AA\ DIB absorption \textbf{spectral profile} towards stars at increasing distances along the CoRoT anti-center direction (l,b = 213$^{\circ}$, -2$^{\circ}$). \textbf{Shown is the average of the stacked, extracted, normalized absorption spectra sorted by stellar distance (an offset of 0.5 on the y-axis separates consecutive spectra).} The continuum on the blue side of the DIB is affected by the presence of strong stellar lines insufficiently corrected for. The first (top) spectrum corresponds to the first kpc, the last (bottom) spectrum to distances between 4 and 6 kpc. Right: comparison between single- and two-component DIB adjustments for close and distant stars. Distant stars require at least two DIB components separated by more than 20 km.s$^{-1}$ (see text).}
\label{dibevolution}%
\end{figure}
The need for a multi-component analysis in the case of the CoRoT anti-center field and the narrow 6614 \AA\ band is illustrated in Fig. \ref{dibevolution}. For each star we derived the full absorption attributable to the DIB in the following manner: the full profile-fitting (whose results are described below) is performed first, and the fitted continuum and the adjusted stellar spectrum are used to subtract the modeled stellar lines from the normalized spectrum, leaving solely the DIB. Within stellar line residuals, this provides the full DIB absorption independently of its assumed intrinsic shape and the number of components. After having sorted the targets by increasing distance, we averaged the absorption profiles over groups of eight stars each. The resulting profiles for each distance bin are displayed in Fig. \ref{dibevolution} as a function of the heliocentric velocity. It can be seen from the figure that the increase of the DIB depth with distance is accompanied by a significant velocity shift towards higher positive values, as expected from the rotation curves in this direction. The value of the maximum shift, on the order of 20 km/s, is not negligible with respect to the DIB width for a single cloud and calls for a multi-cloud fitting procedure. We show how this need for at least two shifted DIBs depends on the line-of-sight extent by fitting, with one and then two components, the mean profiles obtained from stars located between 1 and 1.3 kpc on the one hand, and from stars located beyond 5 kpc on the other hand. For the most distant stars there is a strong, highly visible discrepancy between the observed profile and the adjustment with a single DIB, while the adjustment with two DIBs separated by about 30 km.s$^{-1}$ is acceptable. For the closer stars the differences between the two adjusted models are smaller and not easily detected by eye.
We have performed several statistical tests of the reliability of the two-DIB model, using standard deviations derived from the continuum outside the DIB, for both narrow and broad spectral intervals. As a matter of fact, as already noticed, the standard deviation varies according to the inclusion or exclusion of the spectral regions that are the most contaminated by stellar line residuals (see the blue part of the spectrum in Fig. \ref{dibevolution}). We also caution that, due to those stellar line residuals, errors on the continuum are not regularly distributed and such statistical tests are approximate; however, they provide useful first-order indications. For the distant stars the reduced chi-square increases by more than a factor of 2 when restricting the model to one component, i.e. the second component is statistically extremely probable, confirming the discrepancy between the observed profile and the single-DIB shape. A second test, based on the Bayesian Information Criterion (BIC), similarly shows that the existence of a second, shifted DIB is extremely likely ($\Delta$BIC is always largely above 10). For the stars located between 1 and 1.3 kpc (middle curve in Fig. \ref{dibevolution}) the reduced chi-square increases by at least 20\%, showing that the measured profile is also very likely broadened, which is confirmed by the BIC test as well. This is not so surprising, as within the Local Arm the velocity dispersion may reach 20 km.s$^{-1}$. For the closest stars (top curve in Fig. \ref{dibevolution}), the stellar residuals in the DIB area amount to half of the DIB itself, and a better correction of those residuals is necessary to reach a firm conclusion, as confirmed by all tests.
Figure \ref{vel6613} shows the velocities of the detected components resulting from the automated fitting following the strategy described in the previous section. For all individual stars, standard deviations including both the measurement uncertainties and the stellar line residuals were estimated from the best adjustments, and new adjustments were performed using these standard deviations. The errors on the free parameters were estimated using the full covariance matrix and take into account all correlations between the parameters. The resulting errors are displayed in Fig. \ref{vel6613}. It can be seen that the resulting DIB velocities fall into two groups, centered on $v_{hel} \simeq 15-32$ and $v_{hel} \simeq 40-55$ km/s (or $v_{LSR} \simeq -2-15$ and $v_{LSR} \simeq 23-38$ km/s, respectively). Velocity results for those targets for which the DIB velocities were determined through global fitting, and are consequently linked to the strong sodium absorptions, are marked by triangles. They are in agreement with the main groups of radial velocities, showing a global agreement between the main HI, NaI and DIB structures. The first velocity group is tightly associated with the first HI peak, which corresponds to the Local Arm. The second group corresponds to the second, or blended second and third, HI components at $\sim$35-45 km/s, which correspond to Perseus. Interestingly, none of our targets requires absorption at around +65 km/s, the heliocentric radial velocity of the reddest strong HI emission peak (see Fig. \ref{vel6613}). It is not clear whether this highest velocity component seen in HI corresponds to the Perseus Arm or to a more distant arm, i.e. the Outer (or Cygnus) Arm. In their synthetic Figure 3, \cite{dame01} attribute to the Perseus Arm a heliocentric velocity interval $v_{hel} = 37-67$ km/s ($v_{lsr} = 20-50$ km/s) in the direction of the CoRoT anti-center field, while lower velocities are predicted by \cite{vallee08}. If the higher HI velocity corresponds to the Outer Arm, then apparently none of our targets lies beyond a significant column of gas/dust belonging to this arm and all the detected IS matter is from Perseus, although (i) the estimated target distances reach at least 4.8 kpc (6.8 kpc being the most probable distance), (ii) there is a strong and coherent increase of DIBs and extinction with distance found from the 6 most distant targets, and (iii) as discussed above, the Planck integrated reddening is on the same order as the reddening towards our most distant targets. In this case the fastest HI arises beyond 5-6 kpc and is too poor in dust to produce significant additional reddening. Conversely, if the faster gas belongs to Perseus, a potential explanation is that the ensemble of distant targets may miss those clouds: HI maps have a lower resolution compared to Planck, the Perseus Arm is highly fragmented, and the distant targets are distributed over $\sim$15 arcmin. If there is a strong inhomogeneity within the field, the path to the distant targets may not cross the higher velocity matter. More data are needed, and more accurate distances should help answer this question.
\begin{figure}[ht]
\centering
\includegraphics[width=0.99\linewidth]{allcompHInewn.eps}
\caption{Comparison between the fitted DIB radial velocities and EWs and the HI 21 cm emission spectra.
Black (respectively red) markers and lines are GIRAFFE results and HI spectra for field 1 (respectively field 2). Error bars on the velocities are based on the full covariance matrix of the fitted parameters. In the case of the narrow 6614 \AA\ DIB (GIRAFFE observations), a significant number of spectra require two velocity components, which very likely correspond to the Local and Perseus arms. Small EWs and large error bars on velocities correspond to marginal results in low signal-to-noise spectra. UVES target results are displayed with triangles \textbf{(yellow and blue for fields 1 and 2, respectively)}. At variance with GIRAFFE, the UVES velocities are linked to the strong sodium lines through global fitting. Their agreement with the velocity structure derived from the GIRAFFE targets shows the link between the NaI and DIB velocities.
}
\label{vel6613}%
\end{figure}
For the CoRoT center field, the target stars are also widely distributed in distance; however, its higher latitude ($b=-7^\circ$) has a strong impact on the results. For this field, the 8620 \AA\ DIB is not extracted, due to significant sky emission line residuals. Figure \ref{COROTC} displays the DIBs and the extinctions as a function of the target distance. Although the targets are distributed over large distances, we do not detect any EW increase (\textit{ramp}) in addition to the one associated with the Local Arm. Instead, the DIB strength appears to form a \textit{plateau}. This shows that the LOS does not intersect inner arms, because the distant target stars are significantly below the Plane. The measured profile implies that most of the absorbing matter is closer than 1.5 kpc.
The relationship between the DIB strength and the extinction is shown in Fig. \ref{COROTC}. Because of the rather small DIB and extinction intervals covered by the targets in this field, all of the data points are clustered. Still, an increasing trend is clearly observed. For this field,
the kinematics is also rather simple (Fig. \ref{vel6613}).
There was no need for more than one IS component; all velocities fall close to each other, in agreement with the peak of the HI emission spectrum at $v_{hel} \simeq -14$ km/s.
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\linewidth]{ncorotc_ewvsdist.eps}
\includegraphics[width=0.5\linewidth]{ncorotc_ewvsa0.eps}
\caption{CoRoT center field. Left: DIB and $A_0$ vs. distance profiles. Right: DIB vs. $A_0$ for the 6283 and 6614 \AA\ DIBs.}
\label{COROTC}%
\end{figure}
\subsection{Fields 3 to 7}
For the field3/NGC\,4815 direction \citep[see][]{Friel,Magrini},
we analysed 14 red clump stars that were observed with UVES. Only six are open cluster members.
We performed the simultaneous fitting of the NaI lines and the 6283 \AA\ DIB (respectively the 6614 \AA\ DIB).
The NaI absorption is characterized by two velocity components, at $v_{hel} \simeq$ -25 and -5 km/s (see Fig. \ref{velcompcluster}), which are well separated, making this direction useful for testing our multi-component technique.
The results are displayed in Fig. \ref{dibdistcluster}.
The extinctions and DIB strengths are of the same order for most stars,
showing that at their distances of about 2 kpc they are located beyond the main, nearby absorber, in agreement with the 3D ISM map \citep{2014A&A...561A..91L}.
The two stars with lower extinction are not cluster members and
must be foreground stars. Their most probable distances are 1.7 and 1.9 kpc, which shows that not all the absorption is local
and pinpoints another absorber
between 1.9 kpc and 2.5 kpc (cluster distance).
The results for the most distant target show that
there is no significant additional IS absorption between 2 and 4 kpc.
The comparison between the DIBs and the estimated extinctions shows they are well correlated (Fig. \ref{dibdistcluster} right).
Radial velocities of the NaI lines and DIBs correspond to two strong peaks in the HI spectrum. Those cloud complexes also appear in the CO survey of \citet{1987ApJ...322..706D} and probably correspond to the Coalsack complex and another dense cloud. In none of the spectra do we detect velocities
above +10 km/s, which implies that
those HI structures at higher velocity are located beyond 4 kpc.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.49\linewidth]{ncluster_ewvsdist.eps}
\includegraphics[width=0.49\linewidth]{ncluster_ewvsa0.eps}
\caption{NGC 4815 field: (left) DIB and $A_0$ distance profiles. Stars identified by \cite{Friel} as cluster members correspond to blue markers, non-members to red. Right: 6283 (top) and 6614 (lower panel) \AA\ DIB EW vs. the estimated extinction $A_0$. The black ``0'' sign indicates the only star with a single DIB velocity; for all other targets the adjustment to the data requires two velocity components.}
\label{dibdistcluster}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=\linewidth]{velcompCLUSTER6283.eps}\\%{velcomp6283cluster.eps}\\
\includegraphics[width=\linewidth]{velcompcluster6614.eps}
\caption{NGC 4815 field: kinematics. Top: the 6283 \AA\ DIB. Bottom: the 6614 \AA\ DIB. The black line represents the HI 21 cm emission spectrum (LAB Survey). The dashed blue lines are an example of the fitted IS NaI lines (here from star cname 12581939-6453533) and triangles are the velocities of the DIB components derived from the global fit for all targets.}
\label{velcompcluster}
\end{center}
\end{figure}
For the $\gamma$ Vel direction (field 4) \citep[see][]{jef09,jef14,spina},
the most significant difference from the other directions is the distribution of targets over a much wider area ($1^{\circ} \times 1^{\circ}$).
As we will see (in Figure \ref{dibdistgam}), this has a strong impact on the star-to-star variability, especially since this region is well known for its complex interstellar density and ionization structure, partly shaped by the strong influence of the Wolf-Rayet (WR) star.
For the 6614 \AA\ DIB, unfortunately, the profile is significantly scattered due to the relatively strong influence of the stellar residuals and the resulting large relative errors.
The HI spectrum presents a strong peak at $v_{hel} \simeq 35$ km/s, a velocity in good agreement with the NaI and DIB absorptions (see Fig. \ref{velcompgam2vel}).
The second component in the HI spectrum, at $v_{hel} \simeq 50$ km/s, is found to be very weak or absent in absorption, whereas the HI component at $\simeq$ 100 km/s is not detected in any of the spectra and corresponds to more distant clouds.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.5\linewidth]{ngamvel_ewvsa0.eps}
\includegraphics[width=0.5\linewidth]{ngamvel_ewvsdist.eps}
\caption{The $\gamma$ Vel field. Left: DIB EW and estimated extinction as a function of target distance. Two groups of stars are present that probe different regions of the foreground cloud. Right: the relation between EWs and extinction. There is a significant scatter, the largest among the 7 fields. Several \textit{outliers} have stronger DIBs compared to the average relation. These departures from linearity are very likely linked to the influence of the Wolf-Rayet star environment.}
\label{dibdistgam}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{velcompGAMVEL6283n.eps}\\%{velcomp6283GAMVEL.eps}\\%{velcomp6284_gamvel.eps}
\includegraphics[width=\linewidth]{velcompgamvel6614new.eps}
\caption{$\gamma$ Vel field: kinematics. Top: the 6283 \AA\ DIB. Bottom: the 6614 \AA\ DIB. Lines and markers are the same as in Fig. \ref{velcompcluster}.}
\label{velcompgam2vel}
\end{center}
\end{figure}
The last three fields point to the Galactic bulge. The first of them corresponds to the commonly used, low-extinction direction at $(l,b)\simeq(1^\circ,-4^\circ)$, Baade's window (BW).
Figure \ref{dibdistbulge} (left) shows the extinction, and the 6283 and 6614 EWs
along this LOS.
As for the previous field, the 6614 \AA\ profile is consistent with the two others, although it is much less precisely defined, because the absorption is weaker and the stellar line residuals have a stronger impact.
Figure \ref{velcompbulge} shows the HI emission spectrum in this Bulge direction, a spectrum characterized by a dominant emission peak at -5 km/s (heliocentric frame). Here again, the DIB velocities that come out of the automated fitting show good agreement with HI, with a dispersion of a few km/s around the central velocity.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.5\linewidth]{nbulge_ewvsdist.eps}
\includegraphics[width=0.5\linewidth]{nbulge_ewvsa0.eps}
\caption{Baade Window direction: DIB vs. distance and $A_0$ vs. distance}
\label{dibdistbulge}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=\linewidth,height=4.cm]{velcompbulge6614new.eps}\\%{velcomp6614_bulge.eps}
\includegraphics[width=\linewidth,height=4.cm]{velcompbulge6283new.eps}
\caption{Baade Window direction: kinematics. All DIB velocities are found to be consistent with the local HI, around $-5$ km/s. The second velocity component allowed by the fitting procedure is found to be unnecessary or negligible. Note that the discrepant data point at +15 km/s for the 6283 \AA\ DIB (lower panel) is due to the effect of a strongly discrepant stellar line and disappears when the fitting is repeated after masking the corresponding region (the total EW is found to be unchanged).}
\label{velcompbulge}
\end{center}
\end{figure}
For the next two fields
(INNERDISK O and W, respectively),
IS absorptions are expected to be confined within a narrow interstellar radial velocity range, which is confirmed by the HI emission spectra, and the DIBs could be analyzed by means of the single-component method. We have checked for several stars that allowing more than one component results in EW values that are fully compatible with those from the single-component method, within our estimated uncertainties.
Figures \ref{dibdistinnerdisko} and \ref{dibdistinnerdiskw} show the radial profiles of the DIB strength and the estimated extinction. Both show a gradual increase.
These profiles agree with those derived by \cite{marshall06} from 2MASS and the Besançon model in adjacent directions. We used A$_{Ks}$/A$_{0}$=0.11 for the conversion. The DIB-extinction correlation is shown in the lower panel, and is compatible with a linear relationship within the measurement and model uncertainties. The Pearson coefficients are found to be 0.55 and 0.76 for INNERDISK O and W, respectively.
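As a minimal illustration, the unit conversion and the error-independent Pearson coefficient used above can be sketched as follows. The helper names are ours, and the EW/extinction values are invented for the example; only the A$_{Ks}$/A$_{0}$ = 0.11 ratio is taken from the text.

```python
import math

A_KS_OVER_A0 = 0.11  # conversion ratio adopted in the text


def aks_to_a0(a_ks):
    """Convert a 2MASS-based extinction A_Ks into a visible extinction A_0."""
    return a_ks / A_KS_OVER_A0


def pearson(xs, ys):
    """Error-independent Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)


# Invented DIB EW (mA) vs. estimated A_0 (mag) values, for illustration only
a0 = [0.5, 1.0, 1.8, 2.4, 3.1]
ew = [250.0, 560.0, 990.0, 1400.0, 1680.0]

print(aks_to_a0(0.22))   # an A_Ks column of 0.22 mag corresponds to A_0 = 2 mag
print(pearson(a0, ew))
```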
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=\linewidth]{ninnerdisk_0.eps}
\caption{6283 DIB EW and estimated extinction as a function of distance (top) and DIB extinction relationship (lower panel) for the OGLE BUL\_SC24 (INNERDISK O) field. We compare our estimated extinction with the profiles from \cite{marshall06} (see text) for \textbf{the closest} directions.}
\label{dibdistinnerdisko}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=\linewidth]{innerdisk_W_all.eps}
\caption{INNERDISK W field: DIB vs. distance, $A_0$ vs. distance, and DIB vs. $A_0$. \textbf{The EW vs. $A_0$ linear relationship (lower panel) has a slope of 567 $\pm$ 7 m\AA\ per magnitude (forcing the intercept to be 0). The Pearson correlation coefficient is 0.76.} }
\label{dibdistinnerdiskw}
\end{center}
\end{figure}
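The zero-intercept slope quoted in the caption above corresponds to a standard least-squares fit with the intercept forced to zero, for which the closed-form solution is $s = \sum x_i y_i / \sum x_i^2$. A minimal sketch (with invented data points, not our measurements) is:

```python
def zero_intercept_slope(a0, ew):
    """Least-squares slope of EW vs. A_0 with the intercept forced to 0.

    Minimizing sum_i (ew_i - s * a0_i)^2 over s gives
    s = sum(x * y) / sum(x * x).
    """
    sxy = sum(x * y for x, y in zip(a0, ew))
    sxx = sum(x * x for x in a0)
    return sxy / sxx


# Invented points lying close to a ~567 mA/mag relation, for illustration
a0 = [0.8, 1.5, 2.2, 3.0]
ew = [460.0, 860.0, 1240.0, 1700.0]

print(zero_intercept_slope(a0, ew))
```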
\subsection{All fields: Correlation with the Extinction}
As discussed in Section 1, a large number of studies have been devoted to the correlation between the DIBs and the extinction. Our results provide an opportunity to study this relation further, with, for the first time, a large selection of DIBs measured in very different regions of the Galaxy.
Figure \ref{allfields} shows the whole set of 6283 and 6614 \AA\ DIB EWs as a function of extinction.
We also display the DIB-extinction relations obtained from previous studies using early-type star data \citep{2013A&A...555A..25P,vos11}. To convert the color excess values E(B-V) listed in the previous works into extinction values A$_{0}$, we have assumed that $A_{0}/E(B-V)= 3.2882 + 0.04397 \times E(B-V)$ in all directions. We have fitted the DIB-extinction relationship both independently of the error bars and taking both errors (in extinction and EW) into account using the orthogonal distance regression (ODR) method \citep{Boggs}.
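For reference, the adopted color-excess-to-extinction conversion can be written as a small helper; the coefficients are exactly those given above, and the function name is ours:

```python
def ebv_to_a0(ebv):
    """Convert a color excess E(B-V) into an extinction A_0 using the
    relation adopted in the text: A_0 / E(B-V) = 3.2882 + 0.04397 * E(B-V)."""
    return ebv * (3.2882 + 0.04397 * ebv)


# Tabulate the conversion for a few color excess values
for ebv in (0.1, 0.5, 1.0, 2.0):
    print(f"E(B-V) = {ebv:.1f}  ->  A_0 = {ebv_to_a0(ebv):.4f}")
```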
We have compared our correlation coefficients with those obtained from previous studies based on early-type target stars, characterized by well known extinctions and excellent spectra. It is remarkable that despite the complexity of the global adjustment and the presence of the stellar lines, the correlation between the 6283 \AA\ DIB and the reddening is found to be tighter, as shown by the error-independent Pearson correlation coefficient of 0.91. Such a value is above most previous determinations, e.g. the Friedman et al (2011) coefficient of 0.82. We believe that our use of late-type stars is the dominant reason for this globally decreased dispersion,
because we avoid the radiation field effects on the DIB carriers that arise around hot stars. Instead, our LOS cross mainly clouds that are far from those radiation sources. Such a conclusion is in agreement with the results of \citet{chen13}.
In the case of the weaker and narrower 6614 DIB, our correlation coefficient is 0.83, i.e. of the same order as the Friedman et al (2011) coefficient. We believe that here the absence of an increased correlation is due to the strong impact of the residual stellar features. This impact is much stronger than on the 6283 DIB because of the smaller DIB width, which is closer to the stellar line width. Better synthetic stellar models should help to reduce this source of uncertainty.
From Fig. \ref{allfields}, we remark that our measured EWs are globally higher than what
has generally been
derived from early-type stars. This is especially clear for the inner disk and CoRoT anti-center fields and may be explained by the fact that we avoid some of the strong DIB suppressions that arise in the environment of UV-bright stars. For other sight-lines, like the one towards NGC 4815, there are no significant differences from the DIB-color excess relations based on early-type stars. Finally, we note that more dispersion seems to be present for the local clouds, which may be explained by the lack of averaging effects over large distances.
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{DIB6283VSA0_newconversion.eps}\\%{DIB6283VSA0odr.eps}
\includegraphics[width=\linewidth]{DIB6614VSA0_newconversion.eps}
\caption{DIB EW vs. $A_0$, all fields.}
\label{allfields}
\end{figure}
\subsection{Spatial distribution}
Figure \ref{gxmap} shows the projections of the target stars onto the Galactic plane, superimposed on a face-on map of the Milky Way. The color represents the 6283 DIB EW when it is measured, or a ``6283-equivalent'' value deduced by simply scaling the 6614 or 8620 DIBs based on the mean 6614/6283 and 8620/6283 ratios. When the LOS are close to the Plane, the EW reflects the spiral arm crossings. This is no longer the case when the latitude increases, as can be seen in the figure, where the latitudes are indicated for each LOS.
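The ``6283-equivalent'' value is a simple mean-ratio rescaling. A sketch (with invented EWs and hypothetical helper names; only the use of mean EW ratios is from the text) is:

```python
def mean_ratio(ref_ews, other_ews):
    """Mean EW(6283)/EW(other) ratio over stars where both DIBs are measured."""
    ratios = [r / o for r, o in zip(ref_ews, other_ews) if o > 0]
    return sum(ratios) / len(ratios)


def equivalent_6283(ew_other, ratio):
    """Scale a 6614 (or 8620) EW to a '6283-equivalent' value."""
    return ew_other * ratio


# Invented EWs (mA) for stars with both the 6283 and 6614 DIBs measured
ew6283 = [530.0, 980.0, 1450.0]
ew6614 = [105.0, 200.0, 290.0]

r = mean_ratio(ew6283, ew6614)
print(r, equivalent_6283(150.0, r))
```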
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=\linewidth]{GraphMW_6284.eps}
\caption{Projections of the target stars onto the face-on map of the Galaxy (image from Churchwell et al. 2009). Units are parsecs, counted from the Sun, with d$_{Sun}$ $\simeq$ 8 kpc. The color coding corresponds to the equivalent of the 6283 \AA\ DIB, either directly measured, or, when not measured, estimated from the other DIB measurements using the average EW(6283)/EW(8620) or EW(6283)/EW(6614) ratios computed from the whole dataset. The black asterisk and small arrow mark the Bulge field 5 (Baade Window direction), for which the X coordinate has been multiplied by 4 to avoid confusion with the other directions. The Galactic latitude of each field is indicated at the extremity of the sightline.}
\label{gxmap}
\end{center}
\end{figure}
\section{Discussion and perspectives}
We have developed and applied automated methods for the extraction of multi-component DIBs from stellar spectra. These methods can be applied to any kind of stellar spectrum, as long as the stellar parameters are known. In particular, they can handle cool-star spectra despite their complex continua.
Here we have presented the results of our automated adjustments applied to ESO/FLAMES high resolution spectra of red giants and F, G, K dwarfs that are part of the GES first data release, together with about the same number of FLAMES spectra from a previous program on the inner disk. We have extracted three DIBs and studied their strengths and velocity shifts.
The comparison between the DIB strengths and the spectro-photometric estimated extinctions reveals a significant correlation and demonstrates that we successfully extract the DIB EWs despite the stellar and telluric absorptions. This correlation suggests that the link between the DIB strength and the extinction does not vary to a large extent among regions of the Galaxy that span galactocentric radii from 2.5 to 13 kpc. From this dataset we find broad consistency between the DIB distance profiles and the estimated locations/extents of the Local and Perseus
arms. We also find agreement between the line-of-sight velocity structure deduced from the HI 21 cm emission spectra and the DIB velocities. This shows that on large scales DIBs may serve as a kinematical tool in the same way IS gaseous lines are commonly used. This opens perspectives for the study of the outermost arms, for which very few measurements of the DIB abundance exist.
Altogether these results show how DIBs can be used to reconstruct the large-scale distribution of the interstellar medium in the Galaxy, and they may be especially useful for distant clouds, for which the DIBs are strong enough but not saturated. In addition, DIBs in more distant clouds (like Perseus) are found to be more tightly correlated with the extinction and less spatially/angularly variable than in the local clouds, which we interpret as an effect of distance-averaging. It also confirms that when using cool (respectively, both cool and distant) stars the effects of a strong radiation field on the DIB abundance and/or ionization are minimized (respectively, both minimized and averaged out), and DIBs follow the extinction more closely. This latter aspect is quantitatively confirmed by the correlation coefficients we obtain when assembling all measurements. In the case of the broad 6283 \AA\ DIB, the Pearson coefficient is 0.91, significantly above previous determinations based on early-type stars, despite the extremely large distances between the probed areas and the presence of the stellar lines. This implies that DIBs can be used as a first prior for the extinction in the absence of any other information. However, we note that for one field, the $\gamma$ Vel cluster direction, there is a more complex relationship between DIBs and extinction estimates. The proximity of the absorber and the presence of bright UV stars are probably responsible for this complexity and the departures from the average conditions. Finally, we note that for two sight-lines the measured DIBs are slightly stronger than previous relationships based on early-type targets would predict. We believe that this is also due to the absence of strong environmental effects. This deserves further
study, as the data presented here are still too limited to permit definitive conclusions.
There is still room for a number of improvements of the synthetic stellar spectrum computations and the subsequent DIB measurements. Here we have used the most probable values of the stellar parameters and did not allow for any uncertainty. Moreover, we did not make any use of individual abundances, although in the case of the GES spectra most of them are determined. One reason is our choice of a homogeneous treatment of both the GES spectra and other data for which the individual abundance measurements were not available. The second reason is our finding, already discussed by \cite{chen13}, that the main source of bad-quality adjustments of the stellar synthetic spectra to the data is clearly linked to specific spectral lines that are systematically under- or over-estimated, or simply missing (see Figs. \ref{residual}, \ref{residual2}), and allowing for small changes of the parameters would not remove those discrepancies. Work is in progress to correct for these systematics, which must be done before fine-tuning the parameters within the GES error bars or making use of individual abundances. This should result in a better accuracy of the DIB strengths and allow us to go further in the kinematical analysis, here still limited to the detection of velocity shifts above 5 to 10 km/s, depending on the DIB and the signal. Improvements of the fitting strategy are also in progress; in particular, the simultaneous adjustment of the NaI lines and all measurable DIBs is expected to provide more reliable results. There is also room for an improved strategy regarding the choice of the number of velocity components. We have explored criteria based on the DIB velocity shift, but other methods should also be elaborated and tested. This will be the subject of further studies based on larger datasets. Finally, residuals from sky emission removal still significantly limit the DIB extraction in some cases, and special attention must be devoted to this problem.
Globally, these results pave the way to three-dimensional mapping of the Galactic ISM based on DIB absorption measurements from current or future stellar spectroscopic surveys. Like all three-dimensional maps, future DIB-based maps are expected to gain considerably in accuracy when Gaia parallax measurements become available. Finally, as illustrated by the CoRoT anti-center line-of-sight, a very promising aspect specific to these DIB spectroscopic measurements is the potential for a detailed comparison, sightline by sightline, between the distance-limited absorption measurements and emission spectra that trace the gas at all distances. This comparison, by means of the radial velocities, should bring interesting information on the location of the poorly known dust-poor distant gas in the outer parts of the spiral arms.
\begin{acknowledgements}
R.L, L.P., and C.B. acknowledge support from the French National Research Agency (ANR) through the STILISM project.
L. S. and S. D. acknowledge the support of Sonderforschungsbereich SFB 881 ``The Milky Way system'' (subprojects A4 and A5) of the German Research Foundation (DFG), and of Project IC120009 ``Millennium Institute of Astrophysics (MAS)'' of Iniciativa Científica Milenio del Ministerio de Economía, Fomento y Turismo de Chile.
This work was partly supported by the European Union FP7 programme through ERC grant number 320360 and by the Leverhulme Trust through grant RPG-2012-541. We acknowledge the support from INAF and the Ministero dell'Istruzione, dell'Universit\`a e della Ricerca (MIUR) in the form of the grant ``Premiale VLT 2012''. The results presented here benefit from discussions held during the Gaia-ESO workshops and conferences supported by the ESF (European Science Foundation) through the GREAT Research Network Programme.
\end{acknowledgements}
\section{Introduction}
An important aspect of ``algebraic'' model theory is to uncover the algebraic consequences for structures such as groups, rings, and fields of abstract model-theoretic properties such as categoricity, stability, simplicity, and so on. Among the first results in the area was Macintyre's theorem \cite{ma} that an infinite field $K$ whose first order theory is $\omega$-stable is algebraically closed. This was subsequently generalized to superstable fields \cite{CS}, and the proof also works for stable fields with ``semiregular generic type''. In all these cases, a suitable rank or dimension is available (for example $U$-rank, or $p$-weight) with which one can compute.
Such methods or tools are on the face of it unavailable in arbitrary stable fields.
Nevertheless a longstanding conjecture is that any infinite field whose first order theory is stable is separably closed.
In the current paper, we discuss this conjecture, but under additional assumptions on the ``weight'' of the generic type of $K$. We discuss later our motivation.
We refer the reader to \cite{pillay-book} for more details on stability theory, and to \cite{poizat-book} for more details on stable groups and fields. In particular, we assume familiarity with the notion of ``generic type'' of a group, and with the fact that a stable field has a unique generic type.
When we talk about a group $G$ or field $K$ as a first order structure, we mean that the group/field is endowed not only with its algebraic operations, but possibly with additional relations. When we say for example that $G$ is stable, we mean $Th(G)$ is stable, and by convention $G$ is assumed to be a ``monster model'', or very saturated model, of its theory.
The cardinalities of the subsets we work with are assumed to be smaller than the degree of saturation. All complete types we mention are finitary, namely types in finitely many free variables.
\begin{definition}
Let ${\EuFrak C}$ be a monster model of a stable theory $T$, $A \subseteq {\EuFrak C}$ and $p \in S(A)$. The weight of $p$ is defined as the supremum of the set of cardinalities $\kappa$ for which there exists a non-forking extension $q=tp(a/B) \in S(B)$ of $p$ and a $B$-independent sequence $(b_i: i <\kappa)$ such that $a \mathop{\mathpalette\Notind{}}_A b_i$ for every $i<\kappa$.
\end{definition}
\begin{definition} We say that a stable group $G$ has weight $\alpha$ (symbolically $w(G)=\alpha$) if every (some) generic type of $G$ has weight $\alpha$.
\end{definition}
The weight of a type $p$ is always bounded by $|T|$. In a superstable theory, all types have finite weight; in fact, any type $p$ of weight $n$ will be domination equivalent to a product of $n$ regular types, and any regular type has weight $1$. As mentioned above, there is a machinery ($p$-simplicity, $p$-semiregularity,...) around working close to a regular type in a stable theory, which enables one to prove results as in the superstable case, under the assumption of the existence of enough, or suitable, regular types. However, in a general stable theory, a type can have finite weight or even weight $1$ without being nonorthogonal to a regular type. An important example for the current paper is a separably closed field $K$ of infinite Ershov invariant (or degree of imperfection). It was proved in \cite[Partie IV]{de} that the generic type of $K$ has weight $1$. However, this generic type is not regular (and is, in fact, orthogonal to all regular types). If $K$ is a superstable group, or more generally, a stable group with semiregular generic, then the additivity properties of $U$-rank or $p$-weight can be used to prove what we might call a weak ``exchange property'' for generics: if $g\in G$ is generic (over some fixed set of parameters), $h\in G$ and $g\in acl(h)$ (where $acl(-)$ refers to algebraic closure in the structure $G$ in the model-theoretic sense), then $h$ is also generic. This key property is behind the proofs that, for example, a superstable field is algebraically closed. But it fails in {\em any} separably closed non-perfect field $K$: if $p$ is the characteristic and $g\in K$ is generic, then $g$ is algebraic over $g^{p}$, but $g^{p}$ is not generic. In particular,
it fails in the case of infinite degree of imperfection, where nevertheless the generic type has weight $1$.
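The algebraicity of $g$ over $g^{p}$ invoked here can be made completely explicit; the following is a standard characteristic-$p$ computation, added for illustration:

```latex
% In characteristic p the "freshman's dream" gives, for any g in K,
%   X^p - g^p = (X - g)^p   in K[X],
% so g is the unique root of X^p - g^p \in \mathbb{F}_p(g^p)[X].
% Hence g is (purely inseparably) algebraic over g^p, and in particular
% g \in acl(g^p); but when K is not perfect, g^p lies in the proper
% definable subfield K^p, which has infinite additive index in K,
% so g^p cannot be generic.
X^{p} - g^{p} \;=\; (X - g)^{p} \quad \text{in } K[X].
```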
Bearing in mind this example, the strongest conjecture we can make about stable fields of finite weight is:
\begin{conjecture}\label{main conjecture}
Every infinite stable field of finite weight is separably closed.
\end{conjecture}
In this paper, we prove the above conjecture for stable fields of weight 1. We also establish some partial results for stable fields of finite weight.
Although Conjecture \ref{main conjecture} is interesting in its own right, it is worth giving some motivation.
Shelah recently introduced {\em strongly dependent theories} as a kind of counterpart of superstable theories, but in the NIP context, and he asked about the structure of strongly dependent fields \cite{sh}. Actually, this strong dependence condition turns out to be something like a ``finite weight'' assumption. In fact, assuming stability, strong dependence of $T$ amounts precisely to saying that all types have finite weight \cite[Corollary 9]{ad}. We call strongly dependent stable theories {\em strongly stable}. So, to understand strongly dependent fields in a meaningful way, we would at least need some techniques for using a ``finiteness of weight'' hypothesis in the stable case. Thus we were naturally led to ask whether appropriate weight assumptions on the generic type of a stable field could have structural-algebraic consequences.
In section 1, we prove the main theorem (Theorem 1.7) that stable fields of weight $1$ are separably closed. The key lemma shows that we do obtain a kind of weak exchange property for generics under a weight $1$ assumption, but with model-theoretic $acl(-)$ replaced by field-theoretic separable algebraic closure (see Lemma 1.3 and Corollary 1.4).
In section 2, we obtain other partial results around Conjecture \ref{main conjecture}, as well as pointing out in Proposition \ref{perfect} that strongly stable fields are perfect.
The first author is grateful to Frank Wagner for sharing useful ideas.
\section{Stable fields of weight 1}
Let us start with a very important, basic observation. It is essentially contained in Proposition 2.8 of \cite{pillay-freegroup}, but we give a complete proof.
\begin{remark}\label{product of non generics}
Let $G$ be a stable group of weight 1. Then, for an arbitrary set $A$, if $a$ and $b$ are non-generics over $A$, so is the product $a\cdot b$.
\end{remark}
{\em Proof.} Suppose for a contradiction that $a\cdot b$ is generic over $A$. Choose $g$ generic over $A,a,b$. Then $g\cdot a$ is of course generic over $A$. We also have that $g \mathop{\mathpalette\Ind{}}_A a\cdot b$, and so $g \mathop{\mathpalette\Ind{}}_A g\cdot a\cdot b$. On the other hand, $g\cdot a \mathop{\mathpalette\Notind{}}_A g $ (otherwise $g$ is generic over $A,g\cdot a$, so $a=g^{-1}\cdot g\cdot a$ is generic over $A$, a contradiction) and $g\cdot a \mathop{\mathpalette\Notind{}}_A g\cdot a\cdot b$ (otherwise $g\cdot a$ is generic over $A,g\cdot a \cdot b$, so $b=(g\cdot a)^{-1}\cdot g\cdot a\cdot b$ is generic over $A$, a contradiction). Hence, $w(g\cdot a/A) >1$, and so $w(G)>1$, a contradiction.\hfill $\blacksquare$\\
From the above Remark, we get the following
\begin{corollary}
In a stable field $K$ of weight 1, for any $A\subseteq K$, both the sum and the product of two non-generics over $A$ are non-generic over $A$.
\end{corollary}
From now on, in this section, $K$ will be a stable field satisfying the conclusion of the above corollary.
By $p$ we will denote the characteristic of $K$ and by $\mathbb{F}_p$ the prime subfield of $K$. Also, in the remainder of this section, when we speak of an element of a field being (separably) algebraic over a subfield, we mean of course in the field-theoretic sense.
The following lemma is essential for the proof of the main result.
\begin{lemma}\label{main lemma}
Let $A$ be a subset and $g,h_1,\dots,h_m$ elements of $K$. Suppose $g$ is generic over $A$ and separably algebraic over $\mathbb{F}_p(A,h_1\dots,h_m)$. Then, $h_i$ is generic over $A$ for some $i\in \{1,\dots,m\}$.
\end{lemma}
{\em Proof.} Put $\overline{h}=(h_1,\dots,h_m)$. Let
$$P(x)=x^n+ R_{n-1}(A,\overline{h})x^{n-1}+\dots+R_0(A,\overline{h})$$
be the minimal polynomial of $g$ over $\mathbb{F}_p(A,\overline{h})$. So, the $R_i(A,\overline{y})$ are rational functions in $\overline{y}$ over $\mathbb{F}_p(A)$, and $P$ is separable. The proof will be by induction on $n$.
First, consider the base induction step, i.e. $n=1$. We have $g=-R_0(A,\overline{h})$. We can write $R_0(A,\overline{y})=Q(A,\overline{y})/T(A,\overline{y})$, where $Q(A,\overline{y})=\sum a_{i_1,\dots,i_m}y_1^{i_1}\dots y_m^{i_m}$ and $T(A,\overline{y})=\sum b_{j_1,\dots,j_m}y_1^{j_1}\dots y_m^{j_m}$ for some $a_{i_1,\dots, i_m}, b_{j_1,\dots,j_m} \in \mathbb{F}_p(A)$. Since the quotient $Q(A,\overline{h})/T(A,\overline{h})$ is generic over $A$, either $Q(A,\overline{h})$ or $T(A,\overline{h})$ is generic over $A$. Hence, there are $i_1,\dots,i_m$ such that $a_{i_1,\dots,i_m}h_1^{i_1}\dots h_m^{i_m}$ or $b_{i_1,\dots,i_m}h_1^{i_1}\dots h_m^{i_m}$ is generic over $A$. As $a_{i_1,\dots,i_m}$ and $b_{i_1,\dots,i_m}$ are not generic over $A$, we get that one of the $h_i$'s must be generic over $A$, which completes the base induction step.
Now, we turn to the induction step. So, assume that $n>1$ and that the lemma is true for elements whose minimal polynomial has degree smaller than $n$.\\[1mm]
{\bf CASE 1} $p \mid n$.
Since $P(x)$ is separable and irreducible over $\mathbb{F}_p(A,\overline{h})$, there is $1\leq j \leq n-1$ such that $p \nmid j$ and $R_j(A,\overline{h}) \ne 0$.
Take $g_0$ generic over $A,g$. Then, $gg_0$ is generic over $A$. So, $gg_0 \equiv_A g$. Thus, there is $\overline{h'}=(h_1',\dots,h_m') \equiv_A (h_1,\dots,h_m)$ such that
$$(gg_0)^n + R_{n-1}(A,\overline{h'})(gg_0)^{n-1}+\dots +R_0(A,\overline{h'})=0.$$
Put
$$Q(x)=x^n + \frac{R_{n-1}(A,\overline{h'})}{g_0}x^{n-1}+ \dots + \frac{R_{0}(A,\overline{h'})}{g_0^n} \in \mathbb{F}_p(A,g_0,\overline{h'})[x].$$
Let
$$W(x)=Q(x)-P(x) \in \mathbb{F}_p(A,g_0,\overline{h},\overline{h'})[x].$$
We see that $Q(g)=0$, so $W(g)=0$. Moreover,
$$
\begin{array}{ll}
W(x) = & \left( \frac{R_{n-1}(A,\overline{h'})}{g_0} - R_{n-1}(A,\overline{h})\right) x^{n-1} + \dots +\\
&+ \left( \frac{R_{j}(A,\overline{h'})}{g_0^{n-j}} - R_{j}(A,\overline{h})\right) x^{j} + \dots + \left( \frac{R_{0}(A,\overline{h'})}{g_0^n} - R_{0}(A,\overline{h})\right).
\end{array}$$
{\bf Subcase A} $\frac{R_{j}(A,\overline{h'})}{g_0^{n-j}} - R_{j}(A,\overline{h}) =0$.
Then, $g_0^{n-j}-\frac{R_j(A,\overline{h'})}{R_j(A,\overline{h})}=0$. Since $1 \leq n-j<n$ and $p \nmid n-j$, we see that $g_0$ is separably algebraic over $\mathbb{F}_p(A, \overline{h}, \overline{h'})$ and the degree of the minimal polynomial of $g_0$ over $\mathbb{F}_p(A, \overline{h}, \overline{h'})$ is smaller than $n$. Moreover, $g_0$ is generic over $A$. Hence, by the induction hypothesis, there is $i$ such that $h_i$ or $h_i'$ is generic over $A$. But $h_i' \equiv_A h_i$. So, $h_i$ is generic over $A$.\\[1mm]
{\bf Subcase B} $\frac{R_{j}(A,\overline{h'})}{g_0^{n-j}} - R_{j}(A,\overline{h}) \ne 0$.
We have that $W(g)=0$, $1\leq \deg(W) \leq n-1$, and we know that $g$ is separably algebraic over $\mathbb{F}_p(A,g_0,\overline{h},\overline{h'})$. So, the degree of the minimal polynomial of $g$ over this field is smaller than $n$. Moreover, $g$ is generic over $A,g_0$. Hence, by the induction hypothesis,
there is $i$ such that $h_i$ or $h_i'$ is generic over $A,g_0$, so also over $A$.
As $h_i \equiv_A h_i'$, we conclude that $h_i$ is generic over $A$.\\[1mm]
{\bf CASE 2} $p \nmid n$.
Once again, take $g_0$ generic over $A,g$. Then, $g+g_0$ is generic over $A$. So, $g \equiv_A g+g_0$. Thus, there is $\overline{h'} =(h_1',\dots,h_m') \equiv_A \overline{h}$ such that
$$(g+g_0)^n + R_{n-1}(A,\overline{h'})(g+g_0)^{n-1}+\dots +R_0(A,\overline{h'})=0.$$
Put
$$Q(x)=(x+g_0)^n + R_{n-1}(A,\overline{h'})(x+g_0)^{n-1}+ \dots + R_{0}(A,\overline{h'}) \in \mathbb{F}_p(A,g_0,\overline{h'})[x].$$
Let
$$W(x)=Q(x)-P(x) \in \mathbb{F}_p(A,g_0,\overline{h},\overline{h'})[x].$$
We see that $W(g)=0$ and
$$W(x)=(ng_0+R_{n-1}(A,\overline{h'}) - R_{n-1}(A,\overline{h}))x^{n-1}+W_1(x),$$
where $W_1(x) \in \mathbb{F}_p(A,g_0,\overline{h},\overline{h'})[x]$ is of degree smaller than $n-1$.\\[1mm]
{\bf Subcase A} $ng_0+R_{n-1}(A,\overline{h'}) - R_{n-1}(A,\overline{h})=0$.
Since $p \nmid n$ and $g_0$ is generic over $A$, by the base induction step, we get that there is $i$ such that $h_i$ or $h_i'$ is generic over $A$. So, $h_i$ is generic over $A$.\\[1mm]
{\bf Subcase B} $ng_0+R_{n-1}(A,\overline{h'}) - R_{n-1}(A,\overline{h})\ne 0$.
Then, $W(g)=0$, $\deg(W)=n-1\geq 1$, and we know that $g$ is separably algebraic over $\mathbb{F}_p(A,g_0,\overline{h},\overline{h'})$. So, the degree of the minimal polynomial of $g$ over this field is smaller than $n$. Moreover, $g$ is generic over $A,g_0$. Hence, we finish using the induction hypothesis as in Subcase B of Case 1. \hfill $\blacksquare$\\
Notice that if the characteristic of $K$ equals 0, then Case 1 cannot occur, and so it is enough to apply the argument from Case 2 to prove Lemma \ref{main lemma}.
Let us formulate Lemma \ref{main lemma} in the case $m=1$ as a corollary.
\begin{corollary}\label{R-condition}
Let $A$ be a subset and $g,h$ elements of $K$. Suppose $g$ is generic over $A$ and separably algebraic over $\mathbb{F}_p(A,h)$. Then, $h$ is generic over $A$.
\end{corollary}
\begin{lemma}\label{R for tuples}
Let $A$ be a subset of $K$ and $g_1,\dots, g_m$ independent generics over $A$. Suppose $h_1,\dots,h_m$ are such that the elements $g_1,\dots,g_m$ are separably algebraic over $\mathbb{F}_p(A,h_1,\dots,h_m)$. Then, $h_1,\dots,h_m$ are independent generics over $A$.
\end{lemma}
{\em Proof.} The proof is by induction on $m$. For $m=1$, the conclusion follows from Corollary \ref{R-condition}.
Let us do the induction step. By the assumption, $g_m$ is generic over $A,g_{<m}$ and it is separably algebraic over $\mathbb{F}_p(A,g_{<m},\overline{h})$. Hence, by Lemma \ref{main lemma}, there is $i$ such that $h_{i}$ is generic over $A,g_{<m}$. Therefore, $h_i,g_1,\dots,g_{m-1}$ are independent generics over $A$.
Put $A'=A \cup \{h_i\}$. We see that $g_1,\dots,g_{m-1}$ are independent generics over $A'$ and they are separably algebraic over $\mathbb{F}_p(A',h_{\ne i})$. So, by the induction hypothesis, $h_1,\dots,h_{i-1},h_{i+1},\dots,h_m$ are also independent generics over $A'$. We finish using the fact that $h_i$ is generic over $A$. \hfill $\blacksquare$
\begin{corollary}\label{symmetric functions}
Let $A$ be a subset of $K$ and $a_0,\dots,a_{n-1}$ independent generics over $A$. Then:\\
(i) the elementary symmetric functions in $a_0,\dots,a_{n-1}$ are independent generics over $A$,\\
(ii) the polynomial $x^n+a_{n-1}x^{n-1}+\dots + a_0$ has $n$ distinct roots in $K$.
\end{corollary}
{\em Proof.}
(i) Let $s_0,\dots,s_{n-1}$ be the elementary symmetric functions in $\overline{a}$. We have that $a_0,\dots,a_{n-1}$ are pairwise distinct roots of $x^n-s_{n-1}x^{n-1}+\dots + (-1)^ns_0$. Hence, $a_0,\dots,a_{n-1}$ are separably algebraic over $\mathbb{F}_p(A,s_0,\dots,s_{n-1})$. So, by Lemma \ref{R for tuples}, we get that $s_0,\dots,s_{n-1}$ are independent generics over $A$.\\
(ii) It follows from (i) and the uniqueness of the generic type. \hfill $\blacksquare$\\
With the above lemmas and corollaries, we can now prove our main result, by adapting the proof of \cite[Proposition 5.2]{pi}.
\begin{theorem}\label{main theorem}
Each stable field of weight 1 is separably closed.
\end{theorem}
{\em Proof.} As usual, $K$ is our stable field of weight 1 and $p$ is its characteristic. Suppose for a contradiction that there is $\alpha \in K^{sep} \setminus K$. Let $P(x)=x^n+a_{n-1}x^{n-1}+\dots + a_0$ be the minimal polynomial of $\alpha$ over $K$. Since $\alpha \in K^{sep}$, $P(x)$ has $n$ different roots $\alpha_1,\dots, \alpha_n$ in $K^{sep}$.
Choose
\begin{enumerate}
\item[(i)] $t_0,\dots, t_{n-1}$ independent generics over $a_0,\dots, a_{n-1}$.
\end{enumerate}
Define
$$r_i=t_0+t_1\alpha_i+\dots + t_{n-1}\alpha_i^{n-1}$$
for $i=1,\dots,n$. Let $s_0, \dots,s_{n-1}$ be the elementary symmetric functions in $r_1,\dots,r_n$. Then, $s_0,\dots,s_{n-1} \in K$ because they are fixed by every element of $Gal(K^{sep}/K)$. We claim that
\begin{enumerate}
\item[(ii)] $r_1,\dots,r_n$ are separably algebraic over $\mathbb{F}_p(s_0,\dots,s_{n-1})$.
\end{enumerate}
We have that $r_1,\dots, r_n$ are the roots of $x^n-s_{n-1}x^{n-1}+\dots +(-1)^ns_0$. So, in order to prove (ii), it is enough to show that $r_i \ne r_j$ whenever $i\ne j$. Suppose for a contradiction that there are $i \ne j$ such that $r_i=r_j$. Then, $$t_1(\alpha_i-\alpha_j)+\dots +t_{n-1}(\alpha_i^{n-1} - \alpha_j^{n-1})=0.$$ So, $t_1$ is algebraic over $\mathbb{F}_p(\alpha_i,\alpha_j,t_2,\dots,t_{n-1})$ and so over $\mathbb{F}_p(a_0,\dots,a_{n-1},t_2,\dots,t_{n-1})$, which contradicts (i).
Since the matrix
$$\left( \begin{array}{llll}
1 & \alpha_1 & \dots & \alpha_1^{n-1}\\
\vdots & \vdots & \vdots & \vdots\\
1 & \alpha_n & \dots & \alpha_n^{n-1}
\end{array}\right)$$
is invertible, we see that $t_0,\dots,t_{n-1} \in \mathbb{F}_p(\alpha_1,\dots,\alpha_n,r_1,\dots,r_n)$. On the other hand, $\alpha_1,\dots,\alpha_{n} \in \mathbb{F}_p(a_0,\dots,a_{n-1})^{sep}$. Thus, by (ii),
\begin{enumerate}
\item[(iii)] $t_0,\dots,t_{n-1}$ are separably algebraic over $\mathbb{F}_p(a_0,\dots,a_{n-1},s_0,\dots,s_{n-1})$.
\end{enumerate}
By (i), (iii) and Lemma \ref{R for tuples}, we see that $s_0,\dots,s_{n-1}$ are independent generics over $a_0,\dots,a_{n-1}$. So, in virtue of Corollary \ref{symmetric functions}(ii), all $r_i$'s belong to $K$. Thus, the degree of the minimal polynomial of $\alpha$ over $K$ is smaller than $n$, a contradiction. \hfill $\blacksquare$\\
Recall that by \cite{de}, we know that the weight of a separably closed field of infinite Ershov invariant is 1. Thus, Theorem \ref{main theorem} is in a sense best possible.
\section{Stable fields of finite weight}
\begin{proposition}\label{groups of finite weight} Let $G$ be any stable commutative group (written multiplicatively) of finite weight. Then for all but finitely many primes $q$, $G^q$ has finite index in $G$. More precisely, if $w(G)=w<\omega$, then there are at most $w$ many primes $q$ such that $[G:G^q]$ is infinite.
\end{proposition}
{\em Proof.}
Choose an independent sequence $(a_n)_{n \in \omega}$ of generics in $G$. Assume $w(G)=w < \omega$. Suppose for a contradiction that there are $w+1$ primes $p_1,\dots,p_{w+1}$ such that $[G:G^{p_i}]$ are infinite. It follows that $G^{p_i}$ are not generic.
Define a sequence $(k_1,\dots,k_{w+1})$ of natural numbers by
$$\left\{
\begin{array}{lrl}
k_1 & = & p_1+1,\\
k_{i} & = & (p_1\dots p_{i-1})^{p_i -1}\; \, \mbox{for} \; 2\leq i \leq w+1.
\end{array} \right.
$$
Then, $p_i|k_i-1$ for any $i=1,\dots,w+1$, and $p_i|k_j$ for any $i<j$.
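These divisibility properties are elementary (the case $i>1$ of $p_i\mid k_i-1$ is Fermat's little theorem, since $p_i \nmid p_1\dots p_{i-1}$), and they can be checked numerically. The following sketch is an illustration only, with the primes $2,3,5,7$ standing in for $p_1,\dots,p_{w+1}$ (so $w=3$); it is not part of the proof.

```python
def build_k(primes):
    """Return [k_1, ..., k_{w+1}] as defined above for the given primes."""
    ks = [primes[0] + 1]  # k_1 = p_1 + 1
    for i in range(1, len(primes)):
        prod = 1
        for p in primes[:i]:
            prod *= p
        ks.append(prod ** (primes[i] - 1))  # k_i = (p_1 ... p_{i-1})^(p_i - 1)
    return ks

primes = [2, 3, 5, 7]
ks = build_k(primes)
# p_i | k_i - 1: direct for i = 1, Fermat's little theorem for i > 1
assert all((k - 1) % p == 0 for p, k in zip(primes, ks))
# p_i | k_j for i < j, because p_i occurs in the base of k_j
assert all(ks[j] % primes[i] == 0
           for i in range(len(primes)) for j in range(i + 1, len(primes)))
```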
Put $g=a_0a_1^{k_1}\dots a_{w+1}^{k_{w+1}}$, and define a sequence $(g_i)_{1\leq i \leq w+1}$ of elements of $G$ by
$$\left\{
\begin{array}{lrl}
g_1 & = & a_0a_1,\\
g_i & = & a_0a_1^{k_1}\dots a_{i-1}^{k_{i-1}}a_i \; \, \mbox{for} \; 2 \leq i \leq w+1.
\end{array} \right.
$$\\[3mm]
{\bf Claim}
(i) $g$ is generic.\\
(ii) $g_i \mathop{\mathpalette\Ind{}} g_j$ for any $i \ne j$.\\
(iii) $g \mathop{\mathpalette\Notind{}} g_{i}$ for every $i=1,\dots,w+1$.\\[3mm]
{\em Proof of Claim.}
(i) It follows from the fact that $a_0$ is generic over $a_{>0}$.\\
(ii) Assume $j >i$. Since $a_j$ is generic over $a_{<j}$,
$$g_j=a_0a_1^{k_1}\dots a_{j-1}^{k_{j-1}}a_j \mathop{\mathpalette\Ind{}} a_0a_1^{k_1}\dots a_{i-1}^{k_{i-1}}a_i=g_i.$$
(iii) We have $g_i^{-1}g=a_i^{k_i -1}a_{i+1}^{k_{i+1}}\dots a_{w+1}^{k_{w+1}}$. Since $p_i|k_i-1$ and $p_i|k_j$ for any $j>i$, we get that $g_i^{-1}g \in G^{p_i}$.
Suppose for a contradiction that $g \mathop{\mathpalette\Ind{}} g_{i}$. Then, by (i), $g_{i}^{-1}g$ is generic, and hence $G^{p_i}$ is generic, a contradiction. \hfill $\square$\\
By the Claim, $w(G)\geq w+1$, a contradiction. \hfill $\blacksquare$
\begin{corollary}\label{almost all} Let $K$ be any infinite stable field of finite weight. Then for all but finitely many primes $q$, $K^q=K$. More precisely, if $w(K)=w<\omega$, then there are at most $w$ many primes $q$ for which $K^q \ne K$.
\end{corollary}
{\em Proof.} It is an immediate consequence of Proposition \ref{groups of finite weight} and the fact that stable fields are multiplicatively connected.\hfill $\blacksquare$\\
As the weight of a separably closed field of infinite Ershov invariant is 1, we cannot expect to strengthen Corollary \ref{almost all} to get that for every prime $q$, $K^q=K$. However, one can hope to prove that for every prime $q$ different from the characteristic, $K^q=K$. In fact, this would imply Conjecture \ref{main conjecture}. To see this, one should apply Macintyre's proof \cite{ma} using the fact that a finite extension of a stable field of finite weight remains stable of finite weight and the fact that stable fields are closed under Artin-Schreier extensions \cite{KSW}.
As was mentioned in the introduction, a separably closed field of infinite Ershov invariant is an example of a stable field of finite weight which is not strongly stable, i.e. there is a finitary type in it of infinite weight. This follows from the next proposition.
\begin{proposition}\label{perfect}
An infinite strongly stable field is perfect.
\end{proposition}
{\em Proof.} Let $p$ be the characteristic of $K$. Assume $p>0$, and suppose for a contradiction that $K^p \ne K$. Then, there are $b_1,b_2 \in K$ linearly independent over $K^p$. Choose a Morley sequence $(a_i)_{i \in \omega}$ in the generic type over $b_1,b_2$.
By compactness, one can find $a\in K$ for which there is a sequence $(c_i)_{i \in \omega}$ of elements of $K$ such that $c_0=a$ and for every $i$, $c_i=b_1c_{i+1}^p+b_2a_i^p$.
Since $b_1,b_2$ are linearly independent over $K^p$, we get that $a_i \in dcl(b_1,b_2,a)$ for every $i$. So $a \mathop{\mathpalette\Notind{}}_{b_1,b_2} a_i$ for every $i$. On the other hand, $(a_i)_{i \in \omega}$ was chosen to be independent over $b_1,b_2$. So $w(a/b_1,b_2)$ is infinite, and hence $K$ is not strongly stable, a contradiction. \hfill $\blacksquare$\\
The above proposition together with Theorem \ref{main theorem} yield the following corollary.
\begin{corollary}
Strongly stable fields of weight 1 are algebraically closed.
\end{corollary}
The next observation says that if we assume that the degree of imperfection is finite, then the conclusion of Proposition \ref{perfect} holds under the weaker assumption of being of finite weight.
\begin{proposition}
An infinite stable field of finite weight and of finite degree of imperfection is perfect.
\end{proposition}
{\em Proof.} Let $p>0$ be the characteristic of $K$. Suppose for a contradiction that $K^p \ne K$. Since the degree of imperfection is finite, there is a finite basis $\{ b_1,\dots,b_n\}$ of $K$ over $K^p$.
The map $f: K \to K^{\times n}$ given by $f(a)=(f_1(a),\dots,f_n(a))$ where $a=b_1f_1(a)^p +\dots + b_nf_n(a)^p$ is an isomorphism of the additive groups, definable over $\{b_1,\dots,b_n\}$. So for any $A\subseteq K$, if $a$ is generic over $A,b_1,\dots,b_n$, then $f(a)$ is a sequence of independent generics over $A,b_1,\dots,b_n$. For $\eta \in \{1,\dots,n\}^l$ and $x \in K$, we put $f_\eta(x)=(f_{\eta(l-1)}\circ \dots \circ f_{\eta(0)})(x)$.
Let $a$ be generic over $b_1,\dots,b_n$. By an easy induction, we get that $(f_{1^ki}(a): k \geq 0, 1<i\leq n)$ is an infinite collection of independent generics over $b_1,\dots,b_n$ ($1^ki$ denotes the sequence consisting of $k$ many 1's followed by $i$). Moreover, every $f_{1^ki}(a)$ belongs to $dcl(a,b_1,\dots,b_n)$. So, $a \mathop{\mathpalette\Notind{}}_{b_1,\dots,b_n} f_{1^ki}(a)$. We conclude that $w(a/b_1,\dots,b_n)$ is infinite. Thus, $w(K)$ is infinite, a contradiction. \hfill $\blacksquare$\\
We complete the paper with a couple of questions and conjectures related to the notions and techniques introduced here. We did not give much thought to the first one, but we are rather curious and there could be a simple construction.
\begin{problem} Construct an algebraically closed field $K$ with additional structure such that $Th(K)$ is stable and the generic type of $K$ has weight $1$ but is not regular.
\end{problem}
\begin{conjecture} Let $K$ be a field with additional structure which is stable (and saturated). Then, the following are equivalent:
\newline
(1) $K$ is separably closed,
\newline
(2) For any small subfield $k<K$, $n < \omega$, and $a_{1},\dots,a_{n},b_{1},\dots,b_{n}\in K$, IF $a_{1},\dots,a_{n}$ are independent generics over $k$, and each $a_{i}$ is separably algebraic over $k(b_{1},\dots,b_{n})$ (of course, in the field-theoretic sense), THEN $b_{1},\dots,b_{n}$ are independent generics over $k$.
\end{conjecture}
Note that (2) is precisely the statement of Lemma \ref{R for tuples}, and the proof of Theorem \ref{main theorem} shows that (2) implies (1).
Here is a version for ``algebraically closed'' rather than ``separably closed''.
\begin{conjecture} Let $K$ be a field with additional structure which is stable (and saturated). Then, the following are equivalent:
\newline
(1') $K$ is algebraically closed,
\newline
(2') For any small subfield $k<K$ and $a,b\in K$, IF $a$ is generic over $k$, and $a$ is algebraic over $k(b)$ in the field-theoretic sense, THEN $b$ is generic over $k$.
\end{conjecture}
In fact, it is not hard to show that (2') implies (2''): if $a_{1},\dots,a_{n}$ are independent generics over $k$ and contained in $k(b_{1},\dots,b_{n})^{alg}$, then $b_{1},\dots,b_{n}$ are independent generics over $k$. And by the standard argument (as in the proof of Theorem \ref{main theorem}), one deduces from (2'') that $K$ is algebraically closed. The converse (1') implies (2') looks attractive, and concerns some kind of uniqueness of ``generic types'' on irreducible plane curves in stable expansions of algebraically closed fields.
In any case, the point of Conjectures 2.7 and 2.8 is that the kind of methods in the proof of Theorem 1.7 are not only sufficient but should also be necessary.
In the past three decades non-Gaussian time series have attracted a
lot of interest, see e.g. Cox (1981), Kaufmann (1987), Kitagawa
(1987), Shephard and Pitt (1997), and Durbin and Koopman (2000),
among others. In the context of regression modelling, generalized
linear models (McCullagh and Nelder, 1989; Dobson, 2002) offer a
solid theoretical basis for statistical analysis of independent
non-normal data. A general framework for dealing with time series
data is the dynamic generalized linear model (DGLM), which considers
generalized linear modelling with time-varying parameters and hence
it is capable of modelling time series data for a wide range of response
distributions. DGLMs have been widely adopted for non-normal time
series data, see e.g. West {\it et al.} (1985), Gamerman and West
(1987), Fahrmeir (1987), Fr\"{u}hwirth-Schnatter (1994), Lindsey
and Lambert (1995), Chiogna and Gaetan (2002), Hemming and Shaw
(2002), Godolphin and Triantafyllopoulos (2006), and Gamerman (1991,
1998). Dynamic generalized linear models are reported in detail in
the monographs of West and Harrison (1997, Chapter 14), Fahrmeir and
Tutz (2001, Chapter 8), and Kedem and Fokianos (2002, Chapter 6).
In this paper we propose a unified treatment of DGLMs that includes
approximate Bayesian inference and multi-step forecasting. To this
end we adopt the estimation approach of West {\it et al.} (1985),
but we extend it as far as model diagnostics and forecasting are
concerned. In particular, we discuss likelihood-based model
assessment as well as Bayesian model monitoring. In the literature,
discussion on the DGLMs is usually restricted to the binomial and
the Poisson models, see e.g. Fahrmeir and Tutz (2001, Chapter 8).
Even for these response distributions, discussion is limited to
estimation, while forecasting and in particular multi-step
forecasting does not appear to have received much attention. We
provide detailed examples of many distributions, including binomial,
Poisson, negative binomial, geometric, normal, log-normal, gamma,
exponential, Weibull, Pareto, two special cases of the beta, and
inverse Gaussian. We give numerical illustrations for all
distributions, except for the normal (for which one can find
numerous illustrations in the time series literature) using real and
simulated data.
The paper is organized as follows. In Section \ref{dglm} we discuss
Bayesian inference of DGLMs. Section \ref{examples} commences by
considering several examples, where the response time series follows
a particular distribution. Section \ref{discussion} gives concluding
comments. The appendix includes some proofs of arguments in Section
\ref{examples}.
\section{Dynamic generalized linear models}\label{dglm}
\subsection{Model definition}
Suppose that the time series $\{y_t\}$ is generated from a
probability distribution, which is a member of the exponential
family of distributions, that is
\begin{equation}\label{exp}
p(y_t|\gamma_t) = \exp\left( \frac{1}{a(\phi_t)} \left(
z(y_t)\gamma_t - b(\gamma_t)\right) \right) c(y_t,\phi_t),
\end{equation}
where $\gamma_t$, known as the natural parameter, is the parameter
of interest and other parameters that can be linked to $\phi_t$,
$a(.)$, $b(.)$ and $c(.,.)$ are usually referred to as nuisance
parameters or hyperparameters. The functions $a(.)$, $b(.)$ and
$c(.,.)$ are assumed known, $\phi_t,a(\phi_t),c(y_t,\phi_t)>0$,
$b(\gamma_t)$ is twice differentiable and according to Dobson (2002,
\S3.3)
$$
\mathbb{E}(z(y_t)|\gamma_t)=\frac{\,db(\gamma_t)}{\,d\gamma_t} \quad
\textrm{and} \quad \text{Var}(z(y_t)|\gamma_t) =
a(\phi_t)\frac{\,d^2b(\gamma_t)}{\,d\gamma_t^2}.
$$
The function $z(.)$ is usually a simple function in $y_t$ and in
many cases it is the identity function; an exception of this is the
binomial distribution. If $z(y_t)=y_t$, distribution (\ref{exp}) is
said to be in the {\it canonical} or {\it standard} form. Dobson
(2002, \S3.3) gives expressions of the score statistics and the
information matrix, although the consideration of these may not be
necessary for Bayesian inference.
The idea of generalized linear modelling is to use a non-linear
function $g(.)$, which maps $\mu_t=\mathbb{E}(y_t|\gamma_t)$ to the linear
predictor $\eta_t$; this function is known as link function. If
$g(\mu_t)=\gamma_t$, this is referred to as {\it canonical link},
but other links may be more useful in applications (see e.g. the
inverse Gaussian example in Section \ref{continuous}). In GLM
theory, $\eta_t$ is modelled as a linear model, but in DGLM theory,
the linear predictor is replaced by a state space model, i.e.
$$
g(\mu_t)=\eta_t=F_t'\theta_t \quad \textrm{and} \quad \theta_t=
G_t\theta_{t-1}+\omega_t,
$$
where $F_t$ is a $d\times 1$ design vector, $G_t$ is a $d\times d$
evolution matrix, $\theta_t$ is a $d\times 1$ random vector and
$\omega_t$ is an innovation vector, with zero mean and some known
covariance matrix $\Omega_t$. It is assumed that $\omega_t$ is
uncorrelated of $\omega_s$ (for $t\neq s$) and $\omega_t$ is
uncorrelated of $\theta_0$, for all $t$. It is obvious that if one
sets $G_t=I_d$ (the $d\times d$ identity matrix) and $\omega_t=0$
(i.e. its covariance matrix is the zero matrix), then the above
model is reduced to a usual GLM.
For the examples of Section \ref{examples} we consider simple state
space models, which assume that $F_t=F$, $G_t=G$, $\Omega_t=\Omega$
are time-invariant. However, in the next sections, we present
Bayesian inference and forecasting for time-varying $F_t$, $G_t$,
$\Omega_t$ in order to cover the general situation.
\subsection{Bayesian inference}
Suppose that we have data $y_1,\ldots,y_T$ and we form the
information set $y^t=\{y_1,\ldots,y_t\}$, for $t=1,\ldots,T$. At
time $t-1$ we assume that the posterior mean vector and covariance
matrix of $\theta_{t-1}$ are $m_{t-1}$ and $P_{t-1}$, respectively,
and we write $\theta_{t-1}|y^{t-1}\sim (m_{t-1},P_{t-1})$. Then from
$\theta_t=G_t\theta_{t-1}+\omega_t$, it follows that
$\theta_t|y^{t-1}\sim (h_t, R_t)$, where $h_t=G_tm_{t-1}$ and
$R_t=G_tP_{t-1}G_t'+\Omega_t$.
The next step is to form the prior mean and variance of $\eta_t$ and
$\theta_t$, that is
\begin{equation}\label{prior:p2}
\left.\left[ \begin{array}{c} \eta_t \\ \theta_t \end{array}\right]
\right|y^{t-1} \sim \left(\left[ \begin{array}{c} f_t \\ h_t
\end{array}\right], \left[ \begin{array}{cc} q_t &
F_t'R_t \\ R_tF_t & R_t
\end{array} \right] \right),
\end{equation}
where $f_t=F_t'h_t$ and $q_t=F_t'R_tF_t$. The quantities $f_t$ and
$q_t$ are the forecast mean and variance of $\eta_t$.
In order to proceed with Bayesian inference, we assume the conjugate
prior of $\gamma_t$, so that
\begin{equation}\label{prior:g1}
p(\gamma_t|y^{t-1})=\kappa(r_t,s_t)\exp(r_t\gamma_t-s_tb(\gamma_t)),
\end{equation}
for some known $r_t$ and $s_t$. These parameters can be found from
$g(\mu_t)=\eta_t$ and $f_t=\mathbb{E}(\eta_t|y^{t-1})$,
$q_t=\text{Var}(\eta_t|y^{t-1})$, which are known from (\ref{prior:p2}).
The normalizing constant $\kappa(.,.)$ can be found by
$$
\kappa(r_t,s_t)=\left(\int
\exp(r_t\gamma_t-s_tb(\gamma_t))\,d\gamma_t \right)^{-1},
$$
where the integral is a Lebesgue integral, so that it includes
summation / integration of discrete / continuous variables. We note
that in most of the cases, the above distribution will be
recognizable (e.g. gamma, beta, normal) and so there is no need of
evaluating the above integral. One example that this is not the case
is the inverse Gaussian distribution (see Section \ref{continuous}).
Then observing $y_t$, the posterior distribution of $\gamma_t$ is
\begin{eqnarray}
p(\gamma_t|y^t) & = & \frac{p(y_t|\gamma_t) p(\gamma_t|y^{t-1})} {
\int p(y_t|\gamma_t)p(\gamma_t|y^{t-1})\,d\gamma_t} \nonumber
\\ &=&
\kappa\left(r_t+\frac{z(y_t)}{a(\phi_t)},s_t+\frac{1}{a(\phi_t)}\right)
\exp\left(\left(r_t+\frac{z(y_t)}{a(\phi_t)}\right) \gamma_t -
\left(s_t+\frac{1}{a(\phi_t)}\right)b(\gamma_t)
\right).\label{post:g1}
\end{eqnarray}
In many situations we are interested in parameters that are given as
functions of $\gamma_t$. In such cases we derive the prior/posterior
distributions of $\gamma_t$ as above and then we apply a
transformation to obtain the prior/posterior distribution of the
parameter in interest. The examples of Section \ref{examples} are
illuminative.
Finally, the posterior mean vector and covariance matrix of
$\theta_t$ are approximately given by
\begin{equation}\label{post:th1}
\theta_t|y^t \sim (m_t,P_t),
\end{equation}
with
$$
m_t=h_t+R_tF_t(f_t^*-f_t)/q_t \quad \textrm{and} \quad P_t= R_t -
R_t F_tF_t' R_t (1-q_t^*/q_t)/q_t,
$$
where $f_t^*=\mathbb{E}(\eta_t|y^t)$ and $q_t^*=\text{Var}(\eta_t|y^t)$ can be found
from $g(\mu_t)=\eta_t$ and the posterior (\ref{post:g1}). The priors
(\ref{prior:p2}), (\ref{prior:g1}) and the posteriors
(\ref{post:g1}), (\ref{post:th1}) provide an algorithm for
estimation, for any $t=1,\ldots,T$. For a proof of the above
algorithm the reader is referred to West {\it et al.} (1985).
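The estimation cycle above, i.e. state prediction, the prior moments of $\eta_t$, the conjugate update, and the linear Bayes posterior (\ref{post:th1}), can be sketched as follows in the scalar case $d=1$. This is an illustrative restriction on our part; the conjugate-step output $(f_t^*,q_t^*)$ is distribution-specific and is taken here as an input.

```python
def predict(m_prev, P_prev, G, Omega):
    """Prior moments h_t, R_t of theta_t given y^{t-1} (scalar case d = 1)."""
    h = G * m_prev
    R = G * P_prev * G + Omega
    return h, R

def eta_prior(h, R, F):
    """Prior mean f_t and variance q_t of the linear predictor eta_t = F theta_t."""
    f = F * h
    q = F * R * F
    return f, q

def linear_bayes_update(h, R, F, f, q, f_star, q_star):
    """Posterior moments m_t, P_t of theta_t, given the updated moments
    (f_star, q_star) of eta_t | y^t from the distribution-specific
    conjugate step."""
    m = h + R * F * (f_star - f) / q
    P = R - R * F * F * R * (1.0 - q_star / q) / q
    return m, P

# One cycle of a local-level model (G = F = 1, Omega = 0) as a smoke test
h, R = predict(0.0, 1.0, 1.0, 0.0)
f, q = eta_prior(h, R, 1.0)
m, P = linear_bayes_update(h, R, 1.0, f, q, 0.5, 0.5)
assert (m, P) == (0.5, 0.5)
```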
An alternative approach for the specification of $r_t$ and $s_t$ is
to make use of {\it power discounting} and this is briefly discussed
next. The idea of power discounting stems from the work of Smith
(1979); it is a method of obtaining the prior distribution at time
distribution at time $t+1$, from the posterior distribution at time
$t$. Here we consider a minor extension of the method by replacing
$t+1$ by $t+\ell$, for some positive integer $\ell$. Then, according
to the principle of power discounting, the prior distribution at
time $t+\ell$ is proportional to $(p(\gamma_t|y^t))^\delta$, where
$\delta$ is a discount factor. Thus we write
$$
p(\gamma_{t+\ell}|y^t) \propto (p(\gamma_t|y^t))^\delta, \quad
\textrm{for} \quad 0<\delta<1.
$$
This ensures that the prior distribution of $\gamma_{t+\ell}$ is
flatter than the posterior distribution of $\gamma_t$. The above
procedure assumes that $r_t(\ell)=r_{t+1}$ and $s_t(\ell)=s_{t+1}$,
which implicitly assumes a random walk type evolution of the
posterior/prior updating, in the sense that Bayes decisions in the
interval $(t,t+\ell)$ remain constant, while the respective expected
loss (under step loss functions) increase (Smith, 1979).
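In the conjugate form (\ref{prior:g1}) the power discount has a closed form: raising $\exp(r_t\gamma_t - s_tb(\gamma_t))$ to the power $\delta$ maps $(r_t,s_t)$ to $(\delta r_t,\delta s_t)$. For a beta-distributed parameter, as in the binomial example later, this means $B(r_t,s_t-r_t)\mapsto B(\delta r_t,\delta(s_t-r_t))$: the mean $r_t/s_t$ is preserved while the variance inflates. A quick numerical illustration (the values of $r_t$, $s_t$ and $\delta$ below are arbitrary):

```python
def beta_moments(alpha, beta):
    """Mean and variance of a Beta(alpha, beta) random variable."""
    mean = alpha / (alpha + beta)
    var = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1.0))
    return mean, var

def power_discount(r, s, delta):
    """Power-discounted beta parameters: B(r, s - r) -> B(delta r, delta (s - r))."""
    return delta * r, delta * (s - r)

r, s, delta = 6.0, 10.0, 0.9
m0, v0 = beta_moments(r, s - r)
m1, v1 = beta_moments(*power_discount(r, s, delta))
assert abs(m0 - m1) < 1e-12   # the mean r/s is unchanged
assert v1 > v0                # the prior is flatter than the posterior
```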
\subsection{Bayesian forecasting and model assessment}
Suppose that the time series $\{y_t\}$ is generated by density
(\ref{exp}) and let $y^t$ be the information set up to time $t$.
Then the $\ell$-step forecast distribution of $y_{t+\ell}$ is
\begin{equation}\label{eq:for}
p(y_{t+\ell}|y^t) = \int p(y_{t+\ell}|\gamma_{t+\ell})
p(\gamma_{t+\ell}|y^t)\,d\gamma_{t+\ell} = \frac{
\kappa(r_t(\ell),s_t(\ell)) c(y_{t+\ell},\phi_{t+\ell}) }{
\kappa\left(r_t(\ell)+\frac{z(y_{t+\ell})}{a(\phi_{t+\ell})},
s_t(\ell)+\frac{1}{a(\phi_{t+\ell})}\right)},
\end{equation}
where $r_t(\ell)$ and $s_t(\ell)$ are evaluated from $f_t(\ell)$ and
$q_t(\ell)$, the mean and variance of $\eta_{t+\ell}|y^t$, and the
distribution of $\gamma_{t+\ell}|y^t$, which takes a similar form as
the distribution of $\gamma_t|y^{t-1}$.
Model assessment can be done via the likelihood function, residual
analysis, and Bayesian model comparison, e.g. based on Bayes
factors. The likelihood function of $\gamma_1,\ldots,\gamma_T$,
based on information $y^T$ is
$$
L(\gamma_1,\ldots,\gamma_T;y^T)=\prod_{t=1}^T
p(y_t|\gamma_t)p(\gamma_t|\gamma_{t-1}),
$$
where the first probability in the product is the distribution
(\ref{exp}) and the second indicates the evolution of $\gamma_t$,
given $\gamma_{t-1}$. Then the log-likelihood function is
\begin{equation}\label{logl}
\ell(\gamma_1,\ldots,\gamma_T;y^T)= \sum_{t=1}^T \left(
\frac{1}{a(\phi_t)} (z(y_t)\gamma_t-b(\gamma_t)) + \log
c(y_t,\phi_t) \right) + \sum_{t=1}^T \log p(\gamma_t|\gamma_{t-1}).
\end{equation}
The likelihood function can be used as a means of model comparison
(for example looking at two model specifications, which differ in
some quantitative parts, we choose the model that has larger
likelihood). For model assessment the likelihood function can be
used in order to choose some hyperparameters (discount factors, or
nuisance parameters) so that the likelihood function is maximized in
terms of these hyperparameters. The evaluation of (\ref{logl})
requires the distribution $p(\gamma_t|\gamma_{t-1})$. This depends
on the state space model for $\eta_t$ used. In the examples of
Section \ref{examples} we look at these probabilities, based mainly
on Gaussian random walk evolutions for $\eta_t$, but we also
consider a linear trend model for $\eta_t$. Note that the
consideration of $\omega_t$ following a Gaussian distribution does
not imply that $\theta_t|y^t$ follows a Gaussian distribution too,
since the distribution of $\theta_0$ may not be Gaussian.
For the sequential calculation of the Bayes factors (which for
Gaussian responses are discussed in Salvador and Gargallo, 2005), a
typical setting suggests the formation of two models $\mathcal{M}_1$
and $\mathcal{M}_2$, which differ in some quantitative aspects, e.g.
some hyperparameters. Then, the cumulative Bayes factor of
$\mathcal{M}_1$ against $\mathcal{M}_2$ is defined by
\begin{equation}\label{bf1}
H_t(k) = \frac{ p(y_{t}, \ldots, y_{t-k+1} | y^{t-k}, \mathcal{M}_1
) }{ p(y_{t}, \ldots, y_{t-k+1} | y^{t-k}, \mathcal{M}_2 ) } =
H_{t-1}(k-1) H_t(1) = \prod_{i=t-k+1}^t H_i(1)
\end{equation}
where $H_1(1)=H_t(0)=1$, for all $t$, and $p(y_{t}, \ldots,
y_{t-k+1} | y^{t-k}, \mathcal{M}_j )$ denotes the joint distribution
of $y_{t},\ldots,y_{t-k+1}$, given $y^{t-k}$, for some integer
$0<k<t$ and $j=1,2$. Then a preference for model 1 corresponds to a
larger forecast density under this model (i.e. $H_t(k)>1$); likewise
a preference for model 2 implies $H_t(k)<1$; $H_t(k)=1$ implies
that the two models are probabilistically equivalent in the sense
that they provide the same forecast distributions.
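Since (\ref{bf1}) expresses $H_t(k)$ as a product of one-step factors, it is convenient in practice to accumulate them on the log scale, which avoids numerical under- and overflow. A minimal sketch, where the one-step values $\log H_i(1)$ are assumed to have been computed already from the two models' one-step forecast densities:

```python
import math

def cumulative_log_bayes_factor(log_h1, k):
    """Cumulative log Bayes factor log H_t(k) as the sum of the last k
    one-step log factors log H_i(1), i = t-k+1, ..., t."""
    return sum(log_h1[-k:])

# One-step log factors log H_i(1); positive values favour model 1
log_h1 = [0.2, -0.1, 0.3, 0.4]
assert abs(cumulative_log_bayes_factor(log_h1, 2) - 0.7) < 1e-12
assert math.exp(cumulative_log_bayes_factor(log_h1, 4)) > 1.0  # model 1 preferred
```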
\section{Examples}\label{examples}
\subsection{Discrete distributions for the response $y_t$}\label{discrete}
\subsubsection{Binomial}
The binomial distribution (Johnson {\it et al.}, 2005) is perhaps
the most popular discrete distribution. It is typically generated as
the sum of independent success/failure Bernoulli trials and in the
context of generalized linear modelling is associated with logistic
regression (Dobson, 2002).
Consider a discrete-valued time series $\{y_t\}$, which, for a given
probability $\pi_t$, follows the binomial distribution
$$
p(y_t|\pi_t)=\binom {n_t}{y_t} \pi_t ^{y_t} (1-\pi_t)^{n_t-y_t},
\quad y_t=0,1,2,\ldots,n_t; \quad n_t=1,2,\ldots; \quad 0<\pi_t<1,
$$
where $\binom {n_t}{y_t}$ denotes the binomial coefficient. It is
easy to verify that the above distribution is of the form
(\ref{exp}) with $z(y_t)=y_t/n_t$, $\gamma_t=\log \pi_t/(1-\pi_t)$,
$a(\phi_t)=\phi_t^{-1}=n_t^{-1}$, $b(\gamma_t)=\log
(1+\textrm{exp}(\gamma_t))$, and $c(y_t,\phi_t)=\binom
{n_t}{y_t}$. The logit (log-odds) link
$\eta_t=g(\mu_t)=\gamma_t=\log \pi_t/(1-\pi_t)$ maps $\pi_t$ to the
linear predictor $\eta_t$, which with the setting
$\eta_t=F'\theta_t$ and $\theta_t=G\theta_{t-1}+\omega_t$, generates
the dynamic evolution of the model.
The prior of $\pi_t|y^{t-1}$, follows by the prior of
$\gamma_t|y^{t-1}$ and the transformation $\gamma_t=\log
\pi_t/(1-\pi_t)$ as beta distribution $\pi_t|y^{t-1}\sim
B(r_t,s_t-r_t)$, with density
$$
p(\pi_t|y^{t-1})=\frac{\Gamma(s_t)}{\Gamma(r_t)\Gamma(s_t-r_t)}
\pi_t^{r_t-1} (1-\pi_t)^{s_t-r_t-1},
$$
where $\Gamma(.)$ denotes the gamma function and $s_t>r_t>0$. Then,
observing $y_t$, the posterior of $\pi_t|y^t$ is $\pi_t|y^t\sim
B(r_t+y_t,s_t+n_t-r_t-y_t)$.
In the appendix it is shown that, with $f_t$ and $q_t$ the prior
mean and variance of $\eta_t$, an approximation of $r_t$ and $s_t$
is given by
\begin{equation}\label{eq:binom:rt}
r_t=\frac{1+\textrm{exp}(f_t)}{q_t} \quad \textrm{and} \quad
s_t=\frac{2+\textrm{exp}(f_t)+\textrm{exp}(-f_t)}{q_t}.
\end{equation}
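The mapping (\ref{eq:binom:rt}) from the prior moments $(f_t,q_t)$ of $\eta_t$ to the beta parameters $(r_t,s_t)$ is elementary to implement; a sketch (the function name is ours):

```python
import math

def binomial_beta_prior(f, q):
    """Map the prior mean f_t and variance q_t of eta_t to the beta
    parameters (r_t, s_t); the prior for pi_t is then B(r_t, s_t - r_t)."""
    r = (1.0 + math.exp(f)) / q
    s = (2.0 + math.exp(f) + math.exp(-f)) / q
    return r, s

r, s = binomial_beta_prior(0.0, 1.0)
assert (r, s) == (2.0, 4.0)   # f_t = 0, q_t = 1 gives the B(2, 2) prior for pi_t
r, s = binomial_beta_prior(1.0, 0.5)
assert s > r > 0              # the constraint s_t > r_t > 0 always holds
```

Note that $s_t-r_t=(1+\textrm{exp}(-f_t))/q_t>0$, so the constraint $s_t>r_t>0$ is satisfied for any $f_t$ and any $q_t>0$.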
In order to proceed with the posterior moments of $\theta_t|y^t$ as
in (\ref{post:th1}), we can see that
$$
f_t^*=\psi(r_t+y_t)-\psi(s_t-r_t+n_t-y_t) \quad \textrm{and} \quad
q_t^*= \left.\frac{\,d\psi(x)}{\,dx}\right|_{x=r_t+y_t}+
\left.\frac{\,d\psi(x)}{\,dx}\right|_{x=s_t-r_t+n_t-y_t},
$$
where $\psi(.)$ denotes the digamma function (see the Poisson
example and the appendix). In the appendix approximations of
$\psi(.)$ and of its first derivative (also known as trigamma
function) are given. These definitions as well as the parameters of
the beta prior are slightly different from the ones obtained by West
and Harrison (1997), as these authors use a different
parameterization, which does not appear to be consistent with the
prior/posterior updating.
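As an illustration of the updating just described, the following Python sketch (our own code, not part of any package) maps the prior moments $(f_t,q_t)$ of the logit to the beta parameters via (\ref{eq:binom:rt}), performs the conjugate update and returns the posterior moments using numerically computed digamma and trigamma functions:

```python
import math

def digamma(x):
    # psi(x) via the recurrence psi(x) = psi(x + 1) - 1/x and the
    # asymptotic series psi(x) ~ log x - 1/(2x) - 1/(12x^2) for large x
    acc = 0.0
    while x < 10.0:
        acc -= 1.0 / x
        x += 1.0
    return acc + math.log(x) - 1.0 / (2.0 * x) - 1.0 / (12.0 * x * x)

def trigamma(x):
    # psi'(x) via psi'(x) = psi'(x + 1) + 1/x^2 and
    # psi'(x) ~ 1/x + 1/(2x^2) + 1/(6x^3) for large x
    acc = 0.0
    while x < 10.0:
        acc += 1.0 / (x * x)
        x += 1.0
    return acc + 1.0 / x + 1.0 / (2.0 * x * x) + 1.0 / (6.0 * x ** 3)

def binomial_step(f, q, y, n):
    """One observation step of the binomial DGLM: prior moments (f, q)
    of the logit -> beta parameters -> posterior moments (f*, q*)."""
    r = (1.0 + math.exp(f)) / q
    s = (2.0 + math.exp(f) + math.exp(-f)) / q
    # conjugate update: pi_t | y^t ~ B(r + y, s - r + n - y)
    f_star = digamma(r + y) - digamma(s - r + n - y)
    q_star = trigamma(r + y) + trigamma(s - r + n - y)
    return f_star, q_star
```

For a symmetric prior ($f_t=0$) and an observation at the centre of the support ($y_t=n_t/2$), the posterior mean of the logit remains zero, as expected.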
Given information $y^t$, the $\ell$-step forecast distribution is
obtained by first noting that
\begin{equation}\label{eq:binom2}
\pi_{t+\ell}|y^t\sim B(r_t(\ell),s_t(\ell)-r_t(\ell)),
\end{equation}
where $r_t(\ell)$ and $s_t(\ell)$ are given by $r_t$ and $s_t$, if
$f_t$ and $q_t$ are replaced by $f_t(\ell)=\mathbb{E}(\eta_{t+\ell}|y^t)$
and $q_t(\ell)=\text{Var}(\eta_{t+\ell}|y^t)$, which are calculated
routinely by the Kalman filter (see Section \ref{dglm}). Then the
$\ell$-step forecast distribution is given by
\begin{eqnarray*}
p(y_{t+\ell}|y^t) &=& \frac{ \Gamma(s_t(\ell)) } { \Gamma(r_t(\ell))
\Gamma(s_t(\ell)-r_t(\ell))\Gamma(s_t(\ell)+n_{t+\ell}) } \\ &&
\times \binom {n_{t+\ell}} {y_{t+\ell}}
\Gamma(r_t(\ell)+y_{t+\ell})
\Gamma(s_t(\ell)-r_t(\ell)+n_{t+\ell}-y_{t+\ell} ),
\end{eqnarray*}
which is a beta-binomial distribution.
We can use conditional expectations in order to calculate the
forecast mean and variance, i.e.
$$
y_t(\ell)=\mathbb{E}(y_{t+\ell}|y^t)=\mathbb{E}(\mathbb{E}(y_{t+\ell}|\pi_{t+\ell})|y^t) =
\frac{ n_{t+\ell}\,r_t(\ell) }{ s_t(\ell) }
$$
and
\begin{eqnarray*}
\text{Var}(y_{t+\ell}|y^t) &=& \mathbb{E}(\text{Var}(y_{t+\ell}|\pi_{t+\ell})|y^t) +
\text{Var}(\mathbb{E}(y_{t+\ell}|\pi_{t+\ell})|y^t) \\ &=& \frac{ n_{t+\ell}\,
r_t(\ell)\,(s_t(\ell)-r_t(\ell))\,(s_t(\ell)+n_{t+\ell}) }{
s_t(\ell)^2\,(s_t(\ell)+1) }.
\end{eqnarray*}
For the specification of $r_t$ and $s_t$, we can alternatively use
power discounting (see Section \ref{dglm}). This yields
$$
r_{t+1}=\delta r_t + \delta y_t +1-\delta \quad \textrm{and} \quad
s_{t+1} = \delta s_t+\delta n_t + 2-2\delta,
$$
where $\delta$ is a discount factor and $r_0,s_0$ are initially
given.
For the evolution of $\eta_t$ via $\theta_t$, the obvious setting is
the random walk, which sets $\eta_t=\theta_t=\theta_{t-1}+\omega_t$.
From the logit link we have $\pi_t/(1-\pi_t)=\textrm{exp}(\theta_t)$
and so the evolution of $\theta_t$ yields
$$
\pi_t=\frac{ \textrm{exp}(\omega_t)\pi_{t-1}} {
1-\pi_{t-1}+\textrm{exp}(\omega_t)\pi_{t-1}},
$$
which gives the evolution of $\pi_t$, given $\pi_{t-1}$, as a
function of the Gaussian shock $\omega_t$. Then the distribution of
$\pi_t|\pi_{t-1}$ is
$$
p(\pi_t|\pi_{t-1})=\frac{1}{\sqrt{2\pi\Omega} \pi_t(1-\pi_t)}
\exp\left(-\frac{1}{2\Omega}\left(\log\frac{\pi_t(1-\pi_{t-1})}{
\pi_{t-1}(1-\pi_t)}\right)^2\right)
$$
and so from (\ref{logl}) the log-likelihood function is
\begin{eqnarray*}
\ell(\pi_1,\ldots,\pi_T;y^T) &=& \sum_{t=1}^T \bigg( y_t \log\pi_t
-y_t\log(1-\pi_t)+n_t\log(1-\pi_t)+\log\binom {n_t}{y_t} \\ &&
-\log\left(\sqrt{2\pi\Omega}\,\pi_t(1-\pi_t)\right)
-\frac{1}{2\Omega}\left(\log\frac{\pi_t(1-\pi_{t-1})}{\pi_{t-1}(1-\pi_t)}\right)^2
\bigg).
\end{eqnarray*}
The Bayes factors are easily computed from (\ref{bf1}) and the
forecast distribution $p(y_{t+\ell}|y^t)$.
If we use a linear trend evolution on $\theta_t$, we can specify
$$
\eta_t=[1~0]\left[\begin{array}{c} \theta_{1t} \\
\theta_{2t}\end{array}\right] \quad \textrm{and} \quad \left[\begin{array}{c} \theta_{1t} \\
\theta_{2t}\end{array}\right] = \left[\begin{array}{cc} 1 & 1 \\ 0 &
1
\end{array}\right] \left[\begin{array}{c} \theta_{1,t-1} \\
\theta_{2,t-1}\end{array}\right] + \left[\begin{array}{c} \omega_{1t} \\
\omega_{2t}\end{array}\right].
$$
Here $\theta_t=[\theta_{1t}~\theta_{2t}]'$ is a 2-dimensional random
vector and $\omega_t=[\omega_{1t}~\omega_{2t}]'$ follows a bivariate
normal distribution with zero mean vector and some known covariance
matrix. Then, conditional on $\pi_{t-1}$, from the logit link
function we can recover $\pi_t$ as
$$
\pi_t=\frac{ \textrm{exp}(\theta_{2,0} + \sum_{i=1}^{t-1}
\omega_{2i}+\omega_{1t} ) \pi_{t-1} } { 1- \pi_{t-1} + \textrm{exp}
(\theta_{2,0}+\sum_{i=1}^{t-1}\omega_{2i} +\omega_{1t})\pi_{t-1}}.
$$
To illustrate the binomial model, we consider the data of Godolphin
and Triantafyllopoulos (2006), consisting of quarterly binomial data
over a period of 11 years. In each quarter $n_t=25$ Bernoulli trials
are performed and $y_t$, the number of successes, is recorded. The
data, which are plotted in Figure \ref{fig1}, show clear
seasonality, and therefore modelling these data with GLMs is
inappropriate. The data exhibit a trend/periodic pattern, which can
be modelled with a DGLM, by setting $\eta_t=F'\theta_t$ and
$\theta_t=G\theta_{t-1}+\omega_t$, where the design vector $F$ has
dimension $5\times 1$ and the $5\times 5$ evolution matrix $G$
comprises a linear trend component and a seasonal component. One way
to do this is by applying the trend / full harmonic state space
model
$$
F=\left[ \begin{array}{c} 1 \\ 0 \\ 1 \\ 0 \\ 1
\end{array}\right] \quad \textrm{and} \quad G=\left[ \begin{array}{ccccc} 1 & 1 & 0 &
0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & \cos (\pi/2) & \sin (\pi/2) & 0 \\
0 & 0 & -\sin (\pi/2) & \cos (\pi/2) & 0 \\ 0 & 0 & 0 & 0 & -1
\end{array}\right],
$$
where $G$ is a block diagonal matrix, comprising the linear trend
component and the seasonal component, for the latter of which, with
a cycle of $c=4$, we have $h=c/2=2$ harmonics and the frequencies
are $\omega=2\pi/4=\pi/2$ for harmonic 1 and $\omega=4\pi/4=\pi$ for
harmonic 2 (the Nyquist frequency). Similar models, with Gaussian
responses, are described in West and Harrison (1997), and Harvey
(2004). The covariance matrix $\Omega$ of $\omega_t$ is set as the
block diagonal matrix $\Omega=\textrm{block
diag}(\Omega_1,\Omega_2)$, where $\Omega_1=1000 I_2$ corresponds to
the linear trend component, $\Omega_2=100I_3$ corresponds to the
seasonal component and it is chosen so that the trend has more
variability than the seasonal component (West and Harrison, 1997).
The priors $m_0$ and $P_0$ are set as $m_0=[0~0~0~0~0]'$ and
$P_0=1000I_5$, suggesting a weakly informative prior specification.
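The construction of $F$ and $G$ above can be automated for any even cycle length $c$; the following illustrative Python function (our own naming, not tied to any package) assembles the trend plus full-harmonic design vector and block-diagonal evolution matrix:

```python
import math

def trend_harmonic_model(c=4):
    # Linear trend block plus full harmonics for an even cycle length c:
    # h = c/2 harmonics; each non-Nyquist harmonic contributes a 2x2
    # rotation block and the Nyquist frequency (omega = pi) contributes [-1].
    blocks = [[[1.0, 1.0], [0.0, 1.0]]]    # linear trend component
    F = [1.0, 0.0]
    for j in range(1, c // 2 + 1):
        w = 2.0 * math.pi * j / c
        if abs(w - math.pi) < 1e-12:       # Nyquist frequency
            blocks.append([[-1.0]])
            F.append(1.0)
        else:
            cw, sw = math.cos(w), math.sin(w)
            blocks.append([[cw, sw], [-sw, cw]])
            F.extend([1.0, 0.0])
    p = sum(len(b) for b in blocks)
    G = [[0.0] * p for _ in range(p)]      # assemble block-diagonal G
    off = 0
    for b in blocks:
        for i, row in enumerate(b):
            for j2, val in enumerate(row):
                G[off + i][off + j2] = val
        off += len(b)
    return F, G
```

With $c=4$ this reproduces the $5\times 1$ vector $F$ and the $5\times 5$ matrix $G$ displayed above.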
Figure \ref{fig1} plots the one-step forecast mean of $\{y_t\}$
against $\{y_t\}$. We see that the forecasts fit the data very
closely, suggesting a good model fit.
\begin{figure}[t]
\epsfig{file=fig1.ps, height=10cm, width=15cm}
\caption{Binomial data of 25 Bernoulli trials (solid line) and
one-step forecast mean (dashed line).}\label{fig1}
\end{figure}
\subsubsection{Poisson}
In the context of generalized linear models, the Poisson
distribution (Johnson {\it et al.}, 2005) is associated with
modelling count data (Dobson, 2002). In a time series setting count
data are developed as in Jung {\it et al.} (2006).
Suppose that $\{y_t\}$ is a count time series, so that, for a
positive real-valued $\lambda_t>0$, $y_t|\lambda_t$ follows the
Poisson distribution, with density
$$
p(y_t|\lambda_t)=\textrm{exp}(-\lambda_t)\frac{\lambda_t^{y_t}}{y_t!},
\quad y_t=0,1,2,\ldots; \quad \lambda_t>0,
$$
where $y_t!$ denotes the factorial of $y_t$.
We can easily verify that this density is of the form (\ref{exp}),
with $z(y_t)=y_t$, $a(\phi_t)=\phi_t=1$, $\gamma_t=\log\lambda_t$,
$b(\gamma_t)=\textrm{exp}(\gamma_t)$, and $c(y_t,\phi_t)=1/y_t!$. We
can see that
$\mathbb{E}(y_t|\lambda_t)=\,db(\gamma_t)/\,d\gamma_t=\textrm{exp}(\gamma_t)=\lambda_t$
and
$\text{Var}(y_t|\lambda_t)=\,d^2b(\gamma_t)/\,d\gamma_t^2=\textrm{exp}(\gamma_t)=\lambda_t$.
From the prior of $\gamma_t|y^{t-1}$ and the transformation
$\gamma_t=\log\lambda_t$, we obtain the prior of $\lambda_t|y^{t-1}$
as a gamma distribution, i.e. $\lambda_t|y^{t-1}\sim G(r_t,s_t)$,
with density
$$
p(\lambda_t|y^{t-1})=\frac{s_t^{r_t}}{\Gamma(r_t)} \lambda_t^{r_t-1}
\textrm{exp}(-s_t\lambda_t),
$$
for $r_t,s_t>0$. Then it follows that the posterior of $\lambda_t$
is the gamma $G(r_t+y_t,s_t+1)$.
For the definition of $r_t$ and $s_t$ we use the logarithmic link
$g(\lambda_t)=\log\lambda_t=\eta_t=F'\theta_t$ or
$\lambda_t=\textrm{exp}(F'\theta_t)$. Based on an evaluation of the
mean and variance of $\log\lambda_t$ and a numerical approximation
of the digamma function (see appendix), we can see
\begin{equation}\label{eq:poisson:rt}
r_t=\frac{1}{q_t} \quad \textrm{and} \quad
s_t=\frac{\exp(-f_t)}{q_t},
\end{equation}
where $f_t$ and $q_t$ are the mean and variance of $\eta_t$.
For the computation of $f_t^*$ and $q_t^*$, the posterior mean and
variance of $\gamma_t$, first define the digamma function $\psi(.)$
as $\psi(x)=\,d\log\Gamma(x)/\,dx$, where $\Gamma(.)$ denotes the
gamma function and of course $x>0$. Then we have
$$
f_t^*=\psi(r_t+y_t)-\log (s_t+1) \quad \textrm{and} \quad
q_t^*=\left.\frac{\,d\psi(x)}{\,dx}\right|_{x=r_t+y_t},
$$
which can be computed by the recursions $\psi(x)=\psi(x+1)-x^{-1}$
and $\,d\psi(x)/\,dx=\,d\psi(x+1)/\,dx+x^{-2}$. Using the asymptotic
approximations $\psi(x)\approx\log x-(2x)^{-1}$ and
$\,d\psi(x)/\,dx\approx x^{-1}(1+(2x)^{-1})$, we can write
$$
f_t^*\approx \log \frac{r_t+y_t}{s_t+1}-\frac{1}{2(r_t+y_t)} \quad
\textrm{and} \quad q_t^*\approx\frac{2(r_t+y_t)+1}{2(r_t+y_t)^2}.
$$
With $r_t$, $s_t$, $f_t^*$ and $q_t^*$ we can compute the first two
moments of $\theta_t|y^t$ as in (\ref{post:th1}). For a detailed
discussion on digamma functions the reader is referred to Abramowitz
and Stegun (1964, \S6.3).
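A minimal sketch of the resulting Poisson updating step, using the standard asymptotic expansions $\psi(x)\approx\log x-1/(2x)$ and $\psi'(x)\approx 1/x+1/(2x^2)$ (the function name is ours):

```python
import math

def poisson_step(f, q, y):
    # gamma prior G(r, s) for lambda_t implied by the moments (f, q)
    # of eta_t = log(lambda_t)
    r = 1.0 / q
    s = math.exp(-f) / q
    # conjugate update: posterior is G(r + y, s + 1); posterior moments
    # of log(lambda_t) via psi(x) ~ log x - 1/(2x), psi'(x) ~ 1/x + 1/(2x^2)
    f_star = math.log((r + y) / (s + 1.0)) - 1.0 / (2.0 * (r + y))
    q_star = (2.0 * (r + y) + 1.0) / (2.0 * (r + y) ** 2)
    return f_star, q_star
```

For example, with $f_t=0$, $q_t=0.5$ and an observation $y_t=3$ above the prior forecast mean, the posterior mean of $\log\lambda_t$ moves upwards and the posterior variance shrinks below $q_t$.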
Defining $r_t(\ell)$ and $s_t(\ell)$ according to $f_t(\ell)$ and
$q_t(\ell)$ and equation (\ref{eq:poisson:rt}), the $\ell$-step
forecast distribution of $y_{t+\ell}|y^t$ is given by
$$
p(y_{t+\ell}|y^t)=\binom{r_t(\ell)+y_{t+\ell}-1}{y_{t+\ell}} \left(
\frac{s_t(\ell)}{1+s_t(\ell)}\right)^{r_t(\ell)} \left(
\frac{1}{1+s_t(\ell)}\right)^{y_{t+\ell}},
$$
which is a negative binomial distribution. The forecast mean and
variance can be calculated by using conditional expectations, i.e.
$$
y_t(\ell)=\mathbb{E}(y_{t+\ell}|y^t)=\mathbb{E}(\mathbb{E}(y_{t+\ell}|\lambda_{t+\ell})|y^t)=
\frac{r_t(\ell)}{s_t(\ell)}
$$
and
$$
\text{Var}(y_{t+\ell}|y^t)=\mathbb{E}(\text{Var}(y_{t+\ell}|\lambda_{t+\ell})|y^t) +
\text{Var}(\mathbb{E}(y_{t+\ell}|\lambda_{t+\ell})|y^t) =
\frac{r_t(\ell)(s_t(\ell)+1)}{(s_t(\ell))^2}.
$$
The power discounting yields
$$
r_{t+1}=\delta (r_t+y_t)+1-\delta \quad \textrm{and} \quad
s_{t+1}=\delta(s_t+1).
$$
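The negative binomial forecast distribution and its moments can be evaluated as follows; an illustrative sketch that uses log-gamma functions so that non-integer $r_t(\ell)$ is handled correctly (function names are ours):

```python
import math

def poisson_forecast_pmf(f_l, q_l, y):
    # ell-step forecast of the Poisson DGLM: lambda_{t+l}|y^t ~ G(r, s)
    # with r, s implied by the mean f_l and variance q_l of eta_{t+l}|y^t;
    # marginally y_{t+l}|y^t is negative binomial
    r = 1.0 / q_l
    s = math.exp(-f_l) / q_l
    # generalized binomial coefficient via log-gamma (r need not be integer)
    log_coef = math.lgamma(r + y) - math.lgamma(r) - math.lgamma(y + 1.0)
    log_p = log_coef + r * math.log(s / (1.0 + s)) - y * math.log(1.0 + s)
    return math.exp(log_p)

def poisson_forecast_moments(f_l, q_l):
    # forecast mean r/s and variance r(s + 1)/s^2
    r = 1.0 / q_l
    s = math.exp(-f_l) / q_l
    return r / s, r * (s + 1.0) / s ** 2
```

Note that the forecast mean equals $\exp(f_t(\ell))$ and the variance always exceeds the mean, reflecting the overdispersion induced by the gamma mixing.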
Considering the random walk evolution for $\theta_t$ so that
$\eta_t=\theta_t=\theta_{t-1}+\omega_t$, where $\omega_t\sim
N(0,\Omega)$, for some variance $\Omega$, we can see that
$$
\lambda_t=\textrm{exp}(\omega_t)\lambda_{t-1},
$$
since $\log\lambda_t=\eta_t=\theta_t$. Then from the normal
distribution of $\omega_t$, the distribution of
$\lambda_t|\lambda_{t-1}$ is
$$
p(\lambda_t|\lambda_{t-1}) = \frac{1}{\sqrt{2\pi\Omega}\lambda_t}
\textrm{exp}\left( -
\frac{(\log\lambda_t-\log\lambda_{t-1})^2}{2\Omega}\right),
$$
which is a log-normal distribution (see Section \ref{continuous}).
Then from (\ref{logl}) the log-likelihood function is
$$
\ell(\lambda_1,\ldots,\lambda_T;y^T) = \sum_{t=1}^T \left(
y_t\log\lambda_t -\lambda_t-\log y_t!
-\log\left(\sqrt{2\pi\Omega}\,\lambda_t\right) -
\frac{(\log\lambda_t-\log\lambda_{t-1})^2}{2\Omega}\right).
$$
Bayes factors can be calculated using (\ref{bf1}) and the negative
binomial one-step ahead forecast probability functions
$p(y_{t+1}|y^t)$.
In order to illustrate the Poisson model we consider US annual
immigration data over the period 1820 to 1960. The data, which are
described in Kendall and Ord (1990, page 13), are shown in Figure
\ref{fig2}. The nature of the data fits the assumption of a
Poisson distribution, although it can be argued that, after a
suitable transformation, some Gaussian time series model could be
appropriate. The data are non-stationary and a visual inspection
shows that they exhibit local level behaviour. One simple model to
consider is the random walk evolution of $\eta_t=\theta_t$ as
described above. We use power discounting with $\delta=0.5$, a low
discount factor capable of capturing the peak values of the data.
Figure \ref{fig2} shows the one-step forecast mean against the
actual data; as we see, the forecasts capture the immigration data
well.
\begin{figure}[t]
\epsfig{file=fig2.ps, height=10cm, width=15cm}
\caption{US annual immigration in thousands (solid line) and
one-step forecast mean (dashed line).}\label{fig2}
\end{figure}
\subsubsection{Negative binomial and geometric}
The negative binomial distribution (Johnson {\it et al.}, 2005)
arises in many practical situations and it can be generated via
independent Bernoulli trials or via the Poisson/gamma mixture. In
time series analysis, an application of negative binomial responses
is given in Houseman {\it et al.} (2006). We note that the negative
binomial distribution includes the geometric as a special case (see
below).
Suppose that the time series $\{y_t\}$ is generated from the
negative binomial distribution, with probability function
$$
p(y_t|\pi_t)=\binom{y_t+n_t-1}{n_t-1}\pi_t^{n_t}(1-\pi_t)^{y_t},
\quad y_t=0,1,2,\ldots; \quad 0<\pi_t<1,
$$
where $\pi_t$ is the probability of success and $n_t$ is the number
of successes. This distribution belongs to the exponential family
(\ref{exp}), with $z(y_t)=y_t$, $a(\phi_t)=\phi_t=1$,
$\gamma_t=\log(1-\pi_t)$,
$b(\gamma_t)=-n_t\log(1-\textrm{exp}(\gamma_t))$, and
$c(y_t,\phi_t)=\binom{y_t+n_t-1}{n_t-1}$. Then it follows that
$\mathbb{E}(y_t|\pi_t)=\,db(\gamma_t)/\,d\gamma_t=n_t(1-\pi_t)/\pi_t$ and
$\text{Var}(y_t|\pi_t)=\,d^2b(\gamma_t)/\,d\gamma_t^2=n_t(1-\pi_t)/\pi_t^2$.
We note that by setting $n_t=1$ and $x_t=y_t+1$, the time series
$\{x_t\}$ follows a geometric distribution and thus all that follows
applies readily to the geometric distribution too.
By using the prior of $\gamma_t|y^{t-1}$ and the transformation
$\gamma_t=\log(1-\pi_t)$, the prior of $\pi_t|y^{t-1}$ is the beta
distribution $\pi_t|y^{t-1}\sim B(n_ts_t+1,r_t)$ and the posterior
of $\pi_t|y^t$ is the beta $\pi_t|y^t\sim B(n_ts_t+n_t+1,r_t+y_t)$.
Using the logit link, as in the binomial example, the definitions of
$r_t$ and $s_t$ are
$$
r_t=\frac{1+\exp(-f_t)}{q_t}\quad\textrm{and}\quad
s_t=\frac{1+\exp(f_t)-q_t}{n_tq_t}
$$
and the posterior moments $f_t^*$ and $q_t^*$ are
$$
f_t^*=\psi(n_ts_t+n_t+1)-\psi(r_t+y_t) \quad \textrm{and} \quad
q_t^*=\left.\frac{\,d\psi(x)}{\,dx}\right|_{x=n_ts_t+n_t+1} +
\left.\frac{\,d\psi(x)}{\,dx}\right|_{x=r_t+y_t},
$$
which can be approximated, using $\psi(x)\approx\log x-(2x)^{-1}$
and $\,d\psi(x)/\,dx\approx x^{-1}(1+(2x)^{-1})$, by
$$
f_t^*\approx\log \frac{n_ts_t+n_t+1}{r_t+y_t} -
\frac{1}{2(n_ts_t+n_t+1)}+\frac{1}{2(r_t+y_t)}
$$
and
$$
q_t^*\approx
\frac{2(n_ts_t+n_t+1)+1}{2(n_ts_t+n_t+1)^2}+\frac{2(r_t+y_t)+1}{2(r_t+y_t)^2}.
$$
Thus we can compute the moments of $\theta_t|y^t$ as in
(\ref{post:th1}) and so we obtain an approximation of the quantities
$r_t(\ell)$ and $s_t(\ell)$, as functions of $f_t(\ell)$ and
$q_t(\ell)$.
The $\ell$-step forecast distribution is given by
$$
p(y_{t+\ell}|y^t) = \frac{\Gamma(n_{t+\ell}s_t(\ell)+r_t(\ell)+1)
\Gamma(r_t(\ell)+y_{t+\ell})
\Gamma(n_{t+\ell}s_t(\ell)+n_{t+\ell}+1) }{ \Gamma(r_t(\ell))
\Gamma(n_{t+\ell}s_t(\ell)+1)
\Gamma(r_t(\ell)+y_{t+\ell}+n_{t+\ell}s_t(\ell)+n_{t+\ell}+1) }
\binom{y_{t+\ell}+n_{t+\ell}-1}{n_{t+\ell}-1}.
$$
The forecast mean and variance of $y_{t+\ell}$ are given by
$$
y_t(\ell)=\mathbb{E}(y_{t+\ell}|y^t)=\mathbb{E}(\mathbb{E}(y_{t+\ell}|\pi_{t+\ell})|y^t)=\frac{r_t(\ell)}{s_t(\ell)}
$$
and
\begin{eqnarray*}
\text{Var}(y_{t+\ell}|y^t)&=&\mathbb{E}(\text{Var}(y_{t+\ell}|\lambda_{t+\ell})|y^t) +
\text{Var}(\mathbb{E}(y_{t+\ell}|\lambda_{t+\ell})|y^t) \\ &=&
\frac{(r_t(\ell)+n_{t+\ell}s_t(\ell))(r_t(\ell)+n_{t+\ell}r_t(\ell)+n_{t+\ell}^2
s_t(\ell)-n_{t+\ell})}{s_t(\ell)(n_{t+\ell}s_t(\ell)-1)} -
\frac{r_t(\ell)^2}{n_{t+\ell}^2s_t(\ell)^2}.
\end{eqnarray*}
The power discounting yields
$$
r_{t+1}=\delta(r_t+y_t-1)+1 \quad \textrm{and} \quad
s_{t+1}=\frac{\delta(n_ts_t+n_t)}{n_{t+1}},
$$
where as usual $\delta$ is a discount factor.
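The discount recursions above follow from raising the beta posterior density to the power $\delta$ and renormalising; a small sketch (function names are ours):

```python
def power_discount_beta(a, b, delta):
    # Raising a Beta(a, b) density to the power delta and renormalising
    # gives another beta density: Beta(delta*(a-1)+1, delta*(b-1)+1).
    return delta * (a - 1.0) + 1.0, delta * (b - 1.0) + 1.0

def nb_discount_recursions(r, s, y, n, n_next, delta):
    # The posterior of pi_t is Beta(n*s + n + 1, r + y); discounting it
    # gives the next prior Beta(n_next*s_next + 1, r_next).
    a_post, b_post = n * s + n + 1.0, r + y
    a_new, b_new = power_discount_beta(a_post, b_post, delta)
    s_next = (a_new - 1.0) / n_next      # = delta*(n*s + n)/n_next
    r_next = b_new                       # = delta*(r + y - 1) + 1
    return r_next, s_next
```

The returned values agree with the displayed recursions for $r_{t+1}$ and $s_{t+1}$.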
Considering the random walk evolution for
$\eta_t=\theta_t=\theta_{t-1}+\omega_t$, the link
$\eta_t=\log(\mu_t/n_t)=\log((1-\pi_t)/\pi_t)$ yields the evolution
for $\pi_t$
\begin{equation}\label{nb:pi}
\pi_t=\frac{\pi_{t-1}}{\pi_{t-1}+\exp(\omega_t)-\pi_{t-1}\exp(\omega_t)}.
\end{equation}
Given that $\omega_t\sim N(0,\Omega)$, for a known variance
$\Omega$, the distribution of $\pi_t|\pi_{t-1}$ is
$$
p(\pi_t|\pi_{t-1})=\frac{1}{\sqrt{2\pi\Omega}\pi_t(1-\pi_t)}
\exp\left(-\frac{1}{2\Omega}\left(\log\frac{\pi_{t-1}(1-\pi_t)}{\pi_t(1-\pi_{t-1})
}\right)^2\right)
$$
and so from (\ref{logl}) the log-likelihood function is
\begin{eqnarray*}
\ell(\pi_1,\ldots,\pi_T;y^T) &=& \sum_{t=1}^T \bigg(
y_t\log(1-\pi_t) +n_t\log\pi_t + \log \binom {y_t+n_t-1}{n_t-1} \\
&& -\log\left(\sqrt{2\pi\Omega}\,\pi_t(1-\pi_t)\right) -\frac{1}{2\Omega}\left(
\log\frac{\pi_{t-1}(1-\pi_t)}{\pi_t(1-\pi_{t-1})}\right)^2\bigg).
Bayes factors can be computed using (\ref{bf1}) and the predictive
distribution $p(y_{t+1}|y^t)$.
\begin{figure}[t]
\epsfig{file=negbinom.ps, height=10cm, width=15cm}
\caption{Negative binomial simulated data (solid line) and
one-step forecast mean (dashed line).}\label{fig2a}
\end{figure}
To illustrate the above model we have simulated 100 observations
from it; we simulate one draw from $\pi_0\sim B(2,1)$, so that
$\mathbb{E}(\pi_0)=2/3$, we simulate 100 innovations
$\omega_1,\ldots,\omega_{100}$ from a $N(0,1)$, then using
(\ref{nb:pi}) we generate $\pi_1,\ldots,\pi_{100}$ and finally, for
each time $t$, we simulate one draw from a negative binomial with
parameters $n_t=n=10$ and $\pi_t$. Figure \ref{fig2a} shows the
simulated data (solid line) together with the one-step ahead
forecast means $r_t/s_t$. For the fit, we pretend that we have no
knowledge of the simulation process and so we have specified
$F=[1~0]'$, $G=\Omega=I_2$ (the $2\times 2$ identity matrix),
$m_0=[0~0]'$, and $P_0=1000I_2$, the last indicating a weakly
informative prior specification (i.e. $P_0^{-1}\approx 0$). We
observe that the forecasts follow the data closely, indicating a
good fit. We have also found that, as is well known for Gaussian
time series, the forecasts are insensitive to these prior settings,
since the influence of the prior decays with time.
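The simulation just described can be reproduced as follows; this is an illustrative sketch (the seed, the function name and the Bernoulli-loop negative binomial sampler are our own choices):

```python
import math
import random

def simulate_nb_series(T=100, n=10, omega_var=1.0, seed=42):
    rng = random.Random(seed)
    pi = rng.betavariate(2, 1)          # draw pi_0 ~ Beta(2, 1)
    ys, pis = [], []
    for _ in range(T):
        w = rng.gauss(0.0, math.sqrt(omega_var))
        # random-walk evolution of the state, equation (nb:pi)
        pi = pi / (pi + math.exp(w) - pi * math.exp(w))
        # negative binomial draw: number of failures before the n-th success
        failures, successes = 0, 0
        while successes < n:
            if rng.random() < pi:
                successes += 1
            else:
                failures += 1
        ys.append(failures)
        pis.append(pi)
    return ys, pis
```

The series $\{y_t\}$ produced in this way can then be passed through the filtering recursions above to reproduce an experiment of the kind shown in Figure \ref{fig2a}.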
\subsection{Continuous distributions for the response $y_t$}\label{continuous}
\subsubsection{Normal}
Normal or Gaussian time series are discussed extensively in the
literature, see e.g. West and Harrison (1997) for a Bayesian
treatment of Gaussian state-space models. Here we discuss Gaussian
responses in the DGLM setting, for completeness purposes, but also
because the normal distribution has many similarities with the
log-normal distribution that follows.
Suppose that $\{y_t\}$ is a time series generated from a normal
distribution, i.e. $y_t|\mu_t\sim N(\mu_t,V)$, with density
$$
p(y_t|\mu_t)=\frac{1}{\sqrt{2\pi V}}
\exp\left(-\frac{(y_t-\mu_t)^2}{2V}\right), \quad
-\infty<y_t,\mu_t<\infty; \quad V>0,
$$
where $\mu_t$ is the level of $y_t$. The variance $V$ of the process
can be time-varying, but for simplicity here, we assume it
time-invariant. Here, this variance is assumed known, while $\mu_t$
is assumed unknown. If $V$ is unknown, Bayesian inference is
possible by assuming that $1/V$ follows a gamma distribution and
this model leads to a conjugate analysis (resulting in a posterior
gamma distribution for $1/V$ and a Student $t$ forecast
distribution for $y_{t+\ell}$). This model is examined
in detail in West and Harrison (1997, Chapter 4). Returning to the
above normal density, we can easily see that $p(y_t|\mu_t)$ is of
the form of (\ref{exp}), with $z(y_t)=y_t$,
$a(\phi_t)=\phi_t^{-1}=V$, $\gamma_t=\mu_t$,
$b(\gamma_t)=\gamma_t^2/2$ and $c(y_t,\phi_t)=(2\pi
V)^{-1/2}\exp(-y_t^2/(2V))$.
The prior for $\mu_t|y^{t-1}$ is the normal distribution
$\mu_t|y^{t-1}\sim N(r_ts_t^{-1},s_t^{-1})$ and the posterior of
$\mu_t|y^t$ is the normal distribution
$$
\mu_t|y^t\sim
N\left(\frac{r_t+V^{-1}y_t}{s_t+V^{-1}},\frac{1}{s_t+V^{-1}}\right).
$$
The link function is the identity link, i.e. $g(\mu_t)=\mu_t$ and so
we have $\mu_t=\eta_t=F'\theta_t$, which implies $r_t=f_t/q_t$ and
$s_t=1/q_t$. By replacing these quantities in the above prior and
posterior densities, we can verify the Kalman filter recursions.
It turns out that the $\ell$-step forecast distribution is also a
normal distribution, i.e.
$$
y_{t+\ell}|y^t \sim N\left(
\frac{r_t(\ell)}{s_t(\ell)},V+\frac{1}{s_t(\ell)}\right).
$$
The power discounting yields
$$
r_{t+1}=\delta^2(r_t+V^{-1}y_t) \quad \textrm{and} \quad
s_{t+1}=\delta^2(s_t+V^{-1}).
$$
Adopting the random walk evolution for
$\theta_t=\theta_{t-1}+\omega_t$, from the identity link
$\mu_t=\eta_t=\theta_t$, we have that $\mu_t|\mu_{t-1}\sim
N(\mu_{t-1},\Omega)$, where $\omega_t\sim N(0,\Omega)$. From
(\ref{logl}) the log-likelihood function is
$$
\ell(\mu_1,\ldots,\mu_T;y^T)=\sum_{t=1}^T \left(
\frac{1}{2V}(2y_t\mu_t-\mu_t^2)
-\log\sqrt{4\pi^2V\Omega}-\frac{y_t^2}{2V}-\frac{(\mu_t-\mu_{t-1})^2}{2\Omega}\right).
$$
Bayes factors can be easily computed from the forecast density
$p(y_{t+1}|y^t)$ and the Bayes factor formula (\ref{bf1}).
\subsubsection{Log-normal}
The log-normal distribution has many applications, e.g. in
statistics (Johnson {\it et al.}, 1994), in economics (Aitchison and
Brown, 1957), and in life sciences (Limpert {\it et al.}, 2001).
Suppose that the time series $\{y_t\}$ is generated from a
log-normal distribution, with density
$$
p(y_t|\lambda_t)=\frac{1}{\sqrt{2\pi V}} \exp\left(-\frac{(\log
y_t-\lambda_t)^2}{2V}\right),\quad y_t>0; \quad -\infty
<\lambda_t<\infty; \quad V>0,
$$
where $\log y_t | \lambda_t \sim N(\lambda_t,V)$. We will write
$y_t|\lambda_t\sim LogN(\lambda_t,V)$. This distribution is of the
form of (\ref{exp}), with $z(y_t)=\log y_t$,
$a(\phi_t)=\phi_t^{-1}=V$, $\gamma_t=\lambda_t$,
$b(\gamma_t)=\gamma_t^2/2$ and $c(y_t,\phi_t)=(2\pi
V)^{-1/2}y_t^{-1}\exp(-(\log y_t)^2/(2V))$.
From the normal part we can see
$$
\mathbb{E}(\log
y_t|\lambda_t)=\frac{\,db(\gamma_t)}{\,d\gamma_t}=\lambda_t
$$
and from the log-normal part we can see
$$
\mathbb{E}(y_t|\lambda_t)=\exp(\lambda_t+V/2)=\mu_t
$$
from the latter of which the logarithmic link can be suggested,
i.e. $\eta_t=\log\mu_t=\lambda_t+V/2$.
From the normal distribution of $\log y_t$, it follows that the
prior distribution of $\lambda_t|y^{t-1}$ is
$$
\lambda_t|y^{t-1}\sim N\left(\frac{r_t}{s_t},\frac{1}{s_t}\right)
$$
and the posterior distribution of $\lambda_t|y^t$ is
$$
\lambda_t|y^t\sim N\left(\frac{r_t+V^{-1}\log y_t}{s_t+V^{-1}},
\frac{1}{s_t+V^{-1}}\right),
$$
where $r_t$ and $s_t$ are calculated as in the normal case, i.e.
$r_t=f_t/q_t$ and $s_t=1/q_t$. With the definitions of $r_t(\ell)$
and $s_t(\ell)$, we have that the $\ell$-step forecast
distribution of $y_{t+\ell}$ is
$$
y_{t+\ell}|y^t\sim
LogN\left(\frac{r_t(\ell)}{s_t(\ell)},V+\frac{1}{s_t(\ell)}\right).
$$
The forecast mean of $y_{t+\ell}$ is
$$
y_t(\ell)=\mathbb{E}(y_{t+\ell}|y^t)=\exp\left(\frac{r_t(\ell)}{s_t(\ell)}+\frac{1}{2s_t(\ell)}
\right)
\exp\left(\frac{V}{2}\right)=\exp\left(\frac{2f_t(\ell)+q_t(\ell)+V}{2}\right),
$$
where $f_t(\ell)$ and $q_t(\ell)$ are the respective mean and
variance of $\eta_{t+\ell}$, given information $y^t$.
Considering power discounting, the updating of $r_t$ and $s_t$ is
$$
r_{t+1}=\delta^2(r_t+V^{-1}\log y_t) \quad \textrm{and} \quad
s_{t+1}=\delta^2(s_t+V^{-1}).
$$
Adopting the random walk evolution for
$\eta_t=\theta_t=\theta_{t-1}+\omega_t$, the distribution of
$\lambda_t|\lambda_{t-1}$ is normal, i.e.
$\lambda_t|\lambda_{t-1}\sim N(\lambda_{t-1},\Omega)$, where
$\Omega$ is the variance of $\omega_t$. From (\ref{logl}) the
log-likelihood function is obtained as
\begin{eqnarray*}
\ell(\lambda_1,\ldots,\lambda_T;y^T)&=&\sum_{t=1}^T
\bigg(\frac{1}{2V}(2\lambda_t\log y_t-\lambda_t^2) -\log
\sqrt{4\pi^2V\Omega} -\log y_t \\ && - \frac{(\log y_t)^2}{2V} -
\frac{(\lambda_t-\lambda_{t-1})^2}{2\Omega} \bigg).
\end{eqnarray*}
Bayes factors can be calculated from (\ref{bf1}) and the log-normal
predictive density $p(y_{t+1}|y^t)$. As an example, consider the
comparison of two models $\mathcal{M}_1$ and $\mathcal{M}_2$, which
differ in the variances $V_1$ and $V_2$, respectively. Then, by
denoting $r_{1t}$, $s_{1t}$, $r_{2t}$ and $s_{2t}$, the values of
$r_t$, $s_t$, for $\mathcal{M}_j$ $(j=1,2)$, we can express the
logarithm of the Bayes factor $H_t(1)$ as
$$
\log H_t(1) = \frac{1}{2}\log
\frac{V_2+s_{2,t+1}^{-1}}{V_1+s_{1,t+1}^{-1}} + \frac{ (\log y_{t+1}
- r_{2,t+1}s_{2,t+1}^{-1})^2}{2(V_2+s_{2,t+1}^{-1})} - \frac{ (\log
y_{t+1} - r_{1,t+1}s_{1,t+1}^{-1})^2}{2(V_1+s_{1,t+1}^{-1})}.
$$
By comparing $\log H_t(1)$ to 0, we can conclude preference of
$\mathcal{M}_1$ or $\mathcal{M}_2$, i.e. if $\log H_t(1)>0$ we
favour $\mathcal{M}_1$, if $\log H_t(1)<0$ we favour
$\mathcal{M}_2$, while if $\log H_t(1)=0$ the two models are
equivalent, in the sense that they both produce the same one-step
forecast distributions.
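The log Bayes factor above is simply the difference of two log-normal log predictive densities; a small sketch (function names are ours):

```python
import math

def lognormal_log_predictive(y, r, s, V):
    # one-step predictive density: log y ~ N(r/s, V + 1/s)
    m, v = r / s, V + 1.0 / s
    return (-math.log(y) - 0.5 * math.log(2.0 * math.pi * v)
            - (math.log(y) - m) ** 2 / (2.0 * v))

def log_bayes_factor(y, r1, s1, V1, r2, s2, V2):
    # log H_t(1) = log p1(y) - log p2(y); positive values favour model 1
    return (lognormal_log_predictive(y, r1, s1, V1)
            - lognormal_log_predictive(y, r2, s2, V2))
```

Two identical models give a log Bayes factor of exactly zero, and the model whose predictive mean is closer to $\log y_{t+1}$ (at equal predictive variance) is favoured.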
\begin{table}
\caption{Mean square error (MSE) and Log-likelihood function $(\ell(
.))$ for several values of the discount factor $\delta$ for the
log-normal data.}\label{table:logn}
\begin{center}
\begin{tabular}{|c||ccccccccc|}
\hline $\delta$ & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 &
0.99 \\ $MSE$ & 103.75 & 13.34 & 3.16 & \textbf{2.22} & 2.72 & 3.37 & 3.93 & 4.34 & 4.57 \\
$\ell(.)$ & -35.26 & -35.28 & -35.34 & -35.44 & -35.61 & -35.86 &
-36.2 & -36.60 & -36.93 \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}[h]
\epsfig{file=logn.ps, height=10cm, width=15cm}
\caption{Log-normal data (solid line) and one-step forecasts (dotted line) for $\delta=0.5$.}\label{fig:logn}
\end{figure}
To illustrate the above DGLM for log-normal data we consider
production data, consisting of 30 consecutive values of the value
of a product; these data are reported in Morrison (1958). A simple
histogram shows that these data are positively skewed and it can be
argued that the data exhibit local level time series dependence.
Morrison (1958) shows that modelling these data with the normal
distribution can lead to inappropriate control. Here we use the
power discounting approach to update $r_t$ and $s_t$; Table
\ref{table:logn} shows the mean square forecast error (MSE) and the
value of the log-likelihood function evaluated at the posterior mean
$\mathbb{E}(\lambda_t|y^t)$ for a range of values of $\delta$. The result is
that $\delta=0.5$ produces the smallest MSE, while the likelihood
function does not change dramatically. Figure \ref{fig:logn} plots
the one-step forecasts for $\delta=0.5$ against the actual data.
Although the extreme value $y_{29}=9.48$ is poorly predicted, we
conclude that the overall forecast performance of this model is
good, especially given the short length of this time series.
\subsubsection{Gamma}
The gamma distribution (Johnson {\it et al.}, 1994) is perhaps one
of the most widely used continuous distributions, as it can serve as
a model for the variance or precision of a population or experiment.
In particular, in Bayesian inference it is a very popular choice as
the conjugate prior for the inverse of the variance of a linear
conditionally Gaussian model (see also the discussion of the normal
distribution above).
Suppose that $\{y_t\}$ is a time series generated from a gamma
distribution, with density
$$
p(y_t|\alpha_t,\beta_t)=\frac{\beta_t^{\alpha_t}}{\Gamma(\alpha_t)}
y_t^{\alpha_t-1} \exp(-\beta_t y_t), \quad y_t>0; \quad
\alpha_t,\beta_t>0.
$$
This distribution is referred to as $y_t|\alpha_t,\beta_t\sim
G(\alpha_t,\beta_t)$. Our interest is focused on $\beta_t$ and so we
will assume that $\alpha_t$ is known {\it a priori}. Thus we write
$p(y_t|\alpha_t,\beta_t)\equiv p(y_t|\beta_t)$.
The above gamma distribution is of the form of (\ref{exp}), with
$z(y_t)=y_t$, $a(\phi_t)=\phi_t=1$, $\gamma_t=-\beta_t$,
$b(\gamma_t)=-\log ((-\gamma_t)^{\alpha_t}/\Gamma(\alpha_t))$ and
$c(y_t,\phi_t)=y_t^{\alpha_t-1}$.
It follows that
$$
\mathbb{E}(y_t|\beta_t)=\frac{\,db(\gamma_t)}{\,d\gamma_t}=\frac{\alpha_t}
{\beta_t}=\mu_t>0
$$
and
$$
\text{Var}(y_t|\beta_t)=\frac{\,d^2b(\gamma_t)}{\,d\gamma_t^2}=\frac{\alpha_t}
{\beta_t^2}.
$$
The prior and posterior distributions of $\beta_t$ are gamma, i.e.
$\beta_t|y^{t-1}\sim G(\alpha_ts_t+1,r_t)$ and $\beta_t|y^t\sim
G(\alpha_ts_t+\alpha_t+1,r_t+y_t)$.
Since $\mu_t>0$, the logarithmic link is appropriate, i.e.
$g(\mu_t)=\log\mu_t=\eta_t=F'\theta_t$. Then $r_t$ and $s_t$ are
defined in a similar way as in the Poisson case, i.e.
$$
r_t=\frac{\exp(f_t)}{\alpha_tq_t}\quad \textrm{and} \quad
s_t=\frac{1-q_t}{\alpha_tq_t},
$$
where $\alpha_ts_t+1>0$. The posterior moments of $\log\mu_t$ are
given by
$$
f_t^*=\log\alpha_t-\psi(\alpha_ts_t+\alpha_t+1)+\log(r_t+y_t) \quad
\textrm{and} \quad
q_t^*=\left.\frac{\,d\psi(x)}{\,dx}\right|_{x=\alpha_ts_t+\alpha_t+1},
$$
which can be approximated, as in the Poisson case, by
$$
f_t^*\approx \log \frac{\alpha_t(r_t+y_t)}{\alpha_ts_t+\alpha_t+1} +
\frac{1}{2(\alpha_ts_t+\alpha_t+1)} \quad \textrm{and} \quad q_t^*
\approx \frac{2(\alpha_ts_t+\alpha_t+1)+1}{2(\alpha_ts_t+\alpha_t+1)^2}.
$$
With the definition of $r_t(\ell)$ and $s_t(\ell)$, the
$\ell$-step forecast distribution is
$$
p(y_{t+\ell}|y^t)=\frac{r_t(\ell)^{\alpha_{t+\ell}s_t(\ell)+1}
\Gamma(\alpha_{t+\ell}s_t(\ell)+\alpha_{t+\ell}+1) }{
\Gamma(\alpha_{t+\ell}s_t(\ell)+1) \Gamma(\alpha_{t+\ell})}
y_{t+\ell}^{\alpha_{t+\ell}-1}(r_t(\ell)+y_{t+\ell})^{-(
\alpha_{t+\ell}s_t(\ell)+\alpha_{t+\ell}+1)}.
$$
The mean and variance of this distribution can be obtained by
conditional expectations, i.e.
$$
y_t(\ell)=\mathbb{E}(y_{t+\ell}|y^t)=\mathbb{E}(\mathbb{E}(y_{t+\ell}|\beta_{t+\ell})|y^t)=\frac{r_t(\ell)}{
s_t(\ell)}
$$
and
$$
\text{Var}(y_{t+\ell}|y^t) = \mathbb{E}(\text{Var}(y_{t+\ell}|\beta_{t+\ell})|y^t)+
\text{Var}(\mathbb{E}(y_{t+\ell}|\beta_{t+\ell})|y^t) =
\frac{r_t(\ell)^2(s_t(\ell)+1)}{s_t(\ell)^2(\alpha_{t+\ell}s_t(\ell)
-1)}.
$$
The power discounting yields
$$
r_{t+1}=\delta(r_t+y_t) \quad \textrm{and} \quad
s_{t+1}=\frac{\delta \alpha_ts_t+\delta \alpha_t}{\alpha_{t+1}}.
$$
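The forecast moments and discount recursions of the gamma model can be sketched as follows (function names are ours; the moment formulas are those given above):

```python
def gamma_forecast_moments(r, s, alpha):
    # one-step forecast moments of y under the prior beta ~ G(alpha*s + 1, r):
    # E(y) = r/s and Var(y) = r^2 (s + 1) / (s^2 (alpha*s - 1))
    mean = r / s
    var = r ** 2 * (s + 1.0) / (s ** 2 * (alpha * s - 1.0))
    return mean, var

def gamma_discount(r, s, y, alpha, alpha_next, delta):
    # flatten the posterior G(alpha*s + alpha + 1, r + y) by the power delta
    # to obtain the next prior G(alpha_next*s_next + 1, r_next)
    r_next = delta * (r + y)
    s_next = delta * (alpha * s + alpha) / alpha_next
    return r_next, s_next
```

For instance, with $r_t=4$, $s_t=2$ and $\alpha_t=3$, the forecast mean is $2$ and the forecast variance is $2.4$.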
From the logarithmic link function we have
$\beta_t=\alpha_t/\exp(\eta_t)$ and if we consider a random walk
evolution for $\eta_t=\theta_t=\theta_{t-1}+\omega_t$, we obtain
the evolution of $\beta_t$ as
$$
\beta_t=\frac{\alpha_t\beta_{t-1}}{\alpha_{t-1}\exp(\omega_t)},
$$
which together with the normal distribution of $\omega_t\sim
N(0,\Omega)$, results in the distribution
$$
p(\beta_t|\beta_{t-1})=\frac{1}{\sqrt{2\pi \Omega}\beta_t}
\exp\left(-\frac{1}{2\Omega}\left(\log\beta_t-
\log\left(\frac{\alpha_t\beta_{t-1}}{\alpha_{t-1}}\right)\right)^2\right),
$$
which is the log-normal distribution $\beta_t|\beta_{t-1}\sim LogN
(\log(\alpha_t\alpha_{t-1}^{-1}\beta_{t-1}),\Omega)$. Note that the
above expressions can be simplified when $\alpha_t=\alpha$ is
time-invariant. Model comparison and model monitoring can be
conducted by considering the Bayes factors, which can be computed
from (\ref{bf1}) and the predictive density $p(y_{t+\ell}|y^t)$.
Here we give two examples, both of
which are using the power discounting approach. In the first we
consider two competing models $\mathcal{M}_1$ and $\mathcal{M}_2$,
which differ in the discount factors $\delta_1$ and $\delta_2$,
respectively, but otherwise have the same structure. If we denote
by $r_{it}$ and $s_{it}$ the values of $r_t$ and $s_t$ for model
$\mathcal{M}_i$ $(i=1,2)$, the Bayes factor $H_t(1)$ can be
expressed as
$$
H_t(1)=\frac{ r_{1,t+1}^{\alpha s_{1,t+1}+1} \Gamma(\alpha
s_{1,t+1}+\alpha+1) (r_{1,t+1}+y_{t+1})^{-(\alpha
s_{1,t+1}+\alpha+1)} \Gamma(\alpha s_{2,t+1}+1) }{ r_{2,t+1}^{\alpha
s_{2,t+1}+1} \Gamma(\alpha s_{2,t+1}+\alpha+1)
(r_{2,t+1}+y_{t+1})^{-(\alpha s_{2,t+1}+\alpha+1)} \Gamma(\alpha
s_{1,t+1}+1) },
$$
where, for simplicity we assume that $\alpha_t=\alpha$ is invariant
over time and known.
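Numerically it is safer to evaluate such Bayes factors on the log scale via the one-step log predictive densities, rather than forming the ratio of Gamma functions directly. The following Python sketch assumes the one-step prior $\beta_{t+1}|y^t\sim G(\alpha s_{t+1}+1,r_{t+1})$; the numerical values in the assertions are hypothetical:

```python
from math import exp, lgamma, log

def log_predictive_gamma(y, r, s, alpha):
    """log p(y_{t+1}|y^t) when y|beta ~ G(alpha, beta) and the
    one-step prior is beta ~ G(alpha*s + 1, r)."""
    a = alpha * s + 1.0
    return (a * log(r) + (alpha - 1.0) * log(y)
            + lgamma(a + alpha) - lgamma(a) - lgamma(alpha)
            - (a + alpha) * log(r + y))

def bayes_factor(y, r1, s1, r2, s2, alpha):
    """One-step Bayes factor H_t(1) of M1 against M2."""
    return exp(log_predictive_gamma(y, r1, s1, alpha)
               - log_predictive_gamma(y, r2, s2, alpha))
```

Working with `lgamma` avoids overflow of the Gamma function for large shape parameters.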
In the second example we consider a fixed discount factor
$\delta_1=\delta_2=\delta$, but now the two models $\mathcal{M}_1$
and $\mathcal{M}_2$ differ in the values of $\alpha$, namely
$\alpha_1$ and $\alpha_2$. Then we can see that
$r_t=r_{it}=\delta(r_{t-1}+y_{t-1})$ and $s_t=s_{it}=(\delta\alpha
s_{i,t-1}+\delta\alpha)/\alpha=\delta s_{t-1}+\delta$, since $r_t$
and $s_t$ do not depend on $\alpha_i$ (note that this would not be
the case if $\alpha_i$ were time-varying). Then the Bayes factor of
$\mathcal{M}_1$ against $\mathcal{M}_2$ can be expressed as
$$
H_t(1)=r_{t+1}^{s_{t+1}(\alpha_1-\alpha_2)}
y_{t+1}^{\alpha_1-\alpha_2}
(r_{t+1}+y_{t+1})^{(s_{t+1}+1)(\alpha_2-\alpha_1)} \frac{
\Gamma(\alpha_2) \Gamma(\alpha_2 s_{t+1}+1) \Gamma(\alpha_1 s_{t+1}+\alpha_1+1)}{
\Gamma(\alpha_1) \Gamma(\alpha_1 s_{t+1}+1) \Gamma(\alpha_2 s_{t+1}+\alpha_2+1)}.
$$
Thus, by comparing $H_t(1)$ with 1, we have a means for choosing the
parameter $\alpha$.
To illustrate the gamma distribution we give an example from
finance. Suppose that $y_t$ represents the continuously compounded
return, also known as the log-return, of the price of an asset, defined
as $y_t=\log p_t - \log p_{t-1}$, where $p_t$ is the price of the
asset at time $t=1,\ldots,T$. In volatility modelling, one wishes to
estimate the conditional variance $\sigma_t^2$ of $y_t$. This plays
an important role in risk management and in investment strategies
(Chong, 2004), as it quantifies the uncertainty around assets. A
classical model is the generalized autoregressive conditional
heteroscedastic (GARCH) model, which assumes that, given $\sigma_t$,
$y_t$ follows a normal distribution, i.e. $y_t|\sigma_t\sim
N(0,\sigma_t^2)$, and then it
specifies the evolution of $\sigma_t^2$ as a linear function of past
values of $\sigma_t^2$ and $y_t^2$. GARCH models are discussed in
detail in Tsay (2002).
From $y_t|\sigma_t\sim N(0,\sigma_t^2)$, we can see that, given
$\sigma_t$, $y_t^2/\sigma_t^2$ follows a chi-square distribution
with 1 degree of freedom or a $G(1/2,1/2)$. Thus $y_t^2|\sigma_t
\sim \sigma_t^2 G(1/2,1/2) \equiv G(1/2,1/(2\sigma_t^2))$. Then by
defining $\alpha_t=1/2$ and $\beta_t=1/(2\sigma_t^2)$, we have that
$y_t^2|\beta_t\sim G(1/2,\beta_t)$ and so we can apply the above
inference of the gamma response. Assuming a random walk evolution
for $\eta_t=\theta_t$, we have
$$
\beta_t=\frac{\beta_{t-1}}{\exp(\omega_t)} \Rightarrow
\sigma_t^2=\exp(\omega_t) \sigma_{t-1}^2,
$$
where $\omega_t$ is defined above.
We note that from power discounting we have $r_t=\delta
r_{t-1}+\delta y_{t-1}^2=\sum_{i=1}^{t-1}\delta^iy_{t-i}^2$ and
$s_t=\delta
s_{t-1}+\delta=\sum_{i=1}^{t-1}\delta^i=\delta(1-\delta^t)/(1-\delta)$.
Thus the one-step forecast mean of $y_t^2$ is
$$
\mathbb{E}(y_t^2|y^{t-1})=\frac{r_{t-1}(1)}{s_{t-1}(1)}=\frac{r_t}{s_t}=\frac{1-\delta}{\delta(1-\delta^t)}
\sum_{i=1}^{t-1} \delta^i y_{t-i}^2.
$$
From the prior of $\beta_t|y^{t-1}$, we can see that $1/\sigma_t^2 |
y^{t-1} \sim G((s_t+2)/2,r_t/2)$ and so $\sigma_t^2|y^{t-1}$ follows
an inverted gamma distribution, i.e. $\sigma_t^2|y^{t-1} \sim
IG((s_t+2)/2,r_t/2)$. Similarly, we can see that the posterior
distributions of $1/\sigma_t^2$ and $\sigma_t^2$ are $1/\sigma_t^2 |
y^t \sim G((s_t+3)/2,(r_t+y_t^2)/2)$ and $\sigma_t^2|y^t\sim
IG((s_t+3)/2,(r_t+y_t^2)/2)$, respectively. From these distributions
we can easily report means, variances and quantiles, as required.
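The resulting volatility filter is only a few lines of code. A minimal sketch, using the inverted gamma posterior for $\sigma_t^2$ together with the power-discounting step; the initial values $r_0$, $s_0$ and the discount factor are assumptions of the example:

```python
def volatility_path(returns, delta, r0=1.0, s0=1.0):
    """Sequential posterior mean of sigma_t^2 under the gamma volatility
    model: sigma_t^2 | y^t ~ IG((s_t+3)/2, (r_t+y_t^2)/2), followed by
    the power-discount step r' = delta*(r + y^2), s' = delta*(s + 1)."""
    r, s = r0, s0
    path = []
    for y in returns:
        a, b = (s + 3.0) / 2.0, (r + y * y) / 2.0
        path.append(b / (a - 1.0))   # inverted gamma mean b/(a-1), for a > 1
        r, s = delta * (r + y * y), delta * (s + 1.0)
    return path
```

Each element of the returned path is the posterior mean $\mathbb{E}(\sigma_t^2|y^t)$, i.e. the quantity plotted in Figure \ref{fig4}.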
We consider log returns from IBM stock prices over a period of 74
years. These data, which are described in Tsay (2002, Chapter 9),
are plotted in Figure \ref{fig3}. Figure \ref{fig4} shows the
posterior estimate of the volatility
$\widehat{\sigma}_t^2=\mathbb{E}(\sigma_t^2|y^t)$. We can see that the
volatile periods are captured well; e.g. the first 120 observations
in both figures indicate high volatility. The model performance
can be assessed by looking at the log-likelihood function of
$\beta_t=1/(2\sigma_t^2)$, evaluated at the posterior mean
$\widehat{\sigma}_t^2$. The log-likelihood is
$$
\ell(\beta_1,\ldots,\beta_T;y^T) = -\frac{T}{2}\log (2\Omega\pi^2) -
\sum_{t=1}^T \log y_t^2 - \frac{1}{2\Omega} \sum_{t=1}^T (\log
\beta_t-\log \beta_{t-1})^2,
$$
where $\Omega$ is the variance of $\omega_t$ (the innovation of the
random walk evolution of $\eta_t=\theta_t$). Here $T=888$ and with
$\delta=0.6$ and $\Omega=100$, we compare this model with several
ARCH/GARCH models. Table \ref{table1} shows the log-likelihood
function of our model compared with those of the ARCH/GARCH. We see
that our model outperforms the ARCH/GARCH models, producing much
larger values of the log-likelihood function.
\begin{figure}
\epsfig{file=fig3.ps, height=10cm, width=15cm}
\caption{Log-returns of IBM stock prices.}\label{fig3}
\end{figure}
\begin{figure}
\epsfig{file=fig4.ps, height=10cm, width=15cm}
\caption{Posterior volatility of the IBM log-returns.}\label{fig4}
\end{figure}
\begin{table}
\caption{Comparison of the gamma model with ARCH and GARCH models.
Shown are the log-likelihood functions of the models, using the IBM
data.}\label{table1}
\begin{center}
\begin{tabular}{|c||ccccc|}
\hline model & gamma & ARCH(1) & ARCH(2) & ARCH(3) & ARCH(4) \\
$\ell(.)$ & \textbf{-241.07} & -2133.79 & -2123.10 & -2115.11 & -2110.93 \\
model & & GARCH(1,1) & GARCH(1,2) & GARCH(2,1) & GARCH(2,2) \\
$\ell(.)$ & & -2109.33 & -2125.05 & -2130.86 & -2123.74
\\
\hline
\end{tabular}
\end{center}
\end{table}
Inference and forecasting for the inverse or inverted gamma model are
very similar to those for the gamma model. For example, suppose that,
given $\alpha_t$ and $\beta_t$, the response $y_t$ follows the inverse
gamma distribution $y_t|\alpha_t,\beta_t\sim IG(\alpha_t,\beta_t)$, so that
$$
p(y_t|\alpha_t,\beta_t)=\frac{\beta_t^{\alpha_t}}{\Gamma(\alpha_t)}
\frac{1}{y_t^{\alpha_t+1}} \exp\left(-\frac{\beta_t}{y_t}\right),
\quad y_t>0; \quad \alpha_t,\beta_t>0.
$$
Given $\alpha_t$ (as in the gamma case), the above inverse gamma
distribution is of the form of (\ref{exp}), with $z(y_t)=1/y_t$,
$a(\phi_t)=\phi_t=1$, $\gamma_t=-\beta_t$, $b(\gamma_t)=-\log
((-\gamma_t)^{\alpha_t}/\Gamma(\alpha_t))$ and
$c(y_t,\phi_t)=y_t^{-(\alpha_t+1)}$. The prior distribution for
$\beta_t$ is the gamma $\beta_t|y^{t-1}\sim G(\alpha_ts_t+1,r_t)$
and the posterior distribution is the gamma $\beta_t|y^t\sim
G(\alpha_ts_t+\alpha_t+1,r_t+y_t^{-1})$. Thus the above prior is the same as
in the gamma model and the posterior changes slightly. As a result
inference and forecasting for the inverse gamma follows readily from
the gamma distribution.
\subsubsection{Weibull and exponential}
The exponential and the Weibull distributions can be used in
survival analysis, for example, in medicine, to estimate the
survival of patients, or in reliability, to estimate failure times
of, say, a manufactured product. The exponential distribution is a
special case of the Weibull and for a discussion of both, the reader
is referred to Johnson {\it et al.} (1994).
Suppose that the time series $\{y_t\}$ is generated by a Weibull
distribution, with density function
$$
p(y_t|\lambda_t)=\frac{\nu_t}{\lambda_t}y_t^{\nu_t-1}\exp\left(-\frac{y_t^{\nu_t}}
{\lambda_t}\right), \quad y_t>0; \quad \lambda_t,\nu_t>0.
$$
Here we assume that $\nu_t$ is known and we note that for
$\nu_t=1$ we obtain the exponential distribution with parameter
$1/\lambda_t$. The above distribution is of the form of
(\ref{exp}), with $z(y_t)=y_t^{\nu_t}$, $a(\phi_t)=\phi_t=1$,
$\gamma_t=-1/\lambda_t$, $b(\gamma_t)=-\log(-\nu_t\gamma_t)$ and
$c(y_t,\phi_t)=y_t^{\nu_t-1}$.
Given $\lambda_t$, the expectation and variance of $y_t^{\nu_t}$
are
$$
\mathbb{E}(y_t^{\nu_t}|\lambda_t)=\frac{\,db(\gamma_t)}{\,d\gamma_t}=\lambda_t
$$
and
$$
\text{Var}(y_t^{\nu_t}|\lambda_t)=\frac{\,d^2b(\gamma_t)}{\,d\gamma_t^2}=\lambda_t^2.
$$
Since $\lambda_t=\mu_t>0$, the logarithmic link
$g(\lambda_t)=\log\lambda_t=\eta_t$ can be used.
The prior and posterior distributions of $\lambda_t$ are inverted
gamma, i.e. $\lambda_t|y^{t-1}\sim IG(s_t-1,r_t)$ and
$\lambda_t|y^t\sim IG(s_t,r_t+y_t^{\nu_t})$ so that
$1/\lambda_t|y^{t-1}\sim G(s_t-1,r_t)$ and $1/\lambda_t|y^t\sim
G(s_t,r_t+y_t^{\nu_t})$, e.g.
$$
p(\lambda_t|y^{t-1})=\frac{r_t^{s_t-1}}{\Gamma(s_t-1)}
\frac{1}{\lambda_t^{s_t}} \exp\left(-\frac{r_t}{\lambda_t}\right).
$$
Since the link is logarithmic and the prior/posterior distributions
are inverted gamma, by writing $\log\lambda_t=-\log\lambda_t^{-1}$,
the approximation of $r_t$ and $s_t$ follows in a similar way to the
Poisson case, i.e.
$$
r_t=\frac{\exp(f_t)}{q_t} \quad \textrm{and} \quad
s_t=\frac{1+q_t}{q_t}
$$
and the posterior moments of $\log\lambda_t$ are given by
$$
f_t^*=\psi(s_t+y_t^{\nu_t}-1)-\log(r_t+1) \quad \textrm{and} \quad
q_t^*=\left.\frac{\,d\psi(x)}{\,dx}\right|_{x=s_t+y_t^{\nu_t}-1},
$$
which can be approximated by
$$
f_t^*\approx \log \frac{s_t+y_t^{\nu_t}-1}{r_t+1} +
\frac{1}{2(s_t+y_t^{\nu_t}-1)} \quad \textrm{and} \quad q_t^*
\approx \frac{2s_t+2y_t^{\nu_t}-3}{2(s_t+y_t^{\nu_t}-1)}.
$$
With the usual definition of $r_t(\ell)$ and $s_t(\ell)$ and their
calculation via $f_t(\ell)$, $q_t(\ell)$ and the above equation,
we obtain the $\ell$-step forecast distribution of $y_{t+\ell}$ as
\begin{equation}\label{weibull:for}
p(y_{t+\ell}|y^t)=\frac{\nu_{t+\ell}\, r_t(\ell)^{s_t(\ell)-1}
y_{t+\ell}^{\nu_{t+\ell}-1} (s_t(\ell)-1) } {
(r_t(\ell)+y_{t+\ell}^{\nu_{t+\ell}})^{s_t(\ell)} }.
\end{equation}
Using conditional expectations, we can obtain the forecast mean
and variance of $y_{t+\ell}^{\nu_{t+\ell}}$ as
$$
y_t^{\nu_t}(\ell)=\mathbb{E}(y_{t+\ell}^{\nu_{t+\ell}}|y^t)=
\mathbb{E}(\mathbb{E}(y_{t+\ell}^{\nu_{t+\ell}}|\lambda_{t+\ell})|y^t)=\frac{r_t(\ell)}{s_t(\ell)-2},
$$
for $s_t(\ell)>2$ and
$$
\text{Var}(y_{t+\ell}^{\nu_{t+\ell}}|y^t) =
\mathbb{E}(\text{Var}(y_{t+\ell}^{\nu_{t+\ell}}|\lambda_{t+\ell})|y^t) +
\text{Var}(\mathbb{E}(y_{t+\ell}^{\nu_{t+\ell}}|\lambda_{t+\ell})|y^t) = \frac{
r_t(\ell)^2 (s_t(\ell)-1) }{ (s_t(\ell)-2)^2(s_t(\ell)-3) },
$$
for $s_t(\ell)>3$.
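These forecast moments translate directly into code; a sketch, with the existence conditions on $s_t(\ell)$ enforced explicitly:

```python
def weibull_forecast_moments(r, s):
    """Forecast mean and variance of y_{t+l}^nu given y^t:
    mean = r/(s-2) for s > 2 and
    variance = r^2 (s-1) / ((s-2)^2 (s-3)) for s > 3,
    where r = r_t(l) and s = s_t(l)."""
    if s <= 3.0:
        raise ValueError("s_t(l) must exceed 3 for the variance to exist")
    mean = r / (s - 2.0)
    var = r * r * (s - 1.0) / ((s - 2.0) ** 2 * (s - 3.0))
    return mean, var
```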
Considering a random walk evolution for
$\eta_t=\theta_t=\theta_{t-1}+\omega_t$, from the logarithmic
link, we obtain
\begin{equation}\label{weibull:lambda}
\lambda_t=\exp(\omega_t) \lambda_{t-1}
\end{equation}
and so $\lambda_t|\lambda_{t-1}\sim
LogN(\log\lambda_{t-1},\Omega)$, where $\Omega$ is the variance of
$\omega_t$. The derivation of this result is the same as in the
Poisson example.
From (\ref{logl}) and $\lambda_t|\lambda_{t-1}\sim
LogN(\log\lambda_{t-1},\Omega)$, the log-likelihood function of
$\lambda_1,\ldots,\lambda_T$, based on data $y^T=\{y_1,\ldots,y_T\}$
is
$$
\ell(\lambda_1,\ldots,\lambda_T;y^T) = -\sum_{t=1}^T \left(
\frac{y_t^{\nu_t}}{\lambda_t}+\log\frac{\lambda_t}{\nu_t}+(1-\nu_t)
\log y_t +\frac{\log(2\pi\Omega)}{2}+
\frac{(\log\lambda_t-\log\lambda_{t-1})^2}{2\Omega}\right) .
$$
Power discounting yields
$$
r_{t+1}=\delta (r_t+y_t^{\nu_t}) \quad \textrm{and} \quad
s_{t+1}=\delta (s_t+1).
$$
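As in the gamma case, this update is a one-liner, the only change being that the sufficient statistic is now $y_t^{\nu_t}$; a sketch:

```python
def weibull_power_discount(r, s, y, nu, delta):
    """Power-discounting step for the Weibull DGLM: the sufficient
    statistic is y^nu, so r' = delta*(r + y**nu), s' = delta*(s + 1)."""
    return delta * (r + y ** nu), delta * (s + 1.0)
```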
We consider model comparison for the Weibull distribution when
$\eta_t=F\theta_t$ and $\theta_t=\theta_{t-1}+\omega_t$, for some
scalar $F$. This is an autoregressive type evolution for $\eta_t$.
We specify the variance of $\omega_t$ with a discount factor (West
and Harrison, 1997, Chapter 6) as
$\text{Var}(\omega_t)=\Omega_t=(1-\delta)P_{t-1}/\delta$, where $P_t$ is
the posterior variance of $\theta_t|y^t$. The density of
$y_t|y^{t-1}$ is given by (\ref{weibull:for}), for $\ell=1$,
$r_{t-1}(1)=r_t=\exp(f_t)/q_t$ and $s_{t-1}(1)=s_t=(1+q_t)/q_t$,
where $f_t=Fm_{t-1}$, $q_t=F^2P_{t-1}/\delta$ and $m_t$, $P_t$ are
updated from (\ref{post:th1}) as
$$
m_{t}=\log \frac{s_{t}+y_t^{\nu_t}-1}{r_{t}+1}+\frac{1}{2(s_{t}+y_t^{\nu_t}-1)}
$$
and
$$
P_t= \frac{P_{t-1}}{\delta}-\frac{P_{t-1}^2}{\delta^2} \left( 1-
\frac{ 2s_t+2y_t^{\nu_t}-3}{2(s_t+y_t^{\nu_t}-1)q_t}\right) \frac{1}{q_t} =
\frac{2s_t+2y_t^{\nu_t}-3}{2(s_t+y_t^{\nu_t}-1)F^2}.
$$
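One filtering step can then be sketched as follows, taking $F=1$ for simplicity and letting the observation enter through $y_t^{\nu_t}$, consistently with the $f_t^*$, $q_t^*$ approximations given earlier; this is a sketch of the stated recursions, not a general-purpose implementation:

```python
from math import exp, log

def weibull_filter_step(m, P, y, nu, delta):
    """One conjugate-approximation filtering step for the Weibull DGLM
    with F = 1: form the one-step prior moments (f, q) of eta_t, the
    implied (r, s), then the posterior moments via the stated
    approximations."""
    f, q = m, P / delta                # discounted evolution variance
    r, s = exp(f) / q, (1.0 + q) / q   # implied conjugate parameters
    z = y ** nu                        # observation enters through y^nu
    m_new = log((s + z - 1.0) / (r + 1.0)) + 1.0 / (2.0 * (s + z - 1.0))
    P_new = (2.0 * s + 2.0 * z - 3.0) / (2.0 * (s + z - 1.0))
    return m_new, P_new
```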
We consider now the situation of the choice of $\delta$. Suppose we
have two models $\mathcal{M}_1$ with a discount factor $\delta_1$
and $\mathcal{M}_2$ with $\delta_2$ and otherwise the models are the
same. The Bayes factor from a single observation ($k=1$) is given by
$$
H_t(1)= \frac{ r_{1t}^{s_{1t}-1} (s_{1t}-1)
(r_{2t}+y_t^{\nu_t})^{s_{2t}} }{ r_{2t}^{s_{2t}-1} (s_{2t}-1)
(r_{1t}+y_t^{\nu_t})^{s_{1t}}},
$$
where $r_{jt}$ and $s_{jt}$ are defined as $r_t$ and $s_t$ if we
replace $\delta$ by $\delta_j$, for $j=1,2$.
For illustration, we simulate 500 observations from a Weibull
distribution with $\nu_t=3$ and $\{\lambda_t\}$ being simulated from
(\ref{weibull:lambda}), where we have used $F=1$, $\lambda_0=1$ and
$\omega_t\sim N(0,1)$. Figure \ref{fig5} shows the simulated data.
In order to choose the discount factor $\delta$, we apply the Bayes
factor $H_t(1)$ over a range of values of $\delta_1,\delta_2\geq 0.5$.
We have used $m_0=0$ and a weakly informative prior $P_0=1000$.
Table \ref{table2} reports on $\bar{H}(1)$, the mean of $H_t(1)$,
and on the log-likelihood function
$\ell(\lambda_1,\ldots,\lambda_{500}|y^{500})$ evaluated at
$\widehat{\lambda}_t=(r_t+y_t^{\nu_t})/s_t$ (see the posterior
distribution of $\lambda_t|y^t$). This table indicates that there is
little difference in the performance of the one-step forecast
distribution, under the two models. The log-likelihood function
clearly indicates that $\delta_1=0.9$ produces the model with the
largest likelihood. The failure of the Bayes factor criterion to
separate the models indicates that, in a sequential setting, which
is appropriate for time series, one should look at the Bayes factor
at each time $t$ rather than at the overall mean of the
Bayes factor. Figure \ref{fig6} shows the Bayes factor of
$\mathcal{M}_1$ (with $\delta_1=0.9$) against $\mathcal{M}_2$ (with
$\delta_2=0.7$). We see that, although the mean of the Bayes factor
is 0.996 (see Table \ref{table2}), for $t=1$--$50$ and $t=100$--$200$
a significant difference between the two models can be declared,
slightly in favour of model $\mathcal{M}_1$. This effect is
masked when one looks at the overall picture, considering the mean
$\bar{H}(1)$, and it indicates the benefit of sequential application
of Bayes factors.
\begin{figure}[t]
\epsfig{file=fig5.ps, height=10cm, width=15cm}
\caption{Simulated data from a Weibull distribution with $\nu_t=3$ and
$\lambda_t$ generated from (\ref{weibull:lambda}).}\label{fig5}
\end{figure}
\begin{table}
\caption{Log-likelihood function $\ell(.)$ and mean $\bar{H}(1)$ of
the Bayes factor sequence $\{H_t(1)\}$ of $\mathcal{M}_1$ (with
$\delta_1$) against $\mathcal{M}_2$ (with
$\delta_2$).}\label{table2}
\begin{center}
\begin{tabular}{|c||ccccccc|}
\hline & $\ell(.)$ & & & $\bar{H}(1)$ & & & \\
$\delta_1\backslash\delta_2$ & & 0.99 & 0.9 & 0.8 & 0.7 & 0.6 & 0.5
\\ \hline 0.99 & -5.787 & 1 & 0.997 & 0.995 & 0.994 & 0.994 & 0.998
\\ 0.95 & -7.411 & 1.001 & 0.999 & 0.997 & 0.995 & 0.995 &
0.999 \\ 0.90 & \textbf{-3.123} & 1.002 & 1 & 0.998 & 0.996 & 0.996 & 1 \\
0.85 & -8.547 & 1.004 & 1.001 & 0.999 & 0.997 & 0.997 & 1.001 \\
0.80 & -8.854 & 1.005 & 1.002 & 1 & 0.998 & 0.998 & 1.002 \\ 0.75 &
-9.098 & 1.006 & 1.003 & 1.001 & 0.999 & 0.999 & 1.002 \\ 0.70 &
-9.301 & 1.007 & 1.004 & 1.002 & 1 & 0.999 & 1.003 \\ 0.65 & -9.476
& 1.008 & 1.005 & 1.002 & 1 & 1 & 1.003 \\ 0.6 & -9.631 & 1.008 &
1.005 & 1.003 & 1.001 & 1 & 1.003 \\ 0.55 & -9.771 & 1.008 & 1.005 &
1.003 & 1 & 0.999 & 1.002 \\ 0.50 & -9.947 & 1.007 & 1.004 & 1.001 &
0.998 & 0.997 & 1 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}[t]
\epsfig{file=fig6.ps, height=10cm, width=15cm}
\caption{Bayes factor $\{H_t(1)\}$ of model $\mathcal{M}_1$ with $\delta_1=0.9$
vs model $\mathcal{M}_2$ with $\delta_2=0.7$.}\label{fig6}
\end{figure}
The exponential and Weibull distributions are useful models for the
analysis of survival times data. In the context of DGLMs, we have
dynamic survival models due to Gamerman (1991). Here we give a brief
description of dynamic survival models and we extend a result of
Gamerman (1991).
Suppose that, given $\nu_t$ and $\lambda_t$, the survival time $y_t$
follows the Weibull distribution $p(y_t|\lambda_t)$ (here we assume
that $\nu_t$ is known and so we exclude it from conditioning). For
example, if the exponential distribution is believed to be an
appropriate model, we have $\nu_t=1$. The survivor function of the
Weibull distribution is
\begin{equation}\label{survival1}
S(y_t|\lambda_t)=\frac{\nu_t}{\lambda_t}\int_{y_t}^\infty
u_t^{\nu_t-1} \exp\left( - \frac{u_t^{\nu_t}}{\lambda_t}\right)\,d
u_t = \exp\left( -\frac{y_t^{\nu_t}}{\lambda_t}\right).
\end{equation}
Suppose we have a vector of $p$ regressor variables or covariates
$x=[x_1~\cdots~x_p]'$ and we consider a vector of parameters $\beta$
so that $1/\lambda_t$ is proportional to $\exp(x'\beta)$. Then the
hazard function $h(y_t;\nu_t,\lambda_t)\equiv h(t)\propto
\nu_ty_t^{\nu_t-1} \exp(x'\beta)$ and this leads to the
proportional hazards model with $h(t)=h_0(t) \exp(x'\beta)$, where
$h_0(t)$ is the baseline hazard function (Dobson, 2002, \S10.2). So
one can write $\log h(t)=\log h_0(t) +x'\beta$ and considering a
partition of $(0,N)$ as $0=y_0<y_1<\cdots<y_T=N$ so that $t\in
I_t=(y_{t-1},y_t]$, we write $\log h_0(t)=\alpha_t$, i.e. the
baseline is a step function that takes a constant value $\alpha_t$
at each time interval $I_t$.
In the spirit of the DGLM, dynamic survival models assume that $\beta$
evolves over time between intervals $I_1,\ldots,I_T$, but remains
constant inside each interval $I_t$. Gamerman (1991) considers the
model
\begin{equation}\label{survival2}
\log \lambda_t^{(j)}=\log h^{(j)}(t) = F_j' \theta_t, \quad
j=1,\ldots,i_t; \quad t=1,\ldots,T,
\end{equation}
where $F_j=[1~x_j']'$ is the design vector and
$\theta_t=[\alpha_t~\beta_t']'$ is the time-varying parameter
vector, which is assumed to follow a random walk evolution according
to $\theta_t=\theta_{t-1}+\omega_t$, and $\lambda_t$ has been
modified to $\lambda_t^{(j)}$ to account for individual $j$. Here,
$t$ indexes the $T$ intervals $I_1,\ldots,I_T$ of $(0,N)$ and $j$
indexes each individual to be alive at the beginning of $I_t$, where
$i_t$ is the number of such individuals in $I_t$. Note that through
$x_j$, each individual $j$ may have different effects through
different regressor variables, although it is not unrealistic to set
$x_j=x$ or $F=[1~x']'$ (for all individuals we have the same
regressor variables). The dynamics of the system are reflected in the
dynamics of $\theta_t$. Equations (\ref{survival1}) and
(\ref{survival2}) define a dynamic survival model, for which Bayesian
inference follows as an obvious extension of DGLM estimation,
providing the posterior first two moments of $h^{(j)}(t)$ (details
appear in Gamerman, 1991).
Fix individual $j$ and write $\lambda_t^{(j)}=\lambda_t$. Given the
adopted random walk evolution for $\theta_t$, for any $y_t^*\in
I_t=(y_{t-1},y_t]$, the prior $\lambda_t^{-1}|y^{t-1}\sim
G(s_t-1,r_t)$ combines with the survivor function (\ref{survival1})
to give the survivor prediction
\begin{eqnarray*}
S(y_t^*|y^{t-1}) &=& \int_0^\infty
S((y_t^*-y_{t-1})|\lambda_t)p(\lambda_t^{-1}|y^{t-1})\,d\lambda_t^{-1}
\\ &=& \frac{r_t^{s_t-1}}{\Gamma(s_t-1)} \int_0^\infty
\lambda_t ^{-(s_t-2)} \exp ( -
((y_t^*-y_{t-1})^{\nu_t}+r_t)\lambda_t^{-1} ) \,d\lambda_t^{-1} \\
&=& \left( 1+ \frac{(y_t^*-y_{t-1})^{\nu_t}}{r_t} \right)
^{-(s_t-1)},
\end{eqnarray*}
where we can see that for $\nu_t=1$, we obtain the survivor
prediction of the exponential distribution, reported in Gamerman
(1991). Thus $S(y_t^*|y^{t-1})$ predicts the probability that
individual $j$, still alive at $y_{t-1}$, survives beyond $y_t^*$.
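The survivor prediction is a single expression; a minimal sketch (the argument values below are hypothetical):

```python
def survivor_prediction(y_star, y_prev, nu, r, s):
    """Predicted probability of surviving past y_star, given survival to
    y_prev, under the dynamic Weibull survival model:
    S(y*|y^{t-1}) = (1 + (y* - y_prev)^nu / r)^(-(s-1))."""
    return (1.0 + (y_star - y_prev) ** nu / r) ** (-(s - 1.0))
```

For $\nu_t=1$ this reduces to the exponential case reported in Gamerman (1991).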
\subsubsection{Pareto and beta}
The Pareto (Johnson {\it et al.}, 1994) is a skewed distribution
with many applications in social, scientific and geophysical
phenomena. For example, in economics it can describe the allocation
of wealth among individuals or prices of the returns of stocks.
Suppose that the time series $\{y_t\}$ is generated from Pareto
distribution with density
$$
p(y_t|\lambda_t)=\lambda_ty_t^{-\lambda_t-1}, \quad y_t\geq 1;
\quad \lambda_t>0.
$$
This distribution is also known as Pareto(I) distribution and
$\lambda_t$ is known as the index of inequality (this distribution
is examined in detail in Johnson {\it et al.}, 1994). The above
distribution is of the form of (\ref{exp}), with $z(y_t)=\log y_t$,
$a(\phi_t)=\phi_t=1$, $\gamma_t=-\lambda_t$,
$b(\gamma_t)=-\log(-\gamma_t)$ and $c(y_t,\phi_t)=1/y_t$. We note
that by setting $x_t=1/y_t$ or $x_t=1-1/y_t$, we have that
$0<x_t<1$ so that, given $\lambda_t$, $x_t$ follows a beta
distribution with parameters $\lambda_t,1$ and $1,\lambda_t$,
respectively. Thus inference for the Pareto distribution can be
readily applied to the beta distribution (Johnson {\it et al.},
1994) when at least one parameter of the beta distribution is equal
to 1. This is a useful consideration as we can deal with responses
being proportions or probabilities.
We have
$$
\mathbb{E}(y_t|\lambda_t)=\frac{\lambda_t}{\lambda_t-1}=\mu_t \quad
(\lambda_t>1) \quad \textrm{and} \quad
\text{Var}(y_t|\lambda_t)=\frac{\lambda_t}{(\lambda_t-1)^2 (\lambda_t-2)}
\quad (\lambda_t>2).
$$
Since $\mu_t>0$, the logarithmic link function can be used, so that
$g(\mu_t)=\log\mu_t=\log\lambda_t-\log(\lambda_t-1)$, for
$\lambda_t>1$. Using the transformation $\gamma_t=-\lambda_t$, we
find that the prior and posterior distributions of $\lambda_t$ are
gamma, i.e. $\lambda_t|y^{t-1}\sim G(s_t+1,r_t)$ and
$\lambda_t|y^t\sim G(s_t+2,r_t+\log y_t)$, respectively.
Following the approximation of $r_t$ and $s_t$ in the Poisson case,
we have that
$$
r_t=\frac{\exp(-f_t)}{q_t} \quad \textrm{and} \quad
s_t=\frac{1-q_t}{q_t}
$$
and the posterior moments of $\log\lambda_t$ are given by
$$
f_t^*=\psi(s_t+\log y_t+1) -\log(r_t+1) \quad \textrm{and} \quad
q_t^*=\left.\frac{\,d\psi(x)}{\,dx}\right|_{x=s_t+\log y_t +1},
$$
which can be approximated by
$$
f_t^*\approx \log \frac{s_t+\log y_t+1}{r_t+1} + \frac{1}{2(s_t+\log
y_t+1)} \quad \textrm{and} \quad q_t^* \approx \frac{2s_t+2\log
y_t+1}{2(s_t+\log y_t+1)}.
$$
Power discounting yields
$$
r_{t+1}=\delta(r_t+\log y_t) \quad \textrm{and} \quad
s_{t+1}=\delta(s_t+1).
$$
With $r_t(\ell)$ and $s_t(\ell)$ computed from $f_t(\ell)$ and
$q_t(\ell)$ and the above equations of $r_t$ and $s_t$, the
$\ell$-step forecast distribution of $y_{t+\ell}$ is
$$
p(y_{t+\ell}|y^t)=\frac{r_t(\ell)^{s_t(\ell)+1} (s_t(\ell)+1)}{
y_{t+\ell}(r_t(\ell)+\log y_{t+\ell})^{s_t(\ell)+2} }.
$$
Considering a random walk evolution for
$\eta_t=\theta_t=\theta_{t-1}+\omega_t$, we have that the evolution
of $\lambda_t$ is
$$
\lambda_t=\frac{\lambda_{t-1}\exp(\omega_t)}{\lambda_{t-1}\exp(\omega_t)-\lambda_{t-1}+1},
$$
from which we can obtain the distribution of
$\lambda_t|\lambda_{t-1}$. With this, assuming that $\omega_t\sim
N(0,\Omega)$ and that $\lambda_t>1$, the density of
$\lambda_t|\lambda_{t-1}$ is
$$
p(\lambda_t|\lambda_{t-1})=\frac{1}{\sqrt{2\pi\Omega}
\lambda_t(\lambda_t-1)} \exp\left( -\frac{1}{2\Omega} \left( \log
\frac{\lambda_t(\lambda_{t-1}-1)}{\lambda_{t-1}(\lambda_t-1)}\right)^2\right),
$$
where $\Omega$ should be chosen so as to guarantee $\lambda_t>1$,
for all $t$. Then from (\ref{logl}) the log-likelihood function is
\begin{eqnarray*}
\ell(\lambda_1,\ldots,\lambda_T;y^T) &=& \sum_{t=1}^T \bigg(
-\lambda_t\log y_t +\log\lambda_t-\log y_t \\ &&
-\log(\sqrt{2\pi\Omega}\lambda_t(\lambda_t-1)) - \frac{1}{2\Omega}
\left(
\log\frac{\lambda_t(\lambda_{t-1}-1)}{\lambda_{t-1}(\lambda_t-1)}\right)^2
\bigg),
\end{eqnarray*}
for $\lambda_1,\ldots,\lambda_T>1$.
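A single step of this evolution can be simulated directly. Note that $\lambda_t-1=(\lambda_{t-1}-1)/(\lambda_{t-1}\exp(\omega_t)-\lambda_{t-1}+1)$, so $\lambda_t>1$ holds exactly when the denominator is positive, which motivates the constraint on $\Omega$; a sketch:

```python
from math import exp

def evolve_lambda(lam_prev, omega):
    """One step of lambda_t = lam*e^w / (lam*e^w - lam + 1), the
    evolution induced by the random walk on the linear predictor;
    lambda_t > 1 whenever the denominator is positive."""
    a = lam_prev * exp(omega)
    return a / (a - lam_prev + 1.0)
```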
Bayes factors can be computed from the predictive density
$p(y_{t+1}|y^t)$ and (\ref{bf1}). As an example consider the
comparison of two models $\mathcal{M}_1$ and $\mathcal{M}_2$, which
differ in some quantitative aspects, e.g. in the discount factor
$\delta$ (see also the illustration that follows). By defining
$r_{jt}$ and $s_{jt}$ the respective values of $r_t$ and $s_t$, for
model $\mathcal{M}_j$ $(j=1,2)$, the Bayes factor $H_t(1)$ can be
expressed as
$$
H_t(1)= \frac{ r_{1,t+1}^{s_{1,t+1}+1} (s_{1,t+1}+1) (r_{2,t+1}+\log
y_{t+1})^{s_{2,t+1}+2} } { r_{2,t+1}^{s_{2,t+1}+1} (s_{2,t+1}+1)
(r_{1,t+1}+\log y_{t+1})^{s_{1,t+1}+2} }.
$$
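Equivalently, the Bayes factor can be formed from one-step log predictive densities; a sketch, where the predictive is obtained by integrating $\lambda_{t+1}$ out against its $G(s_{t+1}+1,r_{t+1})$ prior:

```python
from math import exp, log

def pareto_log_predictive(y, r, s):
    """One-step log predictive density of the Pareto(I) DGLM:
    p(y|y^t) = (s+1) r^(s+1) / (y (r + log y)^(s+2)), for y >= 1."""
    return ((s + 1.0) * log(r) + log(s + 1.0) - log(y)
            - (s + 2.0) * log(r + log(y)))

def pareto_bayes_factor(y, r1, s1, r2, s2):
    """H_t(1) of M1 against M2 from a single observation."""
    return exp(pareto_log_predictive(y, r1, s1)
               - pareto_log_predictive(y, r2, s2))
```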
To illustrate the above Pareto model for time series data, we
consider the data of Arnold and Press (1989), consisting of 30 wage
observations (in multiples of US dollars) of production-line workers
in a large industrial firm; the data are also discussed in Dyer
(1981). The data are shown in Figure \ref{fig:pareto1}, from which
two points can be argued: (a) the data appear to be
autocorrelated (in fact it is easy to run a correlogram to verify
this) and (b) the data exhibit a local level behaviour (one could
argue for local stationarity, but with only 30 observations a local
level model seems more appropriate). Here we apply the Pareto model
with $r_t$ and $s_t$ being updated by the power discounting (this is
appropriate for the local level behaviour of the time series). Table
\ref{table3} shows the mean of the Bayes factors for various values
of the discount factors $\delta_1$ and $\delta_2$ in the range of
$[0.5,0.99]$. It is evident that the best model is the model with
$\delta=0.99$, which is capable of producing Bayes factors larger
than 1 as compared with models with lower discount factors. From
that table it is also evident that models with low discount factors
do worse than models with high discount factors and so by far the
worst model is that using $\delta=0.5$. Figure \ref{fig:pareto2}
shows the values of the Bayes factor of the model with $\delta=0.99$
against the model with $\delta=0.95$; we note that all values of the
Bayes factor are larger than one and there is a steady increase in
the Bayes factors indicating the superiority of the model with
$\delta=0.99$.
\begin{figure}[t]
\epsfig{file=pareto1.ps, height=10cm, width=15cm}
\caption{Annual wage Pareto data.}\label{fig:pareto1}
\end{figure}
\begin{table}
\caption{Mean $\bar{H}(1)$ of the Bayes factor sequence $\{H_t(1)\}$
of $\mathcal{M}_1$ (with $\delta_1$) against $\mathcal{M}_2$ (with
$\delta_2$) for the Pareto model.}\label{table3}
\begin{center}
\begin{tabular}{|c||cccccc|}
\hline & & & $\bar{H}(1)$ & & & \\
$\delta_1\backslash\delta_2$ & 0.99 & 0.9 & 0.8 & 0.7 & 0.6 & 0.5
\\ \hline 0.99 & 1 & 1.950 & 3.484 & 5.414 & 7.786 & 10.798 \\ 0.95
& 0.749 & 1.401 & 2.449 & 3.774 & 5.409 & 7.489 \\ 0.90 & 0.559 & 1
& 1.708 & 2.608 & 3.721 & 5.141 \\ 0.85 & 0.439 & 0.760 & 1.276 &
1.931 & 2.745 & 3.785 \\ 0.80 & 0.358 & 0.605 & 1 & 1.503 & 2.129 &
2.931 \\ 0.75 & 0.299 & 0.496 & 0.810 & 1.211 & 1.711 & 2.350 \\
0.70 & 0.254 & 0.415 & 0.672 & 1 & 1.408 & 1.932 \\ 0.65 & 0.218 &
0.352 & 0.566 & 0.839 & 1.179 & 1.616 \\ 0.60 & 0.189 & 0.302 &
0.482 & 0.712 & 1 & 1.368 \\ 0.55 & 0.164 & 0.261 & 0.414 & 0.609 &
0.854 & 1.167 \\ 0.50 & 0.143 & 0.225 & 0.356 & 0.523 & 0.732 & 1\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}[h]
\epsfig{file=pareto2.ps, height=10cm, width=15cm}
\caption{Bayes factor $\{H_t(1)\}$ of model $\mathcal{M}_1$ with $\delta_1=0.99$
vs model $\mathcal{M}_2$ with $\delta_2=0.95$ for the Pareto data.}\label{fig:pareto2}
\end{figure}
\subsubsection{Inverse Gaussian}
The inverse Gaussian or Wald (Chhikara and Folks, 1989; Johnson {\it
et al.}, 1994) is a skewed distribution that can describe phenomena
in economics and in many other sciences. This distribution is known
as the first passage time distribution of Brownian motion with
positive drift. Recently, Huberman {\it et al.} (1998) used an
inverse Gaussian distribution to model internet flow and internet
traffic.
Suppose that the time series $\{y_t\}$ is generated from an inverse
Gaussian distribution, that is for given $\mu_t$ and $\lambda_t$,
the density function of $y_t$ is
$$
p(y_t|\mu_t,\lambda_t) = \sqrt{ \frac{\lambda_t}{2\pi y_t^3} } \exp
\left( - \frac{\lambda_t (y_t-\mu_t)^2 }{ 2\mu_t^2 y_t} \right),
\quad y_t>0; \quad \mu_t,\lambda_t>0.
$$
This is a unimodal distribution, which converges to the normal
distribution, as $\lambda_t\rightarrow\infty$. In the following we
assume that $\lambda_t$ is a known parameter and interest is
placed on $\mu_t$; hence we write $p(y_t|\mu_t,\lambda_t)\equiv
p(y_t|\mu_t)$. We can see that the above distribution is of the form
of (\ref{exp}), with $z(y_t)=y_t$, $\phi_t=\lambda_t$,
$a(\phi_t)=2/\lambda_t$, $\gamma_t=-1/\mu_t^2$,
$b(\gamma_t)=-2/\mu_t=-2\sqrt{-\gamma_t}$ and
$c(y_t,\phi_t)=(\lambda_t/(2\pi y_t^3))^{1/2}
\exp(-\lambda_t/(2y_t))$. Then we can verify that
$$
\mathbb{E}(y_t|\mu_t)=\frac{\,db(\gamma_t)}{\,d\gamma_t}=\frac{1}{\sqrt{-\gamma_t}}=\mu_t
$$
and
$$
\text{Var}(y_t|\mu_t)=a(\phi_t) \frac{\,d^2b(\gamma_t)}{\,d\gamma_t^2} =
\frac{a(\phi_t)}{2\sqrt{(-\gamma_t)^3}}=\frac{\mu_t^3}{\lambda_t}.
$$
The canonical link maps $\mu_t$ to $\gamma_t$, or
$g(\mu_t)=\gamma_t=-1/\mu_t^2$, but this is not convenient, since
$g(\mu_t)<0$ and hence we need to find an appropriate definition of
$F$ and $G$ in the state space representation of $g(\mu_t)=\eta_t$
in order to guarantee $-\infty<\eta_t<\infty$. The logarithmic link,
$g(\mu_t)=\log\mu_t$, seems to work better, since it maps $\mu_t$ to
the real line and so $F'\theta_t=\eta_t=g(\mu_t)$ is defined easily.
The prior distribution of $\mu_t$ can be defined via the prior
distribution of $\gamma_t$ and the transformation
$\gamma_t=-1/\mu_t^2$. In the appendix it is shown that
\begin{equation}\label{eq:igaussian:2}
p(\mu_t|y^{t-1})= \frac{ 2\exp(s_t^2/r_t) r_t }{ (\exp(s_t^2/r_t)
s_t \sqrt{\pi/r_t} + 1)\mu_t^3} \exp\left( -
\frac{(r_t-\mu_ts_t)^2}{r_t\mu_t^2}\right).
\end{equation}
It is also shown in the appendix that
\begin{equation}\label{IG:prior:m}
\mathbb{E}(\mu_t|y^{t-1}) = \frac{ \sqrt{\pi r_t} \exp(s_t^2/r_t) }{
\exp(s_t^2/r_t)s_t\sqrt{\pi/r_t}+1}.
\end{equation}
The posterior distribution of $\mu_t$ is obtained from the posterior
distribution of $\gamma_t$ as
\begin{eqnarray*}
p(\mu_t|y^t) &=& \kappa(r_t+\lambda_ty_t, s_t+\lambda_t) \exp\left(
-\frac{r_t+\lambda_ty_t}{\mu_t^2} +
\frac{2(s_t+\lambda_t)}{\mu_t}\right) \frac{2}{\mu_t^3} \\ &=&
\frac{ 2 \exp ( (s_t+\lambda_t)^2/(r_t+\lambda_ty_t))
(r_t+\lambda_ty_t) }{ ( \exp ( (s_t+\lambda_t)^2/(r_t+\lambda_ty_t))
(s_t+\lambda_t) \sqrt{\pi / (r_t+\lambda_ty_t)} + 1 ) \mu_t^3} \\ &&
\times \exp \left( -
\frac{(r_t+\lambda_ty_t-\mu_t(s_t+\lambda_t))^2}{ (r_t+\lambda_t
y_t)\mu_t^2}\right),
\end{eqnarray*}
where in the appendix it is shown that
$$
\kappa(r_t,s_t)=r_t\left( \exp\left(\frac{s_t^2}{r_t}\right) s_t
\sqrt{\frac{\pi}{r_t}} + 1\right)^{-1}.
$$
The approximation of $r_t$ and $s_t$ is difficult, since the moment
generating function of $\eta_t=\log\mu_t$ (which is needed in order
to compute $r_t$ and $s_t$) is not available in closed form. Thus
power discounting should be applied. From the posterior of
$\gamma_t|y^t$, given by (\ref{post:g1}), we have
$$
(p(\gamma_t|y^t))^\delta \propto \exp\left( \delta \left( r_t+
\frac{2y_t}{\lambda_t}\right)\gamma_t+2\delta \left(
s_t+\frac{2}{\lambda_t}\right)\sqrt{-\gamma_t}\right)
$$
and so from the prior of $\gamma_{t+1}$ (equation (\ref{prior:g1}))
and the power discounting law we obtain
$$
r_{t+1}=\frac{\delta (r_t\lambda_t +2y_t)}{\lambda_t} \quad
\textrm{and} \quad s_{t+1}= \frac{\delta
(s_t\lambda_t+2)}{\lambda_t}.
$$
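This recursion can be coded exactly as stated; a sketch (the argument values in the test are hypothetical):

```python
def ig_power_discount(r, s, y, lam, delta):
    """Power-discounting step for the inverse Gaussian DGLM, as stated:
    r' = delta*(r*lam + 2y)/lam and s' = delta*(s*lam + 2)/lam."""
    return (delta * (r * lam + 2.0 * y) / lam,
            delta * (s * lam + 2.0) / lam)
```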
With $r_t(\ell)=r_{t+1}$ and $s_t(\ell)=s_{t+1}$, the $\ell$-step
forecast distribution of $y_{t+\ell}|y^t$ is
\begin{eqnarray*}
p(y_{t+\ell}|y^t) &=& c (r_{t+1}+2y_{t+\ell})^{-1}
\frac{1}{\sqrt{y_{t+\ell}^3}} \exp
\left(-\frac{\lambda_{t+\ell}}{2y_{t+\ell}}\right) \left(
\frac{s_{t+1}\lambda_{t+\ell}+2}{\lambda_{t+\ell}} \right. \\ &&
\left.\times \exp \left( \frac{ (s_{t+1}\lambda_{t+\ell} + 2)^2}{
\lambda_{t+\ell} ( r_{t+1} \lambda_{t+\ell}+2y_{t+\ell})} \right)
\sqrt{
\frac{\lambda_{t+\ell}\pi}{r_{t+1}\lambda_{t+\ell}+2y_{t+\ell}}} + 2
\right),
\end{eqnarray*}
where the normalizing constant $c$ is
$$
c=(2\pi)^{-1/2}\sqrt{\lambda_{t+\ell}^3}r_{t+1} \left( s_{t+1}
\exp\left(\frac{s_{t+1}^2}{r_{t+1}}\right)\sqrt{\frac{\pi}{r_{t+1}}}+1\right)^{-1}.
$$
The $\ell$-step forecast mean can be deduced from (\ref{IG:prior:m})
as
$$
\mathbb{E}(y_{t+\ell}|y^t)=\mathbb{E}(\mathbb{E}(y_{t+\ell}|\mu_{t+\ell})|y^t)=\mathbb{E}(\mu_{t+\ell}|y^t)
= \frac{ \sqrt{\pi r_t(\ell)} \exp(s_t(\ell)^2/r_t(\ell))}{
\exp(s_t(\ell)^2/r_t(\ell))s_t(\ell)\sqrt{\pi / r_t(\ell)}+1}.
$$
Of course, the above power discounting specifies $r_t$ and $s_t$ for
a random-walk-type evolution of the prior (\ref{eq:igaussian:2}).
Following this, we can specify
$\log\mu_t=\eta_t=\theta_t=\theta_{t-1}+\omega_t$, with
$\omega_t\sim N(0,\Omega)$, and so
$$
\mu_t=\mu_{t-1}\exp(\omega_t),
$$
which leads to the density
$$
p(\mu_t|\mu_{t-1})=\frac{1}{\sqrt{2\pi\Omega}\mu_t} \exp\left( -
\frac{(\log\mu_t-\log\mu_{t-1})^2}{2\Omega}\right).
$$
Therefore, using (\ref{logl}), the log-likelihood function is
\begin{eqnarray*}
\ell(\mu_1,\ldots,\mu_T;y^T) &=& \sum_{t=1}^T \bigg(
\frac{\lambda_t}{2\mu_t^2}(2\mu_t-y_t)
+\log\sqrt{\frac{\lambda_t}{2\pi y_t^3}} - \frac{\lambda_t}{2y_t} \\
&& -\log\left(\sqrt{2\pi\Omega}\,\mu_t\right) -
\frac{(\log\mu_t-\log\mu_{t-1})^2}{2\Omega} \bigg).
\end{eqnarray*}
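For completeness, the log-likelihood above can be evaluated term by term as follows. This is a minimal sketch (names are ours), where we assume an initial value $\mu_0$ is supplied:

```python
import math

def dglm_log_likelihood(mu, y, lam, Omega, mu0):
    """Evaluate the log-likelihood above for a state path mu = (mu_1, ..., mu_T),
    observations y, scales lam, evolution variance Omega, and an assumed
    initial state mu0 (playing the role of mu_0)."""
    ll = 0.0
    prev = mu0
    for m, obs, l in zip(mu, y, lam):
        # inverse Gaussian observation terms
        ll += (l / (2.0 * m ** 2)) * (2.0 * m - obs)
        ll += 0.5 * math.log(l / (2.0 * math.pi * obs ** 3))
        ll -= l / (2.0 * obs)
        # log-normal random-walk evolution terms
        ll -= math.log(math.sqrt(2.0 * math.pi * Omega) * m)
        ll -= (math.log(m) - math.log(prev)) ** 2 / (2.0 * Omega)
        prev = m
    return ll
```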
Bayes factors can be easily computed from $p(y_{t+1}|y^t)$ and the
Bayes factor formula (\ref{bf1}).

To illustrate the inverse Gaussian distribution we consider data
consisting of 30 daily observations of toluene exposure
concentrations (TEC) for a single worker doing stain removing. The
data can be found in Takagi {\it et al.} (1997) who propose a simple
model fit using maximum likelihood estimation for the inverse
Gaussian distribution. However, it may be argued that these data are
autocorrelated, and so an appropriate time series model should be fitted.
Figure \ref{fig:ig1} shows one-step forecast means against the TEC
data. The forecast means are computed using the above DGLM for
the inverse Gaussian response, with $\lambda_t=\lambda$. The
results show that a low value of the discount factor, $\delta=0.5$,
together with a low value $\lambda=0.01$, yields the best forecasts. The
posterior mean $\mathbb{E}(\mu_t|y^t)$ is plotted in Figure \ref{fig:ig2},
from which we can clearly see a time-varying feature in the
parameters of the inverse Gaussian distribution. This feature is not
recognized in Takagi {\it et al.} (1997). These authors propose
estimates of 16.7 and 6.4 for the mean and the scale of the inverse
Gaussian distribution, both larger than our respective values: the
mean of the posterior means
$(\mathbb{E}(\mu_1|y^1)+\cdots+\mathbb{E}(\mu_{30}|y^{30}))/30=14.48$ and
$\lambda=0.01$. We note from Figure \ref{fig:ig1} that as $\lambda$
increases the forecast performance deteriorates, so that a value of
$\lambda$ near 6.4 would yield poor forecast accuracy. The model we
propose here exploits the dynamic behaviour of $\mu_t$ and is an
appropriate model for forecasting.
\begin{figure}[t]
\epsfig{file=ig1.ps, height=10cm, width=15cm}
\caption{One-step forecast mean for the TEC data; panel (a) shows the
actual data (solid line), the one-step forecasts with $\delta=0.5$ and $\lambda=0.01$ (dashed line),
and the one-step forecasts with $\delta=0.5$ and $\lambda=1$ (dotted line); panel (b) shows
the actual data (solid line), the one-step forecasts with $\delta=0.5$ and $\lambda=0.01$ (dashed line),
and the one-step forecasts with $\delta=0.9$ and $\lambda=0.01$ (dotted line).}\label{fig:ig1}
\end{figure}
\begin{figure}[h]
\epsfig{file=ig2.ps, height=10cm, width=15cm}
\caption{Posterior mean $\{\mathbb{E}(\mu_t|y^t)\}$ of the TEC data.}\label{fig:ig2}
\end{figure}
\section{Concluding comments}\label{discussion}
In this paper we discuss approximate Bayesian inference of dynamic
generalized linear models (DGLMs), following West {\it et al.}
(1985) and subsequent authors. Such an approach allows the derivation of the
multi-step forecast distribution, which is a useful consideration
for carrying out error analysis based on residuals, on the
likelihood function, or on Bayes factors. We explore all the above
issues by examining in detail several examples of distributions
including binomial, Poisson, negative binomial, geometric, normal,
log-normal, gamma, exponential, Weibull, Pareto, two special cases
of the beta, and inverse Gaussian.
We believe that DGLMs offer a unique statistical framework for
dealing with a range of statistical problems arising in business and
finance, medicine, biology and genetics, and the behavioural sciences.
In most of these areas, researchers are not well aware of the
advantages that Bayesian inference for DGLMs can offer. In this
context we believe that the present paper offers a clear description
of the methods with detailed examples of many useful response
distributions.
\renewcommand{\theequation}{A-\arabic{equation}}
\setcounter{equation}{0}
\section{Introduction}
The mean curvature flow for hypersurfaces in Euclidean space has been studied systematically since the late 1970s (to name but a few, see \cite{temam}, \cite{brakke}, \cite{huisken}, \cite{gage-hamilton}, \cite{grayson}, \cite{hamilton95}, \cite{white}, \cite{cm-mcf}, \cite{cm12}, and for early work on curve shortening flow \cite{mullins}), with considerable emphasis on the singularity models for the flow: the self-similar solitons.
The oldest known nontrivial complete embedded soliton is Calabi's self-translating curve in $\mathbb{R}^2$, also sometimes called the ``grim reaper'' translating soliton (see Grayson \cite{grayson} and also \cite{mullins}, where it seems to have been first found). For readers more familiar with the Ricci flow, the most analogous object there would be Hamilton's cigar soliton (see \cite{hamilton}, and recall G. Perelman's central ``no cigar'' theorem \cite{perelman}).
Self-translaters arise in the study of the so-called ``Type II'' singularities of the mean curvature flow. Indeed, using a classical result of Hamilton contained in \cite{hamilton95}, Huisken and Sinestrari \cite{HS99a} showed that blow-up limit flows at Type II singularities of mean convex mean curvature flows are complete self-translaters of the kind $\mathbb{R}^{n-k} \times \Sigma^k$, where $\Sigma^k$ is a convex translater in $\mathbb{R}^{k+1}$, with $k = 1, \dots, n$. For the mean convex case see also \cite{HS99b}, \cite{Wh00}, \cite{Wh03} and \cite{HK17}.
If we remove the mean convexity hypothesis, it is known that blow-ups at Type II singularities must be eternal flows, but, to our knowledge, it is still not known whether these eternal flows are generally self-translaters.
(See Chapter 4 in \cite{mantegazza}.)
In the classical subject of minimal surfaces one of the cornerstones of the modern theory is the so-called ``Halfspace Theorem'' and convex hull classification, proven in 1989 by Hoffman and Meeks \cite{hoffman-meeks}. Numerous other authors have written about such halfspace theorems and convex hull properties, in various contexts: See f.ex. \cite{Xa84}, \cite{meeks-rosenberg}, \cite{bess}, \cite{meeks-rosenberg-again}, \cite{haus}, \cite{earp} and \cite{ro-schu-spru}.
In the literature, there are some results at the intersection of these two topics, of solitons and halfspace theorems. For instance in \cite{wei_wylie} (see also \cite{petersen}) there are some results for $f$-minimal hypersurfaces for the case of $\Ric_f>0$, including a halfspace theorem for one important class of mean curvature solitons, the self-shrinkers (see also \cite{PR14}). The paper \cite{CE16} also showed a halfspace theorem (by using the half-catenoid-like ``self-shrinking trumpets'' from \cite{steve-niels} as barriers) and \cite{impiri} showed a ``Frankel property'' for self-shrinkers (meaning: when it so happens that all minimal surfaces in a space must intersect, as in \cite{frankel} and \cite{petersen}). Additionally, for self-translaters, a few significant geometric classification and nonexistence results are now known, see \cite{xj_wang}, \cite{sha}, \cite{halihaj}, \cite{niels}, \cite{hasl}, \cite{jpg}, \cite{imp_rim}, \cite{bueno} and \cite{himw}, but these do not directly address the question of (bi-)halfspace and convex hull properties.
One good reason for the lack of results with a (bi-)halfspace theorem flavor in the case of self-translaters would likely be that the most naive results one might imagine are wrong: F.ex. vertical planes and grim reaper cylinders readily coexist as self-translating solitons without ever intersecting, so there is no easy general ``halfspace theorem'' nor any ``Frankel property''.
Moreover the typical arguments employed often rely on constructing barriers. As discussed in the Appendix, a strategy using other exact solutions to the translater equation does not seem readily available here, except in the case of 2-dimensional surfaces in $\mathbb{R}^3$.
In the present paper we will present the following three main contributions on $n$-dimensional mean curvature flow self-translating solitons (also known as ``translaters'', ``self-translaters'', ``translators'' or ``self-translators'') in $\mathbb{R}^{n+1}$. We assume in the below that the translation direction is $e_{n+1}$.
\begin{theorem}[Bi-Halfspace Theorem]\label{bi-halfspace}
There does not exist any properly immersed self-translating $n$-dimensional hypersurface $\Sigma^n\subseteq\R^{n+1}$, without boundary, which is contained in two transverse vertical halfspaces of $\R^{n+1}$.
\end{theorem}
\begin{theorem}[Bi-Halfspace Theorem w/ Compact Boundary]\label{bi-halfspace_boundary}
Suppose a properly immersed connected self-translating $n$-dimensional hypersurface $(\Sigma^n,\partial\Sigma)$ in $\R^{n+1}$ is contained in two transverse vertical halfspaces of $\R^{n+1}$. If $\partial \Sigma$ is compact then $\Sigma$ is compact.
\end{theorem}
In the next theorem we let $\pi \colon \mathbb{R}^{n+1} \to \mathbb{R}^n$ be the projection in the direction of translation,
$\pi\left( x_1, \dots, x_n, x_{n+1} \right) = ( x_1, \dots, x_n)$.
\begin{theorem}[Convex Hull Classification]\label{convex_hull_noncomp_trans}
Let $(\Sigma^n,\partial \Sigma)$ be a properly immersed connected self-translater in $\mathbb{R}^{n+1}$, with (possibly empty) compact boundary $\partial \Sigma$.
Then exactly one of the following holds.
\begin{enumerate}
\item $\conv (\pi( \Sigma)) = \mathbb{R}^n$,
\item $\conv (\pi ( \Sigma ) )$ is a halfspace of $\mathbb{R}^n$,
\item $\conv (\pi( \Sigma)) $ is a closed slab between two parallel hyperplanes of $ \mathbb{R}^n$,
\item $\conv (\pi( \Sigma)) $ is a hyperplane in $ \mathbb{R}^n$,
\item $\conv (\pi( \Sigma)) $ is a compact convex set. This case occurs precisely when $\Sigma$ is compact.
\end{enumerate}
\end{theorem}
\begin{remark}
From examples (see below) there appears to be no hope of classifying any of the likely wild classes $\Sigma$, $\conv(\Sigma)$ or $\pi(\Sigma)$: Only after applying \emph{both} of the forgetful operations $\conv(\cdot)$ and $\pi(\cdot)$ do we find a short list, which in fact can be thought of plainly as ``vertical slabs'' (including their three degenerate cases).
Note also that $\conv(\cdot)$ and $\pi(\cdot)$ can be freely switched in the statement of Theorem \ref{convex_hull_noncomp_trans}, because for any subset $\Omega \subseteq \mathbb{R}^{n+1}$ they commute:
$$
\conv\left( \pi \left( \Omega \right) \right) =\pi\left( \conv \left( \Omega \right) \right).
$$
\end{remark}
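For finite point sets the commutation of $\conv(\cdot)$ and $\pi(\cdot)$ is easy to check numerically. The following sketch (using SciPy's \texttt{ConvexHull}; the variable names are ours) compares the areas of the two planar convex sets:

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
pts = rng.standard_normal((40, 3))   # a finite sample playing the role of Omega in R^3
proj = pts[:, :2]                    # pi drops the last coordinate

# conv(pi(Omega)): convex hull of the projected points
hull_of_proj = ConvexHull(proj)
# pi(conv(Omega)): project the vertices of the 3d hull, then take their planar hull
proj_of_hull = ConvexHull(pts[ConvexHull(pts).vertices][:, :2])

# the two planar convex sets coincide, so in particular their areas agree
assert abs(hull_of_proj.volume - proj_of_hull.volume) < 1e-9
```

(For 2-dimensional hulls, \texttt{ConvexHull.volume} is the area.)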
\begin{remark}
We note that each of the five cases of Theorem \ref{convex_hull_noncomp_trans} can happen, when $n\geq 2$, except possibly for Case (2). Leaving the case $n=1$ to the reader, let us list examples for each case, assuming $n\geq 2$ (see also the longer list of examples below at the end of Section \ref{sec:prelims}):
\begin{enumerate}
\item Take any rotationally symmetric $\Sigma^n$, e.g. the ``bowl'' translater.
\item No examples appear to be known.
\item Take as $\Sigma^n$ a grim reaper cylinder or any in Ilmanen's $\Delta$-wing family.
\item Take as $\Sigma^n$ any vertical hyperplane of $\mathbb{R}^{n+1}$.
\item Take any compact subset of any of the known examples.
\end{enumerate}
\end{remark}
Observe that an immediate consequence of Theorem \ref{bi-halfspace_boundary} is the following
\begin{corollary}[Ends]\label{corollary_ends}
Any end of a properly immersed self-translating $n$-dimensional hypersurface $\Sigma $ cannot be contained in two transverse vertical halfspaces of $\mathbb{R}^{n+1}$.
\end{corollary}
\begin{remark}
The compact boundary version in Theorem \ref{bi-halfspace_boundary} does not follow from any generally valid modification of the proof of Theorem \ref{bi-halfspace}: For other related ambient spaces it can happen that even a halfspace theorem is true and yet no bi-halfspace theorem holds for the compact boundary case. See f.ex. the halfspace theorem for self-shrinkers in \cite{CE16}, and note how the asymptotically conical self-shrinkers in \cite{steve-niels} can easily be cut to get such examples which are noncompact with compact boundary.
\end{remark}
Let us quickly note how this is (for $\partial\Sigma=\emptyset$) strictly stronger than the old Hoffman-Meeks result, so that in the process we get a new proof of this classical fact:
\begin{corollary}[Hoffman-Meeks: \cite{hoffman-meeks}]\label{corr-hm}
The classification (1)-(5) in Hoffman-Meeks's Theorem 2 (Theorem \ref{hoffman_meeks_theorem} below) holds true for properly immersed minimal hypersurfaces in $\R^{n+1}$ without boundary.
\end{corollary}
\begin{proof}[Proof of Corollary \ref{corr-hm}]
For $n\geq 2$, let $N^{n-1}\subseteq \R^{n}$ be a connected properly immersed minimal hypersurface. If $\partial N=\emptyset$, apply Theorem \ref{convex_hull_noncomp_trans} to the self-translater $\Sigma^n=N^{n-1}\times\mathbb{R}$. Then note
\[
\conv(N^{n-1})=\conv(\pi(N^{n-1}\times\mathbb{R}))=\conv(\pi(\Sigma)),
\]
from which the conclusion follows.
\end{proof}
As immediate corollaries to Theorem \ref{convex_hull_noncomp_trans}, we also recover the following previously known result:
\begin{corollary}[Corollary 2.2 \cite{xj_wang}]\label{corollary_domain_convex_graphs}
Let $\Sigma^n \subseteq \mathbb{R}^{n+1} $ be a complete connected convex graphical self-translater. I.e. there exists a smooth function $u~\colon~\Omega~\to~\mathbb{R}$, where $\Omega \subseteq \mathbb{R}^n$, such that $\graph\left( u \right) = \Sigma$.
Then exactly one of the following holds.
\begin{enumerate}
\item $\Omega = \mathbb{R}^n$.
\item $ \Omega$ is a halfspace in $\mathbb{R}^n$.
\item $\Omega$ is a slab between two parallel hyperplanes of $\mathbb{R}^n$.
\end{enumerate}
\end{corollary}
\begin{proof}
Since $\Sigma $ is convex and complete, from a theorem of Sacksteder (see \cite{sacksteder}), we have that $\Sigma = \partial C$, where $C \subseteq \mathbb{R}^{n+1}$ is a convex set. Therefore $\Sigma$ is a closed set w.r.t. the ambient topology and thus is properly embedded.
Let $u \colon \Omega \subseteq \mathbb{R}^{n} \to \mathbb{R}$ be a smooth function such that $\Sigma = \graph(u)$. Then clearly $\Omega$ is convex (indeed it is the orthogonal projection of the convex set $C$ onto $\mathbb{R}^n$) and $u$ is a convex function. Therefore
$$
\conv ( \pi (\Sigma)) = \conv( \Omega) = \Omega.
$$
We can now apply Theorem \ref{convex_hull_noncomp_trans} in order to conclude the proof.
\end{proof}
\begin{remark}
X.-J. Wang proved more than Corollary \ref{corollary_domain_convex_graphs}: For convex graphs, Case $(2)$ (graph over a halfspace) cannot happen.
\end{remark}
In \cite{SX17}, Spruck and Xiao showed that any complete oriented immersed mean convex $2$-dimensional self-translater is convex. In particular, any complete $2$-dimensional graphical self-translater is convex. Therefore in the case $n = 2$ one can improve Corollary \ref{corollary_domain_convex_graphs} by removing the convexity assumption. In particular we recover the following result.
\begin{corollary}[\cite{himw} and \cite{SX17}]
\label{corollary_no_wedge_domain} The domains of 2-dimensional graphical self-translaters belong to Cases (1)-(3): respectively all of $\mathbb{R}^2$, half-planes, or slabs in $\mathbb{R}^2$. In particular, a properly immersed self-translating $2$-dimensional hypersurface $\Sigma^2\subseteq \mathbb{R}^3$ cannot be the graph over a wedge-shaped domain in $\mathbb{R}^2$.
\end{corollary}
\begin{remark}
The above Corollary \ref{corollary_no_wedge_domain} is contained in the paper \cite{himw}, where all complete $2$-dimensional graphical self-translaters have very recently been fully classified (using \cite{SX17}). Again, Case (2) in fact cannot happen for 2-dimensional graphs.
\end{remark}
In \cite{sha} and \cite{sha15}, Shahriyari proved that there are no complete $2$-dimensional translaters which are graphical over a bounded domain. This fact was later generalized by M\o{}ller in \cite{niels} (see \cite{halihaj} for the half-cylinder case), where he proved that there are no properly embedded $n$-dimensional self-translaters without boundary contained in a cylinder of the kind $\Omega \times \mathbb{R}$, where $\Omega \subseteq \mathbb{R}^n$ is bounded:
\begin{corollary}[\cite{niels}]\label{corollary_cylinders}
No noncompact properly immersed self-translating $n$-dimensional hypersurface $(\Sigma^n, \partial \Sigma) $ in $ \mathbb{R}^{n+1}$ with compact boundary can be contained in a cylinder $\Omega \times\mathbb{R}$ with $\Omega \subseteq \mathbb{R}^n$ bounded.
\end{corollary}
\begin{proof}
The proof follows easily from Theorem \ref{bi-halfspace_boundary}. Indeed note that given a bounded set $\Omega \subseteq \mathbb{R}^n$, the cylinder $\Omega \times\mathbb{R}$ is contained in the intersection of two transverse vertical halfspaces.
\end{proof}
\begin{remark}
The proof shows more than Corollary \ref{corollary_cylinders}, namely that the conclusion holds assuming only boundedness in two directions: $\Sigma^n\subseteq\Omega_2\times\mathbb{R}^{n-1}$ cannot happen for $\Omega_2\subseteq\mathbb{R}^{2}$.
\end{remark}
As will be clear below, most of the ideas that we will need were essentially in place as early as the 1960s, much earlier than the minimal surface and curvature flow papers cited above. Namely, in the original paper by Omori \cite{omori}, he showed by quite similar methods that in Euclidean $n$-space, cones with angle $0 < \theta < \pi$ cannot contain properly embedded minimal surfaces.
Somewhat later, in 1989, contained within the proof of ``Theorem 2'' from \cite{hoffman-meeks} (which seems independent of Omori's ideas) is the fact that, while the Hoffman-Meeks ``halfspace theorem'' only works for minimal 2-surface immersions $\Sigma^2\to\mathbb{R}^3$, one has a ``bi-halfspace theorem'' (stronger than the cone theorems) for minimal hypersurfaces $\Sigma^{n}\to\R^{n+1}$ for $n\geq 3$, even allowing compact boundary. Their proof used barriers from the nonlinear Dirichlet problem known as the $n$-dimensional Plateau problem for graphs. Some disadvantages of that approach are clear: For when do such barriers exist, and if they in fact do, what are their precise properties, as needed for a ``separating tangency'' argument to run?
It then appears that only within the last decade it was realized by Borb\'ely \cite{borbelywedge} that one can prove bi-halfspace theorems for minimal 2-surface immersions $\Sigma^2\to\mathbb{R}^3$, under the assumption that the Omori-Yau principle (so named after \cite{omori}-\cite{cheng-yau}) is known to be available on the given $\Sigma^2$. This was also expanded by Bessa, de Lira and Medeiros in \cite{bessalira} where they showed Borb\'ely-style ``wedge'' theorems for stochastically complete minimal surfaces in Riemannian products $(M\times N, g_M\oplus g_N)$, where $(N,g_N)$ is complete without boundary. Seeing as the Huisken-Ilmanen metric, in which self-translaters are the minimal surfaces, is not a Riemannian product\footnote{Note however that \cite{smocz} showed that it can be seen as a warped Riemannian product.} nor complete, and our surfaces can have boundaries,
we will directly take Borb\'ely's method as our point of departure.
Here, in our case of $n$-dimensional self-translaters $\Sigma^n\to\R^{n+1}$, the Omori-Yau maximum principle in turn works quite generally, which is a well-established fact that has previously been invoked by several authors for related problems: See \cite{xin}, \cite{SX16}-\cite{SX17} and \cite{imp_rim}. Many other authors have written on the topic, see e.g. \cite{schoen-yau-lectures}, \cite{pigola03}, \cite{barr}. For a general yet particularly easy to state result, let us mention this: The Omori-Yau maximum principle holds for every
submanifold properly immersed with bounded mean curvature into a Riemannian space form (see \cite{PRS05}). Here we will be using the formulation and short proof in \cite{xin}, so as to make the whole presentation quite elementary and essentially self-contained, including as a byproduct the proof of the Hoffman-Meeks results for $n\geq 3$ and empty boundary, in Corollary \ref{corr-hm} below.
In a later work \cite{CM19}, we generalize the main ideas contained in the present paper to ancient mean curvature flows, providing a parabolic Omori-Yau principle and using it for proving a bi-halfspace theorem for ancient flows.
\section{Overview}\label{sec:outline}
In Section \ref{sec:prelims} we introduce notation and list a few of the technical lemmas in the form that we will need them later, with (references to) short proofs.
In Section \ref{sec:bi-halfspace} we prove a new ``Bi-Halfspace Theorem'' for properly immersed self-translaters, which is Theorem \ref{bi-halfspace}. We also fully classify all the possible pairs of halfspaces such that their intersections contain a complete self-translater, in Corollary \ref{bi-halfspace_corollary}.
In Section \ref{sec:basics} we study the convex hull of such hypersurfaces, both for compact self-translaters and for noncompact ones, but with compact (possibly empty) boundary. We observe a behavior very similar to the one of minimal submanifolds of the Euclidean space. The main result of the section is Theorem~\ref{convex_hull_noncomp_trans} and it was inspired by a result by Hoffman and Meeks in the context of minimal submanifolds of $\mathbb{R}^{n+1}$ (see \cite{hoffman-meeks}). The proof here is based on our ``Bi-Halfspace'' Theorem \ref{bi-halfspace} and the compact boundary version Theorem \ref{bi-halfspace_boundary} and hence diverges significantly from the proof of the theorem of Hoffman and Meeks, which relied on constructing barriers via certain nonlinear Dirichlet problems.
In the Appendix (Section \ref{appendix}) we will comment more on this point and we will provide an alternative proof of Theorem \ref{convex_hull_noncomp_trans}, which is closer in spirit to the one by Hoffman and Meeks, but which only works in the case $n=2$.
\section{Preliminaries and Notation}\label{sec:prelims}
In what follows, $(x_1,x_2, \dots, x_{n}, x_{n+1})$ are the standard coordinates of $\mathbb{R}^{n+1}$ and $(\boldsymbol{e}_1,\boldsymbol{e}_2, \dots, \boldsymbol{e}_n, \boldsymbol{e}_{n+1})$ is the standard orthonormal basis of $\mathbb{R}^{n+1}$.
On $\mathbb{R}^{n+1}$ we will, with a slight abuse of notation, denote the coordinate vector fields by $\partial_i=\frac{\partial}{\partial x_i}=e_i$.
In this paper $\Sigma^n \subseteq \mathbb{R}^{n+1}$ will always denote a smooth properly immersed self-translater with velocity vector $\boldsymbol{e}_{n+1}$.
Recall that properly immersed hypersurfaces with boundary are geodesically complete with boundary in the induced Riemannian metric (the Heine-Borel property with Hopf-Rinow).
The evolution of $\Sigma^n$ under the mean curvature flow is a unit speed translation in the direction of the positive $x_{n+1}$-axis. Therefore $\Sigma^n$ satisfies the following equation
\begin{equation}\label{translater_equation}
\boldsymbol{H} = \langle \boldsymbol{e}_{n+1}, \boldsymbol{\nu} \rangle \boldsymbol{\nu},
\end{equation}
where $\boldsymbol{H}=-H\nu$ is the mean curvature vector of $\Sigma^n$ and $\boldsymbol{\nu}$ is the unit normal vector field on $\Sigma^n$.
Let us recall here two important tools that we will need for our work.
\begin{lemma}[Comparison Principle for MCF]\label{comparison_principle}
Let $\varphi \colon M_1 \times [0, T) \to \R^{n+1} $ and $\psi \colon M_2 \times [0, T)\to \R^{n+1}$ be two hypersurfaces evolving by mean curvature flow and let us assume that $M_1$ is properly immersed while $M_2$ is compact. Then the distance between them is nondecreasing in time.
\end{lemma}
\begin{proof}
See e.g. the proof of Theorem 2.2.1 in \cite{mantegazza}.
\end{proof}
\begin{lemma}[Principle of Separating Tangency for Self-Translaters] \label{tangency_principle}
Let $\Sigma_1^n$ and $\Sigma_2^n$ be two connected (unit speed, same direction) self-translaters immersed into $ \mathbb{R}^{n+1} $, with (possibly empty) boundaries $\partial \Sigma_1$ and $\partial \Sigma_2$.
Suppose that there exists a point $p \in \Sigma_1 \cap \Sigma_2$ such that it is an interior point for both the self-translaters. Let us assume that the corresponding tangent spaces $T_{p}\Sigma_1 $ and $T_p\Sigma_2$ coincide and assume that, locally around $p$, $\Sigma_1$ lies on one side of $\Sigma_2$.
Then there are open neighborhoods $U_1 \subseteq \Sigma_1$ and $U_2 \subseteq \Sigma_2$ of $p$ such that $U_1 = U_2$.
\end{lemma}
\begin{proof}
This uses the maximum principle and unique continuation. See Theorem 2.1.1 in \cite{jpg}, Lemma 2.4 in \cite{niels} and Theorem 2.1 in \cite{mss}.
\end{proof}
\subsection*{Well-known Examples}
We conclude this section by enumerating some of the most well-known examples of self-translaters.
\begin{enumerate}
\item (Translating minimal hypersurfaces) Any hyperplane of $\mathbb{R}^{n+1}$ which is parallel to $e_{n+1}$ is a self-translater. More generally, if $N^{n-1} \subseteq \mathbb{R}^n$ is a minimal submanifold, then we have that $\Sigma \coloneqq N \times \mathbb{R} \subseteq \mathbb{R}^{n+1}$ is self-translating in the $e_{n+1}$-direction. This follows from the short computation $H_{N\times \mathbb{R}}=0=\left\langle(\nu_N,0),(\textbf{0},1)\right\rangle_{\R^{n+1}}$.
\item (Grim reaper cylinder) Consider the function $f \colon \left( -\frac{\pi}{2}, \frac{\pi}{2}\right) \to \mathbb{R}$ defined as $f(x) \coloneqq - \ln\left( \cos \left( x \right) \right) $. Its graph $\Gamma \coloneqq \graph\left( f \right)$ is called Calabi's \emph{grim reaper} curve (first found in \cite{mullins}) and it is the only nonflat connected complete translating soliton for the curve shortening flow. The hypersurface $\Gamma^n \coloneqq \mathbb{R}^{n-1} \times \Gamma \subseteq \mathbb{R}^{n+1}$ is called a \emph{grim reaper cylinder} and it is a self-translater.
\item (Rotationally symmetric self-translaters) In \cite{CSS}, the authors classify all the self-translaters which are rotationally symmetric with respect to the $x_{n+1}$-axis. These are the so-called \emph{bowl soliton} $U$, which was already discovered in \cite{aw}, and the family of \emph{winglike self-translaters}, also known as \emph{translating catenoids}.
The bowl soliton is the graph of an entire convex function $u \colon \mathbb{R}^n \to \mathbb{R}$ and it is asymptotic to a paraboloid. Indeed it is also known as the \emph{translating paraboloid}.
The wing-like self-translaters are all diffeomorphic to $\mathbb{S}^{n-1} \times \mathbb{R}$, where $\mathbb{S}^{n-1}$ is the $(n-1)$-dimensional sphere. They roughly look like two bowl solitons, one above the other, glued together with a vertical neck. Both of the ends are asymptotic to $U$. For each $R > 0$ there exists a unique (up to a translation in the $x_{n+1}$ direction) winglike self-translater $W_R$ such that the size of its neck is $R$.
\item (Gluing constructions) The desingularization techniques, originally developed by Kapouleas (see \cite{kap}) for building new examples of minimal and constant mean curvature hypersurfaces, have been applied by X.H. Nguyen and others, in order to prove the existence of new translating solitons, by ``gluing together'' already known examples. For more details, we refer to \cite{tridents}, \cite{finite_gr}, \cite{doubly_periodic}, \cite{ddn} and \cite{smith}. See also \cite{kkm} (and \cite{nguy-shrink}) for the first gluing construction for mean curvature solitons with non-flat ends.
\item (Delta-wing self-translaters) Recently, Bourni, Langford, and Tinaglia (Theorem 1 in \cite{blt18}), and independently Hoffman, Ilmanen, Mart\'in and White (Theorems 4.1, 8.1 in \cite{himw}) have proved that for each $b > \frac{\pi}{2}$, there exists a strictly convex and complete self-translater which lies in the slab $(-b, b) \times \mathbb{R}^n$ and in no smaller slab.
Furthermore, uniqueness was also proven in \cite{himw}. They called this new family of self-translaters, which is parametrized by the width of the slab, the $\Delta$-wings.
\item (Annuli, helicoid and Scherk's) In an upcoming paper \cite{himw2}, the authors have announced that they will be constructing several new families of properly embedded (nongraphical)
translators (quoting the abstract for a talk at Stanford in July 2018): ``[...] a two-parameter family of translating annuli, examples that
resemble Scherk’s minimal surfaces, and examples that resemble helicoids.''
\end{enumerate}
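As a sanity check of Example (2), one can verify numerically that Calabi's grim reaper curve $\graph(-\ln\cos x)$ satisfies the one-dimensional translater equation $\kappa = \langle \boldsymbol{e}_2, \boldsymbol{\nu}\rangle$. A minimal sketch (names are ours):

```python
import math

def grim_reaper_check(x):
    """For the graph of f(x) = -log(cos x) on (-pi/2, pi/2), return the
    curvature kappa = f'' / (1 + f'^2)^(3/2) and <e_2, nu>, the vertical
    component of the upward unit normal nu = (-f', 1)/sqrt(1 + f'^2)."""
    fp = math.tan(x)                  # f'(x) = tan x
    fpp = 1.0 / math.cos(x) ** 2      # f''(x) = sec^2 x
    kappa = fpp / (1.0 + fp ** 2) ** 1.5
    return kappa, 1.0 / math.sqrt(1.0 + fp ** 2)

# both quantities equal cos x, so the curve translates vertically with unit speed
for x in (-1.2, -0.3, 0.0, 0.7, 1.4):
    kappa, nu_height = grim_reaper_check(x)
    assert abs(kappa - nu_height) < 1e-9 and abs(kappa - math.cos(x)) < 1e-9
```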
\section{Bi-halfspace Theorems for Self-Translating Solitons}\label{sec:bi-halfspace}
In this section we prove the ``Bi-Halfspace'' Theorem \ref{bi-halfspace} and the case with boundary Theorem \ref{bi-halfspace_boundary}. Let us first make a few remarks:
\begin{remark}
In the theorems, the transversality can simply be defined via the unit normals to the boundary hypersurfaces (which are affine hyperplanes) of the halfspaces: They must not be (anti-)parallel as vectors in $\R^{n+1}$.
Note that these theorems are vacuously true for $n=1$, as in $\mathbb{R}^2$ all vertical affine halfspaces are (anti-)parallel and hence never transverse. Thus, in the below we will throughout tacitly assume $n\geq 2$.
Note also that the statements and proofs of the ``Bi-Halfspace'' Theorem \ref{bi-halfspace} and the case with boundary Theorem \ref{bi-halfspace_boundary} can be either false or true, with an easy proof, if one or both of the two halfspaces are not vertical. See Corollary \ref{bi-halfspace_corollary} at the end of this section for a clarification.
\end{remark}
Let us state the version of the Omori-Yau lemma which we will be needing:
\begin{lemma}[Omori-Yau for Translating Solitons]\label{mementomori}
Let $(\Sigma^n,\partial\Sigma)$ be a properly immersed self-translating soliton in $\mathbb{R}^{n+1}$ which is complete with boundary. Suppose that $f:\Sigma^n\to\mathbb{R}$ is a function which satisfies:
\begin{itemize}
\item[(i)] $\sup_\Sigma |f|<\infty,\quad \sup_{\partial \Sigma} f<\sup_\Sigma f$,
\item[(ii)]$f\in C^0(\Sigma)$,
\item[(iii)] $\exists \varepsilon_f>0$ s.t. $f$ is $C^2$ on the set $\{p\in\Sigma: f(p) > \sup_\Sigma f -\varepsilon_f\}$.
\end{itemize}
Then there exists a sequence $\{p_k\}$ in $\Sigma^n$ such that:
\begin{align}
&\lim_{k\to\infty} f(p_k)= \sup_{\Sigma} f,\label{OY_prop1}\\
&\lim_{k\to\infty} \nabla^\Sigma f(p_k)=0\label{OY_prop2},\\
&\lim_{k\to\infty} \Delta_\Sigma f(p_k)\leq 0.\label{OY_prop3}
\end{align}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{mementomori}]
A short direct proof can be found in \cite{xin} (using that $\Sigma^n$ is complete with boundary and properly immersed), which is easily adapted to the form stated here. For bounded $|f|$ the condition of Xin,
\[
a_k\in\Sigma^n,\quad\|a_k\|_{\R^{n+1}}\hspace{-1pt}\to\infty\quad\Rightarrow\quad\lim_{k\to\infty}\frac{f(a_k)}{\|a_k\|_{\R^{n+1}}\hspace{-1pt}}=0
\]
is of course trivially satisfied.
\end{proof}
\begin{proof}[Proof of the ``Bi-Halfspace'' Theorem \ref{bi-halfspace}]
Any affine halfspace $H\subseteq\mathbb{R}^{n+1}$ can be given by a pair of (offset and direction, resp.) vectors $(b,w)\in \mathbb{R}^{n+1}\times \mathbb{S}^{n}$, where we view $\mathbb{S}^{n}\subseteq\R^{n+1}$. Namely:
\begin{align*}
&H=H_{(b,w)}:=\left\{x\in\mathbb{R}^{n+1}: \langle x-b,w\rangle\geq 0\right\},\\
&P:=\partial H = \left\{x\in\mathbb{R}^{n+1}: \langle x-b,w\rangle= 0\right\}.
\end{align*}
Note that $w$ is unique but any $b\in \partial H$ works. Recall that two such $n$-planes $P_1,P_2$ have transverse intersection $P_1\pitchfork P_2$ if and only if the corresponding unit normals satisfy $w_1\nparallel w_2$ (so antiparallel is also forbidden). This is also what it means for two halfspaces $H_1$ and $H_2$ to be transverse.
What we call vertical halfspaces are those $H_{(b,w)}$ for which $w\perp e_{n+1}$, i.e. $w=(w^{(1)},\ldots,w^{(n)},0)\in\mathbb{S}^{n}\times\{0\}$.
We now perform a couple of normalizations which are not essential but greatly simplify some of the computations: Suppose that an $e_{n+1}$-directed self-translating hypersurface $\Sigma^n\subseteq\R^{n+1}$ is contained in a pair of transverse vertical halfspaces, i.e. that $\Sigma^n\subseteq H_1\cap H_2$. By simultaneously moving $\Sigma^n$ and the $H_i$, we may assume $b_1=b_2=0$ (pick any $b\in P_1\cap P_2$, which is nonempty by transversality, then translate by $-b$). Note also that ${\operatorname{span}} (w_1,w_2)$ defines a $2$-dimensional subspace in $\R^{n}\times\{0\}$.
Then, by acting rigidly with $O(n)$ on the $\mathbb{R}^{n}$-factor (take an orthonormal basis for this 2-plane, fill it out to an orthonormal basis of $\R^{n}$, and finally compose with an $O(2)$-map in the first two coordinates), we can assume that there exists $(\xi,\eta)$ such that $\xi,\eta>0$ with $\|(\xi,\eta)\|=1$ and:
\[
w_1 = (\xi,\eta,0,\ldots,0),\quad w_2 = (\xi,-\eta,0,\ldots,0).
\]
As explained in the introduction, we will now proceed with an adaptation of the method of Borb\'ely to our situation of $n$-dimensional self-translaters. Consider for $R>0$ the respective affine hyperplanes of equidistance: $P_i + Rw_i=\{x: \langle x,w_i\rangle = R\}$. Their intersection locus is an $(n-1)$-dimensional vertical affine subspace $\mathscr{L}_R:=(P_1 + Rw_1)\cap(P_2 + Rw_2)$. Linear algebra reveals a simple explicit expression for this locus:
\begin{equation}\label{LocusElectus}
\mathscr{L}_R:=\left\{\left(\fracsm{R}{\xi},0,x_3,\ldots,x_{n+1}\right):\: (x_3,\ldots,x_{n+1})\in\mathbb{R}^{n-1}\right\}.
\end{equation}
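For the reader's convenience, let us note that (\ref{LocusElectus}) follows by solving the two defining linear equations: a point $x$ lies on both hyperplanes of equidistance if and only if
\begin{align*}
\langle x,w_1\rangle &= \xi x_1 + \eta x_2 = R,\\
\langle x,w_2\rangle &= \xi x_1 - \eta x_2 = R,
\end{align*}
which, since $\xi,\eta>0$, forces $x_2=0$ and $x_1=\fracsm{R}{\xi}$, with the remaining coordinates $x_3,\ldots,x_{n+1}$ free.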
We consider then the ambient Euclidean distance function from points $x\in\R^{n+1}$ to $\mathscr{L}_R$:
\begin{equation}
d(x):=d_R(x):=\dist_{\R^{n+1}}\hspace{-1pt}(x,\mathscr{L}_R)=\sqrt{\left(x_1-\fracsm{R}{\xi}\right)^2 + x_2^2},\quad x\in\R^{n+1}.
\end{equation}
Clearly $\mathscr{L}_R=\{x\in\R^{n+1}: d_R(x) = 0\}$ and $\|\nabla^{\R^{n+1}}\hspace{-1pt} d\|=1$ on $\R^{n+1}\setminus\mathscr{L}_R$. We define the cylindrical set by:
\[
\mathscr{D}_R=\left\{x\in\R^{n+1}: d_R(x) \leq R\right\},
\]
which is an $(n+1)$-dimensional solid with boundary.
Then for any $R>0$, explicitly
\[
\mathscr{D}_R\cap P_i = \left\{\left(\fracsm{R\eta^2}{\xi},(-1)^{i} R\eta,x_3,\ldots,x_{n+1}\right): (x_3,\ldots,x_{n+1})\in\mathbb{R}^{n-1}\right\},
\]
which disconnects $\partial{(H_1\cap H_2)}$ and the set $(H_1\cap H_2)\setminus\mathscr{D}_R$ has exactly two connected components (both unbounded).
We label by $\mathcal{V}_R$ the connected component of $(H_1\cap H_2)\setminus\mathscr{D}_R$ where $d_R$ is bounded (the other component, where $d_R$ is unbounded, we will not need to refer to directly). Notice that as $R\nearrow \infty$ we have $\mathcal{V}_R\nearrow H_1\cap H_2$. From now on, we will pick a fixed $R>0$ large enough so that $\Sigma\cap \mathcal{V}_R\neq \emptyset$.
In the below, we will at times drop the subscript and write $d(x):=d_R(x)$.
A couple of standard, elementary computations show that
\begin{align}
\label{ZeroDir}
\Hess_{\R^{n+1}}\hspace{-1pt} d \left(\nabla^{\R^{n+1}}\hspace{-1pt} d_R,\nabla^{\R^{n+1}}\hspace{-1pt} d_R \right)&=0, \quad\textrm{on}\quad \R^{n+1}\setminus \mathscr{L}_R,\\
\Delta_{\R^{n+1}}\hspace{-1pt} d_R & = \frac{1}{d_R}, \quad\textrm{on}\quad \R^{n+1}\setminus \mathscr{L}_R.
\end{align}
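Indeed, writing $u := x_1-\fracsm{R}{\xi}$ and $v := x_2$, so that $d_R=\sqrt{u^2+v^2}$, the only nonvanishing second derivatives are those in the $(u,v)$-plane:
\[
\Hess_{\R^{n+1}}\hspace{-1pt} d_R = \frac{1}{d_R^3}
\begin{pmatrix}
v^2 & -uv\\
-uv & u^2
\end{pmatrix}\quad\textrm{(in the $(u,v)$-coordinates)},
\]
a rank one matrix with trace $\fracsm{1}{d_R}$, which annihilates the gradient $\nabla^{\R^{n+1}}\hspace{-1pt} d_R=\fracsm{(u,v)}{d_R}$.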
The first equation, giving an eigenvector field for the eigenvalue $\lambda = 0$, can also be deduced from $d_R(x)$ being linear in the gradient direction. Note also that as $d_R$ does not depend on the last $n-1$ coordinates of $\R^{n+1}$, $\Hess_{\R^{n+1}}\hspace{-1pt} d_R$ has the $n-1$ orthonormal eigenvector fields $e_3,\ldots,e_{n+1}$ with eigenvalue zero, all perpendicular to $\nabla^{\R^{n+1}}\hspace{-1pt} d_R$. The only nonzero eigenvalue is $\lambda = 1/d_R$ with unit length eigenvector field correspondingly given by e.g.
\begin{equation}\label{ChiField}
\chi=\Big(-\fracsm{\partial d_R}{\partial x_2},\fracsm{\partial d_R}{\partial x_1},0,\ldots,0\Big), \quad\textrm{on}\quad \R^{n+1}\setminus \mathscr{L}_R,
\end{equation}
which together with the other listed eigenvector fields forms an orthonormal frame field on $\mathbb{R}^{n+1}\setminus\mathscr{L}_R$.
The following simple fact follows from a small exercise in linear algebra: Given a square symmetric matrix $A\in\mathrm{Mat}_{n+1}(\mathbb{R})$ the trace over an $n$-dimensional hyperplane $P_{\mu}$ defined by a unit normal vector $\mu\in\R^{n+1}$ is:
\begin{equation}
\tr_{\mu}(A) = \sum_{i=1}^{n+1}\lambda_i\left(1-\left(\left\langle v_i,\mu\right\rangle_{\R^{n+1}}\hspace{-1pt}\right)^2\right),
\end{equation}
where the $(\lambda_1,\ldots,\lambda_{n+1})$ are the eigenvalues of $A$ with multiplicity and $(v_i)\subseteq\R^{n+1}$ a corresponding orthonormal basis of eigenvectors. Thus in our case of a Hessian with only one nonzero eigenvalue and corresponding unit eigenvector field $\chi$, we get the comparatively simple expression from tracing over $T_p\Sigma$ with the unit normal $\nu$:
\begin{equation}\label{TraceTrace}
\tr_{\Sigma} \left(\Hess_{\R^{n+1}}\hspace{-1pt} d\right) = \frac{1-\left(\left\langle \chi,\nu\right\rangle_{\R^{n+1}}\hspace{-1pt}\right)^2}{d},\quad\textrm{on}\quad\Sigma^n\setminus \mathscr{L}_R.
\end{equation}
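For completeness, the linear algebra fact used above can be verified by expanding $\mu=\sum_{i=1}^{n+1}\langle v_i,\mu\rangle v_i$ in the eigenbasis: the trace over the hyperplane $P_\mu$ is the full trace minus the contribution of the normal direction $\mu$, so
\[
\tr_{\mu}(A) = \tr(A) - \langle A\mu,\mu\rangle_{\R^{n+1}}\hspace{-1pt} = \sum_{i=1}^{n+1}\lambda_i - \sum_{i=1}^{n+1}\lambda_i\left(\left\langle v_i,\mu\right\rangle_{\R^{n+1}}\hspace{-1pt}\right)^2.
\]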
We now define the modified distance function $f:\Sigma^n\to \mathbb{R}$:
\begin{equation}\label{definition_f}
f(p)=\begin{cases}
d_R(p), \quad p\in\Sigma\cap \mathcal{V}_R,\\
R, \quad\quad\:\:\: p\in\Sigma^n\setminus\mathcal{V}_R.
\end{cases}
\end{equation}
This function is well-defined and continuous (as $d_{\mid\partial\mathscr{D}_R}=R$), and it is smooth on $\Sigma^n\setminus\mathscr{D}_R$. It is also bounded: explicitly we have (using for the first inequality that $R>0$ was fixed large enough that $\Sigma\cap \mathcal{V}_R\neq\emptyset$, and recalling also $0<\xi<1$):
\begin{equation}\label{f-bounds}
R<\sup_\Sigma f \leq R/\xi<\infty.
\end{equation}
At points $p\in\Sigma\cap \mathcal{V}_R$ (so that in particular $f=d_{\mid\Sigma}$ is smooth), we have that the gradient equals the tangential part of the ambient gradient:
\begin{equation}\label{grad_f}
\nabla^\Sigma f = \left(\nabla^{\R^{n+1}}\hspace{-1pt} d\right)^\top = \nabla^{\R^{n+1}}\hspace{-1pt} d- \left(\nabla^{\R^{n+1}}\hspace{-1pt} d\right)^\perp=\nabla^{\R^{n+1}}\hspace{-1pt} d - \langle\nabla^{\R^{n+1}}\hspace{-1pt} d,\nu\rangle_{\R^{n+1}}\hspace{-1pt}\nu,
\end{equation}
with length computed using (\ref{ChiField}) to be (recall again $\|\nabla^{\R^{n+1}}\hspace{-1pt} d\|_{\R^{n+1}}\hspace{-1pt}=1$):
\begin{equation}\label{dot_to_zero}
\begin{split}
\|\nabla^\Sigma f\| & = \sqrt{1 - \left(\left\langle \nabla^{\R^{n+1}}\hspace{-1pt} d,\nu\right\rangle_{\R^{n+1}}\hspace{-1pt}\right)^2}\\
& = \left|\left\langle \chi,\nu\right\rangle_{\R^{n+1}}\hspace{-1pt}\right|.
\end{split}
\end{equation}
So we can finally recast (\ref{TraceTrace}) as the following fundamental identity for the distance function to the locus $\mathscr{L}_R$:
\begin{equation}\label{Laplace-f-one}
\tr_{\Sigma} \left(\Hess_{\R^{n+1}}\hspace{-1pt} d_R\right) = \left(1-\|\nabla^\Sigma f\|^2\right)\Delta_{\R^{n+1}}\hspace{-1pt} d_R,\quad\mathrm{on}\quad\Sigma\cap \mathcal{V}_R.
\end{equation}
We recall that the vector-valued second fundamental form is $A(X,Y):=(\nabla^{\R^{n+1}}_X Y)^\perp$. Now apply (\ref{grad_f}) and recall $\nabla^\Sigma_X Z=\left(\nabla^{\R^{n+1}}_X \overline{Z}\right)^\top$, for $\overline{Z}$ any extension of $Z$. Then for any $X,Y\in T_p\Sigma$:
\begin{align*}
\Hess_\Sigma f(X,Y) &:= \left\langle \nabla^\Sigma_X\nabla^\Sigma f, Y \right\rangle_\Sigma =
\left\langle \nabla^\Sigma_X\left[\nabla^{\R^{n+1}}\hspace{-1pt} d - \left(\nabla^{\R^{n+1}}\hspace{-1pt} d\right)^\perp\right], Y \right\rangle\\
&=\left\langle \nabla_X^{\R^{n+1}}\hspace{-1pt}\left[\nabla^{\R^{n+1}}\hspace{-1pt} d - \overline{\left(\nabla^{\R^{n+1}}\hspace{-1pt} d\right)^\perp}\right], Y \right\rangle\\
&=\Hess_{\R^{n+1}}\hspace{-1pt} d(X,Y) - \left\langle \nabla_X^{\R^{n+1}}\hspace{-1pt}\overline{\left(\nabla^{\R^{n+1}}\hspace{-1pt} d\right)^\perp}, Y \right\rangle\\&=\Hess_{\R^{n+1}}\hspace{-1pt} d(X,Y) + \left\langle \nabla^{\R^{n+1}}\hspace{-1pt} d, A(X,Y)\right\rangle_{\R^{n+1}}\hspace{-1pt},
\end{align*}
where the last step is seen by computing
\[
X.\left\langle \overline{\left(\nabla^{\R^{n+1}}\hspace{-1pt} d\right)^\perp}, \overline{Y} \right\rangle = \left\langle \nabla_X^{\R^{n+1}}\hspace{-1pt}\overline{\left(\nabla^{\R^{n+1}}\hspace{-1pt} d\right)^\perp}, \overline{Y} \right\rangle + \left\langle \overline{\left(\nabla^{\R^{n+1}}\hspace{-1pt} d\right)^\perp}, \nabla_X^{\R^{n+1}}\hspace{-1pt} \overline{Y} \right\rangle,
\]
and then evaluating on $\Sigma$ to get:
\[
0=\left\langle \nabla_X^{\R^{n+1}}\hspace{-1pt}\overline{\left(\nabla^{\R^{n+1}}\hspace{-1pt} d\right)^\perp}, Y \right\rangle + \left\langle \left(\nabla^{\R^{n+1}}\hspace{-1pt} d\right)^\perp, A(X,Y)\right\rangle.
\]
Taking now the trace over $T_p\Sigma$ we see:
\begin{equation}
\Delta_\Sigma f = \tr_{\Sigma} \left(\Hess_{\R^{n+1}}\hspace{-1pt} d\right) + \left\langle \nabla^{\R^{n+1}}\hspace{-1pt} d, \boldsymbol{H}\right\rangle_{\R^{n+1}}\hspace{-1pt}.
\end{equation}
Here we used that the mean curvature vector is $\boldsymbol{H}:=\tr_\Sigma A = -H\nu$. Using now the self-translater equation $H=\langle e_{n+1},\nu\rangle$, we get:
\begin{equation}\label{Laplace-f}
\Delta_\Sigma f = \tr_{\Sigma} \left(\Hess_{\R^{n+1}}\hspace{-1pt} d\right) - \langle \nabla^{\R^{n+1}}\hspace{-1pt} d, \nu \rangle\langle e_{n+1},\nu\rangle.
\end{equation}
Combining (\ref{Laplace-f-one}) and (\ref{Laplace-f}) we finally have shown:
\begin{equation}\label{main_identity}
\Delta_\Sigma f=\frac{1-\|\nabla^\Sigma f\|^2}{d} - \big\langle \nabla^{\R^{n+1}}\hspace{-1pt} d, \nu \big\rangle\big\langle e_{n+1},\nu\big\rangle,\quad\mathrm{on}\quad \Sigma\cap \mathcal{V}_R.
\end{equation}
We will now apply the Omori-Yau principle in Lemma \ref{mementomori} to $f:\Sigma^n\to\mathbb{R}$, so we get a sequence of points $\{p_k\}$ on $\Sigma^n$ with the Omori-Yau properties (\ref{OY_prop1})-(\ref{OY_prop3}). To see that the Omori-Yau principle indeed applies here, we check that all the conditions in Lemma \ref{mementomori} hold. By construction $0<\sup_{\Sigma} f<\infty$, $f\in C^0(\Sigma)$ and $f$ is $C^2$ where relevant. Recall also that since by (\ref{f-bounds}) we know $\sup_\Sigma f > R $, and as $f|_{\Sigma\setminus \mathcal{V}_R}\leq R$ (note also that in principle $\Sigma\setminus \mathcal{V}_R=\emptyset$ is possible), we may assume that all $p_k\in \Sigma\cap \mathcal{V}_R$.
To proceed we now need to analyze the last ``perturbation term'' in (\ref{main_identity}), which came from the self-translater equation. Notice first that by the triangle inequality
\begin{equation}\label{SqueezeThatTerm}
\big|\big\langle e_{n+1},\nu\big\rangle\big|\leq \big|\big\langle e_{n+1},\nabla^{\R^{n+1}}\hspace{-1pt} d\big\rangle\big| + \big|\big\langle e_{n+1},\nu-\nabla^{\R^{n+1}}\hspace{-1pt} d\big\rangle\big|\leq\big\|\nu-\nabla^{\R^{n+1}}\hspace{-1pt} d\big\|,
\end{equation}
using also the fact that $\langle e_{n+1},\nabla^{\R^{n+1}}\hspace{-1pt} d\rangle=0$ and finally applying the Cauchy-Schwarz inequality.
We know from the property (\ref{OY_prop2}) combined with Equation (\ref{dot_to_zero}) that the limit
\begin{equation}\label{junk_limit}
\big|\big\langle \nabla^{\R^{n+1}}\hspace{-1pt} d, \nu \big\rangle\big|(p_k)\to 1,\quad\mathrm{as}\quad k\to\infty.
\end{equation}
holds, so from a certain stage on, the inner product has a definite sign at each point. By the pigeonhole principle, there must then exist a sign $\sigma_\infty\in\{-1,1\}$ and a subsequence of points such that $\langle \nabla^{\R^{n+1}}\hspace{-1pt} d, \nu \rangle \to \sigma_\infty$. So, by flipping the orientation $\nu \leftrightarrow -\nu$ if necessary (a symmetry of the self-translater equation), we may assume that $0<\langle \nabla^{\R^{n+1}}\hspace{-1pt} d, \nu \rangle \to 1$ along the sequence of points. This also leads to:
\begin{equation}
\big\|\nu(p_k)-\nabla^{\R^{n+1}}\hspace{-1pt} d_R(p_k)\big\|_{\R^{n+1}}^2 = 2\left[1 - \langle \nabla^{\R^{n+1}}\hspace{-1pt} d, \nu\rangle\right]\to 0.
\end{equation}
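Here the identity is simply the expansion of the square, using that both $\nu$ and $\nabla^{\R^{n+1}}\hspace{-1pt} d_R$ are unit vectors:
\[
\big\|\nu-\nabla^{\R^{n+1}}\hspace{-1pt} d_R\big\|_{\R^{n+1}}^2 = \|\nu\|^2 + \big\|\nabla^{\R^{n+1}}\hspace{-1pt} d_R\big\|^2 - 2\big\langle \nabla^{\R^{n+1}}\hspace{-1pt} d_R,\nu\big\rangle = 2\left[1-\big\langle \nabla^{\R^{n+1}}\hspace{-1pt} d_R,\nu\big\rangle\right].
\]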
In consequence, we can use (\ref{SqueezeThatTerm}) to conclude that:
\begin{equation}\label{JunkTerm}
\left|\left\langle e_{n+1},\nu\right\rangle\right|(p_k)\to 0.
\end{equation}
Now, from (\ref{JunkTerm}) with either (\ref{junk_limit}) or simply $|\langle \nabla^{\R^{n+1}}\hspace{-1pt} d, \nu \rangle|\leq 1$, the last term in (\ref{Laplace-f}) tends to zero. Going to the limit in (\ref{main_identity}), we thus conclude that the limits exist in the following relation:
\begin{equation}\label{laplace_contradict}
\lim_{k\to\infty}\Delta_\Sigma f (p_k)= \lim_{k\to\infty}\frac{1}{d(p_k)} \geq \frac{\xi}{R} > 0,
\end{equation}
using again $0<\xi<1$.
This violates Property (\ref{OY_prop3}) in the Omori-Yau maximum principle of Lemma \ref{mementomori}, namely that $\lim_{k\to\infty} \Delta_\Sigma f(p_k)\leq 0$. This contradiction concludes the proof that there cannot exist any such self-translater.
\end{proof}
\begin{proof}[Proof of Theorem \ref{bi-halfspace_boundary}]
To proceed in the case of compact nonempty boundary, we will again assume that $H_1$ and $H_2$ are as in the proof of the ``Bi-Halfspace'' Theorem \ref{bi-halfspace}, while we now allow $(\Sigma^n,\partial\Sigma)$ to be complete with compact boundary and still properly immersed. We furthermore assume that $\Sigma^n$ is connected. For every $R>0$, let $\mathscr{L}_R$, $\mathscr{D}_R$ and $d = d_R$ be as in the proof of Theorem \ref{bi-halfspace}. Recall that $\mathcal{V}_R$ denotes the connected component of $\left( H_1 \cap H_2 \right)\setminus \mathscr{D}_R$ on which $d$ is bounded. Let again $f$ be the function defined in \eqref{definition_f}.
Note that since $\partial \Sigma$ is compact, we can pick $R>0$ large enough so that $\partial \Sigma \subseteq \mathcal{V}_R$.
We will now, for contradiction, assume that $(\Sigma,\partial \Sigma)$ is not compact. We will distinguish between two different cases and finally see that each of them leads to a contradiction.
\begin{itemize}
\item \textbf{Case (a)}: $\Sigma\cap \mathcal{V}_R$ is bounded in $\R^{n+1}$ for every $R > 0$.
\item \textbf{Case (b)}: There exists $R > 0$ s.t. $\Sigma\cap \mathcal{V}_R$ is unbounded in $\R^{n+1}$.
\end{itemize}
\noindent\textbf{Proof for Case (a)}:
By the definition of $\mathscr{D}_R$, we can fix $R>0$ large enough so that
\begin{equation}\label{distance_between_components}
\dist\left( \partial \Sigma, \mathscr{D}_R \right) > \pi.
\end{equation}
Since $\mathscr{D}_R\subseteq\R^{n+1}$ has compact vertical projection, there exists an open vertical slab $S\subseteq\R^{n+1}$ between two parallel vertical hyperplanes at distance $\pi$ separating $\partial \Sigma$ and $\mathscr{D}_R$. More precisely, we can arrange that $\partial \Sigma$ and $\mathscr{D}_R$ are contained in two different connected components of $\mathbb{R}^{n+1} \setminus \overline{S}$. Let now $\Gamma^n:=\Gamma\times\mathbb{R}^{n-1} \subseteq S$ be a grim reaper cylinder. Let us consider the family $\{ \Gamma^n_s \}_{s \in \mathbb{R}}$ defined via $\Gamma^n_s \coloneqq \Gamma^n + s e_{n+1}$. Note that $\cup_{s\in\mathbb{R}}\Gamma^n_s=S$.
Since in the present case, $\Sigma^n$ is assumed noncompact and hence unbounded (using that it is properly immersed), while $\Sigma\cap \mathcal{V}_R$ is assumed bounded, we surely have $\Sigma\setminus \mathcal{V}_R\neq\emptyset$ regardless of how large we take $R>0$. Seeing as $\Sigma^n$ is connected, we therefore conclude that $\Sigma\cap S\neq \emptyset$. Therefore there also exists $s\in\mathbb{R}$ small enough so that $\left( \Sigma\cap \mathcal{V}_R \right) \cap \Gamma^n_s\neq\emptyset$.
On the other hand, since $\Sigma\cap \mathcal{V}_R$ is assumed bounded, for $s\in\mathbb{R}$ large enough we have that $\left( \Sigma\cap \mathcal{V}_R \right) \cap \Gamma^n_s = \emptyset$. Because $\Gamma^n$ is properly embedded, and since $\Sigma\cap \mathcal{V}_R$ is assumed bounded, there exists an extremal value $s_0$:
$$
s_0 \coloneqq \sup \{ s \in \mathbb{R} \colon \left( \Sigma\cap \mathcal{V}_R \right) \cap \Gamma^n_s \ne \emptyset \}<\infty.
$$
By compactness of $\overline{\Sigma\cap \mathcal{V}_R}$, hence of $\overline{\Sigma\cap S}$, and since $\Sigma$ is properly immersed, this $s_0$ is attained at some $p_0 \in \left( \Sigma\cap \mathcal{V}_R \right) \cap \Gamma^n_{s_0}$, where we note that $p_0 \in \overline{S}$. Therefore $p_0$ is a point of $\Sigma\cap \mathcal{V}_R$ which is interior relative to $\Sigma$. We can therefore apply Separating Tangency from Lemma \ref{tangency_principle}, which by completeness, connectedness and compactness of the boundary implies that $\Sigma$ and $\Gamma\times\mathbb{R}^{n-1}$ coincide outside some ambient ball, contradicting e.g. the assumption that $\Sigma\subseteq H_1\cap H_2$ (or the boundedness of $\Sigma\cap \mathcal{V}_R$).
\noindent\textbf{Proof for Case (b)}: Let us summarize how we will now fix the setup throughout the rest of the proof: $R>0$ will be taken large enough so that $\partial \Sigma\subseteq \mathcal{V}_R$ and, as we are in Case (b), also taken so large that $\Sigma\cap \mathcal{V}_R$ is unbounded (in particular nonempty).
The proof of Theorem \ref{bi-halfspace} might not work here, because it could be that the function $f$ approaches its supremum only by attaining it on the boundary $\partial \Sigma$. Therefore the idea is to modify $f$ in a suitable way, so that the supremum of the new function is guaranteed to not be attained on $\partial \Sigma$ and also in such a way that the argument in the proof of the ``Bi-Halfspace'' Theorem~\ref{bi-halfspace} still goes through. The resulting argument, using the noncompactness to our advantage, is what we call an ``adiabatic trick'' since it involves tuning a certain length scale as slowly as needed together with estimates for the PDE.
To begin, recall that in the present case, $\Sigma\cap \mathcal{V}_R$ is now assumed to be an unbounded subset of $\R^{n+1}$, so the extrinsic distance to $0\in\R^{n+1}$ is an unbounded function on $\Sigma\cap \mathcal{V}_R$:
\begin{equation}\label{p-unbounded}
\sup_{p\in\Sigma\cap \mathcal{V}_R}\|p\|_{\R^{n+1}}\hspace{-1pt}=\infty.
\end{equation}
Since $\partial \Sigma$ is compact, there exists a radius $\rho >0$ large enough so that $\partial \Sigma \subseteq B_\rho(0) = \{x \in \mathbb{R}^{n+1} \colon \|x\|_{\R^{n+1}}\hspace{-1pt} \le \rho \}$. For every length scale $\ell > \rho > 0$ (which we soon plan to take as large as needed), let us define the $C^\infty(\R^{n+1})$ function $\chi_{\ell} \colon \mathbb{R}^{n+1} \to \mathbb{R}$ by
\begin{equation}
\chi_\ell (x) = \psi(\|x\|/\ell),
\end{equation}
where $\psi:[0,\infty)\to\mathbb{R}$ is a standard $C^\infty$ monotone increasing cut-off function $0\leq\psi\leq 1$ such that $\psi|_{[0,1]} \equiv 0$ while $\psi|_{[2,\infty)} \equiv 1$. Thus since $\ell >\rho>0$ we have that $\chi_\ell$ vanishes inside the ball $B_\rho(0)$ and therefore also on $\partial \Sigma$. Furthermore, all ambient derivatives of $\chi_\ell$ are uniformly bounded with upper bounds depending only on $\ell$ (and of course $\psi$, which we fix once and for all):
\begin{equation}\label{bound_chi_ell}
\sup_{x\in\R^{n+1}}\left\|\nabla^{\mathbb{R}^{n+1}}\chi_\ell(x)\right\|_{\R^{n+1}} \leq \frac{C}{\ell}\quad \text{ and } \quad \sup_{x\in\R^{n+1}}\left|\Delta_{\mathbb{R}^{n+1}}\chi_\ell(x)\right| \leq\frac{C}{\ell^2} .
\end{equation}
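These bounds follow from the chain rule: away from the support of $\psi'(\|x\|/\ell)$ and $\psi''(\|x\|/\ell)$ both quantities vanish, while on it we have $\|x\|\geq\ell$ and
\begin{align*}
\nabla^{\mathbb{R}^{n+1}}\chi_\ell(x) &= \frac{\psi'(\|x\|/\ell)}{\ell}\,\frac{x}{\|x\|},\\
\Delta_{\mathbb{R}^{n+1}}\chi_\ell(x) &= \frac{\psi''(\|x\|/\ell)}{\ell^2} + \frac{\psi'(\|x\|/\ell)}{\ell}\,\frac{n}{\|x\|},
\end{align*}
so that both terms in the Laplacian are bounded by $C/\ell^2$, with $C$ depending only on $\psi$.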
For every $\ell > \rho$, let us define the new function $f_\ell \colon \Sigma^n \to \mathbb{R}$ as follows. With $f$ as in Equation \eqref{definition_f} let $M \coloneqq \sup_{\Sigma} f$ and define:
\begin{equation}
f_\ell \left( p \right) \coloneqq f(p) + M \chi_\ell \left( p\right),\quad p\in\Sigma.
\end{equation}
Note that the continuity and smoothness of $f_\ell$ are no worse than of $f$. Recall from (\ref{f-bounds}) that $f\leq \fracsm{R}{\xi}$ so that $f_\ell$ is also bounded:
\begin{equation}
\sup_\Sigma f_\ell \leq \fracsm{R}{\xi} + M <\infty.
\end{equation}
Also, since $f > R$ on $\Sigma\cap \mathcal{V}_R$, we have by (\ref{p-unbounded}) and by the fact that $\chi_\ell|_{\R^{n+1}\setminus B_{2\ell}(0)}=1$:
\begin{equation}\label{avoid_M}
\forall \ell>\rho:\:\max_{\partial \Sigma} f_\ell \leq M <R + M<\sup_{\Sigma} f_\ell=\sup_{\Sigma\cap \mathcal{V}_R} f_\ell,
\end{equation}
using for the first inequality that $\chi_\ell|_{\partial\Sigma} =0$ and for the last equality that $\sup_{\Sigma\setminus \mathcal{V}_R} f_\ell \leq R+M$. Thus we can now for each $\ell>\rho$ apply the Omori-Yau argument as in the proof of the ``Bi-Halfspace'' Theorem \ref{bi-halfspace} to the function $f_{\ell}$, this time in the boundary version, now that we have by (\ref{avoid_M}) verified condition (i) in Lemma \ref{mementomori}.
Suppose now that there exists $\ell_0 > \rho$ such that there is at least one Omori-Yau sequence $p_k\in\Sigma\cap \mathcal{V}_R$ for $f_{\ell_0}:\Sigma\to\mathbb{R}$ with the property that $\|p_k\|_{\R^{n+1}}\hspace{-1pt}\to \infty$. Since $\chi_{\ell_0}$ is constant outside a compact subset of $\R^{n+1}$, we see that $\Delta_\Sigma f(p_k) = \Delta_\Sigma f_{\ell_0}(p_k)$ and $\nabla^\Sigma f(p_k) = \nabla^\Sigma f_{\ell_0}(p_k)$ for all sufficiently large values of $k$, so that the argument in (\ref{laplace_contradict}) from the case without boundary applies.
Assume now conversely that for every $\ell > \rho$, none of the Omori-Yau sequences has unbounded Euclidean norm. In consequence, $f_{\ell}$ attains its maximum at some point $q_\ell\in\Sigma\cap \mathcal{V}_R\setminus \partial \Sigma$, so that $f_\ell(q_\ell) = \sup_{\Sigma\cap \mathcal{V}_R} f_\ell$. Note that then in fact $\|q_\ell\|\geq \ell$ must be the case, as follows from Equation (\ref{avoid_M}): inside $B_\ell(0)$ we have $\chi_\ell = 0$, so $\sup_{B_\ell(0)} f_\ell \leq M < \sup_{\Sigma} f_\ell$, and thus the maximum must be attained outside of $B_\ell(0)$.
Now we do analysis on the sequence of maximum points $\{q_\ell\}$. By criticality we have $\nabla^{\Sigma} f_\ell (q_\ell) = 0$, so by \eqref{bound_chi_ell} and $\nabla^\Sigma \chi_\ell = \frac{1}{\ell}\psi'(\|p\|/\ell)\nabla^\Sigma \|p\|$:
\begin{equation}\label{surf_grad}
\left\|\nabla^{\Sigma} f (q_\ell) \right\| = \left\|\nabla^{\Sigma} f_\ell (q_\ell) - M\nabla^{\Sigma} \chi_\ell (q_\ell)\right\| = M\left\| \nabla^{\Sigma} \chi_\ell (q_\ell)\right\| \leq \frac{CM}{\ell},
\end{equation}
where we also used
\begin{equation}\label{p-grad}
\left\|\nabla^\Sigma \|p\|\right\|=\Big\|\big( \nabla^{\mathbb{R}^{n+1}} \|p\| \big)^{\top}\Big\| \leq\big\|\nabla^{\mathbb{R}^{n+1}} \|p\|\big\|=1.
\end{equation}
As for estimating the Laplacian, we can compute:
\begin{align*}
\Delta_{\Sigma}\|p\| &= \di_{\Sigma} \left( \nabla^\Sigma \|p\| \right) \\
&= \di_{\Sigma} \left( \left( \nabla^{\mathbb{R}^{n+1}} \|p\| \right)^{\top} \right) \\
&= \di_{\Sigma} \left( \nabla^{\mathbb{R}^{n+1}}\|p\| - \left( \nabla^{\mathbb{R}^{n+1}}\|p\| \right)^{\perp} \right) \\
&= \tr_{\Sigma}\left(\Hess_{\R^{n+1}}\hspace{-1pt}\|p\|\right) + \left\langle \nabla^{\mathbb{R}^{n+1}}\|p\|, \boldsymbol{H} \right\rangle.
\end{align*}
The eigenvalues of $\Hess_{\R^{n+1}}\hspace{-1pt}\|p\|$ are $1/\|p\|$ (with multiplicity $n$) and $0$, so that $0\leq\tr_{\Sigma}\left(\Hess_{\R^{n+1}}\hspace{-1pt}\|p\|\right)\leq n/\|p\|$. Therefore, since $\Sigma$ is a self-translater and hence $\|\boldsymbol{H}\|=|H|\leq 1$, we get by Cauchy-Schwarz:
\begin{equation}\label{laplacian_distance_on_sigma}
\left|\Delta_{\Sigma}\|p\|\right| \le \frac{n}{\|p\|} + 1, \quad p\in\Sigma.
\end{equation}
We thus get, using (\ref{p-grad}) and (\ref{laplacian_distance_on_sigma}) with $\|q_\ell\|\geq \ell$ :
\begin{equation}
|\Delta_\Sigma \chi_\ell(q_\ell)| \leq \left[\frac{\psi'(\|p\|/\ell)}{\ell}|\Delta_\Sigma \|p\|| + \frac{|\psi''|(\|p\|/\ell)}{\ell^2}\|\nabla^\Sigma \|p\|\|^2\right]_{\mid q_\ell}
\leq \frac{C'}{\ell}.
\end{equation}
Thus, since $\Delta_\Sigma f_\ell (q_\ell)\leq 0$ we get:
\begin{equation}\label{lim_laplace}
\lim_{\ell\to\infty} \Delta_\Sigma f(q_\ell) = \lim_{\ell\to\infty} \Delta_\Sigma f_\ell(q_\ell)-M\lim_{\ell\to\infty} \Delta_\Sigma \chi_\ell(q_\ell)\leq 0.
\end{equation}
Therefore, by (\ref{surf_grad}) and (\ref{lim_laplace}), we can plug the sequence of maximum points $\{q_\ell\}$ directly into the same identity (\ref{main_identity}) derived in the course of the proof of the ``Bi-Halfspace'' Theorem \ref{bi-halfspace} for the $\partial\Sigma=\emptyset$ case, in order to get a contradiction.
Since, both in Case (a) and in Case (b), we have thus reached a contradiction, we conclude that the hypersurface $(\Sigma,\partial \Sigma)$ must in fact be compact.
\end{proof}
The following corollary completes the picture given by the ``Bi-Halfspace'' Theorem \ref{bi-halfspace}, providing a complete characterization of the pairs of halfspaces whose intersection contains a properly immersed self-translater. In particular it shows that the ``Bi-Halfspace'' Theorem \ref{bi-halfspace} no longer holds if we drop the assumption that the halfspaces are vertical.
\begin{corollary}\label{bi-halfspace_corollary}
Let $w_1, w_2 \in \mathbb{S}^n$
and let $H_1 \coloneqq H_{(0, w_1)}$ and $H_2 \coloneqq H_{(0, w_2)}$.
Then there exists a properly immersed self-translater without boundary contained in $H_1 \cap H_2$ if and only if one of the following conditions holds.
\begin{enumerate}
\item $\langle w_1, e_{n+1}\rangle > 0$ and $\langle w_2, e_{n+1} \rangle > 0$;
\item $\langle w_1, e_{n+1}\rangle > 0$ and $\langle w_2, e_{n+1} \rangle = 0$;
\item $\langle w_1, e_{n+1}\rangle = 0$ and $\langle w_2, e_{n+1} \rangle > 0$;
\item $\langle w_1, e_{n+1}\rangle = \langle w_2, e_{n+1} \rangle =0 $ and $w_1 \parallel w_2$.
\end{enumerate}
\end{corollary}
\begin{proof}
Let us first assume that none of the conditions $(1)$, $(2)$, $(3)$ and $(4)$ are satisfied. This means that either $\langle w_1, e_{n+1}\rangle = \langle w_2, e_{n+1} \rangle = 0$ and $w_1 \nparallel w_2$, or one of the two scalar products is strictly negative. In the first case, we know from the ``Bi-Halfspace'' Theorem \ref{bi-halfspace} that there cannot be properly immersed self-translaters contained in $H_1 \cap H_2$.
Let us assume that one of the two scalar products is strictly negative, say $\langle w_1, e_{n+1} \rangle < 0$.
Then we claim that $H_1$ cannot contain any properly immersed self-translater. This in particular implies that $H_1 \cap H_2$ does not contain any properly immersed self-translater. Indeed, assume by contradiction that there exists a properly immersed self-translater $\Sigma^n \subseteq H_1$.
Then one easily finds a contradiction by using Lemma~\ref{comparison_principle} and comparing the time evolution of $\Sigma^n$ with the evolution of some suitably large sphere lying in $\mathbb{R}^{n+1} \setminus H_1$.
Let us now check that if any of $(1)$, $(2)$, $(3)$ or $(4)$ hold, then there exists a properly immersed self-translater contained in $H_1 \cap H_2$.
If $(1)$ holds, then consider for instance the bowl self-translater $U$. Since $U$ is asymptotic to a paraboloid at infinity, it is clear that, up to a translation in the $e_{n+1}$ direction, $U \subseteq H_1 \cap H_2$.
Let us now assume that $(2)$ or $(3)$ hold. Without loss of generality, we can assume $H_1 = \{ x_1 \ge 0\}$ and $\langle w_2, e_{n+1}\rangle >0$. Since we are assuming $\langle w_2, e_{n+1}\rangle >0$, we have that $P_2 \coloneqq \partial H_2$ is the graph of an affine function $f$ defined over $\{x_{n+1} = 0\}$. More precisely, let $w_2 = (w_{2, 1}, \dots, w_{2, n}, w_{2, n+1})$. Then $f$ is defined as
$$
f(x_1, \dots, x_n) \coloneqq - \frac{x_1 w_{2,1} + x_2 w_{2, 2} + \dots + x_n w_{2,n}}{w_{2,n+1}}.
$$
For any $L >0$, let us define the slab $S_L \coloneqq (0, L) \times \mathbb{R}^{n-1}$. Note that on $S_L$ the function $f|_{S_L}$ is bounded from above by the function
$$
g_L(x_1, \dots, x_n) \coloneqq L\frac{|w_{2,1}|}{w_{2,n+1}} - \frac{x_2 w_{2,2} + \dots + x_n w_{2,n}}{w_{2,n+1}}
$$
and clearly $\nabla g_L = \frac{1}{w_{2,{n+1}}} (0, w_{2,2}, \dots, w_{2,n})$. Note that $\nabla g_L$ does not depend on $L$.
Now take $L$ large enough so that there exists a tilted grim reaper cylinder $\Sigma$ which is the graph of a function defined on $S_L$, growing linearly in the direction of $\nabla g_L$ and with the same slope as $g_L$ (for a detailed description of tilted grim reaper cylinders, see \cite{gama_martin} and \cite{blt18}). Then, since $\Sigma$ is the graph of a function which is strictly convex w.r.t. the first variable $x_1$, it can be chosen in such a way that it lies above the graph of $g_L$ and, in particular, inside $H_2$. Moreover, by construction, $\Sigma$ is also contained in $H_1$.
If $(4)$ holds, then observe that $P \coloneqq \partial H_1 = \partial H_2 $ is a translater contained in $H_1 \cap H_2$.
\end{proof}
\section{On the Convex Hulls of Self-Translaters}\label{sec:basics}
In this section we want to study the convex hulls of self-translaters. We will derive a sort of ``convex hull property'' for compact self-translaters and then we will discuss the classification of the convex hulls of (possibly noncompact) self-translaters with compact boundary, proving Theorem \ref{convex_hull_noncomp_trans}. Those two results have been inspired by the theory of classical minimal submanifolds of the Euclidean space. They both show that, up to projecting onto the hyperplane $\mathbb{R}^n \times \{0\}$, the convex hull of a self-translater behaves quite similarly to the convex hull of a minimal submanifold of $\mathbb{R}^{n+1}$.
\subsection{Convex Hulls of Compact Self-Translaters}
The first lemma is a well-known fact about self-translaters and can be proved in several different ways, but, at least to our knowledge, they are all based on some version of the maximum principle. For the sake of completeness we include a proof, close in spirit to an argument given in \cite{pyo}.
\begin{lemma}\label{lemma_boundary}
Let $(\Sigma^n,\partial\Sigma) $ be a compact $e_{n+1}$-directed self-translater in $\R^{n+1}$.
Then $\partial \Sigma \ne \emptyset$ and
$$
\max_{\overline{\Sigma}} x_{n+1} = \max_{\partial\Sigma}x_{n+1}.
$$
\end{lemma}
\begin{proof}
Recall that given a function $f \in C^1(\mathbb{R}^{n+1})$, the gradient $\nabla^{\Sigma} f|_{\Sigma}$ is given by
\begin{equation}\label{equation_rest_gradient}
\nabla^{\Sigma} f|_{\Sigma} = \left( \nabla f \right)^{\top},
\end{equation}
where $\left( \nabla f \right)^{\top}$ is the projection of $\nabla f $ on the tangent bundle of $\Sigma$.
If we apply \eqref{equation_rest_gradient} to the coordinate function $x_{n+1}$, we get
\begin{equation}
\nabla^{\Sigma} x_{n+1} = \boldsymbol{e}_{n+1}^\top.
\end{equation}
Let $\boldsymbol{E}_1, \dots, \boldsymbol{E}_n$ be an orthonormal frame on $\Sigma$ and let $\boldsymbol{\nu}$ be a unit normal vector field.
Then, using \eqref{translater_equation}, we have
\begin{align*}
\Delta_{\Sigma} x_{n+1}
&= \di_{\Sigma}(\boldsymbol{e}^{\top}_{n+1}) = \di_{\Sigma}(\boldsymbol{e}_{n+1} - \boldsymbol{e}_{n+1}^\perp) \\
&= -\sum_{j=1}^n \langle \nabla_{\boldsymbol{E}_j}\langle \boldsymbol{e}_{n+1}, \boldsymbol{\nu} \rangle \boldsymbol{\nu}, \boldsymbol{E}_j \rangle \\
&= - \langle \boldsymbol{e}_{n+1}, \boldsymbol{\nu}\rangle \sum_{j=1}^n \langle \nabla_{\boldsymbol{E}_j} \boldsymbol{\nu}, \boldsymbol{E}_j \rangle \\
&= H^2.
\end{align*}
Therefore $x_{n+1}$ is a subharmonic function on $\Sigma$, and hence by the strong maximum principle it cannot have any interior maximum point unless it is constant. But if $x_{n+1}$ were constant, then $\Sigma$ would be contained in a horizontal hyperplane, contradicting the self-translater equation (as then $H = 0$ while $\langle \boldsymbol{e}_{n+1}, \boldsymbol{\nu} \rangle = \pm 1$). Since $\Sigma$ is compact, $x_{n+1}$ attains its maximum; hence $\partial \Sigma \ne \emptyset$ and the maximum is attained on the boundary.
\end{proof}
Now let us show a new ``convex hull'' property for self-translaters, in the same spirit as the classical one for minimal hypersurfaces. Let us first remind the reader of the minimal hypersurface case.
\begin{proposition}(See e.g. Proposition 1.9 in \cite{cm-min}). \label{convex_hull_cm}
If $\Sigma^n\subseteq \mathbb{R}^{n+1}$ is a compact minimal hypersurface with boundary, then $\Sigma\subseteq\conv(\partial\Sigma)$, where $\conv(\partial \Sigma)$ is the convex hull of $\partial \Sigma \subseteq \mathbb{R}^{n+1}$.
\end{proposition}
Read verbatim, such a statement is false for self-translaters, as seen e.g. by taking the (compact) pieces of the Altschuler-Wu bowl solution below planes perpendicular to $e_{n+1}$. Nonetheless, we do have the following modified version. We will denote by $\pi \colon \mathbb{R}^{n+1} \to \mathbb{R}^n$ the standard orthogonal projection $\pi(x_1, \dots, x_n, x_{n+1}) \coloneqq (x_1, \dots, x_n)$.
\begin{proposition}\label{prop_convex_hull}
Let $\Sigma^n \subseteq \mathbb{R}^{n+1}$ be a compact $e_{n+1}$-directed self-translater with boundary $\partial \Sigma\neq \emptyset$.
Then
$$
\Sigma \subseteq \conv\left( \pi\left( \partial \Sigma \right)\right) \times (-\infty, \max_{\partial \Sigma} x_{n+1}],
$$
where $\conv\left(\pi\left( \partial \Sigma \right)\right)$ is the convex hull of $\pi(\partial \Sigma) \subseteq \mathbb{R}^n$.
\end{proposition}
\begin{proof}
Let $\tilde{\mathbb{R}}^{n+1} \coloneqq \left( \R^{n+1}, e^{\frac{2}{n}x_{n+1}}\delta_{ij}\right) = \left( \mathbb{R}^{n+1}, \tilde{h} \right)$ be the so-called Huisken-Ilmanen space. It plays an important role due to the following well-known correspondence: $\Sigma^n \subseteq \mathbb{R}^{n+1}$ is a unit speed self-translating hypersurface in the $x_{n+1}$-direction if and only if $\Sigma$ is a minimal submanifold of $\tilde{\mathbb{R}}^{n+1}$. See for instance \cite{sha} for a proof in the case $n = 2$ or \cite{jpg} for the general case.
Observe that given a function $f \in C^1 \left( \mathbb{R}^{n+1} \right)$, the gradient $\tilde{\nabla} f $ of $f$ w.r.t. the metric $\tilde{h}$ is given by
\begin{equation}\label{formula_conformal_gradient}
\tilde{\nabla} f = e^{- \frac{2}{n} x_{n+1} } \nabla f.
\end{equation}
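For completeness, note that \eqref{formula_conformal_gradient} follows directly from the defining property of the gradient: for every vector field $X$ on $\mathbb{R}^{n+1}$,
\begin{align*}
\tilde{h}\left( \tilde{\nabla} f, X \right) = df(X) = h\left( \nabla f, X \right) = e^{-\frac{2}{n}x_{n+1}} \tilde{h}\left( \nabla f, X \right),
\end{align*}
so that indeed $\tilde{\nabla} f = e^{-\frac{2}{n}x_{n+1}} \nabla f$.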
We can now compute $\Delta_{\tilde{\Sigma}}x_j$, for $j = 1, \dots, n$, using \eqref{formula_conformal_gradient} and \eqref{equation_rest_gradient}.
\begin{align*}
\Delta_{\tilde{\Sigma}} x_{j}
&= \di_{\tilde{\Sigma}} \left( \nabla^{\tilde{\Sigma}} x_j \right) \\
&= \di_{\tilde{\Sigma}} \left( \left( \tilde{\nabla} x_j \right)^{\top} \right)\\
&= \di_{\tilde{\Sigma}} \left( e^{-\frac{2}{n}x_{n+1}} \boldsymbol{e}_{j}^{\top} \right) \\
&= -\frac{2}{n} e^{-\frac{2}{n}x_{n+1}} \tilde{h} \left( \nabla^{\tilde{\Sigma}}x_{n+1}, \boldsymbol{e}^{\top}_j \right) + e^{-\frac{2}{n}x_{n+1}} \di_{\tilde{\Sigma}} \left( \boldsymbol{e}^{\top}_j \right) \\
&= - \frac{2}{n} \tilde{h} \left( \nabla^{\tilde{\Sigma}} x_{n+1}, \nabla^{\tilde{\Sigma}} x_j \right) + e^{-\frac{2}{n} x_{n+1}} \di_{\tilde{\Sigma}}\left( \boldsymbol{e}_j \right).
\end{align*}
Note that $ \di_{\tilde{\Sigma}}\left( \boldsymbol{e}_j^\top \right) = \di_{\tilde{\Sigma}}\left( \boldsymbol{e}_j \right)$ because $\tilde{\Sigma}$ is minimal in $\tilde{\mathbb{R}}^{n+1}$. Moreover note that $\di_{\tilde{\Sigma}}\left( \boldsymbol{e}_j \right) = 0$ since $\boldsymbol{e}_j$ is a Killing field on $\tilde{\mathbb{R}}^{n+1}$, for every $j = 1, \dots, n$. Indeed, let $\mathcal{L}$ denote the Lie derivative. Then, since $\boldsymbol{e}_j(x_{n+1}) = 0$ for $j = 1, \dots, n$, we have
\begin{equation}\label{eq_lie_derivative}
\mathcal{L}_{e_j} \tilde{h} = \mathcal{L}_{e_j} \left( e^{\frac{2}{n}x_{n+1}} h\right) = e^{\frac{2}{n}x_{n+1}} \mathcal{L}_{e_j} h = 0.
\end{equation}
Therefore for each $j = 1, \dots, n$, the coordinate function $x_j$ satisfies the following linear elliptic PDE:
$$
\Delta_{\tilde{\Sigma}} x_{j} + \frac{2}{n} \tilde{h} \left( \nabla^{\tilde{\Sigma}} x_{n+1}, \nabla^{\tilde{\Sigma}} x_j \right) = 0,\quad j=1,\ldots, n.
$$
From the maximum principle we have that each $x_j$, for $j=1,\ldots,n$, attains its maximum and minimum on $\partial \Sigma$. This, together with Lemma \ref{lemma_boundary}, concludes the proof.
\end{proof}
\begin{remark}
Observe that for the proof of Proposition \ref{prop_convex_hull} one could alternatively have proven by contradiction that $x_j$, for $j=1,\ldots,n$, has no interior maxima or minima, using Lemma \ref{tangency_principle} and comparing with vertical translating planes. This is not surprising, since the Principle of Separating Tangency is another manifestation of the strong maximum principle for quasilinear elliptic equations.
Note also that the argument works only for the coordinates $x_i$ with $i=1,\ldots, n$: one could not use $x_{n+1}$ in Proposition \ref{prop_convex_hull}, since the computation analogous to \eqref{eq_lie_derivative} performed for $e_{n+1}$ shows that $e_{n+1}$ is not a Killing field of $\tilde{\mathbb{R}}^{n+1}$.
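Explicitly, the computation for $e_{n+1}$ reads
\begin{align*}
\mathcal{L}_{e_{n+1}} \tilde{h} = \mathcal{L}_{e_{n+1}} \left( e^{\frac{2}{n}x_{n+1}} h \right) = \left( \partial_{x_{n+1}} e^{\frac{2}{n}x_{n+1}} \right) h + e^{\frac{2}{n}x_{n+1}} \mathcal{L}_{e_{n+1}} h = \frac{2}{n} \tilde{h} \ne 0,
\end{align*}
since $\mathcal{L}_{e_{n+1}} h = 0$.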
\end{remark}
The ``convex hull'' property immediately yields the following monotonicity of topology for compact self-translaters.
\begin{corollary}
Let $\Sigma^n \subseteq\mathbb{R}^{n+1}$ be a compact self-translater. Let $C \subseteq\mathbb{R}^n$ be a compact convex set such that $C \cap \pi\left( \partial \Sigma \right) = \emptyset$, where $\pi$ is the usual projection $\pi \colon (x_1, \dots, x_n, x_{n+1}) \to (x_1, \dots, x_n)$.
Then the inclusion map $ i \colon \left( C \times \mathbb{R}\right) \cap \Sigma \hookrightarrow \Sigma$ induces an injection on the $(n-1)$-st homology group.
\end{corollary}
\begin{proof}
The proof is very similar to the one of Lemma 1.11 in \cite{cm-min}.
\end{proof}
\subsection{Convex Hulls of Noncompact Self-Translaters}
Note that the results in the preceding section were all about compact self-translaters. We will now study the convex hull property in the noncompact case (Theorem~\ref{convex_hull_noncomp_trans}). Also, as mentioned in the introduction, this result was inspired by the classical result for minimal submanifolds in Euclidean space proved by Hoffman and Meeks in \cite{hoffman-meeks} that we recall here.
\begin{theorem}[Hoffman-Meeks: Theorem 3 in \cite{hoffman-meeks}]\label{hoffman_meeks_theorem}
Let $\Sigma^n \subseteq \mathbb{R}^{n+1}$ be a properly immersed connected minimal submanifold whose (possibly empty) boundary $\partial \Sigma$ is compact.
Then exactly one of the following holds:
\begin{enumerate}
\item $\conv(\Sigma) = \mathbb{R}^{n+1}$,
\item $\conv(\Sigma)$ is a halfspace,
\item $\conv(\Sigma)$ is a closed slab between two parallel hyperplanes,
\item $\conv(\Sigma)$ is a hyperplane,
\item $\conv(\Sigma)$ is a compact convex set. This case occurs precisely when $\Sigma$ is compact.
\end{enumerate}
Moreover, when $n = 2$, $\partial \Sigma$ has nonempty intersection with each boundary component of $\conv(\Sigma)$.
\end{theorem}
Recall again that from the known examples (see Section \ref{sec:prelims}), we cannot hope to have the same characterization of the convex hulls of self-translaters. But we can characterize the convex hull of the projection onto the hyperplane $\mathbb{R}^n \times \{0\}$. This is the content of Theorem \ref{convex_hull_noncomp_trans} and the proof is based on the ``Bi-Halfspace'' Theorem \ref{bi-halfspace}.
\begin{remark}
Note that the last statement of Theorem \ref{hoffman_meeks_theorem}, which follows from the Halfspace Theorem (Theorem 1 in \cite{hoffman-meeks}), does not have a straightforward equivalent in the context of self-translaters. Indeed it is natural to ask if it is true or not that given a connected, properly immersed, $2$-dimensional self-translater $\Sigma^2 \subseteq \mathbb{R}^3$ with compact boundary, $\pi \left( \partial \Sigma\right)$ has nonempty intersection with each topological boundary component of $\conv \left( \pi\left( \Sigma \right)\right)$. The answer is negative. Indeed one can easily build a counterexample by taking as $\Sigma$ a grim reaper cylinder with a compact set removed.
\end{remark}
Before giving the proof of Theorem \ref{convex_hull_noncomp_trans}, let us first prove the following simple characterizations of compact self-translaters.
\begin{lemma}[Characterization of Compact Self-Translaters]\label{lemma_ceiling}
Let $(\Sigma^n,\partial\Sigma)$ be a properly immersed, connected self-translater with compact boundary. Then the following are equivalent.
\begin{enumerate}
\item $\Sigma$ is compact.
\item $\sup_{\Sigma} x_{n+1} < \infty$.
\item $\Sigma$ is contained in a cylinder of the kind $K \times \mathbb{R}$, where $K \subseteq \mathbb{R}^n$ is a compact set.
\end{enumerate}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemma_ceiling}]
$(1) \Rightarrow (2)$. If $\Sigma$ is compact, then clearly $\sup_{\Sigma} x_{n+1} < \infty$.
$(2) \Rightarrow (3)$. Let us assume that $\sup_{\Sigma} x_{n+1} < \infty$. Let $R > 0$ be a radius large enough such that $\pi \left( \partial \Sigma \right) \subseteq B_R(0)$, where $B_R(0)$ is the ball of radius $R>0$ in $\mathbb{R}^{n} \times \{0\}$, centered at $0$.
Let us consider the wing-like self-translaters $W_R$ from \cite{CSS}, which we translate so that $\inf_{p\in W_R} x_{n+1}(p)=0$. Let us define the one-parameter family of wing-like self-translaters $\{W_{R, s} \}_{s \in \mathbb{R}}$, where $W_{R, s} \coloneqq W_R + s\, e_{n+1}$. Clearly we have that
\begin{equation}\label{intersection_wing_sigma}
W_{R, s} \cap \Sigma = \emptyset,
\end{equation}
for every $s >\sup_\Sigma x_{n+1}$.
Assume by contradiction that there exists $s \in \mathbb{R}$ such that $W_{R, s} \cap \Sigma \ne \emptyset$.
Since $\Sigma$ is properly immersed, there exists
$$
s_0 \coloneqq \max \{ s \in \mathbb{R} \colon W_{R, s} \cap \Sigma \ne \emptyset \}.
$$
This leads to a contradiction, thanks to Lemma \ref{tangency_principle}.
Therefore \eqref{intersection_wing_sigma} holds for every $s \in \mathbb{R}$ and thus $\Sigma$ is contained in the cylinder $B_R(0) \times \mathbb{R}$.
$(3) \Rightarrow (1)$ Let us assume that $\Sigma \subseteq K \times \mathbb{R}$, for some compact set $K \subseteq \mathbb{R}^{n}$.
Let us assume by contradiction that $\Sigma$ is not compact. This implies that $\sup_{\Sigma}x_{n+1} = \infty$ or $\inf_{\Sigma}x_{n+1} = -\infty$. Let us consider the first case (the other case is similar).
Since $\partial \Sigma$ is compact, we can assume w.l.o.g. that $\partial \Sigma \subseteq \{x_{n+1} \le -1\}$. For every $R >0$, let $W_{R, 0}$ be the wing-like self-translater with neck size $R>0$ and such that $\min_{W_{R, 0}} x_{n+1} = 0$. Let us consider the family $\{W_{R, 0} \}_{R> 0}$. Note the difference from the family of wing-like self-translaters above: now the ``height'' is fixed and $R>0$ is a parameter.
Observe that $W_{R, 0} \cap \left( K \times \mathbb{R}\right) = \emptyset $ for $R>0$ large enough. Therefore $W_{R, 0} \cap \Sigma = \emptyset$, for $R>0$ large enough. On the other hand, since $\Sigma$ is connected and since $\sup_{\Sigma}x_{n+1} = \infty$, there exists $r > 0$ small enough such that $W_{r, 0} \cap \Sigma \ne \emptyset $. Since $\Sigma$ is properly immersed, there exists
$$
r_{0} \coloneqq \max \{ r > 0 \colon W_{r, 0} \cap \Sigma \ne \emptyset \}.
$$
Note that since $\partial \Sigma \subseteq \{x_{n+1} \le -1\}$ every point in the intersection $W_{r_0, 0} \cap \Sigma$ is an interior point. This contradicts Lemma \ref{tangency_principle}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{convex_hull_noncomp_trans}]\label{general_proof}
First of all, observe that the ``if and only if'' part in Theorem \ref{convex_hull_noncomp_trans}'s Case (5) follows directly from Lemma \ref{lemma_ceiling}.
Take $\Sigma^n\subseteq\R^{n+1}$ possibly with compact boundary $\partial \Sigma$. The vertical projection of the convex hull of $\Sigma^n$, or equivalently convex hull of the vertical projection, can be written as the intersection of all vertical halfspaces in $\R^{n+1}$ which contain it:
\begin{equation}\label{vertichull}
\conv (\pi( \Sigma))\quad =\quad\bigcap_{\left\{H:\: \Sigma\subseteq H \:\mathrm{vertical\:halfspace\:of\:}\R^{n+1}\right\}} \hspace{-52pt}\pi(H)\quad\quad\quad\subseteq \R^{n}.
\end{equation}
If the index set is empty we get $\conv (\pi( \Sigma)) =\R^{n}$ and arrive at Case (1). So, we assume now that this is not the case.
We will now deduce that in the intersection (\ref{vertichull}) all the involved halfspaces $H\subseteq\R^{n+1}$, and hence all the $\pi(H)\subseteq\R^{n}$, are in fact (anti-)parallel halfspaces, unless we are in Case (5). Namely, let $H_1$ and $H_2$ be any two vertical closed halfspaces of $\mathbb{R}^{n+1}$, i.e. such that $P_1 \coloneqq \partial H_1$ and $P_2 \coloneqq \partial H_2$ are two hyperplanes both containing $e_{n+1}$, and with $\Sigma^n \subseteq H_1 \cap H_2$. Then if $H_1$ and $H_2$ were not (anti-)parallel, the compact boundary version of the ``Bi-Halfspace'' Theorem \ref{bi-halfspace_boundary} would imply that $\Sigma^n$ is compact (and note that necessarily $\partial \Sigma\neq\emptyset$ too), so that we would arrive at Case (5).
We may thus finally assume that we are not in Case (1) nor in Case (5). Since all vertical halfspaces in $\R^{n+1}$ which contain $\Sigma^n$ are then mutually (anti-)parallel, so are all the $(n-1)$-dimensional hyperplanes $\pi(H)$ in $\R^{n}$ and the intersection in (\ref{vertichull}) is now easy to evaluate: One of the Cases (2), (3) or (4) must occur. This concludes the proof of Theorem \ref{convex_hull_noncomp_trans}.
\end{proof}
\begin{remark}
Even though Theorem \ref{convex_hull_noncomp_trans} was inspired by Theorem \ref{hoffman_meeks_theorem}, our proof is quite different from the original proof of Hoffman and Meeks in \cite{hoffman-meeks}.
First of all, observe that the ``if and only if'' of point $(5)$ in Theorem \ref{hoffman_meeks_theorem} is trivial, but one implication of the ``if and only if'' of point $(5)$ in Theorem \ref{convex_hull_noncomp_trans} is not completely obvious.
But the most important difference is that the proof of Hoffman and Meeks is an elaborate application of the maximum principle for the nonlinear minimal hypersurface equation, while our proof is based on the Omori-Yau maximum principle.
In the Appendix \ref{appendix} we provide an alternative proof of Theorem \ref{convex_hull_noncomp_trans} in the case $n=2$ which is based on Lemma \ref{tangency_principle} and it is closer in spirit to the original proof of Hoffman and Meeks. We also explain why it is hard to extend it to higher dimension.
\end{remark}
\section{Appendix}\label{appendix}
In this appendix we present an alternative proof of Theorem \ref{convex_hull_noncomp_trans}, which works only in the case $n=2$.
Before presenting the proof, let us recall the following simple property about winglike self-translaters.
\begin{lemma}\label{auxiliary_lemma}
Let $R > 0$ and let $W_R \subseteq \mathbb{R}^{n+1}$ be the wing-like self-translater as in \cite{CSS} and \cite{niels}. Let us denote by $R^*> R$ the radius at which the coordinate function $x_{n+1}$ attains its minimum on $W_R$.
Then
$$
R^* - R \le \frac{\pi}{2}.
$$
\end{lemma}
\begin{proof}
The proof of this lemma is contained in the proof of Lemma 2.1 in \cite{niels}.
\end{proof}
\begin{proof}[Proof of the $2$-dimensional version of Theorem \ref{convex_hull_noncomp_trans}] Let $\Sigma^2 \subseteq \mathbb{R}^3$ be a properly immersed self-translater with compact boundary $\partial \Sigma$.
Assume that Cases $(1)$, $(4)$ and $(5)$ of the theorem do not occur. We want to show that then Case $(2)$ or Case $(3)$ must occur.
Let $H_1$ and $H_2$ be two closed halfspaces (here: halfplanes) in $\mathbb{R}^2$ such that $\conv (\pi( \Sigma)) \subseteq H_1 \cap H_2$. Let $P_1 \coloneqq \partial H_1$ and $P_2 \coloneqq \partial H_2$. In order to show that case $(2)$ or case $(3)$ must occur, it is sufficient to show that the lines $P_1$ and $P_2$ are parallel.
Let us assume by contradiction that $P_1$ and $P_2$ are not parallel.
The idea is to show that $\Sigma$ must be then contained in a halfspace of the kind $\{x_{3} \le K \}$ for $K$ large enough. This will contradict Lemma \ref{lemma_ceiling}.
Let us consider $\tilde{H}_1 \coloneqq \pi^{-1} \left( H_1 \right) = H_1 \times \mathbb{R}$ and $\tilde{H}_2 \coloneqq \pi^{-1} \left( H_2 \right) = H_2 \times \mathbb{R}$. Note that $\tilde{H}_1$ and $\tilde{H}_2$ are closed halfspaces of $\mathbb{R}^{3}$ and $\Sigma \subseteq \tilde{H}_1 \cap \tilde{H}_2$. Moreover we will denote $\tilde{P}_1 \coloneqq \pi^{-1}\left( P_1\right) = P_1 \times \mathbb{R}$ and $\tilde{P}_2 \coloneqq \pi^{-1}\left( P_2\right) = P_2 \times \mathbb{R}$. Note that $\tilde{P}_1$ and $\tilde{P}_2$ are affine planes in $\mathbb{R}^{3}$, both parallel to the $x_{3}$-axis. Without loss of generality, we may assume that $\tilde{P}_1 \cap \tilde{P}_2$ is the $x_{3}$-axis.
From Lemma \ref{tangency_principle}, since $\tilde{P}_1$ and $\tilde{P}_2$ are both self-translaters, $\Sigma$ does not have any interior point in common with them, i.e. $\left( \Sigma \setminus \partial \Sigma\right) \cap \left( \tilde{P}_1 \cup \tilde{P}_2\right) = \emptyset$.
For every $R>0$, let $S_R \subseteq H_1 \cap H_2 \subseteq \mathbb{R}^2$ be the unique circle of radius $R>0$ tangent to $P_1$ and $P_2$ and let $p_R \in H_1 \cap H_2$ be the center of $S_R$. Moreover let $\bar{B}_R(p_R)$ be the closed ball of center $p_R$ and radius $R>0$. Observe that since $S_R$ is tangent to $P_1$ and $P_2$, $\left( H_1 \cap H_2\right) \setminus \bar{B}_R(p_R)$ consists of two connected regions, one bounded and the other one unbounded. Let us denote by $A_R$ the closure of the bounded region. Observe that
$$
\lim_{R \searrow 0} \diam A_R = 0.
$$
For each $R>0$, let $W_{R}$ be the wing-like self-translater which is rotationally symmetric around $\{p_R\} \times \mathbb{R}$, with $\min_{W_{R}} x_{3} = 0$ and with $R>0$ the aperture of the ``hole''. Moreover, let $R^*$ be the radius as in Lemma \ref{auxiliary_lemma}, i.e. $x_{3} = 0$ on the circle $S_{R^*}(p_R)$ of radius $R^*$ centered at $p_R$. Let us define
$$
\tilde{W}_{R} \coloneqq W_{R} \cap \left( A_R \times \mathbb{R} \right).
$$
It is easy to check that $\tilde{W}_{R} \subseteq \tilde{H}_1 \cap \tilde{H}_2 $ is compact and $\partial \tilde{W}_{R} \subseteq \tilde{P}_1 \cup \tilde{P}_2$.
Since $\partial \Sigma $ is compact, up to a translation in the $x_{3}$-direction, we can assume $\partial \Sigma \subseteq \{x_{3} \le - 1 \}$.
Moreover, since $\Sigma$ is properly immersed, we have that there exists $r>0$ small enough, such that
$$
\tilde{W}_{r} \cap \Sigma = \emptyset.
$$
Consider the $1$-parameter family $\{\tilde{W}_{R}\}_{R>0}$. Using Lemma \ref{tangency_principle} and a standard argument, we have that $\tilde{W}_{R} \cap \Sigma = \emptyset $ for every $R >0$.
From Lemma \ref{auxiliary_lemma}, we have that $S_{R^*}(p_R) \cap A_R \ne \emptyset$, for every $R>0$ such that $\dist(p_R, 0) > \frac{\pi}{2}$. Moreover the family of compact sets $\{S_{R^*}(p_R) \cap A_R \}_{R>0}$ sweeps out the whole plane $\mathbb{R}^2 \times \{0\}$, i.e.
$$
\bigcup_{R>0} S_{R^*}(p_R) \cap A_R = \mathbb{R}^2 \times \{0\}.
$$
Therefore we have that
\begin{equation}\label{1.8.18}
\Sigma \subseteq \{x_{3} \le 0\}.
\end{equation}
Recall that $\Sigma$ is not compact, because we are assuming that $(1)$, $(4)$ and $(5)$ do not hold. This yields a contradiction: from \eqref{1.8.18} and Lemma \ref{lemma_ceiling}, $\Sigma$ must be compact.
Therefore we showed that if $(1), (4)$ and $(5)$ do not hold, then $(2)$ or $(3)$ must occur.
\end{proof}
Observe that the above proof is quite similar to the proof in \cite{hoffman-meeks}, but it works only for $n=2$. Indeed note that it is not possible to naively generalize the above proof to higher dimension. The problem is that it is not possible to define the set $A_R$. Indeed let us assume that $n \ge 3$ and let $H_1$ and $H_2$ be halfspaces of $\mathbb{R}^n$ as in the proof above, and let $P_1$ and $P_2$ be their boundaries respectively. Then let $B$ be a closed ball such that $S = \partial B$ is tangent both to $P_1$ and to $P_2$ and such that $B \subseteq H_1 \cap H_2$. Then $\left( H_1 \cap H_2 \right) \setminus B $ is connected. Therefore the argument of the proof above does not work.
However, with a straightforward generalization of the argument above, one can prove a weaker version of Theorem \ref{bi-halfspace_boundary}. More precisely, one can prove the following result.
\begin{theorem} Let $(\Sigma^n, \partial \Sigma)$ be a properly immersed connected self-translating n-dimensional hypersurface in $\mathbb{R}^{n+1}$. Let $\mathcal{C} \subseteq \mathbb{R}^n$ be a half-cone, i.e.
$$
\mathcal{C} = \{ x \in \mathbb{R}^n \colon \mathrm{angle}(x , w) < \alpha \}
$$
for some $w \in \mathbb{S}^{n-1}$ and some angle $\alpha \in (0, \frac{\pi}{2})$.
If $\Sigma^n \subseteq \mathcal{C} \times \mathbb{R}$, then $\Sigma$ must be compact.
\end{theorem}
\begin{remark}
The proof of Hoffman and Meeks works in any dimension because they used as barriers solutions of a Dirichlet problem for the minimal hypersurface equation.
Indeed it is known that for every bounded, convex, $C^2$ domain $\Omega \subseteq \mathbb{R}^n$, and for every $\varphi \in C^0\left( \partial \Omega \right)$ there exists a solution $u \in C^2\left( \Omega \right) \cap C^0\left( \bar{\Omega} \right)$ of the following Dirichlet problem:
\begin{equation}
\begin{cases}
\di \left( \frac{D u }{\sqrt{1 + |Du|^2}} \right) = 0 \qquad &\text{in } \Omega \\
u|_{\partial \Omega} = \varphi \qquad &\text{on } \partial \Omega.
\end{cases}
\end{equation}
For more details, see Section 16.3 in \cite{gt}.
In our case we would have needed to solve a Dirichlet problem of the kind \eqref{side_trans_dirichlet}. Indeed it is easy to verify that a self-translater which is graphical w.r.t. a direction orthogonal to the moving direction $e_{n+1}$ is the graph of a function satisfying the PDE below in \eqref{side_trans_dirichlet}. Unfortunately in this case there is no general existence result, even assuming the initial data to be smooth. See Proposition \ref{prop_counter_example_dirichlet} below. Therefore we first resorted to building barriers carefully from the known family of wing-like self-translaters, the drawback being that this procedure only works in the case $n=2$, as we already explained. This motivated us to look for a different approach and led us to the proof of the ``Bi-Halfspace'' Theorems \ref{bi-halfspace}--\ref{bi-halfspace_boundary} and consequently to the proof of Theorem \ref{convex_hull_noncomp_trans}, as presented in the main parts (see Section \ref{general_proof}) of this paper.
\end{remark}
\begin{proposition}\label{prop_counter_example_dirichlet}
There exists $\Omega \subseteq \mathbb{R}^n$ bounded, convex with smooth boundary $\partial \Omega$ and there exists $\varphi \in C^{\infty}\left( \partial \Omega \right)$ such that there exists no function $u \in C^2\left( \Omega \right) \cap C\left( \bar{\Omega} \right)$, $u = u(y_1, \dots, y_n)$, satisfying the following Dirichlet problem:
\begin{equation}\label{side_trans_dirichlet}
\begin{cases}
\di\left( \frac{Du}{\sqrt{1 + |Du|^2}} \right) = \frac{u_{y_1}}{\sqrt{1 + |Du|^2}} \qquad &\text{ in } \Omega \\
u|_{\partial \Omega} = \varphi \qquad & \text{ on } \partial \Omega
\end{cases}
\end{equation}
\end{proposition}
\begin{figure}
\centering
\begin{tikzpicture}[ scale = 0.45]
\draw[thick, scale=1,domain=-3:3,smooth,variable=\y,] plot ({\y*\y},{\y});
\draw[ scale=1,domain=-3:3,smooth,variable=\y,] plot ({\y*\y +8},{\y});
\draw[ yscale=2] (17,0) circle (1.5);
\draw[ scale=1,domain=-3:3,smooth,variable=\y,] plot ({\y*\y - 8},{\y});
\draw[ yscale=2] (1,0) circle (1.5);
\draw[thick, yscale=2] (9,0) circle (1.5);
\draw[thick, rotate around={45:(5,0)},red] (5,-0.63) ellipse (20pt and 89pt);
\draw [thick] (3.17, 1.65) -- (3.17, -5);
\draw [thick ] (7.72, -2.5) -- (7.72, -5);
\draw[thick, red, xscale=2, yscale=0.5, fill=gray!20] (2.73,-10) circle (1.13);
\draw[thick] (-2, -6) -- (10, -6) -- ( 12, -4) -- (0,-4) -- (-2,-6);
\draw[thick, ->] (14, -5) -- (16, -5) ;
\draw [] ( 2, -5 ) node [anchor = center]{$\Omega$};
\draw [] ( 5, 3) node [anchor = center]{$U_0$};
\draw [] (13, 3) node [anchor = center]{$U_t$};
\draw [] (-3, 3) node [anchor = center]{$U_t$};
\draw [] ( 15.5, -5 ) node [anchor = north]{$e_{n+1}$};
\draw [] ( -3, -5 ) node [anchor = north]{$Q$};
\draw [] ( 6, 0) node [anchor = south]{$\Gamma$};
\end{tikzpicture}
\caption{}
\label{counter_example_dirichlet}
\end{figure}
\begin{proof}
Let $U \subseteq \mathbb{R}^{n+1}$ be the bowl self-translater. Let $P$ be an affine hyperplane of $\mathbb{R}^{n+1}$ which is neither parallel nor orthogonal to $e_{n+1}$. Let $Q$ be another hyperplane parallel to $e_{n+1}$ and such that $P$ is graphical over $Q$.
Let $\Gamma \coloneqq U \cap P$. Observe that, up to translating $P$ in the direction of $e_{n+1}$, we can assume $\Gamma \ne \emptyset$. Moreover, we can take $P$ such that $\Gamma = \partial U_\Gamma$, where $U_\Gamma \subseteq U$ is a bounded subset of $U$ which is not graphical over $Q$.
Let $\pi_Q \colon \mathbb{R}^{n+1} \to Q$ be the orthogonal projection onto $Q$.
Since $U$ is a convex hypersurface, we have that $\pi_Q \left( \Gamma \right) $ is the boundary of some bounded convex domain $\Omega \subseteq Q$ (see Figure \ref{counter_example_dirichlet}). Since $P$ is graphical over $Q$, we have that $\Gamma$ is the graph of some function $\phi \colon \partial \Omega \to \mathbb{R}$.
Let $y_1, \dots, y_n$ be Cartesian coordinates on $Q$ such that the coordinate $y_1$ coincides with $ x_{n+1}$.
Now assume by contradiction that there exists a solution $u$ for the Dirichlet problem \eqref{side_trans_dirichlet}.
Therefore $\graph\left( u \right) $ is a compact self-translater with unit velocity $e_{n+1}$ and with boundary $\Gamma$.
Now for every $t \in \mathbb{R}$ define $U_t \coloneqq U + t e_{n+1}$. Observe that the family $\{U_t\}_{t\in \mathbb{R}}$ foliates $\mathbb{R}^{n+1}$.
Since $\graph \left( u \right)$ is compact and each $U_t$ is properly immersed, there exist
$$
t_{\min} \coloneqq \min \{ t \in \mathbb{R} \colon U_t \cap \graph \left( u \right) \ne \emptyset \}
$$
and
$$
t_{\max} \coloneqq \max \{ t \in \mathbb{R} \colon U_t \cap \graph \left( u \right) \ne \emptyset \}.
$$
If $t_{\min} < 0$, then every point $p \in U_{t_{\min}} \cap \graph \left( u \right)$ would be an interior point of $\graph\left( u\right)$. From Lemma \ref{tangency_principle}, we would have that $\graph\left( u\right) \subseteq U_{t_{\min}}$, and therefore $\Gamma =\partial \left( \graph\left( u\right)\right) \subseteq U_{t_{\min}}$. But this is a contradiction because $\Gamma \subseteq U_0 = U$. Therefore $t_{\min} = 0$.
With a similar argument one can show that $t_{\max} = 0$. Therefore $\graph\left( u\right) = U_\Gamma \subseteq U_0$. But this is a contradiction, because $U_\Gamma$ is not graphical by construction.
\end{proof}
Throughout the paper, let $G$ be an additively written finite cyclic group of order $|G|=n$. By a sequence over $G$ we mean a finite sequence of terms from $G$ which is unordered and repetition of terms is allowed. We view sequences over $G$ as elements of the free abelian monoid $\mathcal{F}(G)$ and use multiplicative notation. Thus a sequence $S$ of length $|S|=k$ is written in the form $S=(n_1g)\cdot...\cdot(n_kg)$, where $n_1,\cdots,n_k\in\mathbb{N}$ and $g\in G$. We call $S$ a {\it zero-sum sequence} if $\sum_{j=1}^kn_jg=0$. If $S$ is a zero-sum sequence, but no proper nontrivial subsequence of $S$ has sum zero, then $S$ is called a {\it minimal zero-sum sequence}. Recall that the index of a sequence $S$ over $G$ is defined as follows.
\begin{defi}
For a sequence over $G$
\begin{eqnarray*} S=(n_1g)\cdot...\cdot(n_kg), &&\hbox{where}\;1\leq n_1,\cdots,n_k\leq n,\end{eqnarray*}
the index of $S$ is defined by $\hbox{\rm ind}(S)=\min\{\|S\|_g|g\in G \hbox{~with~}\langle g\rangle=G\}$, where
\begin{eqnarray*} \|S\|_g=\frac{n_1+\cdots+n_k}{\hbox{\rm ord}(g)}.\end{eqnarray*}
\end{defi}
Clearly, $S$ has sum zero if and only if $\hbox{\rm ind}(S)$ is an integer.
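To illustrate the definition with a small example (not taken from the literature), let $G$ be cyclic of order $n=7$ with generator $g$ and consider $S=(g)\cdot(g)\cdot(2g)\cdot(3g)$. One checks that no proper nontrivial subsequence of $S$ has sum zero, so $S$ is a minimal zero-sum sequence, and
\begin{eqnarray*} \|S\|_g=\frac{1+1+2+3}{7}=1.\end{eqnarray*}
Note that the choice of generator matters: for the generator $g'=2g$ we have $g=4g'$, $2g=g'$ and $3g=5g'$, whence $\|S\|_{g'}=\frac{4+4+1+5}{7}=2$. Since $\|S\|_{h}$ is a positive integer for every generator $h$ whenever $S$ has sum zero, it follows that $\hbox{\rm ind}(S)=1$.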
\begin{conj}
Let $G$ be a finite cyclic group such that $\gcd(|G|,6)=1$. Then every minimal zero-sum sequence $S$ over $G$ of length $|S|=4$ has $\hbox{\rm ind}(S)=1$.
\end{conj}
The index of a sequence is a crucial invariant in the investigation of (minimal) zero-sum
sequences (resp. of zero-sum free sequences) over cyclic groups. It was first addressed by
Kleitman-Lemke (in the conjecture [9, page 344]), then used as a key tool by Geroldinger ([6, page736]), and investigated by Gao [3] in a systematical way. Since then it has received a great
deal of attention (see for example [1, 2, 4, 7, 10, 11, 12, 13, 14, 15, 16, 17, 18]). A main focus of the investigation of the index is to determine minimal zero-sum sequences of index 1. If $S$ is a minimal zero-sum sequence of length $|S|$ such that $|S|\leq3$ or $|S|\geq\lfloor \frac{n}2\rfloor+2$, then $\hbox{\rm ind}(S)=1$ (see [1, 14, 16]). In contrast to that, it was shown that for each $k$ with $5\leq k\leq \lfloor \frac{n}2\rfloor+1$, there is a minimal zero-sum sequence $T$ of length $|T| = k$ with $\hbox{\rm ind}(T)\geq 2$ ([13, 15]) and that the same is true for $k = 4$ and $\gcd(n, 6)\not= 1$ ([13]). The remaining case leads to the above conjecture.
In [12], it was proved that Conjecture 1.2 holds true if $n$ is a prime power. In [11], it was proved that Conjecture 1.2 holds for $n=p_1^\alpha p_2^\beta$ ($p_1\not=p_2$), provided that the sequence contains an element $g$ of order $\hbox{\rm ord}(g)=n$. However, the general case is still open.
\begin{defi} Let $S=(n_1g)\cdot...\cdot(n_kg)$ be a minimal zero-sum sequence over $G$.
Then $S$ is called reduced if $(pn_1g)\cdot...\cdot(pn_kg)$ is not a minimal zero-sum sequence for any prime factor $p$ of $n$.
\end{defi}
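To see the definition at work, consider the following example (constructed here for illustration): let $n=25$ and $S=(g)\cdot(3g)\cdot(8g)\cdot(13g)$. Then $S$ is a minimal zero-sum sequence, since $1+3+8+13=25$ and no proper nontrivial subsequence has sum divisible by $25$. For the prime factor $p=5$ we obtain
\begin{eqnarray*} (5g)\cdot(15g)\cdot(40g)\cdot(65g)=(5g)\cdot(15g)\cdot(15g)\cdot(15g),\end{eqnarray*}
which is again a minimal zero-sum sequence, as $5+15+15+15=50$ and no proper nontrivial subsequence has sum divisible by $25$. Hence $S$ is not reduced. A sequence is reduced precisely when this phenomenon fails for every prime factor $p$ of $n$.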
In this paper, our main result is stated by the following theorem.
\begin{theo}
Let $G$ be a finite cyclic group such that $\gcd(|G|,6)=1$, and let $S=(n_1g)\cdot...\cdot(n_kg)$ be a minimal zero-sum sequence over $G$ with $\hbox{\rm ord}(g)=|G|$. If $S$ is reduced and at least one $n_i$ is coprime to $n$, then $\hbox{\rm ind}(S)=1$.
\end{theo}
It was mentioned in \cite{P} that Conjecture 1.2 has been confirmed computationally for $n\leq1000$. Hence, throughout the paper, we always assume that $n>1000$.
\section{Induction on prime decomposition of $n$}
Throughout, let $G$ be a cyclic group of order $|G|=n>1000$. Given real numbers $a,b\in\mathbb{R}$, we use $[a,b]=\{x\in\mathbb{Z}|a\leq x\leq b\}$ to denote the set of integers between $a$ and $b$. For $x\in\mathbb{Z}$, we denote by $|x|_n\in[1,n]$ the integer congruent to $x$ modulo $n$. Suppose that $n$ has a prime decomposition $n=p_1^{\mu_1}\cdots p_d^{\mu_d}$. Let $S=(x_1g)\cdot(x_2g)\cdot(x_3g)\cdot(x_4g)$ be a minimal zero-sum sequence over $G$ such that $\hbox{\rm ord}(g)=n=|G|$ and $1\leq x_1,x_2,x_3,x_4\leq n-1$. Then $x_1+x_2+x_3+x_4=\nu n$, where $1\leq\nu\leq 3$, since $0<x_1+x_2+x_3+x_4\leq 4(n-1)<4n$.
For convenience, we use the following symbols:
\begin{eqnarray*} {\mathcal T}=\{p_1,\cdots, p_d\},&& {\mathcal T}_i=\{p\in{\mathcal T}|p=\gcd(p,x_i)\},\;i=1,2,3,4.\end{eqnarray*}
\begin{theo}
If $S$ is reduced and $\gcd(x_1,x_2,x_3,x_4,n)=1$, then $|{\mathcal T}|\leq3$. Particularly, if $|{\mathcal T}|=3$, then after renumbering if necessary one of the following statements holds:
(A1) $\{\gcd(x_i,n)|i=1,2,3,4\}=\{p_1p_2,p_2,p_1p_3,p_3\}$.
(A2) $\{\gcd(x_i,n)|i=1,2,3,4\}=\{1,p_1,p_2,p_1p_2\}$.
(A3) $\gcd(x_i,n)=1$ for $i=1,2,3,4$.
(A4) $\gcd(x_1,n)=1,\gcd(x_2,n)=p_1p_2,\gcd(x_3,n)=p_1p_3,\gcd(x_4,n)=p_2p_3$.
\end{theo}
For the proof of this theorem, we need the following lemma.
\begin{lemma}
Suppose that $|{\mathcal T}|\geq3$, $p\in{\mathcal T}$ and $1\leq|px_i|_n\leq n-1$ for $i=1,2,3,4$. If for any $q\in{\mathcal T}$, $(qx_1g)\cdot(qx_2g)\cdot(qx_3g)\cdot(qx_4g)$ is not a minimal zero-sum sequence, then $n=p_1p_2p_3$ and one of (A1),
(A2), (A3) holds.
Particularly, we can assume that $x_1=1, \{\gcd(n,x_2),\gcd(n,x_3),\gcd(n,x_4)\}=\{p_1, p_2, p_1p_2\}$ for (A2), and $x_1=1, p_1p_2|(x_2+1),p_1p_3|(x_3+1),p_2p_3|(x_4+1)$ for (A3).
\end{lemma}
\begin{proof}
Since $(px_1g)\cdot(px_2g)\cdot(px_3g)\cdot(px_4g)$ is not a minimal zero-sum sequence, without loss of generality, we can assume that $|px_1|_n+|px_2|_n=n$ and $|px_3|_n+|px_4|_n=n$. We distinguish four cases.
{\bf Case 1.} $p\in{\mathcal T}_1\cap{\mathcal T}_2$.
For any $q\in{\mathcal T}_3$, $(qx_1g)\cdot(qx_2g)\cdot(qx_3g)\cdot(qx_4g)$ is a minimal zero-sum sequence, hence ${\mathcal T}_3=\emptyset$.
If $|{\mathcal T}_1|>2$, then there is $q\in{\mathcal T}_1$ such that $(qx_1g)\cdot(qx_2g)\cdot(qx_3g)\cdot(qx_4g)$ is a minimal zero-sum sequence, a contradiction.
If $|{\mathcal T}_1|=2<|{\mathcal T}|$, then there is $q\in{\mathcal T}_1$ such that $(qx_1g)\cdot(qx_2g)\cdot(qx_3g)\cdot(qx_4g)$ is a minimal zero-sum sequence, a contradiction.
If $|{\mathcal T}_1|=1, |{\mathcal T}|>2$, then there is $q\in{\mathcal T}$ such that $(qx_1g)\cdot(qx_2g)\cdot(qx_3g)\cdot(qx_4g)$ is a minimal zero-sum sequence, a contradiction.
{\bf Case 2.} $p\in{\mathcal T}_1\cap{\mathcal T}_3$.
We must have $\gcd(x_2,n)|\gcd(x_1,n)$ and $\gcd(x_4,n)|\gcd(x_3,n)$. If $|{\mathcal T}|\geq|{\mathcal T}_2\cup{\mathcal T}_3|+2$, then for $q\in{\mathcal T}\setminus({\mathcal T}_2\cup{\mathcal T}_3)$, $(qx_1g)\cdot(qx_2g)\cdot(qx_3g)\cdot(qx_4g)$ is a minimal zero-sum sequence, a contradiction. Since $|{\mathcal T}|\geq3$, we have ${\mathcal T}_2\cup{\mathcal T}_4\not=\emptyset$.
If $|{\mathcal T}_2\cup{\mathcal T}_4|\geq3$, then there is $q\in{\mathcal T}_2\cup{\mathcal T}_4$ such that $(qx_1g)\cdot(qx_2g)\cdot(qx_3g)\cdot(qx_4g)$ is a minimal zero-sum sequence, a contradiction.
If $|{\mathcal T}|=|{\mathcal T}_2\cup{\mathcal T}_3|+1$ and $|{\mathcal T}_2\cup{\mathcal T}_4|=2$, then for $q\in{\mathcal T}_2\cup{\mathcal T}_4$, $(qx_1g)\cdot(qx_2g)\cdot(qx_3g)\cdot(qx_4g)$ is a minimal zero-sum sequence, a contradiction.
If $|{\mathcal T}|=|{\mathcal T}_2\cup{\mathcal T}_3|$ and $|{\mathcal T}_2|=2$, then there is $q\in{\mathcal T}_2$ such that $(qx_1g)\cdot(qx_2g)\cdot(qx_3g)\cdot(qx_4g)$ is a minimal zero-sum sequence, a contradiction.
If $|{\mathcal T}|=|{\mathcal T}_2\cup{\mathcal T}_3|$ and $|{\mathcal T}_2|=|{\mathcal T}_4|=1$, then we can assume that ${\mathcal T}_1=\{p_1,p_2\}, {\mathcal T}_2 =\{p_2\}, {\mathcal T}_3=\{p_1,p_3\}, {\mathcal T}_4=\{p_3\}$. $|p_1x_1|_n+|p_1x_2|_n=n$ implies that $\mu_1=1$.
Since $(p_2x_1g)\cdot(p_2x_2g)\cdot(p_2x_3g)\cdot(p_2x_4g)$ is not a minimal zero-sum sequence, we can get $\mu_2=1$. Similarly, $\mu_3=1$.
Besides all of the above, we can assume ${\mathcal T}=\{p_1,p_2,p_3\}, {\mathcal T}_1=\{p_1,p_2\},{\mathcal T}_2=\{p_1\}, {\mathcal T}_3=\{p_2\}, {\mathcal T}_4=\emptyset$. Moreover, $p_2^{\mu_2}|(p_2x_1+p_2x_2)$ implies $\mu_2=1$. Similarly, $\mu_1=1$. If $\mu_3>1$, then it is easy to check that $(p_3x_1g)\cdot(p_3x_2g)\cdot(p_3x_3g)\cdot(p_3x_4g)$ is a minimal zero-sum sequence, a contradiction. Hence $\mu_3=1$ and $n=p_1p_2p_3$.
{\bf Case 3.} $p\in{\mathcal T}_1, p\not\in\cap_{i=2}^4{\mathcal T}_i$.
We must have $\gcd(x_2,n)|\gcd(x_1,n)$ and $\gcd(x_3,n)=\gcd(x_4,n)$.
If ${\mathcal T}\not={\mathcal T}_1\cup{\mathcal T}_3$ or $|{\mathcal T}_3|\geq2$, then for any $q\in{\mathcal T}_3$, $(qx_1g)\cdot(qx_2g)\cdot(qx_3g)\cdot(qx_4g)$ is a minimal zero-sum sequence.
Now let ${\mathcal T}={\mathcal T}_1\cup{\mathcal T}_3$ and $|{\mathcal T}_3|=1$. For any $q\in{\mathcal T}_2$, $(qx_1g)\cdot(qx_2g)\cdot(qx_3g)\cdot(qx_4g)$ is a minimal zero-sum sequence.
If $|{\mathcal T}_2|\geq2$, then there is $q\in{\mathcal T}_2$, such that $(qx_1g)\cdot(qx_2g)\cdot(qx_3g)\cdot(qx_4g)$ is a minimal zero-sum sequence.
{\bf Case 4.} $p\not\in\cup_{i=1}^4{\mathcal T}_i$.
We must have $\gcd(x_1,n)=\gcd(x_2,n)$ and $\gcd(x_3,n)=\gcd(x_4,n)$. For any $q\in{\mathcal T}_1$, it holds that $1\leq|qx_i|_n\leq n-1$ for $i=1,2,3,4$. If ${\mathcal T}_3$ is not empty, then $|qx_1|_n+|qx_3|_n=n$ or $|qx_2|_n+|qx_3|_n=n$. However, there is $q'\in{\mathcal T}_3$ such that $q'\nmid qx_1$, $q'\nmid qx_2$ and $q'\mid qx_3$, a contradiction. Repeating similar arguments, we infer that $|{\mathcal T}_1|+|{\mathcal T}_3|\leq1$.
If ${\mathcal T}_1=\{q\}$, then there is $p'\in{\mathcal T}\setminus{\mathcal T}_1$. Clearly, $1\leq|p'x_i|_n\leq n-1$ for $i=1,2,3,4$. Then $|p'x_1|_n+|p'x_3|_n=n$ or $|p'x_1|_n+|p'x_4|_n=n$. However, we have $q\mid p'x_1$, $q\nmid p'x_3$ and $q\nmid p'x_4$, a contradiction.
If ${\mathcal T}_1={\mathcal T}_3=\emptyset$, then there exist $p_1,p_2,p_3\in{\mathcal T}$ such that
\begin{eqnarray*} \gcd(x_1+x_2,n)=\frac{n}{p_1}=\gcd(x_3+x_4,n),\\
\gcd(x_1+x_3,n)=\frac{n}{p_2}=\gcd(x_2+x_4,n),\\
\gcd(x_1+x_4,n)=\frac{n}{p_3}=\gcd(x_2+x_3,n).\end{eqnarray*}
For any $q\in{\mathcal T}\setminus\{p_1,p_2,p_3\}$, $(qx_1g)\cdot(qx_2g)\cdot(qx_3g)\cdot(qx_4g)$ must be a minimal zero-sum sequence, hence ${\mathcal T}=\{p_1,p_2,p_3\}$.
If $\mu_1>1$, then $p_1|(x_1+x_2), p_1|(x_1+x_3), p_1|(x_2+x_3)$, and we infer that $p_1|\gcd(x_1,x_2,x_3)$, a contradiction. So $\mu_1=1$. Similarly, $\mu_2=\mu_3=1$.
\end{proof}
Now we can finish the proof of Theorem 2.1 via the following discussion:
(1) If $\cup_{i=1}^4{\mathcal T}_i\not={\mathcal T}$, then for any $p\in \cup_{i=1}^4{\mathcal T}_i$, we have
$1\leq|px_i|_n\leq n-1$ for $i=1,2,3,4$.
(2) If $\cup_{i=1}^4{\mathcal T}_i$ is empty and $d\geq 2$, then for any $p\in {\mathcal T}$, we have
$1\leq|px_i|_n\leq n-1$ for $i=1,2,3,4$.
For the above two cases, by Lemma 2.2, we can assume that $\cup_{i=1}^4{\mathcal T}_i={\mathcal T}$ and $d\geq 3$. Without loss of generality, we let $x_1,x_2,x_3,x_4$ be such that
$|{\mathcal T}_1|\leq|{\mathcal T}_2|\leq|{\mathcal T}_3|\leq|{\mathcal T}_4|$.
(3) If $|{\mathcal T}_3|\leq\frac{d}{2}$ and $|{\mathcal T}_4|<d$, then for any $p\in {\mathcal T}_4$, we have
$1\leq|px_i|_n\leq n-1$ for $i=1,2,3,4$.
(4) If $|{\mathcal T}_3|\leq\frac{d}{2}$ and $|{\mathcal T}_4|=d$, then there must be an index $1\leq k\leq d$ such that $p_k^{\mu_k}\nmid x_4$. Then for any $j\not=k$, we have
$1\leq|p_jx_i|_n\leq n-1$ for $i=1,2,3,4$.
Now we can assume that $|{\mathcal T}_3|>\frac{d}{2}$; then ${\mathcal T}_3\cap{\mathcal T}_4$ is nonempty (since $|{\mathcal T}_3|+|{\mathcal T}_4|\geq2|{\mathcal T}_3|>d=|{\mathcal T}|$).
(5) If $|{\mathcal T}_3\cap{\mathcal T}_4|\geq3$, there is $p\in{\mathcal T}_3\cap{\mathcal T}_4$ such that $1\leq|px_i|_n\leq n-1$ for $i=1,2,3,4$.
Clearly, there is $p\in{\mathcal T}_3\cap{\mathcal T}_4$ such that $1\leq|px_3|_n\leq n-1$ and $1\leq|px_4|_n\leq n-1$. If $n|px_2$, then for any $q(\not=p)\in{\mathcal T}_3\cap{\mathcal T}_4$, we have $q|x_2$, and then $q|x_1$, which contradicts $\gcd(x_1,x_2,x_3,x_4,n)=1$. Hence $1\leq|px_2|_n\leq n-1$. Similarly, $1\leq|px_1|_n\leq n-1$.
(6) If $|{\mathcal T}_3\cap{\mathcal T}_4|=2$ and there is $p_k\in {\mathcal T}_3\cap{\mathcal T}_4$ such that $\mu_k\geq2$, $p_k^{\mu_k}\mid\gcd(x_3,x_4)$, then we have $1\leq|p_kx_i|_n\leq n-1$ for $i=1,2,3,4$.
It can be shown by an argument similar to that in (5).
(7) If ${\mathcal T}_3\cap{\mathcal T}_4=\{p_k,p_l\}$ and $p_k^{\mu_k}\nmid x_3$, $p_l^{\mu_l}\nmid x_4$, then for any $j\not=k,l$, we have $1\leq|p_jx_i|_n\leq n-1$ for $i=1,2,3,4$.
(8) If ${\mathcal T}_3\cap{\mathcal T}_4=\{p_k,p_l\}$ and $p_k^{\mu_k}\nmid x_3$, $p_l^{\mu_l}\nmid x_3$, $p_k^{\mu_k}\mid x_4$, $p_l^{\mu_l}\mid x_4$, then we have $1\leq|p_kx_i|_n\leq n-1$ for $i=1,2,3,4$.
From (5),(6),(7),(8), we can assume that $|{\mathcal T}_3\cap{\mathcal T}_4|=1$; then $d$ is odd, for otherwise $|{\mathcal T}_4|\geq|{\mathcal T}_3|\geq\frac{d}{2}+1$, which implies $|{\mathcal T}_3\cap{\mathcal T}_4|\geq|{\mathcal T}_3|+|{\mathcal T}_4|-d\geq2$, a contradiction. Hence $|{\mathcal T}_3|=|{\mathcal T}_4|=\frac{d+1}{2}$. Without loss of generality, we let ${\mathcal T}_3\cap{\mathcal T}_4=\{p_d\}$.
(9) If $d>3$, then $|{\mathcal T}_1|\leq|{\mathcal T}_2|\leq|{\mathcal T}_3|\leq|{\mathcal T}_4|=\frac{d+1}{2}\leq d-2$, and hence $1\leq|p_dx_i|_n\leq n-1$ for $i=1,2,3,4$.
(10) If $d=3$ and $\mu_d>1$, then $1\leq|p_dx_i|_n\leq n-1$ for $i=1,2,3,4$.
(11) If $d=3$ and $|{\mathcal T}_2|\leq1$, then $1\leq|p_dx_i|_n\leq n-1$ for $i=1,2,3,4$.
(12) If $d=3$ and $|{\mathcal T}_2|=2$, then ${\mathcal T}_1$ is empty.
From all of the discussion above, we can assume that
${\mathcal T}_1=\emptyset, {\mathcal T}_2=\{p_1,p_2\}, {\mathcal T}_3=\{p_1,p_3\}, {\mathcal T}_4=\{p_2,p_3\}$ and $\mu_3=1$. Since $|{\mathcal T}_2|=|{\mathcal T}_3|=|{\mathcal T}_4|=2$, interchanging the positions of $x_2$ and $x_3$ and repeating case (10), we obtain $\mu_2=1$. Similarly, $\mu_1=1$. Hence we have $\gcd(x_1,n)=1,\gcd(x_2,n)=p_1p_2,\gcd(x_3,n)=p_1p_3,\gcd(x_4,n)=p_2p_3.$ This completes the proof of Theorem 2.1.
If $S$ contains at least one $x_i$ coprime to $n$, then $u=\gcd(x_1,x_2,x_3,x_4,n)=1$. For $|{\mathcal T}|<3$, Theorem 1.4 is proved by the results in [11] and [12]. Hence, in order to prove Theorem 1.4, it is sufficient to show the following Theorem 2.3:
\begin{theo}
Let $n=p_1p_2p_3$, where $p_1,p_2,p_3$ are three different primes, and $\gcd(n,6)=1$. Let $S=(x_1g)\cdot(x_2g)\cdot(x_3g)\cdot(x_4g)$ be a minimal zero-sum sequence over $G=\langle g\rangle$ such that $\hbox{\rm ord}(g)=n$, where
$(x_1,x_2,x_3,x_4)$ satisfies one of $(A2), (A3)$ and $(A4)$.
Then $\hbox{\rm ind}(S)=1$.
\end{theo}
Notice that, under each assumption of $(A2),(A3)$ and $(A4)$, we always assume that $(px_1g)\cdot(px_2g)\cdot(px_3g)\cdot(px_4g)$ is not a minimal zero-sum sequence for any $p\in{\mathcal T}$.
\section{Preliminaries for Theorem 2.3}
Let $S$ be the sequence as described in Theorem 2.3. Similar to Remark 2.1 of [11], we may always assume that
$x_1=1, 1+x_2+x_3+x_4=2n$ and $1<x_2<\frac{n}{2}<x_3\leq x_4<n-1$. Let $c=x_2,b=n-x_3,a=n-x_4$, then it is easy to show that the following proposition implies Theorem 2.3 under assumption $(A2),(A3)$ or $(A4)$.
\begin{proposition}
Let $n=p_1p_2p_3$, where $p_1,p_2,p_3$ are three different primes, and $\gcd(n,6)=1$. Let $S=(g)\cdot(cg)\cdot((n-b)g)\cdot((n-a)g)$ be a minimal zero-sum sequence over $G$ such that $\hbox{\rm ord}(g)=n$, where $1+c=a+b$, and
$(A2)$ $\{\gcd(c,n),\gcd(b,n), \gcd(a,n)\}=\{p_1, p_2, p_1p_2\}$.
$(A3)$ $\gcd(c+1,n)=p_1p_2$, $\gcd(b-1,n)=p_1p_3$, $\gcd(a-1,n)=p_2p_3$.
$(A4)$ $\gcd(c,n)=p_1p_2$, $\gcd(b,n)=p_1p_3$, $\gcd(a,n)=p_2p_3$.
{\noindent}Then $\hbox{\rm ind}(S)=1$.
\end{proposition}
\begin{remark}
Since $\gcd(n,6)=1$ and $(p_ig)\cdot(|p_ic|_ng)\cdot(|p_i(n-b)|_ng)\cdot(|p_i(n-a)|_ng)$ is not a minimal zero-sum sequence, we infer that $a\geq 36$ under $(A3)$, and $a\geq 35$ under $(A2)$ or $(A4)$.
\end{remark}
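For instance, under $(A3)$ and $(A4)$ the lower bounds in the above remark can be verified directly: since $\gcd(n,6)=1$, the two smallest primes that may divide $n$ are $5$ and $7$, so
\begin{eqnarray*} (A4):\;p_2p_3\mid a\;\Longrightarrow\; a\geq5\cdot7=35,\qquad (A3):\;p_2p_3\mid(a-1)\;\Longrightarrow\; a\geq35+1=36.\end{eqnarray*}
Under $(A2)$, the bound $a\geq35$ additionally uses the non-minimality of the sequences $(p_ig)\cdot(|p_ic|_ng)\cdot(|p_i(n-b)|_ng)\cdot(|p_i(n-a)|_ng)$.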
\begin{lemma}
Proposition 3.1 holds if and only if one of the following conditions holds:
(1) There exist positive integers $k,m$ such that $\frac{kn}{c}\leq m\leq\frac{kn}{b}$, $\gcd(m,n)=1$, $1\leq k\leq b$, and $ma<n$.
(2) There exists a positive integer $M\in[1,\frac{n}{2}]$ such that $\gcd(M,n)=1$ and at least two of the following inequalities hold:
$$|Ma|_n>\frac{n}{2}, |Mb|_n>\frac{n}{2}, |Mc|_n<\frac{n}{2}.$$
\end{lemma}
\begin{lemma}
If there exist integers $k$ and $m$ such that $\frac{kn}{c}\leq m\leq\frac{kn}{b}$, $\gcd(m,n)=1$, $1\leq k\leq b$, and $a\leq\frac{kn}{b}$, then Proposition 3.1 holds.
\end{lemma}
From now on, we set $s=\lfloor\frac{b}{a}\rfloor$. Then we have $1\leq s\leq\frac{b}{a}<s+1$. Since $b<\frac{n}{2}$, we have $\frac{n}{2b}=\frac{(2s-t)n}{2b}-\frac{(2s-t-1)n}{2b}>1$, and then $[\frac{(2s-t-1)n}{2b},\frac{(2s-t)n}{2b}]$ contains at least one integer for every $t\in[0,\cdots,s-1]$.
\begin{lemma}
Suppose $s\geq2$ and $[\frac{(2s-2t-1)n}{2b},\frac{(s-t)n}{b}]$ contains an integer co-prime to $n$ for some $t\in[0,\cdots,\lfloor\frac{s}2\rfloor-1]$. Then Proposition 3.1 holds.
\end{lemma}
For the proofs of Lemma 3.3, Lemma 3.4 and Lemma 3.5, the reader is referred to the proofs of Lemmas 2.3-2.5 in \cite{LP}; we omit them here.
Let $\Omega$ denote the following set of integers: $x\in \Omega$ if and only if $x\in [\frac{(2s-2t-1)n}{2b},\frac{(s-t)n}{b}]$ for some $t\in[0,\cdots,\lfloor\frac{s}2\rfloor-1]$. By Lemma 3.5, we may also assume that
$(B)$: $[\frac{(2s-2t-1)n}{2b},\frac{(s-t)n}{b}]$ contains no integers co-prime to $n$ for every $t\in[0,\cdots,\lfloor\frac{s}2\rfloor-1]$.
\begin{lemma}
If $s\geq2$ and $[\frac{(2s-2t-1)n}{2b},\frac{(s-t)n}{b}]$ contains no integers co-prime to $n$ for every $t\in[0,\cdots,\lfloor\frac{s}2\rfloor-1]$, then $[\frac{(2s-2t-1)n}{2b},\frac{(s-t)n}{b}]$ contains at most 3 integers for every $t\in[0,\cdots,\lfloor\frac{s}2\rfloor-1]$. Hence $\frac{n}{2b}<4$.
\end{lemma}
\begin{proof}
Suppose on the contrary that $[\frac{(2s-2t-1)n}{2b},\frac{(s-t)n}{b}]$ contains at least 4 integers for some $t\in[0,\cdots,\lfloor\frac{s}2\rfloor-1]$; then $x,x+1,x+2,x+3\in\Omega$ for some $x$. Since $n=p_1p_2p_3$ and $\gcd(n,6)=1$, at least one of these four consecutive integers is co-prime to $n$, which contradicts our assumption. Then this lemma holds.
\end{proof}
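The coprimality of one of four consecutive integers, used above, follows because every prime factor of $n$ is at least $5$: if $p_i\mid(x+j)$ and $p_i\mid(x+j')$ with $0\leq j<j'\leq3$, then
\begin{eqnarray*} p_i\mid(j'-j)\quad\hbox{with}\quad 0<j'-j\leq3<5\leq p_i,\end{eqnarray*}
which is impossible. Hence each $p_i$ divides at most one of $x,x+1,x+2,x+3$, and since $n$ has only three prime factors, at least one of the four integers is co-prime to $n$.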
\begin{lemma}
If $s\geq4$ and $[\frac{(2s-2t-1)n}{2b},\frac{(s-t)n}{b}]$ contains 3 integers for some $t\in[0,\cdots,\lfloor\frac{s}2\rfloor-1]$, then one of the following holds for some $x$:
(c1) $x,x+1,x+2,x+7,x+8,x+9\in\Omega$; (c2) $x,x+1,x+2,x+6,x+7,x+8\in\Omega$;
(c3) $x,x+1,x+2,x+5,x+6,x+7\in\Omega$; (c4) $x,x+1,x+2,x+6,x+7\in\Omega$;
(c5) $x,x+1,x+5,x+6,x+7\in\Omega$; (c6) $x,x+1,x+2,x+5,x+6\in\Omega$;
(c7) $x,x+1,x+4,x+5,x+6\in\Omega$.
\end{lemma}
\begin{proof}
Since $s\geq4$, we can consider $t=0$ and $t=1$. Because $[\frac{(2s-2t-1)n}{2b},\frac{(s-t)n}{b}]$ contains exactly three integers for $t=0$ or $t=1$, we have $1\leq \frac{n}{2b}<3$, and then (c1)-(c7) are all possible cases of the integers contained in $[\frac{(2s-2t-1)n}{2b},\frac{(s-t)n}{b}]$ for $t=0,1$.
\end{proof}
\begin{lemma}
Suppose $s\geq4$ and $[\frac{(2s-2t-1)n}{2b},\frac{(s-t)n}{b}]$ contains no integers co-prime to $n$ for every $t\in[0,\cdots,\lfloor\frac{s}2\rfloor-1]$. Then $[\frac{(2s-2t-1)n}{2b},\frac{(s-t)n}{b}]$ contains at most two integers for every $t\in[0,\cdots,\lfloor\frac{s}2\rfloor-1]$ and $\frac{n}{2b}<3$.
\end{lemma}
\begin{proof}
Suppose that $[\frac{(2s-2t-1)n}{2b},\frac{(s-t)n}{b}]$ contains 3 integers for some $t\in[0,\cdots,\lfloor\frac{s}2\rfloor-1]$.
For case (c1): since $\gcd(x,n)>1, \gcd(x+1,n)>1, \gcd(x+2,n)>1, \gcd(x+7,n)>1, \gcd(x+8,n)>1, \gcd(x+9,n)>1$ and $\gcd(x,x+1,n)=\gcd(x,x+2,n)=\gcd(x+1,x+2,n)=1$, we have $\gcd(x+2,x+7)=5, \gcd(x+1,x+8)=7$; then $\gcd(x,x+9)>1$ and $\gcd(x+1,x+9)=\gcd(x+2,x+9)=1$, so we have $\gcd(x+9,n)=1$, which contradicts our assumption.
The proofs for cases (c2)-(c7) are similar. Then this lemma holds.
\end{proof}
\begin{lemma}
If $s\geq 6$ and $[\frac{(2s-2t-1)n}{2b},\frac{(s-t)n}{b}]$ contains exactly two integers for every $t\in[0,\cdots,\lfloor\frac{s}2\rfloor-1]$, then one of the following holds for some $x$:
(c8) $x,x+1,x+5,x+6,x+10,x+11\in\Omega$;
(c9) $x,x+1,x+4,x+5,x+9,x+10\in\Omega$;
(c10) $x,x+1,x+5,x+6,x+9,x+10\in\Omega$;
(c11) $x,x+1,x+4,x+5,x+8,x+9\in\Omega$;
(c12) $x,x+1,x+4,x+5,x+7,x+8\in\Omega$;
(c13) $x,x+1,x+3,x+4,x+7,x+8\in\Omega$;
(c14) $x,x+1,x+3,x+4,x+6,x+7\in\Omega$.
\end{lemma}
\begin{proof}
Similar to the proof of Lemma 3.7.
\end{proof}
\begin{lemma}
If $s\geq 6$, then there exists $t_1\in\{0,\cdots,\lfloor\frac{s}2\rfloor-1\}$ such that $[\frac{(2s-t_1-1)n}{2b},\frac{(2s-t_1)n}{2b}]$ contains exactly one integer and $\frac{n}{2b}<2$.
\end{lemma}
\begin{proof}
Suppose that $[\frac{(2s-2t-1)n}{2b},\frac{(s-t)n}{b}]$ contains exactly two integers for every $t\in[0,\cdots,\lfloor\frac{s}2\rfloor-1]$.
Case (c8): if (c8) holds, then $\gcd(x,x+1,n)=\gcd(x+1,x+5,n)=\gcd(x,x+5,n)=1$ or $\gcd(x+1,x+5,n)=\gcd(x+5,x+6,n)=\gcd(x+1,x+6,n)=1$, $\gcd(x,x+5)=5$.
If $\gcd(x,x+1,n)=\gcd(x+1,x+5,n)=\gcd(x,x+5,n)=1$, then $\gcd(x,x+10)=\gcd(x+5,x+10)=1$ and $\gcd(x+1,x+10)>1$. However, $\gcd(x+1,x+10)\in\{1,3,9\}$, so $\gcd(x+10,n)=1$, which contradicts our assumption.
If $\gcd(x+1,x+5,n)=\gcd(x+5,x+6,n)=\gcd(x+1,x+6,n)=1$ and $\gcd(x,x+5)=5$, then $\gcd(x+1, x+11)=\gcd(x+6,x+11)=1$ and $\gcd(x+5,x+11)>1$. However, since $\gcd(x+5,x+11)|6$, we have $\gcd(x+11,n)=1$, a contradiction.
The proofs for cases (c9)-(c14) are similar. Then this lemma holds.
\end{proof}
\begin{lemma}
If $s\geq 8$ and there exists $t_2\in\{0,\cdots,\lfloor\frac{s}2\rfloor-1\}$ such that $[\frac{(2s-t_2-1)n}{2b},\frac{(2s-t_2)n}{2b}]$ contains exactly two integers, then one of the following holds for some $x$:
(c15) $x,x+3,x+4,x+7,x+8,x+11,x+12\in\Omega$; (c16) $x,x+1,x+4,x+7,x+8,x+11,x+12\in\Omega$;
(c17) $x,x+1,x+4,x+5,x+8,x+11,x+12\in\Omega$; (c18) $x,x+1,x+4,x+5,x+8,x+9,x+12\in\Omega$;
(c19) $x,x+1,x+3,x+4,x+7,x+10,x+11\in\Omega$; (c20) $x,x+1,x+3,x+4,x+7,x+8,x+11\in\Omega$;
(c21) $x,x+3,x+4,x+6,x+7,x+10,x+11\in\Omega$; (c22) $x,x+1,x+4,x+5,x+7,x+8,x+11\in\Omega$;
(c23) $x,x+3,x+4,x+7,x+8,x+10,x+11\in\Omega$; (c24) $x,x+1,x+4,x+7,x+8,x+10,x+11\in\Omega$;
(c25) $x,x+1,x+3,x+4,x+6,x+7,x+10\in\Omega$; (c26) $x,x+3,x+4,x+6,x+7,x+9,x+10\in\Omega$;
(c27) $x,x+2,x+3,x+5,x+6,x+8,x+9\in\Omega$; (c28) $x,x+1,x+3,x+5,x+6,x+8,x+9\in\Omega$;
(c29) $x,x+1,x+3,x+4,x+6,x+8,x+9\in\Omega$; (c30) $x,x+1,x+3,x+4,x+6,x+7,x+9\in\Omega$;
(c31) $x,x+3,x+6,x+7,x+10,x+11\in\Omega$; (c32) $x,x+3,x+6,x+7,x+9,x+10\in\Omega$;
(c33) $x,x+3,x+5,x+6,x+8,x+9\in\Omega$; (c34) $x,x+2,x+4,x+5,x+7,x+8\in\Omega$;
(c35) $x,x+3,x+4,x+7,x+10,x+11\in\Omega$; (c36) $x,x+2,x+3,x+5,x+7,x+8\in\Omega$;
(c37) $x,x+1,x+4,x+7,x+8,x+11\in\Omega$; (c38) $x,x+1,x+3,x+5,x+6,x+8\in\Omega$;
(c39) $x,x+3,x+4,x+7,x+8,x+11\in\Omega$; (c40) $x,x+2,x+3,x+5,x+6,x+8\in\Omega$;
(c41) $x,x+3,x+4,x+6,x+7,x+10\in\Omega$; (c42) $x,x+1,x+3,x+6,x+8,x+9\in\Omega$;
(c43) $x,x+1,x+4,x+7,x+10,x+11\in\Omega$; (c44) $x,x+1,x+3,x+5,x+7,x+8\in\Omega$;
(c45) $x,x+3,x+6,x+9,x+10\in\Omega$; (c46) $x,x+3,x+6,x+8,x+9\in\Omega$;
(c47) $x,x+3,x+5,x+7,x+8\in\Omega$; (c48) $x,x+2,x+5,x+7,x+8\in\Omega$;
(c49) $x,x+2,x+4,x+6,x+7\in\Omega$; (c50) $x,x+1,x+4,x+7,x+10\in\Omega$;
(c51) $x,x+1,x+3,x+6,x+9\in\Omega$; (c52) $x,x+1,x+3,x+5,x+8\in\Omega$;
(c53) $x,x+1,x+3,x+6,x+8\in\Omega$; (c54) $x,x+1,x+3,x+5,x+7\in\Omega$;
(c55) $x,x+3,x+4,x+7,x+10\in\Omega$; (c56) $x,x+2,x+3,x+5,x+8\in\Omega$;
(c57) $x,x+2,x+3,x+5,x+7\in\Omega$; (c58) $x,x+3,x+6,x+7,x+10\in\Omega$;
(c59) $x,x+3,x+5,x+6,x+8\in\Omega$; (c60) $x,x+2,x+4,x+5,x+7\in\Omega$.
\end{lemma}
\begin{lemma}
If $s\geq 8$, then $[\frac{(2s-2t-1)n}{2b},\frac{(s-t)n}{b}]$ contains exactly one integer for every $t\in[0,\cdots,\lfloor\frac{s}2\rfloor-1]$.
\end{lemma}
\begin{lemma}
If $s\geq10$, then one of the following holds for some $x$:
(c61) $x,x+3,x+6,x+9,x+12\in\Omega$; (c62) $x,x+2,x+5,x+8,x+11\in\Omega$;
(c63) $x,x+3,x+5,x+8,x+11\in\Omega$; (c64) $x,x+3,x+6,x+8,x+11\in\Omega$;
(c65) $x,x+3,x+6,x+9,x+11\in\Omega$; (c66) $x,x+2,x+4,x+7,x+10\in\Omega$;
(c67) $x,x+2,x+5,x+7,x+10\in\Omega$; (c68) $x,x+2,x+5,x+8,x+10\in\Omega$;
(c69) $x,x+3,x+5,x+7,x+10\in\Omega$; (c70) $x,x+3,x+5,x+8,x+10\in\Omega$;
(c71) $x,x+3,x+6,x+8,x+10\in\Omega$; (c72) $x,x+2,x+4,x+6,x+9\in\Omega$;
(c73) $x,x+2,x+4,x+7,x+9\in\Omega$; (c74) $x,x+2,x+5,x+7,x+9\in\Omega$;
(c75) $x,x+3,x+5,x+7,x+9\in\Omega$; (c76) $x,x+2,x+4,x+6,x+8\in\Omega$.
\end{lemma}
\begin{lemma}
$s\leq9$.
\end{lemma}
The proofs of Lemma 3.11 and Lemma 3.13 are similar to that of Lemma 3.7, and the proofs of Lemma 3.12 and Lemma 3.14 are similar to that of Lemma 3.10.
In view of Lemmas 3.5 and 3.14, from now on we may always assume that $s\leq9$.
Let $k_1$ be the largest positive integer such that $\lceil\frac{(k_1-1)n}{c}\rceil=\lceil\frac{(k_1-1)n}{b}\rceil$ and $\frac{k_1n}{c}\leq m<\frac{k_1n}{b}$ for some integer $m$. Since $\frac{bn}{c}\leq n-1<n=\frac{bn}{b}$ and $\frac{tn}{b}-\frac{tn}{c}=\frac{t(c-b)n}{bc}>2$ for all $t\geq b$, such an integer $k_1$ always exists and $k_1\leq b$.
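The gap estimate for $t\geq b$ can be checked using $c-b=a-1$ and $a\geq35$ (Remark 3.2): since $n>c$,
\begin{eqnarray*} \frac{tn}{b}-\frac{tn}{c}=\frac{t(a-1)n}{bc}\geq\frac{b(a-1)n}{bc}=\frac{(a-1)n}{c}\geq\frac{34n}{c}>34>2.\end{eqnarray*}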
\begin{lemma}
Suppose $\lceil\frac{n}{c}\rceil=\lceil\frac{n}{b}\rceil$, then $k_1\leq\frac{b}{a}$.
\end{lemma}
\begin{proof}
Since $\lceil\frac{n}{c}\rceil=\lceil\frac{n}{b}\rceil$, we have $k_1\geq2$. Assume on the contrary that $k_1>\frac{b}{a}.$
Since $\lceil\frac{(k_1-1)n}{c}\rceil=\lceil\frac{(k_1-1)n}{b}\rceil$, we have
\begin{eqnarray} 1>\frac{(k_1-1)n}{b}-\frac{(k_1-1)n}{c}=\frac{(c-b)(k_1-1)n}{cb}.\end{eqnarray}
If $a-1\geq\frac{b}{k_1}$, then $\frac{(c-b)(k_1-1)n}{cb}=\frac{(a-1)(k_1-1)n}{cb}\geq\frac{(k_1-1)}{k_1}\times\frac{n}{c}>1$, contradiction. Thus we have that $\frac{b}{k_1}+1>a>\frac{b}{k_1}$ and $\frac{b}{k_1}$ is not an integer.
If $k_1\geq 3$, then since $a\geq35$, we have $\frac{(c-b)(k_1-1)n}{cb}=\frac{a-1}{a}\times\frac{a}{b}\times\frac{(k_1-1)n}{c}\geq\frac{34}{35}\times\frac{3-1}{3}\times2>1$, a contradiction.
If $k_1=2$, then $b<2a<b+2$; thus $b$ is an odd number, and we may assume $b=2l+1$. Then $a=l+1$ and $c=3l+1$. The inequality $\frac{(c-b)(k_1-1)n}{cb}<1$ then implies that $n<6l+5+\frac{1}{l}$. Moreover, $\gcd(n,6)=1$; by $6l+2<n\leq 6l+5$, we infer that $n=6l+5$. Thus
\begin{eqnarray*} \Big\lceil\frac{n}{c}\Big\rceil=\Big\lceil\frac{6l+5}{3l+1}\Big\rceil=3<4=\Big\lceil\frac{6l+5}{2l+1}\Big\rceil=\Big\lceil\frac{n}{b}\Big\rceil,\end{eqnarray*} contradiction.
\end{proof}
Then we can show that Proposition 3.1 holds through the following two propositions.
\begin{proposition}
If $\lceil\frac{n}{c}\rceil<\lceil\frac{n}{b}\rceil$, then Proposition 3.1 holds.
\end{proposition}
\begin{proposition}
If $\lceil\frac{n}{c}\rceil=\lceil\frac{n}{b}\rceil$ and $k_1\leq\frac{b}{a}$, then Proposition 3.1 holds.
\end{proposition}
\section{Proof of Proposition 3.16}
\begin{lemma}
If $[\frac{n}{c},\frac{n}{b}]$ contains at least two integers, then $\hbox{\rm ind}(S)=1$.
\end{lemma}
\begin{proof}
We prove this lemma under assumption $(A4)$. For $(A2)$ and $(A3)$, the proof is very similar.
Since $a\leq b$, by Lemma 3.4 we may assume that every integer in $[\frac{n}{c},\frac{n}{b}]$ is not co-prime to $n$. Since $n=p_1p_2p_3$ and $p_1p_3|b$, it follows that $\frac{n}{b}\leq p_2$. Then one of the following holds:
\begin{eqnarray} m_1-1<\frac{n}{c}\leq m_1<m_1+1< \frac{n}{b}=m_1+2=p_2,\\
m_1-1<\frac{n}{c}\leq m_1<m_1+1\leq \frac{n}{b}<m_1+2.\end{eqnarray}
{\it For case} (4.1): Since $\gcd(m_1,n)>1, \gcd(m_1+1,n)>1$ and $\gcd(n,6)=1$, we infer that $p_2\geq23$ and $\gcd(n,m_1+3)=1$; then $m_1\geq21$ and $n\geq23b$. Note that
\begin{eqnarray} 2m_1-2<\frac{2n}{c}<\frac{2n}{b}=2m_1+4<2m_1+5.\end{eqnarray}
Let $m=2m_1+5$ and $k=2$. Since $1+c=a+b$, by (4.3) we have $(2m_1-2)(b+a-1)<(2m_1+5)b$, and thus $(2m_1-2)(a-1)<7b$. Since $a\geq 35$ and $m_1\geq 21$, we have
\begin{eqnarray*} ma=(2m_1+5)a=\frac{2m_1+5}{2m_1-2}\times\frac{a}{a-1}\times(2m_1-2)(a-1)<\frac{2\times21+5}{2\times21-2}\times\frac{35}{35-1}\times7b<23b\leq n,\end{eqnarray*}
and the result holds.
{\it For case} (4.2):
Since $\gcd(m_1,n)>1, \gcd(m_1+1,n)>1$ and $\gcd(n,6)=1$, we infer that $m_1\geq10$. Then $n\geq 11b$.
Since $n=p_1p_2p_3$, we have $\gcd(2m_1+1,n)=1$ or $\gcd(2m_1+3,n)=1$. Note that
\begin{eqnarray} 2m_1-2<\frac{2n}{c}\leq2m_1<2m_1+1<2m_1+2\leq\frac{2n}{b}<2m_1+4.\end{eqnarray}
Since $1+c=a+b$, by (4.4) we have $(2m_1-2)(b+a-1)<(2m_1+4)b$, and thus $(2m_1-2)(a-1)<6b$. Let $k=2$ and let $m\in\{2m_1+1,2m_1+3\}$ be such that $\gcd(m,n)=1$. Since $a\geq 35$ and $m_1\geq 10$, we have
\begin{eqnarray*} ma\leq(2m_1+3)a=\frac{2m_1+3}{2m_1-2}\times\frac{a}{a-1}\times(2m_1-2)(a-1)<\frac{2\times10+3}{2\times10-2}\times\frac{35}{35-1}\times6b=\frac{805b}{102}<n,\end{eqnarray*}
and the result holds.
\end{proof}
By Lemma 4.1, we may assume that $[\frac{n}{c},\frac{n}{b}]$ contains exactly one integer $m_1$, and thus
\begin{eqnarray*} m_1-1<\frac{n}{c}\leq m_1<\frac{n}{b}<m_1+1.\end{eqnarray*}
Consequently, $\frac{n}{b}-m_1<1$ and $m_1-\frac{n}{c}<1$.
\begin{lemma}
If $4<\frac{n}{c}\leq5<\frac{n}{b}<6$ and $5|n$, then $\hbox{\rm ind}(S)=1$.
\end{lemma}
\begin{proof}
Since $4<\frac{n}{c}\leq5<\frac{n}{b}<6$, $n\geq5b$. Note that $m_1=\lceil\frac{n}{c}\rceil=5$. We divide the proof into eight cases.
{\noindent \bf Case 1.} $8<\frac{2n}{c}\leq9<10<\frac{2n}{b}<12$.
Since $8(b+a-1)=8c<2n<12b$, we have $8(a-1)<4b$. Clearly, $\gcd(n,9)=1$. Then
\begin{eqnarray*} 9a=\frac{9a}{8(a-1)}\times 8(a-1)<\frac{9}{8}\times\frac{35}{34}\times 4b=\frac{315b}{272}<5b\leq n.\end{eqnarray*}
By Lemma 3.3(1), this lemma holds with $k=2, m=9$.
{\noindent \bf Case 2.} $9<\frac{2n}{c}\leq10<11<\frac{2n}{b}<12$ and $\gcd(n,11)=1$.
Since $9(b+a-1)=9c<12b$, we have $9(a-1)<3b$ and
\begin{eqnarray*} 11a=\frac{11a}{9(a-1)}\times 9(a-1)<\frac{11}{9}\times\frac{35}{34}\times 3b=\frac{385b}{102}<5b\leq n.\end{eqnarray*}
Then Lemma 3.3(1) can be applied with $k=2, m=11$.
{\noindent \bf Case 3.} $9<\frac{2n}{c}\leq10<11<\frac{2n}{b}<12$, $\gcd(n,11)=11$ and $13<\frac{3n}{c}\leq14<15<16<\frac{3n}{b}<18$.
It still holds that $9(a-1)<3b$. By the assumption $n>1000$ and $5\times7\times11=385<1000$, we have $\gcd(n,14)=1$. Then
\begin{eqnarray*} 14a=\frac{14a}{9(a-1)}\times 9(a-1)<\frac{14}{9}\times\frac{35}{34}\times 3b=\frac{245b}{51}<5b\leq n.\end{eqnarray*}
{\noindent \bf Case 4.} $9<\frac{2n}{c}\leq10<11<\frac{2n}{b}<12$, $\gcd(n,11)=11$ and $14<\frac{3n}{c}\leq15<16<\frac{3n}{b}<18$.
In this case, $14(a-1)<4b$. Since $\gcd(n,16)=1$, we have
\begin{eqnarray*} 16a=\frac{16a}{14(a-1)}\times 14(a-1)<\frac{16}{14}\times\frac{35}{34}\times 4b=\frac{80b}{17}<5b\leq n.\end{eqnarray*}
{\noindent \bf Case 5.} $9<\frac{2n}{c}\leq10<\frac{2n}{b}<11$.
In this case we have $9(a-1)<2b$.
{\it Subcase 5.1.} $13<\frac{3n}{c}\leq15<16<\frac{3n}{b}<17$.
Clearly, $\gcd(n,16)=1$. Then
\begin{eqnarray*} 16a=\frac{16a}{9(a-1)}\times 9(a-1)<\frac{16}{9}\times\frac{35}{34}\times 2b=\frac{560b}{153}<5b\leq n.\end{eqnarray*}
{\it Subcase 5.2.} $13<\frac{3n}{c}\leq14<15<\frac{3n}{b}<16$ and $\gcd(n,7)=1$.
\begin{eqnarray*} 14a=\frac{14a}{9(a-1)}\times 9(a-1)<\frac{14}{9}\times\frac{35}{34}\times 2b=\frac{490b}{153}<5b\leq n.\end{eqnarray*}
{\it Subcase 5.3.} $13<\frac{3n}{c}\leq14<15<\frac{3n}{b}<16$ and $\gcd(n,7)>1$.
Since $n>1000$ and $35|n$, we have $\gcd(19,n)=1$. It is easy to see that $18<\frac{4n}{c}\leq19<20<\frac{4n}{b}<22$. Then
\begin{eqnarray*} 19a=\frac{19a}{9(a-1)}\times 9(a-1)<\frac{19}{9}\times\frac{35}{34}\times 2b=\frac{665b}{153}<5b\leq n.\end{eqnarray*}
{\noindent \bf Case 6.} $14<\frac{3n}{c}\leq15<\frac{3n}{b}<16$.
In this case we have $14(a-1)<2b$.
{\it Subcase 6.1.} $18<\frac{4n}{c}\leq19<20<21<\frac{4n}{b}<22$.
By assumption $n>1000$ and $5\times7\times19=665<1000$, we have $\gcd(n,21)=1$ or $\gcd(n,19)=1$. Let $m$ be one of $19$ and $21$ such that $\gcd(n,m)=1$. Then
\begin{eqnarray*} ma\leq21a=\frac{21a}{14(a-1)}\times 14(a-1)<\frac{21}{14}\times\frac{35}{34}\times 2b=\frac{105b}{34}<5b\leq n.\end{eqnarray*}
{\it Subcase 6.2.} $19<\frac{4n}{c}\leq20<21<\frac{4n}{b}<22$ and $\gcd(n,21)=1$.
The proof is similar to above.
{\it Subcase 6.3.} $19<\frac{4n}{c}\leq20<21<\frac{4n}{b}<22$ and $\gcd(n,21)>1$.
In this case we have $\gcd(n,26)=1$ and $23<\frac{5n}{c}\leq25<26<\frac{5n}{b}<28$. Then
\begin{eqnarray*} 26a=\frac{26a}{14(a-1)}\times 14(a-1)<\frac{26}{14}\times\frac{35}{34}\times 2b=\frac{65b}{17}<5b\leq n.\end{eqnarray*}
{\it Subcase 6.4.} $18<\frac{4n}{c}\leq19<20<\frac{4n}{b}<21$.
It must hold that $23<\frac{5n}{c}\leq24<25<\frac{5n}{b}<27$. Since $\gcd(24,n)=1$, we have
\begin{eqnarray*} 24a=\frac{24a}{14(a-1)}\times 14(a-1)<\frac{24}{14}\times\frac{35}{34}\times 2b=\frac{60b}{17}<5b\leq n.\end{eqnarray*}
{\noindent \bf Case 7.} $19<\frac{4n}{c}\leq20<\frac{4n}{b}<21$.
In this case, $19(a-1)<2b$.
{\it Subcase 7.1.} $23<\frac{5n}{c}\leq24<25<\frac{5n}{b}<27$.
Since $\gcd(24,n)=1$, we have
\begin{eqnarray*} 24a=\frac{24a}{19(a-1)}\times 19(a-1)<\frac{24}{19}\times\frac{35}{34}\times 2b=\frac{840b}{323}<5b\leq n.\end{eqnarray*}
{\it Subcase 7.2.} $24<\frac{5n}{c}\leq25<26<\frac{5n}{b}<27$.
It must hold that $32<\frac{7n}{c}\leq35<36<\frac{7n}{b}<39$. Since $\gcd(n,36)=1$, we have
\begin{eqnarray*} 36a=\frac{36a}{19(a-1)}\times 19(a-1)<\frac{36}{19}\times\frac{35}{34}\times 2b=\frac{1260b}{323}<5b\leq n.\end{eqnarray*}
{\noindent \bf Case 8.} $24<\frac{5n}{c}\leq25<\frac{5n}{b}<26$.
By direct computation, we have $24(a-1)<2b$; then $\frac{b}{a}>\frac{24}{2}\times\frac{33}{34}=\frac{198}{17}$ and $s\geq11$, which contradicts our assumption $s\leq9$.
\end{proof}
\begin{lemma}
If $6<\frac{n}{c}\leq7<\frac{n}{b}<8$ and $7|n$, then $\hbox{\rm ind}(S)=1$.
\end{lemma}
\begin{proof}
Since $6<\frac{n}{c}\leq7<\frac{n}{b}<8$, $n\geq7b$. Note that $m_1=\lceil\frac{n}{c}\rceil=7$. We divide the proof into five cases.
{\noindent \bf Case 1.} $12<\frac{2n}{c}\leq13<14<15<\frac{2n}{b}<16$.
Since $13\times7\times5=455<1000$, we have $\gcd(13,n)=1$ or $\gcd(15,n)=1$. Let $m$ be one of $13$ and $15$ such that $\gcd(n,m)=1$. It is easy to check that $12(a-1)<4b$; then
\begin{eqnarray*} ma\leq15a=\frac{15a}{12(a-1)}\times 12(a-1)<\frac{15}{12}\times\frac{35}{34}\times 4b=\frac{175b}{34}<7b\leq n.\end{eqnarray*}
{\noindent \bf Case 2.} $12<\frac{2n}{c}\leq13<14<\frac{2n}{b}<15$.
It must hold that $24<\frac{4n}{c}\leq26<27<28<\frac{4n}{b}<30$ and $12(a-1)<3b$. Since $\gcd(n,27)=1$, we have
\begin{eqnarray*} 27a=\frac{27a}{12(a-1)}\times 12(a-1)<\frac{27}{12}\times\frac{35}{34}\times 3b=\frac{945b}{136}<7b\leq n.\end{eqnarray*}
{\noindent \bf Case 3.} $13<\frac{2n}{c}\leq14<15<\frac{2n}{b}<16$.
{\it Subcase 3.1.} $\gcd(n,15)=1$. The proof is similar to Case 1.
{\it Subcase 3.2.} $\gcd(n,15)=5$. Then we have $\gcd(n,22)=1$, $12(a-1)<3b$ and $19<\frac{3n}{c}\leq21<22<\frac{3n}{b}<24$.
\begin{eqnarray*} 22a=\frac{22a}{12(a-1)}\times 12(a-1)<\frac{22}{12}\times\frac{35}{34}\times 3b=\frac{385b}{68}<7b\leq n.\end{eqnarray*}
{\noindent \bf Case 4.} $13<\frac{2n}{c}\leq14<\frac{2n}{b}<15$.
In this case $13(a-1)<2b$.
{\it Subcase 4.1.} $19<\frac{3n}{c}\leq21<22<\frac{3n}{b}\leq23$.
We have $\frac{5n}{c}\leq35<36<\frac{5n}{b}$ and $\gcd(n,36)=1$. Then
\begin{eqnarray*} 36a=\frac{36a}{13(a-1)}\times 13(a-1)<\frac{36}{13}\times\frac{35}{34}\times 2b=\frac{1260b}{221}<7b\leq n.\end{eqnarray*}
{\it Subcase 4.2.} $19<\frac{3n}{c}\leq20<21<\frac{3n}{b}\leq22$ and $\gcd(n,20)=1$.
\begin{eqnarray*} 20a=\frac{20a}{13(a-1)}\times 13(a-1)<\frac{20}{13}\times\frac{35}{34}\times 2b=\frac{700b}{221}<7b\leq n.\end{eqnarray*}
{\it Subcase 4.3.} $19<\frac{3n}{c}\leq20<21<\frac{3n}{b}\leq22$ and $\gcd(n,20)=5$.
It must hold $\frac{4n}{c}\leq27<28<\frac{4n}{b}$. Since $\gcd(27,n)=1$, we have
\begin{eqnarray*} 27a=\frac{27a}{13(a-1)}\times 13(a-1)<\frac{27}{13}\times\frac{35}{34}\times 2b=\frac{945b}{221}<7b\leq n.\end{eqnarray*}
{\noindent \bf Case 5.} $20<\frac{3n}{c}\leq21<\frac{3n}{b}\leq22$.
If $29<\frac{4n}{b}$, the proof is similar to Subcase 4.1. If $\frac{4n}{c}\leq 27$, the proof is similar to Subcase 4.2.
If $27<\frac{4n}{c}\leq28<\frac{4n}{b}\leq29$, we have $27(a-1)<2b$; then $\frac{b}{a}>\frac{27}{2}\times\frac{33}{34}=\frac{891}{68}$ and $s\geq13$, which contradicts our assumption $s\leq9$.
\end{proof}
Let $l$ be the smallest integer such that $[\frac{ln}{c},\frac{ln}{b})$ contains at least four integers. Clearly, $l\geq3$. Since $\frac{n}{b}-m_1<1$ and $m_1-\frac{n}{c}<1$, by using the minimality of $l$ we obtain that
$lm_1-4<\frac{ln}{c}<\frac{ln}{b}<lm_1+4$. Then $\frac{ln(c-b)}{bc}=\frac{ln}{b}-\frac{ln}{c}<(lm_1+4)-(lm_1-4)=8$ and thus
\begin{eqnarray*} l<\frac{8bc}{(c-b)n}<\frac{8b}{(a-1)(m_1-1)}\leq \frac{8b}{(35-1)(5-1)}<b.\end{eqnarray*}
We claim that $[\frac{ln}{c},\frac{ln}{b})$ contains at most six integers. For any positive integer $j$, let $N_j$ denote the number of integers contained in $[\frac{jn}{c},\frac{jn}{b})$. Since \begin{eqnarray*} \Big(\frac{(j+1)n}{b}-(j+1)m_1\Big)-\Big(\frac{jn}{b}-jm_1\Big)=\frac{n}{b}-m_1<1,\\
\Big((j+1)m_1-\frac{(j+1)n}{c}\Big)-\Big(jm_1-\frac{jn}{c}\Big)=m_1-\frac{n}{c}<1,\end{eqnarray*}
we infer that $N_{j+1}-N_j\leq2$. Since $N_{l-1}\leq3$ by the minimality of $l$, it follows that $N_l\leq5\leq6$, which proves our claim.
By the claim above we have
\begin{eqnarray*} lm_1-j_0<\frac{ln}{c}\leq lm_1-j_0+1<\cdots<lm_1-j_0+4<\frac{ln}{b}\leq lm_1-j_0+6,\end{eqnarray*}
for some $1\leq j_0\leq 4$.
We remark that since $n=p_1p_2p_3$ and $[\frac{ln}{c},\frac{ln}{b})$ contains at least four integers, one of them (say $m$) must be co-prime to $n$. If $ma<n$, then we are done by Lemma 3.3(1) (with $k=l<b$).
\begin{lemma}
If $m_1\not=5,7$, then $\hbox{\rm ind}(S)=1$.
\end{lemma}
\begin{proof}
If $m_1\not=5,7$, then $m_1\geq10$ and $n\geq m_1b\geq10b$. Let $k=l$ and let $m$ be one of the integers in $[\frac{ln}{c},\frac{ln}{b})$ which is co-prime to $n$.
Then $(lm_1-j_0)(b+a-1)=(lm_1-j_0)c<ln\leq(lm_1-j_0+6)b$, so $(lm_1-j_0)(a-1)<6b$. Note that $m\leq lm_1+3$ and $l\geq 3$, then
\begin{eqnarray*} ma\leq(lm_1+3)a=\frac{lm_1+3}{lm_1-j_0}\times\frac{a}{a-1}\times(lm_1-j_0)(a-1)\\
<\frac{3\times10+3}{3\times10-4}\times\frac{35}{34}\times6b=\frac{3465b}{442}<10b\leq n,\end{eqnarray*}
and we are done.
\end{proof}
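The numerical constants appearing in the bound above can be checked mechanically. The following snippet (a sanity check for the reader, not part of the proof) verifies with exact rational arithmetic that $\frac{3\times10+3}{3\times10-4}\times\frac{35}{34}\times6=\frac{3465}{442}<10$ at the extremal values $l=3$, $m_1=10$, $j_0=4$, $a=35$ used in the proof.

```python
from fractions import Fraction

# Extremal parameter values used in the proof: l = 3, m_1 = 10, j_0 = 4, a = 35.
l, m1, j0, a = 3, 10, 4, 35

# Coefficient of b in the estimate ma <= coeff * b from the displayed inequality.
coeff = Fraction(l * m1 + 3, l * m1 - j0) * Fraction(a, a - 1) * 6

print(coeff, coeff < 10)  # 3465/442 True, hence ma < 10b <= n
```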
\section{Proof of Proposition 3.17}
In this section, we always assume that $\lceil\frac{n}{c}\rceil=\lceil\frac{n}{b}\rceil$, so $k_1\geq2$, and we also assume that $k_1\leq\frac{b}{a}$. Since $a\leq\frac{b}{k_1}$, by Lemma 3.3 we may assume that $\gcd(n,m_1)>1$ for every $m_1\in[\frac{k_1n}{c},\frac{k_1n}{b})$.
\begin{lemma}{\rm (Lemma 3.7 of \cite{LP})}
If $u<\frac{n}{c}<\frac{n}{b}<v$ for some real numbers $u,v$ and $u(k_1-1)>s+1$, then
\begin{eqnarray*} n<\frac{uv(k_1-1)(s+1)}{u(k_1-1)-(s+1)}.\end{eqnarray*}
\end{lemma}
\begin{lemma}
$k_1\leq6$.
\end{lemma}
\begin{proof}
If $k_1\geq7$, then $7\leq k_1\leq s\leq9$. By Lemma 3.10, $n<4b$. Applying Lemma 5.1 with $u=2$ and $v=4$, we infer that $n<240$, which contradicts our assumption $n>1000$.
\end{proof}
\begin{lemma}
If $k_1=6$, then $\hbox{\rm ind}(S)=1$.
\end{lemma}
\begin{proof}
Assume that $k_1=6$. Then $s\geq6$ and, by Lemma 3.10, $n<4b$. If $s\leq8$, applying Lemma 5.1 with $u=2$ and $v=4$, we infer that $n<360$, which contradicts our assumption $n>1000$.
Let $s=9$. If $3<\frac{n}{c}<\frac{n}{b}<4$, the proof is similar to the above. If $2<\frac{n}{c}<\frac{n}{b}<3$, then $10<\frac{5n}{c}<\frac{5n}{b}<15$. By the definition of $k_1$, we have $10+r<\frac{5n}{c}<\frac{5n}{b}\leq11+r$ for some $r=0,1,2,3,4$.
{\bf Case 1.} $r=0$. $10<\frac{5n}{c}<\frac{5n}{b}\leq11$, then
\begin{eqnarray*} 12<\frac{6n}{c}\leq13<\frac{6n}{b}<\frac{66}{5},&& 14<\frac{7n}{c}<15<\frac{7n}{b}<\frac{77}{5},\\
16<\frac{8n}{c}\leq17<\frac{8n}{b}<\frac{88}{5}, && 18<\frac{9n}{c}\leq19<\frac{9n}{b}<\frac{99}{5},\end{eqnarray*}
and we can find $m\in\{13,15,17,19\}$ such that $\gcd(m,n)=1$; by Lemma 3.4, $\hbox{\rm ind}(S)=1$.
{\bf Case 2.} $r=1$. $11<\frac{5n}{c}<\frac{5n}{b}<12$, then $\frac{77}{5}<\frac{7n}{c}<16<\frac{7n}{b}<\frac{84}{5}$ and $\gcd(16,n)=1$, by Lemma 3.4, $\hbox{\rm ind}(S)=1$.
{\bf Case 3.} $r=2$. $12<\frac{5n}{c}<\frac{5n}{b}<13$. If $\frac{84}{5}<\frac{7n}{c}<18<\frac{91}{5}$, then $\gcd(18,n)=1$, and $\hbox{\rm ind}(S)=1$. Otherwise,
\begin{eqnarray*} \frac{72}{5}<\frac{6n}{c}\leq15<\frac{6n}{b}<\frac{78}{5},&& \frac{84}{5}<\frac{7n}{c}\leq17<\frac{7n}{b}<18,\\
\frac{96}{5}<\frac{8n}{c}\leq20<\frac{8n}{b}<\frac{144}{7}, && \frac{108}{5}<\frac{9n}{c}<22<\frac{9n}{b}<\frac{162}{7},\end{eqnarray*}
and we can find $m\in\{5,11,17\}$ such that $\gcd(m,n)=1$; otherwise $n=5\times11\times17=935<1000$, which contradicts our assumption. By Lemma 3.4, $\hbox{\rm ind}(S)=1$.
{\bf Case 4.} $r=3$. $13<\frac{5n}{c}<\frac{5n}{b}<14$. Then $\frac{78}{5}<\frac{6n}{c}<16<\frac{6n}{b}<\frac{84}{5}$, and $\gcd(16,n)=1$, by Lemma 3.4, $\hbox{\rm ind}(S)=1$.
{\bf Case 5.} $r=4$. $14<\frac{5n}{c}<\frac{5n}{b}<15$. Then \begin{eqnarray*} \frac{84}{5}<\frac{6n}{c}\leq17<\frac{6n}{b}<18,&& \frac{98}{5}<\frac{7n}{c}<20<\frac{7n}{b}<21,\\
\frac{112}{5}<\frac{8n}{c}\leq23<\frac{8n}{b}<24, && \frac{126}{5}<\frac{9n}{c}<26<\frac{9n}{b}<27,\end{eqnarray*}
and we can find $m\in\{5,13,17,23\}$ such that $\gcd(m,n)=1$; by Lemma 3.4, $\hbox{\rm ind}(S)=1$.
\end{proof}
\begin{lemma}
If $k_1=5$, then $\hbox{\rm ind}(S)=1$.
\end{lemma}
\begin{proof}
Assume that $k_1=5$. Since $s\geq5$, we have $n<6b$. If $s\leq6$ or $\frac{11}{4}<\frac{n}{c}$, we can get a contradiction by applying Lemma 5.1.
For $s=7,8,9$, let $2<\frac{n}{c}<\frac{n}{b}<3$. We have $8+r<\frac{4n}{c}<\frac{4n}{b}\leq9+r$ for some $r=0,1,2,3$. If $r=3$, then $\frac{11}{4}<\frac{n}{c}$, which has been solved.
{\bf Case 1.} $r=0$. Then
\begin{eqnarray*} 10<\frac{5n}{c}\leq11<\frac{5n}{b}<\frac{45}{4}, 12<\frac{6n}{c}\leq13<\frac{6n}{b}<\frac{27}{2}, 14<\frac{7n}{c}<15<\frac{7n}{b}<\frac{63}{4},\end{eqnarray*}
and we can find $m\in\{11,13,15\}$ such that $\gcd(m,n)=1$ because $11\times13\times5<1000$. By Lemma 3.4, $\hbox{\rm ind}(S)=1$.
{\bf Case 2.} $r=1$. Then $\frac{45}{4}<\frac{5n}{c}<12<\frac{5n}{b}<\frac{25}{2}$,
and $\gcd(12,n)=1$. By Lemma 3.4, $\hbox{\rm ind}(S)=1$.
{\bf Case 3.} $r=2$. Then $15<\frac{6n}{c}<16<\frac{6n}{b}<\frac{33}{2}$, and $\gcd(16,n)=1$. Let $m=16$ and $k=6$; we have $m\cdot a=16\times(c-b+1)\leq16\times(\frac{2n-1}{5}-\frac{4n}{11}+1)=\frac{16\times(2n+44)}{55}<\frac{2n+44}{3}<n$.
By Lemma 3.4, $\hbox{\rm ind}(S)=1$.
\end{proof}
\begin{lemma}
If $k_1=4$, then $\hbox{\rm ind}(S)=1$.
\end{lemma}
\begin{proof}
Assume that $k_1=4$. Since $s\geq4$, we have $n<6b$.
{\bf Case 1.} If $s=4$, or if $3<\frac{n}{c}$ and $s=5,6,7$, we can obtain a contradiction by applying Lemma 5.1.
{\bf Case 2.} If $2<\frac{n}{c}<\frac{n}{b}<3$ and $s=5,6,7$. Then $6+r<\frac{3n}{c}<\frac{3n}{b}\leq7+r$ for some $r=0,1,2$.
{\it Subcase 2.1.} $r=0$. We have $8<\frac{4n}{c}<\frac{4n}{b}\leq\frac{28}{3}$, then $m_1=9$ and $\gcd(n,9)=1$, which contradicts our assumption $\gcd(n,m_1)>1$.
{\it Subcase 2.2.} $r=1$. We have $12<\frac{5n}{c}\leq13<\frac{5n}{b}<\frac{40}{3}$ or $\frac{35}{3}<\frac{5n}{c}<12<\frac{5n}{b}<\frac{40}{3}$.
If $\frac{5n}{c}<12<\frac{5n}{b}$, then $\hbox{\rm ind}(S)=1$ by Lemma 3.4. If $12<\frac{5n}{c}\leq13<\frac{5n}{b}<\frac{40}{3}$,
we have $\frac{9n}{2b}<12<13<\frac{5n}{b}$, and $12\in[\frac{9n}{2b},\frac{5n}{b}]$. Since $\gcd(n,12)=1$, this contradicts our previous assumption $(B)$ with $t=0,1,2$ for $s=5,6,7$, respectively.
{\it Subcase 2.3.} $r=2$. We have $\frac{8}{3}<\frac{n}{c}<\frac{n}{b}<3$, we can get a contradiction by Lemma 5.1.
{\bf Case 3.} $s=8,9$. We have $n<4b$. Then $6+r<\frac{3n}{c}<\frac{3n}{b}\leq7+r$ for some $r=0,1,2,3,4,5$.
{\it Subcase 3.1.} $r=0$. We have $8<\frac{4n}{c}<\frac{4n}{b}<\frac{28}{3}$, then $m_1=9$ and $\gcd(n,9)=1$, which contradicts our assumption $\gcd(n,m_1)>1$.
{\it Subcase 3.2.} $r=1$. We have $\frac{28}{3}<\frac{4n}{c}\leq10<\frac{4n}{b}<\frac{32}{3}$. Assume that $5|n$, otherwise $\hbox{\rm ind}(S)=1$ by Lemma 3.4. Furthermore, if $\frac{5n}{c}<12<\frac{5n}{b}$, we also have $\hbox{\rm ind}(S)=1$ by Lemma 3.4. Then $12<\frac{5n}{c}\leq13<\frac{5n}{b}<\frac{40}{3}$.
Since $\gcd(n,18)=1$, we infer that $\frac{84}{5}<\frac{7n}{c}\leq17<\frac{7n}{b}<18$ and $n=5\times13\times17$. Otherwise, we have $\frac{13n}{2b}<\frac{52}{3}<18<\frac{7n}{b}$, which contradicts $(B)$ with $t=1,2$ for $s=8,9$, respectively.
Under assumption $(A4)$: $17\geq\frac{n}{a}=\frac{n}{b}\times\frac{b}{a}\geq2\times8=16>13>\frac{17}{2}$, so we infer that $a=5\times13$. Since $8\leq\frac{b}{a}<10$, we have $b=j\times17\times5\ (j<7)$ or $b=j\times 17\times 13\ (j<3)$. However,
$\frac{6\times17\times5}{5\times13}=\frac{102}{13}<8$, $\frac{2\times17\times13}{5\times13}=\frac{34}{5}<8$, contradiction.
Under assumption $(A3)$: we infer that $a=5\times13+1$. Since $8\leq\frac{b}{a}<10$, we have $b=j\times17\times5+1\ (j<7)$ or $b=j\times 17\times 13+1\ (j<3)$. However,
$\frac{6\times17\times5+1}{5\times13+1}=\frac{511}{66}<8$, $\frac{2\times17\times13+1}{5\times13+1}=\frac{443}{66}<8$, contradiction.
Under assumption $(A2)$, we distinguish three cases.
$(A2.1)$: $p_1p_2|a,p_1|b, p_2|c$, then $a=5\times13$. Moreover, $40\leq b<50$ when $p_1=5$ and $104\leq b<130$ when $p_1=13$. If $p_1=5$, then $\frac{n}{b}\geq \frac{5\times13\times17}{50}>22$, contradiction. If $p_1=13$, then $\frac{n}{b}\geq \frac{5\times13\times17}{130}=\frac{85}{8}$, contradiction.
$(A2.2)$: $p_1|a, p_1p_2|b, p_2|c$. If $p_1=5$, then by $a=j\times5$ and $16<\frac{n}{a}\leq17$, we have $j=13$, contradiction. If $p_1=13$, then by $a=j\times13$ and $16<\frac{n}{a}\leq17$, we have $j=5$, contradiction. If $p_1=17$, then $a=4\times17=68$. Moreover, $b=2\times13\times17$ or $b=j\times5\times17\ (j=4,5,6)$. If $b=2\times13\times17$, then $c=a+b-1=373$, which contradicts $\gcd(n,c)=5$. If $b=j\times5\times17\ (j=4,5,6)$, then
$c\in\{261,356,431\}$, which contradicts $\gcd(n,c)=13$.
$(A2.3)$: $p_1|a, p_2|b, p_1p_2|c$. Similarly to $(A2.2)$, we have $a=4\times17=68$. Then $544=68\times8\leq b<680$. Since $b<\frac{n}{2}<553$, we have $p_2=5, b\in\{545,550\}$ or $p_2=13, b=546$. If $p_2=5$, then $c\in\{612,617\}$, which contradicts $\gcd(n,c)=5\times17$. If $p_2=13$, then $c=613$, which contradicts $\gcd(n,c)=17\times13$.
{\it A remark on the proof.} From now on, throughout this section, if $n$ is determined as a product of three small explicit primes similar to above, we only check it under assumption $(A4)$. The proof for $(A2)$ and $(A3)$ is not essentially different from the above process.
{\it Subcase 3.3.} $r=2$. We have
\begin{eqnarray*} \frac{32}{3}<\frac{4n}{c}\leq11<\frac{4n}{b}<12, \frac{40}{3}<\frac{5n}{c}\leq14<\frac{5n}{b}<15, 16<\frac{6n}{c}\leq17<\frac{6n}{b}<18.\end{eqnarray*}
Then $n=7\times11\times17$. Otherwise, there exists $m\in\{11,14,17\}$ such that $\gcd(n,m)=1$ and $\hbox{\rm ind}(S)=1$.
Clearly, $17\geq\frac{n}{a}=\frac{n}{b}\times\frac{b}{a}\geq2\times8=16>11>\frac{17}{2}$, so we have $a=7\times 11$. Since $8\leq\frac{b}{a}<10$, we have $b=j\times17\times7\ (j<6)$ or $b=j\times 17\times 11\ (j<4)$. However,
$\frac{5\times17\times7}{7\times11}=\frac{85}{11}<8$ and $\frac{3\times17\times11}{7\times11}=\frac{51}{7}<8$, a contradiction.
{\it Subcase 3.4.} $r=3$. We have $15<\frac{5n}{c}<16<\frac{5n}{b}<\frac{50}{3}<17$ and $\gcd(n,16)=1$. Let $m=16$ and $k=5$; we have $m\cdot a=16\times(c-b+1)\leq16\times(\frac{n-1}{3}-\frac{3n}{10}+1)=\frac{16\times(n+20)}{30}<n$, so $\hbox{\rm ind}(S)=1$.
{\it Subcase 3.5.} $r=4$. We have
\begin{eqnarray*} \frac{40}{3}<\frac{4n}{c}\leq14<\frac{4n}{b}\leq\frac{44}{3},
\frac{50}{3}<\frac{5n}{c}<\frac{5n}{b}\leq\frac{55}{3}<19,
\frac{70}{3}<\frac{7n}{c}<\frac{7n}{b}\leq\frac{77}{3}<26.\end{eqnarray*}
Since $\gcd(18,n)=\gcd(24,n)=1$, we infer that
$\frac{50}{3}<\frac{5n}{c}\leq17<\frac{5n}{b}<18$, $24<\frac{7n}{c}<25<\frac{7n}{b}<\frac{77}{3}<26$. Because $5\times7\times17<1000$, at least one of $14,17,25$ is co-prime to $n$, and hence $\hbox{\rm ind}(S)=1$ by Lemma 3.4.
{\it Subcase 3.6.} $r=5$. By Lemma 5.1, we infer that $n<1000$ with $u=\frac{11}{3}$ and $v=4$, contradiction.
\end{proof}
\begin{lemma}
If $k_1=3$, then $\hbox{\rm ind}(S)=1$.
\end{lemma}
\begin{proof}
We distinguish five cases.
{\bf Case 1.} $s=3$. Then $\frac{n}{b}<8$, and we have $4+r<\frac{2n}{c}<\frac{2n}{b}\leq5+r$ for some $r=0,1,2,\cdots,11$.
{\it Subcase 1.1.} $r\geq1$. We infer that $n<160$ with $u=\frac{5}{2}$ and $v=8$, contradiction.
{\it Subcase 1.2.} $r=0$. We have $8<\frac{4n}{c}<9<\frac{4n}{b}\leq10$, and $\gcd(9,n)=1$. Let $k=4$ and $m=9$; then $ma=9\times(c-b+1)\leq9\times(\frac{n-1}{2}-\frac{2n}{5}+1)=\frac{9n+45}{10}<n$, so $\hbox{\rm ind}(S)=1$ by Lemma 3.3(1).
{\bf Case 2.} $s=4$. Then $\frac{n}{b}<6$, and we have $4+r<\frac{2n}{c}<\frac{2n}{b}\leq5+r$ for some $r=0,1,2,\cdots,7$.
{\it Subcase 2.1.} $r=0$. We have $8<\frac{4n}{c}<9<\frac{4n}{b}\leq10$, and $\gcd(n,9)=1$, $\hbox{\rm ind}(S)=1$.
{\it Subcase 2.2.} $r=1$. We have $\frac{15}{2}<\frac{3n}{c}<8<\frac{3n}{b}<9$, and $\gcd(n,8)=1$, $\hbox{\rm ind}(S)=1$.
{\it Subcase 2.3.} $r\geq2$. Then $3<\frac{n}{c}<\frac{n}{b}<6$, we infer that $n<180$, contradiction.
{\bf Case 3.} $s=5$. Then $\frac{n}{b}<6$, and we have $4+r<\frac{2n}{c}<\frac{2n}{b}\leq5+r$ for some $r=0,1,2,\cdots,7$.
{\it Subcase 3.1.} $r=0$. We have $8<\frac{4n}{c}<9<\frac{4n}{b}\leq10$, and $\gcd(n,9)=1$, $\hbox{\rm ind}(S)=1$.
{\it Subcase 3.2.} $r=1$. We have $\frac{15}{2}<\frac{3n}{c}<8<\frac{3n}{b}\leq9$, and $\gcd(n,8)=1$, $\hbox{\rm ind}(S)=1$.
{\it Subcase 3.3.} $r=2$. We have $9<\frac{3n}{c}<10<\frac{3n}{b}\leq\frac{21}{2}$, $12<\frac{4n}{c}\leq13<\frac{4n}{b}\leq14$, $15<\frac{5n}{c}<\frac{5n}{b}\leq\frac{35}{2}$. Since $\gcd(n,16)=1$, we infer that $16<\frac{5n}{c}\leq17<\frac{5n}{b}\leq\frac{35}{2}$ and $\frac{9n}{2b}<\frac{63}{4}<16<\frac{5n}{b}$, $\hbox{\rm ind}(S)=1$.
{\it Subcase 3.4.} $r\geq3$. Then $\frac{7}{2}<\frac{n}{c}<\frac{n}{b}<6$, we infer that $n<294$, contradiction.
{\bf Case 4.} $s=6,7,8$. Then $\frac{n}{b}<4$, and we have $4+r<\frac{2n}{c}<\frac{2n}{b}\leq5+r$ for some $r=0,1,2,3$.
{\it Subcase 4.1.} $r=0$. We have $8<\frac{4n}{c}<9<\frac{4n}{b}\leq10$, and $\gcd(n,9)=1$, $\hbox{\rm ind}(S)=1$.
{\it Subcase 4.2.} $r=1$. We have $\frac{15}{2}<\frac{3n}{c}<8<\frac{3n}{b}\leq9$, and $\gcd(n,8)=1$, $\hbox{\rm ind}(S)=1$.
{\it Subcase 4.3.} $r=2$. The same as {\it Subcase 3.3}.
{\it Subcase 4.4.} $r=3$. We have $\frac{35}{2}<\frac{5n}{c}<\frac{5n}{b}\leq20$. Since $\gcd(n,18)=1$, we infer that $18<\frac{5n}{c}<19<\frac{5n}{b}<20$. Then $\frac{9n}{2b}<18<\frac{5n}{b}$, which contradicts $(B)$ with $t=1,2,3$ for $s=6,7,8$, respectively.
{\bf Case 5.} $s=9$. Then $\frac{n}{b}<4$, and we have $4+r<\frac{2n}{c}<\frac{2n}{b}\leq5+r$ for some $r=0,1,2,3$.
For $r=0,1,2$, the proof is the same as in Case 4. So let $r=3$; then $7<\frac{2n}{c}<\frac{2n}{b}<8$.
{\it Subcase 5.1.} $21<\frac{6n}{c}\leq22<23<\frac{6n}{b}<24$, then $\frac{49}{2}<\frac{7n}{c}<25<26<\frac{7n}{b}<28$, at least one of $22,23,25,26$ is co-prime to $n$, hence $\hbox{\rm ind}(S)=1$.
{\it Subcase 5.2.} $21<\frac{6n}{c}\leq22<\frac{6n}{b}<23$, $\frac{49}{2}<\frac{7n}{c}<25<26<\frac{7n}{b}<\frac{161}{6}$. We infer that at least one of $22,25,26$ is co-prime to $n$ and $\hbox{\rm ind}(S)=1$; otherwise, $n=11\times5\times13<1000$, a contradiction.
{\it Subcase 5.3.} $21<\frac{6n}{c}\leq22<\frac{6n}{b}<23$, $\frac{49}{2}<\frac{7n}{c}<25<\frac{7n}{b}<26$. Then $28<\frac{8n}{c}<29<\frac{8n}{b}<\frac{208}{7}$, $\frac{63}{2}<\frac{9n}{c}<32<\frac{9n}{b}<\frac{234}{7}$. We infer that $n=11\times5\times29$; otherwise, at least one of $22,25,29$ is co-prime to $n$ and $\hbox{\rm ind}(S)=1$. So $29\geq\frac{n}{a}=\frac{n}{b}\times\frac{b}{a}>2\times9=18>11$. Since $\frac{29}{2}<18$, we have $a=11\times5$. Then
$b=j\times 29\times5 (j<6)$ or $b=j\times 29\times11 (j<3)$ and $9\leq\frac{b}{a}<10$. However,
$\frac{3\times 29\times5}{11\times5}=\frac{87}{11}<9$, $\frac{1\times 29\times11}{11\times5}=\frac{29}{5}<9$, so we have $b\in\{580,725,638\}$. If $b=638$, then $c=a+b-1=692$, which contradicts $\gcd(n,c)=5\times29=145$.
If $b\in\{580,725\}$, then $c\in\{634,779\}$, which contradicts $\gcd(n,c)=29\times11=319$.
\end{proof}
\begin{lemma}
If $k_1=2$, $4<\frac{2n}{c}\leq5<\frac{2n}{b}<6$ and $a\leq \frac{b}{2}$, then $\hbox{\rm ind}(S)=1$.
\end{lemma}
\begin{proof}
Note that $m_1=5$ and $b\geq2a\geq70$. Since $\gcd(n,m_1)>1$ we have $5|n$. By the definition of $k_1$, we conclude that $[\frac{k_2n}{c},\frac{k_2n}{b})$ contains at least one integer for each $k_2\geq k_1=2$. Note that $6<\frac{3n}{c}<\frac{3n}{b}<9$. We distinguish three cases.
{\bf Case 1.} $7<\frac{3n}{c}<8<\frac{3n}{b}<9$. Then $\frac{n}{3}<b<\frac{3n}{8}\leq c<\frac{3n}{7}$.
Since $\gcd(n,8)=1$, let $m=8$ and $k=3\ (<70\leq b)$. Then $ma=m(c-b+1)\leq8\times(\frac{3n-1}{7}-\frac{n+1}{3}+1)<n$, and we are done.
{\bf Case 2.} $6<\frac{3n}{c}\leq7<\frac{3n}{b}<8$. Then $\frac{3n}{8}<b<\frac{2n}{5}<\frac{3n}{7}\leq c<\frac{n}{2}$.
If $\gcd(n,7)=1$, then let $m=7$ and $k=3$. Since $\frac{3n}{8}<b<c<\frac{n}{2}$, $ma=m(c-b+1)\leq7\times(\frac{n-1}{2}-\frac{3n+1}{8}+1)<n$, and we are done.
Next assume that $7|n$. Note that $8<\frac{4n}{c}\leq10<\frac{4n}{b}<12$.
If $9\not\in[\frac{4n}{c},\frac{4n}{b})$, then $\frac{4n}{c}\geq9$. Let $m=12$ and $k=5$. Since $\frac{5n}{c}\leq 7\times\frac{5}{3}<12<10\times\frac{5}{4}<\frac{5}{4}\times\frac{4n}{b}=\frac{5n}{b}$ and $\frac{3n}{8}<b<c<\frac{4n}{9}$, we have
$ma=m(c-b+1)\leq12\times(\frac{4n-1}{9}-\frac{3n+1}{8}+1)<n$, and we are done.
If $9\in[\frac{4n}{c},\frac{4n}{b})$, then $\frac{4n}{c}\leq9$ and thus $\frac{3n}{8}<b<\frac{2n}{5}<\frac{4n}{9}\leq c<\frac{n}{2}$. So
\begin{eqnarray*} 8n+\frac{n}{2}<\frac{69n}{8}<23b<\frac{46n}{5}<9n+\frac{n}{2}<10n<\frac{92n}{9}<23c<\frac{23n}{2}=11n+\frac{n}{2}.\end{eqnarray*}
Note that $a=c-b+1\leq\frac{n+3}{8}$. If $a>\frac{n}{8}$, let $M=12$. We obtain that $|Ma|_n>\frac{n}{2}$ and $|Mb|_n>\frac{n}{2}$, and we are done. If $a<\frac{n}{9}$, let $m=9$ and $k=4$, we have $ma<n$, and we are done. Then $\frac{n}{9}<a<\frac{n}{8}$, and thus
\begin{eqnarray*} 2n+\frac{n}{2}<\frac{23n}{9}<23a<\frac{23n}{8}<3n.\end{eqnarray*}
If $23c\leq11n$, then $\frac{n}{9}<a=c-b+1\leq\frac{19n+57}{184}$, which implies that $n<40$, contradiction. So we must have $23c>11n$. Similarly, we can show that $23b<9n$. Moreover, we have $\gcd(n,23)=1$, otherwise $n=5\times7\times23=805<1000$, contradiction. Then $|23|_n+|23c|_n+|23(n-b)|_n+|23(n-a)|_n=n$ and we are done.
{\bf Case 3.} $6<\frac{3n}{c}\leq7<8<\frac{3n}{b}<9$. Then $\frac{n}{3}<b<\frac{3n}{8}<\frac{3n}{7}\leq c<\frac{n}{2}$.
Note that $a=c-b+1\leq\frac{n+1}{6}$. If $a>\frac{n}{6}$, then $n<6a\leq n+1$, which implies $6a=n+1$; but then $1=\gcd(n,n+1)=\gcd(n,6a)=\gcd(n,a)>1$, a contradiction. Hence $a<\frac{n}{6}$.
{\it Subcase 3.1.} $11|n$. Then $\gcd(n,7)=1$, $\gcd(n,13)=1$ and $\gcd(n,17)=1$. Otherwise, $n\leq 5\times11\times17=935<1000$, contradiction.
We may assume that $a>\frac{n}{7}$; otherwise, we can let $m=7$ and $k=3$, and then $ma<n$, so the lemma follows from Lemma 3.3(1). Then $\frac{3n}{2}<\frac{13n}{7}<13a<\frac{13n}{6}<\frac{5n}{2}<4n<\frac{13n}{3}<13b<\frac{39n}{8}<5n<\frac{11n}{2}<\frac{39n}{7}<13c<\frac{13n}{2}$.
If $13c<6n$, then $\frac{n}{7}<a=c-b+1\leq\frac{5n+23}{39}$, so $n<41$, contradiction. Hence we must have that $13c>6n$, and then $|13c|_n<\frac{n}{2}$. If $13a<2n$ or $13b>\frac{9n}{2}$, then $|13a|_n>\frac{n}{2}$ or $|13b|_n>\frac{n}{2}$. Since $\gcd(n,13)=1$, the lemma follows from Lemma 3.3(2) with $M=13$.
Next assume that $13a>2n$ and $13b<\frac{9n}{2}$. Then $\frac{2n}{13}<a<b<\frac{9n}{26}$. Therefore,
\begin{eqnarray*} \frac{5n}{2}<\frac{34n}{13}<17a<\frac{17n}{6}<3n<\frac{11n}{2}<\frac{17n}{3}<17b<\frac{153n}{26}<6n.\end{eqnarray*}
We infer that $|17a|_n>\frac{n}{2}$ and $|17b|_n>\frac{n}{2}$. Since $\gcd(n,17)=1$, the lemma follows from Lemma 3.3(2) with $M=17$.
{\it Subcase 3.2.} $7|n$. Then $\gcd(n,11)=1$ and $\gcd(n,13)=1$.
As in {\it Subcase 3.1}, we may assume that $a>\frac{n}{8}$, and by a similar argument we can complete the proof with $M=11$ or $M=13$.
{\it Subcase 3.3.} $\gcd(n,7)=\gcd(n,11)=1$. See the proof of Subcase 3.1 of Lemma 3.10 in \cite{LP}.
\end{proof}
\begin{lemma}
If $k_1=2$, then $\hbox{\rm ind}(S)=1$.
\end{lemma}
\begin{proof}
Since $k_1=2$, we have $\lceil\frac{n}{c}\rceil=\lceil\frac{n}{b}\rceil$ and $2+r<\frac{n}{c}<\frac{n}{b}\leq3+r$ for some $r=0,1,2,3,4,5$. If $r=2$, then $8<\frac{2n}{c}<9<\frac{2n}{b}<10$ and $\gcd(n,9)=1$, a contradiction. By Lemma 4.14, we only need to prove it for $r\not=0,2$. In particular, when $s\geq6$ we have $\frac{n}{b}<4$, so we only need to consider $r=1$. We distinguish six cases.
{\bf Case 1.} $s=2$.
{\it Subcase 1.1.} $r=1$. Then $\frac{3n}{2b}<6<\frac{2n}{b}$, we have $6\in[\frac{3n}{2b},\frac{2n}{b}]$ and $\gcd(n,6)=1$, contradiction.
{\it Subcase 1.2.} $r\geq3$. By Lemma 5.1, we infer that $n<60$ with $u=5,v=8$, contradiction.
{\bf Case 2.} $s=3$.
{\it Subcase 2.1.} $r=1$. We infer that $6<\frac{2n}{c}\leq7<\frac{2n}{b}<8$ and $7|n$. Then
$9<\frac{3n}{c}<10<\frac{3n}{b}\leq11$, or $10<\frac{3n}{c}\leq11<\frac{3n}{b}<12$. Otherwise, $n=5\times7\times11<1000$, contradiction.
Then the proof is very similar to that in \cite{LP}.
{\it Subcase 2.2.} $r\geq3$. By Lemma 5.1, we infer that $n<320$ with $u=5,v=8$, contradiction.
{\bf Case 3.} $s=4$. Then $\frac{n}{b}<6$, and $r\leq3$.
{\it Subcase 3.1.} $r=1$. We infer that $6<\frac{2n}{c}\leq7<\frac{2n}{b}<8$ and $7|n$. Then
$9<\frac{3n}{c}<10<\frac{3n}{b}\leq11$, or $10<\frac{3n}{c}\leq11<\frac{3n}{b}<12$. Otherwise, $n=5\times7\times11<1000$, contradiction.
If $9<\frac{3n}{c}<10<\frac{3n}{b}\leq11$, then $\gcd(n,13)=1$, otherwise $n=5\times7\times13<1000$. Hence we infer that $13<\frac{4n}{c}\leq14<\frac{4n}{b}\leq\frac{44}{3}$, and $\frac{4n}{b}>14>13>\frac{77}{6}>\frac{7n}{2b}$, contradiction.
If $10<\frac{3n}{c}\leq11<\frac{3n}{b}<12$, then $\gcd(n,5)=1$, otherwise $n=5\times7\times11<1000$. Hence we infer that $\frac{3n}{b}>11>10>\frac{5n}{2b}$, contradiction.
{\it Subcase 3.2.} $r=3$. Then $10<\frac{2n}{c}\leq11<\frac{2n}{b}<12$, and $\frac{3n}{b}>\frac{33}{2}>16>15>\frac{5n}{2b}$. Then $\gcd(n,16)=1$ and $16\in[\frac{5n}{2b},\frac{3n}{b}]$, contradiction.
{\bf Case 4.} $s=5$. Then $\frac{n}{b}<6$, and $r\leq3$.
{\it Subcase 4.1.} $r=1$. We have $6<\frac{2n}{c}\leq7<\frac{2n}{b}<8$, and $7|n$.
If $9<\frac{3n}{c}<10<\frac{3n}{b}\leq11$, the proof is similar to {\it Subcase 3.1.}
If $10<\frac{3n}{c}\leq11<\frac{3n}{b}<12$, then $\gcd(n,5)=1$, otherwise $n=5\times7\times11<1000$. We infer that $\frac{40}{3}<\frac{4n}{c}\leq14<\frac{4n}{b}<15$ and $n=7\times11\times 17$. Moreover, $\frac{n}{a}=\frac{n}{b}\times\frac{b}{a}>10$ implies that $a=7\times11$ or $a=7\times17$.
If $a=7\times11$, then $b=j\times11\times17$ or $b=j\times7\times17$ for some $j$. By $s=5$, we have $\frac{b}{a}\in[5,6)$, and we can't find such $j$.
If $a=7\times17$, then $b=j\times11\times17$ or $b=j\times11\times7$ for some $j$. We infer that $b=9\times11\times7=693$ and $c=a+b-1=811$, which contradicts $\gcd(n,c)=11\times17$.
{\it Subcase 4.2.} $r=3$. Then $10<\frac{2n}{c}\leq11<\frac{2n}{b}<12$. Since $\gcd(n,16)=1$, we infer that $16<\frac{3n}{c}\leq17<\frac{3n}{b}<18$ and we can assume that $11\times17|n$. Then $\frac{4n}{b}>\frac{17\times4}{3}>22>21>\frac{7n}{2b}$ and $n=7\times11\times17$. As in {\it Subcase 4.1}, this is impossible.
{\bf Case 5.} $s=6$.
$r=1$. We have $6<\frac{2n}{c}\leq7<\frac{2n}{b}<8$, and $7|n$.
If $9<\frac{3n}{c}<10<\frac{3n}{b}\leq11$, the proof is similar to {\it Subcase 3.1.}
If $10<\frac{3n}{c}\leq11<\frac{3n}{b}<12$, then similarly to {\it Subcase 3.1}, $n=7\times11\times 17$. Moreover, $\frac{n}{a}=\frac{n}{b}\times\frac{b}{a}>12$ implies that $a=7\times11$. Then $b=j\times11\times17$ or $b=j\times7\times17$ for some $j$. By $s=6$, $\frac{b}{a}\in[6,7)$, so we infer that $b=4\times7\times17$, and $c=a+b-1=552$, which contradicts $\gcd(n,c)=11\times17$.
{\bf Case 6.} $s=7,8,9$.
$r=1$. We infer that $6<\frac{2n}{c}\leq7<\frac{2n}{b}<8$ and $7|n$. Then
$9<\frac{3n}{c}<10<\frac{3n}{b}\leq11$, or $10<\frac{3n}{c}\leq11<\frac{3n}{b}<12$. Otherwise, $n=5\times7\times11<1000$, contradiction.
{\it Subcase 6.1.} $9<\frac{3n}{c}<10<\frac{3n}{b}\leq11$. Then $\gcd(n,13)=1$, otherwise $n=5\times7\times13<1000$. Hence we infer that $13<\frac{4n}{c}\leq14<\frac{4n}{b}\leq\frac{44}{3}$, and $\frac{7n}{b}>24>\frac{13n}{2b}$, contradiction.
{\it Subcase 6.2.} $10<\frac{3n}{c}\leq11<\frac{3n}{b}<12$.
We have $\frac{50}{3}<\frac{5n}{c}<\frac{5n}{b}<20$. If $\frac{5n}{c}<17<\frac{5n}{b}$, then $n=7\times11\times17$. Moreover, $\frac{n}{a}=\frac{n}{b}\times\frac{b}{a}>14$ and $\frac{19}{2}<14$ imply that $a=7\times11$. Then $b=j\times11\times17\ (j<4)$ or $b=j\times7\times17\ (j<6)$ for some $j$. We cannot find a suitable $j$ for $s=8,9$. When $s=7$, we have
$b=3\times11\times17$ or $b=5\times7\times17$. If $b=3\times11\times17$, then $c=a+b-1=637=7\times91$, which contradicts $\gcd(n,c)=7\times17$. If $b=5\times7\times17$, then $c=a+b-1=671=11\times61$, which contradicts $\gcd(n,c)=11\times17$.
If $\frac{5n}{c}<18<\frac{5n}{b}$, then $\hbox{\rm ind}(S)=1$.
If $\frac{5n}{c}\leq19<\frac{5n}{b}$, then $n=7\times11\times19$.
Moreover, $\frac{n}{a}=\frac{n}{b}\times\frac{b}{a}>14$ and $\frac{19}{2}<14$ imply that $a=7\times11$. Then $b=j\times11\times19\ (j<4)$ or $b=j\times7\times19\ (j<6)$ for some $j$. We cannot find a suitable $j$ for $s=7,9$. When $s=8$, we have
$b=3\times11\times19$ or $b=5\times7\times19$. If $b=3\times11\times19$, then $c=a+b-1=703=19\times37$, which contradicts $\gcd(n,c)=7\times19$. If $b=5\times7\times19$, then $c=a+b-1=741=19\times39$, which contradicts $\gcd(n,c)=11\times19$.
\end{proof}
{\noindent\bf Acknowledgements}
The author is thankful to the referees for valuable suggestions and to Prof. Yuanlin Li and Prof. Jiangtao Peng for their useful discussions and valuable comments.
\vskip30pt
\label{intro}
It is usually assumed that $c/{\bar c} \to D$
fragmentation is responsible for the production of charmed mesons.
In leading order, $g g \to c \bar c$ is the dominant partonic subprocess;
the contribution of $q {\bar q} \to c {\bar c}$ is usually much smaller.
Leading-order production of charm is by far insufficient to
describe the experimental distributions of $D$ mesons in rapidity and
transverse momentum; an NLO calculation is needed to describe the
experimental data. An alternative is the $k_t$-factorization approach,
which gives a reasonable description of $D$ meson single-particle
distributions \cite{Maciula:2013wg}. It allows one to describe even some
correlation observables \cite{hms2014}. Usually the Peterson fragmentation
functions \cite{Peterson:1982ak} are used for the $c/{\bar c} \to D$
fragmentation.
Recently the LHCb collaboration observed intriguing asymmetries
for $D^+ D^-$ \cite{LHCb:2012fb} and $D_s^+ D_s^-$
\cite{Aaij:2018afd} production.
The question arises as to the origin of such asymmetries.
In general, there can be a few reasons, such as electroweak corrections
or higher-order pQCD effects. The electroweak corrections should
be important rather at large transverse momenta, whereas
the LHCb collaboration measured the asymmetries at rather small
transverse momenta, where the statistics is sufficient to pin down the small
asymmetry effect.
In Fig.~\ref{fig:dsig_dxf_partons} we show for illustration the distributions
of partons obtained in the LO collinear approach.
The distributions of light quarks, and even of light antiquarks, are
much larger than the distribution of $c/{\bar c}$ quarks/antiquarks
produced in the gluon-gluon fusion process.
Moreover, the distribution of light quarks is much larger than the
distribution of the corresponding antiquarks.
All this suggests that a nonzero subleading fragmentation
$d \to D^-$ and ${\bar d} \to D^+$ would produce an asymmetry
when added to the dominant $c/{\bar c} \to D$ fragmentation.
For the $D_s$ meson production asymmetry the situation is more
subtle as far as subleading fragmentation is concerned.
Here we have the ${\bar s} \to D_s^+$ and $s \to D_s^-$ subleading
fragmentations. If $s(x) = {\bar s}(x)$ then, of course,
the asymmetry vanishes. There are, however, no deep reasons to assume
$s(x) = {\bar s}(x)$. Actually, the nonperturbative
effects of the strange meson cloud lead to $s(x) \ne {\bar s}(x)$
(see e.g.~\cite{Holtmann}).
Also some fits of parton distributions allow for different distributions
of $s$ and $\bar s$ partons \cite{Lai:2007dq}.
\begin{figure}[!h]
\begin{minipage}{0.45\textwidth}
\centerline{\includegraphics[width=0.9\textwidth]{dsig_dxF_7TeV.eps}}
\end{minipage}
\hspace{0.5cm}
\begin{minipage}{0.45\textwidth}
\centerline{\includegraphics[width=0.9\textwidth]{dsig_dxF_43TeV.eps}}
\end{minipage}
\caption{
\small Quark and antiquark distributions in Feynman $x_F$ for
$\sqrt{s} =$ 7 TeV (left panel) and $\sqrt{s} =$ 43 TeV (right panel)
corresponding to $E_{\mathrm{lab}}(p)$ = 10$^{9}$ GeV.
This calculation was performed within collinear-factorization
approach with somewhat arbitrary regularization parameter
$p_{T}^{0} =$ 0.5 GeV \cite{MS2018}.
}
\label{fig:dsig_dxf_partons}
\end{figure}
\section{Cross sections, production asymmetry
and subleading fragmentations}
Let us first discuss the contribution dominant at the LHC, namely
gluon-gluon fusion. The multi-differential cross section for $c \bar c$
production can then be calculated as:
\begin{eqnarray}\label{LO_kt-factorization}
\frac{d \sigma(p p \to c \bar c \, X)}{d y_1 d y_2 d^2p_{1,t} d^2p_{2,t}} &=&
\int \frac{d^2 k_{1,t}}{\pi} \frac{d^2 k_{2,t}}{\pi}
\frac{1}{16 \pi^2 (x_1 x_2 s)^2} \; \overline{ | {\cal M}^{\mathrm{off-shell}}_{g^* g^* \to c \bar c} |^2}
\\
&& \times \; \delta^{2} \left( \vec{k}_{1,t} + \vec{k}_{2,t}
- \vec{p}_{1,t} - \vec{p}_{2,t} \right) \;
{\cal F}_g(x_1,k_{1,t}^2) \; {\cal F}_g(x_2,k_{2,t}^2) \; \nonumber ,
\end{eqnarray}
where ${\cal F}_g(x_1,k_{1,t}^2)$ and ${\cal F}_g(x_2,k_{2,t}^2)$
are the gluon uPDFs for both colliding hadrons and
${\cal M}^{\mathrm{off-shell}}_{g^* g^* \to c \bar c}$ is the off-shell
matrix element for the hard subprocess.
First, the distributions in rapidity and transverse momentum of
$c$ or ${\bar c}$ are obtained (inclusive cross section).
The cross section for $D$ mesons can then be obtained as a convolution
of the partonic cross section for $g^* g^* \to c {\bar c}$ and
the $c / {\bar c} \to D$ fragmentation
functions. We use the Peterson fragmentation function \cite{Peterson:1982ak}
with the $\epsilon$ parameter adjusted to experimental data.
In the studies presented here we also include $u,\bar u, d, \bar d \to D^i$
parton fragmentation to $D$ mesons.
We include only fragmentations of quarks/antiquarks that
are constituents of the $D$ meson.
We assume the following symmetry relation:
\begin{equation}
D_{d \to D^-}(z) = D_{\bar d \to D^+}(z) = D^{(0)}(z) \; .
\label{ff_symmetries}
\end{equation}
Similar flavor symmetry relations hold for fragmentation
of $u$ and $\bar u$ to $D^0$ and $\bar D^0$ mesons.\\
However, $D_{q \to D^0}(z) \ne D_{q \to D^+}(z)$, which is caused
by the contributions from decays of vector $D^*$ mesons.
Furthermore we assume for doubly suppressed fragmentations:
\begin{equation}
D_{\bar u \to D^{\pm}}(z) = D_{u \to D^{\pm}}(z) = 0 \; .
\label{neglected_ff}
\end{equation}
The fragmentation functions at sufficiently large scales obey the
DGLAP evolution equations. Since in the analysis presented here
we are interested in small transverse momenta (small scales for
the DGLAP evolution), we can simply use the initial conditions for
the evolution, which are rather poorly known for the subleading
fragmentation.
We parametrize the unfavoured fragmentation functions as:
\begin{equation}
D_{q_f \to D}(z) = A_{\alpha} (1-z)^{\alpha} \; .
\label{ff_simple_parametrization}
\end{equation}
Instead of fixing the unknown $A_{\alpha}$ we will rather operate with
the fragmentation probabilities:
\begin{equation}
P_{q_f \to D} = \int_0^1 dz \; A_{\alpha} \left( 1 - z \right)^{\alpha} = \frac{A_{\alpha}}{\alpha+1} \; ,
\label{Dff_simple_parametrization}
\end{equation}
and calculate the corresponding $A_{\alpha}$ for fixed
$P_{q_f \to D}$ and $\alpha$.
Therefore in our effective approach we have only two free parameters.
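Since $\int_0^1(1-z)^\alpha\,dz=1/(\alpha+1)$, the normalization in Eq.~(\ref{Dff_simple_parametrization}) is simply $A_\alpha=(\alpha+1)\,P_{q_f\to D}$. A minimal numerical sketch of this relation (the values $P_{q_f\to D}=0.005$ and $\alpha=1$ below are placeholders for illustration, not fitted numbers):

```python
import numpy as np

def A_alpha(P, alpha):
    # P = A_alpha * int_0^1 (1-z)^alpha dz = A_alpha / (alpha + 1)
    return P * (alpha + 1.0)

def D_unfav(z, P, alpha):
    # Unfavoured fragmentation function D_{q_f -> D}(z) = A_alpha (1 - z)^alpha
    return A_alpha(P, alpha) * (1.0 - z) ** alpha

# Cross-check: numerical integration recovers the fragmentation probability.
P, alpha = 0.005, 1.0          # illustrative values only
z = np.linspace(0.0, 1.0, 100001)
print(np.trapz(D_unfav(z, P, alpha), z))   # ~ 0.005
```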
Another simple option we considered in \cite{MS2018} is:
\begin{equation}
D_{q_f \to D}(z) = P_{q_f \to D} \cdot D_{\mathrm{Peterson}}(1-z) \; .
\label{Peterson}
\end{equation}
Then again $P_{q_f \to D}$ would be the only free parameter.
The flavour asymmetry in production of $D$ mesons is defined as:
\begin{equation}
A_{D^+/D^-}(\xi)
= \frac{ \frac{d \sigma_{D^-}}{d \xi}(\xi) - \frac{d \sigma_{D^+}}{d \xi}(\xi) }
{ \frac{d \sigma_{D^-}}{d \xi}(\xi) + \frac{d \sigma_{D^+}}{d \xi}(\xi) }
\; ,
\label{asymmetry_DpDm}
\end{equation}
where $\xi = x_F, y, p_T, (y,p_T)$.
For $D_s$ mesons we define the production asymmetry as:
\begin{equation}
A_{D_s^+/D_s^-}(\xi) =
\frac{ \frac{d\sigma(D_s^+)}{d\xi}(\xi) - \frac{d\sigma(D_s^-)}{d\xi}(\xi) }
{ \frac{d\sigma(D_s^+)}{d\xi}(\xi) + \frac{d\sigma(D_s^-)}{d\xi}(\xi) }
\; .
\label{asymmetry_DspDsm}
\end{equation}
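Numerically, both definitions reduce to the same arithmetic on binned differential cross sections. A minimal sketch (the input numbers are invented for illustration only):

```python
def asymmetry(dsig_plus, dsig_minus):
    # A = (dsigma+ - dsigma-) / (dsigma+ + dsigma-), as in the D_s definition above;
    # for the D^+/D^- definition the sign convention is reversed.
    return (dsig_plus - dsig_minus) / (dsig_plus + dsig_minus)

# Illustrative (made-up) values of dsigma/dy in a single rapidity bin:
print(asymmetry(101.0, 99.0))  # 0.01, i.e. a 1% production asymmetry
```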
The production of $D_s$ mesons is interesting in the context of the fact
that $D_s$ mesons are the main source of $\tau$-neutrinos:
\begin{eqnarray}
&&D_s^+ \to \tau^+ + \nu_{\tau} \; , \\
&&D_s^- \to \tau^- + \overline{\nu}_{\tau} \; ,
\end{eqnarray}
and in addition:
\begin{eqnarray}
&&\tau^+ \to {\bar \nu}_{\tau}+X \; , \\
&&\tau^- \to \nu_{\tau}+X \; .
\end{eqnarray}
Both emissions should be included in the final evaluation of the $\tau$-(anti)neutrino flux.
Finally, in this presentation we consider the production of $\Lambda_c$ baryons.
Whether independent parton fragmentation works for $\Lambda_c$ baryons
was discussed in \cite{MS_Lambdac}.
In such an approach the cross section can be written as:
\begin{equation}
\frac{d \sigma(pp \rightarrow h X)}{d y_h d^2 p_{t,h}} \approx
\int_0^1 \frac{dz}{z^2} D_{c \to h}(z)
\frac{d \sigma(pp \rightarrow c X)}{d y_c d^2 p_{t,c}}
\Bigg\vert_{y_c = y_h \atop p_{t,c} = p_{t,h}/z} \;,
\label{Q_to_h}
\end{equation}
where $p_{t,c} = \frac{p_{t,h}}{z}$ and $z$ is the fraction of the
longitudinal momentum of the charm quark $c$ carried by the hadron
$h = D, \Lambda_c$.
A typical approximation in this formalism assumes $y_h = y_c$.
\section{Results}
In this section we will show our results for (anti)neutrino
production, cross sections for $D^+ D^-$ production, the
$D^+ D^-$ and $D_s^+ D_s^-$ asymmetries together with their possible
consequences for $\tau$ (anti)neutrino production, and finally results
for $\Lambda_c$ baryon production.
\subsection{Neutrino production in the atmosphere}
We start by showing our best (optimal) result for the neutrino flux relevant
for the IceCube experiment. In Fig.\ref{fig:neutrino_flux_vs_PROSA}
we show our predictions obtained for calculating cross section
in the $k_t$-factorization approach with the KMR unintegrated
gluon distributions. Such an approach effectively includes higher-order
corrections as was discussed in the literature.
Our result coincides well with the PROSA
results within their uncertainty band.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.3]{comparison_prosa.eps}
\caption{Comparison of our predictions for the prompt neutrino flux
and the PROSA results.}
\label{fig:neutrino_flux_vs_PROSA}
\end{center}
\end{figure}
The flux here was calculated within the $Z$-moment method \cite{GMPS2017}.
In such a calculation $\frac{d \sigma}{d x_F}(x_F,\sqrt{s})$ for
production of $D$ mesons is a crucial input.
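In the $Z$-moment method, for a power-law primary flux $\phi(E) \propto E^{-\gamma}$ the $Z$-moment reduces to a spectrum-weighted $x_F$ integral of $\frac{d \sigma}{d x_F}$, which makes the sensitivity to large $x_F$ explicit. A schematic implementation follows; the forward spectrum and the spectral index are illustrative assumptions, not the inputs used in \cite{GMPS2017}:

```python
import numpy as np

def z_moment(dsig_dxf, sigma_tot, gamma=2.7, n=4000):
    """Z ~ int dx_F x_F^(gamma-1) (1/sigma_tot) dsigma/dx_F
    for a power-law primary flux proportional to E^(-gamma)."""
    xf = np.linspace(1e-4, 1.0, n)
    return np.trapz(xf ** (gamma - 1.0) * dsig_dxf(xf) / sigma_tot, xf)

toy = lambda xf: (1.0 - xf) ** 4  # illustrative forward D-meson spectrum
```

Because of the $x_F^{\gamma-1}$ weight, the moment is dominated by large $x_F$, precisely the region not covered by the LHC detectors, and a steeper primary spectrum suppresses the moment further.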
Which energies of proton-proton scattering are responsible for
the production of high-energy neutrinos at IceCube?
In Fig.\ref{fig:energycut} we show how the upper cut on center-of-mass
energy influences the flux of high-energy neutrinos in the atmosphere.
For energies $E_{\nu} > 10^{8}$ GeV, collision energies larger than
those reached at the LHC enter the calculation, so the predictions are
based on an extrapolation into an as-yet unexplored region.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.3]{energycuts2.eps}
\caption{Impact of different cuts
on the maximal center-of-mass $pp$ collision energy
for the prompt neutrino flux.}
\label{fig:energycut}
\end{center}
\end{figure}
The typical Feynman-$x_F$ values responsible for the production
of high-energy neutrinos are illustrated in Fig.\ref{fig:xf}.
Rather large values are important. Unfortunately, this region is not
covered by the LHC detectors; even the (often so-called) forward LHCb
detector is limited to $x_F <$ 0.1.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.3]{xfcuts2.eps}
\caption{The effect of $x_F$ cuts on the prompt neutrino flux.}
\label{fig:xf}
\end{center}
\end{figure}
In Fig.\ref{fig:IceCube-data} we show our predictions for the flux
of high-energy neutrinos. This result was obtained within
the $k_t$-factorization approach. Clearly such a calculation cannot describe
the measured flux of neutrinos; no subleading fragmentations were
included here. There are arguments that at least part
of the missing yield is of astrophysical origin \cite{IceCube_Science}.
Can subleading fragmentation play a role in this context?
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.3]{flux_IceCube_mmht.eps}
\caption{Comparison of predictions obtained with the CT14 and MMHT PDFs
for the prompt neutrino flux.
The data points are taken from IceCube analysis \cite{IceCube_fluxlimit}.
For comparison, a fit for the astrophysical contribution, proposed
in \cite{IceCube_fluxlimit} is presented as well.}
\label{fig:IceCube-data}
\end{center}
\end{figure}
\subsection{LHCb asymmetries}
The $D^+ D^-$ asymmetries obtained by us are shown in
Fig.\ref{fig:LHCb_asymmetry_charged} for $\sqrt{s}$ = 7 TeV.
Only one parameter, the quark/antiquark fragmentation probability,
was adjusted to the LHCb data.
In Ref.\cite{MS2018} we presented also our predictions for
$\sqrt{s}$ = 13 TeV.
\begin{figure}[!h]
\begin{minipage}{0.42\textwidth}
\centerline{\includegraphics[width=1.0\textwidth]{asymm_eta_DpDm_7TeV_PETv2.eps}}
\end{minipage}
\hspace{0.5cm}
\begin{minipage}{0.42\textwidth}
\centerline{\includegraphics[width=1.0\textwidth]{asymm_pt_DpDm_7TeV_PETv2.eps}}
\end{minipage}
\caption{
\small $A_{D^+/D^-}$ production asymmetry measured by the LHCb
collaboration at $\sqrt{s}= 7$ TeV as
a function of $D$ meson pseudorapidity (left panel)
and $D$ meson transverse momentum (right panel).
}
\label{fig:LHCb_asymmetry_charged}
\end{figure}
A similar asymmetry for $D_s^+ D_s^-$ production is shown in
Fig.\ref{fig:Ds_asymmetry}.
Here the error bars are even larger than for the $D^+ D^-$ asymmetry
(see the previous figure).
Again, adjusting only one free parameter, we can roughly reproduce
the main trend of the LHCb data.
Please note that our approach predicts the correct sign
of the asymmetry. In Ref.\cite{GMS2018} we also showed results
for $\sqrt{s}$ = 8 TeV.
\begin{figure}[!h]
\begin{minipage}{0.3\textwidth}
\centerline{\includegraphics[width=1.0\textwidth]{asymm_pt_Ds_7TeV_rap1_FF.eps}}
\end{minipage}
\hspace{0.2cm}
\begin{minipage}{0.3\textwidth}
\centerline{\includegraphics[width=1.0\textwidth]{asymm_pt_Ds_7TeV_rap2_FF.eps}}
\end{minipage}
\hspace{0.2cm}
\begin{minipage}{0.3\textwidth}
\centerline{\includegraphics[width=1.0\textwidth]{asymm_pt_Ds_7TeV_rap3_FF.eps}}
\end{minipage}
\caption{$D_s^+ / D_s^-$ asymmetry obtained by us, compared with
the LHCb data, for $\sqrt{s}$ = 7 TeV.
The CTEQ6.5 parton distributions are used in this calculation.
}
\label{fig:Ds_asymmetry}
\end{figure}
\subsection{Asymmetries at low collision energies}
Our approach makes distinct predictions at low collision energies,
where quite large asymmetries are found.
As discussed in Ref.\cite{MS2018} detailed studies of the asymmetries
at low energies are necessary to pin down or limit subleading
fragmentation.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.5\textwidth]{asymm_y_DpDm_Energy.eps}
\caption{
\small $A_{D^{+}D^{-}}(y)$ production asymmetry in proton-proton
collisions for different center-of-mass energies $\sqrt{s}$.
}
\end{center}
\label{fig:assym_y_energy}
\end{figure}
\subsection{Charge-to-neutral $D$ meson ratio}
In Ref.\cite{MS2018} we also discussed the following ratio:
\begin{equation}
R_{c/n} \equiv \frac{D^+ + D^-}{D^0 + {\bar D}^0} \; .
\label{R_cton}
\end{equation}
In Fig.\ref{fig:R_cton} we show the ratio as a function of meson
rapidity for two different energies specified in the figure.
Evidently, when including subleading fragmentation, the ratio
depends on collision energy and rapidity. A test of such predictions
would be valuable.
\begin{figure}[!h]
\begin{minipage}{0.42\textwidth}
\centerline{\includegraphics[width=1.0\textwidth]{R_eta_7_13TeV_pTo15.eps}}
\end{minipage}
\begin{minipage}{0.42\textwidth}
\centerline{\includegraphics[width=1.0\textwidth]{Ratio_cn_100GeV.eps}}
\end{minipage}
\caption{
\small The $R_{c/n}$ ratio as a function of meson pseudorapidity for
$\sqrt{s}= 7$ and $13$ TeV for the LHCb kinematics (left panel) and
as a function of meson rapidity for $\sqrt{s}$ = 100 GeV
in the full phase-space (right panel).
Only quark-gluon components (diagrams) are included here in calculating
the cross section for $q$ and $\bar q$ production.
}
\label{fig:R_cton}
\end{figure}
\subsection{$\nu_{\tau}$ neutrinos and ${\bar \nu}_{\tau}$
antineutrinos at IceCube}
In our recent analysis we showed how the flux of $\tau$
neutrinos/antineutrinos could be modified by the subleading
$s/{\bar s} \to D_s$ fragmentation.
In Fig.\ref{fig:flux_tau_neutrinos} we show the conventional flux
(due to $g g \to c \bar c$ fusion) and that of the subleading
fragmentation (left panel) as well as the corresponding ratio (right panel).
A sizeable enhancement of the neutrino flux is not excluded at the moment.
\begin{figure}
\begin{center}
\includegraphics[width=5cm]{tauneutrino.eps}
\includegraphics[width=5cm]{ratiotauneutrino.eps}
\caption{
\small Our predictions for the flux of $\tau$ neutrinos (left panel)
and the suggested enhancement factor with respect to the traditional
$c {\bar c} \to D_s$ component (right panel).
}
\label{fig:flux_tau_neutrinos}
\end{center}
\end{figure}
\subsection{$\Lambda_c$ production}
In Fig.\ref{fig:dsig_dpt_Dmesons} we show our description of $D$ meson
transverse momenta. In this calculation $y_D = y_c$ was assumed.
This is a standard technical prescription for $c / {\bar c} \to D$
meson production in $pp$ collisions.
\begin{figure}[!h]
\begin{minipage}{0.47\textwidth}
\centerline{\includegraphics[width=0.9\textwidth]{dsig_dpt_ALICE_Dmeson_chkd.eps}}
\end{minipage}
\begin{minipage}{0.47\textwidth}
\centerline{\includegraphics[width=0.9\textwidth]{dsig_dpt_LHCb_Dmeson_chkd.eps}}
\end{minipage}
\caption{
\small Transverse momentum distribution of $D$ mesons for $\sqrt{s}$ = 7
TeV for ALICE (left panel) and LHCb (right panel).
}
\label{fig:dsig_dpt_Dmesons}
\end{figure}
In Fig.\ref{fig:dsig_dpt_LambdaC} we show similar results for
$\Lambda_c$ production. We have shown our results for different
$c / {\bar c} \to \Lambda_c$ transition probabilities.
Values of the transition probability smaller than 10 \% were
obtained from $e^+ e^-$ collisions. The new LHC data require
much larger transition probabilities. This is especially true for
the ALICE (midrapidity) data, where a value close to 20 \% is needed.
Does it signal a new mechanism?
\begin{figure}[!htbp]
\begin{minipage}{0.47\textwidth}
\centerline{\includegraphics[width=0.9\textwidth]{dsig_dpt_ALICE_LamC_Ffrac.eps}}
\end{minipage}
\begin{minipage}{0.47\textwidth}
\centerline{\includegraphics[width=0.9\textwidth]{dsig_dpt_LHCb_LamC_Ffrac.eps}}
\end{minipage}
\caption{
\small Transverse momentum distribution of $\Lambda_c$ baryon
for $\sqrt{s}$ = 7 TeV for ALICE (left panel) and LHCb (right panel).
}
\label{fig:dsig_dpt_LambdaC}
\end{figure}
In Fig.\ref{fig:ratio_pt_different_epsilons} we show the ratio of
the cross section for $\Lambda_c$ to that for $D^0$.
This once more exposes a problem of the independent-parton fragmentation
picture, especially at midrapidities.
\begin{figure}[!htbp]
\begin{minipage}{0.47\textwidth}
\centerline{\includegraphics[width=0.9\textwidth]{Ratio_pt_ALICE.eps}}
\end{minipage}
\begin{minipage}{0.47\textwidth}
\centerline{\includegraphics[width=0.9\textwidth]{Ratio_pt_LHCb.eps}}
\end{minipage}
\caption{
\small Transverse momentum dependence of the
$\Lambda_c/D^0$ baryon-to-meson ratio for ALICE (left) and LHCb (right)
for different choices of the $\varepsilon_{c}^{\Lambda}$
parameter for $c \to \Lambda_c$ transition in the Peterson
fragmentation function.
}
\label{fig:ratio_pt_different_epsilons}
\end{figure}
In Ref.\cite{MS_Lambdac} we studied other options, such as fragmentation
with the assumption $\eta_{\Lambda_c} = \eta_c$ (equal pseudorapidities),
as well as possible feed-down from highly excited charmed baryons.
Some small improvements, especially for the ratio, are possible, but
the main disagreement with the independent-parton fragmentation picture remains.
Perhaps this could be explained in terms of a recombination model;
this requires further studies and modeling of such processes.
\section{Conclusions}
In one of our recent papers we demonstrated that the production of
high-energy neutrinos is related to very high $pp$ collision energies
(even larger than at the LHC) and rather large $x_F$
(not accessible at the LHC).
Do we know the mechanisms of $D$ meson production in these regions?
Here we have presented and briefly discussed some results on the asymmetry
in the production of $D^+$ and $D^-$
\cite{MS2018} as well as $D_s^+$ and $D_s^-$ mesons \cite{GMS2018},
observed recently by the LHCb collaboration \cite{LHCb:2012fb,Aaij:2018afd}.
We have discussed a scenario in which subleading
(unfavored) fragmentation $q/{\bar q} \to D^{\pm}$ is responsible
for the asymmetry.
In the case of the $D^+ D^-$ asymmetry it is the quark-antiquark asymmetry
in the nucleon that is responsible for the effect. Adjusting the
corresponding quark/antiquark fragmentation probability, we were able
to describe the corresponding asymmetry.
This has dramatic consequences at low collision energies:
we predicted huge asymmetries for RHIC and even larger ones for lower energies.
We hope this will be verified in the future by planned or feasible
experiments. The consequences of the subleading fragmentation
for high-energy neutrino production have not yet been checked.
The asymmetry in the production of $D_s^+$ and $D_s^-$ mesons
is a bit more subtle. Here we have the ${\bar s} \to D_s^+$ and $s \to D_s^-$
subleading fragmentations. An asymmetry of $D_s^+$ and $D_s^-$
production is possible provided that $s(x) \ne {\bar s}(x)$.
Recently we used one of the CTEQ parton distributions from
a fit which allows such an $s - \bar s$ asymmetry in longitudinal
momentum fraction. Our approach then gives the correct sign of the asymmetry,
and it was possible to find a corresponding transition probability
that roughly describes the LHCb data.
This procedure was used to calculate the flux of $\tau$ neutrinos
produced in the atmosphere; a significant enhancement was suggested.
There are first attempts to identify
$\tau$ neutrinos with the help of the IceCube apparatus \cite{Aartsen:2015dlt}.
Finally, we discussed the production of $\Lambda_c$ baryons within the
independent-parton fragmentation picture.
It was demonstrated that such a picture is insufficient to consistently
describe the new LHC data. Especially at midrapidities (the ALICE experiment)
one observes a significant enhancement compared to the results
obtained with the fragmentation probabilities $c / {\bar c} \to \Lambda_c$
extracted from $e^+ e^-$ collisions, as well as from lower-energy
proton-proton collisions. This strongly suggests a new mechanism;
quark recombination is a good candidate.
{\bf Acknowledgments}
This study was partially supported by the Polish National Science Center
grant DEC-2014/15/B/ST2/02528 and by the Center for Innovation and
Transfer of Natural Sciences and Engineering Knowledge in Rzesz{\'o}w.
\section{Introduction}
Vortices have been a source of fascination
since the works of Empedocles, Aristotle and Descartes, who tried
to explain the formation of the Earth, its gravity and the
dynamics of the solar system as due to primordial cosmic vortices.
Many interesting problems related to vortices are open in different
fields such as fluid mechanics, superconductivity, superfluidity,
light propagation, Bose-Einstein condensation (BEC), cosmology,
biosciences, or solid state physics \cite{Lug95,Pis99,Sols,experimental1,experimental2,experimental3}.
In wave mechanics a vortex is a screw phase
dislocation, or defect \cite{nye74}, where the amplitude of the
field vanishes. The phase around the singularity has an integer
number of windings, $\ell$, which plays the role of an angular
momentum. For symmetric systems, this
number is a conserved quantity and governs the interactions
between vortices as if they were endowed with electrostatic
charges. Thus, $\ell$ is usually called the \emph{topological
charge} of the defect.
Angular momentum is conserved in a quantum system
with O(2) rotational symmetry. If we consider a
state with well-defined angular momentum $ \ell \in \mathbb{Z}$, i.e.,
an eigenfunction of the angular momentum operator at a given time $t_0$, its
evolution will preserve the value of $\ell$. In a system possessing a discrete
point-symmetry (described by the $C_n$
and $C_{nv}$ groups) the angular momentum is no longer
conserved. However, in this case one can define another
quantity $m \in \mathbb{Z}$, the Bloch or angular pseudo-momentum,
which is conserved under time evolution \cite{9}. The angular
pseudo-momentum $m$ plays then the role of $\ell$ in a system with
discrete rotational symmetry. From the group theoretical point
of view, the angular momenta and pseudo-momenta, $\ell$ and $m$,
are also the indices of the 2D irreducible representations of
O(2) and $C_n$, respectively \cite{10,11,12}. Unlike $\ell$, the values of
$m$ are limited by the order of the point-symmetry group $C_n$.
The existence of vortices is one of the signatures of superfluidity and this is why in the field of Bose-Einstein
condensation, they have attracted so much interest. They can be generated in rotating traps \cite{experimental2,experimental3} or by phase-imprinting methods \cite{Kett1,Kett2}. The latter procedure allows one to generate only multiply charged vortices with topological charges $m=2,4$.
In the last years there has been an enormous interest in the applications of group theory to the study of the
properties of defects in media with discrete symmetries, including photonic crystals, periodic potentials
with discrete symmetries, etc.\ \cite{9,10,11,12,SoliTop,Oster}.
In this paper we explore the application of group theory to
control the topological charge of vortices in Bose-Einstein condensates by using external potentials
with discrete rotational symmetry. To do so we propose a simple setup based on a non-periodic potential with discrete rotational symmetry, which will allow us to perform many operations on the vortex charges depending on the initial charge and the order of the potential symmetry. While vortex transmutation has been previously explored in potentials with broken symmetries \cite{Ripi} and in the context of photonic lattices \cite{13}, the other operations proposed here have not been studied before.
Our proposal is simpler to implement than the 2D lattice type potentials proposed in the framework
of photonic lattices \cite{SoliTop} and is easier to reconfigure.
We will also show how, starting from multiply charged vortices such as those which can be generated in atom chips by phase-imprinting methods \cite{Kett1,Kett2}, one can generate different types of vortices by choosing an appropriate control potential.
Our plan is as follows: in Sec. \ref{III} we present the theory on which our methodology is based, and then we present several examples. First we describe our specific setup in Sec. \ref{Pra} and then discuss several phenomena which can be achieved, such as topological charge erasing (Sec. \ref{erase}) and two examples of topological charge inversion, from $m=2$ to $m=-1$ (Sec. \ref{single}) and from $m=4$ to $m=-1$ (Sec. \ref{minusingle}).
\section{Theory}
\label{III}
In this paper we will consider a Bose-Einstein condensate with tight confinement along a specific direction ($z$), leading to a quasi-two-dimensional Bose-Einstein condensate.
Under the effect of an additional external potential $V(\mathbf{x})$ this system
is ruled in the mean field limit by an effective Gross-Pitaevskii equation \cite{PG98,Ripi}
\begin{equation}\label{NLS}
i \partial_t \psi(\mathbf{x}) = -\frac{1}{2} \Delta_{\mathbf{x}} \psi(\mathbf{x}) + V(\mathbf{x}) \psi(\mathbf{x}) + g |\psi(\mathbf{x})|^2 \psi(\mathbf{x}),
\end{equation}
in dimensionless units and where $g$ is a measure of the effective nonlinearity.
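For reference, Eq.(\ref{NLS}) can be integrated with the standard split-step Fourier (Strang) scheme; each factor is unitary, so the norm is conserved to machine precision. The following is a minimal sketch, assuming a periodic grid; the grid size, harmonic trap, and singly charged vortex seed are illustrative choices, not the simulation parameters used later in the paper:

```python
import numpy as np

def gpe_step(psi, V, g, dt, k2):
    """One Strang split step of i dpsi/dt = -(1/2) Lap psi + V psi + g |psi|^2 psi."""
    psi = np.exp(-0.5j * dt * (V + g * np.abs(psi) ** 2)) * psi       # half potential
    psi = np.fft.ifft2(np.exp(-0.5j * dt * k2) * np.fft.fft2(psi))    # full kinetic
    psi = np.exp(-0.5j * dt * (V + g * np.abs(psi) ** 2)) * psi       # half potential
    return psi

n, L = 64, 16.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX ** 2 + KY ** 2
V = 0.5 * (X ** 2 + Y ** 2)                      # harmonic trap (illustrative)
psi = (X + 1j * Y) * np.exp(-(X ** 2 + Y ** 2) / 2)  # singly charged vortex seed
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * (L / n) ** 2)
for _ in range(200):
    psi = gpe_step(psi, V, g=1.0, dt=1e-3, k2=K2)
```

The seed $\psi \propto (x+iy)e^{-r^2/2}$ carries a single phase winding, so evolutions of the type discussed below can be monitored by tracking the phase around the zero of the density.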
The symmetry of the potential induces strict restrictions on the vorticity of stationary solutions of Eq.(\ref{NLS}). In order to illustrate this statement, let us consider the equation for stationary solutions, which is a nonlinear eigenvalue equation of the
following type:
\begin{equation}\label{SNLS}
H \left(\psi(\mathbf{x}),\mathbf{x} \right) \psi(\mathbf{x})=\mu \psi(\mathbf{x}),
\end{equation}
where the nonlinear Hamiltonian operator depends on the field itself, i.e., $H\left(\psi(\mathbf{x}),\mathbf{x} \right) \equiv -\frac{1}{2} \Delta_{\mathbf{x}} + V(\mathbf{x}) + g |\psi(\mathbf{x})|^2$. The nonlinear solution $\psi$ can be considered as a self-consistent solution. It appears as an eigenstate of the Hamiltonian but, at the same time, it defines the Hamiltonian operator itself since $H$ depends on it through the nonlinear term $g |\psi|^2$.
One can establish necessary conditions for the existence of stationary symmetric solutions of Eq.(\ref{SNLS}) based on the symmetry properties of the potential $V(\mathbf{x})$. They will exhibit special features due to the nonlinear nature of the Hamiltonian operator $H(\psi)$. We can summarize the properties of these solutions in a single statement: (i) if the potential $V$ is invariant under a point symmetry group that we refer to generically as ${\cal G}$ ---i.e., ${\cal G}$ describes finite two-dimensional $2 \pi/N$ rotations around an axis ($C_N$ group) and specular reflections ($C_{Nv}$ group)--- and (ii) if we search for symmetric solutions fulfilling the condition $|\psi(G\mathbf{x})|^2=|\psi(\mathbf{x})|^2$ (where $G$ is any element of the group ${\cal G}$), then the solution $\psi$
must belong to some representation $D^m_\nu({\cal G})$ of the symmetry group ${\cal G}$ or to some of their subgroups ${\cal G'}\subset {\cal G}$. Since this statement is rather mathematical, it is convenient to analyze it in the light of the symmetry properties of the nonlinear Hamiltonian $H(\psi)$. First of all, let us recall that if $\psi$ is a given stationary solution $\psi=\psi_{\textrm{sol}}$ satisfying Eq.(\ref{SNLS}) then $\psi_{\textrm{sol}}$ plays two different roles in this nonlinear eigenvalue equation. On the one hand, $\psi_{\textrm{sol}}$ defines the Hamiltonian operator $H(\psi_{\textrm{sol}})$ whereas, on the other hand, it appears as an eigenfunction of the same operator. This two-fold role has profound implications on the allowed functional form of the solution. The first consequence of the explicit, and specific, dependence of the nonlinear Hamiltonian on $\psi_{\textrm{sol}}$ is that $H(\psi_{\textrm{sol}})$ inherits the symmetry of the potential. This is a consequence of assumptions (i) and (ii). Since the Laplacian operator is invariant under any type of rotation $\Delta_{G\mathbf{x}}=\Delta_{\mathbf{x}}$ and, on the other hand, $V(G\mathbf{x})=V(\mathbf{x})$ and $|\psi_{\textrm{sol}}(G\mathbf{x})|^2=|\psi_{\textrm{sol}}(\mathbf{x})|^2$ because of the previous assumptions, the nonlinear Hamiltonian evaluated at $\psi{_\textrm{sol}}$ is automatically invariant under the symmetry group $\cal{G}$:
$H\left( \psi(G \mathbf{x}), G \mathbf{x} \right) =-\frac{1}{2}\Delta_{G\mathbf{x}}+V(G\mathbf{x}) + g |\psi(G \mathbf{x})|^2=-\frac{1}{2} \Delta_{\mathbf{x}} + V(\mathbf{x}) + g |\psi(\mathbf{x})|^2=H\left(\psi(\mathbf{x}),\mathbf{x} \right)$. The fact that the Hamiltonian operator evaluated at $\psi_{\textrm{sol}}$ is invariant under the group $\cal{G}$ implies that $H(\psi_{\textrm{sol}})$ commutes with all elements of this group, i.e., $[H(\psi_{\textrm{sol}}),G]=0$ $\forall G \in {\cal G}$ and, therefore, according to standard quantum mechanics arguments, all its eigenfunctions must belong to the different representations of the symmetry group ${\cal G}$. At this point, we have considered $\psi_{\textrm{sol}}$ only in its role as generator of the nonlinear Hamiltonian operator. However, in its second role $\psi_{\textrm{sol}}$ must also appear as an eigenfunction of $H(\psi_{\textrm{sol}})$. Therefore, $\psi_{\textrm{sol}}$ must belong to some of the representations of $H(\psi_{\textrm{sol}})$, i.e., $\psi_{\textrm{sol}}\in D^m_\nu({\cal G})$, where $D^m_\nu$ indicates the representation characterized by the representation index $m$ and the degeneracy index $\nu$. The degeneracy of the representations of the point symmetry groups $C_N$ and $C_{Nv}$ is either one or two \cite{11}. One-dimensional representations automatically fulfill the symmetry condition for the amplitude $|\psi(G\mathbf{x})|^2=|\psi(\mathbf{x})|^2$ assumed in (ii) since they do not transform, except for a sign, under the action of a finite rotation: $G \psi(\mathbf{x})=\psi(G\mathbf{x})=\pm \psi(\mathbf{x})$. The requirement of amplitude invariance is, however, trickier for two-dimensional representations. Let us see next why.
A natural basis for a two-dimensional representation of a point symmetry group is that formed by the eigenvectors of the group operator in such representation, i.e., that given by the two functions $(\psi_m, \psi^\ast_m)$ fulfilling $G \psi_m(\mathbf{x})=\psi_m(G\mathbf{x})=\epsilon^m \psi_m(\mathbf{x})$ and its complex conjugate, $m$ being the representation index and the eigenvalue being given by $\epsilon=e^{i 2 \pi/N}$. Since a group rotation operator acting on this representation is represented by a $2 \times 2$ matrix, these two vectors provide the basis in which this matrix is diagonal. However, the most general form of a function belonging to the $m$-representation is a linear combination of $\psi_m$ and $\psi^\ast_m$. This general solution has a problem with respect to the requirement of invariance of the amplitude assumed in (ii). Indeed, in the most general case in which we consider arbitrary coefficients, this linear combination does not satisfy the previous condition and thus such a function would be excluded as a solution of the problem enjoying symmetry under the full group ${\cal G}$ (however, it can fulfill the condition for a subgroup ${\cal G'} \subset {\cal G}$, see Ref.\cite{10}). In this two-dimensional subspace the only two functions fulfilling the condition are the eigenvectors of the $G$ operator, $\psi_m$ and $\psi^\ast_m$. This is so because the eigenvalue $\epsilon^m$ is a pure phase and therefore has unit modulus. Consequently, only $\psi_m$ and $\psi^\ast_m$ can be nonlinear solutions of Eq.(\ref{SNLS}). This fact is in remarkable contrast to the linear case, in which all linear combinations belonging to the representation appear as solutions of the eigenvalue equation. Notice that the complex solutions $\psi_m$ and $\psi^\ast_m$ represent a vortex-antivortex soliton pair of topological charge $m$ and $-m$, respectively.
In order to interpret this result on a more physical basis it is convenient to re-write the action of a discrete rotation of order $N$ on these functions using polar coordinates:
\begin{equation}
G \psi_m(r,\theta)=\psi_m(r,\theta+2\pi/N)=e^{i 2 \pi m/N}\psi_m(r,\theta)
\label{transformation_property}
\end{equation}
and its complex conjugate. As was recognized in Ref.\cite{9}, this condition is identical to that fulfilled by one-dimensional Bloch modes, but with the standard non-compact spatial coordinate replaced by the compact angular one, $\theta$. For this reason the most general form of such solutions is that corresponding to angular Bloch modes:
\begin{equation}\label{bloch_modes}
\psi_m(r,\theta)=e^{i m \theta}f_m(r,\theta)
\end{equation}
where $f_m$ is an invariant function under $2 \pi/N$ rotations and $m$ plays the role of ``angular'' pseudo-momentum satisfying the following constraints: $|m|<N/2$ (if $N$ is even) and $|m|\le (N-1)/2$ (if $N$ is odd). Since $m$ is also a representation index, these constraints can be explained using group theory arguments \cite{12}. However, it is physically more intuitive to interpret them by means of the properties of the angular pseudo-momentum in an equivalent Bloch problem in terms of the angular variable $\theta$. The symmetry condition of the potential under $N$th-order rotations ---$V(r,\theta+2 \pi/N)=V(r,\theta)$--- is understood as a periodicity condition in the angular variable $\theta$ with period given by $a=2 \pi/N$. As we have seen, this periodicity property is inherited by the full Hamiltonian $H(\psi)$ when $\psi$ fulfills the symmetry condition $|\psi(r,\theta+2 \pi/N)|^2=|\psi(r,\theta)|^2$ (assumption (ii)). Since the full Hamiltonian is periodic, it is natural that its solutions transform as in Eq.(\ref{transformation_property}) and present the angular Bloch form (\ref{bloch_modes}). The role of $m$ is therefore that of the conjugate variable of the periodic variable $\theta$, hence its name of ``angular'' pseudo-momentum. The angular nature of $\theta$ forces $m$ to be an integer in order to preserve the single-valuedness of the wavefunction $\psi$. Energy is a periodic function of pseudo-momentum with periodicity given by the so-called reciprocal lattice vector $2\pi/a=N$, which determines the existence of equivalent zones in pseudo-momentum space called Brillouin zones. In this framework the constraints on the values of $m$ arise from the fact that all possible solutions can be reduced to those existing in the first Brillouin zone in pseudo-momentum space, i.e., those fulfilling $|m|\le \pi/a=N/2$, which is equivalent to the aforementioned constraints.
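The defining property of the angular Bloch form (\ref{bloch_modes}) is easy to verify numerically: for an envelope invariant under $2\pi/N$ rotations, the mode $e^{i m \theta} f_m$ picks up exactly the phase $e^{i 2 \pi m/N}$ under a discrete rotation. A small self-contained check, where the envelope is an arbitrary illustrative choice:

```python
import numpy as np

N, m = 5, 2
n_pts = N * 360                              # grid commensurate with 2*pi/N
theta = np.linspace(0.0, 2.0 * np.pi, n_pts, endpoint=False)
f = 1.0 + 0.3 * np.cos(N * theta)            # invariant under theta -> theta + 2*pi/N
psi = np.exp(1j * m * theta) * f             # angular Bloch mode

rotated = np.roll(psi, -n_pts // N)          # samples psi(theta + 2*pi/N)
assert np.allclose(rotated, np.exp(2j * np.pi * m / N) * psi)
```

The same check fails for a generic linear combination of $\psi_m$ and $\psi^\ast_m$, in line with the argument above that only the eigenvectors themselves have an invariant amplitude.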
As an example, Fig.\ref{Bloch_structure} shows the angular Bloch structure of energy eigenstates of two systems possessing discrete symmetry of $4$th and $5$th order. Notice that the period of the Brillouin zone ---i.e., the reciprocal lattice vector--- is equal to the order of symmetry ($N=4,5$) and that the allowed angular pseudo-momenta fulfill the condition $|m| \le 2$ corresponding to the first Brillouin zone. In this scenario it is rather intuitive to understand what happens to a solution carrying angular momentum $l$ evolving in a medium characterized by full rotational invariance ---given by the $O(2)$ group--- when we suddenly switch on a potential that breaks full rotational symmetry into a discrete symmetry of $N$th order ---given by the $C_N$ group. In terms of the angular variable $\theta$ the solution with angular momentum $l$ behaves like a sort of ``angular'' plane wave $e^{i l \theta}f_l(r)$ since the amplitude $f_l(r)$ is angle-independent. The angular momentum $l$ plays the role of ordinary (discretized) momentum. The problem is thus equivalent to that of a plane wave propagating in a constant potential ---in $\theta$, since $U(r)=V(r)+g f^2_l(r)$ is angle independent--- that, at some specific moment $t_0$, feels the presence of a periodic potential $U(r,\theta+a)=U(r,\theta)$. At $t_0$ the wavefunction corresponding to the initial angular plane wave must excite the spectrum of the nonlinear operator $H(\psi(t_0))$. However, the full potential $U(r,\theta;t_0)=V(r,\theta)+g f^2_l(r)$ is no longer $O(2)$-invariant but $C_N$-invariant, due to the breaking of continuous symmetry by the appearance of the discrete-symmetry potential $V(r,\theta)$ at $t_0$; its eigenstates are angular Bloch modes of the form (\ref{bloch_modes}). This fact implies that there must be a matching between the input angular momentum and the angular pseudo-momenta of the allowed angular Bloch states after the $C_N$ interaction is switched on.
In other words, the initial angular momentum $l$ carried by the angular plane wave $e^{i l \theta}f_l(r)$ must match the angular pseudo-momentum $m$ of some angular Bloch state. If $|l|\le N/2$ the initial angular momentum can always match an angular pseudo-momentum in the first Brillouin zone and, consequently, $l=m$. However, if $|l| > N/2$ we excite angular Bloch states in higher-order Brillouin zones which, on the other hand, we know are equivalent to those in the first Brillouin zone. This equivalence is given by periodicity in pseudo-momentum space which is provided by the reciprocal lattice vector $2\pi/a=N$ in such a way two pseudo-momenta are equivalent if they differ from each other by a multiple of this vector. This establishes the desired matching condition for angular pseudo-momentum in terms of the initial angular momentum $l$:
\begin{equation}
l-m=kN,\;k \in \mathbb{Z}.
\label{matching_condition}
\end{equation}
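Operationally, Eq.(\ref{matching_condition}) is just a reduction of the input angular momentum to the first Brillouin zone. A small helper that reproduces the cases discussed in this paper (for even $N$, the boundary value $m = N/2$ is kept here; it labels a one-dimensional representation, cf. the last paragraph of this section):

```python
def output_pseudomomentum(l, N):
    """Map the input angular momentum l to the equivalent angular
    pseudo-momentum m in the first Brillouin zone: l - m = k*N,
    with m chosen in (-N/2, N/2]."""
    m = l % N            # Python's % always returns a value in [0, N)
    if m > N / 2:
        m -= N
    return m
```

For instance, `output_pseudomomentum(3, 5)` gives $m=-2$, the case shown in Fig.\ref{Bloch_structure}(b), while $l=2$ with $N=4$ stays at $m=2$; the charge inversions $m=2 \to -1$ ($N=3$) and $m=4 \to -1$ ($N=5$) follow the same rule.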
\begin{figure}
\epsfig{file=figure1.eps,width=\columnwidth}
\caption{[Color online] Angular Bloch structure of solutions for a system with rotational symmetry of order: (a) $N=4$ and (b) $N=5$. Red and green circles symbolize solutions that belong either to one dimensional or two dimensional irreducible representations. Red arrows represent input angular momentum while blue arrows represent output angular pseudo-momentum. Some examples of matching conditions for angular pseudo-momentum are indicated.\label{Bloch_structure}}
\end{figure}
This matching condition can be easily visualized in Fig.\ref{Bloch_structure} for the $N=4$ and $N=5$ cases. When $N=4$ and $l \le 2$ we can always find Bloch states in the first Brillouin zone with the same value of the angular pseudo-momentum $m=l$. This is the case of the input angular momentum $l=2$ represented in Fig.\ref{Bloch_structure}(a). For $N=5$ the situation is similar, i.e., we excite angular Bloch states with $m=l$ as long as $l\le 2$ (see Fig.\ref{Bloch_structure}(b)). However, when this condition is not fulfilled ($l>2$) we excite an equivalent Bloch state with different angular pseudo-momentum in the first Brillouin zone.
Let us choose, for example, the input angular momentum to be $l=3$ in the $N=5$ case. We immediately see that we are exciting an equivalent Bloch state with different angular pseudo-momentum in the first Brillouin zone given by $m=-2$, in agreement with the matching condition
(\ref{matching_condition}).
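The reduction of an input angular momentum $l$ to its equivalent angular pseudo-momentum $m$ in the first Brillouin zone can be sketched in a few lines. The function name is our own, but the rule is exactly the matching condition (\ref{matching_condition}):

```python
def first_bz_pseudomomentum(l, N):
    # Reduce the input angular momentum l to the equivalent angular
    # pseudo-momentum m in the first Brillouin zone of a medium with
    # N-th order discrete rotational symmetry, i.e. m = l - k*N with
    # |m| <= N/2 (the matching condition above).
    m = l % N            # fold into 0 <= m < N
    if m > N / 2:
        m -= N           # fold into (-N/2, N/2]
    return m
```

For the examples of Fig.\ref{Bloch_structure}: $l=2$, $N=4$ gives $m=2$, while $l=3$, $N=5$ gives $m=-2$.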
The previous considerations refer to angular pseudo-momentum and are thus related to the way the wavefunction transforms under discrete rotations, i.e., in the manner described by
Eq.(\ref{transformation_property}). Using arguments analogous to those in Ref.\cite{9}, it can be proven that the angular pseudo-momentum is conserved during time evolution. This means that if we start at the initial time $t_0$ with a solution fulfilling Eq.(\ref{transformation_property}), the solution will preserve this property during evolution provided the symmetry of the potential is not altered again. In other words, if the initial angular pseudo-momentum is $m$ at $t_0$, it will remain the same at any later time $t>t_0$. This property has been numerically checked in optical vortex transmutation phenomena \cite{13}. When, besides having a solution with angular pseudo-momentum $m$, we are dealing with a function characterized by a single phase singularity, the angular pseudo-momentum $m$ becomes identical to the topological charge \cite{12,13}.
It is remarkable that the irreducible representations of $C_{nv}$ are either one or two dimensional. This statement is based on group-theory arguments and can also be deduced directly by inspection of the Bloch functional form (\ref{bloch_modes}). In fact, if $N$ is even, the representation is one dimensional for $m=0$ or $\mid m\mid =\frac{N}{2}$, and two dimensional for $\mid m\mid =1,\dots,\frac{N}{2}-1$; if $N$ is odd, the representation is one dimensional for $m=0$ and two dimensional for $\mid m\mid =1,\dots,\frac{N-1}{2}$. Since one dimensional representations do not transform under rotations except for a sign, they are necessarily real. Two dimensional representations, on the other hand, present a nontrivial phase structure. Thereby, only solutions belonging to two dimensional irreducible representations can be considered vortex solutions. Notice that, as stated previously, the basis for these two dimensional irreducible representations is the complex pair $\psi_m$ and $\psi^\ast_m$ of vortex-antivortex soliton solutions.
According to these statements and to the matching condition (\ref{matching_condition}), there are different kinds of vorticity transformations between $O(2)$ and $C_{nv}$ media. For $l$ in the first Brillouin zone no transformation is produced. For $l$ outside the first Brillouin zone there are two kinds of transformation: i) if the value of $m$ corresponding to the input $l$ according to the matching condition (\ref{matching_condition}) is such that the solution belongs to an irreducible representation of dimension one, the phenomenon is called {\it charge erasing}; and ii) if $m$ is such that the related representation is two dimensional, the phenomenon is called {\it vortex transmutation}. Moreover, there are two kinds of vortex transmutation: i) charge downconversion, if $m$ has the same sign as $l$, and ii) charge inversion, if $m$ has the opposite sign to $l$. For example, Fig.\ref{Bloch_structure}(a) presents a case of charge erasing, while Fig.\ref{Bloch_structure}(b) represents a charge inversion.
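The taxonomy above can be condensed into a small self-contained classifier. The function name and output labels are our own; following the charge-erasing example below, the boundary value $\mid m\mid =N/2$ for even $N$ is treated as a one dimensional representation:

```python
def classify_transformation(l, N):
    # Map an input angular momentum l to (m, phenomenon) in a medium
    # with N-th order discrete rotational symmetry.
    m = l % N                      # fold l into the first Brillouin zone
    if m > N / 2:
        m -= N
    # one dimensional (real) representations: m = 0, or |m| = N/2 for even N
    if (m == 0 and l != 0) or (N % 2 == 0 and abs(m) == N // 2):
        return m, "charge erasing"
    if m == l:
        return m, "no transformation"
    return m, ("charge inversion" if m * l < 0 else "charge downconversion")
```

This reproduces the cases studied below: $(l,N)=(2,4)$ yields erasing, $(2,3)$ and $(4,5)$ yield inversion to $m=-1$.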
This remarkable fact can be used to exploit the angular pseudo-momentum matching condition (\ref{matching_condition}) as a way to control the topological charge of matter wave vortices, as will be demonstrated extensively in the next sections.
\section{Applications}
\label{App}
\subsection{Practical configuration}
\label{Pra}
In order to work with the minimal configuration showing the phenomena to be described here we have made the simplifications of
taking the linear limit $g=0$ and choosing a simple potential $V(x,y)$ obtained as the superposition of $N$ Gaussian functions of the form
\begin{equation}\label{pot}
V(x,y) = V_0 \sum_{j=0}^{N-1} \exp\left\{ -\left[(x-x_j)^2+(y-y_j)^2\right]/(2 w^2)\right\},
\end{equation}
with $(x_j,y_j) = d\left(\cos(2\pi j/N),\sin(2\pi j/N)\right)$, where $w$ is the width of each Gaussian well.
This type of potential can be obtained physically by using a set of laser beams generating a set of optical dipole traps for the Bose-Einstein condensate.
We have checked that our results are essentially independent of the nonlinearity and that they remain valid for the more complicated case of a periodic potential of the required symmetry. However, the choice of the potential as given by Eq. (\ref{pot}) is not only simpler but allows for more freedom in the selection of the symmetry since we can take e.g. $N=5$ to obtain a discrete symmetry of fifth order, something which is not possible with lattice potentials.
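As an illustration, the trap potential (\ref{pot}) can be assembled numerically as follows. This is a sketch; the default width $w=1$ is an assumed value (the text fixes $d$ and $V_0$ but not $w$), and the minus sign in the exponent makes each term a localized well for $V_0<0$:

```python
import numpy as np

def trap_potential(X, Y, N, d=4.0, V0=-0.6, w=1.0):
    # N Gaussian wells of depth V0 and width w placed on a ring of
    # radius d, realizing the N-th order discrete rotational symmetry
    # of the potential V(x,y) in the text.
    V = np.zeros_like(X, dtype=float)
    for j in range(N):
        xj = d * np.cos(2 * np.pi * j / N)
        yj = d * np.sin(2 * np.pi * j / N)
        V += V0 * np.exp(-((X - xj) ** 2 + (Y - yj) ** 2) / (2 * w ** 2))
    return V
```

By construction, rotating the evaluation point by $2\pi/N$ leaves the potential invariant.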
\begin{figure}
\epsfig{file=figure2.eps,width=\columnwidth}
\caption{[Color online] Erasing of a vortex of topological charge $m=2$ due to the effect of the discrete symmetry of the potential \eqref{pot} with $N=4, d=4, V_0 = -0.6$. Shown are pseudocolor plots of the amplitude $|\psi(x,y,t)|^2$ (a,b,c,d) and phase $\arg(\psi(x,y,t))$ (e,f) for different times indicated on the subplots. The locations of the potential minima according to Eq. (\ref{pot}) are indicated with small white circles. \label{sec}}
\end{figure}
It is also very simple to change between configurations with different symmetry orders ($N$) even dynamically by
just adding or eliminating laser beams.
In what follows we will solve Eq. (\ref{NLS}) with $V(x,y)$ given by Eq. (\ref{pot}) and initial data of (multicharged) vortex type.
All simulations to be presented in this paper have been done using a split-step method where the spatial derivatives are computed by a pseudospectral formula
based on trigonometric polynomials. In all cases to be presented here we have chosen $\Delta t = 0.025$ and the simulation region $[-20,20]\times [-20,20]$. In the figures the spatial region shown is
$[-10,10]\times [-10,10]$. The outgoing radiation is eliminated by absorbing boundary conditions such as the ones implemented in Ref. \cite{IMACS}.
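A minimal version of one such split-step (Strang) iteration, with the kinetic part applied spectrally, might look as follows. This is only a sketch under stated assumptions: periodic boundaries (the absorbing layers of Ref. \cite{IMACS} are omitted) and the equation written in the form $i\psi_t = -\frac{1}{2}\nabla^2\psi + V\psi + g|\psi|^2\psi$:

```python
import numpy as np

def split_step(psi, V, dx, dt, g=0.0):
    # One Strang step: half a (potential + nonlinear) phase rotation,
    # a full kinetic step in Fourier space, then the second half phase.
    n = psi.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    KX, KY = np.meshgrid(k, k)
    kinetic = np.exp(-0.5j * dt * (KX ** 2 + KY ** 2))
    psi = psi * np.exp(-0.5j * dt * (V + g * np.abs(psi) ** 2))
    psi = np.fft.ifft2(np.fft.fft2(psi) * kinetic)
    psi = psi * np.exp(-0.5j * dt * (V + g * np.abs(psi) ** 2))
    return psi
```

Since every factor is a pure phase, the step conserves the norm exactly for real $V$, a convenient numerical check.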
\subsection{Charge erasing}
\label{erase}
As a first example of vorticity control by discrete symmetries we consider the evolution of an initial configuration of the form
\begin{equation}\label{doubly}
\psi(x,y) = (x+iy)^2 \exp{\left[(-x^2-y^2)/8\right]}
\end{equation}
i.e., a vortex of topological charge $l=2$. This initial wavefunction
will be subject to a potential with fourth-order symmetry, corresponding to Eq. (\ref{pot}) with $N=4$. The other potential parameters are taken to be $V_0 = -0.6$ and $d=4$.
The evolution of this initial datum is shown in Fig. \ref{sec}, where some typical features are seen. First of all, the amplitude of the wavefunction [Fig. \ref{sec}(a,b,c,d)] becomes localized on
the four potential wells after a transient in which some radiation is emitted. To achieve this localization it is convenient to place the potential minima close to the
maximum of the initial density, so that the fraction of the initial number of particles retained in the system is maximized.
At the same time, the phase [Fig. \ref{sec}(e,f)] experiences a complicated evolution from the initial configuration with $l=2$ to a final configuration with two nodal lines along $x=0$ and $y=0$ (the symmetry axes of this potential) and constant phase in each of the four regions into which these lines divide the plane; this corresponds to a solution with $m=2$ that belongs to a representation of dimension one. Let us obtain the theoretical prediction for this case. We must take $k=0$ in expression (\ref{matching_condition}), since the input vortex is in the first Brillouin zone, and consequently $m=2-0\cdot 4=2$, in agreement with the numerical result.
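The numerically observed charge can be extracted from the computed field by accumulating the phase winding along a closed loop. The following helper is our own construction (not necessarily the diagnostic used in the simulations); it samples the phase on a circle by nearest grid point and sums the wrapped phase differences:

```python
import numpy as np

def topological_charge(psi, x, y, radius):
    # Estimate the topological charge of a 2D field psi on grid (x, y)
    # (psi indexed as psi[iy, ix]) from the total phase winding along a
    # circle of given radius centred on the origin.
    theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
    ix = np.abs(x[:, None] - radius * np.cos(theta)).argmin(axis=0)
    iy = np.abs(y[:, None] - radius * np.sin(theta)).argmin(axis=0)
    phase = np.angle(psi[iy, ix])
    # wrap each phase increment into (-pi, pi] before summing
    winding = np.angle(np.exp(1j * np.diff(np.append(phase, phase[0]))))
    return int(round(winding.sum() / (2 * np.pi)))
```

Applied to the initial datum (\ref{doubly}) it returns $2$, and $-2$ for its complex conjugate.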
\subsection{Generation of singly charged vortices from doubly charged ones: charge inversion}
\label{single}
\begin{figure}
\epsfig{file=figure3.eps,width=\columnwidth}
\caption{[Color online] Vorticity inversion due to the effect of the discrete symmetry of the potential \eqref{pot} with $N=3, d=3.2, V_0 = -0.6$. Shown are pseudocolor plots of the amplitude $|\psi(x,y,t)|^2$ (a,c,e,g) and phase $\arg(\psi(x,y,t))$ (b,d,f,h) for different times indicated on the subplots. The locations of the potential minima according to Eq. (\ref{pot}) are indicated with small white circles. \label{sym3}}
\end{figure}
An interesting problem is how to process the doubly charged vortices obtained in atom chips by phase imprinting methods \cite{Kett1,Kett2} to obtain vortices with charge $m=-1$.
As in the previous examples, discrete symmetries can be helpful to accomplish this task. Let us take now a potential with $N=3$ and a doubly charged vortex as initial datum as in Eq. (\ref{doubly}).
Let us choose the potential parameters as $d=3.2$, and $V_0 = -0.6$.
The evolution of this configuration is shown in Fig. \ref{sym3} where the inversion of the central vortex from $l=2$ to $m=-1$ is clearly seen in the phase plots [Fig. \ref{sym3}(b,d,f,h)]. In this case, to obtain the theoretical prediction, we must take $k=1$, since $l=2$ falls in the second Brillouin zone for $N=3$. Consequently, according to the matching condition (\ref{matching_condition}), $m=2-1\cdot 3=-1$.
It can be seen how initially a central vortex with charge $m=-1$ coexists with other vortices located around it. However, as time goes on these vortices slowly spiral out of the system and are no longer visible in the region of interest for $t \sim 80$.
The transmutation phenomenon has been studied previously in Refs. \cite{13,Ripi}. However, in Ref. \cite{Ripi} the asymmetric trap used to induce the phenomenon leads to a recurrence in which the topological charge periodically oscillates between +1 and -1, while in this setup we get $m=-1$ for any time larger than the transient time in which the system becomes stabilized. With respect to the proposal of Ref. \cite{13} the scheme presented here has the advantage of introducing less radiation and being cleaner because of the finite range of the potential.
\subsection{Obtaining a vortex of charge $m=-1$ from a vortex with charge $l=4$ in a system with symmetry $N=5$: charge inversion}
\label{minusingle}
\begin{figure}
\epsfig{file=figure4.eps,width=\columnwidth}
\caption{[Color online] Vorticity inversion from $m=4$ to $m=-1$ due to the effect of the discrete symmetry of the potential \eqref{pot} with $N=5, d=3.2, V_0 = -2$. Shown are pseudocolor plots of the amplitude $|\psi(x,y,t)|^2$ (a,c,e) and phase $\arg(\psi(x,y,t))$ (b,d,f) for different times indicated on the subplots. The locations of the potential minima according to Eq. (\ref{pot}) are indicated with small white circles. \label{sym5}}
\end{figure}
As a final example we show how to obtain a vortex with charge $m=-1$ from a vortex with topological charge $l=4$. To do so we must use a system with symmetry $N=5$.
Our initial wavefunction is given by
\begin{equation}\label{four}
\psi(x,y) = (x+iy)^4 \exp{\left[(-x^2-y^2)/2\right]},
\end{equation}
and the parameters in the potential are taken as $V_0 = -2$ and $d=3.2$.
As in the previous examples we observe emission of radiation (in this case stronger, due to the higher energy of the initial condition, which leads to a faster escape from the origin) and, after some time, the desired structure is observed. Again, $k=1$ since $l=4$ falls in the second Brillouin zone for $N=5$. Thereby, according to (\ref{matching_condition}), $m=4-1\cdot 5=-1$, in agreement with the numerical solution.
\section{Conclusions and discussion}
\label{Con}
In this paper we have studied the dynamics of multiply-charged vortices under the action of potentials with discrete symmetries. We have shown how the symmetry order of the potential can be used to manipulate the topological charge of the vortex in order to obtain a desired (lower) value of the vorticity. Specifically, we have proposed a setup of Gaussian traps which can be configured to have the appropriate symmetry and more flexibility than the lattice-type potentials previously used.
As applications we have studied: topological charge erasing, charge inversion from $l=2$ to $m=-1$, and topological charge conversion from $l=4$ to $m=-1$.
The last two examples are applicable to the controlled generation of singly charged vortices from the output of the phase imprinting method using matter-wave chips developed in Ref. \cite{Kett1}.
The vortex charge control and erasing properties which can be achieved in this system open new possibilities for the control of quantum matter. Moreover, although we have chosen a specific model given by Eqs. (\ref{NLS}) and (\ref{pot}), the arguments developed in Sec. \ref{III} are completely general and depend only on the discrete symmetry properties of the system. Thus, they can be extended beyond the specific form of the potential chosen and even beyond the Gross-Pitaevskii equations used in this paper.
\acknowledgments
This work has been partially supported by grants BFM2003-02832, FIS2004-20188-E, FIS2005-01189, and FIS2006-04190 (Ministerio de Educaci\'on y Ciencia, {S\-pa\-in}), PAI05-001 (Consejer\'{\i}a de Educaci\'on y Ciencia de la Junta de Comunidades de Castilla-La Mancha).
\section{Introduction}
OJ287 (z=0.306) is an optically violent variable BL Lacertae object (BLO)
and also one of the bright Fermi $\gamma$-ray sources (Ackermann
et al. \cite{Ac11}, Hartman et al. \cite{Har99}, Agudo et al. \cite{Ag12}).
Its strong variability has been observed in all the
wavebands from radio to $\gamma$-rays with various timescales
(hours/days to years).\\
Its optical variability is particularly exceptional. The optical light curve
recorded since 1890s reveals quasi-periodic outbursts
with a cycle of $\sim$12\,yr (Sillanp\"a\"a et al. \cite{Si88}).
Up to now four periodic outbursts with double-peaked
flares have been observed in 1972--73, 1982--83, 1994--95 and 2005--2007.
The first flare of the fifth periodic outburst has been
observed in December/2015
and its second flare is predicted to peak on
July 31 2019 (Valtonen et al. \cite{Va18}, Dey et al. \cite{De18}
and references therein).
The long-lasting quasi-periodicity is believed to be
related to the orbital motion of a black hole binary in the center of
its host galaxy.\\
In the early works Brown et al. (\cite{Bro89a},
\cite{Bro89b}) showed that the variations at infrared (IR), optical and
ultraviolet wavelengths are well correlated. Correlation between
spectral index and flux density was observed at near-infrared (NIR)
wavelengths
(Gear et al. \cite{Ge85}): the source spectrum becomes steeper when it
becomes fainter and vice versa. But Sillanp\"a\"a et al. (\cite{Si96a},
\cite{Si96b}) found that the optical spectral index (or spectral color) was
very stable during the period 1994--1996 (OJ-94 project,
Takalo \cite{Tak96a}). Recently, the multi-wavelength observations
performed by Gupta et al. (\cite{Gu16}) during 2015--2017
also demonstrate the stability of the optical spectral color in OJ287.\\
Variations observed at centimeter wavelengths usually lag the optical
variations (Valtaoja et al. \cite{Val20}, Aller et al. \cite{Al94},
\cite{Al14}).
The radio time-delays can be attributed to shock evolution combined with
opacity effects. But there are some observations revealing
simultaneous variations at millimeter and optical wavelengths
(Sillanp\"a\"a et al. \cite{Si96b}, Valtaoja et al.\cite{Val20}).\\
OJ287 is a well-known superluminal source on parsec scales and VLBI
observations reveal that it has a core-jet structure and superluminal
components are steadily ejected from the core (Britzen et al. \cite{Br18},
Hodgson et al. \cite{Hod17}, Agudo et al. \cite{Ag12},
Cohen \cite{Co17}, Qian \cite{QiXiv18} and references therein).
It has been found that there is a close connection between the optical
flares and the emergence of superluminal components (Tateyama et al.
\cite{Ta99}, Qian \cite{QiXiv18}). Recently, based on the analysis of the
kinematics of superluminal components, Britzen et al. (\cite{Br18}) have
made an elaborated relativistic jet model (invoking jet precession plus
nutation) to explain the radio variability and the kinematics on
parsec scales. Based on a potential double-jet scenario,
Qian (\cite{QiXiv18}) tentatively
derived the total mass of the binary in the range
$10^8$--$10^9$$M_{\odot}$, which is consistent with the estimation by
Gupta et al. (\cite{Gu12}; also see Villforth et al. \cite{Vil10},
Valtaoja et al. \cite{Val20}).\\
Recently Kushwaha et al. (\cite{Ku18}) and Gupta et al. (\cite{Gu16})
have monitored the multi-wavelength variations in the NIR-optical-UV bands
during December/2015\,--\,May/2016, providing
new information about the variability behavior in OJ287.
They showed that the source has a stable color during that period,
confirming the finding by
Sillanp\"a\"a et al. (\cite{Si96a}) and supporting the "single mechanism" for
the optical flares (periodic major outbursts and non-periodic
synchrotron bursts) in OJ287.
Kushwaha et al. (\cite{Ku18}) have found that the December/2015
optical outburst
\footnote{This optical
outburst has been claimed to be a thermal flare produced by the secondary
black hole penetrating the disk of the primary hole.} was associated with
a simultaneous $\gamma$-ray flare. In addition, another strong synchrotron
outburst with polarization degree of $\sim$30\%
was observed in March/2016, whose temporal and spectral variations are very
similar to those observed in the December/2015 outburst.\\
The phenomena in OJ287 are very complex and may involve several different
mechanisms producing its variations from radio to $\gamma$-rays. Its
multi-wavelength variations reveal many prominent features: e.g., (1) 12\,yr
quasi-periodic optical variability; (2) double-peak structure of the
periodic optical outbursts; (3) symmetry of individual optical flare
profiles; (4) multi-component structure of the major optical outbursts;
(5) similarity in the variability behavior of individual bursts and major
periodic outbursts; (6) large range of optical polarization degrees (from
$<$2\% to $\sim$40\%); (7) stability of optical spectral index
(color stability); (8) connection between radio and optical variations;
(9) synchronous radio and optical variations
(simultaneity and similar profiles); (10) ejection of superluminal
components and jet precession;
(11) association of $\gamma$-ray flares with optical flares, etc. \\
A number of
models have been proposed (referring to the discussions in Villforth et
al. \cite{Vil10}, Qian \cite{QiXiv18}, \cite{Qi15}). On the whole,
these models can be divided into two categories, both involving a black hole
binary system in the nucleus of OJ287:
\begin{itemize}
\item The precessing binary model (or disk-impact model)
originally proposed by Lehto \& Valtonen
(\cite{Le96}; improved versions: Valtonen \cite{Va07}, Valtonen et al.
\cite{Va06}, Valtonen et al. \cite{Va18}, Dey et al. \cite{De18})
has been steadily elaborated to interpret the variability
behavior in OJ287, putting the emphasis on the accurate timing of
the first major flares of the periodic outbursts, which were suggested
to be bremsstrahlung in origin (unpolarized flares) and produced by
the secondary hole penetrating into the accretion disk
of the primary hole. In the case of a highly eccentric orbital
motion, two impacts would occur near pericenter passages and thus explain
the double-peak structure of the periodic outbursts. The recent
December/2015 optical outburst was studied and interpreted in detail
by Valtonen et al. (\cite{Va16}, \cite{Va17}). This model requires
a high inclination angle ($i$\,${\sim}$$50^{\circ}$\,--\,$90^{\circ}$)
\footnote{In
this case the jet associated with the secondary hole might not be pointed
toward us with a small angle, if its spin axis (and jet axis) is
approximately parallel to the orbital angular momentum.}
and a high eccentricity ($e$\,$\sim$\,0.66) and
a strong constraint on the total mass of the binary, reaching
${\sim}2{\times}{10^{10}}{M_{\odot}}$ with a mass ratio m/M$\sim$0.007. This
disk-impact model mainly concentrates on the interpretation of the
quasi-periodicity of the 12\,yr, double-peak structure and the accurate
timing of the periodic outbursts, regarding the periodic outbursts being
thermal flares due to the impact of the secondary hole penetrating
the primary disk. This model suggests that the follow-up
and non-periodic outbursts could be interpreted in terms of the enhanced
accretions (disturbances induced by the secondary impacts and tidal
effects near pericenter passages). But it cannot be used to analyze
the complicated phenomena observed in the entire emission (from radio
to $\gamma$-rays) and the relationship between the emission properties
and the kinematic behaviors on parsec
scales. This model is based on very accurate solution of orbital motion by
including the post-Newtonian strong gravitational effects, but invoking
a fixed (not variable) disk model. \\
In contrast, Tanaka (\cite{Tan13}) considered a different mechanism
(cavity-accretion flare model) for explaining the
double-peak structure, assuming the binary having a comparable-mass in a
coplanar motion. According to the results
of hydrodynamic/magnetohydrodynamic (HD/MHD) simulations for binary
systems surrounded by circumbinary disks, the cavity-accretion
processes characteristic of comparable-mass binary systems would create
two gas streams impacting onto the disks of both the black holes near
pericenter passages, thus causing the double-peaked outbursts.
This model also suggests that the periodic outbursts are bremsstrahlung
flares caused by the impacts of the gas streams. It is not able to provide
accurate timing of the periodic outbursts. This cavity-accretion flare
model does not discuss the accretion processes during the intervening
periods and the interpretation of the follow-up and non-periodic outbursts
and the related jet behavior.
\item Relativistic jet models have been applied to understand the
optical and radio variability behavior in OJ287 and discussed by many
authors since the earlier years (e.g., Sillanp\"a\"a et al. \cite{Si96a},
Valtaoja et al. \cite{Val20}), because these models are considered
to be paradigmatic for explaining the variations
(from radio to $\gamma$-rays) observed in blazars. Villata et al.
(\cite{Villa98}) considered a precessing double-jet model to explain the
periodic double-peak structure. Villforth et al. (\cite{Vil10})
suggested that the periodic outbursts could be interpreted in terms of the
resonant disk accretion of magnetic field lines.
Qian (\cite{Qi15}) investigated the possibility that lighthouse effect
could cause the double-peak structure of the periodic outbursts.
Recently, based on the analysis of the
kinematics of the radio superluminal components, Britzen et al.
(\cite{Br18}) proposed an elaborated model to interpret the radio and
optical variations, emphasizing the precession and nutation of the
relativistic jet being the key ingredients causing the complex phenomena
in OJ287. In addition, Qian (\cite{QiXiv18}) tentatively suggested that
the periodic optical outbursts could be synchrotron flares produced
by the superluminal optical knots moving along parabolic trajectories.
But the explanation of the double-peak structure might have to invoke
the cavity-accretion process for comparable-mass binary systems
(e.g., Tanaka \cite{Tan13}, Artymowicz \& Lubow \cite{Ar96},
Hayasaki et al. \cite{Ha08}). Relativistic models do not involve
disk-impacting events, which cause strong thermal flaring events and
the optical/radio phenomena in OJ287 may be explained only by
invoking the enhanced disk-accretion near pericenter passages and
ejection of superluminal optical knots.\\
It might be worth noticing that, based on the modeling of the kinematics
of the superluminal components in OJ287 (Qian \cite{QiXiv18}),
the total mass of the potential binary has been tentatively determined to be
$\sim{10^8}$\,--\,${10^9}{M_{\odot}}$
\footnote{Mass ratio m/M\,$\sim$\,0.3.} which is consistent with the
estimations obtained by Gupta et al. (\cite{Gu12}), Villforth et al.
(\cite{Vil10}) and Valtaoja et al. (\cite{Val20}).
These values seem to favor a binary system with comparable-mass in
a coplanar motion. In this case both the jets associated
with the binary holes can point toward us with small angles.
\end{itemize}
Unfortunately, all the models currently available can only interpret
part of the phenomena observed in OJ287.
Some basic issues remain to be clarified: the mass of the binary,
the mechanism behind the double-peak structure, the color stability,
the synchronous radio-optical variations, the symmetry of the burst
light curves, the similar variability behavior of the flares,
the simultaneity of optical and $\gamma$-ray flares, etc.
A comprehensive and coherent framework is imperatively needed to solve
all these issues.\\
In this paper we will apply the precessing jet nozzle model previously
proposed by Qian et al. (\cite{Qi91a}, \cite{Qi09}, \cite{Qi19}) to make
simulation of the optical light curves for the six periodic major
outbursts (in 1983.00, 1984.10, 1994.75, 2005.76, 2007.70 and 2015.87)
by using lighthouse effect due to the helical motion of superluminal
optical knots.
In particular, the multi-wavelength light curves of both the outbursts in
December/2015 and in March/2016 will be simulated and compared,
demonstrating the distinct similarity in their temporal
and spectral variations (multi-wavelength light curves with similar rising
and decaying time scales and similar broken power-law spectra). Since the
outburst in March/2016 is a highly polarized synchrotron flare, the similar
variability behaviors of the December/2015 outburst (peaking at 2457360)
and the March/2016 outburst (peaking at 2457450) may imply that they have
a common emission mechanism and the December/2015 outburst may be
synchrotron in origin.
\begin{figure*}
\centering
\includegraphics[width=6cm,angle=-90]{znxnydel4heli94a.ps}
\includegraphics[width=6cm,angle=-90]{gaydel4heli94.ps}
\caption{Left panel: A sketch of the precessing jet nozzle scenario with
helical motion. The straight lines denote the precessing
jet axis (projected on the plane of the sky) which is described by the
precession phases of $\omega$=0.0\,rad, -2.0\,rad and -4.0\,rad,
respectively. The helices indicate the trajectories of the optical knots
moving along the jet axis in perfect collimation zones.
The helical trajectory defined by $\omega$=-2.0\,rad is used to simulate
the light curves of the optical outbursts in December/2015 and March/2016.
The corresponding Lorentz and Doppler factors are shown in the right panel.
The helical motion in the perfect collimation zone is assumed to start
at z=0.}
\end{figure*}
\begin{table}
\caption{Model parameters for the helical trajectories of optical knots.}
\begin{flushleft}
\centering
\begin{tabular}{ll}
\hline
Parameter & fixed value \\
\hline
$\epsilon$ & $3^{\circ}$ \\
$\psi$ & 0.0\,rad \\
$\omega$ & -2.0\,rad \\
$a$ & 0.0402 \\
$x$ & 1.0 \\
$A_0$ & 0.0138\,mas \\
d$\phi$/d$z_0$ & -7.04\,rad/mas \\
\hline
\end{tabular}
\end{flushleft}
\end{table}
\begin{table}
\caption{Base-level (underlying jet) spectrum for OJ287.}
\begin{flushleft}
\centering
\begin{tabular}{ll}
\hline
Waveband & Flux (mJy) \\
\hline
K & 8.0 \\
J & 5.0 \\
I & 6.2 \\
R & 3.5 \\
V & 3.0 \\
B & 2.0 \\
U & 1.5 \\
UVW1 & 0.8 \\
UVM2 & 0.8 \\
UVW2 & 0.8 \\
\hline
\end{tabular}
\end{flushleft}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=4.5cm,angle=-90]{Spect287.ps}
\includegraphics[width=4.5cm,angle=-90]{base287.ps}
\includegraphics[width=4.5cm,angle=-90]{IntF-IVU.ps}
\caption{Left panel: the modeled broken power-law synchrotron
spectrum with a spectral break at V-band of $\Delta{\alpha}$=0.5
(from $\alpha$=0.8 to $\alpha$=1.3,
$S_{\nu}$\,$\propto$\,${\nu}^{-\alpha}$).
The base-level spectrum is shown in the middle panel.
The modeled intrinsic flux densities (in the comoving frame of the
optical knot) at I-, V- and U-bands are shown in the right panel
(unit=$10^{-4}$\,mJy).}
\end{figure*}
\begin{table}
\caption{Parameters for model simulation of the periodic outburst in
December/2015 (peaking at JD2457360) and the March/2016 outburst (peaking
at JD2457450): $\Gamma$--Lorentz factor
of the optical knot, maximum Doppler
factor $\rm{{\delta}_{max}}$,
ratio $\rm{{\delta}_{max}}$/$\rm{{\delta}_{min}}$,
$\rm{S_{int}}$(mJy)--intrinsic flux density of the optical knot, base-level
(underlying jet) flux density $\rm{S_b}$=3.5\,mJy at R-band, FWHM (full
width at half maximum of the modeled light curve; day). $t$\,=\,flare
time\,=\,day--2457000.}
\begin{flushleft}
\centering
\begin{tabular}{llllll}
\hline
$t$ & $\Gamma$ & $\rm{{\delta}_{max}}$ & ratio & $\rm{S_{int}}$ & FWHM \\
\hline
360 & 9.5 & 18.88 & 4.11 & 1.16$\times{10^{-4}}$ & 5.9 \\
450 & 9.5 & 18.88 & 4.11 & 1.16$\times{10^{-4}}$ & 5.9 \\
\hline
\end{tabular}
\end{flushleft}
\end{table}
\section{Assumptions and approaches}
In order to better understand the entire phenomena observed in the blazar
OJ287, we would perform detailed simulation of the light curves of its
optical outbursts, which include not only the periodic major outbursts
(or the "impact outbursts" claimed to be bremsstrahlung flares caused by
the evolving gas-bubbles torn out from the primary disk by the
secondary-hole penetrations), but also
the non-periodic outbursts (usually recognized as synchrotron flares with
high polarization). The multi-wavelength light curves of the December/2015
outburst (peaking at 2457360) and the March/2016 outburst (peaking at
2457450) are analyzed and compared for finding their common properties:
the symmetry of their light-curves and the similarity of their temporal and
spectral variations.\\
We will apply the precessing jet-nozzle model previously proposed for
several blazars (3C345: Qian et al. \cite{Qi91a}, \cite{Qi09}; 3C279:
Qian \cite{Qi11}, \cite{Qi12}, \cite{Qi13}, Qian et al. \cite{Qi19};
3C454.3: Qian et al. \cite{Qi14};
NRAO 150: Qian \cite{Qi16}; B 1308-326:
Qian et al. \cite{Qi17}; PG 1302-102: Qian et al. \cite{Qi18};
OJ287: Qian \cite{QiXiv18})
to investigate the kinematics of the optical superluminal
components in OJ287 and propose an interpretation for the multi-wavelength
light curves of the optical outbursts in December/2015 and March/2016
obtained by Kushwaha et al. (\cite{Ku18}). We will also perform model
simulation of the V-band light-curves of the six periodic major outbursts in
1983.00, 1984.10, 1994.75, 2005.76, 2007.70 and 2015.87, suggesting a
coherent scenario
to understand the entire phenomena in OJ287.\\
We describe the main assumptions and relevant approaches
for the model simulation of the outburst light curves as follows.\\
\subsection{Parameters of precessing nozzle model}
In order to make model simulation of the light curves of the optical
outbursts in OJ287, we would need to use an appropriate and specific
scheme of the precessing nozzle model. Here we we adopt the geometric
parameters applied in the previous precessing nozzle models
(details referring to Qian \cite{QiXiv18}) and introduce the parameters
of helical motion.\\
We assume that the superluminal optical knots move along helical
trajectories around the jet axis which precesses around the precession
axis, as shown in Figure 1 (left panel). The precession axis is defined by
parameters $\epsilon$\,=\,$3^{\circ}$ and $\psi$\,=\,0.0\,rad. The jet axis
is assumed to be a straight line with parameters $a$\,=\,0.0402 and
$x$\,=\,1.0 (details referring to Qian \cite{QiXiv18}), which precesses
around the precession axis with a period of 12\,yr. Optical superluminal
knots are assumed to be ejected from the jet nozzle, moving outward along
helical trajectories. The helical motion of the optical knots is shown
schematically in Figure 1 (left panel). Using the precessing nozzle model we
can study the helical motion of superluminal optical knots ejected at
different precession phases. For concrete model simulations, we will
assume that the superluminal optical knots are ejected along the jet axis
defined by the precession phase $\omega$\,=\,--2.0\,rad, moving along the
helical trajectories which are defined by parameters $A_0$ and
d$\phi(z)$/dz: $A_0$ represents the amplitude of the helical trajectories
and d$\phi(z)$/dz represents the rotation rate of the helical motion.
We will take $A_0$\,=\,0.0138\,$\rm{mas}$ and
d$\phi$/dz\,=\,--7.04\,$\rm{rad/mas}$. The model parameters are summarized
in Table 1. In order to demonstrate the common features of helical
motion of the superluminal optical components, parameters $A_0$ and
d$\phi$/dz are assumed to be constant for all the optical flares and only
their bulk Lorentz factors ($\Gamma$) and intrinsic flux densities ($S_{int}$)
are chosen to
model-fit their light curves. Constant $A_0$ and d$\phi$/dz describe
uniform helical motion in a perfect collimation zone.\footnote{Introducing
$A_0$ and d$\phi$/dz as functions of distance $z$, one can study various
patterns of helical motion.}
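The uniform helix assumed above can be sketched numerically as follows; the initial azimuthal phase $\phi_0$ and the parametrization are illustrative assumptions, not fitted quantities:

```python
import math

A0 = 0.0138          # helix amplitude (mas), value from Table 1
DPHI_DZ = -7.04      # rotation rate of the helical motion (rad/mas), Table 1
PHI0 = 0.0           # hypothetical initial azimuthal phase (rad), assumed

def helix_offset(z):
    """Transverse offset (x, y) in mas of a knot at axial distance z (mas)
    for a uniform helix around a straight jet axis."""
    phi = PHI0 + DPHI_DZ * z
    return A0 * math.cos(phi), A0 * math.sin(phi)

# One full revolution of the helix spans |2*pi / (dphi/dz)| in z:
z_period = 2.0 * math.pi / abs(DPHI_DZ)
```

With constant $A_0$ and d$\phi$/dz the transverse radius stays fixed at $A_0$ at every $z$, which is the "perfect collimation zone" picture used in the simulations.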
\subsection{March/2016 outburst: a synchrotron flare}
As shown by Kushwaha et al. (\cite{Ku18}), the strong optical outburst
observed in March/2016 (peaking at 2457450) is a synchrotron flare with a
high polarization degree of $\sim$30\% (at R-band). This is consistent with
the observations made by Valtonen et al. (\cite{Va17}). Both the optical
polarization observations rule out the possibility that the March/2016
outburst is a bremsstrahlung-dominated flare. The reason is: if this
outburst were composed of two components (one thermal and one synchrotron)
and were thermal-dominated, then in order to
explain its 30\% polarization, the synchrotron component would be required
to have a polarization degree of at least 60\%. Such high polarization
degrees have never been observed in OJ287 (e.g., see the photopolarimetric
light curves (R-band) during 2005--2008 in Villforth et al. \cite{Vil10}).
Thus we suggest that the March/2016 outburst (peaking at 2457450)
originates from synchrotron process, definitely a non-thermal flare.\\
This argument is a simple and natural one in blazar physics, but seems quite
important for understanding the physical processes in OJ287. For example,
the spectral energy distribution (SED) and its variation of the March/2016
outburst are very similar to those of the December/2015 outburst
(Fig.4 in Kushwaha et al. \cite{Ku18};
also see Figures 4\,--\,9 displayed below). The SED of both the
outbursts reveals two prominent
features: (1) an offset between the visible spectrum and the near-infrared
(NIR) spectrum; (2) a
transition from a rather steep visible spectrum to a flatter UV spectrum.
Superficially, these features look like those observed in other blazars
(e.g., 3C345, 3C454.3, BL Lac and AO 0235+106; Bregman et al. \cite{Bre86},
Raiteri et al. \cite{Ra07}, Villata et al. \cite{Villa04},\cite{Villa02},
Raiteri et al. \cite{Ra05}), which have been claimed to
be constructed from the ``big blue bump'' and ``little blue bump''
produced by the accretion disks and emission lines in the broad-line regions
(BLR). However, in the case of OJ287, the variations in the UV-band are
simultaneous with the NIR-optical variations and no color changes occur
during the March/2016 synchrotron outburst (as well during the
December/2015 outburst).
This variability behavior seems to indicate that the emission from the
NIR-optical to the UV bands is entirely produced in the jet and the emitting
source might have a peculiar inhomogeneous structure (Raiteri et al.
\cite{Ra07}, \cite{Ra06}, Ostorero et al. \cite{Os04}). We will continue
to argue for this possibility below, especially based on the connection
between the radio and optical outbursts.
\subsection{Similarity between 2015 and 2016 outbursts}
In the following sections, we will perform model simulation of the
multi-wavelength (NIR-optical-UV) light curves of the December/2015 and
March/2016 outbursts
(Kushwaha et al. \cite{Ku18}, Gupta et al. \cite{Gu16}) and show that
the temporal and spectral variations of the December/2015 outburst
are very similar to those of the March/2016 outburst: although the
March/2016 outburst occurred $\sim$90\,days later, it was as strong as
the December/2015 outburst and had high polarization degrees. They have
similar multi-wavelength light curves in the
NIR-optical-UV bands having symmetric profiles with very similar rising
and declining timescales. While the December/2015 outburst
has been claimed to be an ``impact thermal outburst'',
occurring at a location $\sim$18,000\,AU away from the primary black hole
and originating from an evolving gas-bubble torn out from the accretion disk
of the primary hole (Lehto \& Valtonen \cite{Le96}, Valtonen et al.
\cite{Va16}), the March/2016 outburst is definitely a non-thermal flare,
originating from synchrotron process in the jet. It is rather difficult
to understand why the December/2015 thermal outburst could have
its temporal and spectral variability behavior so closely resembling
that of the March/2016 synchrotron outburst. Our model simulation
of their light-curves indicates that the March/2016 and December/2015
outbursts could be interpreted in terms of the lighthouse effect due to
helical motion of one superluminal optical knot in a perfect collimation
zone of the jet via two helical revolutions. Thus
the resemblance in the temporal and spectral variations observed in the
December/2015 and March/2016 outbursts may imply that both December/2015
and March/2016 outbursts originate from a common radiation mechanism and
they are non-thermal (synchrotron) flares produced in the jet.
The two outbursts may be combined into ``one flaring event''
\footnote{The data-points of the March/2016 outburst need to be shifted
backward in time by 89.4\,days.} and their observational data-points
are superposed to analyze the common properties of their temporal and
spectral variations. For the R-band light curves, the data-points measured
by Valtonen et al. (\cite{Va16}) are also incorporated in the analysis,
providing sufficiently complete temporal coverage for the simulation of
the multi-wavelength light curves.
\subsection{Nature of 2015 outburst: $\gamma$-ray observations}
The nature of the radiation of the outburst in December/2015 (peaking at
2457360) is still a debatable issue: whether it is a
synchrotron flare or a bremsstrahlung-dominated one. We argue that the
December/2015 optical outburst may be a synchrotron flare. \\
According to Valtonen et al. (\cite{Va16}), the December/2015 outburst
is composed of two components: one bremsstrahlung component and
one synchrotron component, and it is bremsstrahlung-dominated.
In order to explain its observed polarization
degree of $\sim$6\%, the non-thermal component is assumed to be highly
polarized with a polarization degree of 40\%.\footnote{Here
we do not consider the case that the base-level makes a 10\% contribution
to the polarization degree.}
In this case the thermal component is much stronger than the non-thermal
component, making
$\sim$68\% and $\sim$17\% contributions to the total flux of the outburst,
respectively.\footnote{The base-level emission makes a steady $\sim$15\%
contribution to the total flux density.} However, the
relationship between the thermal component and the non-thermal component
was not clarified: (1) Where is this highly-polarized component produced:
in the jet of the primary hole or in the jet of the secondary hole?
(2) How could the flux variation of the non-thermal component be
simultaneous with that of the thermal component, given that the two
emission components appeared at different locations
(not co-spatial), the thermal flare occurring $\sim$0.1\,pc away from the
primary hole and its jet? (3) How could the variable
non-thermal component (with a constant polarization degree)
closely match the behavior of the thermal component, given that they
originate from different emission mechanisms: bremsstrahlung from an
evolving gas-bubble and synchrotron emission from a shock in the jet,
having different evolution behaviors with different timescales? \\
It is worth noticing that the December/2015 outburst emits $\gamma$-rays
and the variations in the $\gamma$-ray bands are simultaneous with the
variations in the NIR-optical-UV bands without time lags
(Kushwaha et al. \cite{Ku18}), having similar variability time scales.
Obviously, this $\gamma$-ray flare should be associated with
the synchrotron flare component and both emitting regions must be co-spatial
within the relativistic jet. Thus simultaneous variations and
similar variability behavior between the $\gamma$-ray
flare and a bremsstrahlung-dominated outburst seem unlikely,
because the bremsstrahlung flare is believed to be produced by
an evolving gas-bubble torn off the
primary hole accretion disk by the second black hole penetrating into the
primary disk (Lehto \& Valtonen \cite{Le96}), occurring at a distance of
$\sim$18,000\,AU away from the primary black hole and its jet
(Valtonen et al. \cite{Va17}), while the synchrotron component flare and
its associated $\gamma$-ray flare are produced in the jet of the primary
black hole: they occur at different locations (not co-spatial) through
different mechanisms. The only plausible interpretation for the
simultaneous variations in the $\gamma$-ray and optical bands
may be that the December/2015 optical outburst originates in the jet
through synchrotron process. This may be the most persuasive argument
for the December/2015 outburst being a non-thermal flare.
\subsection{Spectral energy distribution of December/2015 outburst}
As typically observed in generic blazars, the spectral energy distribution
of the December/2015 outburst consists of two bumps: one in the
NIR-optical-UV bands and the other one in the $\gamma$-ray bands (Kushwaha
et al. \cite{Ku18}). These two bumps are normally interpreted in terms of
synchrotron and inverse-Compton processes, respectively. In the one-zone
scenario (e.g., Qian et al. \cite{Qi98a}, \cite{Qi98b}, Ghisellini et al.
\cite{Gh07}, Vercellone et al. \cite{Ve10}, \cite{Ve12}),
the simultaneity of
the NIR-optical-UV and $\gamma$-ray variations and their similar
variability time scales (rising and declining timescales) would suggest
that the NIR-optical-UV emitting region and the $\gamma$-ray emitting
region are co-spatial in the jet of the primary hole. It seems difficult to
understand that the $\gamma$-ray flare could be simultaneous with the
NIR-optical-UV variations in a bremsstrahlung-dominated outburst. Moreover,
under the bremsstrahlung-dominated assumption for the December/2015
outburst, the low-frequency bump of its SED has to be decomposed into
two parts (Kushwaha et al. \cite{Ku18}): one thermal and one non-thermal.
The peak frequency of the non-thermal part has to be shifted to the
far-infrared regime and its optical-UV power has to be lowered
to only a half of the observed optical-UV power, reaching the power levels
during the quiescent states in OJ287
(e.g., Seta et al. \cite{Set09}, Kushwaha et al. \cite{Ku13},
Kushwaha et al. \cite{Ku18}). This seems inconsistent with the
normal behavior
observed in $\gamma$-ray blazars: the synchrotron bump moves to higher
frequency with higher peak power (${\nu}{F_{\nu}}$)
during $\gamma$-ray flaring states,
when the high-energy bump shifts to higher energy $\gamma$-ray bands.
Therefore, it seems more likely that the low-frequency bump of the
December/2015 outburst may be entirely synchrotron in origin.
\footnote{Note that the peak frequency of the low-frequency bump
usually observed in OJ287 does not show significant
difference between the quiescent and flaring states, but the peak power
increases during the flaring states (Seta et al. \cite{Set09}).}\\
\subsection{Broken power-law spectrum}
The spectral break detected in the optical--UV wavebands for the
December/2015 outburst (Kushwaha et al. \cite{Ku18}) has been
interpreted as due to
the thermal emission of the accretion disk surrounding the primary
black hole with a mass of $1.8{\times}{10^{10}}{M_{\odot}}$. It is noted
that the March/2016 outburst has a similar spectral break, which cannot
originate from the primary disk and may instead be due to the superposition
of the synchrotron emission from different jet regions.
Moreover, both outbursts show no color
variations, as Gupta et al. (\cite{Gu16}) observed. Thus in the following
simulations of the multi-wavelength light curves of the two outbursts,
a common broken power-law spectrum will be assumed with a spectral break
of $\Delta{\alpha}$=0.5: in the K- to V-bands
$S_{\nu}$\,${\propto}$\,${{\nu}^{-\alpha}}$ with $\alpha$=0.8,
and in the V- to UV-bands $S_{\nu}$\,${\propto}$\,${{\nu}^{-\alpha}}$ with
$\alpha$=1.3. The spectrum is sketchily shown in Figure 2 (left panel).
This kind of broken power-law spectrum can result from local
continuous injection or re-acceleration of relativistic electrons in the
superluminal optical knots under synchrotron/IC radiative
losses (Kardashev \cite{Ka62}, Pacholczyk \cite{Pa70},
Qian \cite{Qi78}, \cite{Qi96a},
\cite{Qi96b}, \cite{Qi97}, Sahayanathan et al.
\cite{Sa03}). The synchrotron spectrum assumed here is
essentially different from the bubble-emitting bremsstrahlung spectrum
produced by the disk-impacting process. It is noted that the thermal
spectrum predicted by the disk-impact model was much flatter at optical-UV
wavelengths. According to Valtonen \& Ciprini (\cite{Va12})
and Valtonen et al. (\cite{Val12}), the thermal spectrum of the 2005
outburst was derived from the observed spectrum by correcting the host
galaxy extinction with a hydrogen column density of
6.3$\times{10}^{20}$/${\rm{cm}}^2$. However,
if correction of extinction in the host galaxy were needed, the synchrotron
spectrum of the March/2016 outburst would also be converted into a thermal
spectrum. This seems unlikely. Thus the host galaxy extinction will not be
included here and we suggest that both the December/2015 and March/2016
outbursts have similar non-thermal spectra.
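The assumed broken power-law spectrum can be sketched numerically; the break frequency and the normalization below are illustrative placeholders (only the two spectral indices and the break $\Delta\alpha$\,=\,0.5 come from the model assumption above):

```python
# Broken power-law synchrotron spectrum S_nu ~ nu^-alpha, with
# alpha = 0.8 below the V-band break and alpha = 1.3 above it
# (spectral break Delta alpha = 0.5, as assumed in the text).
NU_BREAK = 5.5e14    # ~V band (Hz); assumed break frequency, not fitted
S_BREAK = 1.0        # flux density at the break (arbitrary units)

def s_nu(nu):
    """Flux density at frequency nu (Hz) for the assumed broken power law."""
    alpha = 0.8 if nu <= NU_BREAK else 1.3
    return S_BREAK * (nu / NU_BREAK) ** (-alpha)
```

By construction the spectrum is continuous at the break, with only the logarithmic slope changing by $\Delta\alpha$\,=\,0.5, consistent with the sketch in Figure 2 (left panel).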
\subsection{Spectral variability}
As explained in Sect.2.1, the spectral energy distribution of the
December/2015 outburst (peaking at 2457360) in the NIR-optical-UV bands
exhibits two distinct
features: the offset between the NIR and optical portions and the
transition from the rather steep optical spectrum to a flatter
spectrum in the UV portion. If this NIR-optical-UV spectrum could be
interpreted as composed of two constituents: a thermal spectrum
emitted from the accretion disk of the primary black hole (with a mass of
$\sim{2\times}{10^{10}}{\rm{M_{\odot}}}$) dominating the optical-UV
emission and a synchrotron spectrum emitted from a shock in the jet
dominating the IR-radio emission, one would have to explain why the two
emitting sources could vary simultaneously. Moreover, the March/2016
outburst (peaking at 2457450) has its spectrum and spectral variations
very similar to those of the December/2015 outburst and both outbursts
exhibit no color changes (Gupta et al. \cite{Gu16}). As the March/2016
outburst is a highly-polarized synchrotron one, its spectral
variations should not be related to the thermal emission from the
primary disk and its color stability must be a characteristic feature
of the synchrotron source itself. The color stability commonly observed
in both December/2015 and March/2016 outbursts may be a significant
signature demonstrating the nature of their emission. Thus the
similarity in the spectral variations between the December/2015 and
March/2016 outbursts may imply that the December/2015 outburst also
originates in the jet. In fact, according to the disk-impact model,
the December/2015
outburst is caused by the evolving gas-bubble torn off the primary
disk by the secondary hole impacting. The thermal emission from an
evolving gas-bubble during its adiabatic expansion and cooling
(from ${\sim}10^5$K to lower temperatures) would be color-changeable,
inconsistent with the observations.
\subsection{Symmetry in light curve profiles}
As Sillanp\"a\"a et al. (\cite{Si96a}) pointed out, during
the OJ-94 project period (1993.8--1996.1) the two major outbursts had a
strong symmetry. Through detailed inspection, we recognized that
the two major outbursts could be decomposed into a number of subbursts,
each having symmetric light curves with similar rising and declining
timescales. A few isolated moderate outbursts also have symmetric
light curves. Most interestingly, the December/2015 outburst peaking
at JD2457360 (Valtonen et al. \cite{Va16}) does not exhibit the
``standard light curve'' expected for ``impact outbursts''
(Valtonen et al. \cite{Va11}), but shows a symmetric profile.
During the period of September/2015\,--\,May/2017, a number of rather
isolated moderate bursts (e.g., peaking at JD-2457379, -2457450, -2457759
and -2457885; Valtonen et al. \cite{Va17}) were observed to exhibit
symmetric profiles.\\
As discussed in Sect.4 below, the five periodic outbursts
(in 1983.00, 1984.10, 1994.75, 2005.76 and 2007.70) could be decomposed
into a number of subflares with each subflare (or ``elementary flare'')
having a symmetric light
curve. This can be clearly seen in Figures 10\,--\,12 and 14\,--\,15.\\
Since symmetry in the light curves is a common characteristic feature,
the periodic and non-periodic outbursts in OJ287 should originate
from similar mechanisms. However, both the
evolving gas-bubble emitting mechanism (Lehto \& Valtonen \cite{Le96})
and the shock-in-jet models (e.g., Marscher \& Gear \cite{Ma85},
Qian \cite{Qi10}) cannot produce outbursts with symmetric
light-curves. In this work, we would suggest that the symmetric
light-curves observed in OJ287 are produced by lighthouse effect
due to the helical motion of superluminal optical knots in
the jet (Qian \cite{Qi15}).
\subsection{Connection between radio and optical flares}
Investigations of the connection between the radio and optical variations
may provide important clues for the entire phenomena in OJ287 and help to
understand the nature of its multi-waveband emissions. Centimeter
radio bursts (e.g., at 8\,GHz) are typically observed to be delayed with
respect to the optical outbursts by a month or so. The bump-like
structures in the radio light curves are connected with the spike-like
structures in the optical light-curves (Britzen et al. \cite{Br18},
Tateyama et al. \cite{Ta99}, Qian \cite{QiXiv18}).
This kind of radio-optical connection can be
understood as a result of the evolution of the superluminal optical knots
(shocks or blobs) combined with the opacity effects at radio
frequencies (Qian \cite{Qi10}). However, simultaneous flares have been
observed in OJ287 at millimeter and optical wavelengths. For example,
Sillanp\"a\"a (\cite{Si96b}) observed a simultaneous behavior at the
beginning of the year 1992 (JD2448610\,--\,JD2448670): the variations at
optical V-band and at 37\,GHz were not only simultaneous but also had
very similar profiles. According to Valtaoja et al. (\cite{Val20}), during
the period 1990.5--1994.0, the 37\,GHz variations were mostly simultaneous
with the optical variations with no measurable time delays. In addition, the
major 37\,GHz outburst (peaking in 1996.61) has an approximately symmetric
profile with its declining phase closely tracking the optical flare.\\
Most interestingly, the two major optical flares of the double-peaked
outbursts during 1994.7--1996.1 had different connections between the
millimeter flares and the optical flares. For the first optical flare
(1994.8--1995.3) there was no simultaneous 37\,GHz counterpart observed
(Sillanp\"a\"a et al. \cite{Si96b}). But for the second optical flare
(1995.90--1996.10) simultaneous 37\,GHz and optical variations
were observed. According to Valtaoja et al. (\cite{Val20}, Fig.8 therein),
both the optical and millimeter flares have complex multi-component
structures. The millimeter
and the optical variations are not only simultaneous but also have very
similar envelopes. Thus both the optical and radio/mm light curves
could be decomposed into a number of subflares (or elementary flares)
with symmetric profiles and
interpreted in terms of the helical motion model. \\
\begin{figure*}
\centering
\includegraphics[width=6cm,angle=-90]{znxnM287C10-heli.ps}
\includegraphics[width=6cm,angle=-90]{znxnM287C11-heli.ps}
\caption{Model simulation of the superluminal motion of knot C10 (left
panel) and knot C11 (right panel). The black thin dashed lines denote the
precessing common trajectories. The black thick dashed lines represent
the model trajectories with outer curvatures. The red dashed lines indicate
the helical trajectory fits. The helical motions of both knots have very
small pitch angles on parsec scales (Qian \cite{QiXiv18}).}
\end{figure*}
Obviously, the close connection between the millimeter and optical flares
observed in the 1995.9 periodic outburst seems important, implying that:
(1) The optical and radio flares should be produced in
co-spatial emitting regions\footnote{Here ``co-spatial'' would mean some
special structure (or emission distribution) of the optical-millimeter
source in the direction perpendicular to its motion.} and originate from
a common synchrotron process in the relativistic jet; (2) The
simultaneity in the millimeter and optical variations may disfavor
shock-in-jet models, because optical shocks typically evolve through
three stages (Compton\,-\,synchrotron\,-\,adiabatic stages), resulting
in different optical-radio relationships (Qian et al. \cite{Qi10},
Qian \cite{Qi96a}, \cite{Qi96b}, \cite{Qi97},
Marscher \& Gear \cite{Ma85}, Valtaoja et al. \cite{Val88},
\cite{Val92}, Litchfield et al. \cite{Li95}).\footnote{This does not
exclude the possibility that the superluminal components are steady shocks
on time-scales of ten days or so.}
Thus shock-in-jet models seem not able to produce simultaneous mm-radio and
optical variations with very similar symmetric light curves.
We would suggest that lighthouse effect due to helical motion of
superluminal optical knots (shocks or blobs) may be the most plausible
mechanism to interpret the simultaneity and symmetry observed in the
mm-radio and optical variations observed in OJ287. The proposed helical
motion model seems to work well, as described in Sects. 3 and 4.\\
We point out that helical motion of radio superluminal knots might also
exist. For example, model-simulation of the kinematics of the
superluminal components C10 and C11 of OJ287 in terms of helical models
has been tried in Qian (\cite{QiXiv18}), as shown in Figure 3.
These helical trajectories have very small pitch angles and could not be
easily discovered. It should be noted that in the precessing jet-nozzle
models (Qian et al. \cite{Qi19}) the flux density curves of the optical
and radio knots can be explained in terms of lighthouse effect caused by
their helical motions, but different knots may move along different
helical trajectories. This is different from the Doppler boosting and
beaming effects caused by the precession of the whole jet
(with a $\sim$12\,yr period) which do not contribute to the
radio and optical flaring activities with timescales of ten days or so.
\begin{table}
\caption{Parameters for model simulation of the R-band light curves of the
periodic outburst in December/2015 (peaking at JD2457360) and the
March/2016 outburst (peaking at JD2457450). $\Gamma$ -- Lorentz factor
of the superluminal optical knot, ${\delta}_{max}$ -- maximum Doppler
factor, ratio ${\delta}_{max}$/${\delta}_{min}$,
$\rm{S_{int}}$($\rm{mJy}$) -- intrinsic flux density of the optical knot,
base-level (underlying jet) flux density $\rm{S_b}$\,=\,3.5\,mJy
at R-band, FWHM (full width at half maximum) of the modeled light curve
(day). $t$\,=\,flare time\,=\,day-2457000.}
\begin{flushleft}
\centering
\begin{tabular}{llllll}
\hline
$t$ & $\Gamma$ & ${\delta}_{max}$ & ratio & $\rm{S_{int}}$ & FWHM \\
\hline
360 & 9.5 & 18.88 & 4.11 & 1.16$\times{10^{-4}}$ & 5.9 \\
450 & 9.5 & 18.88 & 4.11 & 1.16$\times{10^{-4}}$ & 5.9 \\
\hline
\end{tabular}
\end{flushleft}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=6cm,angle=-90]{TValRydel4heli94.ps}
\includegraphics[width=6cm,angle=-90]{TSRydel4heli94.ps}
\caption{Left panel: model simulation of the R-band light-curves for the
outburst in December/2015 observed by Kushwaha et al. (\cite{Ku18};
labeled by ``TR'') and by Valtonen et al. (\cite{Va16}; labeled
by ``TVal-R''). Right panel: model simulation of the R-band light curves for
the combination of the December/2015 outburst and the March/2016 outburst
(Kushwaha et al. \cite{Ku18}; labeled
by ``SR''). The light curve of the March/2016 outburst has been shifted
in time backward by 89.4 days. The combination of the light-curves provides
a sufficient time coverage to clearly exhibit the symmetric profiles with
similar rising and declining time scales.}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=5.5cm,angle=-90]{TKydel4heli94.ps}
\includegraphics[width=5.5cm,angle=-90]{TSKydel4heli94.ps}
\caption{Model simulation of the K-band light curves
for the outbursts in December/2015 (labeled by ``TK'') and in March/2016
(labeled by ``SK''). In the right panel the observing time
of the March/2016 outburst has been shifted backward by 89.4 days.
The light curves are very well simulated for both outbursts by the
lighthouse model.}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=6cm,angle=-90]{TJydel4heli94.ps}
\includegraphics[width=6cm,angle=-90]{TSJydel4heli94.ps}
\includegraphics[width=6cm,angle=-90]{TIydel4heli94.ps}
\includegraphics[width=6cm,angle=-90]{TSIydel4heli94.ps}
\caption{Model simulation of the J-band (top panels) and I-band
(bottom panels) light curves for the outbursts in December/2015 (labeled by
``TJ'' and ``TI'') and in March/2016 (labeled by ``SJ'' and ``SI'').
In the right panels the
observing time of the March/2016 outburst has been shifted backward
by 89.4 days. The observed I-band light curves are very well fitted by the
helical motion model. The peak of the model light curve for the J-band
is much higher than the observed one and this should have been expected,
because the assumed model spectral index at J-band $\alpha$=0.8
is larger than the observed one.}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=6cm,angle=-90]{TRydel4heli94.ps}
\includegraphics[width=6cm,angle=-90]{TSValRydel4heli94.ps}
\includegraphics[width=6cm,angle=-90]{TVydel4heli94.ps}
\includegraphics[width=6cm,angle=-90]{TSVydel4heli94.ps}
\caption{Model simulation of the R-band (top panels) and V-band
(bottom panels) light curves for the outbursts in
December/2015 and in March/2016. In the right panels the observing time
of the March/2016 outburst has been shifted backward by 89.4 days.
The light curves at both bands are very well simulated by the helical
motion model. The good fits to the combined light curves (right panels)
with a common helical motion model indicate the strong similarity
in optical variations between the December/2015 and the synchrotron
outburst in March/2016 and their common radiation mechanism.}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=6cm,angle=-90]{TBydel4heli94.ps}
\includegraphics[width=6cm,angle=-90]{TSBydel4heli94.ps}
\includegraphics[width=6cm,angle=-90]{TUydel4heli94.ps}
\includegraphics[width=6cm,angle=-90]{TSUydel4heli94.ps}
\caption{Model simulation of the B-band (top panels) and U-band
(bottom panels) light curves for the outbursts in December/2015 and
in March/2016. In the right panels the observing time of the March/2016
outburst has been shifted backward by 89.4 days. The good fits of the
combined light curves in both bands demonstrate the applicability of
the helical-motion model in the high-frequency region, revealing
the December/2015 outburst has a variability behavior very similar to
that of the synchrotron outburst in March/2016.}
\end{figure*}
\subsection{Lighthouse effect and Doppler boosting}
We will explain the multi-wavelength light curves of the optical
outbursts in terms of lighthouse effect. For simplicity, it is assumed
that the superluminal optical knots move along helical trajectories around
the rectilinear jet axis which precesses around the precession axis,
as sketchily shown in Figure 1 (left panel). In this case the
lighthouse effect results in a symmetric light curve via Doppler
boosting per revolution. In addition, we assume that the observed
flux density $\rm{S_{obs}}$ of the
optical outbursts at any frequency
consists of two constituents: a steady base-level ($\rm{S_b}$) and the
flaring part $\rm{S}$(t):
\begin{equation}
{\rm{S_{obs}}}(t)={\rm{S(t)}}+{\rm{S_b}}
\end{equation}
Using relativistic jet models, the evolution of the flux
density of a superluminal optical knot can be written as
\begin{equation}
{\rm S}(t)={\rm S_{int}}\,{\delta(t)}^{p+\alpha},
\end{equation}
where $\rm{S_{int}}$ is the intrinsic flux density (in the comoving frame of
the optical knots). For moving optical knots $p$\,=\,3 (Blandford \& K\"onigl
\cite{Bl79}) and $\alpha$ is the spectral
index. In our model simulation $\rm{S_{int}}$ is assumed to be constant
\footnote{${\rm{S_{int}}}$=constant is a simplified assumption. Taking the
rising and declining parts of the outbursts into account would result in
truncations of the model light curves at both start- and end-points,
more clearly separating the contributions from consecutive subbursts.} and
the spectral index in the V-band equals 1.0. The broken power-law spectrum in
the NIR-optical-UV bands is
assumed as: $\alpha$=0.8 in the NIR\,--\,optical (V) bands and
$\alpha$=1.3 in the optical (V)\,--\,UV bands. The steady base-level
spectrum $\rm{{S_b}}$($\nu$) is listed in Table 3 and shown in Figure 2
(middle panel). The modeled intrinsic flux densities $\rm{S_{int}}{(\nu)}$
for the optical knot are shown in Figure 2 (right panel)
for I-, V- and U-bands as a demonstrating example. The multi-wavelength
light curves of the optical knot are only determined by the Doppler
boosting.\\
In the simulation of the outburst light curves the modeled flux
density of the outbursts (at V-band) will vary in the range from
$\rm{S_{int}}{\delta}_{min}^{4}$ to $\rm{S_{int}}{\delta}_{max}^{4}$,
while the total flux density varies in the range from
$\rm{S_b}$+$\rm{S_{int}}{\delta}_{min}^{4}$
to $\rm{S_b}$+$\rm{S_{int}}{\delta}_{max}^{4}$. Because the
base-level flux $\rm{S_b}$ does not
vary during the outbursts, the variability amplitude of the total
flux will be smaller than that of the flaring components.
This is a characteristic feature of our
precessing nozzle model, distinguishing it from the precessing jet models
usually used in the literature, where the precession of the entire jet
also causes variations in the base-level flux density.
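The Doppler-boosting relations above can be sketched numerically. The sinusoidal parametrization of the viewing angle $\theta(t)$ and its range are purely illustrative assumptions standing in for the helical geometry, not the model's fitted kinematics; $\Gamma$, $\rm{S_{int}}$ and $\rm{S_b}$ take the values of Table 2:

```python
import math

GAMMA = 9.5                    # bulk Lorentz factor of the optical knot
BETA = math.sqrt(1.0 - 1.0 / GAMMA**2)
P_ALPHA = 4.0                  # exponent p + alpha, with p = 3, alpha = 1
S_INT = 1.16e-4                # intrinsic flux density (mJy)
S_BASE = 3.5                   # steady base-level flux at R band (mJy)

def doppler(theta):
    """Doppler factor for a knot viewed at angle theta (rad)."""
    return 1.0 / (GAMMA * (1.0 - BETA * math.cos(theta)))

def s_obs(theta):
    """Observed flux: steady base level plus Doppler-boosted knot flux."""
    return S_BASE + S_INT * doppler(theta) ** P_ALPHA

# Illustrative stand-in for the helical geometry: theta(t) oscillating
# sinusoidally between theta_min and theta_max over one revolution.
def theta_of_t(t, period, theta_min=0.02, theta_max=0.08):
    mid = 0.5 * (theta_max + theta_min)
    amp = 0.5 * (theta_max - theta_min)
    return mid + amp * math.cos(2.0 * math.pi * t / period)
```

Because $\theta(t)$ is symmetric about the mid-point of each revolution and $\rm{S_b}$ is constant, the resulting light curve per revolution is symmetric, with the flaring amplitude set by ${\delta}_{max}/{\delta}_{min}$ alone.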
\subsection{Selection of parameters for simulation}
In this work we shall use simple approaches to make model simulation
of the observed light curves for all the outbursts concerned, that is,
all the model parameters listed in Table 1 are assumed to be constant.
The parameters describing the spectral features of the outbursts
(the broken power-law spectra) are also taken to be constant.
Notwithstanding this simplification, the variability behaviors of all
the outbursts (both periodic and non-periodic) can be well
interpreted in terms of our lighthouse scenario. At the same time,
there remains a wide scope for choosing the model parameters to improve
the simulation for any individual outburst. For example, adopting a
different value of $\omega$ allows us to consider the helical motion
(and lighthouse
effect) at a different precession phase. Changes in parameters $A_0$ and
d$\phi(z)$/dz can be used to investigate various patterns of
helical motion of the superluminal optical knots. Specifically, a slight
local change of the spectral index at J-band could result in a better fit to
the J-band flux densities observed in the December/2015 and March/2016
outbursts (see Fig.6, upper panels). In addition, intraday
variations (IDV) could also be included for explaining the rapid
variations, which might be due to interstellar
scintillation or turbulent fluctuations in the emitting sources
(Qian et al. \cite{Qi91b},
Melrose \cite{Me94}, Marscher et al. \cite{Ma08}, Marscher \cite{Ma14}).
It is found from the light curve simulations that intraday spectral
variations might be an important ingredient, which could result
in the data-points deviating from the model light curves.\\
Therefore, by adjusting the model parameters, the model fits to
the light curves of all the outbursts discussed in this paper could be
further improved.
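The lighthouse mechanism invoked throughout this paper can be illustrated with a minimal numerical sketch. All parameter values below are assumptions chosen for illustration (not the fitted values of Table 4): the viewing angle of the knot's velocity vector is taken to oscillate sinusoidally over one helical revolution, and the observed flux follows $\rm{S_b}+\rm{S_{int}}{\delta}^4$ as above.

```python
import math

def doppler(gamma, theta):
    """Doppler factor of a knot with bulk Lorentz factor gamma
    at viewing angle theta (radians)."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return 1.0 / (gamma * (1.0 - beta * math.cos(theta)))

# Illustrative lighthouse sketch: over one helical turn the angle between the
# knot velocity and the line of sight oscillates about a mean value.
gamma = 9.5                     # bulk Lorentz factor (cf. values in Tables 6-10)
theta_mean = math.radians(3.0)  # mean viewing angle (assumed)
theta_amp = math.radians(2.5)   # half-amplitude of the oscillation (assumed)
period = 90.0                   # days per helical revolution (cf. the ~90-day offset)
S_b, S_int = 3.5, 1.0e-4        # base level and comoving flux density, mJy (assumed)

def flux(t):
    # theta is minimal (delta maximal) at t = 0, the flare peak
    theta = theta_mean - theta_amp * math.cos(2.0 * math.pi * t / period)
    return S_b + S_int * doppler(gamma, theta)**4

# Because theta(t) is an even function of t about the peak, the modeled
# light curve is symmetric: equal flux on the rising and decaying wings.
print(flux(0.0), flux(-10.0), flux(10.0))
```

This is the geometric origin of the symmetric flare profiles used in all the model fits below: the Doppler factor traces the viewing-angle oscillation, which is time-symmetric about its minimum.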
\subsection{Relevant MHD theories}
Most astrophysicists believe that relativistic jets are formed by rotating
magnetic fields in the magnetospheres of the black-hole/accretion-disk
systems, as the magnetohydrodynamic (MHD) theories of jet formation
indicate (Blandford \& Znajek \cite{Bl77}, Blandford \& Payne \cite{Bl82},
Camenzind \cite{Cam90}, Beskin \cite{Be10}, Meier \cite{Mei01}, Vlahakis \&
K\"onigl \cite{Vl04}). However, few observations have provided direct and
compelling evidence for helical magnetic fields and helical motions on
parsec scales (e.g., Gabuzda et al. \cite{Ga04}, \cite{Ga15}).
Now, as demonstrated in this work, the
prevailing symmetric properties of the light curves of the optical
outbursts observed in OJ287 may be recognized as favorable evidence of
helical motion of its superluminal optical knots along helical magnetic
fields in the relativistic jet. This would also help to investigate the
helical motion of superluminal knots in other blazars. However, for
OJ287, the interpretation of the
quasi-periodicity in its variability behavior and the timing of the
double-peaked outbursts remains a challenge.
\subsection{A brief summary}
For understanding the phenomena observed in blazar OJ287, comparing
the emission properties of the periodic outbursts (the so-called ``disk-impact
outbursts'') with those of the non-periodic optical outbursts (normal
synchrotron outbursts) may be an appropriate approach.
Based on the above assumptions we will be able to
simulate the multi-wavelength light curves of both the December/2015
and March/2016 outbursts using a very simple model, which only involves a
superluminal optical knot (with a steady broken power-law synchrotron
spectrum) moving along a steady helical trajectory. As shown by the model
simulation results given in the next section, this simplified model is
already sufficient to explain most of the basic properties of the
temporal and spectral variations of the two optical outbursts, showing
their very similar variability behaviors and the nature of their
optical/UV emission. Both the optical outbursts in December/2015 and
March/2016 can be well interpreted as produced by the lighthouse effect
due to one superluminal optical knot moving along a helical trajectory
through two helical revolutions: the
lighthouse effect first produces the December/2015 outburst and then
the March/2016 outburst 90 days later. This would imply, as suggested
in Qian (\cite{Qi15}), that there may exist a stable and perfect
collimation/acceleration zone (with strong magnetic fields) in OJ287,
providing the necessary
physical conditions (injection of relativistic electrons and helical motion)
to sustain such optical outburst behavior over long times, although
in most cases only one outburst is caused by one helical revolution due to
the opening of the jet or intrinsic dimming of the optical knots in the
outer parts of the jet.\\
We will also simulate the light curves of five other periodic
optical outbursts, in 1983.00, 1984.10, 1994.75, 2005.76, and 2007.70, and
of a few isolated moderate outbursts (in 1993.93, 1994.17 and
at JD2457380), showing the common properties of these outbursts
and their common mechanism for optical radiation production.\\
All the data on the light curves of the optical outbursts used in this work
were collected from: Valtonen et al.(\cite{Va16}, \cite{Va17}, \cite{Va08}),
Kushwaha et al. (\cite{Ku18}),
Valtaoja et al. (\cite{Val20}) and Sillanp\"a\"a et al. (\cite{Si96a},
\cite{Si96b} and private communication).\\
In this work, we adopt a $\Lambda$CDM cosmological model with the
parameters ${\Omega}_m$\,=\,0.27, ${\Omega}_{\Lambda}$\,=\,0.73 and
$\rm{H_0}$=71\,km\,${\rm{s}}^{-1}$\,${\rm{Mpc}}^{-1}$ (Spergel et al.
\cite{Sp03}, Komatsu et
al. \cite{Ko09}). 1\,mas\,=\,4.5\,pc (Hogg, \cite{Ho99}).
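The adopted angular scale can be checked with a short numerical integration of the comoving distance for the quoted cosmology. The redshift $z\,{\simeq}\,0.306$ of OJ287 used here is an assumption (it is not quoted in this section).

```python
import math

# Check of the adopted scale 1 mas = 4.5 pc for Omega_m = 0.27,
# Omega_Lambda = 0.73, H0 = 71 km/s/Mpc, assuming z = 0.306 for OJ287.
c = 299792.458      # speed of light, km/s
H0 = 71.0           # Hubble constant, km/s/Mpc
Om, OL = 0.27, 0.73
z = 0.306           # redshift of OJ287 (assumed, not quoted in this excerpt)

def E(zp):
    """Dimensionless Hubble parameter for a flat LambdaCDM model."""
    return math.sqrt(Om * (1.0 + zp)**3 + OL)

# Comoving distance via midpoint integration of (c/H0) dz'/E(z'):
n = 10000
dz = z / n
D_C = (c / H0) * sum(dz / E((i + 0.5) * dz) for i in range(n))   # Mpc
D_A = D_C / (1.0 + z)                                            # angular-diameter distance, Mpc

mas = math.radians(1.0 / 3.6e6)   # 1 milliarcsecond in radians
scale_pc = D_A * mas * 1.0e6      # pc subtended by 1 mas
print(round(scale_pc, 2))         # close to the adopted 4.5 pc/mas
```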
\begin{figure*}
\centering
\includegraphics[width=6cm,angle=-90]{TW1ydel4heli94.ps}
\includegraphics[width=6cm,angle=-90]{TSW1ydel4heli94.ps}
\includegraphics[width=6cm,angle=-90]{TW2ydel4heli94.ps}
\includegraphics[width=6cm,angle=-90]{TSW2ydel4heli94.ps}
\includegraphics[width=6cm,angle=-90]{TM2ydel4heli94.ps}
\includegraphics[width=6cm,angle=-90]{TSM2ydel4heli94.ps}
\caption{Model simulation of the UVW1-band (top panels), UVW2-band
(middle panels) and UVM2-band (bottom panels) light curves for the
outburst in December/2015 and in March/2016. For the UVW2-band the
data-points observed by Valtonen et al. (\cite{Va16}) are also
incorporated (labeled by TVal-UVW2 in green). In the right panels
the observing time of the March/2016 outburst has been shifted backward
by 89.4 days. The good fits of the combined UV-band light curves
(right panels) indicate the applicability of the lighthouse model
in the UV-bands
and show that the December/2015 outburst has a variability behavior
similar to that of the synchrotron outburst in March/2016.}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=8cm,angle=-90]{T83peak1-4.ps}
\caption{Model simulation of the light curve for the 1983.00
periodic optical outburst, which
has been decomposed into three subbursts with symmetric profiles. The violet
line denotes the model-fit of the total flux density curve.
The origin of the time-axis is 1983.00. Data are taken from
Valtonen et al. (\cite{Va08}).}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=8cm,angle=-90]{T84peak1.ps}
\caption{Model simulation of the light curve for the 1984.10
periodic optical outburst, which
has been decomposed into two subbursts with symmetric profiles. The magenta
line represents the model-fit of the total flux density curve. The dashed
lines denote the model-fits of the two subbursts, respectively. The origin
of the time-axis is 1984.05. Data are taken from Valtaoja et al.
(\cite{Val20}).}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=8cm,angle=-90]{T94peak3e-2.ps}
\caption{Model simulation of the light curve for the 1994.75 periodic
optical outburst which has
been decomposed into five subbursts with symmetric profiles. The violet
line denotes the model-fit of the total flux density curve.
Data are taken from Sillanp\"a\"a et al. (\cite{Si96a};
OJ-94 project).}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=6cm,angle=-90]{T94peak1.ps}
\includegraphics[width=6cm,angle=-90]{T94peak2.ps}
\caption{Model simulation of the light curves for two isolated moderate
outbursts (in 1993.93 and 1994.17) with each having a symmetric profile.
Data are taken from Sillanp\"a\"a et al. (\cite{Si96a}, OJ-94 project).}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=8cm,angle=-90]{T05peak1-4.ps}
\caption{Model simulation of light curve for the 2005.76 periodic
optical outburst which is decomposed into five subbursts with symmetric
profiles. The violet line denotes the
model-fit of the total flux density curve. Time-axis denotes [day-2005.76].
Data are taken from Valtonen et al. (\cite{Va08}).}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=8cm,angle=-90]{T07peak1-4.ps}
\caption{Model simulation of the light curve for the 2007.70 periodic
optical outburst which is decomposed into three subbursts with
symmetric profiles. The violet line denotes
the model-fit of the total flux density. Time-axis denotes [day-2007.70].
Data are taken from Valtonen et al. (\cite{Va08}).}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=6cm,angle=-90]{TValRydel4heli94-2.ps}
\includegraphics[width=6cm,angle=-90]{TRydel4heli94s.ps}
\caption{Model simulation of the light curve for the isolated non-periodic
optical burst peaking at JD2457380 (right panel, R-band, highly polarized
with a polarization degree of $\sim$40\%) and its comparison with the
model simulation of the light curve of the major periodic optical
outburst peaking at JD2457360 (left panel). Both the light curves show
very similar symmetric profiles and are well fitted by the helical
motion model. The data points for the major outburst closely reproduce its
observed light curve after the small-amplitude fluctuations on it were
removed. Two small spikes (at JD2457364 and JD2457368) superposed
on the major outburst are not displayed.}
\end{figure*}
\section{Simulation results of multi-wavelength light curves}
The model simulation results of the multi-wavelength light curves
(in NIR-optical-UV bands)
for the periodic optical outburst in December/2015 (peaking at JD2457360)
and the non-periodic synchrotron outburst in March/2016
(peaking at JD2457450) are shown
in Figures 4--9. The relevant model parameters are listed in Table 4. \\
Firstly, in Figure 4 (left panel), we display the model simulation
of the R-band light curves of the December/2015 outburst observed
by Valtonen et al. (\cite{Va16}; labeled by ``TVal-R'') and by Kushwaha
et al. (\cite{Ku18}; labeled by ``TR''). It can be seen that the combination
of the data-points from Valtonen et al. (during the rising phase) and
Kushwaha et al. (during the decaying phase) forms a well-defined
profile that is well reproduced by the modeled symmetric light curve. In the right panel
of Figure 4, the R-band light curve of the March/2016 outburst (labeled
by ``SR'') observed by Kushwaha et al. has been incorporated in the
simulation with its time-axis being shifted backward by 89.4 days.
It can be seen that the R-light curves of both the December/2015 and
March/2016 outbursts can be well simulated in terms of a common
symmetric profile. Since the March/2016 outburst is definitely a
non-thermal (synchrotron) flare with high optical polarization degrees,
the strong similarity between the variability behaviors of the
December/2015 and March/2016 outbursts leads us to recognize that
the December/2015 outburst\footnote{The December/2015 outburst peaking
at JD2457360 was identified as the periodic ``impact thermal flare''
in the disk-impact model.} is also a synchrotron flare generated in
the relativistic jet. Thus both outbursts can be interpreted in terms of
lighthouse effect due to the helical motion of one superluminal optical
component during two helical revolutions. \\
Now we present the simulation results of the multi-wavelength
light curves for each waveband individually.
\begin{itemize}
\item The model simulation results of the K-band light curves for both
outbursts in December/2015 and March/2016 are shown in Figure 5. Here
two panels are presented: the left panel displays the simulation of the
two light curves in time sequence and the right panel shows the simulation
of the combined light curve. It can be seen that the K-band light curves of
both outbursts are well fitted by the helical motion model, implying that
the outburst in December/2015 has its variability behavior very similar
to that of the synchrotron outburst in March/2016.
\item The model simulation results of the J-band and I-band light curves
for the outbursts in December/2015 and March/2016 are shown
in Figure 6. The upper panels show that the observed J-band peak
is much lower than the model light curve. This is to be expected,
because the assumed model spectral index at J-band,
${\alpha}$\,=\,0.8, is larger than the
observed one (Kushwaha et al.
\cite{Ku18}). The observed I-band light curves for both outbursts
are well fitted by the helical motion model.
\item The model simulation of the R-band light curves for both
outbursts in December/2015 and in March/2016 are shown in Figure 7
(top panels). The
left panel displays the simulation of the light curves for both outbursts
in time sequence and the right panel shows the simulation of the
three light curves (labeled by ``TR'', ``SR'' and ``TVal-R'') in a
combined form. It can be seen that the R-band light curve of the outburst
in December/2015 has its variability behavior very similar to that
of the synchrotron outburst in March/2016: both are well fitted by the
helical motion model with symmetric profiles
having similar rising and decaying time-scales.\\
Similarly, the model simulation results of the V-band
light curves for the outbursts in December/2015 and in March/2016
are displayed in Figure 7 (bottom panels).
It can be seen that both the observed V-band light curves (whether
presented in time sequence or in combined form) are very well fitted
by the helical motion model. Thus at both R- and V-bands the outburst in
December 2015 has its variability behavior very similar to that of
the synchrotron outburst in March/2016, implying that their emission
may originate from a similar mechanism. In this work we suggest that
both outbursts are produced by the helical motion of one superluminal
optical knot via lighthouse effect through two helical revolutions,
although we cannot exclude the possibility that they might be independent
flares with very similar temporal and spectral variations.
\item The model simulation results of the B-band and U-band light curves
for the outbursts in December/2015 and in March/2016 are shown in Figure 8.
It can be seen that the observed light curves for both outbursts are
well fitted by the helical motion model, showing that even in high
frequency regions (B- and U-bands) the outburst in December/2015 has
a variability behavior similar to that of the synchrotron outburst in
March/2016. The symmetry of the outburst profiles characteristic
in the low frequency region (K-band to V-band, modeled spectral index
$\alpha$\,=\,0.8) persists in the high frequency region
(modeled spectral index $\alpha$\,=\,1.3).
\item The model simulation results of the UVW1-,
UVW2- and UVM2-band light curves for the outbursts in December/2015 and
in March/2016 are shown in Figure 9. It can be seen that the light
curves observed at the three bands for both outbursts are well fitted
by the helical motion model. Thus the
outburst in December/2015 has its variability behavior very similar to that
of the synchrotron outburst in March/2016 in the UV-bands. But it should
be noticed that rapid (intraday) variability due to interstellar
scintillation (or extinction) or turbulent fluctuations in optical knots
(Qian et al. \cite{Qi91b}, Marscher et al. \cite{Ma08},
Marscher \cite{Ma14}) might cause scattering of the observational
data-points.
\end{itemize}
Based on the model simulation of the multi-wavelength light curves for
the periodic outburst in December/2015 and the non-periodic synchrotron
outburst in March/2016, as shown in Figures 4\,--\,9, we come to the
conclusion that the December/2015 outburst, which was claimed to be a
bremsstrahlung flare, has a variability behavior (both temporal and
spectral) very similar to that of the
synchrotron flare in March/2016: both have similar peak flux densities
and spectral features, and their flux density curves have symmetric
profiles with similar rising and decaying time-scales. This would imply
that the December/2015 outburst may also be a synchrotron flare.
\section{Light curve structure of periodic optical outbursts}
As argued in the previous section, the symmetry in the optical light curves
and the similarity between the periodic and non-periodic optical outbursts
may be important for understanding the optical variations in OJ287. In order
to further investigate this behavior and clarify the nature of the phenomena
in OJ287, we will make model simulation of the light curves for the five
periodic optical outbursts observed in 1983.00, 1984.10, 1994.75, 2005.76
and 2007.70, and show that their light curves can be decomposed into a
number of subflares (or elementary flares) with symmetric profiles.
In addition, we will make model simulation for some well-resolved
(or isolated) non-periodic optical bursts to reveal the similarity
in optical variations between the periodic outbursts
(claimed as thermal flares)
and non-periodic outbursts (recognized as synchrotron flares).
In combination with the results presented in Sect.3 for the
December/2015 and
March/2016 outbursts, it can be seen that the consistency in the
variability behavior of all these outbursts provides
important clues for understanding the
nature of their emission, affording persuasive evidence that the
phenomena observed in blazar OJ287 may be caused by the lighthouse effect
due to the helical motion of superluminal
optical knots (plasmons or shocks) within the relativistic jet.\\
\begin{table}
\caption{Base-level flux densities (V-band) for the six periodic optical
outbursts.}
\begin{flushleft}
\centering
\begin{tabular}{ll}
\hline
Flare & $\rm{S_b}$(mJy)\\
\hline
1983.00 & 5.0 \\
1984.10 & 2.8 \\
1994.75 & 1.0 \\
2005.76 & 2.0 \\
2007.70 & 5.5 \\
2015.87 & 3.5 \\
\hline
\end{tabular}
\end{flushleft}
\end{table}
For simplicity and easy comparison, we apply the same approach as
used in Sect.3 to simulate
the light curves of the five periodic outbursts. Each outburst is assumed
to be composed of two components: the underlying jet emission (or
base-level component) and the flaring component. The base-level component
is taken to be constant during
individual outbursts, but may change on longer time-scales due to jet
precession and intrinsic variations in the underlying jet (jet parameters
and bulk Lorentz factor).
For the six periodic outbursts simulated, the base-level flux densities
(at V-band) are listed in Table 5.
\subsection{Periodic optical outburst in 1983.00}
The outburst in 1983.00 is the first optical flare of the 1983\,--\,1984
double-peaked outbursts.
The simulation results for the 1983.00 periodic outburst are shown in
Figure 10. Its total flux density curve has
been decomposed into three subflares. The first flare was identified
as the ``impact (thermal) outburst''. Obviously, its light curve does not
have the ``standard shape'' expected for impact outbursts. It looks like
a single elementary flare and its light-curve can be very well fitted
with a symmetric profile in terms of the helical motion
(or lighthouse) model. Similarly, the light-curves of the other two
subflares can also be well fitted. The model parameters
for the three subflares are given in Table 6. \\
\begin{table}
\caption{Model simulation results for the 1983.00 periodic optical
outburst (V-band). $\rm{S_b}$\,=\,5\,mJy.
$\Gamma$\,--\,Lorentz factor of the
superluminal optical knot, $\rm{{\delta}_{max}}$\,=\,
maximum Doppler factor,
ratio\,=\,$\rm{{\delta}_{max}}$/$\rm{{\delta}_{min}}$,
$\rm{S_{int}}$(mJy)\,--\,intrinsic
(comoving) flux density. $t$(day)\,=\,flare time\,=\,day-1983.00.
FWHM (day)\,=\,full width at half maximum of the model light curve.}
\begin{flushleft}
\centering
\begin{tabular}{llllll}
\hline
t & $\Gamma$ & $\rm{{\delta}_{max}}$ & ratio & $\rm{S_{int}}$ & FWHM \\
\hline
9 & 8.0 & 15.90 & 3.20 & 4.27$\times{10^{-4}}$ & 10.0 \\
26 & 9.5 & 18.84 & 4.10 & 1.26$\times{10^{-4}}$ & 5.1 \\
37 & 8.5 & 16.89 & 3.49 & 1.07$\times{10^{-4}}$ & 7.1 \\
\hline
\end{tabular}
\end{flushleft}
\end{table}
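As a consistency check of our own (not a computation presented by the authors), the tabulated maximum Doppler factors can be compared with the theoretical upper limit at zero viewing angle, ${\delta}({\theta}\,=\,0)\,=\,{\Gamma}(1+{\beta})\,=\,1/[{\Gamma}(1-{\beta})]$:

```python
import math

def delta(gamma, theta):
    """Doppler factor for bulk Lorentz factor gamma and viewing angle theta (rad)."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return 1.0 / (gamma * (1.0 - beta * math.cos(theta)))

# (Gamma, delta_max) pairs from Table 6 for the 1983.00 subflares:
table6 = [(8.0, 15.90), (9.5, 18.84), (8.5, 16.89)]

for gamma, d_tab in table6:
    d_limit = delta(gamma, 0.0)   # theta = 0 upper limit, Gamma*(1+beta)
    # The tabulated values sit just below this limit, consistent with a
    # small but nonzero minimum viewing angle along the helical trajectory.
    print(gamma, d_tab, round(d_limit, 2))
```

For $\Gamma$\,=\,8.0 the limit is $\simeq$15.94, for $\Gamma$\,=\,9.5 it is $\simeq$18.95, and for $\Gamma$\,=\,8.5 it is $\simeq$16.94, each marginally above the tabulated $\rm{{\delta}_{max}}$, as expected if the knot's velocity sweeps close to, but not exactly through, the line of sight.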
\begin{table}
\caption{Model simulation results for the 1984.10 periodic
optical outburst. Explanation of the parameters and units as in Table 6.
$\rm{S_b}$=2.8\,mJy.}
\begin{flushleft}
\centering
\begin{tabular}{llllll}
\hline
t & $\Gamma$ & $\rm{{\delta}_{max}}$ & ratio & $\rm{S_{int}}$ &FWHM \\
\hline
48 & 7.5 & 14.90 & 2.93 & 3.44$\times{10^{-4}}$ & 12 \\
67 & 10.0 & 19.9 & 4.45 & 5.93$\times{10^{-5}}$ & 5 \\
\hline
\end{tabular}
\end{flushleft}
\end{table}
\subsection{Periodic optical outburst in 1984.10}
Interestingly, the second optical flare of the 1983\,--\,1984
double-peaked outbursts (during 1984.1--1984.3; see Fig.6
in Valtaoja et al. \cite{Val20})
clearly exhibits a structure consisting of two rather separated
outbursts peaking at 1984.18 and 1984.23, respectively.
Like the 1983.00 outburst, the major flare (peaking at 1984.18 with
a peak flux of $\sim$20\,mJy at V-band) is a single elementary one, and
its flux density curve, displayed in Figure 11, can be very well simulated
by a symmetric profile. The relevant model
parameters for the two subflares are listed in Table 7.
Thus for the 1983\,--\,1984 double-peaked outbursts, both major
flares exhibit symmetry in their light curves. As discussed below,
the major flares of the outbursts in 2007.70 and 2015.87 (Figs.15 and 16)
also reveal this characteristic feature. Symmetry in the optical outburst
light curves cannot be explained in terms of the disk-impact scenario,
where the flux density curves of ``impact outbursts'' emitted from evolving
gas-bubbles would have non-symmetric profiles with a rapid rising phase
and a slower decaying phase.
\subsection{Periodic optical outburst in 1994.75}
The 1994--1996 periodic double-peaked outbursts were intensively monitored
via international cooperation of the ``OJ-94 project'' (Takalo
\cite{Tak96a}, Takalo et al. \cite{Tak96b}), starting at 1994.75
and 1995.84, respectively. During this period the optical flares
may be related to the ejection of superluminal radio (15\,GHz)
components C1, C2, C3 and C4 (Britzen et al. \cite{Br18},
Qian \cite{QiXiv18}, Tateyama et al. \cite{Ta99}).\\
The simulation results for the first outburst in 1994.75
\footnote{In Lehto \& Valtonen (\cite{Le96}) and Valtonen \& Lehto
(\cite{Va97}) this outburst was identified as the ``disk-impact
thermal outburst''. Recently Dey et al. (\cite{De18}) suggested that
its starting time should be changed to 1994.60 and the corresponding ``impact
outburst'' was missed due to lack of monitoring observations.}
(peaking at 1994.86) are shown in Figure 12. The observational
data are taken from Sillanp\"a\"a et al. (\cite{Si96a},
OJ-94 project; private communication). Its total flux density curve has been
decomposed into five subflares simulated with symmetric profiles.\\
Two rather isolated moderate outbursts peaking at JD2449326
and JD2449415 were also simulated and the results are shown in
Figure 13. The relevant model parameters are described in Table 8.
The successful simulation of the light curve for the 1994.75 outburst
suggests that any complex outburst observed in OJ287
can be decomposed into a few elementary flares and explained
in terms of the proposed helical motion (or lighthouse) model.
\subsection{Periodic optical outburst in 1995.84}
The outburst starting at 1995.84 was the second one of the
1994\,--\,1996 double-peaked outbursts. Its light curve exhibits a
complex structure, showing some distinct features which may be
meaningful for understanding the phenomena in OJ287 (Valtaoja et al.
\cite{Val20}).\\
Firstly, the ``impact flare'' (during 1995.84--1995.90) identified by the
disk-impact model (Dey et al. \cite{De18}) was a small one with its
peak flux density $\simeq$5\,mJy, much weaker than the follow-up flares
with peak flux of $\sim$12\,mJy (Valtaoja et al. \cite{Val20}). This
poses a problem: why would a strong disk impact
\footnote{According to the disk-impact model (Dey et al. \cite{De18}),
the 1995.84 outburst occurred at a distance of $\sim$3800\,AU from
the primary hole and should be produced by a strong impact of the secondary
hole onto the primary disk.} produce only a small ``thermal''
optical flare, yet result in the production of strong follow-up
synchrotron flares? It seems that some other physical processes
must have played a role. \\
Secondly, it is noticed that the 1995.84 flaring event (during
1995.84\,--\,1996.12) had its light curve very
similar to that of the 1994.75 outburst and could also be
decomposed into a number of subbursts simulated with symmetric profiles.
The symmetry of these double-peaked outbursts was
first discovered by Sillanp\"a\"a et al. (\cite{Si96b}), and it
essentially reflects the symmetry existing in the light curves of
their subbursts, which have similar rising and declining time-scales.
Thus both the double-peaked outbursts during 1994\,--\,1996 period
can be interpreted in terms of our helical motion (lighthouse) model.
\footnote{Due to lack
of relevant data the 1995.84 outburst was not simulated in this work.} \\
Thirdly, the most interesting feature of the 1995.84 outburst may be
the close connection between the radio and optical variations.
According to Valtaoja et al. (\cite{Val20}, Fig.8 therein), its radio
variations at 22/37\,GHz were very similar to the V-band optical
variations: both variations are not only simultaneous but also
have similar envelopes. Even a few radio emission peaks can be
recognized to be concurrent with the optical peaks. This strict
simultaneity of the radio and optical variability seems important
and may have provided some significant clues to the physical processes
producing the radio/optical outbursts in OJ287. Unfortunately, this
radio-optical connection has not been explained since its discovery.
In combination with the model simulation of the light curves for the
December/2015 and March/2016 outbursts in Sect.3, we would come to the
conclusion that this close connection between the radio and optical
variability cannot be explained in terms of the disk-impact scenario or
shock-in-jet models, and can only be explained in terms of the
lighthouse effect due to the helical motion of superluminal knots, though
it requires some special structure of the emitting source.\\
\begin{table}
\caption{Model parameters for the simulation of the 1994.75
periodic optical outburst and two
isolated moderate bursts (at V-band). $\rm{S_b}$=1.0\,mJy.
$t$\,=\,flare time\,=\,day--1994.75. Bursts at JD2449325 and JD2449415 are
isolated moderate flares. The major periodic outburst has been
decomposed into five subbursts. Explanation of the parameters and units
as in Table 6.}
\begin{flushleft}
\centering
\begin{tabular}{llllll}
\hline
$t$ & $\Gamma$ & ${\delta}_{max}$ & ratio & $\rm{S_{int}}$ & FWHM \\
\hline
325 & 11.5 & 22.83 & 5.57 & 2.23$\times{10^{-5}}$ & 3.3 \\
415 & 11.3 & 22.44 & 5.41 & 2.22$\times{10^{-5}}$ & 3.3 \\
\hline
630 & 9.5 & 18.88 & 4.11 & 2.67$\times{10^{-5}}$ & 5.7\\
640 & 7.5 & 14.90 & 2.93 & 1.28$\times{10^{-4}}$ & 11.6 \\
650 & 8.5 & 16.89 & 3.49 & 5.75$\times{10^{-5}}$ & 8.2 \\
670 & 7.0 & 13.90 & 2.68 & 2.41$\times{10^{-4}}$ & 14.8\\
680 & 9.5 & 18.88 & 4.11 & 1.83$\times{10^{-5}}$ & 5.8\\
\hline
\end{tabular}
\end{flushleft}
\end{table}
\subsection{Periodic optical outburst in 2005.76}
The double-peaked outbursts during the period of 2005--2007 were
extensively observed by Valtonen et al. (\cite{Va08}) and Villforth et al.
(\cite{Vil10}). The ejection of superluminal radio components C11, C12,
C13L, C13U and C14 (Qian \cite{QiXiv18}) may be connected with these
optical flaring events.\\
The simulation results for the light curve of the first
outburst starting in 2005.76\footnote{It was identified as the ``impact
(bremsstrahlung) flare'' by Valtonen et al. (\cite{Va08}).} are shown
in Figure 14. Its total flux density curve has been decomposed into
five subflares, which are well simulated with symmetric profiles in terms
of the lighthouse model. The model parameters are given in Table 9.\\
According to the
disk-impact model, the 2005.76 and 1994.75 outbursts occur at
quite different distances (Dey et al. \cite{De18}):
$\sim$12,000AU (with the secondary-hole velocity of 0.17c) and
$\sim$7,000AU (with the secondary-hole velocity of 0.10c),
respectively. As shown in Fig.14
and in Fig.12, while their light curves have similar multi-component
structures, the peak flux density of the 2005.76 outburst ($\sim$9.0\,mJy)
is much higher than that of the 1994.75 outburst ($\sim$5.0\,mJy).
This is inconsistent with the prediction of the disk-impact model,
in which the strength of periodic optical outbursts mainly depends on
the impact distance
and the secondary-hole velocity. It seems that some other
ingredients could exist that determine the strength of
the flaring activity, e.g., variations in the circumbinary disk
and accretion rates onto the binary holes.
\begin{table}
\caption{Model simulation results for the 2005.76 outburst. Its total flux
density curve was decomposed into five sub-flares with symmetric profiles.
$\rm{S_b}$=2.0\,mJy. $t$\,=\,flare time\,=\,day--2005.76. Explanation of
the parameters and units as in Table 6.}
\begin{flushleft}
\centering
\begin{tabular}{llllll}
\hline
$t$ & $\Gamma$ & ${\delta}_{max}$ & ratio & $\rm{S_{int}}$ & FWHM \\
\hline
20 & 7.0 & 13.90 & 2.68 & 1.81$\times{10^{-4}}$ & 14.6\\
37 & 7.0 & 13.90 & 2.68 & 1.73$\times{10^{-4}}$ & 14.6\\
58 & 7.0 & 13.90 & 2.68 & 1.63$\times{10^{-4}}$ & 14.6\\
90 & 5.5 & 10.89 & 2.03 & 4.18$\times{10^{-4}}$ & 31.7\\
182 & 6.0 & 11.90 & 2.23 & 1.76$\times{10^{-4}}$ & 25.8\\
\hline
\end{tabular}
\end{flushleft}
\end{table}
\subsection{Periodic optical outburst in 2007.70}
The outburst starting at 2007.70 is the second flare of the double-peaked
outbursts during 2005\,--\,2007 period.
The model simulation results for this periodic optical outburst
are shown in Figure 15.\footnote{The whole outburst was observed at
R-band by Villforth et al. (\cite{Vil10}) during
September/2007\,--\,February/2008.
Its entire light curve comprises at least seven individual subflares,
overlapping one another and forming a very complex structure with a time
scale of about five months. The optical flare
discussed here is the first one, which was identified as the ``impact
(bremsstrahlung) flare'' by Valtonen et al. (\cite{Va08}).}
Its total flux density curve has been decomposed into three subbursts
which are all well simulated with symmetric profiles in terms of the
lighthouse model. The model parameters are
given in Table 10. Although the declining part of the major outburst
is mixed with the rising part of the secondary burst, its rising-peaking
part still clearly demonstrates the trend of its symmetric profile.\\
In combination with the results given in Sect. 4.5, we find that
the total
flux density curves of both periodic outbursts (in 2005.76 and 2007.70)
can be well interpreted in terms of the lighthouse model with
symmetric profiles.\\
It can be seen from Figures 14 and 15 (see also Tables 9 and 10)
that the subflares of the 2005.76 outburst
all have timescales much longer than those of the subflares of the 2007.70
outburst; this might be related to their different impact distances
as expected in the disk-impact model:
the 2005.76 outburst occurred at $\sim$12,000\,AU and the 2007.70 outburst
at only $\sim$3,000\,AU. However, the peak flux density of the 2005.76
outburst ($\sim$9.0\,mJy) is much higher than that of the 2007.70
outburst ($\sim$6.5\,mJy), which seems inconsistent with the expectation
of this model.
\begin{table}
\caption{Model simulation results for the 2007.70 outburst. Its total flux
density curve is decomposed into three subbursts with symmetric profiles.
$\rm{S_b}$=5.5\,mJy. $t$\,=\,flare time\,=\,day--2007.70. Explanation of
the parameters and units as in Table 6.}
\begin{flushleft}
\centering
\begin{tabular}{llllll}
\hline
$t$ & $\Gamma$ & ${\delta}_{max}$ & Ratio & $\rm{S_{int}}$ & FWHM \\
\hline
2 & 11.4 & 22.64 & 5.49 & 2.22$\times{10^{-4}}$ & 3.4\\
6 & 12.0 & 23.82 & 5.97 & 9.00$\times{10^{-6}}$ & 3.0\\
10 & 11.4 & 22.64 & 5.49 & 1.12$\times{10^{-5}}$ & 3.4\\
\hline
\end{tabular}
\end{flushleft}
\end{table}
\subsection{Periodic optical outburst in 2015.87}
In Sect.3 we have presented the model simulation results of the
multi-wavelength light curves for the major periodic optical outburst in
December/2015 (peaking at JD2457360) and shown that its multi-wavelength
light curves all have symmetric profiles and can
be interpreted in terms of the lighthouse effect
due to the helical motion of a superluminal optical knot. Here we present
the model simulation of the light curve (at R-band) for an isolated optical
flare peaking at JD2457380 (Valtonen et al. \cite{Va16}), which is
shown in Figure 16 (left panel). For comparison, the modeled light curve
for the major outburst is also displayed (right panel). The model
parameters are given in Table 11.\\
It can be seen from Figure 16 and Table 11 that the isolated non-periodic
outburst
peaking at JD2457380 has a symmetric light curve similar to that of
the December/2015 outburst; both can be well fitted by the helical-motion
model, but with different bulk
Lorentz factors: $\Gamma$=13.5 for the non-periodic flare and $\Gamma$=9.5
for the major outburst. It should be noted that the non-periodic flare
(peaking at JD2457380) is a non-thermal (synchrotron) flare with
polarization degree of $\sim$40\% (Valtonen et al. \cite{Va17}).
Therefore, the similarity in the light curve patterns between the
December/2015 outburst and this non-thermal flare\footnote{This non-thermal
flare appeared at JD2457380, only 20\,days after the appearance of the
December/2015 outburst.} further supports the suggestion that
the December/2015 outburst may originate from the synchrotron process.\\
In addition, we notice that the December/2015 outburst has a light curve
structure similar to that of the 1984.10 outburst (Figures 11 and 16), and
the two have similar strengths: peak flux densities of $\sim$14.5\,mJy
for the December/2015 outburst and $\sim$17.2\,mJy for the 1984.10 outburst.
This seems to contradict the expectation of the disk-impact model.
According to the
disk-impact model, the December/2015 outburst should be
much weaker than the 1984.10 outburst, because it occurred at an
impact distance of $\sim$18,000\,AU, much farther than that of the 1984.10
outburst ($\sim$5,000\,AU).
This seems to demonstrate that there may exist some additional ingredients
(or processes) which determine the strength of the outbursts, e.g.,
variations in the circumbinary disk and the disks of the binary holes.\\
\begin{table}
\caption{Model simulation results for the non-periodic synchrotron outburst
peaking at 2457380 and its comparison with that for the major periodic
outburst in December/2015 (peaking at 2457360). $\rm{S_b}$=3.5\,mJy
(R-band). $t$\,=\,flare time\,=\,day--2457000. Explanation of the parameters
and units as in Table 6.}
\begin{flushleft}
\centering
\begin{tabular}{llllll}
\hline
$t$ & $\Gamma$ & ${\delta}_{max}$ & ratio & $\rm{S_{int}}$ & FWHM \\
\hline
360 & 9.5 & 18.88 & 4.11 & 1.33$\times{10^{-4}}$ & 5.9 \\
380 & 13.5 & 26.77 & 7.29 & 1.67$\times{10^{-5}}$ & 2.2\\
\hline
\end{tabular}
\end{flushleft}
\end{table}
\section{Discussion}
We have applied the precessing jet nozzle model previously proposed by
Qian et al. (e.g., \cite{Qi91a}, \cite{Qi13}, \cite{Qi19})
to investigate the optical variations observed in OJ287 and tried to
clarify the nature of emission for the outbursts.\\
It is found that the multi-wavelength variations (in NIR-optical-UV bands;
Kushwaha et al. \cite{Ku18}) of the periodic major outburst in
December/2015 (peaking at JD2457360) are
very similar to those of the non-periodic highly-polarized synchrotron
outburst in March/2016 (peaking at JD2457450). The multi-wavelength
light curves of both the outbursts can be well simulated by symmetric
profiles and interpreted in terms of lighthouse effect due to the helical
motion of one superluminal optical knot through two helical revolutions.
This result seems important, indicating that the December/2015 outburst
may, like the March/2016 outburst, also originate from the synchrotron
process.\footnote{The December/2015 outburst was identified as
the ``impact (bremsstrahlung) flare'' according to the disk-impact model.}
Its association with the simultaneous $\gamma$-ray flare supports this
interpretation.\\
The five periodic outbursts observed in 1983.00, 1984.10,
1994.75, 2005.76 and 2007.70 (at V-band), and a few isolated
non-periodic flares
have also been simulated. We find that all the periodic outbursts can be
decomposed into a number of subbursts (or ``elementary flares'').
The light curves of all these elementary flares can be simulated by
symmetric profiles with similar rising and decaying time-scales and
interpreted in terms of the lighthouse model. The isolated non-periodic
flares show variability behavior similar to that of these elementary
flares.\\
In combination with the simulation results for the December/2015
and March/2016 outbursts, we tentatively suggest that the periodic optical
outbursts observed during 1983\,--\,2015 may all originate from the
synchrotron process in the relativistic jet and may be produced by the
lighthouse effect due to the helical motion of superluminal optical knots
(blobs or shocks). This interpretation is consistent with
the requirement of ``single mechanism'', which is derived from the color
stability during the optical outbursts (Sillanp\"a\"a et al. \cite{Si96a},
Gupta et al. \cite{Gu16}). The low polarization of the first flares of
the double-peaked outbursts can also be understood, because synchrotron
flares can have a large range of polarization degrees, as typically
observed in OJ287 (from $<$2\% to $\sim$40\%;
Villforth et al. \cite{Vil10}, Kushwaha et al. \cite{Ku18}). The
close connection between the radio/mm and optical variations
(e.g., observed in the 1995.84 outburst) can also be explained.\\
We have shown that the entire optical variability in OJ287 can be
explained by invoking only the lighthouse effect due to the helical motion
of superluminal optical knots. This result may have been expected, based
on the magnetohydrodynamical (MHD) theories for jet formation in spinning
black hole\,--\,accretion disk systems, in which relativistic jets are
produced in the rotating magnetospheres with strong toroidal magnetic
fields and strong helical fields should be permeated in the jets near the
black holes (e.g., Blandford \& Znajek \cite{Bl77}, Blandford \& Payne
\cite{Bl82}, Camenzind \cite{Cam90}, Li et al. \cite{Lizy92}, Beskin
\cite{Be10}, Vlahakis \& K\"onigl \cite{Vl04}, Meier \cite{Mei13},
\cite{Mei01}). It would be a natural phenomenon that superluminal optical
knots move along helical trajectories, producing optical outbursts through
lighthouse effect. Unfortunately, there seem to be only a few observational
events revealing this phenomenon (e.g., Schramm et al. \cite{Sc93},
Dreissigacker \cite{Dr96a}, Dreissigacker \& Camenzind \cite{Dr96b},
Camenzind \& Krockenberger \cite{Cam92}, Wagner et al. \cite{Wa95},
Qian \cite{Qi15}). This
work demonstrates that helical motion of superluminal optical components
may be a general phenomenon in the blazar OJ287 and thus provides some
observational evidence for the existence of helical magnetic fields in the
inner jet regions of blazars.\\
Under the binary black hole scenario, we have tentatively proposed
a unified and plausible relativistic jet
model for fully explaining the optical activities in OJ287 (including its
periodic and non-periodic outbursts), invoking lighthouse effect due
to the helical motion of superluminal optical components. The chain of the
physical processes in this model may be as follows: a succession of discrete
accretion events (including the double-stream accretion flows; e.g.,
Tanaka \cite{Tan13}) created by the pericenter passages of the secondary
hole (moving in an eccentric orbit) results in a succession of ejections
of superluminal optical components through the jet-formation mechanism,
producing a succession of elementary optical flares which blend
together to form major complex outbursts.\\
The relativistic jet model tentatively suggested in this work
should be tested by future multi-wavelength (from radio to
$\gamma$-rays) observations. If this scenario is proved correct, the
optical phenomena in OJ287 can be explained without needing to invoke the
disk-impact mechanism, although that mechanism seems
very attractive for testing the effects of general relativity
(Einstein \cite{Ei16}, \cite{Ei18}). However, the relativistic jet
scenario concentrates only on explaining the nature and characteristics
of the optical activity (the temporal and spectral variations of the
outbursts); the quasi-periodicity of its optical variability and the
double-peaked structure of the periodic outbursts remain to be interpreted.
In principle, under the framework of
binary black hole models, the quasi-periodicity can be related to the
modulation of accretion rates via the pericenter passages of the
companion black hole in an eccentric orbit. As Sillanp\"a\"a et al.
(\cite{Si96a}) originally suggested, the eccentric orbital motion
of the secondary hole can cause quasi-periodically enhanced accretion
flows onto the primary hole, which consequently result in ejections
of superluminal optical knots via jet-formation mechanism(s), producing
quasi-periodic optical outbursts. As regards the explanation of the
double-structure of periodic optical outbursts, cavity-accretion models
as suggested by Tanaka (\cite{Tan13}) might be applicable. In the case
of comparable-mass and eccentric binary systems, usually two gas streams
are created per pericenter passage of the secondary hole
in the circumbinary disk and flow toward the binary black holes,
producing double-peaked outbursts. The cavity-accretion processes
(dynamics and kinematics of the streams) might be quasi-regular, because
the two streams would have to move across the Lagrange points of the binary
system (e.g., Artymowicz \& Lubow \cite{Ar96}, Artymowicz \cite{Ar98},
Tanaka \cite{Tan13}). However, in the case of cavity-accretion in binary
systems, complex processes are involved: e.g., eccentric
motion of the binary around the mass-center, interaction between the
binary and the circumbinary disk, creation of the pair of gas streams,
jet formation, precession and ejection of superluminal
components in binary systems, etc. Thus it seems that cavity-accretion
models cannot
accurately predict the appearance times of the periodic optical outbursts,
because the timing of the periodic outbursts is not determined by the
orbital motion alone. Stochasticity in the circumbinary disk accretion
and in the dynamics and kinematics of the stream flows could result in
some scatter in the appearance times of the double-peaked outbursts.
In fact, even in the disk-impact model (Lehto \& Valtonen \cite{Le96},
Dey et al. \cite{De18}) where the outburst timing is assumed to
mainly depend on the orbital motion, the strength of the double-peaked
outbursts seems not closely related to the orbital phases
(or outburst timing). In Table 12
the relation between the peak flux densities of the impact outbursts and
their impact distances (and secondary hole velocities) is listed,
which does not demonstrate any connection between these parameters:
strong outbursts do not necessarily appear at small impact distances.
This suggests that some significant physical processes determining the
strength of the periodic outbursts might have been missed.\\
In the relativistic jet model proposed in this work, the quasi-periodicity
in optical variability and the double-peak structure can be ascribed to
the accretion processes in binary systems. In the cavity-accretion models,
the modulation of the accretion rate onto the binary holes and their
disks by the orbital period might be the most plausible mechanism for
explaining the quasi-periodicity of the optical variability observed in
OJ287. Moreover, the two streams of accretion flows created per
pericenter passage of the secondary hole may be invoked to explain
the double-peak structure of the periodic outbursts. The timing
mechanism for the periodic outbursts could be investigated along with
the appearance of non-periodic outbursts. Detailed modeling
based on HD/MHD simulations is imperatively required.
\begin{table}
\caption{Relation between the strength ($\rm{S_{v}}$, peak flux density
at V-band) of the impact outbursts, the impact distance
($\rm{R_{imp}}$) and the secondary velocity ${v_0}$/c.
t\,=\,flare time. The data are listed in
order of increasing impact distance. $\rm{S_{v,obs}}$\,=\,$\rm{S_v}$+$\rm{S_b}$.}
\begin{flushleft}
\centering
\begin{tabular}{lllll}
\hline
$t$ & $R_{imp}$(AU) & $v_0$/c & $\rm{S_{v,obs}}$(mJy) & $\rm{S_v}$(mJy)\\
\hline
2007.70 & 3259 & 0.264 & 12.0 & 6.5 \\
1995.84 & 3855 & 0.245 & 5.0 & 3.5 \\
1983.00 & 4633 & 0.224 & 32.0 & 27.0 \\
1984.10 & 5387 & 0.205 & 20.0 & 17.2 \\
1994.75 & 7079 & 0.173 & 6.0 & 5.0 \\
2005.76 & 12427 & 0.106 & 11.0 & 9.0 \\
2015.87 & 17566 & 0.058 & 18.0 & 14.5 \\
\hline
\end{tabular}
\end{flushleft}
\end{table}
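The lack of a clear strength--distance connection in Table 12 can be quantified with a rank correlation. A minimal pure-Python sketch using the tabulated values (Spearman's $\rho$ comes out to 0.25, i.e. only a weak monotonic relation between outburst strength and impact distance):

```python
def spearman(xs, ys):
    """Spearman rank correlation coefficient (no ties assumed)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

# Table 12: impact distance (AU) vs outburst strength S_v (mJy)
r_imp = [3259, 3855, 4633, 5387, 7079, 12427, 17566]
s_v = [6.5, 3.5, 27.0, 17.2, 5.0, 9.0, 14.5]
rho = spearman(r_imp, s_v)  # 0.25 for the Table 12 data: weak correlation
```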
\begin{acknowledgement}
I wish to thank Prof. A. Sillanp\"a\"a for providing the
V-band data on OJ287 during 1993.9\,--\,1995.3 (OJ-94 project),
which were very helpful for this work.
\end{acknowledgement}
{\bf Introduction: }
Quantum information cannot propagate faster than light. However, in many laboratory settings, the speed of light is effectively infinite, since the natural dynamical timescales are long compared to the light-crossing time. Hence, these systems can sometimes be modeled as having instantaneous long-range interactions, for example, electric and magnetic dipolar interactions. Such non-local interactions potentially allow rapid information transfer between distant locations~\cite{guo_signaling_2019,lashkari_towards_2013,Eldredge17,Gualdi08,Avellino06}, making them attractive for quantum information processing.
Remarkably, short-range interactions enforce an emergent speed limit~\cite{Lieb1972}, even when the speed of light is effectively infinite.
We study the analogous possibility of emergent limits on information propagation in long-range interacting systems. We refer to these limits as effective light cones even though their spacetime shape may not be that of a cone. Our focus is on power-law interactions that fall off with distance $r$ as $r^{-\alpha}$ since these systems are common in the lab and their emergent light cones have been intensely studied~\cite{Porras_2004, hastings_spectral_2006, Britton12, Blatt12,Ye13, Monroe13, gong_persistence_2014, foss-feig_nearly-linear_2015, Zaletel2015, Lukin17, Bollinger17, chen_quantum_2018,tran_locality_2018,chen_finite_2019,luitz_emergent_2019}. Using the concepts and tools recently developed from the study of many-body quantum chaos\cite{Xu_Swingle_2018,khemani_velocity-dependent_2018,chen_quantum_2018,zhou_operator_2018}, we argue that chaotic power-law interacting systems have a generic emergent light cone structure which depends only on $\alpha$ and the spatial dimension $d$.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{fig_phase_diagram.pdf}
\caption{The light cone contours of $C(x, t)$ in \mone\cite{chatterjee_multiple_2013,hallatschek_acceleration_2014}. The $\alpha$ axis marks the transition values of $\alpha$ in 1d (general-$d$ values in parentheses). In order of increasing $\alpha$, the light cone transitions from logarithmic to power-law to linear. The scaling functions for $t_{\rm LC}(x)$ in each phase as well as the marginal scalings at $\alpha = \frac{d}{2}$ and $d$ are displayed. The exponents $\zeta$ and $\frac{1}{\eta}$ are given by $\zeta = 2 \alpha - 2 d$, $\eta = \log_2 \frac{d}{\alpha}$. The power-law and linear light cone regimes are also numerically verified in chaotic long-range spin chains.}
\label{fig:phase_diagram}
\end{figure}
We diagnose emergent light cones by studying the commutator of two operators, where one acts as the perturbation and the other probes whether the perturbation has spread beyond a given spacetime point.
{ Such a commutator would exactly vanish outside the light cone in a \emph{relativistic} model, whereas for quantum lattice systems without manifest Lorentz invariance, the commutator may still be nonzero for arbitrarily small times. Furthermore, for long-range interacting systems, the region outside of which the commutator is small cannot in general be bounded by a simple, \emph{linear} contour; the notion of a light cone is still applicable here, however, since information can hardly spread beyond the contour at a given point in time.}
The key quantity is the expectation value of the squared commutator (closely related to the out-of-time-ordered correlator \cite{larkin_quasiclassical_1969,kitaev2015,maldacena_bound_2015}, or OTOC) defined (in our lattice setting, at infinite temperature) as
\begin{equation}
\label{eq:C_x_t}
C(x,t)=\text{Tr}\left( [W(t),V]^\dagger [W(t),V] \right)/\text{Tr}( \mathbb{I} ),
\end{equation}
where $W(t) = e^{i H t} W e^{-i H t}$ is the Heisenberg form of the local operator $W$ and $V$ is another local operator a distance $x$ away from $W$. Happily, these objects can be measured in experiment~\cite{Swingle2016,Zhu2016,Halpern2016,Halpern2017,Campisi2017,Garttner2016, Wei2016, Li2017a,Landsman_2019,yao_interferometric_2016-1,yoshida_efficient_2017-1,meier_exploring_2019}, including in large-scale systems with power-law interactions~\cite{sanchez2019emergent}.
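For small chains, Eq.~\eqref{eq:C_x_t} can be evaluated exactly by diagonalization. Below is a minimal exact-diagonalization sketch; the Hamiltonian is a small illustrative instance of the long-range mixed-field Ising chain used in the numerical section, with hypothetical system size and operator separation.

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def site_op(op, i, L):
    """Embed a single-site operator at site i of an L-site chain."""
    out = np.array([[1.0 + 0j]])
    for j in range(L):
        out = np.kron(out, op if j == i else I2)
    return out

def squared_commutator(H, W, V, t):
    """C(x,t) = Tr([W(t),V]^dag [W(t),V]) / Tr(I) at infinite temperature."""
    e, U = np.linalg.eigh(H)
    Ut = U @ np.diag(np.exp(-1j * e * t)) @ U.conj().T  # e^{-iHt}
    Wt = Ut.conj().T @ W @ Ut                           # Heisenberg W(t)
    com = Wt @ V - V @ Wt
    return float(np.real(np.trace(com.conj().T @ com))) / H.shape[0]

# illustrative small instance of a long-range mixed-field Ising chain
L, alpha, J, hz, hx = 5, 2.2, 1.0, 0.5, 1.05
H = sum(-J / abs(r - rp) ** alpha * site_op(sz, r, L) @ site_op(sz, rp, L)
        for r in range(L) for rp in range(r + 1, L))
H = H - hz * sum(site_op(sz, r, L) for r in range(L)) \
      - hx * sum(site_op(sx, r, L) for r in range(L))
W, V = site_op(sz, 0, L), site_op(sz, 3, L)
```

At $t=0$ the local operators commute and $C$ vanishes; at later times the commutator grows as the perturbation reaches the probe site.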
The emergent light cone is defined in terms of the spacetime contours determined by $C=\text{constant}$, as these track the effective spread of the perturbation in spacetime. For local quantum chaotic systems, one typically finds that the contours are asymptotically straight, independent of the precisely chosen contour, although in general there is a rich shape structure in the non-asymptotic regime.
In the power-law case, Ref.~\cite{chen_quantum_2018} provided a systematic study of the light cone structure for systems with time-dependent random couplings. By random averaging, those authors gave strong numerical evidence for a complex light cone structure depending on $\alpha$.
In this work, we propose that the phase diagram in Ref.~\cite{chen_quantum_2018} is generic for chaotic power-law interacting systems {\it even without randomness}. { Specifically, we exclude systems with gauge or intrinsic constraints (see e.g. Refs.~\cite{pichler_real-time_2016,turner_weak_2018}) that prevent ergodicity.}
Our theoretical picture is that dephasing in such systems due to quantum chaos leads to an effective stochastic description of the emergent light cone. The resulting effective model falls into the ``long-range dispersal'' class, for which a universal phase diagram is known. We rigorously locate the phase boundaries that delineate the regions of ballistic, super-ballistic, and exponential growth (Fig.~\ref{fig:phase_diagram}). Furthermore, we { develop a novel numerical scheme for operator spreading based on the time-dependent variational principle in the matrix-product operator representation (TDVP-MPO)\cite{Haegeman2011, Haegeman2016, Koffel2012, Hauke2013, Halimeh2017}. To our knowledge, this is currently the most efficient method for studying the operator dynamics of large-scale long-range systems, enabling us to simulate chaotic spin chains of up to 200 sites. The results are consistent with the phase diagram in Fig.~\ref{fig:phase_diagram}.}
{\bf Operator spreading:} In general, chaotic time evolution will increase the support and complexity of $W(t)$, a process known as operator spreading.
We propose that due to dephasing, such processes can be approximated by a stochastic model that generates a universal phase diagram.
We use the height representation introduced in Refs.~\onlinecite{zhou_operator_2018,chen_quantum_2018} to describe operator spreading, but there are many other approaches~\cite{khemani_operator_2017,nahum_operator_2018,rakovszky_diffusive_2017,von_keyserlingk_operator_2018,xu_locality_2018}. In a 1d chain of spin-$\frac{1}{2}$ particles of length $L$, we expand $W(t)$ in the Pauli string basis $\{B_\mu\}$:
\begin{equation}
\label{eq:Vt}
W(t) = \sum_{\mu} a_{\mu}(t) B_{\mu}.
\end{equation}
With the normalization $\text{tr}( W^{\dagger}(t) W(t) ) = 1 $, the coefficients $|a_{\mu}(t)|^2$ give a normalized probability distribution over $\{B_\mu\}$.
Each basis operator has a height as follows: the $i$-th component $h_i$ for operator $B_\mu$ is $0$ if $B_{\mu}$ is identity on site $i$ and $1$ otherwise. Together these $h_i$ form an $L$-component vector $\vec h \in \{0,1\}^L$. The height representation does not distinguish different Pauli operators, so many operators have the same height. If the distribution over operators of a given height $\vec h$ is more-or-less random, then the chaotic operator dynamics is succinctly represented by the height probability distribution $f(\vec{h}, t ) = \sum_{{\rm height}(B_{\mu}) = \vec{h}} | a_{\mu}(t)|^2$. Since the commutator $[W(t),V]$ can only be non-zero if $W(t)$ is not the identity at the location of $V$, it follows that $C(x,t)$ is proportional to the {\it mean height} of $W(t)$ at site $x$ (again provided the distribution over operators of a given height is uniform).
The distribution $f$ is defined on the space of $2^L$ height states. We refer to sites with $h_i=1$ as occupied, and otherwise as unoccupied. Initially, a simple local operator $W(0)$ only has one site occupied and the distribution $f$ is concentrated on that height vector. Time evolution generally expands the operator, and the height distribution is correspondingly spread over more height configurations. Due to the decaying strength of the interaction, sites closer to $W(0)$ are more likely to increase their height earlier. As a result, the dynamics of the height distribution encodes the light cone structure.
\begin{figure}[h]
\centering
\subfigure[]{
\label{fig:model_1}
\includegraphics[width=0.8\columnwidth]{fig_model_1.pdf}
}
\subfigure[]{
\label{fig:model_1p}
\includegraphics[width=0.8\columnwidth]{fig_model_1p.pdf}
}
\caption{\mone and a faster \monep. Filled rectangles are occupied sites. (a) Each occupied site (red on top) contributes a rate proportional to ${r^{-2\alpha}}$ to occupy an empty site (red on bottom) at distance $r$. (b) The same transition is made, and then all the sites to the left of the newly occupied site are filled.}
\label{fig:model_1_1p}
\end{figure}
The height picture is particularly useful for chaotic systems because their pseudo-random character implies that the evolution of $f( \vec{h}, t )$ is often approximately Markovian. This observation has been made in local systems~\cite{khemani_velocity-dependent_2018,khemani_operator_2017,nahum_operator_2018,rakovszky_diffusive_2017,von_keyserlingk_operator_2018}, where an additional site can become occupied only if it is next to an occupied site.
We postulate the following effective Markovian transition rates for the $f$ dynamics. For definiteness, suppose the Hamiltonian is $H = \sum_\nu J_\nu H_\nu$, where the $H_\nu$ are Pauli strings with non-identity elements on only two sites a distance $r(H_\nu)$ apart and the couplings $J_\nu$ scale as $r(H_\nu)^{-\alpha}$. If the model is chaotic, then it will exhibit an effective loss of coherence on a time-scale $\tau_{\rm coh}$. The Markovian transition rates are then estimated to be of order $J_{\nu}^2 \tau_{\rm coh} \propto r^{-2\alpha}$, which sets the rate for jumping from the top configuration to the bottom configuration in Fig.~\ref{fig:model_1}.
Hence, the stochastic height dynamics of \mone is:
\begin{enumerate}
\item Initially only one site is occupied.
\item Each occupied site contributes a transition rate proportional to $r^{-2\alpha}$ to occupy an empty site a distance $r$ away.
\end{enumerate}
The effective dephasing and the stochastic rate estimate above are our key assumptions for understanding the light cone structure. The resulting \mone can be {\it exactly} realized in an idealized model called a Brownian circuit~\cite{zhou_operator_2018,chen_quantum_2018,xu_locality_2018}, where the couplings are Brownian motions. Here, we believe the effective randomness generated by chaos does the same job, leading to \mone.
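\mone itself is straightforward to simulate. Below is a minimal discrete-time Monte Carlo sketch (chain length, time step and total time are illustrative; occupation probabilities are taken to first order in $dt$):

```python
import math
import random

def simulate_model1(alpha, L=60, t_max=30.0, dt=0.1, seed=0):
    """Discrete-time Monte Carlo for Model 1 on sites 0..L-1.
    A single seed at site 0; each occupied site i contributes a rate
    |i-j|^(-2*alpha) to occupy an empty site j. Returns the first-
    occupation time of each site (inf if never occupied)."""
    rng = random.Random(seed)
    occupied = [False] * L
    occupied[0] = True
    t_first = [math.inf] * L
    t_first[0] = 0.0
    steps = int(round(t_max / dt))
    for step in range(1, steps + 1):
        occ = [i for i in range(L) if occupied[i]]
        newly = []
        for j in range(L):
            if occupied[j]:
                continue
            rate = sum(abs(i - j) ** (-2.0 * alpha) for i in occ)
            if rng.random() < rate * dt:  # first-order occupation probability
                newly.append(j)
        for j in newly:
            occupied[j] = True
            t_first[j] = step * dt
    return t_first
```

The first-occupation times trace out the light-cone contour; averaging over seeds and scanning $\alpha$ reproduces the regimes of Fig.~\ref{fig:phase_diagram} (the parameters above are deliberately small).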
As discussed above, we define the light cone structure by studying its level sets of the squared commutator. The curve parameterized by $t = t_{\rm LC}(x)$ with $C(x,t_{\rm LC}(x) ) = \epsilon$ defines the light cone contour with threshold $\epsilon$, which is expected to depend strongly on $\alpha$.
In the local limit, $\alpha \rightarrow \infty$, the leading behavior is $t_{\rm LC}(x) \sim x$, i.e. a linear light cone. When $\alpha = 0$, \mone completely loses locality, and $t_{\rm LC}(x) \rightarrow 0$ in an infinite chain. The general phase diagram has been obtained exactly in Refs.~\onlinecite{chatterjee_multiple_2013,hallatschek_acceleration_2014}; translating it to our setting yields Fig.~\ref{fig:phase_diagram}.
There are four different phases characterized by different light cone scalings. In 1d, $\alpha < 0.5$ is the completely non-local phase. The transition occurs at the threshold below which the jump rate $\sim r^{-(2\times 0.5)}$ in \mone becomes un-normalizable in an infinite chain. On a finite chain, the operator spreading is similar to that of the Sachdev-Ye-Kitaev model~\cite{sachdev_gapless_1993,kitaev2015,roberts_operator_2018, zhou_operator_2018,chen_quantum_2018}. As $\alpha$ increases, one finds a phase with $t_{\rm LC}(x) \sim (\log x)^{\frac{1}{\eta}}$ ($0 < \eta \le 1$) for $0.5 \le \alpha < 1$ and a power-law light cone phase for $1 < \alpha < 1.5$. Finally, when $\alpha \ge 1.5$, a linear light cone emerges.
{\bf A Faster Model: \monep. }
To better understand these results, and to learn more about the shape of the contours, we study an even simpler model that still captures much of the physics. We dub it ``\monep'' and illustrate in Fig.~\ref{fig:model_1p}.
Its modified transition rule is:
\begin{enumerate}[label={\arabic*$'$}]
\setcounter{enumi}{1}
\item Make a transition (as in \mone) and then fill in all the empty sites ``behind'' the newly occupied site.
\end{enumerate}
Clearly, \monep spreads faster than \mone, so its value for $C(x,t)$ will upper-bound that of \mone. However, \monep is simpler to analyze because its state is completely determined by the motion of the outer-most point, thus reducing it to a single particle problem. In 1d, the dynamics can be sped up by taking all the sites with $x \le 0$ to be occupied in the initial height state. The motion of the outer-most point becomes Markovian, and the rate to move forward $r$ sites is then $\sum_{r'=-\infty}^r (r')^{-2\alpha} \sim r^{1-2\alpha}$.
Such a long-range random walk is called a L\'evy flight (see Refs.~\onlinecite{calvo_generalized_2010,janson_stable_2011,chechkin_introduction_2008}), where the displacement of each jump $X_t$ (at time $t$) is an independent random variable with distribution $f_{\rm jump}(x)$ that scales as ${x^{-(1 + \alevy)}}$ when $x \rightarrow \infty$. According to the generalized central limit theorem \cite{SM},
the total displacement will converge to a L\'evy stable distribution $L_{ \alevy, \blevy}$, with parameter $\alevy = 2\alpha - 2$ and $\blevy = 1$ for the present case.
The distribution for the right-most occupied site $\rho( r, t )$ scales as
\begin{equation}
\label{eq:rho_monep}
\rho( x, t )
\sim
\left\lbrace
\begin{aligned}
& L_{2\alpha - 2, 1} \left({x}/ {t^{\frac{1}{\zeta}}}\right) & \,\, 1 < \alpha \le 1.5, \\
& L_{2\alpha - 2, 1} \left({(x - v_B t)}/{t^{\frac{1}{\zeta}}}\right) & \,\, 1.5 < \alpha < 2, \\
& \exp\left( - {(x - v_B t)^2}/{2Dt} \right) & \,\, 2 \le \alpha,
\end{aligned} \right.
\end{equation}
where $L_{\alpha, \beta}$ is the L\'evy stable distribution\label{app:GCLT}, $\zeta = 2\alpha - 2$, and $v_B$ and $D$ are the first and second moments of $f_{\rm jump}(x)$ when they exist. The probability for site $x$ to be occupied is equal to $\int_x^\infty \rho(x', t )\,dx'$ in \monep, which leads to the light cones in the second column of Table~\ref{tab:op_lightcone}:
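The scalings in Eq.~\eqref{eq:rho_monep} can be checked by direct sampling of the L\'evy flight. Below is a minimal sketch using inverse-CDF (Pareto) sampling of the jump distribution, with the jump count standing in for time; the sample sizes are illustrative.

```python
import math
import random

def levy_front(alpha, n_jumps, n_samples=2000, seed=1):
    """Median front position of Model 1' after n_jumps jumps.
    Jumps are Pareto distributed on [1, inf), pdf ~ x^(-(1+a)) with
    a = 2*alpha - 2 (requires alpha > 1), sampled as X = U^(-1/a)."""
    a = 2.0 * alpha - 2.0
    rng = random.Random(seed)
    fronts = sorted(
        sum(rng.random() ** (-1.0 / a) for _ in range(n_jumps))
        for _ in range(n_samples)
    )
    return fronts[n_samples // 2]
```

For $\alpha = 1.2$ the median front grows super-ballistically as $n^{1/(2\alpha-2)} = n^{2.5}$, while for $\alpha = 2.5$ it grows linearly, in line with the first and third cases of Eq.~\eqref{eq:rho_monep}.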
\begin{table}[h]
\centering
\begin{tabular}{ |c|c|c|c|c|c|c| }
\hline
& \multicolumn{3}{c|}{\text{\monep}} & \multicolumn{3}{c|}{\text{\mone}} \\ \hline
$\alpha$ & LC & width & tail & LC & width & tail \\ \hline
$ [0.5,1) $ & N/A & N/A & N/A & $e^{t^{\log_2 \frac{1}{\alpha}}}$ & N/A & \multirow{2}{*}{\parbox{35pt}{\vspace{0cm} $x^{-2\alpha}$ }} \\ \cline{1-6}
$ (1,\frac{3}{2}] $ & $t^{\frac{1}{2\alpha - 2}}$ & N/A & \multirow{2}{*}{\parbox{35pt}{\vspace{0cm} $x^{-(2\alpha - 2)}$ }} & $t^{\frac{1}{2\alpha - 2}}$ & N/A & \\ \cline{1-3} \cline{5-7}
$ (\frac{3}{2}, 2) $ & \multirow{3}{*}{\parbox{15pt}{\vspace{0cm} $v_B t $ }} & $t^{\frac{1}{2\alpha - 2}}$ & &\multirow{3}{*}{\parbox{15pt}{\vspace{0cm} $v_B t $ }} & $t^{\frac{1}{2\alpha - 2}}$ & $x^{-(2\alpha - 2)}$ \\ \cline{1-1} \cline{3-4} \cline{6-7}
$ 2 $ & & $( t\ln t )^{\frac{1}{2}}$ & \multirow{2}{*}{\parbox{35pt}{\vspace{0cm} Gaussian }} & & $( t\ln t )^{\frac{1}{2}}$ & \multirow{2}{*}{\parbox{35pt}{\vspace{0cm} Gaussian }} \\ \cline{1-1} \cline{3-3} \cline{6-6}
$ (2,\infty) $ & & $t^{\frac{1}{2}}$ & & & $t^{\frac{1}{2}}$ & \\ \hline
\end{tabular}
\caption{Scalings of light cone, its broadening (width) and tail of \monep and comparison with \mone.}
\label{tab:op_lightcone}
\end{table}
{ The transition points $\alpha = 1, 1.5$ and $2$ are the critical values above which the jump distribution $f_{\rm jump}(x)$ of \monep starts to be normalizable and acquires a mean velocity $v_B$ and a variance $D$, respectively. In the following, we review the quantitative predictions that \monep makes for \mone. Aside from the light cone scalings and characteristic width, we also study the wavefronts' spatial dependence at fixed time. We refer to the large-$x$ limit of $C(x,t)$ at fixed $t$ as the \emph{tail}. For small $t$ in \mone, the tail should be roughly equal to the probability of a rare jump from the initial seed at site $0$, i.e. it scales as ${x^{- 2\alpha } }$. The tails we discuss are for large $t$.
From Tab.~\ref{tab:op_lightcone}, all the scalings of the light cones are identical for both models when $\alpha \ge 1.5$. In this regime, \monep has a linear light cone, and since it spreads faster than \mone, the latter must also have a linear light cone. We would further expect \mone to form a domain of occupied sites within the light cone, rendering the two models qualitatively similar. In particular, the widths of ${t^{1/({2\alpha - 2})}}$ and $\sqrt{t}$ have been verified in classical simulations of \mone \cite{SM}.
When $1 < \alpha < 1.5$, \monep has a power-law light cone, whereas that of \mone could potentially be more restrictive. But suppose \mone were to have a linear light cone; then a domain of occupied sites would form, so that the light cone of \mone would be identical to that of \monep. But the latter has faster-than-linear propagation, leading to a contradiction. In practice, \mone has the same light cone scaling as \monep \cite{chatterjee_multiple_2013,hallatschek_acceleration_2014}, but the gaps between filled sites in \mone gives a different tail scaling than \monep. Within a mean-field approximation \cite{SM}, we find the tail scaling to be ${x^{-2\alpha}}$, which is further numerically verified in \mone and a long-range spin chain discussed below.
Finally, when $\alpha < 1$, the long-range jumps of \mone create large gaps between the occupied sites. The approximation of a solid domain as in \monep does not work, and the problem is many-body in nature.}
We briefly comment on the situation in higher dimensions. The transition rate ${r^{-2\alpha}}$ is normalizable in $d$ dimensions only when $\alpha > \frac{d}{2}$. When we consider the corresponding \monep, the outer-most point jumps a distance $r$ with rate $\int d^dr\; {r^{-2\alpha}} \sim r^{-2\alpha +d} $. The existence of the zeroth, first and second moments gives the general transition points marked in Fig.~\ref{fig:phase_diagram}.
{\bf Numerical results: } We test the dephasing mechanism and other predictions mentioned above in a long-range mixed-field Ising model with Hamiltonian
\begin{equation}
H = - \sum \limits_{r,r'} \frac{J}{|r-r'|^\alpha }\sigma^z_r \sigma^z_{r'} - \sum\limits_{r} h_z \sigma^z _r -\sum\limits_{r} h_x \sigma^x _r,
\end{equation}
where $J$ is set to 1 as the energy unit, and the fields $h_z$ and $h_x$ are set to $0.5$ and $1.05$, respectively.
We implement the TDVP algorithm in operator space, which treats the operator as a matrix-product state and optimizes within the space of matrix-product representations \cite{Haegeman2011, Haegeman2016, Leviatan2017}.
The ``super'' Hamiltonian $\mathcal{H}=H\otimes I - I\otimes H^*$ of the long-range interaction is explicitly constructed and fed into the state-based TDVP algorithm~\cite{Haegeman2016}.
We expect that information far ahead of the wave front can be extracted with relatively low bond dimension, enabling us to simulate up to $200$ sites.
\begin{figure}
\subfigure[]{
\label{fig:linear_cone}
\includegraphics[width=0.46\columnwidth]{linear_cone.pdf}
}
\subfigure[]{
\label{fig:power_cone}
\includegraphics[width=0.46\columnwidth]{power_cone.pdf}
}
\caption{The light cone of the long-range mixed-field Ising model for (a) $\alpha = 2.2$ and (b) $\alpha = 1.2$. Contours of $C(x, t)$ at threshold $\epsilon = e^{-7}$ are shown in the main panels, and contours at other thresholds in the insets. Various system sizes and bond dimensions confirm convergence.
}
\label{fig:num_lightcone}
\end{figure}
In Fig.~\ref{fig:num_lightcone}, we present the contour plots of $C(x,t)$ for $\alpha = 2.2$ and $\alpha = 1.2$, which demonstrate the linear and power-law light cones respectively. The insets show the contours for different values of the threshold, $\epsilon$. Eq.~\eqref{eq:rho_monep} predicts that the contours will follow the relations $(x - v_Bt) / \sqrt{t} \sim \text{constant}$ and $x \sim t^{\frac{1}{\zeta}}$ for the linear and power-law light cones respectively. The former gives convex curves that become parallel asymptotically, while the latter gives concave curves that disperse. These features are reflected in Fig.~\ref{fig:linear_cone} and Fig.~\ref{fig:power_cone}.
\begin{figure}
\subfigure[]{
\label{fig:pt_init}
\includegraphics[width=0.46\columnwidth]{levy_flight_point-crop.pdf}
}
\subfigure[]{
\label{fig:dw_init}
\includegraphics[width=0.46\columnwidth]{levy_flight-crop.pdf}
}
\caption{Tail of the front for (a) point and (b) domain-wall initial conditions. (a) At $\alpha = 1.2$, the decay fits $x^{-2\alpha}$ at long times. (b) The short-time decay fits $C=a\left (x^{1-2\alpha_{\rm fitted}}-(x+x_0)^{1-2\alpha_{\rm fitted}} \right )$, where $x_0$ is the domain wall length. $\alpha_{\rm fitted} \approx \alpha$, confirming the L\'evy flight prediction.
}
\label{fig:tail_scaling}
\end{figure}
A precise verification of the phase boundary is computationally challenging. We instead measure the spatial dependence of the power-law tail to verify the proposed dephasing scheme. Fig.~\ref{fig:pt_init} shows the tail of the front for a point initial condition with $\alpha=1.2$. The decay exponent remains close to $2\alpha$ even at {\it late} times, consistent with the mean field argument \cite{SM}.
In contrast, a domain wall initial condition with $h = 1$ for $x<0$ will generate a tail that scales as ${x^{-2\alpha - 1 }}$ at {\it early} times. In Fig.~\ref{fig:dw_init}, we fit the decay while taking into account the finite size of the domain and show that the fitting parameter $\alpha_{\rm fitted}$ is fairly close to $\alpha$.
{\bf Discussion and conclusion: }
We studied information propagation in chaotic long-range interacting systems via an analysis of the light cone structure of the squared commutator. Invoking a dephasing mechanism, we proposed a general phase diagram for such chaotic systems, generalizing the one proposed in Ref.~\cite{chen_quantum_2018}, that exhibits logarithmic, power-law and linear light cone regimes. In particular, we analytically computed and numerically confirmed the emergence of a linear light cone when the power-law exponent of the interaction strength satisfies $\alpha \ge 1.5$. { The powerful TDVP-MPO algorithm allows us to simulate systems with 200 sites, so that pertinent results at late times can be explicitly verified.}
A further simplification of the model yields a simple L\'evy flight picture (\monep) that describes the operator spreading in generic long-range interacting systems. It is remarkable that we can determine all the phase transition points, located where the moments of the L\'evy flight diverge, as well as the OTOC scaling close to the light cone. Both \mone and the associated arguments are also generalizable to systems with a large number of on-site degrees of freedom, which we leave to future work.
Recently, Ref.~\cite{chen_finite_2019} proved a general Lieb-Robinson-type bound with a linear light cone for $\alpha > 3$ in 1d.
Here we find a smaller threshold, $\alpha = 1.5$. This is in accordance with the folklore that chaos usually prevents an optimal rate of propagation.
Thus, we anticipate that the critical $\alpha$ for the systems we consider will generally be smaller than those of theoretical bounds.
{\bf Acknowledgements:}
We acknowledge insightful discussions with Minh Tran and especially Sarang Gopalakrishnan, who pointed out the relevance of the L\'evy flight at a very early stage of the project.
We also thank the KITP program ``The Dynamics of Quantum Information'' and the Aspen winter conference ``Many-Body Quantum Chaos'' for their hospitality and interactive environment.
XC and TZ are supported by postdoctoral fellowships from the Gordon and Betty Moore Foundation, under the EPiQS initiative, Grant GBMF4304, at the Kavli Institute for Theoretical Physics.
XC acknowledges support from DARPA DRINQS program. This research is supported in part by the National Science Foundation under Grant No. NSF PHY-1748958.
We acknowledge support from the Center for Scientific Computing from the CNSI, MRL: an NSF MRSEC (DMR-1720256) and NSF CNS-1725797, and University of Maryland supercomputing resources.
S. X and B. S acknowledge support from the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research Quantum Algorithms Teams program as part of the QOALAS collaboration.
AYG is supported by the NSF Graduate Research Fellowship Program under Grant No. DGE-1840340.
AYG also acknowledges partial support by the DoE ASCR Quantum Testbed Pathfinder program (award No. DE-SC0019040), DoE BES QIS program (award No. DE-SC0019449), NSF PFCQC program, AFOSR, ARO MURI, ARL CDQI, and NSF PFC at JQI.
\renewcommand\section{\@startsection {section}{1}{\z@}%
{-3.5ex \@plus -1ex \@minus -.2ex}%
{2.3ex \@plus.2ex}%
{\normalfont\large\bfseries}}
\renewcommand\subsection{\@startsection{subsection}{2}{\z@}%
{-3ex\@plus -1ex \@minus -.2ex}%
{1.5ex \@plus .2ex}%
{\normalfont\normalsize\bfseries}}
\renewcommand\subsubsection{\@startsection{subsubsection}{3}{\z@}%
{-2.5ex\@plus -1ex \@minus -.2ex}%
{1.5ex \@plus .2ex}%
{\normalfont\normalsize\bfseries}}
\def\@runningauthor{}\newcommand{\runningauthor}[1]{\def\@runningauthor{#1}}
\def\@runningtitle{}\newcommand{\runningtitle}[1]{\def\@runningtitle{#1}}
\renewcommand{\ps@plain}{%
\renewcommand{\@oddhead}{\footnotesize\scshape \hfill\@runningtitle\hfill}}
\pagestyle{plain}
\g@addto@macro\bfseries{\boldmath}
\makeatother
\usepackage{amsthm,amsmath,amssymb}
\usepackage{graphicx}
\usepackage[colorlinks=true,citecolor=black,linkcolor=black,urlcolor=blue]{hyperref}
\usepackage{color}
\usepackage {cite}
\usepackage[mathscr]{euscript}
\usepackage{bm}
\usepackage{comment}
\theoremstyle{plain}
\newtheorem{theorem}{Theorem}
\newtheorem{lemma}{Lemma}
\newtheorem{corollary}{Corollary}
\newtheorem{proposition}{Proposition}
\newtheorem{fact}{Fact}
\newtheorem{observation}{Observation}
\newtheorem{claim}{Claim}
\theoremstyle{definition}
\newtheorem{definition}{Definition}
\newtheorem{example}{Example}
\newtheorem{conjecture}{Conjecture}
\newtheorem{open}{Open Problem}
\newtheorem{problem}{Problem}
\newtheorem{question}{Question}
\theoremstyle{remark}
\newtheorem{remark}[theorem]{Remark}
\newtheorem{note}[theorem]{Note}
\title{A Radix-M Construction for Complementary Sets}
\author{Srdjan Z. Budi\v{s}in\thanks{This work was partially supported by the Ministry of Education and Science of the Republic of Serbia under the Project TR-36029, year 2015.}\\
\small RT-RK, Novi Sad, Serbia\\[-0.8ex]
\small\tt [email protected]\\
}
\begin{document}
\maketitle
\thispagestyle{empty}
\begin{abstract}
We extend the paraunitary (PU) theory for complementary pairs to complementary sets and complete complementary codes (CCC) by proposing a new PU construction. A special, but very important, case of complementary sets (and CCC), based on standard delays, is analyzed in detail and a new ``Radix-M generator'' (RM-G) is presented. The RM-G can be viewed as a generalization of the Boolean generator for complementary pairs. An efficient correlator for standard complementary sets and CCC is also presented. Finally, examples of polyphase, QAM and hexagonal PU sets of three sequences are given.
\end{abstract}
{\bf Keywords:} Complementary sequences, complementary sets, complete complementary code, paraunitary matrix, efficient correlator, QAM constellation.
\section{Introduction}
Complementary pairs of binary sequences were introduced by Golay in 1961 \cite{Golay61z}. They were generalized to complementary sets in 1972 \cite{TsengLiu}. Complete complementary codes (CCC) were introduced in 1988 \cite{N-Shift} as a collection of complementary sets which have zero cross-correlation sums. Complementary sequences were also extended to polyphase \cite{FrankPolyphase,SiwaswamiMulti} and QAM \cite{RobingTarokh,New16-QAM,New64-QAM2006,New64-QAM2010,Ying2010,Zilong2013} constellations.
An efficient correlator was introduced for complementary pairs in 1991 \cite{BudEfficient}. It was generalized to complementary sets of four sequences in 2004 \cite{jimenez2004efficientSet4} and to larger sets in 2007 \cite{de2007modular}. The efficient correlator is useful in pulse compression applications \cite{levanon2004radar} (radar, altimeters, ultrasound, sonar, indoor location, etc.) and synchronization \cite{popovic1999new}.
A generator based on paraunitary (PU) matrices that generates arbitrary binary, polyphase and QAM complementary pairs was introduced in 2013 \cite{BudPU} along with an efficient correlator. A new Boolean generator for complementary pairs was derived from the PU generator in 2014 \cite{BudBoolean2014}.
Complementary sets are of interest for PAPR reduction in OFDM \cite{DavisJedwab99}, as spreading sequences for loosely synchronized CDMA \cite{de1992bandlimited}, Multi-Carrier CDMA \cite{liu2014new} etc.
In Section 2 we present definitions and notation. In Section 3 we present the new PU theory of complementary sets. In Section 4 we present the new ``Radix-M generator''. In Section 5 we show that the PU generator also generates CCC and we derive the new efficient correlator for complementary sets and CCC. Some examples for sets of 3 sequences are given in Section 6. The conclusion is given in Section 7.
\section{Definitions and Notation}
In this section we introduce basic definitions and notation.
\subsection{The digital expansion}
Any integer $n=0,1,\cdot \cdot \cdot,M^K-1$ can be represented by an $M$-base (radix-$M$) digital expansion:
\begin{equation}\label{expansion}
n = \sum_{k=0}^{K-1} {d_k (n) \cdot M^k}
\end{equation}
where $[d_0 (n),d_1 (n),\cdot \cdot \cdot,d_{K-1} (n) ]$ are the digits of the expansion.
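As a concrete illustration, the digits $d_k(n)$ can be computed by repeated division (a minimal sketch; the helper names are ours):

```python
def digits(n, M, K):
    """Radix-M digits [d_0(n), ..., d_{K-1}(n)] of n, least significant first."""
    d = []
    for _ in range(K):
        d.append(n % M)
        n //= M
    return d

def undigits(d, M):
    """Reconstruct n = sum_k d_k * M**k from its digit list."""
    return sum(dk * M ** k for k, dk in enumerate(d))

# Example: 7 = 1 + 2*3 in radix 3, so digits(7, 3, 2) -> [1, 2]
```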
\subsection{Digital signal processing (DSP) basics}
A discrete-time signal is represented by $x(n)$. The Z-transform of $x(n)$ is defined as:
\begin{equation}\label{Ztransform}
x(Z^{-1} )= \sum_{n=0}^{L-1} {x(n) \cdot Z^{-n}}
\end{equation}
We say that $x(Z^{-1} )$ is the Z-domain representation of $x(n)$. The Z-transform of the delayed signal $x(n+k)$ is: $\sum_{n=0}^{L-1} {x(n+k) \cdot Z^{-n}}=x(Z^{-1} ) \cdot Z^{-k} $. The spectrum $S_x(\omega)$ of $x(n)$ is obtained if we choose $Z=e^{2 \pi i \omega}$, i.e. $S_x(\omega)=x(e^{2 \pi i \omega} )$.
Multiplying $x(Z^{-1})$ by a polynomial $h(Z^{-1})$ modifies the spectrum of $x$; thus $h$ is called a digital filter. The resulting time-domain signal corresponds to the convolution of $x(n)$ with $h(n)$, and therefore $h(n)$ is called the filter impulse response. A filter with $M$ inputs and $M$ outputs is called a MIMO filter. It is represented in the Z-domain by multiplication with a matrix polynomial in $Z^{-1}$.
Correlation of $x(n)$ with $y(n)$ is equivalent to convolution of $x^*(-n)$ with $y(n)$, i.e.
\begin{equation}\label{crosscorr}
C_{x,y} (Z^{-1} )=x^*(Z ) \cdot y (Z^{-1}) .
\end{equation}
If $y$ is identical to $x$, it is called auto-correlation; otherwise, it is called cross-correlation.
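In the time domain, (\ref{crosscorr}) says that correlation is convolution with the conjugated, time-reversed sequence. A minimal sketch (function names are ours; for length-$L$ inputs, output index $m$ corresponds to lag $k=m-L+1$):

```python
def conv(x, y):
    """Linear convolution of two (possibly complex) sequences."""
    out = [0j] * (len(x) + len(y) - 1)
    for i, xi in enumerate(x):
        for j, yj in enumerate(y):
            out[i + j] += xi * yj
    return out

def xcorr(x, y):
    """Aperiodic cross-correlation C_{x,y}(k) for k = -L+1 .. L-1,
    computed as the convolution of conj(x(-n)) with y(n)."""
    xrev = [v.conjugate() for v in reversed(x)]
    return conv(xrev, y)
```

For example, `xcorr([1, 2], [3, 4])` yields the values 6, 11, 4 at lags $k=-1,0,1$.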
\subsection{Unitary and paraunitary (PU) matrices}
Elements of an $M \times M$ matrix $\bm{V}$ are denoted by $V_{p,q}$, where $p,q\in\left\{0,1,\cdot \cdot \cdot,M-1\right\}$ are row and column indices, respectively. If the matrix is a function of discrete time $n$, its elements are $V_{p,q} (n)$. We use superscripts to denote a collection of matrices by $\bm{V}^{(k)}$ and matrix elements by $V_{p,q}^{(k)}$. Matrix elements can be expressed by using matrix multiplication:
\begin{equation}\label{indexing}
V_{p,q}=\bm{v}_p^T \cdot \bm{V} \cdot \bm{v}_q
\end{equation}
where $(\cdot)^T$ denotes matrix transposition and $\bm{v}_q$ is the ``position column vector'' defined by:
\begin{equation}\label{v}
\bm{v}_q=[\delta (q-0),\delta (q-1),\cdot \cdot \cdot,\delta(q-M+1) ]^T;
\end{equation}
where $\delta(\cdot)$ is the Kronecker delta function. It is obvious that $\bm{v}_k$ is the $k$-th column of the identity matrix of size $M$.
Here are some examples of $\bm{v}_q$:
\[ \bm{v}_0=[1,0,0,\cdot \cdot \cdot,0]^T, \bm{v}_1=[0,1,0,\cdot \cdot \cdot,0]^T, \cdot \cdot \cdot , \bm{v} _{M-1}=[0,0,0,\cdot \cdot \cdot,1]^T \]
The position vector $\bm{v}_q$ has two properties that we will use later:
\begin{equation}\label{VectorByV}
[a_0,a_1,\cdot \cdot \cdot,a_{M-1}]^T=\sum_{m=0}^{M-1} a_m \cdot \bm{v}_m;
\end{equation}
\begin{equation}\label{diagv}
diag(\bm{v}_q)=\bm{v}_q \cdot \bm{v}_q^T.
\end{equation}
A unitary matrix is defined by the relation: $ \bm{U} \cdot \bm{U}^H=C \cdot \bm{I} $ where $C$ is a positive real constant and $(\cdot)^H$ is the Hermitian operator (when $C=1$ the matrix $\bm{U}$ is strict-sense unitary, otherwise it is wide sense unitary). Equivalent unitary matrices are obtained by permutation of rows/columns and/or by phase shifting entire rows or columns.
A paraunitary matrix is a function of a variable $Z$ satisfying: $ \bm{U}(Z) \cdot \widetilde{\bm{U}(Z)} = C \cdot \bm{I} $, where $\widetilde{(\cdot)}$ is the tilde operator defined by: $ \widetilde{\bm{U} (Z )}=\bm{U}^H (Z^{-1} )$. We can see that PU matrices reduce to unitary matrices for $|Z|=1$.
\subsection{Sets of complementary sequences}
A sequence of length $L$ is denoted by $x(n)$ for $n\in\left\{0,1,\cdot \cdot \cdot,L-1\right\}$. The aperiodic cross-correlation function (ACCF) is denoted by $C_{x,y} (k)$ for $k\in\left\{-L+1,\cdot \cdot \cdot,0,\cdot \cdot \cdot,L-1\right\}$ and its Z-transform is given by (\ref{crosscorr}) where $x(Z^{-1} )$ is the Z-transform of $x(n)$ given by (\ref{Ztransform}). The aperiodic auto-correlation function (AACF) is defined in Z-domain as: $R_x (Z^{-1} ) = C_{x,x} (Z^{-1} )=x^*(Z ) \cdot x (Z^{-1}) $.
A set of sequences $x^{(m)} (n)$; $m=0,1,\cdot \cdot \cdot,M-1$ is complementary if
\begin{equation}\label{complementarityt}
\sum_{m=0}^{M-1} {R_{x^{(m)} } (k) }=0~\text{for all}~k\neq0.
\end{equation}
In Z-domain the complementarity condition (\ref{complementarityt}) is:
\begin{equation}\label{complementarityz}
\sum_{m=0}^{M-1} {R_{x^{(m)}} (Z^{-1} )}
= \sum_{m=0}^{M-1} {\left(x^{(m)} (Z)\right)}^* \cdot {x^{(m)} (Z^{-1}) }
= \widetilde{\bm{x}(Z^{-1} )} \cdot \bm{x}(Z^{-1} )
= C
\end{equation}
where: $ \bm{x}(Z^{-1} )=[x^{(0)} (Z^{-1} ) , x^{(1)} (Z^{-1} ) ,\cdot \cdot \cdot, x^{(M-1)} (Z^{-1} ) ]^T$.
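For instance, the classic length-4 binary Golay pair satisfies (\ref{complementarityt}): all AACF sidelobes cancel in the sum. A quick check (helper name is ours; conjugation is omitted since the sequences are real):

```python
def aacf(x):
    """Aperiodic auto-correlation R_x(k), k = 0..L-1, for a real sequence."""
    L = len(x)
    return [sum(x[n] * x[n + k] for n in range(L - k)) for k in range(L)]

pair = [[1, 1, 1, -1], [1, 1, -1, 1]]   # a binary Golay complementary pair
totals = [sum(vals) for vals in zip(*(aacf(s) for s in pair))]
# only the zero lag survives, with value M*L = 2*4 = 8
assert totals == [8, 0, 0, 0]
```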
\section{The PU algorithm}
The paraunitary (PU) generator is just a unitary transform based recursive generator expressed in Z-domain and matrix form.
\subsection{Unitary transform recursion}
The unitary transform $\bm{y} (Z^{-1} )$ of a complementary set $\bm{x}(Z^{-1} )$ given by:
\begin{equation}\label{unitarytrans}
\bm{y} (Z^{-1} ) = \bm{U} \cdot \bm{x}(Z^{-1} )
\end{equation}
where $ \bm{U}$ is a unitary matrix, is also a complementary set because:
\begin{equation}\label{unitarycomplementarity}
\widetilde{\bm{y}(Z^{-1} )} \cdot \bm{y}(Z^{-1} )
= \widetilde{\bm{x}(Z^{-1} )} \cdot \bm{U}^H \cdot \bm{U} \cdot \bm{x}(Z^{-1} )
=\widetilde{\bm{x}(Z^{-1} )} \cdot C \cdot \bm{x}(Z^{-1} ) =Const. \end{equation}
Delayed sequences from a complementary set $\bm{x}(Z^{-1} )$ (delays do not affect their AACFs so they remain complementary) form a new complementary set:
\begin{equation}\label{delaytrans}
\bm{y}' (Z^{-1} ) = \bm{D} (Z^{-1} ) \cdot \bm{x}(Z^{-1} )
\end{equation}
where the delay matrix $\bm{D} (Z^{-1} )$ is a diagonal matrix of delay elements $Z^{-k}$.
\begin{proposition}\label{RecursiveAlg}
A recursive algorithm based on $K$ repetitions of the previous two transforms generates new complementary sets of $M$ sequences.
We start the recursion with $\bm{x}_0^{(r)} (Z^{-1} )$:
\begin{equation}\label{recursion0}
\bm{x}_0^{(r)} (Z^{-1} ) = \bm{U}^{(0)} \cdot \bm{v}_r \end{equation}
and repeat:
\begin{equation}\label{recursionn}
\bm{x}_k^{(r)} (Z^{-1} ) = \bm{U}^{(k)} \cdot \bm{D}^{(k)} (Z^{-1} ) \cdot \bm{x}_{k-1}^{(r)} (Z^{-1} )
\end{equation}
for $k=1,2,\cdot \cdot \cdot,K$, where $r \in \{0,1,\cdot \cdot \cdot,M-1\}$, $\bm{U}^{(k)} ; (k=0,1,\cdot \cdot \cdot,K)$ are unitary matrices and $ \bm{D}^{(k)} (Z^{-1} ); (k= 1,2,\cdot \cdot \cdot,K)$ are delay matrices \footnote{Here we use (\ref{VectorByV}) to write the RHS of (\ref{DelayMatrix}).}:
\begin{equation}\label{DelayMatrix}
\bm{D}^{(k)} (Z^{-1} ) = diag([Z^{-D_0^{(k)}} ,Z^{-D_1^{(k)}},Z^{-D_2^{(k)}},\cdot \cdot \cdot,Z^{-D_{M-1}^{(k)}} ])=\sum_{m=0}^{M-1} diag(\bm{v}_m \cdot Z^{-D_m^{(k)} } ) .
\end{equation}
\end{proposition}
\begin{proof}
The initial set satisfies condition (\ref{complementarityz}). In each iteration a new set of the same size but larger sequence length is generated according to (\ref{unitarytrans}) and (\ref{delaytrans}).
\end{proof}
\begin{definition}\label{regulardelay}
Regular delays $\bm{D}^{(k)} (Z^{-1} )$ are defined by: $D_m^{(k)}=m \cdot d^{(k)}$ . In that case:
\begin{equation}\label{regulardelayeq}
\bm{D}^{(k)} (Z^{-1} )=diag([Z^{-0 \cdot d^{(k)} },Z^{-1 \cdot d^{(k)} },Z^{-2 \cdot d^{(k)}},\cdot \cdot \cdot,Z^{-(M-1) \cdot d^{(k)}}])=\sum_{m=0}^{M-1} diag(\bm{v}_m \cdot Z^{-m \cdot d^{(k)}}) .
\end{equation}
\end{definition}
\begin{definition}\label{standardelay}
Standard delays are regular delays defined by: $d^{(k)} = M^{{\pi}_{k-1}} $, where ${\pi}_k$ is any permutation of $ \{0,1,\cdot \cdot \cdot,K-1 \}$ and $M$ is the set size. Standard delays have the form:
\begin{equation}\label{standarddelayeq}
\bm{D}^{(k)} (Z^{-1} ) = \sum_{m=0}^{M-1}{diag(\bm{v}_m \cdot Z^{-m \cdot M^{{\pi}_{k-1 }} } )} .
\end{equation}
\end{definition}
\begin{definition}\label{StandarRecursionSeq}
Standard sequences are generated by Proposition \ref{RecursiveAlg} using standard delays from Definition \ref{standardelay}.
\end{definition}
\begin{remark}\label{whystandard}
Standard delays correspond to digits in the radix-$M$ expansion (\ref{expansion}) and generate all delays (degrees of $Z^{-1}$) in $\bm{x}_K^{(r)} (Z^{-1} )$ without repetition or gaps. Thus the length of standard sequences is $M^K$.
\end{remark}
\subsection{Paraunitary generator}
We can expand the recursion (\ref{recursion0}) and (\ref{recursionn}) as:
\[
\bm{x}^{(r)} (Z^{-1} ) = \bm{U}^{(K)} \cdot \bm{D}^{(K)} (Z^{-1} ) \cdot \bm{U}^{(K-1)} \cdot \bm{D}^{(K-1)} (Z^{-1} ) \cdot \cdot \cdot \cdot \cdot \bm{U}^{(1)} \cdot \bm{D}^{(1)} (Z^{-1} ) \cdot \bm{U}^{(0)} \cdot \bm{v}_r =\]
\begin{equation}\label{PU}
=\left( \prod_{l=K}^1{\bm{U}^{(l)} \cdot \bm{D}^{(l)} (Z^{-1} ) } \right) \cdot \bm{U}^{(0)} \cdot \bm{v}_r=\bm{\mathscr{M}}^{(K)} (Z^{-1} ) \cdot \bm{v}_r
\end{equation}
where $\bm{\mathscr{M}}^{(K)} (Z^{-1} )$ is called the (Z-domain) generating matrix of complementary sets of sequences and it is defined as follows.
\begin{definition}\label{ZGeneratingMatrix}
Z-domain generating matrix is:
\begin{equation}\label{genmatrixeq}
\bm{\mathscr{M}}^{(K)} (Z^{-1} )=\left( \prod_{l=K}^1 {\bm{U}^{(l)} \cdot \bm{D}^{(l)} (Z^{-1} ) } \right) \cdot \bm{U}^{(0)} =\bm{U}^{(K)} \cdot \left( \prod_{l=K}^1 {\bm{D}^{(l)} (Z^{-1} ) \cdot \bm{U}^{(l-1)} } \right).
\end{equation}
\end{definition}
\begin{remark}
The generating matrix is a PU matrix because it is a product of square PU matrices. The transposed generating matrix is also a generating matrix. From (\ref{PU}) we can see that the complementary set $\bm{x}^{(r)} (Z^{-1} )$ is the $r$-th row (or $r$-th column) of the generating matrix. So a generating matrix defines $M$ different complementary sets plus $M$ additional sets when the generating matrix is transposed.
\end{remark}
\begin{remark}
In order to facilitate the manipulation of equations we will rewrite the $\bm{U}^{(l)}$ and $\bm{D}^{(l)} $ matrices using the substitution $k=K-l$ for $l=1,2,\cdot \cdot \cdot,K$ so (\ref{genmatrixeq}) becomes:
\begin{equation}\label{newPU}
\bm{\mathscr{M}}^{(K)} (Z^{-1} )
= \bm{U}^{(0)} \cdot \prod_{k=0}^{K-1} { \left( \bm{D}^{(k)} (Z^{-1} ) \cdot \bm{U}^{(k+1)} \right) }
=\left( \prod_{k=0}^{K-1} {\bm{U}^{(k)} \cdot \bm{D}^{(k)} (Z^{-1} ) } \right) \cdot \bm{U}^{(K)}.
\end{equation}
where the new superscripts for $\bm{U}^{(k)}$ and $\bm{D}^{(k)} (Z^{-1} )$ go from $0$ to $K$ and from $0$ to $K-1$ respectively.
\end{remark}
\begin{definition}\label{genmatrixt}
The time domain generating matrix $\bm{\mathscr{M}}^{(K)} (n)$ for complementary sequences is the inverse Z-transform of the Z-domain generating matrix from Definition \ref{ZGeneratingMatrix}.
\end{definition}
\begin{definition}\label{standardsets}
Standard complementary sets are sets generated by (\ref{newPU}) using standard delays (\ref{standarddelayeq}):
\begin{equation} \label{standardsetseq}
\bm{\mathscr{M}}^{(K)} (Z^{-1} ) = \bm{U}^{(0)} \cdot \prod_{k=0}^{K-1}
\left( \sum_{m=0}^{M-1} diag \left( \bm{v}_m \cdot Z^{-m \cdot M^{{\pi}_k}}
\right) \cdot \bm{U} ^{(k+1)} \right).
\end{equation}
\end{definition}
\begin{corollary}\label{standardPU}
Standard complementary sets generated by (\ref{newPU}) using standard delays (\ref{standarddelayeq}) have size M and sequence length $M^K$.
\end{corollary}
\begin{proof}
As (\ref{standardsetseq}) is derived from the algorithm of Proposition \ref{RecursiveAlg} with Definition \ref{StandarRecursionSeq}, the generated sets are the same; thus they have the same size and sequence length.
\end{proof}
\section{The new radix-M generator (RM-G)}
The new generator is based on radix-M digits, thus, it is a generalization of the Boolean (radix-2) generator (R2-G).
Theorem \ref{mainth} and Corollary \ref{maincor} are the main results of this paper. First we need a lemma which is elementary to prove.
\begin{lemma}\label{lemma1}
Let $ \bm{F}_k (m)$, $k \in \{0,1,\cdot \cdot \cdot,K-1 \}$ be a set of matrix functions of an integer variable $m$. Then
\begin{equation}\label{lemma1eq}
\prod_{k=0}^{K-1}{ \left( \sum_{m=0}^{M-1} \bm{F}_k (m) \right) }
=\sum_{n=0}^{M^{K}-1} \left\{ \prod_{k=0}^{K-1} \bm{F}_k (d_k (n))
\right\} .
\end{equation}
\end{lemma}
\begin{comment}
\begin{proof}
The proof is elementary. The product of $K$ sums of $M$ summands is equal to a sum of $M^K$ products of $K$ factors. We use the variable $n$ and its digits $d_k (n)$ to order the factors in the resulting sum. It is best understood via example.
\end{proof}
\begin{example}\label{lema1example}
We use $K=2$ and $M=3$ for illustration:
\[ \left(\bm{F}_0 (0)+\bm{F}_0 (1)+\bm{F}_0 (2) \right) \cdot \left( \bm{F}_1 (0)+\bm{F}_1 (1)+\bm{F}_1 (2) \right)=
\sum_{n=0}^{8} { \bm{F}_0 (d_0 (n)) \cdot \bm{F}_1 (d_1 (n))
}= \]
\[ =\bm{F}_0 (0) \cdot \bm{F}_1 (0)
+\bm{F}_0 (1) \cdot \bm{F}_1 (0)+\bm{F}_0 (2) \cdot \bm{F}_1 (0)+\bm{F}_0 (0) \cdot \bm{F}_1 (1)+ \]
\[ \bm{F}_0 (1) \cdot \bm{F}_1 (1)+\bm{F}_0 (2) \cdot \bm{F}_1 (1)+ \bm{F}_0 (0) \cdot \bm{F}_1 (2)+\bm{F}_0 (1) \cdot \bm{F}_1 (2)+\bm{F}_0 (2) \cdot \bm{F}_1 (2). \]
\end{example}
\end{comment}
\begin{theorem}\label{mainth}
The time-domain generating matrix of a standard complementary set from Definition \ref{standardsets} is:
\begin{equation}\label{maintheq}
\bm{\mathscr{M}}^{(K)} (n) = \bm{U}^{(0)} \cdot \prod_{k=0}^{K-1} \left( diag \left(\bm{v}_{d_{{\pi}_k } (n)} \right) \cdot \bm{U}^{(k+1)} \right)
\end{equation}
where $\bm{U}^{(k)}; k=0,1,\cdot \cdot \cdot,K $ are unitary matrices, $d_k (n)$ and $\bm{v}_p$ are given by (\ref{expansion}) and (\ref{v}).
\end{theorem}
\begin{proof}
We will use Lemma \ref{lemma1} with $\bm{F}_k (m)= diag(\bm{v}_m) \cdot \bm{U}^{(k+1)} \cdot Z^{-m \cdot M^{\pi_k } }$ :
\begin{equation}\label{lemma1mod}
\prod_{k=0}^{K-1} \left( \sum_{m=0}^{M-1} diag(\bm{v}_m) \cdot {\bm{U}^{(k+1)} \cdot Z^{-m \cdot M^{\pi_k } } } \right) = \sum_{n=0}^{M^K-1} \left(\prod_{k=0}^{K-1} diag(\bm{v}_{d_{\pi_k } (n)}) \cdot \left(\bm{U}^{(k+1)} \cdot Z^{-d_{\pi_k } (n) \cdot M^{\pi_k } } \right) \right) .
\end{equation}
Next we calculate the Z-transform of (\ref{maintheq}):
\[
\bm{\mathscr{M}}^{(K)} (Z^{-1} )= \sum_{n=0}^{L-1} \bm{\mathscr{M}}^{(K)} (n) \cdot Z^{-n} =\bm{U}^{(0)} \cdot \sum_{n=0}^{L-1} \left( \prod_{k=0}^{K-1} diag \left( \bm{v}_{d_{\pi_k } (n)} \right) \cdot \bm{U}^{(k+1)} \right) \cdot Z^{-n} \]
we substitute $n$ in $Z^{-n}$ with its digital expansion (\ref{expansion}):
\[
\bm{\mathscr{M}}^{(K)} (Z^{-1})
= \bm{U}^{(0)} \cdot \sum_{n=0}^{L-1} \left( \prod_{k=0}^{K-1} diag \left( \bm{v}_{d_{\pi_k } (n)} \right) \cdot \bm{U}^{(k+1)} \right) \cdot Z^{- \sum_{k=0}^{K-1} d_{\pi_k } (n) \cdot M^{\pi_k }} \]
\[
\bm{\mathscr{M}}^{(K)} (Z^{-1}) = \bm{U}^{(0)} \cdot \sum_{n=0}^{L-1} \left( \prod_{k=0}^{K-1} diag \left( \bm{v}_{d_{\pi_k } (n)} \right) \cdot \bm{U}^{(k+1)} \right) \cdot { \prod_{k=0}^{K-1} Z^{-d_{\pi_k } (n) \cdot M^{\pi_k } } } \]
\[
\bm{\mathscr{M}}^{(K)} (Z^{-1}) = \bm{U}^{(0)} \cdot \sum_{n=0}^{L-1} \left( \prod_{k=0}^{K-1} diag \left( \bm{v}_{d_{\pi_k } (n)} \right) \cdot \bm{U}^{(k+1)} \cdot Z^{-d_{\pi_k } (n) \cdot M^{\pi_k } } \right) \]
\[
\bm{\mathscr{M}}^{(K)} (Z^{-1} ) = \bm{U}^{(0)} \cdot \prod_{k=0}^{K-1}
\left( \sum_{m=0}^{M-1} diag \left( \bm{v}_m \cdot Z^{-m \cdot M^{{\pi}_k}}
\right) \cdot \bm{U} ^{(k+1)} \right) \]
where the last equality follows by applying (\ref{lemma1mod}); this is exactly the Z-domain generating matrix (\ref{standardsetseq}).
\end{proof}
\begin{corollary}\label{maincor}
The s-th sequence $x_s^{(r)} (n )$ from the r-th standard complementary set $\bm{x}^{(r)} (n )$ can be expressed as:
\[ x_s^{(r)} (n )=
\mathscr{M}_{r,s}^{(K)} (n) =U_{r,d_{{\pi}_0 } (n)}^{(0)} \cdot \left( \prod_{k=1}^{K-1} U_{d_{{\pi}_{k-1} } (n),d_{{\pi}_k } (n)} ^{(k)}
\right) \cdot U_{d_{{\pi}_{K-1} } (n),s}^{(K)} = \]
\begin{equation}\label{maincoreq}
U_{r,d_{{\pi}_0 } (n)}^{(0)} \cdot U_{d_{{\pi}_0 } (n),d_{{\pi}_1 } (n)}^{(1)} \cdot U_{d_{{\pi}_1 } (n),d_{{\pi}_2 } (n)}^{(2)} \cdot \cdot \cdot \cdot \cdot U_{d_{{\pi}_{K-2} } (n),d_{{\pi}_{K-1} } (n)}^{(K-1)} \cdot U_{d_{{\pi}_{K-1} } (n),s}^{(K)} \end{equation}
for $n=0,1,\cdot \cdot \cdot,L-1$, where $L=M^K$ is the sequence length, $({\pi}_0,{\pi}_1,\cdot \cdot \cdot,{\pi}_{K-1} )$ is a permutation of the set $\{0,1,\cdot \cdot \cdot,K-1\}$, $\bm{U}^{(k)}$ are unitary matrices, $d_k (n)$ are defined by (\ref{expansion}) and $r,s \in \{0,1,\cdot \cdot \cdot,M-1\}$ define a set and a sequence in the set respectively.
\end{corollary}
\begin{proof}
Applying (\ref{indexing}) to (\ref{maintheq}) we get:
\[
\mathscr{M}_{r,s}^{(K)} (n) = \bm{v}_r^T \cdot \bm{\mathscr{M}}^{(K)} (n) \cdot \bm{v}_s = \bm{v}_r^T \cdot \bm{U}^{(0)} \cdot \prod_{k=0}^{K-1} \left( (diag ( \bm{v}_ {d_{\pi_k } (n)} ) ) \cdot \bm{U}^{(k+1)} \right) \cdot \bm{v}_s. \]
Using (\ref{diagv}) with $q=d_{\pi_k } (n)$ we get $diag(\bm{v}_ {d_{\pi_k } (n)}) =\bm{v}_ {d_{\pi_k } (n)} \cdot \bm{v}_ {d_{\pi_k } (n)}^T$ and:
\[
\mathscr{M}_{r,s}^{(K)} (n)
= \bm{v}_r^T \cdot \bm{U}^{(0)} \cdot \prod_{k=0}^{K-1} \left\{ \left(\bm{v}_{d_{\pi_k } (n)} \cdot \bm{v}_{d_{\pi_k } (n)}^T \right) \cdot \bm{U}^{(k+1)} \right\} \cdot \bm{v}_s = \]
\[
\left\{ \bm{v}_r^T \cdot \bm{U}^{(0)} \cdot \bm{v}_{d_{\pi_0 } (n)} \right\} \cdot \left\{ \bm{v}_{d_{\pi_0 } (n)}^T \cdot \bm{U}^{(1)} \cdot \bm{v}_{d_{\pi_1 } (n)} \right\} \cdot \cdot \cdot \cdot\cdot \left\{ \bm{v}_{d_{\pi_{K-1} } (n)}^T \cdot \bm{U}^{(K)} \cdot \bm{v}_s\right\} \]
applying (\ref{indexing}) again, we get (\ref{maincoreq}).
\end{proof}
The generator given in Corollary \ref{maincor} can generate any sequence from the set directly from the discrete time variable $n$ using only scalar multiplications in contrast with the PU generator which uses matrix multiplications.
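As a sanity check, (\ref{maincoreq}) can be transcribed directly into code (a sketch under our own naming, not the author's implementation). With $M=3$, $K=2$, all $\bm{U}^{(k)}$ equal to the DFT matrix and the identity permutation, the resulting set of three length-9 sequences is complementary:

```python
import cmath

def dft(M):
    """M x M DFT matrix F with F[p][q] = w**(p*q), w = exp(2*pi*i/M)."""
    w = cmath.exp(2j * cmath.pi / M)
    return [[w ** (p * q) for q in range(M)] for p in range(M)]

def rm_g(U, perm, M, r, s, n):
    """x_s^{(r)}(n): a product of K+1 scalar entries of the unitary
    matrices U[0..K], selected by the radix-M digits of n."""
    K = len(U) - 1
    d, m = [], n
    for _ in range(K):
        d.append(m % M)
        m //= M
    val = U[0][r][d[perm[0]]]
    for k in range(1, K):
        val *= U[k][d[perm[k - 1]]][d[perm[k]]]
    return val * U[K][d[perm[K - 1]]][s]

M, K = 3, 2
U = [dft(M)] * (K + 1)
L = M ** K
set0 = [[rm_g(U, [0, 1], M, 0, s, n) for n in range(L)] for s in range(M)]
# the sum of the three AACFs vanishes at every non-zero lag
for k in range(1, L):
    tot = sum(sum(x[n].conjugate() * x[n + k] for n in range(L - k))
              for x in set0)
    assert abs(tot) < 1e-9
```

Changing $r$ selects the other complementary sets defined by the same generating matrix.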
\section{Complete complementary code and efficient correlation}
We show that the complete complementary code (CCC) and the efficient correlator for complementary sets are easily derived from the PU generating matrix.
\subsection{Complete complementary codes}
The definition of CCC requires ACCF orthogonality in addition to AACF complementarity \cite{N-Shift}. Thus two complementary sets $\bm{x}^{(p)} (Z^{-1} )$ and $\bm{x}^{(q)} (Z^{-1} )$ are orthogonal if for $ p \ne q$:
\begin{equation}\label{CCCcondition}
\sum_{r=0}^{M-1} {C_{x_r^{(p)}, x_r^{(q)} } (Z^{-1} )}
= \widetilde{ \bm{x}^{(p)} (Z^{-1} )} \cdot \bm{x}^{(q)} (Z^{-1} )
= 0.
\end{equation}
A CCC consists of $M$ complementary sets of $M$ sequences each, where all sets are mutually orthogonal.
For two PU sets $\bm{x}^{(p)} (Z^{-1})=\bm{\mathscr{M}}(Z^{-1} ) \cdot \bm{v}_p$ and $\bm{x}^{(q)} (Z^{-1})=\bm{\mathscr{M}}(Z^{-1} ) \cdot \bm{v}_q$ we have:
\[
{\widetilde{\bm{x}^{(p)} (Z^{-1})} } \cdot \bm{x}^{(q)} (Z^{-1})
= \widetilde{(\bm{\mathscr{M}}(Z^{-1}) \cdot \bm{v}_p )} \cdot (\bm{\mathscr{M}} (Z^{-1}) \cdot \bm{v}_q ) = \]
\begin{equation}\label{PUareCCC}
\bm{v}_p^T \cdot \widetilde{\bm{\mathscr{M}} (Z^{-1} )} \cdot \bm{\mathscr{M}} (Z^{-1} ) \cdot \bm{v}_q
= \bm{v}_p^T \cdot C \cdot \bm{I} \cdot \bm{v}_q
= C \cdot \delta(p-q). \end{equation}
This proves (\ref{CCCcondition}) which means that the complementary sets are orthogonal.
So any PU generating matrix is also the generating matrix for CCC.
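This orthogonality can be verified numerically on a small example (our own sketch: $M=2$, $K=3$, all unitary matrices Hadamard, standard delays with the identity permutation):

```python
def gen_entry(U, M, K, r, s, n):
    """Entry M_{r,s}(n) of the time-domain generating matrix (radix-M
    digits, identity permutation): sample n of sequence s of set r."""
    d, m = [], n
    for _ in range(K):
        d.append(m % M)
        m //= M
    val = U[0][r][d[0]]
    for k in range(1, K):
        val *= U[k][d[k - 1]][d[k]]
    return val * U[K][d[K - 1]][s]

H = [[1, 1], [1, -1]]                    # 2 x 2 Hadamard matrix (unitary)
M, K = 2, 3
L = M ** K
sets = [[[gen_entry([H] * (K + 1), M, K, r, s, n) for n in range(L)]
         for s in range(M)] for r in range(M)]

def cross_sum(A, B, k):
    """Sum over s of the lag-k cross-correlations C_{A_s,B_s}(k), k >= 0
    (the sequences are real, so conjugation is omitted)."""
    return sum(sum(A[s][n] * B[s][n + k] for n in range(L - k))
               for s in range(M))

# distinct sets: every lag of the summed cross-correlation vanishes
assert all(cross_sum(sets[0], sets[1], k) == 0 for k in range(L))
# same set: complementarity, only the zero lag survives (value M*L)
assert cross_sum(sets[0], sets[0], 0) == M * L
```

`cross_sum` checks non-negative lags; negative lags follow from $C_{x,y}(-k)=C^*_{y,x}(k)$ and the symmetry of the orthogonality condition in $p$ and $q$.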
\subsection{The efficient generator and correlator}
A significant advantage of the PU generator over alternative generators is that it is very easy to derive an efficient correlator (matched filter) from it. The correlator efficiency, compared to a straightforward implementation of the correlator, is $L/\log_2(L)$. For example, for a typical length $L=1024$, the straightforward implementation requires 1024 operations per output sample, while the efficient correlator requires only 10.
The efficient correlator is based on the simplicity of PU matrix inversion. In fact from the definition of a PU matrix:
$ \widetilde{\bm{U}(Z^{-1} )} \cdot \bm{U}(Z^{-1} ) = C \cdot \bm{I}$ we have: $ {\bm{U} (Z^{-1} )}^{-1}=\widetilde{\bm{U}(Z^{-1} )}/C$. Thus the correlating filter is:
\[ \bm{\Psi}^{(K)} (Z^{-1} ) = \widetilde{\bm{\mathscr{M}}^{(K)} (Z^{-1} )}/C
= {\left(\bm{\mathscr{M}}^{(K)} (Z )\right)}^H/C. \]
However this filter is not causal so we have to introduce a delay equal to $L-1$ to make it causal:
\begin{equation}\label{correlator}
\bm{\Phi}^{(K)} (Z^{-1} )
=Z^{-L+1} \cdot \bm{\Psi}^{(K)} (Z^{-1} )
=Z^{-L+1} \cdot {\left(\bm{\mathscr{M}}^{(K)} (Z )\right)}^H/C .
\end{equation}
We can see that the correlating filter is a MIMO filter. If we introduce a signal to the $r$-th input of this filter (other inputs being zero) we will obtain simultaneously the cross-correlations of this input signal with all $M$ sequences from the $r$-th complementary set at the $M$ filter outputs.
\section{Some Examples of complementary sets of 3 sequences}
We illustrate the radix-3 generator (R3-G) with complementary sets of polyphase, QAM and hexagonal sequences.
\subsection{Polyphase sets}
A unitary matrix that is known to exist for every matrix size is the DFT matrix ${\bm{F}}$. For matrix size $M \times M$ it is defined as: $ F_{p,q} = e^{2 \pi i \cdot p \cdot q/M}=w^{p \cdot q}$ where $p,q \in \{0,1,\cdot \cdot \cdot,M-1\}$ and $w=e^{2 \pi i/M}$. For $M=3$ it becomes:
\begin{equation}\label{DFT}
\bm{F}= \left[ \begin{array}{ccc}
1 & 1 & 1 \\
1 & w & w^2 \\
1 & w^2 & w
\end{array} \right]
=
\left[ \begin{array}{ccc}
1 & 1 & 1 \\
1 & w & w^* \\
1 & w^* & w
\end{array} \right]
=
\left[ \begin{array}{ccc}
1 & 1 & 1 \\
1 & w & -1-w \\
1 & -1-w & w
\end{array} \right]
\end{equation}
where
\begin{equation}\label{w}
w=e^{2\pi i/3}=-1/2+\sqrt{3} i/2.
\end{equation}
In the next example we will use: $K=2, U^{(k)} =F$ for $k \in \{0,1,2\}$, and $\pi=\{0,1\}$, so we have from (\ref{maincoreq}):
\[
\mathscr{M}_{r,s}^{(2)} (n)=U_{r,d_0 (n) }^{(0)} \cdot U_{d_0 (n),d_1 (n)}^{(1)} \cdot U_{d_1 (n),s}^{(2)} =F_{r,d_0 (n) } \cdot F_{d_0 (n),d_1 (n) } \cdot F_{d_1 (n),s}= w^{\mu_{r,s} (n) } \]
where:
$\mu_{r,s} (n)=r \cdot d_0 (n)+d_0 (n) \cdot d_1 (n)+d_1 (n) \cdot s $ and:
\[ d_0 (n){\rvert _0^8}=[0,1,2,0,1,2,0,1,2]; d_1 (n){\rvert_0^8}=[0,0,0,1,1,1,2,2,2]. \]
In the next equation we will use $d_k$ instead of $d_k (n)$ for shorter notation:
\[ \bm{\mu }(n)= \left[ \begin{array}{ccc}
d_0 \cdot d_1 &d_0 \cdot d_1 +d_1 &d_0 \cdot d_1+2 \cdot d_1 \\
d_0 +d_0 \cdot d_1 &d_0 +d_0 \cdot d_1 +d_1 &d_0 +d_0 \cdot d_1 +2 \cdot d_1\\
2 \cdot d_0 +d_0 \cdot d_1 &2 \cdot d_0 +d_0 \cdot d_1 +d_1 &2 \cdot d_0 +d_0 \cdot d_1 +2 \cdot d_1
\end{array} \right] \]
\[ = \left[ \begin{array}{ccc}
{[0,0,0,0,1,2,0,2,1]}&{[0,0,0,1,2,0,2,1,0]}&{[0,0,0,2,0,1,1,0,2]}\\
{[0,1,2,0,2,1,0,0,0]}&{[0,1,2,1,0,2,2,2,2]}&{[0,1,2,2,1,0,1,1,1]}\\
{[0,2,1,0,0,0,0,1,2]}&{[0,1,2,1,1,1,2,0,1]}&{[0,2,1,2,2,2,1,2,0]}
\end{array} \right] .\]
So the sequences from the first set are:
\[ {\left[ \begin{array}{ccc}
{w^{\mu_{0,0} }}\\
{w^{\mu_{0,1} }}\\
{w^{\mu_{0,2} }}
\end{array} \right] }
=
{\left[ \begin{array}{ccc}
{1,1,1,1,w,w^2,1,w^2,w}\\
{1,1,1,w,w^2,1,w^2,w,1}\\
{1,1,1,w^2,1,w,w,1,w^2}
\end{array} \right] } . \]
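These exponent computations are easy to reproduce. The following Python sketch (illustrative only) regenerates the exponents $\mu_{0,s}(n)$ of the first set directly from the base-3 digits $d_0(n), d_1(n)$ of $n$:

```python
# Illustrative sketch: reproduce the first complementary set of the example by
# computing mu_{r,s}(n) = r*d0(n) + d0(n)*d1(n) + d1(n)*s (mod 3), where
# d0, d1 are the base-3 digits of n (standard delays).
import cmath

w = cmath.exp(2j * cmath.pi / 3)

def mu(r, s, n):
    d0, d1 = n % 3, (n // 3) % 3      # base-3 digits of n
    return (r * d0 + d0 * d1 + d1 * s) % 3

# exponents of the first set (r = 0), as listed in the text
expected = [
    [0, 0, 0, 0, 1, 2, 0, 2, 1],
    [0, 0, 0, 1, 2, 0, 2, 1, 0],
    [0, 0, 0, 2, 0, 1, 1, 0, 2],
]
for s in range(3):
    assert [mu(0, s, n) for n in range(9)] == expected[s]

# the actual sequences are w raised to these exponents
set0 = [[w ** mu(0, s, n) for n in range(9)] for s in range(3)]
print("first set reproduced")
```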
The other two orthogonal sets can be constructed from $\mu_{1,s}(n)$ and $\mu_{2,s}(n)$:
\[ \left[ \begin{array}{ccc}
{1,w,w^2,1,w^2,w,1,1,1}\\
{1,w,w^2,w,1,w^2,-1,-1,-1}\\
{1,w,w^2,w^2,w,1,w,w,w};
\end{array} \right] ;
\left[ \begin{array}{ccc}
{1,w^2,w,1,1,1,1,w,w^2}\\
{1,w^2,w,w,w,w,w^2,1,w}\\
{1,w^2,w,-1,-1,-1,w,w^2,1}
\end{array} \right]. \]
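As a numerical sanity check on the construction (a hedged, illustrative script, not part of the paper), one can verify that each row index $r$ of the generating matrix yields a complementary set, and that different rows are mutually uncorrelated at every shift, i.e. the three sets form a CCC with $C = 27$:

```python
# Hedged sketch: verify that summed aperiodic autocorrelations of each set
# vanish for every nonzero shift (complementary property) and that different
# sets are mutually uncorrelated at all shifts (CCC property).
import cmath

w = cmath.exp(2j * cmath.pi / 3)
L = 9

def seq(r, s):
    out = []
    for n in range(L):
        d0, d1 = n % 3, (n // 3) % 3
        out.append(w ** ((r * d0 + d0 * d1 + d1 * s) % 3))
    return out

def acorr(a, b, tau):
    # aperiodic correlation at nonnegative shift tau
    return sum(a[n] * b[n + tau].conjugate() for n in range(L - tau))

for r in range(3):
    for rp in range(3):
        for tau in range(L):
            total = sum(acorr(seq(r, s), seq(rp, s), tau) for s in range(3))
            expected = 27 if (r == rp and tau == 0) else 0
            assert abs(total - expected) < 1e-9
print("complementary sets and CCC property verified (C = 27)")
```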
Many different polyphase sets (and CCC) can be constructed by using different unitary matrices (for example, equivalent DFT matrices) and different permutations $\pi_k$. For some set sizes (such as 4, 8, 12, \ldots) binary sequences can also be constructed based on Hadamard matrices \cite{Hadamard}.
Now we will determine the matched MIMO (3I3O) filter for the above example. First we need to write down the Z-domain generating matrix (\ref{standardsetseq}): ${\bm{\mathscr{M}}}^{(K)}(Z^{-1} ) =$
\[
\left[ \begin{array}{ccc} 1&1&1\\1&w&w^2\\1&w^2&w \end{array} \right] \cdot
\left[ \begin{array}{ccc} 1&0&0\\0&Z^{-1}&0\\0&0&Z^{-2} \end{array} \right] \cdot
\left[ \begin{array}{ccc} 1&1&1\\1&w&w^2\\1&w^2&w \end{array} \right] \cdot
\left[ \begin{array}{ccc} 1&0&0\\0&Z^{-3}&0\\0&0&Z^{-6} \end{array} \right] \cdot
\left[ \begin{array}{ccc} 1&1&1\\1&w&w^2\\1&w^2&w \end{array} \right]. \]
The MIMO matched filter calculated from (\ref{correlator}) is: ${\bm{\Phi}}^{(K)} (Z^{-1} )=$
\[ Z^{-8} \cdot
\left[ \begin{array}{ccc} 1&1&1\\1&w^2&w\\1&w&w^2 \end{array} \right] \cdot
\left[ \begin{array}{ccc} 1&0&0\\0&Z^3&0\\0&0&Z^6 \end{array} \right] \cdot
\left[ \begin{array}{ccc} 1&1&1\\1&w^2&w\\1&w&w^2 \end{array} \right] \cdot
\left[ \begin{array}{ccc} 1&0&0\\0&Z^1&0\\0&0&Z^2 \end{array} \right] \cdot
\left[ \begin{array}{ccc} 1&1&1\\1&w^2&w\\1&w&w^2 \end{array} \right] = \]
\[
\left[ \begin{array}{ccc} 1&1&1\\1&w^2&w\\1&w&w^2 \end{array} \right] \cdot
\left[ \begin{array}{ccc} Z^{-6}&0&0\\0&Z^{-3}&0\\0&0&1 \end{array} \right] \cdot
\left[ \begin{array}{ccc} 1&1&1\\1&w^2&w\\1&w&w^2 \end{array} \right] \cdot
\left[ \begin{array}{ccc} Z^{-2}&0&0\\0&Z^{-1}&0\\0&0&1 \end{array} \right] \cdot
\left[ \begin{array}{ccc} 1&1&1\\1&w^2&w\\1&w&w^2 \end{array} \right]. \]
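The matched-filter behaviour can also be demonstrated numerically: feeding the $r$-th set into the bank of conjugated, time-reversed sequences produces a single peak of height $C=27$ at delay $L-1$ on the matching output and zeros everywhere else. The following Python sketch (illustrative only, not the paper's implementation) shows this:

```python
# Sketch of the matched-filter (correlator) behaviour as a 3-input/3-output
# FIR bank: taps are the conjugated, time-reversed sequences, so the combined
# delay is L-1. All names here are illustrative.
import cmath

w = cmath.exp(2j * cmath.pi / 3)
L = 9

def seq(r, s):
    d = lambda n: (n % 3, (n // 3) % 3)
    return [w ** ((r * d(n)[0] + d(n)[0] * d(n)[1] + d(n)[1] * s) % 3)
            for n in range(L)]

def conv(x, h):
    y = [0j] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

r = 0                                     # transmit the r-th set
outputs = {}
for rp in range(3):                       # observe each filter output
    taps = [[c.conjugate() for c in reversed(seq(rp, s))] for s in range(3)]
    outputs[rp] = [sum(col)
                   for col in zip(*(conv(seq(r, s), taps[s]) for s in range(3)))]
    for k, yk in enumerate(outputs[rp]):
        expected = 27 if (rp == r and k == L - 1) else 0
        assert abs(yk - expected) < 1e-9
print("matched filter: single peak of 27 at delay L-1")
```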
We can also notice the structural similarity between the generating matrix (\ref{standardsetseq}) and the matched filter (\ref{correlator}), which can be exploited to implement both the generator and the correlator with the same hardware.
\subsection{Sets in rectangular QAM constellations}
There are no binary unitary matrices of size $3 \times 3$, but there are $3 \times 3$ unitary matrices with Gaussian integer elements \cite{Algebra91}, for example: $\left[ \begin{array}{ccc} 2+2i&2&2\\2&-1+3i&-1-i\\2&-1-i&-1+3i \end{array} \right]. $ These matrices can be used as unitary matrices in the radix-3 generator to produce sets of three QAM complementary sequences.
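A quick numerical check (illustrative, not part of the paper) confirms that this Gaussian-integer matrix is unitary up to a scale, $\bm{A}^H \bm{A} = 16\bm{I}$, so it contributes a factor of 16 to $C$ when used in the generator:

```python
# Illustrative check that the Gaussian-integer matrix above satisfies
# A^H A = 16 I (unitary up to scale).
A = [[2 + 2j, 2, 2],
     [2, -1 + 3j, -1 - 1j],
     [2, -1 - 1j, -1 + 3j]]

for p in range(3):
    for q in range(3):
        s = sum(A[k][p].conjugate() * A[k][q] for k in range(3))
        assert abs(s - (16 if p == q else 0)) < 1e-9
print("A^H A = 16 I")
```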
\subsection{Sets in hexagonal constellations}
Hexagonal constellations have been known for some time but have never gained popularity despite their advantages over rectangular QAM \cite{BudPU}. However, since polyphase sets of complementary sequences are based on the DFT matrix with $2\pi /3$ phase elements, they are naturally suitable for hexagonal constellations. We can use Eisenstein-Jacobi integers (EIs) \cite{Algebra91} to construct hexagonal sequences. EIs can be represented as $Z=X+Y \cdot w$ (where $X, Y$ are integers and $w$ is given by (\ref{w})). EIs are closed under addition and multiplication. An example of a unitary matrix with EI elements is: $\left[ \begin{array}{ccc} 2&2&2\\2&-2+w&-w\\2&-w&-2+w \end{array} \right] $. These matrices can be used as unitary matrices in the radix-3 generator to produce sets of three hexagonal complementary sequences.
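The same kind of numerical check (illustrative only) shows that this Eisenstein-integer matrix satisfies $\bm{A}^H \bm{A} = 12\bm{I}$, making it usable as a factor in the radix-3 generator for hexagonal-constellation sets:

```python
# Illustrative check that the Eisenstein-integer matrix above satisfies
# A^H A = 12 I (unitary up to scale).
import cmath

w = cmath.exp(2j * cmath.pi / 3)          # w = -1/2 + sqrt(3)/2 i
A = [[2, 2, 2],
     [2, -2 + w, -w],
     [2, -w, -2 + w]]

for p in range(3):
    for q in range(3):
        s = sum(A[k][p].conjugate() * A[k][q] for k in range(3))
        assert abs(s - (12 if p == q else 0)) < 1e-9
print("A^H A = 12 I")
```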
\begin{comment}
\section{Future work}
The PU framework paves the way for many new constructions of complementary sequences. Sets of two sequences (pairs) are well studied and understood \cite{BudPU}. Each pair has a unique representation called a canonical form and it can be uniquely described by a vector of complex numbers called an omega vector \cite{BudPU}. A unique pair of complementary sequences can also be generated from ANY omega vector. Such concepts should be developed further for larger sets $(M>2)$.
Enumeration of binary, polyphase and QAM sequences is still in progress for $M=2$. Enumeration for sets has yet to be done.
Complementary sets have been studied mostly for binary and polyphase constellations. Standard constellations use only odd GI components (corresponding to I and Q coordinates). Constellations with arbitrary GIs are a promising area of research. Hexagonal constellations are also promising, at least for complementary pairs.
Most previous work was based on standard delays. However, when $M$ is a composite number, the use of other delays is an attractive option. For example, for length 4, regular delays are multiples of $\{0,1,2,3\}$. However, non-regular delays such as multiples of $\{0,0,1,1\}$ can also be used with the PU algorithm to build very regular generators.
A promising area for future research is very large sets, especially of set size $M=2^m$. For example, for $M=32$, millions of nonequivalent Hadamard matrices exist, each Hadamard matrix having millions of equivalent forms. They can be used to construct enormous binary sets, as each unitary matrix in the PU product (11) can be a different Hadamard matrix.
Complementary arrays \cite{ComplArrays} are also a promising area for the PU theory.
\end{comment}
\section{Conclusion}
The PU theory breaks with the tradition that sequence construction is based on number theory or Galois fields. Instead, the PU theory is based on digital signal processing (DSP) tools, and more specifically on filter-bank theory. The PU theory of complementary sets is very general in that arbitrary sets can be represented and generated.
For the special case of a PU generator with standard delays, a radix-M generator (RM-G) is derived as a generalization of the Boolean generator for complementary pairs (which is R2-G). Furthermore, RM-G can generate complementary sets in any constellation.
We also show that complete complementary codes (CCC) and the efficient correlator are by-products of the PU theory.
Some illustrative examples are presented for sets of 3 sequences. They include polyphase, rectangular QAM and hexagonal constellations.
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec:intro}
In conversational speech, individual utterances may reference context from previous utterances. If certain topics or words have been mentioned in the past, related words are likely to be used. In automatic speech recognition, language models are responsible for capturing the probabilities of words likely to be uttered given past words. Traditionally, these probabilities are captured in an n-gram language model. For example, in a trigram language model, we would store a mapping of all 3-word combinations found in our training corpus, along with the probabilities of the third word following the previous two words. The past "history" that a trigram language model can capture is limited to two words. It is also limited in its ability to capture semantics. With the advent of more computational power, neural-based methods for language modeling, like the Recurrent Neural Network (RNN) architecture, have become possible. In neural architectures, words are typically represented as word embeddings, n-dimensional vectors that attempt to capture the semantics in the latent space. Due to its recurrent set-up, the RNN-based language model is able to encapsulate previous word embeddings in its hidden state (see Figure \ref{fig:rnn}).
\begin{figure}[ht]
\centering
\includegraphics[width=8cm]{rnn_better.png}
\caption{A recurrent neural network predicting the probability of the next word using hidden states}
\label{fig:rnn}
\end{figure}
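To make the recurrence in Figure \ref{fig:rnn} concrete, here is a minimal pure-Python sketch of one RNN language-model step. All sizes and weights are made-up toy values; this is not the model used later in this paper:

```python
# Minimal, illustrative RNN language-model step: the hidden state folds in
# each new word embedding, and the next word's distribution is a softmax over
# the vocabulary. Toy sizes; not a trained model.
import math
import random

random.seed(0)
V, E, H = 5, 4, 3                          # vocab, embedding, hidden sizes
rand = lambda r, c: [[random.uniform(-0.1, 0.1) for _ in range(c)] for _ in range(r)]
emb, Wx, Wh, Wo = rand(V, E), rand(H, E), rand(H, H), rand(V, H)

def step(word_id, h):
    x = emb[word_id]
    h_new = [math.tanh(sum(Wx[i][j] * x[j] for j in range(E)) +
                       sum(Wh[i][j] * h[j] for j in range(H)))
             for i in range(H)]
    logits = [sum(Wo[v][i] * h_new[i] for i in range(H)) for v in range(V)]
    z = sum(math.exp(l) for l in logits)
    probs = [math.exp(l) / z for l in logits]
    return h_new, probs

h = [0.0] * H
for wid in [1, 3, 2]:                      # a toy word-ID sequence
    h, probs = step(wid, h)
assert abs(sum(probs) - 1.0) < 1e-9        # a valid distribution over V words
```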
The hidden state at each step is a vector computed from a series of matrix operations on the current word embedding and the hidden state from the previous step. However, with a long sequence, RNNs on their own suffer from vanishing gradients; Long Short-Term Memory (LSTM) units remedy this by introducing additional operations for computing each hidden state \cite{GRAD}. Despite their name, they fall short of capturing word dependencies past 200 words and focus most heavily on the past 50 words \cite{LSTM_issues}. The transformer architecture, however, does not suffer from this problem. It does away with recurrence in favor of an attention mechanism \cite{ATTENTION}. The attention mechanism allows the network to learn a weighted average to determine which words are important at each position. The network can capture much longer dependencies, as all previous words are encoded and passed into the multi-head attention layers in parallel (see Figure \ref{fig:transformer}). The multi-head attention layer learns to reference words from the beginning of a very long context. This is a desirable attribute that we would like to apply to conversational speech recognition; however, there is one problem: many transformer-based architectures take in fixed-size input \cite{BERT}. Any change to the input, whether it be adding another word or changing the last word in the input, would require recomputing all values of the network. This can be prohibitive in the case of speech recognition, as it would require the re-computation of all word inputs from the beginning of our speech context for every new utterance that we attempt to re-score. The transformer-XL architecture solves this by introducing a segment-level recurrence mechanism \cite{Transformer_xl}. In this work we use the Kaldi framework \cite{kaldi} and a transformer-XL architecture to do efficient lattice re-scoring.
For each new utterance, we cache the segment-level embeddings for use in future utterance lattice re-scoring.
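The attention mechanism referenced above can be sketched in a few lines of pure Python. This is an illustrative scaled dot-product attention with toy dimensions, not the actual XLNet implementation: every position attends to all key positions in parallel, with softmax weights replacing recurrence.

```python
# Illustrative scaled dot-product attention: each query row produces a
# softmax-weighted mix of the value rows. Toy dimensions only.
import math

def attention(Q, K, V):
    d = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]    # softmax over all key positions
        out.append([sum(wt * v[j] for wt, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# one query attending over three key/value positions
Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]]
V = [[1.0], [2.0], [3.0]]
out = attention(Q, K, V)
assert len(out) == 1 and 1.0 < out[0][0] < 3.0   # a convex mix of the values
```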
\begin{figure}[ht]
\centering
\includegraphics[width=8cm]{transformer.png}
\caption{A simplified view of the transformer architecture. No recurrence, just attention.}
\label{fig:transformer}
\end{figure}
\section{Past Work}
\label{sec:past}
Typically, due to Kaldi's use of weighted finite state transducers, fast decoding techniques require that a language model be expanded as a finite state transducer \cite{wfst}. This is difficult to do with an RNNLM, as it requires approximating the RNN's probability distribution and converting it into a deterministic FST form. There have been attempts to sample an RNNLM to produce an equivalent FST; however, the accuracy was equivalent to that of a bigram language model \cite{rnn_to_fst}.
Because of this, first-pass decoding is done using an n-gram language model. After first-pass decoding with an n-gram language model, a second-pass re-scoring is done using an RNNLM. However, this can be slow, so n-gram approximation is used, but this limits the history that an RNN has available to it. \cite{pruned_rnn} proposes a pruned approach which allows increasing the n-gram approximation to more n-grams but without degrading performance.
In other work on handling long-range dependencies, \cite{rnn_adaptation} uses an RNNLM with a "conversation cache" and a DNN-based adaptation technique. The conversation cache is a count of seen unigrams, used to modify unigram priors before RNNLM re-scoring.
\section{Approach}
\label{sec:approach}
We use XLNet \cite{xlnet} to attempt to capture long range language dependencies. At the time of this writing, XLNet provides the best accuracy for many downstream tasks that require language modeling pre-training, including question-answering, text classification, and other natural language understanding tasks. We also attempt to take advantage of a transformer's parallel properties to make some performance optimizations when re-scoring our lattices.
\subsection{Why XLNet over BERT?}
\label{sec:xlnet}
XLNet is a generalized auto-regressive model that can be used for language modeling based on the transformer-XL architecture \cite{xlnet}. This means that the outputs of XLNet depend strictly on the previous outputs. This is different from other state-of-the-art language models like BERT (Bidirectional Encoder Representations from Transformers) which rely on conditioning the probabilities given surrounding words. In BERT, the model tries to predict a masked word by looking at all surrounding unmasked words (figure \ref{fig:bert_masking}).
\begin{figure}[ht]
\centering
\includegraphics[width=6cm]{bert.png}
\caption{BERT attempts to predict the masked word using both left and right contexts. During training, a certain percentage of words are masked for use in prediction. If both "San" and "Francisco" were masked, BERT would not be able to use information when decoding one of the words to help in decoding the other.}
\label{fig:bert_masking}
\end{figure}
The concept of masking the input introduces a few disadvantages mentioned in \cite{xlnet}. Firstly, a masked token is rarely seen in most subsequent language modeling tasks, so there tends to be a discrepancy between the "pre-training" step and the "fine-tuning" step. Typically, in the fine-tuning step, BERT is adapted to attempt tasks like question-answering. Secondly, BERT does not use information from one decoded masked token to help in decoding another masked token. In other words, all masked tokens are assumed to be independent. This is necessary in BERT, because there is a strict separation between unmasked and masked tokens, as the masked tokens will be predicted. In XLNet, however, the separation is a directional one: anything to the left of the word that we attempt to predict is fair game (during training, words are reordered to get the benefit of surrounding context, but conceptually, orders are seen from left to right for the factorization order). This lends itself well to decoding in speech recognition, as we typically re-score a lattice from left to right (assuming you are visualizing a lattice in English) while we prune low-scoring results. This does not mean, however, that the encodings for the words do not capture context from surrounding words. With permutation language modeling, during training, the ordering of previous tokens can be modified (figure \ref{fig:xlnet_permutation}).
\begin{figure}[ht]
\centering
\includegraphics[width=6cm]{xlnet_perm_better.png}
\caption{In XLNet training, every sample can have a different permutation ordering. Instead of passing an explicit masking token like BERT, the permutation ordering dictates what is visible to each token's hidden states across the network. Relative positional encoding preserve information about the original sequence ordering. }
\label{fig:xlnet_permutation}
\end{figure}
With permutation language modeling, the network is trained on a regiment of random-order word sequences. For example, the phrase "I am going to San Francisco to watch the Warriors play basketball" could be used to train XLNet by selecting a random factorization order, which is the order in which the network will "see" the tokens. Since this is accomplished by passing an attention mask, which controls which positions are allowed to attend to which other positions, we don't give up any parallelism. Then we select a pivot index, say 6, corresponding to the word "to." All words preceding the pivot would be used to predict words succeeding the pivot. So we may try to predict p("to"| "I", "am", "going", "to", "San", "Francisco") and p("to" | "San", "am", "I", "to", "going", "Francisco"), among all other permutations of the preceding words (see figure \ref{fig:xlnet_permutation}). Although this random ordering seems jarring, the original word sequence orderings are still preserved through "relative positional embeddings." XLNet maintains as an input the relative positional distance from the word that is being predicted. This means that a random permutation of previous words can help the network learn an embedding from the surrounding words while preserving information related to the ordering of those words.
\begin{figure}[ht]
\centering
\includegraphics[width=6cm]{use_of_mems.png}
\caption{The transformer-XL, the architecture that XLNet is based on, allows for caching of hidden states from previous outputs. These states can be used as inputs into the attention layers.}
\label{fig:xlnet_mems}
\end{figure}
Another feature of XLNet, is that it allows the exporting of its internal hidden states, which can be passed in on subsequent calls. This provides a recurrence mechanism that can help capture long range dependencies (see figure \ref{fig:xlnet_mems}). The segment-level memory units (mems) allow us to capture longer range dependencies in language without recomputing all previous hidden states. There are large performance improvements for longer attention lengths as well \cite{Transformer_xl}.
\subsection{Decoding and Re-scoring in Kaldi}
\label{sec:decoding}
Weighted finite state transducers (WFSTs or FSTs) provide a structured way to perform decoding. They represent a directed graph where each arc can represent probabilities of various state transitions. FSTs can represent hidden Markov models (HMMs), context dependency models, lexicons (pronunciation dictionaries), and grammars (n-gram language models). Kaldi uses FSTs heavily, and combines all of the previously mentioned components into a single composed FST (HCLG.fst) \cite{wfst}. During decoding, an acoustic model is used to produce log probabilities of various phonemes, and the HCLG.fst will be used to map the outputs of the acoustic model from HMM state transitions to context-dependent phones (H.fst), to context-independent phones (C.fst), to pronunciations of a single word (L.fst), to subsequent words (G.fst). The resulting output is a lattice, which can also be represented as an FST (see figure \ref{fig:lattice_fst}). The lattice contains the combined acoustic model and language model probabilities for each arc.
The language modeling probabilities represent the log probability of the next state given the previous n-grams from the path leading up to that state.
Typically, for larger language models, a separate re-scoring step occurs where the arcs in a lattice are re-scored by subtracting the initial language modeling score that was given to a particular arc and inserting the new score from the larger language model. In a pruned approach, only a subset of the arcs are re-scored. Using a set of heuristics \cite{pruned_rnn}, arcs that require re-scoring are added to a priority queue and only the most promising paths are explored. In deterministic approaches, all possible paths are explored and re-scored.
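The arc-level arithmetic of this re-scoring step can be illustrated with made-up log probabilities (the values below are hypothetical, chosen only to show the subtract-and-insert operation):

```python
# Illustrative arithmetic for second-pass re-scoring of a single lattice arc:
# keep the acoustic part, drop the first-pass LM log probability, and add the
# larger model's log probability. All values are made up.
acoustic_lp = -4.2          # acoustic log prob on the arc (unchanged)
old_lm_lp = -2.9            # first-pass n-gram LM log prob
new_lm_lp = -1.7            # second-pass (e.g. transformer) LM log prob

old_arc_score = acoustic_lp + old_lm_lp
new_arc_score = old_arc_score - old_lm_lp + new_lm_lp
assert abs(new_arc_score - (acoustic_lp + new_lm_lp)) < 1e-12
```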
\begin{figure}[ht]
\centering
\includegraphics[width=6cm]{fst_vert.png}
\caption{This is an example of a lattice. "So does" or "sodas"? Each arc can represent a word and a log probability that combines the acoustic and language modeling scores. The best path for the lattice represents the best scoring sentence decoded for this utterance.}
\label{fig:lattice_fst}
\end{figure}
The re-scoring lattice operation takes the form of an FST composition. In FST composition \cite{openfst} two FSTs are combined to form an FST with the combined weights for the arcs that "exist" in both.
\section{Experiments and Results}
\label{sec:exp}
All experiments were conducted using the TED-LIUM3 dataset \cite{ted_lium3}. TED-LIUM is an English speech recognition training corpus from TED talks. This dataset was chosen due to its topical nature, usually in the form of 15 minutes or more of speech where the speaker is discussing a particular topic. Our TED-LIUM dataset contains a training set of 248 hours of speech with aligned transcription, with approximately 2 hours of development and 3 hours of test data. An acoustic model and n-gram language model are trained to provide a baseline word-error rate.
We use a library that contains a pre-trained version of XLNet, an implementation of the transformer-XL architecture \cite{hugging}. The model is fairly large, with 110M parameters. It was previously trained on BooksCorpus \cite{book_corpus} and English Wikipedia, which have 13GB of plain text combined \cite{xlnet}. We run a transfer learning step using PyTorch on the TED-LIUM dataset. We implement a gRPC server that can run inference on our model over the local network. In Kaldi, we implement a DeterministicOnDemandFst that calls into our exported model. Our DeterministicOnDemandFst maps Kaldi word symbol identifiers to tokens that our model understands and vice-versa. When re-scoring a lattice, we remove the first-pass FST values and compose our DeterministicOnDemandFst (see figure \ref{fig:rescoring_flow}). The segment embedding for the best path after lattice re-scoring is cached and passed as input for re-scoring future lattices from the same speech context. We compare this technique to first-pass decoded lattices (no re-scoring) and to re-scoring with an RNNLM trained directly on the TED-LIUM dataset.
\subsection{XLNet gRPC Inference Server}
For ease of integration, we implement a gRPC server for access over the local network. gRPC provides a high-performance, cross-language way to send data across processes and over the network. Our implementation of a gRPC server takes in a sequence of words and returns the log probability for the next word.
\lstinputlisting[language=protobuf3,style=protobuf]{protobuf/transformer.proto}
There are two methods that can be invoked on our gRPC server: GetLogProb and GetLogProbBatch. GetLogProb takes a single sequence of words and returns the log probability of the next word. It will also return a "mems\_id" which can be used on subsequent requests to reference the cached hidden memory states on the gRPC server. GetLogProbBatch can take a batch of requests and execute them in a single batch on the model. Words are tokenized to map to the word IDs that the XLNet model understands. Also, all sequence lengths in the batch are expected to be the same, so a padding token is added to pad all sequences to the maximum-length sequence of the batch. A "common\_mems\_id" can be passed with a batch request to load the cached hidden memory states as part of the batch inference call into the model. The memory states are repeated to match the dimensions of the batch. After each sequential lattice is re-scored, its best path is passed as a sequence of words to GetLogProb to predict the probability of an end-of-sentence token; the returned "mems\_id" is saved.
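The padding behaviour described for GetLogProbBatch can be sketched as follows (the helper name and the pad token ID are illustrative assumptions, not part of the actual server):

```python
# Sketch of batch padding: all token-ID sequences in a batch request are
# padded to the longest sequence with a pad ID so they form a rectangular
# batch. The pad ID value is an assumption for illustration.
PAD_ID = 0

def pad_batch(sequences, pad_id=PAD_ID):
    max_len = max(len(s) for s in sequences)
    return [s + [pad_id] * (max_len - len(s)) for s in sequences]

batch = pad_batch([[11, 42, 7], [5], [9, 9]])
assert batch == [[11, 42, 7], [5, 0, 0], [9, 9, 0]]
assert len({len(s) for s in batch}) == 1   # rectangular
```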
\begin{figure}[ht]
\centering
\includegraphics[width=8cm]{grpc_arch_before_better.png}
\caption{Our architecture for lattice re-scoring: TransformerLmOnDemandFst is composed with the lattice to re-score each lattice arc. Each inference request is sent to our gRPC server. After re-scoring the lattice, the best path is passed to the transformer to get its memory states, those are cached in the TransformerLmOnDemandFst for rescoring future sequential utterances.}
\label{fig:rescoring_flow}
\end{figure}
\subsection{Fine-tuning Language Model}
We fine-tune the XLNet base-cased model for 20 epochs on the TED-LIUM training set. First we concatenate all of the TED-LIUM training transcripts, preserving the order of the utterances. This step is important because we want to be able to condition our language models on very long histories of words (more than 100), so we need to be sure that contiguous training text belongs to the same TED talk. The training text is then tokenized using XLNet's word dictionary. Our training examples consist of blocks of 512 XLNet word IDs. We use an Adam optimizer with a learning rate of 5e-6. We did not attempt different factorization orders (figure \ref{fig:xlnet_permutation}); only the natural factorization order was used. The cross-entropy losses against all of the word positions were back-propagated for experiments where we had no target mapping. Other training runs with blocks of 10 and 4 words were also run.
We changed the random sampling to sequential sampling during training to allow memory blocks to be cached from previous examples. We also experimented with a single permutation mask for the next token in order to mimic inference time, where we attempt to predict a last word that all other word positions and hidden states cannot attend to.
There are a few discrepancies between the pre-trained XLNet and our use-case. All of our words are lower cased and our corpus contains no punctuation. Since the pre-trained XLNet differentiates between upper and lower cased words and has punctuation tokens, it is likely that it relies on them for deriving semantics. XLNet's pre-trained vocabulary is also much larger than TED-LIUM's.
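The preparation of training examples described above can be sketched as follows (the helper is hypothetical, and a tiny block size replaces 512 for illustration):

```python
# Sketch: concatenated, tokenized transcripts are cut into fixed-size blocks
# of word IDs (512 in the paper; 4 here for illustration). The trailing
# remainder shorter than a block is dropped in this sketch.
def make_blocks(token_ids, block_size):
    return [token_ids[i:i + block_size]
            for i in range(0, len(token_ids) - block_size + 1, block_size)]

corpus = list(range(10))                   # stand-in for tokenized transcripts
blocks = make_blocks(corpus, 4)
assert blocks == [[0, 1, 2, 3], [4, 5, 6, 7]]
```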
\subsection{Performance Optimizations}
\label{sec:perf}
\begin{figure}[ht]
\centering
\includegraphics[width=8cm]{grpc_arch_after_better.png}
\caption{An improvement to our lattice rescoring flow involves the NgramCacherOnDemandFst to retrieves all n-grams in the lattice. All n-grams are passed to the TransformerLmOnDemandFst for a single batch request to the transformer server.}
\label{fig:rescoring_flow2}
\end{figure}
Typically with an RNNLM approach, we must have cached the hidden states of previous time steps before running inference for the next FST state. This is improved upon with our transformer LM approach, as we can compute the log probability of the next unigram with a single inference pass. There is no need to compute the hidden state for each additional word in sequence. The problem, however, is that the transformer-based language model (110M parameters for the one used in our experiments) is much larger than the RNNLM (15M parameters) that is referenced in the TED-LIUM recipe in Kaldi. This makes running the model much more expensive. For that reason, we must take a different approach to running our model. Because we are not constrained to running our models in sequence, we can batch all operations for execution in a single pass, if GPU memory permits it (see figure \ref{fig:rescoring_flow2}).
\begin{figure}[ht]
\centering
\includegraphics[width=8cm]{perf_graph.png}
\caption{This is a graph showing the incremental cost of adding a request to the batch.}
\label{fig:perf_res}
\end{figure}
The graph in figure \ref{fig:perf_res} shows that the incremental cost of adding an extra call to the batch operation is negligible.
This allows us to experiment with different techniques; for example, we can relax our pruning parameters, as there is very little additional cost to run a deterministic approach where we score every arc in the lattice.
\subsection{Results}
\label{sec:res}
\begin{table}[]
\centering
\begin{tabular}{c c c}
\hline
Model & Dev WER & Test WER\\ [0.5ex]
\hline
No-rescoring & 8.75 & 8.77 \\
\hline
Base XLNet 4-gram, no mems & 8.54 & 8.73 \\
\hline
Base XLNet 4-gram, with mems & 8.46 & 8.82 \\
\hline
Fine-tuned XLNet 4-gram & 8.32 & 8.47 \\
\hline
Fine-tuned XLNet 10-gram & 8.29 & 8.44 \\
\hline
RNNLM 4-gram & 6.77 & 7.25 \\
\hline
Lattice Oracle Best Path & 1.81 & 1.69 \\
\hline
\end{tabular}
\caption{The pre-trained and fine-tuned transformer models make improvements on the lattice, but they're still not as good as the pre-trained RNNLM for TED-LIUM. This is most likely due to using a very large XLNet, limited GPU resources, and only 25MB worth of TED-LIUM text.}
\label{tab:res}
\end{table}
In order to motivate the problem, we measure the oracle word-error rate, which gives us the path with the minimum word error rate found within each lattice. The oracle word error rate for the test set was found to be 1.70\%. If we were to flawlessly re-score a lattice, we could, in theory, achieve this word error rate; some very good answers exist in the lattice. However, our results in table \ref{tab:res} show how difficult it is to make a dent in the WER with such a large XLNet model. The RNNLM still gives a much better score. We suspect that this is due to a few things: firstly, XLNet has 110M parameters and was trained on approximately 13GB of text, compared to 25MB of text for TED-LIUM. Given the size of the model, and the fact that it was pre-trained on 512 TPUs \cite{xlnet}, we expect that training for 20 epochs on TED-LIUM's text is not enough to overcome the differences between written text and conversational speech. Without fine-tuning, adding memory seems to have an adverse effect on the test set.
\section{Conclusion}
\label{sec:expected}
We proposed a way to use a transformer-based language model in conversational speech recognition. We showed that an XLNet-based language model with 110M parameters is slower in lattice re-scoring than the RNNLM, but there is promise in the transformer architecture's ability to run inference in parallel. For a fairer comparison, a much smaller derivative of XLNet should be used. The transformer also showed a 4\% relative improvement on TED-LIUM when transfer learning and running a 10-gram approximation on the lattice. Given a smaller model and more training time, we expect small XLNets to be used for lattice re-scoring soon.
\section{Future Improvements}
\label{sec:future}
Future improvements include reducing the batch sizes that are sent to our inference server by caching common n-gram prefixes. We also experimented with fine-tuning the language models with memory hidden states but did not have time to compile the results. A much smaller network based on the XLNet architecture would allow for easier fine-tuning and inference. It may also be easier to test using the pre-computed word embedding matrix from TED-LIUM's RNNLM; that would help reduce the overall model size and training time, since we could avoid using the model's internal embedding lookup matrix. We would also like to explore scoring lattices in both directions and re-scoring the n-best outputs directly. We also wish to experiment with expanding the lattice into unique n-gram paths before re-scoring, to reduce the effect of multiple states affecting the path.
\bibliographystyle{IEEEbib}
\section*{Acknowledgments}
We thank the anonymous reviewers for their constructive feedback and comments. This work is supported by the National Natural Science Foundation of China (62072188, 61672241), the Natural Science Foundation of Guangdong Province (2016A030308013), and the Science and Technology Program of Guangdong Province (2019A050510010). This work is also partially supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) and the York Research Chairs (YRC) program.
\section{Conclusion}
\label{sec:conclusion}
This work develops a new graph learning framework--MTD, which aims to inject multi-level transition dynamics into the session-based recommendation. By integrating a position-aware dual-stage attention network and graph hierarchical relation encoder, MTD\ not only models the intra-session sequential transitions, but also derives the high-order item relationships across sessions. Experimental results on different real-world datasets show that MTD\ is superior to many state-of-the-art baselines. In the future, we plan to incorporate item content information (\eg, item text description or reviews) into MTD\ to deal with external attributes in learning semantic-aware item transitions.
\section{Evaluation}
\label{sec:eval}
In this section, we perform extensive experiments on three publicly available real-life recommendation datasets and compare \emph{MTD} with various state-of-the-art techniques. Particularly, we aim to answer the following research questions:
\begin{itemize}[leftmargin=*]
\item \textbf{RQ1}: Does \emph{MTD} consistently outperform other baselines by yielding better recommendation results?
\item \textbf{RQ2}: How do different sub-modules in our \emph{MTD} framework affect the recommendation performance?
\item \textbf{RQ3}: What is the influence of hyperparameter settings in \emph{MTD} for the model performance?
\item \textbf{RQ4}: How is the model interpretation capability of \emph{MTD}?
\item \textbf{RQ5}: How is the computational cost of the \emph{MTD} method?
\end{itemize}
\begin{table}
\centering
\footnotesize
\begin{tabular}{| c | c | c | c |}
\hline
Dataset & Yoochoose & Diginetica & RetailRocket\\
\hline
\# Train Sessions & 369,859 & 719,470 & 433,648 \\
\hline
\# Test Sessions & 55,400 & 60,858 & 15,132 \\
\hline
\# All Items & 17,376 & 43,097 & 36,968 \\
\hline
Average Length & 6.15 & 5.13 & 9.93 \\
\hline
\end{tabular}
\vspace{-0.1in}
\caption{Statistics of the experimented datasets.}
\vspace{-0.2in}
\label{tab:data}
\end{table}
\subsection{Experimental Settings}
\subsubsection{\bf Data Description.}
The data statistics with detailed training/test split settings are shown in Table~\ref{tab:data}. We present the details of the experimented datasets below:\\\vspace{-0.1in}
\noindent \textbf{Yoochoose Data}\footnote{http://2015.recsyschallenge.com/challenge.html}. This data, released for the RecSys'15 Challenge, logs half a year of user clicks on an online retailing site. Following the pre-processing strategies in~\cite{li2017neural,liu2018stamp}, sessions with length $\geq 2$ and items with appearing frequency $\geq 5$ are kept in the training and test sets.\\\vspace{-0.1in}
\noindent \textbf{Diginetica Data}\footnote{http://cikm2016.cs.iupui.edu/cikm-cup}. This data is collected from the CIKM Cup 2016 and records user clicks over a period of six months. To be consistent with the settings in~\cite{wu2019session,liu2018stamp}, we exclude sessions that contain a single clicked item. Sessions in the test set are drawn from the last week. \\\vspace{-0.1in}
\noindent \textbf{Retailrocket Data}\footnote{https://www.kaggle.com/retailrocket/ecommerce-dataset}. It contains user browsing data from another e-commerce company. Following the same settings as~\cite{xu2019graph}, we filter out items browsed fewer than 5 times and sessions shorter than 2 items. The data from the last week is used for testing and the remainder for training.
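For concreteness, the filtering protocol shared across the datasets can be sketched as below. This is one plausible reading (the exact order of item filtering versus session filtering is an assumption, and the function name is hypothetical):

```python
from collections import Counter

def preprocess_sessions(sessions, min_len=2, min_item_freq=5):
    """Drop items appearing fewer than min_item_freq times, then drop
    sessions that become shorter than min_len items."""
    freq = Counter(item for s in sessions for item in s)
    kept = []
    for s in sessions:
        s = [it for it in s if freq[it] >= min_item_freq]  # rare-item filter
        if len(s) >= min_len:                              # short-session filter
            kept.append(s)
    return kept
```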
\subsubsection{\bf Evaluation Metrics.}
We leverage two metrics which are widely adopted in session-based recommendation applications: \textbf{Precision@$K$} (Pre@$K$) and \textbf{Mean Reciprocal Rank@$K$} (MRR@$K$). Following the same rubric as~\cite{wu2019session,li2017neural}, MRR@$K$=0 if the first correctly recommended item is not among the top-$K$ ranked items. Note that larger Pre@$K$ and MRR@$K$ scores indicate better recommendation performance.
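As a concrete per-session illustration of these two metrics (hypothetical helper names, following the MRR@$K$=0 convention above):

```python
def precision_at_k(ranked_items, target, k):
    """Pre@K for one session: 1 if the ground-truth next item
    appears in the top-K ranked list, else 0."""
    return 1.0 if target in ranked_items[:k] else 0.0

def mrr_at_k(ranked_items, target, k):
    """MRR@K for one session: reciprocal rank of the ground-truth
    item if it is ranked within the top K, else 0."""
    for rank, item in enumerate(ranked_items[:k], start=1):
        if item == target:
            return 1.0 / rank
    return 0.0
```

Dataset-level scores are then the averages of these per-session values.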
\subsubsection{\bf Compared Methods.} In our experiments, we consider the following baselines for performance comparison.
\begin{itemize}[leftmargin=*]
\item \textbf{POP}: it explores users' past interested items and makes recommendations with the identified most frequent items.
\item \textbf{S-POP}: it recommends the most popular items to users by considering their activities from the current session.
\item \textbf{ItemKNN}~\cite{davidson2010youtube}: it considers the item correlations using the $k$-nearest neighbors algorithm based on items' cosine similarity.
\item \textbf{GRURec}~\cite{hidasi2015session}: it is a representative session-based recommendation approach using the gated recurrent unit to encode the transitional regularities.
\item \textbf{NARM}~\cite{li2017neural}: it is a neural attention model that augments a recurrent network for session representations, by attending differentially to sequential items.
\item \textbf{STAMP}~\cite{liu2018stamp}: this approach is an attention model to capture user's temporal interests from historical clicks in a session.
\item \textbf{SASRec}~\cite{kang2018self}: this method is built upon the self-attention architecture to model the long-term item transition dynamics.
\item \textbf{SR-GNN}~\cite{wu2019session}: it proposes a graph neural network model to encode item transitions within a session to generate item embedding.
\item \textbf{CSRM}~\cite{wang2019collaborative}: it integrates the inner memory encoder through an outer memory network by considering correlations between neighborhood sessions.
\item \textbf{CoSAN}~\cite{luocollaborative}: it designs self-attention networks to model the collaborative feature information of items from neighborhood sessions.
\end{itemize}
\subsection{Parameter Settings}
Our implementation is based on TensorFlow. The embedding dimensionality $d$ is set to 100. We set the regularization penalty $\lambda_2 = 10^{-6}$. All models are optimized using the Adam optimizer with a batch size of 512 and a learning rate of $1e^{-3}$. The training frequency $f$ in each epoch is set to 1, 4, and 6 for Yoochoose, Diginetica, and Retailrocket, respectively. Furthermore, dropout with a ratio of 0.2 is applied in the training phase to alleviate overfitting. Experiments for most baselines are conducted with their released source code.
\begin{table*}
\centering
\footnotesize
\begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c|c}
\hline
Data & Metric & ~POP~ & S-POP & It-KNN & GRURec & NARM & STAMP & SASRec & SR-GNN & ~CSRM & CoSAN & \emph{MTD} \\
\hline
\multirow{2}{*}{Digi}
& Pre & 0.58 & 20.66 & 26.46 & 20.31 & 36.72 & 37.05 & 38.42 & 38.40 & 38.56 & 37.58 & \textbf{40.22}\\
& MRR & 0.19 & 13.59 & 10.91 & 7.78 & 15.00 & 16.05 & 16.27 & 17.04 & 16.23 & 15.57 & \textbf{17.58}\\
\hline
\multirow{2}{*}{Yooc}
& Pre & 4.59 & 28.61 & 43.40 & 55.13 & 60.19 & 58.79 & 60.42 & 60.84 & 60.46 & 61.01 & \textbf{61.83}\\
& MRR & 1.51 & 18.45 & 21.39 & 25.76 & 29.03 & 29.44 & 30.47 & 30.57 & 30.37 & 30.21 & \textbf{30.83}\\
\hline
\multirow{2}{*}{Reta}
& Pre & 1.59 & 29.67 & 21.41 & 31.01 & 44.74 & 43.14 & 46.39 & 44.88 & 47.21 & 45.83 & \textbf{47.93}\\
& MRR & 0.44 & 21.51 & 9.78 & 15.37 & 25.54 & 26.65 & 26.74 & 26.95 & 27.14 & 26.01 & \textbf{28.51}\\
\hline
\end{tabular}
\vspace{-0.1in}
\caption{Recommendation performance comparison of all methods in terms of Pre@$10$ and MRR@$10$.}
\label{tab:result_across}
\vspace{-0.1in}
\end{table*}
\begin{table}
\centering
\footnotesize
\vspace{-0.05in}
\begin{tabular}{l|c|c|c|c|c}
\hline
Data & Metric & SR-GNN & ~CSRM & CoSAN & \emph{MTD} \\
\hline
\multirow{3}{*}{Digi} & Pre@5 & 27.15 & 26.38 & 25.72 & \textbf{28.29}\\
& Pre@10 & 38.40 & 38.56 & 37.58 & \textbf{40.22}\\
& Pre@20 & 51.57 & 52.56 & 50.94 & \textbf{53.92}\\
\hline
\multirow{3}{*}{Reta} & Pre@5 & 37.38 & 38.65 & 37.07 & \textbf{39.64}\\
& Pre@10 & 44.88 & 47.21 & 45.83 & \textbf{47.93}\\
& Pre@20 & 52.27 & 55.04 & 54.87 & \textbf{55.95}\\
\hline
\end{tabular}
\vspace{-0.1in}
\caption{Evaluation results with different top-$K$ values.}
\label{tab:result_vary_k}
\vspace{-0.1in}
\end{table}
\subsection{Performance Validation (RQ1)}
We present evaluation results of all methods on different datasets in Table~\ref{tab:result_across}, and show the performance of several recent baselines when varying the value of top-$K$ in Table~\ref{tab:result_vary_k}. We can observe that \emph{MTD} consistently outperforms other baselines in most cases on different datasets, which justifies the effectiveness of our model in comprehensively capturing multi-level transition dynamics from intra-session and inter-session relations in a hierarchical manner.
The naive frequency-based (POP and S-POP) and similarity-based (ItemKNN) recommendation approaches perform much worse than the other baselines, due to their limitations in capturing the dynamic sequential patterns of item transitions. Additionally, the attention-based recommendation techniques (NARM and STAMP) outperform the pure RNN approach (GRURec), which considers only a single level of item sequential relations. However, the significant improvement of \emph{MTD} over the attentive recommendation models suggests that considering only the intra-session item transitions is insufficient to fully capture the complex item transition dynamics from both local and global perspectives. While SR-GNN tries to encode the long-term item dependencies using a graph neural network, it yields suboptimal results because of its failure to learn cross-session dependencies.
\subsection{Model Ablation and Effect Analyses (RQ2)}
We consider several model variants to investigate the efficacy of key modules in our learning framework of \emph{MTD}.\\\vspace{-0.1in}
\noindent \textbf{Effect of Hierarchical Attention Network}.
We design two contrast models: i) \emph{MTD}-va generates the session-level embeddings with the vanilla attention layer; ii) \emph{MTD}-at further incorporates the temporal factor into the \emph{MTD}-va method.\\\vspace{-0.1in}
\noindent \textbf{Effect of Cross-Session Dependency Encoder}. i) \emph{MTD}-lo only encodes the local-level item transition patterns without the cross-session dependency encoder; ii) \emph{MTD}-ga replaces our graph-structured hierarchical relation encoder with the graph attention network operated on all relevant sessions.
\begin{figure}[h]
\centering
\vspace{-0.1in}
\subfigure[][Diginetica]{
\centering
\includegraphics[width=0.13\textwidth]{fig/Diginetica_Pre.pdf}
\label{fig:beh_buy_hr}
}
\subfigure[][Yoochoose]{
\centering
\includegraphics[width=0.14\textwidth]{fig/Yoochoose_Pre.pdf}
\label{fig:beh_click_hr}
}
\subfigure[][Retailrocket]{
\centering
\includegraphics[width=0.14\textwidth]{fig/Retailrocket_Pre.pdf}
\label{fig:beh_click_ndcg}
}
\vspace{-0.1in}
\caption{Model ablation study of \emph{MTD}.}
\label{fig:model_ablation}
\vspace{-0.05in}
\end{figure}
We report the results in Figure~\ref{fig:model_ablation} and observe that \emph{MTD} outperforms all other variants on all datasets in terms of $Pre$@$K$ and $MRR$@$K$ under $K=20$, which justifies the effectiveness of each individual component in our \emph{MTD} framework. In particular: (1) The performance gap among \emph{MTD}-va, \emph{MTD}-at, and \emph{MTD}-lo shows the effectiveness of our position-aware hierarchical attention network in modeling the local item transitions. (2) Without the consideration of cross-session item dependencies, \emph{MTD}-lo performs worse than \emph{MTD}. This suggests the necessity of modeling the inter-session item correlations with our developed graph-structured framework. (3) While the graph attention network (\emph{MTD}-ga) can learn global-level item relations, it still falls behind \emph{MTD} since it does not capture the hierarchical informativeness across relevant sessions.
\subsection{Hyperparameter Study of MTD (RQ3)}
We further investigate the hyperparameter sensitivity of our \emph{MTD} (as shown in Figure~\ref{fig:hyperparam_study}) and summarize the following observations. To save space and integrate results on different datasets with different performance scales into one figure, the y-axis shows the performance degradation ratio relative to the best performance. \\\vspace{-0.1in}
\noindent (1) \textbf{Effect of Hidden Dimensionality $d$}. The performance saturates as the hidden dimensionality $d$ reaches around 100. A larger dimensionality $d$ brings a stronger representation ability at the early stage, but may lead to overfitting as $d$ continues to increase.\\\vspace{-0.1in}
\noindent (2) \textbf{Impact of Training Frequency $f$}. We study the training frequency by varying $f$ from 1 to 8, and observe that a large value of $f$ ($\geq 5$) degrades the performance by misleading the objective function optimization.\\\vspace{-0.1in}
\noindent (3) \textbf{Influence of Depth in Graph Neural Architecture}.
Stacking more graph convolution layers with the adjacent matrix-based aggregation will involve more redundant information of high-order connectivity, which hinders the learning process of global item relational structures in \emph{MTD}. This observation also suggests the rationality of our designed graph neural component in simplifying and powering the cross-session item dependency learning, via the exploration of mutual relations between low-level item embeddings and high-level graph representation.
\begin{figure}
\vspace{-0.1in}
\centering
\begin{adjustbox}{max width=1.0\linewidth}
\input{./fig/parameter}
\end{adjustbox}
\vspace{-0.20in}
\caption{Hyper-parameter study of \emph{MTD}.}
\vspace{-0.15in}
\label{fig:hyperparam_study}
\end{figure}
\subsection{Case Studies: Model Interpretation (RQ4)}
\noindent {\bf Hierarchical Relation Interpretation across Items.}
We visualize the hierarchical item relations with quantitative weights learned from our intra-session attention network on Diginetica. Figure~\ref{fig:case_study} (a) and Figure~\ref{fig:case_study} (b) show the encoded pairwise item correlations in modeling the intra-session sequential patterns of two sampled sessions across different time steps. From Figure~\ref{fig:case_study} (c), we can observe that different items contribute differently to summarize the session-specific main purchase with hidden representations.\\\vspace{-0.1in}
\noindent{\bf Visualizations of Learned Session Embeddings.}
We further visualize the projected session representations learned by our \emph{MTD} and two state-of-the-art methods, SR-GNN and STAMP (as shown in Figure~\ref{fig:case_study} (d)). We randomly sample 180 session instances and label each one with its corresponding next clicked item (ground truth). It is easy to see that embeddings of sessions with the same label (6 classes, each represented with the same color) cluster closely and can be better distinguished by \emph{MTD} compared to the other two methods. This demonstrates the effectiveness of our learned item transitional patterns with session embeddings.
\begin{figure}[h!]
\vspace{-0.10in}
\includegraphics[width=0.47\textwidth]{fig/case_study_20210409.pdf}
\vspace{-0.10in}
\caption{Case study of \emph{MTD} framework}
\label{fig:case_study}
\vspace{-0.15in}
\end{figure}
\subsection{Model Scalability Study (RQ5)}
\label{sec:efficiency}
Since efficiency is a key factor in many real-life recommendation applications, we finally investigate the computational cost (measured by the running time of an individual epoch) of our \emph{MTD} and other state-of-the-art recommendation models. The results on the different datasets are summarized in Table~\ref{tab:time}. From the evaluation results, we observe that \emph{MTD} runs faster than most competitive baselines built on different deep neural network architectures (\eg, attention mechanisms and graph-based message passing frameworks). In particular, SR-GNN incurs a high computational cost from the gating mechanisms of its neural network over each constructed session graph. Additionally, it is time-consuming to discover collaborative neighborhood sessions for each batch during the training phase of the CSRM method. In the occasional cases where \emph{MTD} misses the best running time (compared to STAMP, a streamlined algorithm using only an attention mechanism for transition aggregation), \emph{MTD} still achieves competitive efficiency. Overall, the proposed \emph{MTD} is efficient and scalable for large-scale session-based recommendation applications.
\begin{table}[h]
\vspace{-0.05in}
\centering
\footnotesize
\vspace{-0.1in}
\begin{tabular}{lccc}
\toprule
Models & Yoochoose & Diginetica & RetailRocket \\
\hline
NARM & 35 & 66 & 81\\
STAMP & 9 & 24 & 14\\
SASRec & 18 & 28 & 42\\
TiSA & 82 & 160 & 100\\
SR-GNN & 1401 & 2586 & 2502\\
CSRM & 530 & 556 & 228\\
\hline
\emph{MTD} & 24 & 40 & 53 \\
\hline
\end{tabular}
\caption{Computational time cost (seconds) investigation.}
\vspace{-0.1in}
\label{tab:time}
\end{table}
\section{Introduction}
\label{sec:intro}
Personalized recommendation has attracted a lot of attention in real-life applications, as a way to alleviate information overload on the web~\cite{xia2020multiplex}. Among various recommendation scenarios, session-based recommendation has become an important component of many online services (\eg, retailing and advertising platforms)~\cite{huang2004dynamic}, as it addresses the unavailability of user information in realistic settings (such as non-logged-in customers or users without historical interactions)~\cite{quadrana2017personalizing,ren2019repeatnet,yuan2020future}. Its core task is to predict the next interacted item based on a group of anonymous, temporally-ordered user behavior sequences (\eg, clicked, browsed or purchased item sequences)~\cite{liu2018stamp,wang2020global,wang2019collaborative}. To facilitate the study of session-based recommendation, many efforts have been devoted to developing various deep neural network models that explore correlations between the future interested item and past interacted ones, which contributes to smarter recommendations.
\begin{figure*}[t]
\includegraphics[width=0.99\textwidth]{fig/intro_scenario_new_4.pdf}
\vspace{-0.10in}
\caption{Illustrated example of session-based recommendation with multi-level transition dynamics.}
\label{fig:intro_example}
\vspace{-0.15in}
\end{figure*}
Existing session-based recommendation methods for understanding item transitional regularities can be grouped into several key paradigms. For example, one key research line aims to capture transitional patterns of the interacted item sequence with recurrent neural networks~\cite{hidasi2015session,hidasi2018recurrent}. Along this line, to aggregate sequential embeddings into a more summarized session-level representation, researchers have recently proposed to augment recurrent session-based recommendation frameworks with attention mechanisms~\cite{li2017neural}, or to rely on memory networks~\cite{liu2018stamp,wang2019collaborative}. Furthermore, another recommendation paradigm utilizes graph neural networks as the item transitional relation encoder, to model long-term item dependencies within the session based on a structured relation graph~\cite{wu2019session}.
Despite their effectiveness, we argue that these methods are not sufficient to yield satisfactory recommendation results, due to their failure in encoding complex item transition dynamics, which are multi-level in nature~\cite{song2019hierarchical}. Particularly, in practical session-based recommendation scenarios, there exist session-specific short-term and long-term item transitions, as well as long-range cross-session item dependencies in the global context~\cite{al2018dynamic}. These different inter-correlations among items constitute the underlying multi-level item transition dynamics. As illustrated in Figure~\ref{fig:intro_example}, while items $t_7$ and $t_3$ are not directly connected within the same session, there exists an implicit inter-dependency between them, due to the item transitional relationships $t_2 \rightarrow t_3$ and $t_7 \rightarrow t_2$ in sessions $B$ and $A$, respectively. In such cases, items from different sessions are no longer independent. The dependency signals between interactive items may come not only from the intra-session transition regularities, but also from inter-session item relations. However, to simplify the model design, most current session-based recommender systems only explore local contextual features, while the global item transitional patterns across exogenous sessions are neglected. This restricts the capabilities of current models in capturing the hierarchical transition signals for making recommendations.
While it is intuitively useful to perform the joint learning of item relation structures with multi-level transition dynamics, it is non-trivial to do it well. In particular, the item dependencies across different sessions can be complex. It is not necessarily the case that a future interactive item is more relevant to items from a recent session than to one that is further away~\cite{kang2018self}. Hence, when tackling the cross-session item dependencies at various neighbor distances, the high-order relation structures exhibited by item transition patterns, from a global perspective over all sessions, need to be investigated in the relation embedding function. Additionally, intra-session item transition patterns vary by session. When modeling the time-evolving item correlation within a session, both the user's sequential behavior (short-term) and the overall cross-session dependencies (long-term) should be taken into account~\cite{liu2018stamp,li2020time}. Therefore, it is a significant challenge to jointly integrate the intra-session item correlations and inter-session item transition patterns into the recommendation framework in a fully adaptive manner.\\\vspace{-0.12in}
\noindent \textbf{Present Work}. Motivated by the aforementioned challenges, we propose a new multi-task learning model with \underline{M}ulti-level \underline{T}ransition \underline{D}ynamics (MTD) for session-based recommendation. In our MTD\ framework, we first devise a position-aware attention mechanism to jointly capture the intra-session sequential item transitions and the session-specific main purchase, with the incorporation of position information. Specifically, we integrate a self-attention model with an attentive aggregation layer to capture the sequential transitional patterns of items within each individual session, without the rigid order assumption of user behavior (\textit{i}.\textit{e}., latent states propagated through temporally-ordered sequences in a recurrent framework). To augment the representation learning ability over individual sessions, an attentive summarization layer is introduced to adaptively perform pattern aggregation. In the hierarchical attentive component, we also explore the item positional information under a sequential encoding module to learn the influence of time factors. Additionally, inspired by the effectiveness of mutual information maximization in prioritizing global or local structural information in feature learning~\cite{hjelm2018learning}, we model the cross-session item dependencies in a hierarchical manner, \textit{i}.\textit{e}., from item-level embedding learning to global graph-level representation. The developed hierarchically structured encoder via graphical mutual information maximization endows MTD\ with the capability to incorporate inter-session transitional signals from low-level to high-level across different sessions. Source code is released at \emph{https://github.com/sessionRec/MTD}.
We highlight key contributions of this paper as follows:\vspace{-0.08in}
\begin{itemize}[leftmargin=*]
\item We exploit multi-level item transition dynamics in studying the session-based recommendation task. Towards this end, we propose a new recommendation framework which captures the item transition patterns in the form of the intra-session item dependencies, as well as the cross-session item relation structures.
\item We first develop a position-aware attentive mechanism to learn the evolving intra-session behavioral sequential signals and the summarized session-specific knowledge. Furthermore, a global context enhanced inter-session relation encoder is built upon the graph neural network paradigm, to endow MTD\ for capturing the inter-session item-wise dependencies.
\item Our extensive experiments on three real-world datasets demonstrate that MTD\ outperforms different types of baselines in yielding better recommendation results. Also, we show the efficiency of our developed model as compared to representative competitors and perform case studies with qualitative examples to investigate the interpretation capability of our MTD\ model.
\end{itemize}
\section{Related Work}
\label{sec:relate}
\noindent \textbf{Session-based Recommender Systems}. To model sequential patterns of user behaviors, many recommender systems have been proposed to predict future interactions based on users' historical observations~\cite{huang2019online}. In recent years, many session-based recommendation techniques have been developed based on various neural network architectures~\cite{qiu2020exploiting}. Particularly, one intuitive approach is to apply a recurrent neural network (\eg, GRU) to model the item sequential correlations~\cite{hidasi2015session}. Furthermore, attention mechanisms have been adopted for pattern aggregation through relation weight learning, such as NARM~\cite{li2017neural} and STAMP~\cite{liu2018stamp}. Different from the method~\cite{xujoint}, which relies on the random walk-based skip-gram model for capturing the dependency, we leverage graph neural networks to consider the global item dependency across different sessions. Another paradigm of session-based recommendation models lies in utilizing graph neural networks to capture the graph-structured item dependencies, such as attributed graph neural networks for streaming recommendation~\cite{qiu2020gag} and graph-based message passing architectures~\cite{wu2019session}. Different from the above work, our MTD\ framework aims to jointly capture the local and global item transitional signals in a hierarchical manner. \\\vspace{-0.1in}
\noindent \textbf{Graph Neural Networks for Recommendation.} Recently emerged graph neural networks shine a light on performing information propagation over the user-item graph for recommendation. Inspired by the graph convolution, several efforts have been devoted to capturing collaborative signals from graph-based interacted neighbors, such as LightGCN~\cite{he2020lightgcn} and PinSage~\cite{ying2018graph}. Additionally, graph neural networks have also been integrated into recommendation to aggregate external knowledge from the user side~\cite{aaaisocial} or item side~\cite{wang2019explainable}. In this work, we propose to capture cross-session item dependencies in a hierarchical manner upon a global context enhanced graph network.
\section{Methodology}
\label{sec:solution}
In this section, we present the technical details of our proposed recommendation framework MTD. We first formulate our studied session-based recommendation scenario as follows: session-based recommendation aims to predict the next action of users based on their anonymous historical activity sequences (\eg, clicks or purchases). Let $S = \{v_1,...,v_m,..., v_M\}$ denote the item candidate set, where $M$ is the number of items. An anonymous session $s$ is an item sequence $s=[v_{s,1},...,v_{s,i},...,v_{s,I}]$ in chronological order, where $v_{s,i} \in S$ denotes the $i$-th item the user interacted with in session $s$, and $I$ denotes the length of session $s$. The recommendation model outputs a list $Y=[y_1, y_2, ..., y_M]$ for each session $s$, where $y_m$ denotes the probability that the next interacted item is $v_m$. We finally make recommendations based on the top-$K$ ranked items in terms of their estimated probability values.
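The final ranking step of this formulation can be sketched in plain NumPy (the function name is hypothetical, not part of the released code; since softmax is monotonic, sorting the raw scores yields the same order as sorting the probabilities):

```python
import numpy as np

def recommend_top_k(scores, k):
    """Given estimated scores z_m over the M item candidates,
    return the indices of the top-K items by predicted probability."""
    probs = np.exp(scores - scores.max())  # numerically stable softmax
    probs /= probs.sum()
    return np.argsort(-probs)[:k]          # indices of the K largest y_m
```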
\subsection{Intra-Session Item Relation Learning}
To capture item transitional relationships within a session, we integrate two modules for learning the session-specific item transition patterns: (i) position-aware self-attention network for sequential transition modeling; (ii) attentive aggregation for session-specific knowledge representation.\vspace{-0.05in}
\subsubsection{\bf Self-Attentive Item Embedding Layer.} In the MTD\ framework, we leverage the self-attention mechanism to learn relevance scores over the historically interested items within the session and draw the sequential contextual signals. Motivated by attentive neural networks for relation learning~\cite{huang2019mist}, the self-attention mechanism has been used to tackle various sequence modeling tasks such as machine translation~\cite{yang2019convolutional} and user behavior modeling~\cite{kang2018self}. Different from the standard attention module, self-attention brings the benefit of capturing the relevance of past instances (\eg, words or behaviors) and refining the representation process on a single sequence at various distances~\cite{vaswani2017attention}. Following the transformer network, we build the intra-session transition modeling layer upon the dot-product attention which consists of query, key and value dimensions. The weight matrices $\textbf{W}_Q$, $\textbf{W}_K$, $\textbf{W}_V \in \mathbb{R}^{d\times d}$ correspond to the query, key and value vectors, respectively, mapping the initial item embeddings $\textbf{E}_s \in \mathbb{R}^{I\times d}$ of session $s$ into latent representations. The operations of the self-attention network are defined as follows:
\begin{align}
\begin{bmatrix}
\textbf{Q} \\ \textbf{K} \\ \textbf{V}
\end{bmatrix}
= \textbf{E}_s
\begin{bmatrix}
\textbf{W}_Q \\ \textbf{W}_K \\ \textbf{W}_V
\end{bmatrix};~~
\mathrm{Att}(\textbf{Q}, \textbf{K}, \textbf{V}) = \delta(\frac{\textbf{Q}\textbf{K}^T}{\sqrt{d}})\textbf{V}
\label{eq6}
\end{align}
\noindent where we define $\textbf{X}_s \in \mathbb{R}^{I\times d} = \mathrm{Att}(\textbf{Q}, \textbf{K}, \textbf{V})$ to represent the learned item embeddings with the modeling of pairwise relations between items $[v_{s,1},...,v_{s,i},..., v_{s,I}]$ in session $s$. $\delta(\cdot)$ denotes the softmax function and $\sqrt{d}$ is the scaling factor during the inner product operation.
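The operations above admit a minimal NumPy sketch (single-head, unbatched; `self_attention` is a hypothetical name, not the paper's TensorFlow code):

```python
import numpy as np

def self_attention(E_s, W_Q, W_K, W_V):
    """Scaled dot-product self-attention over one session:
    X_s = softmax(Q K^T / sqrt(d)) V, with Q = E_s W_Q, etc."""
    d = E_s.shape[1]
    Q, K, V = E_s @ W_Q, E_s @ W_K, E_s @ W_V
    logits = Q @ K.T / np.sqrt(d)                       # (I, I) pairwise relevance
    weights = np.exp(logits - logits.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)       # row-wise softmax
    return weights @ V                                  # (I, d) item embeddings X_s
```

When the query/key projections are zero, the attention weights are uniform and every output row reduces to the mean of the value rows, which is a quick sanity check on the softmax.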
We further enhance the self-attentive transition learning module with the modeling of non-linearities with the feed-forward network as shown below:
\begin{align}
\widetilde{\textbf{X}}_s = \mathrm{FFN}(\textbf{X}_s) = \varphi(\textbf{X}_s \cdot \textbf{W}_1 + \textbf{b}_1) \cdot \textbf{W}_2 + \textbf{b}_2
\end{align}
\noindent where $\varphi(\cdot)$=ReLU is the activation function, and $\textbf{W}_1$, $\textbf{W}_2 \in \mathbb{R}^{d\times d}$ and $\textbf{b}_1$, $\textbf{b}_2 \in \mathbb{R}^{d}$ are trainable weight matrices and bias terms. After integrating the self-attention layer with the feed-forward network, we generate the embeddings $\widetilde{\textbf{X}}_s \in \mathbb{R}^{I\times d}$ for all items $[v_{s,1},..., v_{s,I}]$ in each session.
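The feed-forward step admits an equally small sketch (hypothetical function name, plain NumPy rather than the TensorFlow implementation):

```python
import numpy as np

def feed_forward(X_s, W1, b1, W2, b2):
    """Point-wise feed-forward network: ReLU(X_s W1 + b1) W2 + b2."""
    return np.maximum(X_s @ W1 + b1, 0.0) @ W2 + b2
```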
\subsubsection{\bf Position-aware Item-wise Aggregation Module.} We further design a position-aware attentive aggregation component to fuse the encoded item-wise relations for capturing the user's main purpose within each individual session $s$. We assign larger importance to the item states that have more contextual relations with the future interested item. In particular, for the set of items in session $s$, we learn a set of weights $\{\alpha_1,...,\alpha_i,...,\alpha_I\}$ corresponding to the set of learned item embeddings $\widetilde{\textbf{X}}_s = \{ \textbf{x}_{s,1},..., \textbf{x}_{s,i},...,\textbf{x}_{s,I} \}$. Formally, $\alpha_i$ is calculated as follows:
\begin{align}
\alpha_i = \delta (\textbf{g}^T \cdot \sigma ( \textbf{W}_3 \cdot \textbf{x}_{s,I} + \textbf{W}_4 \cdot \textbf{x}_{s,i}) )
\end{align}
\noindent where $\textbf{g} \in \mathbb{R}^{d}$ is a linear projection vector for generating the weight scalar $\alpha_i$, and $\textbf{W}_3$, $\textbf{W}_4 \in \mathbb{R}^{d\times d}$ are transformation matrices. $\sigma(\cdot)$ and $\delta(\cdot)$ denote the sigmoid and softmax functions, respectively. The aggregated session representation $\textbf{x}_s^*$ is computed as $\textbf{x}_s^* = \sum_{i=1}^I \alpha_i \cdot \textbf{x}_{s,i}$.
We further augment the intra-session item-wise fusion module with the injection of positional information, so as to capture the session-specific temporally-ordered signals of items. The dimensionality of the positional representation is also set to $d$. Relative positions are modeled by incorporating a decay factor into the linear combination:
\begin{align}
\textbf{p}_s = \sum_{i=1}^I \omega_i \cdot \textbf{x}_{s,i};~~~\omega_i \propto \mathrm{exp}\big(-(|i-I| + 1)\big)
\end{align}
\noindent where $\textbf{p}_s$ denotes the fused representation with the preservation of relative positional information across different items. We construct a concatenated embedding for each session $s$ as $\textbf{q}_{s} = \textbf{W}_c [\textbf{x}_{s,I}, \textbf{x}_s^*, \textbf{p}_s]$, where $\textbf{W}_c \in \mathbb{R}^{d\times 3d}$ performs the transformation operation. After that, following the implicit feedback-based recommendation paradigm in~\cite{he2020lightgcn,wang2019neural}, we compute the inner product between $\textbf{q}_s$ and the embedding of each candidate item $\textbf{v}_m$ as $\textbf{z}_{m} = \textbf{q}_s^T \textbf{v}_m$, and define the loss function of intra-session item relation learning with the cross-entropy as follows:
\begin{align}
\mathcal{L}_{in} = -\sum_{n=1}^N \big[ \textbf{y}_n \log(\tilde{\textbf{y}}_n)+(1 - \textbf{y}_n)\log(1-\tilde{\textbf{y}}_n) \big]
\label{eq11}
\end{align}
\noindent where $\textbf{y}_n$ denotes the ground truth label of $n$-th instance and $\tilde{\textbf{y}}_n$ is the corresponding estimated result (\textit{i}.\textit{e}., $\tilde{\textbf{y}}_n = \delta(\textbf{z}_{n})$).
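A minimal NumPy sketch of the aggregation and scoring steps above; the exponential-decay position weights are one plausible reading of the decay factor described in the text, and all parameters are random placeholders rather than learned values.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def session_representation(X, Wc, W3, W4, g):
    """Fuse the item embeddings X (I x d) of one session into q_s:
    the last item's embedding, the attentive aggregate x_s*, and the
    position-weighted aggregate p_s, concatenated and projected by Wc."""
    I, d = X.shape
    x_last = X[-1]
    scores = np.array([g @ sigmoid(W3 @ x_last + W4 @ x) for x in X])
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                      # softmax over the I items
    x_star = alpha @ X
    # position weights decaying with the distance |i - I| to the last item
    omega = np.exp(-(np.abs(np.arange(1, I + 1) - I) + 1.0))
    omega /= omega.sum()
    p = omega @ X
    return Wc @ np.concatenate([x_last, x_star, p])   # q_s in R^d

rng = np.random.default_rng(1)
I, d = 4, 6
X = rng.normal(size=(I, d))
q = session_representation(X,
                           rng.normal(size=(d, 3 * d)),   # W_c
                           rng.normal(size=(d, d)),       # W_3
                           rng.normal(size=(d, d)),       # W_4
                           rng.normal(size=d))            # g
v = rng.normal(size=d)        # embedding of one candidate item
z = q @ v                     # unnormalized matching score z_m
print(q.shape)                # (6,)
```

The score `z` would then be passed through the sigmoid and plugged into the cross-entropy loss above.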
\begin{figure*}
\centering
\includegraphics[width=0.96\textwidth]{fig/fra_20210401.pdf}
\vspace{-0.2in}
\caption{Global transition dynamics modeling}
\vspace{-0.15in}
\label{fig:framework}
\end{figure*}
\subsection{Global Transition Dynamics Modeling}
To comprehensively capture the global cross-session transition dynamics among items, we develop a graph neural network architecture (as illustrated in Figure~\ref{fig:framework}) to inject high-order dependency signals across different sessions into the session representations. In particular, we first formulate a cross-session item graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$ whose nodes $\mathcal{V}$ and edges $\mathcal{E}$ are generated from the historical sessions. Each session $s$ can be regarded as a path which starts at $v_{s,1}$ and ends at $v_{s,I}$ in graph $\mathcal{G}$. The adjacency matrix $\mathcal{A}$ is constructed such that each entry $a_{m,m'}=1$ if there exists a transition from item $v_{m}$ to $v_{m'}$, and $a_{m,m'}=0$ otherwise.
We first propose a graph-structured message passing architecture to model the local context of transitional signals between different items. We formally define the corresponding encoding function as follows:
\begin{align}
\label{eq:patch_embed}
\textbf{H}^{(l+1)} = \varphi(\textbf{A}, \textbf{H}^l \textbf{W}^l)=\varphi(\hat{\textbf{D}}^{-\frac{1}{2}} \hat{\textbf{A}} \hat{\textbf{D}}^{-\frac{1}{2}} \textbf{H}^l \textbf{W}^l)
\end{align}
\noindent where $\textbf{H}^{(l+1)} \in \mathbb{R}^{M\times d}$ denotes the learned item representations after the $l$-th propagation layer. With the aim of incorporating self-propagated signals, we update the adjacency matrix as the sum of the identity matrix $\textbf{I}$ and the original adjacency matrix $\textbf{A}$, i.e., $\hat{\textbf{A}}=\textbf{A}+\textbf{I}$. Then, we apply the symmetric normalization strategy to conduct the information aggregation as $\hat{\textbf{D}}^{-\frac{1}{2}} \hat{\textbf{A}} \hat{\textbf{D}}^{-\frac{1}{2}}$, where $\hat{\textbf{D}}$ is the diagonal node degree matrix of $\hat{\textbf{A}}$.\vspace{-0.05in}
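The propagation rule above is the standard symmetrically normalized graph convolution. A sketch of one layer, with ReLU as the activation $\varphi$ and with the directed adjacency symmetrized purely for illustration (the normalization needs a well-defined degree on both sides):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One propagation step: ReLU(D^{-1/2} (A + I) D^{-1/2} H W),
    i.e. the symmetrically normalized aggregation from the text."""
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)

# toy cross-session transitions: v0 -> v1, v1 -> v2, v0 -> v2
A = np.array([[0., 1., 1.],
              [0., 0., 1.],
              [0., 0., 0.]])
A = A + A.T        # symmetrized here so the normalization is well-defined
rng = np.random.default_rng(2)
H0 = rng.normal(size=(3, 4))
H1 = gcn_layer(A, H0, rng.normal(size=(4, 4)))
print(H1.shape)    # (3, 4)
```

Stacking several such layers yields the multi-hop item representations $\textbf{H}$ used below.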
\subsubsection{Global Dependency Representation.} After obtaining $\textbf{H}=\{\textbf{h}_{1},...,\textbf{h}_{m},...\textbf{h}_{M}\}$, we propose to capture the high-order global dependencies across correlated items from different sessions. Specifically, we first generate a fused graph-level embedding with the aggregation function $\textbf{z} = \tau(\textbf{H})$ ($\mathbb{R}^{M\times d} \rightarrow \mathbb{R}^d$), where $\tau(\cdot)$ denotes the mean pooling operation. Motivated by the paradigm of global feature representation with mutual information~\cite{velivckovic2018deep,velickovic2019deep}, we enhance our cross-session item relation encoder with the global context of the mutual information between the local-level embeddings ($\textbf{H}$) and the graph-level representation $\textbf{z}$.
We develop a classifier to perform the global dependency representation under the mutual information learning paradigm. It aims to differentiate positive instances ($\textbf{h}_m, \textbf{z}$) from negative instances ($\widetilde{\textbf{h}}_m, \textbf{z}$) in graph $\mathcal{G}$, thereby preserving the underlying cross-session item transition dynamics. The negative sample pairs $(\widetilde{\textbf{h}}_m, \textbf{z})$ are generated by associating sampled item nodes with fake embeddings based on the node shuffling strategy~\cite{velickovic2019deep}. Then, both the positive and negative instances are fed into the classifier, with encoding function $\xi(\cdot)$:
\begin{align}
\xi(\textbf{h}_{m}, \textbf{z}) = \sigma(\textbf{h}_{m}^T \cdot \textbf{W}_g \cdot \textbf{z});~~~\xi: \mathbb{R}^d \times \mathbb{R}^d \rightarrow \mathbb{R}
\end{align}
\noindent where $\textbf{W}_g\in \mathbb{R}^{d\times d}$ is the projection matrix. The classifier outputs the probability that the target node belongs to $\mathcal{G}$, given the corresponding embedding pair $(\textbf{h}_{m}, \textbf{z})$. The loss function of our graph-level global dependency representation component is defined as follows:
\begin{align}
\label{eq:dgi_loss}
\mathcal{L}_{co} &= - \frac{1}{N_{pos}+N_{neg}} \Big ( \sum_{m=1}^{N_{pos}} \rho(\textbf{h}_{m}, \textbf{z}) \cdot \log \xi(\textbf{h}_{m}, \textbf{z}) \nonumber\\
&+ \sum_{m=1}^{N_{neg}} \rho(\widetilde{\textbf{h}}_{m}, \textbf{z}) \cdot \log [1-\xi(\widetilde{\textbf{h}}_{m}, \textbf{z})] \Big )
\end{align}
\noindent where $\rho(\cdot)$ is an indicator function: $\rho(\textbf{h}_{m}, \textbf{z})=1$ and $\rho(\widetilde{\textbf{h}}_{m}, \textbf{z})=1$ mark positive and negative instance pairs, respectively. We denote the numbers of positive and negative samples by $N_{pos}$ and $N_{neg}$. By minimizing $\mathcal{L}_{co}$ (i.e., maximizing the mutual information between local-level and graph-level representations), we generate the enhanced representations $\textbf{H}^* \in \mathbb{R}^{M\times d}$, which encode cross-session item transitional patterns from the local (low) level to the global (high) level.
\subsection{Model Inference}
Based on the multi-task learning framework of MTD, we define our loss function with the integration of both intra- and inter-session transition dynamics as follows:
\begin{equation}
\mathcal{L} = \mathcal{L}_{co} + \lambda_1 \mathcal{L}_{in} + \lambda_2\Vert{{\Theta}}\Vert_2^2
\label{eq12}
\end{equation}
where ${\Theta}$ denotes the learnable parameters. $\lambda_1$ and $\lambda_2$ balance the losses of the two modules and prevent over-fitting, respectively. Since the inputs of the cross-session relation encoder and the attention network differ, we employ mini-batch Adam to optimize $\mathcal{L}_{in}$ and $\mathcal{L}_{co}$ alternately. We further introduce a parameter $f$ denoting the training frequency of the $\mathcal{L}_{in}$ optimization, for loss balance. In each epoch, we first optimize the graph-structured relation encoder and initialize the item representations with the current embeddings. Note that the local representations $\textbf{H}$, which are generated by the graph neural network, carry the global transition signals among items. To inject these global signals into the recommendation module, we update the item embedding table with $\textbf{H}$ after the optimization step of $\mathcal{L}_{co}$.
\noindent \textbf{Complexity Analysis of MTD\ Framework.}
The intra-session item relation learning requires $O(I\times d^2+I^2\times d)$ operations to compute $\textbf{Q}, \textbf{K}, \textbf{V}$ and the attentive embeddings $\textbf{X}_s$ in the self-attention layer. The rest of the intra-session learning is dominated by transformations in the $d$-dimensional hidden space (\eg, the two-layer feed-forward network), which cost $O(I\times d^2)$, resulting in $O(L_1\times I\times d^2+I^2\times d)$ overall complexity, where $L_1$ denotes the number of $d\times d$ transformations. Furthermore, the graph-based inter-session item transition modeling component requires $O(|\textbf{A}|\times d + M\times d^2)$ operations for message passing and embedding transformation, where $|\textbf{A}|$ denotes the number of neighboring item pairs.
The logic and database theory literature has considered various kinds of
representations of finite labeled trees as logical structures. In
particular, trees are either ranked or unranked (i.e., the number of
children of each node is bounded by a constant, or unbounded);
the children of each node are either ordered or unordered; and
the descendant relation (i.e., the transitive closure of the child
relation) may or may not be available;
for overviews see \cite{DBLP:journals/lmcs/Libkin06,
DBLP:conf/csl/Neven02,
WThomas-Handbook-survey,
tata2008}.
Considering \emph{ordered} unranked labeled trees,
Gottlob and Koch \cite{GottlobKoch} showed that monadic datalog,
viewed as a language for
defining Boolean or unary queries on such trees, is
exactly as expressive as monadic second-order logic. For achieving
this result, they represent a tree as a logical structure where the
nodes of the tree form the structure's universe, on which there are
available the firstchild relation, the nextsibling relation, and
unary relations for representing the root, the leaves, the last
siblings, and the labels of the nodes.
Other papers, e.g.\
\cite{DBLP:journals/tcs/NevenS02,DBLP:journals/jacm/BojanczykMSS09},
consider representations of trees where the child relation and
its transitive closure, the descendant relation, are also available.
For \emph{unordered} unranked labeled trees, one usually
considers logical representations consisting only of the child relation, and
possibly also
the descendant relation, along with unary relations for encoding the
node labels, cf.\ e.g.\ \cite{AbiteboulBMW13,Kepser2008,DBLP:journals/jcss/BjorklundMS11}.
Recently, Abiteboul \emph{et al.} \cite{AbiteboulBMW13} considered recursive query languages on
unordered trees and data trees, among them datalog and monadic
datalog. In particular, they asked for the decidability of the query
containment problem for monadic datalog on unordered labeled trees
represented using the child relation and the descendant relation.
The present paper gives an affirmative answer to this question, as
well as an overview of the expressive power of monadic datalog on
various representations of trees as logical structures.
The paper is organised as follows.
Section~\ref{section:Preliminaries} fixes the basic notation concerning
unordered as well as ordered trees, and their representations as
logical structures. Furthermore, it recalls the syntax and semantics,
along with basic properties, of monadic datalog and monadic
second-order logic.
Section~\ref{section:ExpressivePower} gives details on the expressive
power of monadic datalog on various kinds of tree representations.
Section~\ref{section:QCPofMonadicDatalog} shows that query
containment, equivalence, and satisfiability of monadic
datalog queries are decidable on all considered tree representations.
\section{Preliminaries}\label{section:Preliminaries}
We write $\NN$ for the set of non-negative integers, and we let
$\NNpos \deff \NN\setminus\set{0}$.
For a set $S$ we write $2^S$ to denote the power set of $S$, i.e., the
set $\setc{X}{X\subseteq S}$.
\\
Throughout this paper, we let $\Sigma$ be a fixed finite non-empty
alphabet.
\subsection{Relational Structures}\label{subsection:structures}
In this paper, a \emph{schema} (or, \emph{signature}) $\tau$ consists of a
finite number of relation symbols $R$, each of a fixed
\emph{arity} $\ar(R)\in\NNpos$.
A \emph{$\tau$-structure} $\A$ consists of a \emph{finite} non-empty
set $A$ called the \emph{domain} (or, \emph{universe}) of $\A$, and a relation
$R^{\A}\subseteq A^{\ar(R)}$ for each relation symbol $R\in\tau$.
Sometimes, it will be convenient to
identify $\A$ with the \emph{set of atomic facts of $\A$},
i.e., the set
\begin{eqnarray*}
\atoms(\A)
& \deff
& \setc{\ R(a_1,\ldots,a_r)\ }{\ \ R\in\tau,\ \ r=\ar(R),\ \
(a_1,\ldots,a_r)\in R^\A \ }.
\end{eqnarray*}
If $\tau$ and $\tau'$ are schemas such that $\tau\subseteq \tau'$,
and $\A$ is a $\tau$-structure and $\B$ a $\tau'$-structure, then
$\A$ is the \emph{$\tau$-reduct} of $\B$
(and $\B$ is a \emph{$\tau'$-expansion} of $\A$), if $\A$ and $\B$
have the same domain and $R^\A=R^\B$ is true for all $R\in\tau$.
\subsection{Unordered Trees}\label{subsection:UnorderedTrees}
An \emph{unordered $\Sigma$-labeled tree} $T=(V^T,\lambda^T,E^T)$
consists of a finite set $V^T$ of nodes, a function
$\lambda^T: V^T\to \Sigma$ assigning to each node $v$ of $T$ a label
$\lambda(v)\in\Sigma$, and a set $E^T\subseteq V^T\times V^T$ of directed edges such
that the following is true:
\begin{mi}
\item
There is exactly one node $\rroot^T\in V^T$ with in-degree 0. This node is
called the \emph{root} of $T$.
\item
Every node $v\in V^T$ with $v\neq \rroot^T$ has in-degree 1, and
there is exactly one directed path from $\rroot^T$ to $v$.
\end{mi}
\noindent
As in \cite{AbiteboulBMW13}, we represent unordered
$\Sigma$-labeled trees $T$ by relational structures $\S_u(T)$ of schema
\begin{eqnarray*}
\tau_u & \deff &
\setc{\,\Label_\alpha}{\alpha\in\Sigma\,}
\; \cup \;
\set{\,\Child\,},
\end{eqnarray*}
where $\Child$ has arity~2 and $\Label_\alpha$ has arity~1 (for every $\alpha\in\Sigma$), as follows:
\begin{mi}
\item
The domain of $\S_u(T)$ is the set $V^T$ of all nodes of $T$,
\item
for each label $\alpha\in \Sigma$,
$\Label_\alpha^{\S_u(T)}$ consists of all nodes labeled $\alpha$, i.e.
$\Label_\alpha^{\S_u(T)} = \setc{v\in V^T}{\lambda^T(v)=\alpha}$,
and
\item
$\Child^{\S_u(T)} = E^T$.
\end{mi}
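To make the encoding concrete (this sketch is our illustration, not part of the paper), the passage from a labeled tree to the atomic facts of $\S_u(T)$ can be programmed directly; we also compute the descendant relation, i.e., the transitive closure of the edge set, which the extended schemas below add to the representation.

```python
def tree_atoms(labels, edges):
    """Atomic facts of S_u(T): Label_a(v) for each node and Child(u,v)
    for each edge, plus Desc as the transitive (non-reflexive)
    closure of Child."""
    atoms = {('Label_' + a, (v,)) for v, a in labels.items()}
    atoms |= {('Child', e) for e in edges}
    desc = set(edges)
    while True:                                 # naive transitive closure
        new = {(u, w)
               for (u, v1) in desc for (v2, w) in desc if v1 == v2}
        if new <= desc:
            break
        desc |= new
    return atoms | {('Desc', e) for e in desc}

# the tree of Figure 1, with node v_i encoded as the integer i
labels = {0: 'Black', 1: 'Black', 2: 'White', 3: 'Black', 4: 'White',
          5: 'Black', 6: 'White', 7: 'Black', 8: 'Black'}
edges = {(0, 1), (0, 2), (0, 3), (0, 4), (0, 5), (2, 6), (2, 7), (4, 8)}
atoms = tree_atoms(labels, edges)
print(('Desc', (0, 8)) in atoms)   # True: v8 is a descendant of the root
```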
\begin{figure}[!h]
\begin{center}
\begin{tikzpicture}[->,>=stealth']
\tikzset{
treenode/.style = {align=center, inner sep=0pt, text centered,
font=\sffamily},
arn_red/.style = {treenode, circle, white, font=\sffamily\bfseries, draw=black,
fill=black, text width=1.5em},
arn_blue/.style = {treenode, circle, black, draw=black,
text width=1.5em, very thick}
}
\node [arn_red] {$v_0$}
child {node [arn_red] {$v_1$}}
child {node [arn_blue] {$v_2$}
child {node [arn_blue] {$v_6$}}
child {node [arn_red] {$v_7$}}
}
child {node [arn_red] {$v_3$}}
child {node [arn_blue] {$v_4$}
child {node [arn_red] {$v_8$}}}
child {node [arn_red] {$v_5$}}
;
\end{tikzpicture}
\caption{An example tree $T$ labeled by symbols from $\Sigma=\{\textit{Black},\textit{White}\}$.} \label{Pic:tree}
\end{center}
\end{figure}
\begin{Example} \label{Exa:tree-unordered}
Let $T$ be the unordered\footnote{Note that an unordered tree does not contain any information on the
relative order of the children of a node. Thus, the arrangement of
children given in the picture is only one of many possibilities to draw the tree.}
$\Sigma$-labeled tree from
Figure~\ref{Pic:tree}, for $\Sigma=\{\textit{Black},\textit{White}\}$.
The $\tau_u$-structure $\A= \S_u(T)$ representing $T$ has domain
\begin{eqnarray*}
A & = & \set{v_0,v_1,v_2,v_3,v_4,v_5,v_6,v_7,v_8}
\end{eqnarray*}
and relations
\begin{mi}
\item $\Label_{\textit{Black}}^\A = \set{v_0,v_1,v_3,v_5,v_7,v_8}$,
\item $\Label_{\textit{White}}^\A = \set{v_2,v_4,v_6}$,
\item $\Child^{\A} = \left\{
\begin{array}{l}
(v_0,v_1), (v_0,v_2),(v_0,v_3),(v_0,v_4),(v_0,v_5), \\
(v_2,v_6),(v_2,v_7),(v_4,v_8)
\end{array} \right\}.$
\end{mi}
The set of atomic facts of $\A$ is the set $\atoms(\A)=$
\[
\left\{
\begin{array}{l}
\Label_{\textit{Black}}(v_0), \
\Label_{\textit{Black}}(v_1), \
\Label_{\textit{Black}}(v_3), \
\Label_{\textit{Black}}(v_5), \\
\Label_{\textit{Black}}(v_7), \
\Label_{\textit{Black}}(v_8), \
\Label_{\textit{White}}(v_2), \
\Label_{\textit{White}}(v_4), \\
\Label_{\textit{White}}(v_6), \
\Child(v_0,v_1), \
\Child(v_0,v_2), \
\Child(v_0,v_3), \\
\Child(v_0,v_4), \
\Child(v_0,v_5), \
\Child(v_2,v_6), \
\Child(v_2,v_7), \
\Child(v_4,v_8)
\end{array}
\right\}.
\]
\markEnd
\end{Example}
\medskip
\noindent
Sometimes, we will also consider the extended schema
\begin{eqnarray*}
  \tau'_u & \deff & \tau_u \ \cup \ \set{\,\Desc, \ \Is, \ \Root, \ \Leaf \,},
\end{eqnarray*}
where $\Desc$ and $\Is$ are of arity 2, and $\Root$ and $\Leaf$ are
of arity 1.
\\
The $\tau'_u$-representation $\S'_u(T)$ of an unordered
$\Sigma$-labeled tree $T$ is the expansion of $\S_u(T)$ by the
relations
\begin{mi}
\item
$\Desc^{\S'_u(T)}$, which is the transitive (and non-reflexive) closure
of $E^T$,
\item
$\Is^{\S'_u(T)}$, which consists of all tuples $(u,v)$ of distinct nodes
  $u\neq v$ that have the same parent (i.e.,
  there is a $w\in V^T$ such that $(w,u)\in E^T$ and
  $(w,v)\in E^T$),
\item
$\Root^{\S'_u(T)}$ consists of the root node $\rroot^T$ of $T$,
\item
$\Leaf^{\S'_u(T)}$ consists of all leaves of $T$, i.e., all $v\in
V^T$ that have out-degree~0 w.r.t.\
$E^T$.
\end{mi}
\noindent
For a set $M\subseteq\set{\Desc,\Is,\Root,\Leaf}$ we let
\begin{eqnarray*}
\tau_u^M & \deff &
\tau_u \cup M,
\end{eqnarray*}
and for every $\Sigma$-labeled unordered tree $T$ we let
$\S^M_u(T)$ be the $\tau_u^M$-reduct of $\S'_u(T)$.
If $M$ is a singleton set, we omit the curly brackets --- in particular, we
write $\tau_u^{\Desc}$ instead of
$\tau_u^{\set{\Desc}}$, and $\S_u^{\Desc}(T)$ instead of $\S_u^{\set{\Desc}}(T)$.
\subsection{Ordered Trees}\label{subsection:OrderedTrees}
An \emph{ordered} $\Sigma$-labeled tree $T=(V^T,\lambda^T,E^T,\textit{order}^T)$
consists of the same components as an unordered $\Sigma$-labeled tree
and, in addition, $\textit{order}^T$ fixes, for each node $u$ of $T$,
a strict linear order of all the children\footnote{i.e., the
nodes $v$ such that $(u,v)\in E^T$} of $u$ in $T$.
We represent ordered $\Sigma$-labeled trees $T$ by relational
structures $\S_o(T)$ of schema
\begin{eqnarray*}
\tau_o & \deff &
\setc{\,\Label_\alpha}{\alpha\in \Sigma\,} \ \cup \ \set{\,\Fc,\Ns\,},
\end{eqnarray*}
where $\Fc$ and $\Ns$ have arity 2 and $\Label_\alpha$ has arity 1
(for every $\alpha\in\Sigma$) as follows:
\begin{mi}
\item The domain of $\S_o(T)$ is the set $V^T$ of all nodes of $T$,
\item for each $\alpha\in\Sigma$, the relation $\Label_\alpha^{\S_o(T)}$
is defined in the same way as for unordered trees,
\item $\Fc^{\S_o(T)}$ consists of all tuples $(u,v)$ of nodes such
      that $v$ is the first child of $u$ in $T$ (i.e.,
      $\textit{order}^T$ lists $v$ as the first child of $u$),
\item $\Ns^{\S_o(T)}$ consists of all tuples $(v,v')$ of nodes such
that $v$ and $v'$ have the same parent, i.e., there is an
$u\in V^T$ such that $(u,v)\in E^T$ and $(u,v')\in E^T$, and
$v'$ is the immediate successor of $v$ in the linear order of the
children of $u$ given by $\textit{order}^T$.
\end{mi}
Often, we will also consider the extended schema
\begin{eqnarray*}
\tau'_o & \deff &
\tau_o \ \cup \ \set{\,\Child,\ \Desc,\ \Root,\ \Leaf,\ \Ls \,},
\end{eqnarray*}
where $\Child$ and $\Desc$ have arity 2 and $\Root$, $\Leaf$, $\Ls$ have arity 1.
The $\tau'_o$-representation $\S'_o(T)$ of an ordered $\Sigma$-labeled
tree $T$ is the expansion of $\S_o(T)$ by the relations
\begin{mi}
\item
$\Child^{\S'_o(T)}$, $\Desc^{\S'_o(T)}$, $\Root^{\S'_o(T)}$, and $\Leaf^{\S'_o(T)}$, which are
defined in the same way as for unordered trees, and
\item $\Ls^{\S'_o(T)}$, which consists of all nodes $v\neq \rroot^T$ such
that $\textit{order}^T$ lists $v$ as the last child of its parent $u$.
\end{mi}
For a set $M\subseteq\set{\Child,\ \Desc,\ \Root,\ \Leaf,\ \Ls}$ we let
\begin{eqnarray*}
\tau_o^M & \deff & \tau_o\cup M,
\end{eqnarray*}
and for every $\Sigma$-labeled ordered tree $T$ we let $\S_o^M(T)$ be
the $\tau_o^M$-reduct of $\S'_o(T)$. If $M$ is a singleton set, we
omit curly brackets.
Note that in \cite{GottlobKoch}, Gottlob and Koch represented ordered $\Sigma$-labeled trees
$T$ by relational structures $\SGK(T)\deff \S_o^{\set{\Root,\Leaf,\Ls}}(T)$ of schema
\[
\begin{array}{c}
\tauGK \ \ \deff \ \ \tau_o^{\set{\Root,\Leaf,\Ls}} \ \ = \ \
\tau'_o\setminus\set{\Child,\,\Desc} \ \ = \ \
\\[2ex]
\setc{\,\Label_\alpha}{\alpha\in \Sigma\,} \ \cup \
\{\, \Fc,\ \Ns,\ \Root,\ \Leaf,\ \Ls \,\}.
\end{array}
\]
\medskip
\begin{Example}\label{Exa:tree-ordered}
Let $T$ be the \emph{ordered} $\Sigma$-labeled tree from
Figure~\ref{Pic:tree}, for $\Sigma=\{Black,White\}$, where the order
of the children of each node is from left to right, as depicted in the illustration.
The $\tauGK$-structure $\B=\SGK(T)$ representing $T$ has
domain
\begin{eqnarray*}
B & = & \set{v_0,v_1,v_2,v_3,v_4,v_5,v_6,v_7,v_8}
\end{eqnarray*}
and relations
\begin{mi}
\item $\Label_{\textit{Black}}^\B = \set{v_0,v_1,v_3,v_5,v_7,v_8}$,
\item $\Label_{\textit{White}}^\B = \set{v_2,v_4,v_6}$,
\item $\Root^\B = \set{v_0}$,
\item $\Leaf^\B = \set{v_1,v_3,v_5,v_6,v_7,v_8}$,
\item $\Fc^\B = \set{\,(v_0,v_1), \ (v_2,v_6),\ (v_4,v_8)\,}$,
\item $\Ns^\B = \set{\,(v_1,v_2),\ (v_2,v_3),\ (v_3,v_4),\ (v_4,v_5),\ (v_6,v_7)\,}$,
\item $\Ls^\B = \set{v_5,v_7,v_8}$.
\end{mi}
Note that the root node of $T$ is not included in any sibling relation.
\markEnd
\end{Example}
\subsection{Monadic Datalog ($\mDatalog$)}\label{subsection:datalog}
The following definition of monadic datalog ($\mDatalog$, for short)
is basically taken from \cite{GottlobKoch}.
A \textit{datalog rule} is an expression of the form
\[
h \leftarrow b_1 ,\ldots, b_n ,
\]
for $n\in\NN$,
where $h, b_1 , \ldots, b_n$ are called \emph{atoms} of the rule, $h$
is called the rule's \emph{head}, and $b_1,\ldots,b_n$ (understood as a
conjunction of atoms) is called the \emph{body}.
Each atom is of the form $P(x_1,\ldots,x_m)$ where $P$ is a predicate
of some arity $m\in\NNpos$ and $x_1,\ldots,x_m$ are variables.
Rules are required to be \emph{safe} in the sense that all variables
appearing in the head also have to appear in the body.
A \textit{datalog program} is a finite set of datalog rules.
Let $\PP$ be a datalog program and let $r$ be a datalog rule.
We write
$\ensuremath{\textit{var}}(r)$ for the set of all variables occurring in the rule $r$, and
we let $\ensuremath{\textit{var}}(\PP):= \bigcup_{r \in \PP} \ensuremath{\textit{var}}(r)$.
Predicates that occur in the head of
some rule of $\PP$ are called \emph{intensional}, whereas predicates that only
occur in the body of rules of $\PP$ are called \emph{extensional}.
We write $\idb(\PP)$ and
$\edb(\PP)$ to denote the sets of intensional and extensional predicates
of $\PP$, and we say that $\PP$ \emph{is of schema $\tau$}
if $\edb(\PP)\subseteq \tau$.
A datalog program belongs to \emph{monadic datalog} ($\mDatalog$, for
short), if all its \emph{intensional} predicates have arity~1.
For defining the semantics of datalog, let $\tau$ be a schema,
let $\PP$ be a datalog program of schema $\tau$, let
$A$ be a domain, and let
\begin{eqnarray*}
F_{\PP,A} & \deff &
\setc{\ R(a_1,\ldots,a_r)}{R\in \tau\cup\idb(\PP),\ r=\ar(R),\ a_1,\ldots,a_r\in A
\ }
\end{eqnarray*}
be the set of all \emph{atomic facts over $A$}.
A \textit{valuation $\beta$ for $\PP$ in $A$} is a function $\beta: \big(\ensuremath{\textit{var}}(\PP)
\cup A\big) \to A$ with $\beta(a)=a$ for all $a\in A$.
For an atom $P(x_1,\ldots,x_m)$ occurring in a rule of $\PP$ we
let
\begin{eqnarray*}
\beta\big(P(x_1,\ldots,x_m)\big)
& \deff &
P\big(\beta(x_1),\ldots,\beta(x_m)\big) \ \ \in \ F_{\PP,A}.
\end{eqnarray*}
The \emph{immediate consequence operator} $\ensuremath{\mathcal{T}}_{\PP}:2^{F_{\PP,A}}\to
2^{F_{\PP,A}}$ induced by the datalog program $\PP$ on domain $A$ maps
every $C\subseteq F_{\PP,A}$ to
\begin{eqnarray*}
\ensuremath{\mathcal{T}}_{\PP}(C) & \deff &
C \ \cup \
\left\{\
\beta(h) \ :
\begin{array}{p{6cm}}
there is
a rule $h \leftarrow b_1,\ldots,b_n$ in $\PP$ and
a valuation $\beta$ for $\PP$ in $A$ such that
$\beta(b_1),\ldots,\beta(b_n)\in C$
\end{array}
\right\}.
\end{eqnarray*}
Clearly, $\ensuremath{\mathcal{T}}_{\PP}$ is \emph{monotone}, i.e., for $C\subseteq D\subseteq
F_{\PP,A}$ we have $\ensuremath{\mathcal{T}}_{\PP}(C)\subseteq \ensuremath{\mathcal{T}}_{\PP}(D)$.
Letting $\ensuremath{\mathcal{T}}_{\PP}^0(C)\deff C$ and $\ensuremath{\mathcal{T}}_{\PP}^{i+1}(C)\deff
\ensuremath{\mathcal{T}}_{\PP}\big(\ensuremath{\mathcal{T}}_{\PP}^i(C)\big)$ for all $i\in\NN$, it is straightforward to
see that
\[
C = \ensuremath{\mathcal{T}}_{\PP}^0(C) \subseteq \ensuremath{\mathcal{T}}_{\PP}^1(C) \subseteq \cdots \subseteq
\ensuremath{\mathcal{T}}_{\PP}^{i}(C) \subseteq \ensuremath{\mathcal{T}}_{\PP}^{i+1}(C) \subseteq \cdots \subseteq F_{\PP,A}.
\]
For a finite domain $A$, the set $F_{\PP,A}$ is finite, and hence
there is an $i_0\in\NN$ such that $\ensuremath{\mathcal{T}}_{\PP}^{i_0}(C)=\ensuremath{\mathcal{T}}_{\PP}^i(C)$ for all
$i\geq i_0$. In particular, the set $\ensuremath{\mathcal{T}}_{\PP}^\omega(C)\deff\ensuremath{\mathcal{T}}_{\PP}^{i_0}(C)$ is a
\emph{fixpoint} of the immediate consequence operator $\ensuremath{\mathcal{T}}_{\PP}$.
By the theorem of Knaster and Tarski we know that this fixpoint is the
\emph{smallest} fixpoint of $\ensuremath{\mathcal{T}}_{\PP}$ which contains $C$.
\begin{Theorem}[Knaster and Tarski \cite{Tar}]\label{thm:KnasterTarski}
Let $\tau$ be a schema, let $\PP$ be a datalog program of schema $\tau$,
and let $A$ be a finite domain.
For every $C\subseteq F_{\PP,A}$ we have
\begin{eqnarray*}
\ensuremath{\mathcal{T}}_{\PP}^{\omega}(C)
& =
& \bigcap \; \setc{\,D}{\,\ensuremath{\mathcal{T}}_{\PP}(D) = D \text{ \,and \,} C
\subseteq D \subseteq F_{\PP,A}\,} \\
& =
& \bigcap \; \setc{\,D}{\,\ensuremath{\mathcal{T}}_{\PP}(D) \subseteq D \text{ \,and \,} C
\subseteq D\subseteq F_{\PP,A}\,}.
\end{eqnarray*}
\markEnd
\end{Theorem}
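The fixpoint semantics can be made concrete with a naive bottom-up evaluator (an illustrative sketch, not an efficient implementation): it applies the immediate consequence operator $\ensuremath{\mathcal{T}}_\PP$ repeatedly until the least fixpoint $\ensuremath{\mathcal{T}}_\PP^\omega\big(\atoms(\A)\big)$ is reached.

```python
from itertools import product

def naive_eval(rules, facts, domain):
    """Least fixpoint T_P^omega(facts): repeatedly apply the immediate
    consequence operator until no new atomic fact is derived.  A rule
    is a pair (head, body); an atom is (predicate, tuple of variable
    names); safety (head variables occur in the body) is assumed."""
    facts = set(facts)
    while True:
        new = set()
        for (hp, hargs), body in rules:
            vs = sorted({v for _, args in body for v in args})
            for vals in product(domain, repeat=len(vs)):
                beta = dict(zip(vs, vals))
                if all((p, tuple(beta[v] for v in args)) in facts
                       for p, args in body):
                    new.add((hp, tuple(beta[v] for v in hargs)))
        if new <= facts:
            return facts
        facts |= new

# toy monadic program: mark the root and everything reachable from it
rules = [
    (('Marked', ('x',)), [('Root', ('x',))]),
    (('Marked', ('y',)), [('Marked', ('x',)), ('Child', ('x', 'y'))]),
]
facts = {('Root', (0,)), ('Child', (0, 1)), ('Child', (1, 2))}
result = naive_eval(rules, facts, domain=[0, 1, 2])
print(('Marked', (2,)) in result)   # True
```

Since the immediate consequence operator is monotone and the fact space is finite, the loop terminates at the smallest fixpoint containing the input facts, as guaranteed by the Knaster-Tarski theorem above.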
A \emph{$k$-ary (monadic) datalog query} of schema $\tau$ is a tuple $Q=(\PP,P)$
where $\PP$ is a (monadic) datalog program of schema $\tau$ and $P$ is an
(intensional or extensional) predicate of arity $k$ occurring in
$\PP$.
$\PP$ and $P$ are called the \emph{program} and the \emph{query
predicate} of $Q$.
When evaluated in a finite $\tau$-structure $\A$, the query $Q$
results in the following $k$-ary relation over $A$:
\begin{eqnarray*}
\AF{Q}(\A) & \deff &
\setc{\ (a_1,\ldots,a_k)\in A^k\ }{\ \; P(a_1,\ldots,a_k) \;\in\;
\ensuremath{\mathcal{T}}_\PP^\omega\big(\atoms(\A)\big)\ }.
\end{eqnarray*}
\emph{Unary} queries are queries of arity $k=1$.
The \emph{size} $\size{Q}$ of a (monadic) datalog query $Q$ is the
length of $Q=(\PP,P)$ viewed as a string over alphabet
\[
\edb(\PP) \, \cup \, \idb(\PP) \, \cup \,
\set{x,y,z,{}_{0},{}_{1},{}_{2},{}_{3},{}_{4},{}_{5},{}_{6},{}_{7},{}_{8},{}_{9}}
\, \cup \,
\set{(,),\{,\}} \, \cup \, \set{\leftarrow}\cup\set{,}.
\]
\begin{Example} \label{sample:2red}
Consider the schema $\tauGK$ introduced in Section~\ref{subsection:OrderedTrees} for
representing ordered $\Sigma$-labeled trees for
$\Sigma=\set{\textit{Black},\ \textit{White}}$.
We present a unary monadic datalog query
$Q=(\PP,\ANS)$ of schema $\tauGK$ such that for every ordered
$\Sigma$-labeled tree $T$ we have
\begin{eqnarray*}
\AF{Q}\big(\SGK(T)\big)
& = &
\left\{
\begin{array}{lp{6cm}}
\set{\,\rroot^T\,} & if the root of $T$ has exactly two
children labeled with the symbol \textit{White},
\\[1ex]
\emptyset & otherwise.
\end{array}
\right.
\end{eqnarray*}
To this end, we let $\PP$ consist of the following rules:
\[
\begin{array}{rcl}
\ANS(x) &\leftarrow & \Root(x),\ \Fc(x,y),\ \WHITE_2(y)\\[0.5ex]
\WHITE_2(x) &\leftarrow & \Label_{Black}(x),\ \Ns(x,y),\ \WHITE_2(y)\\[0.5ex]
\WHITE_2(x) &\leftarrow & \Label_{White}(x),\ \Ns(x,y),\ \WHITE_1(y)\\[0.5ex]
\WHITE_1(x) &\leftarrow & \Label_{Black}(x),\ \Ns(x,y),\ \WHITE_1(y)\\[0.5ex]
\WHITE_1(x) &\leftarrow & \Label_{White}(x),\ \Ns(x,y),\ \WHITE_0(y)\\[0.5ex]
\WHITE_0(x) &\leftarrow & \Label_{Black}(x),\ \Ns(x,y),\ \WHITE_0(y)\\[0.5ex]
\WHITE_1(x) &\leftarrow & \Label_{White}(x),\ \Ls(x) \\[0.5ex]
\WHITE_0(x) &\leftarrow & \Label_{Black}(x),\ \Ls(x)
\end{array}
\]
In particular, $Q$ returns $\set{\rroot^T}$ on the tree from
Example~\ref{Exa:tree-ordered}. \markEnd
\end{Example}
\begin{Remark}[Folklore]\label{remark:monotonicity}
The monotonicity of the immediate consequence operator implies
that datalog queries $Q$ of schema $\tau$ are monotone in the following sense:
If $\A$ and $\B$ are $\tau$-structures with $\atoms(\A)\subseteq
\atoms(\B)$, then $\AF{Q}(\A)\subseteq \AF{Q}(\B)$. \markEnd
\end{Remark}
Let us point out that it is also well-known that datalog is \emph{preserved
under homomorphisms} in the following sense.
A \emph{homomorphism} from a $\tau$-structure $\A$ to a
$\tau$-structure $\B$ is a mapping $h:A\to B$ such that for all
$R\in\tau$ and all tuples $(a_1,\ldots,a_r)\in R^\A$ we have
$(h(a_1),\ldots,h(a_r))\in R^\B$.
As a shorthand, for any set $S\subseteq A^k$ we
let $h(S)=\setc{\big(h(a_1),\ldots,h(a_k)\big)}{(a_1,\ldots,a_k)\in S}$.
\begin{Lemma}[Folklore]\label{lemma:homomorphisms}
Any $k$-ary datalog query $Q$ of schema $\tau$ is preserved under
homomorphisms in the following sense: If $\A$ and $\B$ are
$\tau$-structures, and $h$ is a homomorphism from $\A$ to $\B$, then
$h\big(\AF{Q}(\A)\big)\subseteq \AF{Q}(\B)$.
\end{Lemma}
\begin{proof}
Let $\A$ and $\B$ be $\tau$-structures and let $h:A\to B$ be a
homomorphism from $\A$ to $\B$. Furthermore, let $Q=(\PP,P)$ where
$\PP$ is a datalog program of schema $\tau$.
For an atomic fact $f=R(a_1,\ldots,a_r)\in F_{\PP,A}$ let
$h(f)\deff R(h(a_1),\ldots,h(a_r))$ be the corresponding atomic fact in
$F_{\PP,B}$.
Furthermore, for a set $S\subseteq F_{\PP,A}$ let
$h(S)\deff\setc{h(f)}{f\in S}$ be the corresponding subset of $F_{\PP,B}$.
First, note that by the definition of the immediate consequence
operator $\ensuremath{\mathcal{T}}_\PP$ it is straightforward to see that the following is true:
If $C\subseteq F_{\PP,A}$ and $D\subseteq F_{\PP,B}$ such that $h(C)\subseteq D$,
then $h(\ensuremath{\mathcal{T}}_\PP(C))\subseteq \ensuremath{\mathcal{T}}_\PP(D)$.
Next, note that this immediately implies that the following is true: If
$C\subseteq F_{\PP,A}$ and $D\subseteq F_{\PP,B}$ such that
$h(C)\subseteq D$, then $h\big(\ensuremath{\mathcal{T}}_\PP^\omega(C)\big)\subseteq \ensuremath{\mathcal{T}}_\PP^\omega(D)$.
Finally, note that $h$ is a homomorphism from $\A$ to $\B$, and thus
$h\big(\atoms(\A)\big)\subseteq \atoms(\B)$. Consequently,
$h\big(\ensuremath{\mathcal{T}}_\PP^\omega\big(\atoms(\A)\big)\big)\subseteq
\ensuremath{\mathcal{T}}_\PP^\omega\big(\atoms(\B)\big)$. In particular, this means that
$h\big(\AF{Q}(\A)\big)\subseteq \AF{Q}(\B)$.
\end{proof}
\subsection{Monadic Second-Order Logic ($\MSO$)}
The set $\MSO(\tau)$ of all monadic second-order formulas of schema
$\tau$ is defined as usual, cf.\ e.g.\ \cite{Libkin}: There are
two kinds of variables, namely
\emph{node variables}, denoted with lower-case letters $x$, $y$, $\ldots$,
$x_1$, $x_2$, $\ldots$ and ranging over elements of the domain,
and \emph{set variables}, denoted with upper-case letters $X$, $Y$, $\ldots$,
$X_1$, $X_2$, $\ldots$ and ranging over sets of elements of the domain.
\noindent
An \emph{atomic} $\MSO(\tau)$-formula is of the form
\begin{description}
\item[(A1)] $R(x_1,\ldots,x_r)$, \ where $R\in\tau$, $r=\ar(R)$,
and $x_1,\ldots,x_r$ are node variables,
\item[(A2)] $x=y$, \ where $x$ and $y$ are node variables, \ or
\item[(A3)] $X(x)$, \ where $x$ is a node variable and $X$ is a set variable.
\end{description}
\noindent
If $x$ is a node variable, $X$ a set variable, and $\phi$ and $\psi$
are $\MSO(\tau)$-formulas, then
\begin{description}
\item[(BC)] $\lnot \phi$ \ and \ $(\phi \lor \psi)$ \ are $\MSO(\tau)$-formulas,
\item[(Q1)] $\exists x \phi$ \ and \ $\forall x \phi$ \ are $\MSO(\tau)$-formulas,
\item[(Q2)] $\exists X \phi$ \ and \ $\forall X \phi$ \ are $\MSO(\tau)$-formulas.
\end{description}
Quantifiers of the form (Q1) are called \emph{first-order
quantifiers}; quantifiers of the form (Q2) are called \emph{set
quantifiers}.
$\MSO(\tau)$-formulas in which no set quantifier occurs are called
\emph{first-order formulas}
(\emph{$\FO(\tau)$-formulas}, for short).
The \emph{size} $\size{\varphi}$ of a formula $\varphi$ is the length
of $\varphi$ viewed as a string over alphabet
\[
\tau
\, \cup \,
\set{x,y,z,X,Y,Z,{}_{0},{}_{1},{}_{2},{}_{3},{}_{4},{}_{5},{}_{6},{}_{7},{}_{8},{}_{9}}
\, \cup \,
\set{(,)} \, \cup \, \set{=,\nicht,\oder,\exists,\forall}\cup\set{,}.
\]
As shortcuts we use the Boolean connectives $(\varphi\und\psi)$, $(\varphi\impl\psi)$, and
$(\varphi\gdw\psi)$, the statement $x\neq y$ for node
variables, and the statements $X=Y$, $X\neq Y$, and
$X\subseteq Y$ for set variables. Note that all these can easily be
expressed in first-order logic.
To improve readability of formulas, we will sometimes add or omit parentheses.
By $\free(\varphi)$ we denote the set of (node or set) variables
that occur free (i.e., not within the range of a node or set
quantifier) in $\varphi$.
A \emph{sentence} is a formula without free variables.
We write $\varphi(x_1,\ldots,x_k,X_1,\ldots,X_\ell)$ to indicate that $\varphi$ has $k$
free node variables $x_1,\ldots,x_k$ and $\ell$ free set variables
$X_1,\ldots,X_\ell$.
For a $\tau$-structure $\A$, elements $a_1,\ldots,a_k\in A$, and sets
$A_1,\ldots,A_\ell\subseteq A$, we write
$\A\models\varphi(a_1,\ldots,a_k,A_1,\ldots,A_\ell)$ to indicate that
$\A$ satisfies the formula $\varphi$ when interpreting the free
occurrences of the variables $x_1,\ldots,x_k,X_1,\ldots,X_\ell$ with
$a_1,\ldots,a_k,A_1,\ldots,A_\ell$.
A formula $\varphi(x_1,\ldots,x_k)$ with $k$ free node variables and
no free set variable defines a $k$-ary query on $\tau$-structures which,
when evaluated in a $\tau$-structure $\A$, results in the $k$-ary relation
\begin{eqnarray*}
\AF{\varphi}(\A) & \deff &
\setc{\ (a_1,\ldots,a_k)\in A^k\ }{\ \A \models \varphi(a_1,\ldots,a_k)\ }.
\end{eqnarray*}
\begin{Example}\label{Exa:MSO_child}
Consider the schema $\tau_u$ introduced in
Section~\ref{subsection:UnorderedTrees} for representing unordered
$\Sigma$-labeled trees for
$\Sigma=\set{\textit{Black},\textit{White}}$.
We present a unary $\FO(\tau_u)$-query $\varphi(x)$ such that for
every unordered $\Sigma$-labeled tree $T$ we have
\begin{eqnarray*}
\AF{\varphi}\big(\S_u(T)\big)
& = &
\left\{
\begin{array}{lp{6cm}}
\set{\,\rroot^T\,} & if the root of $T$ has exactly two
children labeled with the symbol \textit{White},
\\[1ex]
\emptyset & otherwise.
\end{array}
\right.
\end{eqnarray*}
To this end, we let $\varphi(x)$ be the following
$\FO(\tau_u)$-formula:
\[
\begin{array}{l}
\nicht \exists u \,\Child(u,x) \ \ \und\\[0.5ex]
\exists y \, \exists z\,
\Big( \,
y \neq z \ \und \,
\Child(x,y) \, \und \, \Child(x,z) \, \und \,
\Label_{\textit{White}}(y) \, \und \,
\Label_{\textit{White}}(z) \, \und \,
\\
\qquad \qquad
\forall v\, \big( \,
\Child(x,v) \, \impl \, (\, v=y \, \oder \, v=z
\, \oder \, \nicht\Label_{\textit{White}}(v) \,)
\, \big)
\, \Big)
\end{array}
\]
\markEnd
\end{Example}
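The condition expressed by the example query can also be evaluated directly. The following Python sketch checks, on a small tree encoded as a child map, that a node is the root and has exactly two children labeled \textit{White}; the encoding and all names are hypothetical.

```python
def phi(children, label, x):
    """Direct evaluation of the example query: x has no parent and
    exactly two of its children are labeled White (other children may
    carry other labels)."""
    is_root = all(x not in cs for cs in children.values())
    whites = [c for c in children[x] if label[c] == "White"]
    return is_root and len(whites) == 2

# Root r with three children; exactly two of them are White.
children = {"r": {"a", "b", "c"}, "a": set(), "b": set(), "c": set()}
label = {"r": "Black", "a": "White", "b": "White", "c": "Black"}

assert phi(children, label, "r")       # root with exactly two White children
assert not phi(children, label, "a")   # a is not the root
```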
\medskip
\noindent
A $\PiMSO(\tau)$-formula is an $\MSO(\tau)$-formula of the form
\[
\forall X_1 \cdots \forall X_m \
\exists x_1 \cdots\exists x_k \
\xi
\]
where $m,k \in \NN$, $X_1,\ldots,X_m$ are set variables,
$x_1,\ldots,x_k$ are node variables, and $\xi$
is a formula that does not contain any (node or set) quantifier.
It is well-known that unary monadic datalog queries can be translated into
equivalent $\PiMSO$ queries.
\begin{Proposition}[Folklore]\label{prop:mDatalog2MSO}
Let $\tau$ be a schema. For every unary monadic datalog query $Q=(\PP,P)$ of
schema $\tau$ there is a $\PiMSO(\tau)$-formula $\varphi(x)$ such that
$\AF{Q}(\A)=\AF{\varphi}(\A)$ is true
for every finite $\tau$-structure $\A$.
Furthermore, there is an algorithm which computes $\varphi$ from $Q$
in time polynomial in the size of $Q$.
\end{Proposition}
\begin{proof}
Let $\{X_1, \ldots , X_m\} = \idb(\PP)$ be the set of intensional
predicates of $\PP$, and w.l.o.g.\ let $X_1=P$.
For every rule $r$ of $\PP$ of the form
\ $
h \leftarrow b_1, \ldots , b_n,
$ \
with $\set{z_1,\ldots,z_k}=\ensuremath{\textit{var}}(r)$ let
\begin{eqnarray*}
\psi_r(X_1,\ldots,X_m) & \deff &
\forall z_1\, \cdots\, \forall z_k \ \big( \
( \; b_1 \und \cdots \und b_n \;)\; \impl\; h
\ \big).
\end{eqnarray*}
Now, let
\ $
\chi(X_1,\ldots,X_m) \deff
\Und_{r\in\PP} \psi_r(X_1,\ldots,X_m)$. \
Finally, let $x$ be a node variable that does not occur in
$\chi(X_1,\ldots,X_m)$ and let
\begin{eqnarray*}
\varphi(x) & \deff &
\forall X_1 \, \cdots \, \forall X_m \ \big( \
\chi(X_1,\ldots,X_m)\;\impl\, X_1(x)
\ \big).
\end{eqnarray*}
Obviously, $\varphi(x)$ is equivalent, on the
class of all $\tau$-structures, to the formula
\ $\forall X_1 \cdots \forall X_m \, \big(
X_1(x) \oder \nicht \chi \big)$, \
and $\nicht\chi$ is equivalent to $\Oder_{r\in\PP}\nicht\psi_r$, \
while $\nicht\psi_r$ is equivalent to
\ $\exists z_1\cdots\exists z_k \,\nicht\big((b_1\und\cdots\und
b_n)\impl h\big)$. Thus, it is straightforward to see that
$\varphi(x)$ is equivalent to a $\PiMSO(\tau)$-formula, and this
formula can be constructed in time polynomial in the size of $Q$.
It remains to verify that $\AF{Q}(\A)=\AF{\varphi}(\A)$, for
every $\tau$-structure $\A$.
To this end, let $\A$ be an arbitrary $\tau$-structure.
By the construction of $\varphi(x)$ we know for $a\in A$ that
\begin{equation*}
\begin{array}{ll}
& a\in\AF{\varphi}(\A)
\\[1ex]
\iff
& a\in X_1^{\A'} \ \ \text{for every
$\tau\cup\set{X_1,\ldots,X_m}$-expansion $\A'$ of $\A$
with $\A'\models\chi$}.
\end{array}
\end{equation*}
Now let $C\deff\atoms(\A)$. Furthermore, consider arbitrary
sets $A_1,\ldots,A_m\subseteq A$, let
$\A'$ be the $\tau\cup\set{X_1,\ldots,X_m}$-structure obtained as the
expansion of $\A$ by $X_i^{\A'}\deff A_i$ for all
$i\in\set{1,\ldots,m}$, and let $D\deff\atoms(\A')$. Clearly,
$C\subseteq D\subseteq F_{\PP,A}$.
Furthermore, note that $\chi$ is constructed in such a way that the
following is true:
\[
\A'\models \chi \quad \iff \quad \ensuremath{\mathcal{T}}_\PP(D)\subseteq D.
\]
By the theorem of Knaster and Tarski (Theorem~\ref{thm:KnasterTarski})
we know that
\begin{eqnarray*}
\ensuremath{\mathcal{T}}_\PP^\omega(C) & = &
\bigcap \; \setc{\,D}{\,\ensuremath{\mathcal{T}}_{\PP}(D) \subseteq D \text{ \,and \,} C
\subseteq D \subseteq F_{\PP,A}\,}.
\end{eqnarray*}
Thus, for $a\in A$ we have
\[
\begin{array}{ll}
& a\in\AF{Q}(\A)
\\[1ex]
\iff
& X_1(a)\in \ensuremath{\mathcal{T}}_\PP^\omega(C)
\\[1ex]
\iff
& X_1(a)\in D \ \ \text{for every $D$ with $\ensuremath{\mathcal{T}}_\PP(D)\subseteq D$ and
$C\subseteq D\subseteq F_{\PP,A}$}
\\[1ex]
\iff
& a\in X_1^{\A'} \ \ \text{for every
$\tau\cup\set{X_1,\ldots,X_m}$-expansion $\A'$ of $\A$ with
$\A'\models\chi$}
\\[1ex]
\iff
& a\in\AF{\varphi}(\A).
\end{array}
\]
This completes the proof of Proposition~\ref{prop:mDatalog2MSO}.
\end{proof}
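The construction in this proof is purely syntactic, so it can be sketched as a small string-building routine. The following Python sketch assembles $\varphi(x)$ from the formulas $\psi_r$ exactly as above; the ASCII tokens \texttt{forall}, \texttt{\&}, and \texttt{->} stand in for the logical symbols, and the rule encoding is an assumption of the sketch.

```python
def mdatalog_to_mso(rules, idb, goal):
    """Syntactic sketch of the proof's construction: build
    phi(x) = forall X_1 ... forall X_m ( chi -> Goal(x) ), where chi is
    the conjunction over all rules r of
    psi_r = forall z_1 ... forall z_k ( body -> head )."""
    def atom(a):
        pred, args = a
        return pred + "(" + ",".join(args) + ")"
    psis = []
    for head, body in rules:
        zs = sorted({v for _, args in [head] + body for v in args})
        quant = " ".join("forall " + z for z in zs)
        psis.append(quant + " ((" + " & ".join(atom(b) for b in body)
                    + ") -> " + atom(head) + ")")
    chi = " & ".join("(" + p + ")" for p in psis)
    sets = " ".join("forall " + X for X in idb)
    return sets + " ((" + chi + ") -> " + goal + "(x))"

# The reachability program  P(z1) <- Root(z1);  P(z2) <- Child(z1,z2), P(z1)
rules = [(("P", ("z1",)), [("Root", ("z1",))]),
         (("P", ("z2",)), [("Child", ("z1", "z2")), ("P", ("z1",))])]
formula = mdatalog_to_mso(rules, ["P"], "P")
assert formula.startswith("forall P")
assert formula.endswith("-> P(x))")
```

As in the proof, the routine runs in time polynomial in the size of the program, since it touches each rule once.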
\section{Expressive Power of Monadic Datalog on Trees}\label{section:ExpressivePower}
A unary query $q$ on $\Sigma$-labeled (un)ordered trees assigns to each
(un)ordered $\Sigma$-labeled tree $T$ a set
$q(T)\subseteq V^T$.
\subsection{Expressive Power of $\mDatalog$ on Ordered Trees}\label{subsection:ExpressivePower-OrderedTrees}
Let $\tau$ be one of the schemas introduced in
Section~\ref{subsection:OrderedTrees}, i.e., $\tau$ is $\tau_o^M$ for some
$M\subseteq\set{\Child,\Desc,\Root,\Leaf,\Ls}$.
We say that a unary query $q$ on $\Sigma$-labeled ordered trees is
\emph{$\mDatalog(\tau)$-definable} iff there is a unary monadic
datalog query $Q$ of schema $\tau$ such
that for every ordered $\Sigma$-labeled tree $T$ we have
\ $q(T) = \AF{Q}(\S_o^M(T))$.
Similarly, for any subset $L$ of $\MSO$, $q$ is called
\emph{$L(\tau)$-definable} iff there is an $L(\tau)$-formula
$\varphi(x)$ such that for every ordered $\Sigma$-labeled tree $T$
we have
\ $q(T) = \AF{\varphi}(\S_o^M(T))$.
Often, we will simply write $Q(T)$ instead of $\AF{Q}(\S_o^M(T))$, and
$\varphi(T)$ instead of $\AF{\varphi}(\S_o^M(T))$.
Proposition~\ref{prop:mDatalog2MSO} implies that unary queries
on $\Sigma$-labeled ordered trees
which are definable in $\mDatalog(\tau)$, are also definable in $\MSO(\tau)$.
In \cite{GottlobKoch} it was shown that for the particular schema
$\tau=\tauGK$ also the converse is true:
\begin{Theorem}[Gottlob, Koch \cite{GottlobKoch}]\label{thm:GottlobKoch}
A unary query on $\Sigma$-labeled ordered trees is definable in
$\mDatalog(\tauGK)$ if, and only if, it is definable in
$\MSO(\tauGK)$.
\\
Furthermore, there is an algorithm which translates a given unary
$\mDatalog(\tauGK)$-query into an equivalent unary
$\MSO(\tauGK)$-query, and vice versa.
\markEnd
\end{Theorem}
In the remainder of this subsection, we point out that adding the
$\Child$ and $\Desc$ relations does not increase the expressive power of $\mDatalog$ or
$\MSO$ on ordered $\Sigma$-labeled trees, while omitting any of the relations
$\Root$, $\Leaf$, or $\Ls$ will substantially decrease the expressive
power of $\mDatalog$, but not of $\MSO$.
\begin{Fact}[Folklore]\label{fact:MSO-orderedTrees}
There are $\MSO(\tau_o)$-formulas
\begin{center}
$\varphi_{\Child}(x,y)$, \
$\varphi_{\Desc}(x,y)$, \
$\varphi_{\Root}(x)$, \
$\varphi_{\Leaf}(x)$, \
$\varphi_{\Ls}(x)$,
\end{center}
such that for every $\Sigma$-labeled ordered tree $T$ and all nodes
$a,b$ of $T$ we have
\[
\begin{array}{lcl}
\S_o(T)\models\varphi_{\Child}(a,b) & \iff & \S'_o(T)\models\Child(a,b),
\\
\S_o(T)\models\varphi_{\Desc}(a,b) & \iff & \S'_o(T)\models\Desc(a,b),
\\
\S_o(T)\models\varphi_{\Root}(a) & \iff & \S'_o(T)\models\Root(a),
\\
\S_o(T)\models\varphi_{\Leaf}(a) & \iff & \S'_o(T)\models\Leaf(a),
\\
\S_o(T)\models\varphi_{\Ls}(a) & \iff & \S'_o(T)\models\Ls(a).
\end{array}
\]
\end{Fact}
\begin{proof}
Obviously, we can choose
\[
\begin{array}{rcl}
\varphi_{\Root}(x)
& \deff
& \nicht\,\exists y\;\big(\,\Fc(y,x)\,\oder\,\Ns(y,x)\,\big),
\\[1ex]
\varphi_{\Leaf}(x)
& \deff
& \nicht\,\exists y\; \Fc(x,y),
\\[1ex]
\varphi_{\Ls}(x)
& \deff
& \nicht\,\exists y\; \Ns(x,y).
\end{array}
\]
For constructing $\varphi_{\Child}(x,y)$ and
$\varphi_{\Desc}(x,y)$, we consider the following auxiliary formulas:
Let $\varrho(x,y)$ be an arbitrary formula, let $X$ be a set variable,
and let
\begin{eqnarray*}
\textit{cl}_{\varrho(x,y)}(X)
& \deff
& \forall x\,\forall y\; \Big(\,
\big(\,X(x) \und \varrho(x,y)\,\big) \,\impl\, X(y)
\,\Big).
\end{eqnarray*}
Clearly, this formula holds for a set $X$ iff $X$ is closed under
``$\varrho$-successors''.
In particular, the formula
\begin{eqnarray*}
\varphi_{\Ns^*}(x,y)
& \deff
& \forall X\; \Big(\,
\big(\, X(x) \,\und\, \textit{cl}_{\Ns(x,y)}(X) \,\big) \,\impl\, X(y)
\,\Big)
\end{eqnarray*}
expresses that $y$ is either equal to $x$, or a sibling of $x$ that
comes after $x$ in the linear order of the children of the common
parent of $x$ and $y$.
Consequently, we can choose
\begin{eqnarray*}
\varphi_{\Child}(x,y)
& \deff
& \exists x'\;
\big(\,
\Fc(x,x') \,\und\, \varphi_{\Ns^*}(x',y)
\,\big).
\end{eqnarray*}
Since the $\Desc$-relation is the transitive (and non-reflexive)
closure of the $\Child$-relation, we can choose
\begin{eqnarray*}
\varphi_{\Desc}(x,y)
& \deff
& x\neq y \ \und \
\forall X\; \Big(\,
\big(\,
X(x) \,\und\,\textit{cl}_{\varphi_{\Child}(x,y)}(X)
\,\big) \,\impl\, X(y)
\,\Big).
\end{eqnarray*}
\end{proof}
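The formulas $\varphi_{\Child}$ and $\varphi_{\Desc}$ have a direct algorithmic counterpart: follow $\Ns$-steps starting from an $\Fc$-successor to recover $\Child$, then take the transitive closure for $\Desc$. The following Python sketch (the relation encoding as sets of pairs is an assumption) does exactly that on a three-node example.

```python
def derive_child(fc, ns):
    """Child(x, y) holds iff y is reachable from x's first child via
    zero or more NextSibling steps (mirroring phi_Child)."""
    succ = dict(ns)            # NextSibling as a partial function
    child = set()
    for x, y in fc:            # y is the first child of x ...
        while y is not None:   # ... its Ns-successors are the others
            child.add((x, y))
            y = succ.get(y)
    return child

def derive_desc(child):
    """Desc = transitive (non-reflexive) closure of Child."""
    desc = set(child)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(desc):
            for (c, d) in child:
                if c == b and (a, d) not in desc:
                    desc.add((a, d))
                    changed = True
    return desc

# r has children a, b (in this order); a has a single child c.
fc = {("r", "a"), ("a", "c")}
ns = {("a", "b")}
child = derive_child(fc, ns)
assert child == {("r", "a"), ("r", "b"), ("a", "c")}
assert derive_desc(child) == child | {("r", "c")}
```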
\noindent
In combination with Theorem~\ref{thm:GottlobKoch} and Proposition~\ref{prop:mDatalog2MSO}, this leads to:
\begin{Corollary}\label{cor:mDatalogVsMSO-orderedTrees}
The following languages can express exactly the same unary queries on $\Sigma$-labeled ordered trees:
\begin{center}
$\mDatalog(\tauGK)$, \ $\mDatalog(\tau'_o)$, \
$\MSO(\tau'_o)$, \ $\MSO(\tauGK)$, \ $\MSO(\tau_o)$.
\end{center}
Furthermore, there is an algorithm which translates a given unary
query on $\Sigma$-labeled ordered trees formulated in one of these
languages into equivalent queries formulated in any of the other languages.
In particular, adding the $\Child$ and $\Desc$ relations to $\tauGK$
does not increase the expressive power of monadic datalog on
$\Sigma$-labeled ordered trees.
\end{Corollary}
\begin{proof}
Since $\tauGK\subseteq\tau'_o$, \ $\mDatalog(\tauGK)$ is at most as
expressive as $\mDatalog(\tau'_o)$ which, by
Proposition~\ref{prop:mDatalog2MSO}, is at most as expressive as
$\MSO(\tau'_o)$.
By Fact~\ref{fact:MSO-orderedTrees}, $\MSO(\tau'_o)$ is as expressive
on $\Sigma$-labeled ordered trees as $\MSO(\tau_o)$ and $\MSO(\tauGK)$ which, by
Theorem~\ref{thm:GottlobKoch}, is as expressive on $\Sigma$-labeled
ordered trees as $\mDatalog(\tauGK)$.
Furthermore, by Proposition~\ref{prop:mDatalog2MSO},
Fact~\ref{fact:MSO-orderedTrees}, and Theorem~\ref{thm:GottlobKoch},
the
translation from one language to another is constructive.
\end{proof}
Next, we note that omitting any of the unary relations $\Root$,
$\Leaf$, or $\Ls$ decreases the expressive power of monadic datalog on
$\Sigma$-labeled ordered trees.
\begin{Observation}\label{obs:mDatalogWithoutUnaryRel-orderedTrees}
For any relation $\Rel\in\set{\Root,\Leaf,\Ls}$, the unary query
$q_{\Rel}$ with $q_{\Rel}(T)=\setc{v\in V^T}{\S'_o(T)\models\Rel(v)}$
can be expressed in $\mDatalog(\set{\Rel})$, but not in
$\mDatalog(\tau'_o\setminus\set{\Rel})$.
\end{Observation}
\begin{proof}
It is obvious that the query $q_{\Rel}$ can be expressed in
$\mDatalog(\set{\Rel})$.
Let $M\subseteq\set{\Child,\Desc,\Root,\Leaf,\Ls}$ be such that
$\tau_o^M=\tau'_o\setminus\set{\Rel}$.
Assume, for contradiction, that $q_{\Rel}$ is expressed by a
$\mDatalog(\tau_o^M)$-query $Q=(\PP,P)$.
First, consider the case where $\Rel=\Root$. Let $T_0$ be the tree
consisting of a single node $v$ labeled $\alpha\in\Sigma$, and let
$T_1$ be the tree consisting of two nodes $u,v$, both labeled
$\alpha$, such that $v$ is the unique child of $u$.
Since $\tau_o^M=\tau'_o\setminus\set{\Root}$, we have
\[
\begin{array}{rl}
\atoms\big(\S_o^M(T_0)\big)
\ =
& \{\
\Label_\alpha(v),\ \Leaf(v)
\ \}, \quad \text{and}
\\[1ex]
\atoms\big(\S_o^M(T_1)\big)
\ =
& \atoms\big(\S_o^M(T_0)\big) \ \cup\
\left\{
\begin{array}{l}
\Label_\alpha(u), \ \Fc(u,v), \\ \Ls(v), \ \Child(u,v),\\
\Desc(u,v)
\end{array}
\right\}.
\end{array}
\]
I.e., $\atoms(\S_o^M(T_0))\subseteq \atoms(\S_o^M(T_1))$ and thus,
due to the monotonicity stated in Remark~\ref{remark:monotonicity}, we have
$\AF{Q}(\S_o^M(T_0))\subseteq \AF{Q}(\S_o^M(T_1))$.
This contradicts the fact that
$v\in q_{\Root}(T_0)=\AF{Q}(\S_o^M(T_0))$ but $v\not\in
q_{\Root}(T_1)=\AF{Q}(\S_o^M(T_1))$.
Next, consider the case where $\Rel=\Leaf$, and let
$T_0$ be the tree consisting of a single node $v$ labeled $\alpha\in\Sigma$, and let
$T'_1$ be the tree consisting of two nodes $v$ and $w$, both labeled
$\alpha$, such that $w$ is the unique child of $v$.
Since $\tau_o^M=\tau'_o\setminus\set{\Leaf}$, it is straightforward
to see that $\atoms(\S_o^M(T_0))\subseteq\atoms(\S_o^M(T'_1))$.
By monotonicity, we have that
$\AF{Q}(\S_o^M(T_0))\subseteq \AF{Q}(\S_o^M(T'_1))$,
contradicting the fact that
$v\in q_{\Leaf}(T_0)=\AF{Q}(\S_o^M(T_0))$ but $v\not\in
q_{\Leaf}(T'_1)=\AF{Q}(\S_o^M(T'_1))$.
Finally, consider the case where $\Rel=\Ls$.
Let $T_1$ be the tree consisting of two nodes $u,v$, both labeled
$\alpha$, such that $v$ is the unique child of $u$. Let
$T_2$ be the tree consisting of three nodes $u,v,w$, all labeled
$\alpha$, such that $v$ and $w$ are the first and the second child of
$u$.
Since $\tau_o^M=\tau'_o\setminus\set{\Ls}$, it is straightforward
to see that $\atoms(\S_o^M(T_1))\subseteq\atoms(\S_o^M(T_2))$.
By monotonicity, we have
$\AF{Q}(\S_o^M(T_1))\subseteq \AF{Q}(\S_o^M(T_2))$,
contradicting the fact that
$v\in q_{\Ls}(T_1)=\AF{Q}(\S_o^M(T_1))$ but $v\not\in
q_{\Ls}(T_2)=\AF{Q}(\S_o^M(T_2))$.
\end{proof}
\subsection{Expressive Power of $\mDatalog$ on Unordered Trees}\label{subsection:ExpressivePower-UnorderedTrees}
Let $\tau$ be one of the schemas introduced in
Section~\ref{subsection:UnorderedTrees}, i.e., $\tau$ is $\tau_u^M$ for some
$M\subseteq\set{\Desc,\Is,\Root,\Leaf}$.
We say that a unary query $q$ on $\Sigma$-labeled unordered trees is
\emph{$\mDatalog(\tau)$-definable} iff there is a unary monadic
datalog query $Q$ of schema $\tau$ such
that for every unordered $\Sigma$-labeled tree $T$ we have
\ $q(T) = \AF{Q}(\S_u^M(T))$.
Similarly, for any subset $L$ of $\MSO$, $q$ is called
\emph{$L(\tau)$-definable} iff there is an $L(\tau)$-formula
$\varphi(x)$ such that for every unordered $\Sigma$-labeled tree $T$
we have
\ $q(T) = \AF{\varphi}(\S_u^M(T))$.
Often, we will simply write $Q(T)$ instead of $\AF{Q}(\S_u^M(T))$, and
$\varphi(T)$ instead of $\AF{\varphi}(\S_u^M(T))$.
Proposition~\ref{prop:mDatalog2MSO} implies that unary queries
on $\Sigma$-labeled unordered trees
which are definable in $\mDatalog(\tau)$, are also definable in $\MSO(\tau)$.
It is straightforward to see that
$\MSO(\tau_u)$ can express all the relations present in $\tau'_u$:
\begin{Fact}[Folklore]\label{fact:MSO-unorderedTrees}
There are $\MSO(\tau_u)$-formulas
\[
\varphi_{\Desc}(x,y), \ \
\varphi_{\As}(x,y),\ \
\varphi_{\Root}(x),\ \
\varphi_{\Leaf}(x)
\]
such that for every $\Sigma$-labeled unordered tree $T$ and all nodes
$a,b$ of $T$ we have
\[
\begin{array}{lcl}
\S_u(T)\models\varphi_{\Desc}(a,b) & \iff &
\S'_u(T)\models\Desc(a,b),
\\
\S_u(T)\models\varphi_{\As}(a,b) & \iff & \S'_u(T)\models\As(a,b),
\\
\S_u(T)\models\varphi_{\Root}(a) & \iff & \S'_u(T)\models\Root(a),
\\
\S_u(T)\models\varphi_{\Leaf}(a) & \iff & \S'_u(T)\models\Leaf(a).
\end{array}
\]
\end{Fact}
\begin{proof}
Obviously, we can choose
\begin{eqnarray*}
\varphi_{\Root}(x)
& \deff
& \nicht\,\exists y\,\Child(y,x),
\\
\varphi_{\Leaf}(x)
& \deff
& \nicht\,\exists y\,\Child(x,y),
\\
\varphi_{\As}(x,y)
& \deff
& x\neq y \ \und \ \exists u\,\big(\, \Child(u,x)\,\und\,\Child(u,y)\,\big).
\end{eqnarray*}
For constructing
$\varphi_{\Desc}(x,y)$, we consider the following auxiliary formula:
Let $\varrho(x,y)$ be an arbitrary formula, let $X$ be a set variable,
and let
\begin{eqnarray*}
\textit{cl}_{\varrho(x,y)}(X)
& \deff
& \forall x\,\forall y\; \Big(\,
\big(\,X(x) \und \varrho(x,y)\,\big) \,\impl\, X(y)
\,\Big).
\end{eqnarray*}
Clearly, this formula holds for a set $X$ iff $X$ is closed under
``$\varrho$-successors''.
In particular, the formula
\begin{eqnarray*}
\varphi_{\Child^*}(x,y)
& \deff
& \forall X\; \Big(\,
\big(\, X(x) \,\und\, \textit{cl}_{\Child(x,y)}(X) \,\big) \,\impl\, X(y)
\,\Big)
\end{eqnarray*}
expresses that $y$ is either equal to $x$, or it is a
descendant of $x$.
Thus, we can choose
\begin{eqnarray*}
\varphi_{\Desc}(x,y)
& \deff
& x\neq y \ \und \ \varphi_{\Child^*}(x,y).
\end{eqnarray*}
\end{proof}
However, unlike in the case of ordered trees, $\mDatalog(\tau'_u)$
cannot express all unary queries expressible in $\MSO(\tau_u)$, as the
following observation shows.
\begin{Observation}\label{obs:mDatalog-weaker-than-MSO-unorderedTrees}
The unary query $q_{\textit{two}}$ with
\[
q_\textit{two}(T) \ \ = \ \
\setc{v\in V^T}{v \text{ has exactly two children labeled }\alpha}
\]
is expressible in
$\MSO(\tau_u)$, but not in $\mDatalog(\tau'_u)$.
\end{Observation}
\begin{proof}
It is obvious that the query $q_\textit{two}$ is defined by the
$\MSO(\tau_u)$-formula \ $\psi(x) \deff$
\[
\begin{array}{ll}
\exists y_1\,\exists y_2\, \Big( \hspace{-2ex}
& \Child(x,y_1)\,\und\,\Child(x,y_2)\,\und\, \Label_\alpha(y_1)
\,\und\, \Label_\alpha(y_2)\,\und\, y_1\neq y_2 \, \und
\\
& \forall z\,\big(\,
(\, \Child(x,z)\,\und\,\Label_\alpha(z)\,)\,\impl\,
(\, z=y_1\,\oder\, z=y_2\,)\,
\big)\,
\Big).
\end{array}
\]
For contradiction, assume that $q_\textit{two}$ is expressed by a $\mDatalog(\tau'_u)$-query
$Q=(\PP,P)$.
Let $T_2$ be the $\Sigma$-labeled unordered tree consisting of three
nodes $u,v_1,v_2$, all labeled $\alpha$, such that $v_1$ and $v_2$ are
children of $u$. Furthermore, let $T_3$ be the tree consisting of four
nodes $u,v_1,v_2,v_3$, all labeled $\alpha$, such that $v_1,v_2,v_3$ are children of $u$.
Since
\[
\tau'_u
\ = \
\setc{\Label_\alpha}{\alpha\in \Sigma}\ \cup\
\set{\,\Child,\,\Desc,\,\As,\,\Root,\,\Leaf\,},
\]
it is straightforward to see that
$\atoms(\S'_u(T_2))\subseteq\atoms(\S'_u(T_3))$.
Thus, due to the monotonicity stated in
Remark~\ref{remark:monotonicity}, we have
$\AF{Q}(\S'_u(T_2))\subseteq\AF{Q}(\S'_u(T_3))$.
This contradicts the fact that $u\in
q_\textit{two}(T_2)=\AF{Q}(\S'_u(T_2))$ but
$u\not\in q_{\textit{two}}(T_3)=\AF{Q}(\S'_u(T_3))$.
\end{proof}
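The containment of atom sets used in this proof can be checked mechanically. The following Python sketch computes the atomic facts of the $\tau'_u$-representation of a tree and verifies that the facts of $T_2$ are a subset of those of $T_3$; the tree encoding is an assumption of the sketch, and the one-step $\Desc$ below suffices because both example trees have depth 1 (a general version would close $\Desc$ transitively).

```python
def atoms_u(children, label):
    """Atomic facts of the tau'_u-representation of a tree given as a
    child map plus a labeling. Desc is taken as one-step here, which
    is exact for trees of depth 1."""
    parent = {c: p for p, cs in children.items() for c in cs}
    facts = set()
    for v in children:
        facts.add(("Label_" + label[v], v))
        if v not in parent:
            facts.add(("Root", v))
        if not children[v]:
            facts.add(("Leaf", v))
    for p, cs in children.items():
        for c in cs:
            facts.add(("Child", p, c))
            facts.add(("Desc", p, c))
        for a in cs:
            for b in cs:
                if a != b:
                    facts.add(("As", a, b))
    return facts

lab = lambda *vs: {v: "alpha" for v in vs}
T2 = {"u": {"v1", "v2"}, "v1": set(), "v2": set()}
T3 = {"u": {"v1", "v2", "v3"}, "v1": set(), "v2": set(), "v3": set()}

# atoms(S'_u(T2)) is a subset of atoms(S'_u(T3)), as used in the proof.
assert atoms_u(T2, lab("u", "v1", "v2")) <= atoms_u(T3, lab("u", "v1", "v2", "v3"))
```

By the monotonicity of datalog queries, this containment is all that is needed to derive the contradiction in the proof.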
Next, we note that omitting any of the relations $\Root$, $\Leaf$, or $\As$
further decreases the expressive power of monadic datalog on
$\Sigma$-labeled unordered trees.
\begin{Observation}\label{obs:mDatalogWithoutUnaryRel-unorderedTrees}
\begin{mea}
\item
For any relation $\Rel\in\set{\Root,\Leaf}$, the query
$q_{\Rel}$ with $q_{\Rel}(T) = \setc{v\in
V^T}{\S'_u(T)\models\Rel(v)}$ can be expressed in
$\mDatalog(\set{\Rel})$, but not in
$\mDatalog(\tau'_u\setminus\set{\Rel})$.
\item
The query $q_{\textit{sib}}$ with $q_{\textit{sib}}(T) =
\setc{v\in V^T}{v\text{ has at least one sibling}}$, for all
$\Sigma$-labeled unordered trees $T$, can be expressed
in $\mDatalog(\set{\As})$, but not in
$\mDatalog(\tau'_u\setminus\set{\As})$.
\end{mea}
\end{Observation}
\begin{proof}
The proof of (a) is analogous to the corresponding parts of the proof of
Observation~\ref{obs:mDatalogWithoutUnaryRel-orderedTrees}.
For the proof of (b), first note that $q_{\textit{sib}}$ is
expressed by the unary monadic datalog query $Q=(\PP,P)$ where $\PP$
consists of the single rule
\[
P(x)\leftarrow \As(x,y).
\]
Now let $M= \set{\Desc,\Root,\Leaf}$, i.e.,
$\tau_u^M=\tau'_u\setminus\set{\As}$.
Assume, for contradiction, that $q_{\textit{sib}}$ is expressed by a
unary $\mDatalog(\tau_u^M)$-query $Q=(\PP,P)$.
We will conclude the proof by using
Lemma~\ref{lemma:homomorphisms},
stating that datalog queries are preserved under
homomorphisms.
Let $T_2$ be the $\Sigma$-labeled unordered tree consisting of three
nodes $a,a_1,a_2$, all labeled $\alpha$, such that $a_1$ and $a_2$ are
children of $a$. Furthermore, let $T_1$ be the tree consisting of two nodes
$b,b_1$, both labeled $\alpha$, such that $b_1$ is the unique child of $b$.
Let $\A\deff \S_u^M(T_2)$ and $\B\deff \S_u^M(T_1)$.
Consider the mapping $h:A\to B$ with $h(a)=b$ and
$h(a_1)=h(a_2)=b_1$. It is not difficult to see that $h$ is a
homomorphism from $\A$ to $\B$, since
\begin{mi}
\item $\Label_\alpha^\A=\set{a,a_1,a_2}$ \ and \
$\Label_\alpha^\B=\set{b,b_1}$
\item $\Label_{\alpha'}^\A=\emptyset=\Label_{\alpha'}^\B$, \ for all
$\alpha'\in\Sigma$ with $\alpha'\neq \alpha$
\item $\Child^\A=\set{(a,a_1),\, (a,a_2)}$ \ and \
$\Child^\B=\set{(b,b_1)}$
\item $\Desc^\A=\Child^\A$ \ and \ $\Desc^\B=\Child^\B$
\item $\Root^\A=\set{a}$ \ and \ $\Root^\B=\set{b}$
\item $\Leaf^\A=\set{a_1,\,a_2}$ \ and \ $\Leaf^\B=\set{b_1}$.
\end{mi}
From Lemma~\ref{lemma:homomorphisms} we obtain that
$h\big(\AF{Q}(\A)\big)\subseteq \AF{Q}(\B)$.
This contradicts the fact that $a_1\in
q_{\textit{sib}}(T_2)=\AF{Q}(\A)$, but $h(a_1)=b_1\not\in q_{\textit{sib}}(T_1)=\AF{Q}(\B)$.
\end{proof}
\noindent
In summary, we immediately obtain the following:
\begin{Corollary}\label{cor:ExpressivePower-unorderedTrees}
\begin{mea}
\item
$\MSO(\tau_u)$ can express exactly the same unary queries on
$\Sigma$-labeled unordered trees as $\MSO(\tau'_u)$;
and there is a polynomial time algorithm which translates a given unary
$\MSO(\tau'_u)$-query on $\Sigma$-labeled unordered trees into an equivalent
$\MSO(\tau_u)$-query.
Furthermore, both languages
are capable of expressing strictly more unary queries on
$\Sigma$-labeled unordered trees than $\mDatalog(\tau'_u)$.
\item
Omitting any of the relations $\Root$, $\Leaf$, $\As$ strictly
decreases the expressive power of unary
$\mDatalog(\tau'_u)$-queries on $\Sigma$-labeled unordered trees.
\markEnd
\end{mea}
\end{Corollary}
\section{Query containment, Equivalence, and Satisfiability for
Monadic Datalog on Trees}\label{section:QCPofMonadicDatalog}
Query containment, equivalence, and satisfiability of queries are important
problems concerning query optimisation and static analysis of queries.
\subsection{Query Containment for $\mDatalog$ on Trees}
Let $\tau$ be one of the schemas introduced in
Section~\ref{subsection:UnorderedTrees} or
Section~\ref{subsection:OrderedTrees}, and let $\S(T)$ be the corresponding
$\tau$-structure representing the tree $T$.
For two queries $Q_1$ and $Q_2$ of schema $\tau$,
we write $Q_1 \subseteq Q_2$ (and say that $Q_1$ is included in $Q_2$
on trees) to indicate that for every $\Sigma$-labeled
tree $T$ we have $\AF{Q_1}(\S(T)) \subseteq \AF{Q_2}(\S(T))$.
Accordingly, we write $Q_1\not\subseteq Q_2$ to indicate that
$Q_1\subseteq Q_2$ does not hold.
An important task for query optimisation and static analysis
is the \emph{query containment problem}, defined
as follows:
\begin{Problem}{The QCP for unary $\mDatalog(\tau)$-queries on (un)ordered
$\Sigma$-labeled trees}
\In Two unary $\mDatalog(\tau)$-queries $Q_1$ and $Q_2$.
\Out
\begin{tabular}[t]{lp{7cm}}
\ensuremath{\textnormal{\texttt{Yes}}}, & if $Q_1\subseteq Q_2$,
\\
\ensuremath{\textnormal{\texttt{No}}}, & otherwise.
\end{tabular}
\end{Problem}
\noindent
For \emph{ordered} $\Sigma$-labeled trees, the following is known:
\begin{Theorem}[Gottlob, Koch \cite{GottlobKoch}]\label{QCP:ur} \ \\
The QCP for unary $\mDatalog(\tauGK)$-queries on ordered $\Sigma$-labeled trees is
decidable and $\textnormal{\textsc{Exptime}}$-hard.
\markEnd
\end{Theorem}
Using Corollary~\ref{cor:mDatalogVsMSO-orderedTrees} and the fact that
$\tauGK\subseteq \tau'_o$, this immediately
leads to:
\begin{Theorem}\label{thm:QCP_orderedTrees}
The QCP for unary $\mDatalog(\tau'_o)$-queries on ordered $\Sigma$-labeled trees is
decidable and $\textnormal{\textsc{Exptime}}$-hard.
\markEnd
\end{Theorem}
To obtain decidability also for the case of \emph{unordered}
$\Sigma$-labeled trees, we can use the following result:
\begin{Theorem}[Seese \cite{Seese91}]\label{MSO:SAT}
The problem
\begin{Problem}{Satisfiability of $\MSO(\tau_{u})$-sentences on
unordered $\Sigma$-labeled trees}
\In An $\MSO(\tau_{u})$-sentence $\varphi$.
\item[\textnormal{\textit{Question:}}] Does there exist an unordered $\Sigma$-labeled (finite) tree $T$ such that $\S_u(T) \models \varphi$?
\end{Problem}
is decidable.\markEnd
\end{Theorem}
Combining this with Proposition~\ref{prop:mDatalog2MSO} and
Fact~\ref{fact:MSO-unorderedTrees}, we obtain:
\begin{Theorem} \label{thm:QCP_unordered}
The QCP for unary $\mDatalog(\tau'_u)$-queries on unordered $\Sigma$-labeled trees is decidable.
\end{Theorem}
\begin{proof}
An algorithm for deciding the QCP for unary $\mDatalog(\tau'_u)$-queries on
unordered $\Sigma$-labeled trees can proceed as follows:
On input of two unary $\mDatalog(\tau'_u)$-queries $Q_1$ and $Q_2$, first
use the algorithm from Proposition~\ref{prop:mDatalog2MSO} to
construct two $\MSO(\tau'_u)$-formulas $\varphi_1(x)$ and $\varphi_2(x)$ such that, for
each $i\in\set{1,2}$, the formula $\varphi_i(x)$ defines
the same unary query on $\Sigma$-labeled unordered trees as $Q_i$.
Afterwards, use Fact~\ref{fact:MSO-unorderedTrees} to translate the $\MSO(\tau'_u)$-formulas
$\varphi_1(x)$ and $\varphi_2(x)$ into $\MSO(\tau_u)$-formulas $\psi_1(x)$ and
$\psi_2(x)$, which are equivalent to $\varphi_1(x)$ and $\varphi_2(x)$
on $\Sigma$-labeled unordered trees.
Finally, let
\[
\varphi \quad \deff \quad
\exists x\ \big(\, \psi_1(x) \, \und \, \nicht\psi_2(x)\,\big),
\]
and use the algorithm provided by Theorem~\ref{MSO:SAT} to decide
whether there is an unordered $\Sigma$-labeled tree $T$ such that
$\S_u(T)\models\varphi$.
Output ``\ensuremath{\textnormal{\texttt{No}}}'' if this algorithm outputs ``\ensuremath{\textnormal{\texttt{Yes}}}'', and output ``\ensuremath{\textnormal{\texttt{Yes}}}'' otherwise.
To verify that this algorithm produces the correct answer, note that
for every $\Sigma$-labeled unordered tree $T$, the following
is true:
\[
\begin{array}{ll}
& \S_u(T)\models\varphi
\\
\iff
& \text{there is a node $a$ of $T$ with
$\S'_u(T)\models\psi_1(a)$ and $\S'_u(T)\not\models\psi_2(a)$}
\\
\iff
& \text{there is a node $a$ of $T$ with
$a\in\AF{Q_1}(\S'_u(T))$ and $a\not\in\AF{Q_2}(\S'_u(T))$}
\\
\iff
& \AF{Q_1}(\S'_u(T))\not\subseteq \AF{Q_2}(\S'_u(T)).
\end{array}
\]
Thus, the $\MSO(\tau_u)$-sentence $\varphi$ is satisfiable on
unordered $\Sigma$-labeled trees if, and only if, $Q_1\not\subseteq
Q_2$.
\end{proof}
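The decision procedure above goes through MSO satisfiability and is not something one would reimplement directly; what is easy to sketch is the complementary direction, a bounded search for a containment counterexample over a finite pool of trees. Finding a witness refutes containment, while finding none proves nothing. In the following Python sketch, queries are modeled as plain functions from a tree (child map) to a set of nodes; everything here is a hypothetical testing aid, not the algorithm of the theorem.

```python
def containment_counterexample(q1, q2, trees):
    """Search a finite pool of trees for a node witnessing that q1 is
    not contained in q2; return (tree, node) or None."""
    for t in trees:
        extra = q1(t) - q2(t)
        if extra:
            return t, next(iter(extra))
    return None

# Two toy unary queries on trees encoded as child maps.
all_nodes = lambda t: set(t)
leaves = lambda t: {v for v, cs in t.items() if not cs}

pool = [{"r": set()},                    # single node
        {"r": {"a"}, "a": set()}]        # root with one child

# leaves is contained in all_nodes on the pool (and in fact on all trees) ...
assert containment_counterexample(leaves, all_nodes, pool) is None
# ... but the converse fails as soon as some node is not a leaf.
t, v = containment_counterexample(all_nodes, leaves, pool)
assert v == "r" and len(t) == 2
```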
\subsection{Equivalence for $\mDatalog$ on Trees}
Let $\tau$ be one of the schemas introduced in
Section~\ref{subsection:UnorderedTrees} or
Section~\ref{subsection:OrderedTrees}, and let $\S(T)$ be the corresponding
$\tau$-structure representing the tree $T$.
For two queries $Q_1$ and $Q_2$ of schema $\tau$,
we write $Q_1\equiv Q_2$ (and say that $Q_1$ is equivalent to $Q_2$ on
trees)
to indicate that for every $\Sigma$-labeled tree $T$ we have
$\AF{Q_1}(\S(T))=\AF{Q_2}(\S(T))$.
Accordingly, we write $Q_1\not\equiv Q_2$ to indicate that $Q_1\equiv
Q_2$ does not hold.
We consider the following decision problem.
\begin{Problem}{The Equivalence Problem for unary
$\mDatalog(\tau)$-queries on $\Sigma$-labeled (un)ordered trees}
\In Two unary $\mDatalog(\tau)$-queries $Q_1$ and $Q_2$.
\Out
\begin{tabular}[t]{lp{7.5cm}}
\ensuremath{\textnormal{\texttt{Yes}}}, & if $Q_1\equiv Q_2$,
\\
\ensuremath{\textnormal{\texttt{No}}}, & otherwise.
\end{tabular}
\end{Problem}
By definition, we have $Q_1\equiv Q_2$ if, and only if, $Q_1\subseteq
Q_2$ and $Q_2\subseteq Q_1$. Thus, the decidability of the query
containment problem for $\mDatalog$ stated in Theorem~\ref{thm:QCP_orderedTrees} and
Theorem~\ref{thm:QCP_unordered}, immediately leads to the following.
\begin{Corollary}\label{cor:equivalence} \hspace{4cm}
\begin{mea}
\item
The equivalence problem for unary $\mDatalog(\tau'_o)$-queries on
$\Sigma$-labeled ordered trees is decidable.
\item
The equivalence problem for unary $\mDatalog(\tau'_u)$-queries on
$\Sigma$-labeled unordered trees is decidable.
\markEnd
\end{mea}
\end{Corollary}
\subsection{Satisfiability of $\mDatalog$ on Trees}
Let $\tau$ be one of the schemas introduced in
Section~\ref{subsection:UnorderedTrees} or
Section~\ref{subsection:OrderedTrees}, and let $\S(T)$ be the corresponding
$\tau$-structure representing the tree $T$.
A query $Q$ of schema $\tau$ is called \emph{satisfiable on trees} if there
is a $\Sigma$-labeled (un)ordered tree $T$ such that $\AF{Q}(\S(T))\neq \emptyset$.
\begin{Example}\label{Exa:unsatisQ} There exists a unary $\mDatalog(\tau)$-query
$Q_\textit{unsat}=(\PP_\textit{unsat},P_\textit{unsat})$
which is \emph{not} satisfiable on trees. \\
For example, for $\tau=\tau_u$ the program $\PP_\textit{unsat}$ can be chosen to consist of the
single rule
\[
P_\textit{unsat}(x) \ \ \leftarrow \ \ \Child(x,x)
\]
and for $\tau=\tau_o$ the following rule can be chosen
\[
P_\textit{unsat}(x) \ \ \leftarrow \ \ \Fc(x,x)
\]
since in trees no node can be its own (first) child.
\markEnd
\end{Example}
\noindent
We consider the following decision problem.
\begin{Problem}{The Satisfiability Problem for unary
$\mDatalog(\tau)$-queries on $\Sigma$-labeled (un)ordered trees}
\label{mDatalog:SAT}
\In A unary $\mDatalog(\tau)$-query $Q$.
\Out
\begin{tabular}[t]{lp{8cm}}
\ensuremath{\textnormal{\texttt{Yes}}}, & if $Q$ is satisfiable on trees,
\\
\ensuremath{\textnormal{\texttt{No}}}, & otherwise.
\end{tabular}
\end{Problem}
\medskip
Corollary~\ref{cor:equivalence}, together with
Example~\ref{Exa:unsatisQ}, leads to the following.
\begin{Corollary} \label{cor:SAT}
\begin{mea}
\item The satisfiability problem for unary
$\mDatalog(\tau'_o)$-queries on $\Sigma$-labeled ordered trees is decidable.
\item The satisfiability problem for unary
$\mDatalog(\tau'_u)$-queries on $\Sigma$-labeled unordered trees is decidable.
\end{mea}
\end{Corollary}
\begin{proof}
Let $Q$ be the input query for which we want to decide whether or not
it is satisfiable on trees.
Let $Q_\textit{unsat}$ be the unsatisfiable query from
Example~\ref{Exa:unsatisQ}.
It is straightforward to see that $Q\equiv Q_\textit{unsat}$ if, and
only if, $Q$ is \emph{not} satisfiable on trees.
Thus, we can use the algorithms for deciding equivalence of queries on
trees (provided by Corollary~\ref{cor:equivalence}) to decide whether or not
$Q$ is satisfiable on trees.
\end{proof}
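The reduction in this proof amounts to a single call to an equivalence oracle. As a toy illustration (a Python sketch, not an implementation of the actual decision procedure behind Corollary~\ref{cor:equivalence}), one can model a query extensionally as the set of (tree, node) pairs it selects over some fixed finite universe, with equivalence as set equality:

```python
# Toy illustration of the reduction in the proof: satisfiability is decided
# by one call to an equivalence oracle. A "query" is modeled extensionally
# as a frozenset of (tree, node) pairs it selects; this is a hypothetical
# stand-in for the real mDatalog decision procedure.

def equivalent(Q1, Q2):
    # stand-in equivalence test: extensional equality
    return Q1 == Q2

Q_unsat = frozenset()  # the unsatisfiable query selects nothing on any tree

def satisfiable_on_trees(Q):
    # Q is satisfiable iff it is NOT equivalent to Q_unsat
    return not equivalent(Q, Q_unsat)
```

Since $Q_\textit{unsat}$ selects no node on any tree, the single oracle call reduces satisfiability to equivalence exactly as in the proof.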
\bibliographystyle{amsplain}
\section{Introduction}
From our perspective, a renewal equation (RE) is a delay equation, i.e., a rule for extending a function of time towards the future on the basis of the (assumed to be) known history. The difference between delay differential equations (DDE) and RE is that, for the former, the rule specifies the derivative of the function in the current time point, while for the latter the rule specifies the function value itself.
For both, one defines a dynamical system by translation along the extended function, so by updating the history.
But while for DDE the natural state space consists of continuous functions (with the supremum norm), for RE it is more natural to consider integrable functions (with the $L^1$ norm).
Stability and bifurcation results for RE are formulated in \cite{DGG2007} as corollaries of the general theory presented in \cite{DiekmannBook}.
RE arise routinely in the formulation of physiologically structured population models, see e.g. \cite{DGM2010}. In that context, one would often like to go beyond a pen and paper analysis and perform a numerical bifurcation analysis. But the lack of tools that can handle this kind of delay equations clearly forms an obstruction.
In \cite{SIADS2016} the idea is launched to first reduce the infinite dimensional dynamical system corresponding to a delay equation to a finite dimensional one by pseudospectral approximation, and next use tools for ordinary differential equations (ODE) in order to perform a numerical bifurcation analysis. Several examples illustrate that this approach is promising (also see \cite{EJQTDE2016, Ando2019b, Babette, DCDS2020Vermiglio, JMB2019, Vietnam2020, AMC2018}).
Note that we restrict to \emph{bounded} maximal delay, as unbounded delays require different (exponentially weighted) function spaces and corresponding results from weighted interpolation theory \cite{AMC2018,DG2012}.
A nice feature of pseudospectral approximation is that, in the resulting ODE, one can recognize that the dynamics involves both a rule for extension and translation. The latter is captured by a matrix, often called differentiation matrix, that depends on the choice of mesh points but \emph{not} on the delay equation under consideration.
\fra{In the approximation of a scalar DDE, the rule for extension is reflected in the expression for the derivative of exactly one component, so the nonlinear part of the ODE has one-dimensional range; in the approximation of systems of DDE, the dimension of the range corresponds to the dimension of the system.} The fact that, for RE, the rule for extension does specify the value, rather than the derivative, makes its incorporation in the ODE less straightforward. In \cite{SIADS2016,EJQTDE2016} an ad hoc method was employed: the value in the current time point was computed from the (approximate) history and the right-hand side of the RE by way of a numerical solver. The aim of the present paper is to introduce a much more natural and elegant alternative, which also improves the efficiency of the numerical method.
The main new idea is to approximate the indefinite integral of the integrable function, rather than the integrable function itself. First of all, a reassuring consequence is that now we approximate a function that has well-defined point values (in contrast with an $L^1$ equivalence class).
More importantly: in terms of the integrated function, the original RE becomes a DDE, as the rule for extension specifies its derivative in the current time point.
The difference with a ``true'' DDE is that we have to incorporate a (re)normalization condition in order to have a one-to-one relationship between the integrated function and its derivative. The resulting ODE therefore has a slightly different structure: for a scalar equation the nonlinear part again has one-dimensional range, but in natural coordinates the range is spanned by a different vector.
From a more abstract point of view, we \fra{represent $L^1$ by $AC_0$}, the subspace of $NBV$ (normalized bounded variation functions) consisting of absolutely continuous functions.
This embedding also features in the \emph{sun-star} framework of \cite{DGG2007,DiekmannBook} (where the ``big'' space $NBV$ serves to represent the rule for extension as a perturbation with range spanned by a Dirac mass) and in the more recent theory of twin semigroups \cite{TwinSemigroups}.
The space \fra{$AC_0$} also guarantees the convergence of the approximation: classical results from interpolation theory \cite{Krylov, MastroianniBook, Trefethen2013} ensure the convergence of polynomial interpolation, in supremum norm (and not necessarily in $NBV$ norm), for functions that are at least absolutely continuous, for a suitable choice of the interpolation nodes in the bounded interval.
An important advantage of the current method compared to the approach proposed in \cite{SIADS2016} is the remarkable reduction of computational costs in all the simulations considered here (see for instance Figure \ref{f:specialRE_time} and Table \ref{t:times} below).
The inversion of the nonlinear condition with a numerical solver is indeed the main bottleneck of the method in \cite{SIADS2016}.
The improved computational efficiency is fundamental especially when dealing with complex applications from population dynamics, such as the coupled RE/DDE models for \emph{Daphnia} \cite{DGM2010}, which are particularly challenging to treat numerically and often need ad hoc techniques \cite{Vietnam2020, Ando2020a, DCDS2020Ando, PSPManalysis2020}.
\smallskip
In this paper we focus on the approximation of equilibria and their stability.
We start in the next section by providing some concrete examples that illustrate the main features and potential of the approximation approach.
The latter is introduced rigorously in Section \ref{s:method} for general scalar nonlinear RE, together with some basic results regarding the approximation of equilibria.
Section \ref{s:linear_conv} focuses on autonomous linear equations: we show that characteristic roots and exponential solutions are approximated with infinite order of convergence as the dimension of the approximation increases.
An outlook discussion is presented in Section \ref{s:outlook}.
In population models the RE usually concerns the population level birth rate and for that reason we chose to use the character $b$ to denote the variable. The integrated quantity corresponds to the cumulative birth rate and is denoted by $B$.
\section{Some illustrative examples}
\label{s:examples}
In this section we consider some specific nonlinear RE.
All the equations are approximated with an ODE system using the method introduced rigorously in Section \ref{s:method}. The dimension $M$ of the approximating ODE system is specified each time.
The bifurcation diagram of the approximating system is then studied numerically using software for numerical bifurcation analysis of ODE.
Specifically we use the package MatCont (version 7p1) \cite{Matcont0} running on MATLAB 2019a.
To improve efficiency, the integrals are computed using Clenshaw--Curtis quadrature formulas \cite{Trefethen2000}.
\fra{MATLAB codes used to obtain the results in this paper are available at \href{http://cdlab.uniud.it/software}{\path{http://cdlab.uniud.it/software}}.
}
We have three main goals:
1) show the suitability of the approach (ODE approximation plus software for numerical bifurcation) to reveal bifurcations of equilibria and to study some more advanced dynamical behaviors; 2) carry out a preliminary study of the convergence of the approximations;
3) compare the performances and the output of the method presented here with the one proposed in \cite{SIADS2016}.
\subsection{An SIRS model}
Let $k \colon \mathbb{R} \to \mathbb{R}$ be a nonnegative and measurable function with support in $[0,1]$, and normalized such that $\int_0^1 k(s)\mathop{}\!\mathrm{d} s=1$.
Consider the nonlinear equation
\begin{equation} \label{prelude}
b(t) = \gamma \left(1-\int_0^1 b(t-s)\mathop{}\!\mathrm{d} s \right) \int_0^1 k(s) b(t-s) \mathop{}\!\mathrm{d} s, \qquad t>0,
\end{equation}
for \fra{$\gamma > 0$}.
In \cite{Diekmann1982prelude}, the authors derive this equation in the context of an SIRS epidemic model, and study the bifurcation with respect to $\gamma$. It is proved that the trivial equilibrium undergoes a transcritical bifurcation at $\gamma=1$, and a positive stable equilibrium exists for $\gamma>1$.
Under some conditions on the kernel $k$, the positive equilibrium undergoes a sequence of Hopf bifurcations as $\gamma$ increases.
The authors also conjecture that \eqref{prelude} may exhibit chaotic behavior for large values of $\gamma$.
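At a constant solution $\bar b$, both integrals in \eqref{prelude} reduce to $\bar b$ (recall that $\int_0^1 k(s)\mathop{}\!\mathrm{d} s = 1$), so the fixed-point condition reads $\bar b = \gamma(1-\bar b)\bar b$, giving the positive equilibrium $\bar b = 1 - 1/\gamma$ for $\gamma > 1$. A quick numerical sanity check (a Python sketch; the parameter value is our choice, not from the original analysis):

```python
def rhs_at_constant(gamma, b):
    # At a constant history, both integrals in the SIRS equation equal b
    # (the kernel k integrates to 1), so F(b) = gamma * (1 - b) * b.
    return gamma * (1.0 - b) * b

gamma = 5.0                        # sample value with gamma > 1
b_eq = 1.0 - 1.0 / gamma           # positive equilibrium
residual = rhs_at_constant(gamma, b_eq) - b_eq   # should vanish
```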
We here consider a truncated Gamma-type kernel of the form
\begin{equation} \label{gamma_kernel}
k(s) =
\begin{cases}
\alpha s^{m-1} \ee^{-\frac{s}{\theta}} & s \leq 1, \\
0 & s > 1,
\end{cases}
\end{equation}
for fixed parameters $m=3$, $\theta=0.1$, and $\alpha$ a normalising constant so that $\int_0^1 k(s) \mathop{}\!\mathrm{d} s=1$.
\fra{The kernel \eqref{gamma_kernel} belongs to the class considered in \cite{Diekmann1982prelude} due to the cut-off at $s=1$.}
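The normalising constant $\alpha$ is determined by a one-dimensional quadrature; a small Python sketch (composite Simpson rule; the grid size is our choice):

```python
import math

m, theta = 3, 0.1

def unnormalized(s):
    # Gamma-type density before normalization, truncated at s = 1
    return s**(m - 1) * math.exp(-s / theta) if s <= 1 else 0.0

def simpson(f, a, b, n=2000):
    # composite Simpson rule; n must be even
    h = (b - a) / n
    acc = f(a) + f(b)
    for i in range(1, n):
        acc += (4 if i % 2 else 2) * f(a + i * h)
    return acc * h / 3.0

alpha = 1.0 / simpson(unnormalized, 0.0, 1.0)    # so that k integrates to 1
check = alpha * simpson(unnormalized, 0.0, 1.0)  # should equal 1
```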
The bifurcation diagram with respect to $\log \gamma$ is shown in Figure \ref{f:prelude_gamma_bif} for different values of the dimension $M$ of the approximating system.
A Hopf bifurcation is detected on the branch of positive equilibria, after which the equilibrium becomes unstable and a branch of stable periodic solutions appears.
Note that the minimum values of the periodic orbits come close to zero when $\log \gamma$ approaches 3. When this happens, $M = 10$ is not sufficient to approximate the orbit, but for $M=20,40$ the lower portions of the curves are indistinguishable.
\begin{figure}[t]
\centering
\includegraphics[scale=1]{prelude_gamma_bif}
\caption{Equation \eqref{prelude} with kernel \eqref{gamma_kernel}, with $m=3$ and $\theta=0.1$. Bifurcation diagram with respect to $\log \gamma$, for different values of $M$. A Hopf bifurcation is detected at $\log\gamma \approx 1.6553$. Equilibrium branch and the maximum and minimum values of the periodic orbits are plotted; solid lines correspond to stable branches, dashed lines to unstable ones.
}
\label{f:prelude_gamma_bif}
\end{figure}
\subsection{Nicholson's blowflies equation}
Consider Nicholson's blowflies DDE \cite{Gurney1980}
\begin{equation} \label{Nich-DDE}
A'(t) = -\mu A(t) + \beta_0 \ee^{-\mu} A(t-1) \ee^{-A(t-1)}, \qquad t>0,
\end{equation}
for $\beta_0,\mu \geq 0$, where $A(t)$ denotes the size of the adult population, and newborns become adult after a maturation delay which is normalized to 1.
For $b(t):= \beta_0 A(t) \ee^{-A(t)},$ \eqref{Nich-DDE} is equivalent to
\begin{equation}\label{Nich-RE}
b(t) = \beta_0 \ee^{-\int_1^{\infty} b(t-s) \ee^{-\mu s} \mathop{}\!\mathrm{d} s} \, \int_1^{\infty} b(t-s) \ee^{-\mu s} \mathop{}\!\mathrm{d} s, \qquad t>0.
\end{equation}
The equivalence is rigorously proved in \ref{app:blowflies}; however note that, since $b(t)$ represents the population birth rate, \eqref{Nich-RE} follows directly from modelling assumptions and there is no need to derive it from the DDE.
It is easy to see that the (unique) nontrivial equilibrium of \eqref{Nich-DDE} exists for $\beta_0 > \mu \ee^{\mu}$, with value
$$ \overline A = \log \frac{\beta_0}{\mu} - \mu.$$
Correspondingly, the equilibrium of \eqref{Nich-RE} is $\overline b = (\log \frac{\beta_0}{\mu} - \mu) \mu \ee^{\mu}$.
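The two expressions are consistent: at the constant solution, the integral in \eqref{Nich-RE} equals $I := \bar b\,\ee^{-\mu}/\mu$, and the fixed-point condition $\bar b = \beta_0\,\ee^{-I} I$ holds precisely when $I = \overline A$. A quick numerical check (a Python sketch; the parameter values are our choice):

```python
import math

mu, beta0 = 1.0, 10.0                 # sample parameters with beta0 > mu * e^mu

A_eq = math.log(beta0 / mu) - mu      # equilibrium of the DDE
b_eq = A_eq * mu * math.exp(mu)       # corresponding equilibrium of the RE

# At the constant history b_eq:
#   int_1^inf b_eq * exp(-mu*s) ds = b_eq * exp(-mu) / mu
I = b_eq * math.exp(-mu) / mu
residual = beta0 * math.exp(-I) * I - b_eq   # should vanish
```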
The nontrivial equilibrium undergoes a Hopf bifurcation as $\beta_0$ increases.
An explicit parametrization of the Hopf bifurcation curves in the plane $(\mu,\beta_0 \ee^{-\mu}/\mu)$ is computed in \cite{Babette} for the DDE \eqref{Nich-DDE}.
Moreover, the numerical bifurcation analysis of \eqref{Nich-DDE} can be performed with standard software for DDE, like DDE-BIFTOOL for MATLAB \cite{ddebiftool0, ddebiftool}.
Therefore, the blowflies equation allows us to compare the output of MatCont computations on the pseudospectral approximation of \eqref{Nich-RE} with analytic formulas for the Hopf bifurcation curves and with the output of DDE-BIFTOOL for bifurcations of periodic solutions of \eqref{Nich-DDE}.
In order to apply the method presented here, we truncated the integral in \eqref{Nich-RE} so that the probability of survival at a finite maximal time $\tau$ is less than a certain threshold. We chose $\tau=10$, as it ensures that the survival probability $\ee^{-\mu \tau}$ is less than $10^{-4}$ for $\mu = 1$, less than $10^{-8}$ for $\mu=2$, and less than $10^{-17}$ for $\mu=4$.
Figure \ref{f:Nich-fixed-tau} shows the Hopf bifurcation curve approximated with different values of $M$, for fixed $\tau=10$, and compared with the analytic curve \fra{obtained for the DDE}. The curves in a two-parameter plane were obtained by first performing a one-parameter continuation for $\mu=4$, and then starting the two-parameter continuation from the detected Hopf bifurcation. Note that no Hopf bifurcation was detected for $M\leq 8$.
While in Figure \ref{f:Nich-fixed-tau} we fixed the delay interval $[-\tau,0]$ and varied the dimension $M$ of the approximation, Figure \ref{f:Nich-fixed-M} shows the effect of changing the truncation delay $\tau$ while fixing $M$. The results highlight the delicate balance between the choice of the truncation delay and the degree of \fra{the} approximation: for a good approximation, a larger interval requires a larger dimension of the approximating system. This is particularly evident in the analysis of complex bifurcations, for instance the period doubling bifurcation of periodic orbits, and demonstrates the need to develop an approximation specifically tailored to unbounded intervals.
\begin{figure}
\centering
\includegraphics[scale=1]{Nich_fixed_tau_regions}
\caption{Nicholson's blowflies model \eqref{Nich-RE}: Hopf bifurcation curve in the plane $(\mu,\beta_0 \ee^{-\mu}/\mu)$ approximated with MatCont, for fixed $\tau=10$ and different values of $M$. Note that the Hopf bifurcation curves for $M=20, 40$ are indistinguishable from each other and \fra{lie on top of the reference Hopf bifurcation curve calculated analytically in \cite{Babette}}. No Hopf bifurcation was detected using $M\leq 8$.}
\label{f:Nich-fixed-tau}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=1]{Nich_fixed_M_regions}
\caption{Nicholson's blowflies model \eqref{Nich-RE}: Hopf and period doubling bifurcation curves in the plane $(\mu,\beta_0 \ee^{-\mu}/\mu)$ approximated with MatCont, for fixed $M=20$ and different values of $\tau$. The reference line for the Hopf bifurcation curve is calculated analytically in \cite{Babette}; the reference line for the period doubling bifurcation is approximated with DDE-BIFTOOL on the DDE formulation \eqref{Nich-DDE} of the model.}
\label{f:Nich-fixed-M}
\end{figure}
\subsection{A cannibalism equation}
\begin{figure}[p]
\centering
\includegraphics[width=.48\textwidth]{specialRE_20_bif_full}\quad
\includegraphics[width=.48\textwidth]{specialRE_20_regions}
\caption{Equation \eqref{specialRE}. Left: bifurcation diagram computed with $\tau=3$ and $M=20$ (equilibria and max/min values of the periodic solutions); solid and dashed curves are used to distinguish stable and unstable elements; note the Hopf bifurcation at $\log \gamma \approx 2.5708$ and the period doubling bifurcation at $\log \gamma \approx 3.8777$.
Right: stability curves in the plane $(\log\gamma,\tau)$, supercritical (black solid) and subcritical Hopf (black dotted), and period doubling (gray dashed); note the generalized Hopf point (GH) that characterizes the change in criticality.
The period doubling curve in the right panel was approximated by performing a sequence of one-parameter continuations with respect to $\log\gamma$, each one for a fixed $\tau$;
we were however not able to complete the curve numerically using MatCont.
}
\label{f:specialRE_bif}
\end{figure}
\begin{figure}[p]
\centering
\includegraphics[width=.95\textwidth]{specialRE_time_zeros}
\caption{
Equation \eqref{specialRE}. Computation time (seconds) for one evaluation of the right-hand side of the approximating system (left), and for a $50$-point continuation along the branch of positive equilibria (right), varying $M$.
Comparison between the method in \cite{SIADS2016} (gray $\circ$) and the method proposed here (black $\bullet$).
}
\label{f:specialRE_time}
\end{figure}
\begin{table}[p]
\centering
\caption{Equation \eqref{specialRE}. Computation time for one evaluation of the right-hand side (seconds $\times 10^{-4}$) of the approximating system and for a $50$-point continuation along the branch of positive equilibria (seconds), performed with the \fra{method in \cite{SIADS2016} and with the current method}, and ratio between the two.}
\footnotesize
\begin{tabular}{cccc|cccc}
\toprule
\multicolumn{4}{c|}{\textbf{RHS evaluation}} & \multicolumn{4}{c}{\textbf{Equilibrium continuation}} \\
$M$ & Method \cite{SIADS2016} & Current method & Ratio
& $M$ & Method \cite{SIADS2016} & Current method & Ratio \\
\midrule
$15$ & 40.14 & 5.65 & 7.10
& $15$ & 8.99 & 0.76 & 11.83 \\
$16$ & 43.61 & 6.17 & 7.07
& $16$ & 9.52 & 0.73 & 13.04 \\
$17$ & 61.66 & 6.38 & 8.05
& $17$ & 10.28 & 0.93 & 11.05 \\
$18$ & 49.17 & 5.13 & 9.58
& $18$ & 10.80 & 0.89 & 12.13 \\
$19$ & 47.53 & 4.41 & 10.71
& $19$ & 11.29 & 0.97 & 11.64 \\
$20$ & 48.77 & 4.75 & 10.27
& $20$ & 11.73 & 1.10 & 10.66 \\
\bottomrule
\end{tabular}
\label{t:times}
\end{table}
Consider the equation
\begin{equation} \label{specialRE}
b(t) = \frac{\gamma}{2} \int_{1}^{\tau} b(t-s) \ee^{-b(t-s)} \mathop{}\!\mathrm{d} s, \quad t \geq 0,
\end{equation}
for \fra{$\gamma > 0$} and $\tau>1$, modelling a cannibalism phenomenon in an age-structured population \cite{Breda2013}.
The bifurcation properties of \eqref{specialRE} were recently studied numerically in \cite{EJQTDE2016} using the method \cite{SIADS2016}.
The bifurcation diagram with respect to $\log \gamma$ for fixed $\tau=3$ is plotted in Figure \ref{f:specialRE_bif} (left), obtained with an approximating ODE system of dimension $M=20$.
The nontrivial equilibrium branch undergoes a Hopf bifurcation at $\log \gamma \approx 2.5708$, after which a stable branch of periodic solutions emerges.
The onset of a sequence of period doubling bifurcations is detected on the branch of periodic solutions, with the first period doubling bifurcation at $\log\gamma\approx 3.8777$.
Figure \ref{f:specialRE_bif} (right) shows the bifurcation curves in the plane $(\log\gamma,\tau)$.
Note that, using a larger discretization index $M=20$, we could approximate the period doubling point with better accuracy than in \cite{EJQTDE2016}.
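For reference, at a constant solution the integral in \eqref{specialRE} equals $(\tau-1)\,\bar b\,\ee^{-\bar b}$, so the nontrivial equilibrium is $\bar b = \log\left(\gamma(\tau-1)/2\right)$, which for $\tau = 3$ reduces to $\bar b = \log\gamma$. A quick check (Python sketch; sample values are our choice):

```python
import math

gamma, tau = math.exp(3.0), 3.0            # log(gamma) = 3, tau = 3

b_eq = math.log(gamma * (tau - 1) / 2.0)   # nontrivial equilibrium; here = log(gamma)

# Right-hand side of the cannibalism RE at the constant history b_eq:
residual = (gamma / 2.0) * (tau - 1.0) * b_eq * math.exp(-b_eq) - b_eq
```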
We used this example for some basic performance comparisons in terms of computational costs.
Figure \ref{f:specialRE_time} shows the computation times of the two approaches (current and \cite{SIADS2016}) for a single evaluation of the right-hand side of the approximating system, and for performing a $50$-point numerical continuation along the branch of positive equilibria using MatCont.
Note the remarkable improvement with the current method, which reduces computation times by approximately a factor 10, see also Table \ref{t:times}.
\section{Nonlinear renewal equations} \label{s:method}
In this section we present the approximation technique for a general scalar \fra{(possibly nonlinear)} RE.
Let $\tau>0$ denote the maximal delay.
We will work with real-valued functions defined on the domain $[-\tau,0]$, so, for simplicity of notation, we will omit the domain when there is no confusion, writing for instance $L^1$ instead of $L^1([-\tau,0],\mathbb{R})$.
Consider the RE
\begin{equation} \label{RE_b}
b(t) = F(b_t), \qquad t > 0,
\end{equation}
where $F \colon L^1 \to \mathbb{R}$ is a
\fra{globally Lipschitz continuous function}, and $b_t \in L^1$ denotes the history or state function
$$ b_t(\theta) := b(t+\theta), \qquad \theta \in [-\tau,0].$$
Equation \eqref{RE_b} is provided with the initial condition
\begin{equation} \label{IC_b}
b(\theta)=\phi(\theta), \qquad \theta \in [-\tau,0],
\end{equation}
for $\phi\in L^1$.
Under these assumptions, the initial value problem \eqref{RE_b}\&\eqref{IC_b} has a unique (global) solution in $[-\tau,+\infty)$,
which is continuous in $(0, +\infty)$ \cite[Theorem 3.8]{DGG2007}. In view of the analysis of the numerical approach, it is worthwhile
to remark that $b_t \in C([-\tau,0],\mathbb{R})$ for $t >\tau.$
To efficiently apply the pseudospectral approach and approximately describe the qualitative behavior of the solutions of \eqref{RE_b}, in terms of stability and bifurcations, we want to associate with it an infinite-dimensional dynamical system where the rule for extension is represented explicitly, acting in a state space where point evaluation is well defined.
For this purpose we define
\begin{equation}\label{B}
B(t) := \int_0^t b(s) \mathop{}\!\mathrm{d} s, \qquad t \geq -\tau,
\end{equation}
and, \fra{for all $t\geq 0$,}
\begin{equation} \label{v}
v(t)(\theta) := B(t+\theta)-B(t),\qquad \theta \in [-\tau,0],
\end{equation}
so that we get for all $t \geq 0$
\begin{equation} \label{dt}
\frac{\mathop{}\!\mathrm{d}}{\mathop{}\!\mathrm{d} t} v(t)(\theta) = b_t(\theta)-b_t(0)
\end{equation}
and
\begin{equation} \label{dtheta}
\frac{\mathop{}\!\mathrm{d}}{\mathop{}\!\mathrm{d} \theta} v(t) = b_t.
\end{equation}
\fra{We stress that \eqref{dtheta} should be interpreted as follows: presuming that $v(t)$ is an absolutely continuous function of $\theta$, it is almost everywhere differentiable and the derivative is Lebesgue integrable, so it defines an element of $L^1$, i.e., an equivalence class, denoted by $b_t$.
}
To derive an abstract differential equation from \eqref{dt}\&\eqref{dtheta}, inspired by the nonlinear theory in \cite{DGG2007} and by the theoretical framework in \cite{TwinSemigroups}, we introduce the space \fra{$NBV = NBV([-\tau,0],\mathbb{R})$ defined as}
\begin{align*}
NBV := \{ &\psi\in BV \fra{([-\tau,0],\mathbb{R})} \colon \psi(0)=0 \\
&\text{ and $\psi$ is continuous from the right on the open interval } (-\tau,0) \}
\end{align*}
and we \fra{represent $L^1$ by} the subspace $AC_0$ of the absolutely continuous functions in $NBV$, i.e.,
$$ \fra{AC_0} = \{ \psi \in AC \colon \psi(0)=0\},$$
which is a Banach space with norm
$$ \|\psi\|_{NBV} = \|\psi'\|_{L^1} = \int_{-\tau}^0 |\psi'(\theta)| \mathop{}\!\mathrm{d} \theta, \qquad \psi \in \fra{AC_0}.$$
The operator
\begin{equation*}
(V \phi)(\theta) = -\int_\theta^0 \phi(s) \mathop{}\!\mathrm{d} s, \qquad \phi \in L^1,\ \theta \in [-\tau,0],
\end{equation*}
maps an element of $L^1$ (i.e., an equivalence class) to a function in \fra{$AC_0$}, defining an embedding of $L^1$ into $NBV$.
Although we defined $V$ with domain in $L^1$, in the following we will also consider the restrictions $V\big|_{NBV}$ and $V\big|_C$ to the spaces $NBV$ and $C$. We will sometimes omit the explicit restriction and use simply the notation $V$, since the appropriate domain should be clear from the context.
\fra{We can reformulate \eqref{B} and \eqref{v} as}
\begin{equation} \label{Vbt}
v(t) = V b_t.
\end{equation}
Note that for the constant function $\bar{b}$ we get \fra{$v(t)=\bar{v}$ with} $\bar{v}(\theta)= V \bar{b}(\theta) =\theta \bar{b}$, $\theta \in [-\tau,0],$ while $v(t) = V b_t$ is periodic when $b(t)$ is a periodic function.
\medskip
We now define the operator $C_0 \colon D(C_0) (\subset NBV) \to NBV$ as $C_0:= (V\big|_{NBV})^{-1}$.
That is, we define
$$ D(C_0) =
\{ \psi \in \fra{AC_0} \colon \psi = V\phi \text{ for some } \phi \in NBV \}
$$
and, for $\psi \in D(C_0)$, we take
$$C_0 \psi = \{ \phi \colon \psi = V \phi\}.$$
The density of \fra{(the embedding of) $NBV$ in $L^1$ (defined by the identity, i.e., by associating to any $NBV$ element the equivalence class of functions that are almost everywhere equal to it)\footnote{For a proof of the density of (the embedding of) $NBV$ in $L^1$, consider $f \in L^1([-\tau,0])$, extended by 0 outside the interval $[-\tau,0]$. Let $g$ be defined by $g(\theta):= -\int_{\theta}^0 f(\sigma)\mathop{}\!\mathrm{d} \sigma$, and, for $t>0$, let $h_t$ be defined by $h_t(\theta):= \frac{1}{t}[g(t+\theta)-g(\theta)]$. Then $g,h_t \in AC_0 \subset NBV$. From \cite[Appendix II, Theorem 2.3]{DiekmannBook}, we have $\| h_t - g'\|_{L^1} \to 0$ as $t\to 0^+$, with $g'=f$.}
}
implies that $\fra{AC_0}=\overline{D(C_0)}$.
Note that $C_0$ is multivalued, since functions that differ only by a jump in $\theta=-\tau$ are mapped by $V$ to the same element of $NBV$.
We could eliminate this ambiguity by considering the restriction of $V$ to $NBV_0$, defined as the subspace of $NBV$ consisting of functions that are continuous in $-\tau$. In this way, the inverse $(V\big|_{NBV_0})^{-1}$ is single-valued. Note that $AC_0 \subset NBV_0$, and $NBV_0$ naturally arises in the sun-star approach, as explained in \cite{DGG2007}. This restriction is not necessary for the numerical approach proposed in this manuscript, so we will keep working with the larger space $NBV$.
\fra{For $v(t) \in D(C_0)$, the inverse relation \eqref{dtheta} can now be written as $b_t =$ ``$C_0 v(t)$'', where ``$\cdot$'' indicates that we take the equivalence class in $L^1$, to which all elements in $C_0 v(t)$ belong; however, the request that $v(t)$ belongs to $D(C_0)$ is stronger than needed, as \eqref{dtheta} is well defined also when $v(t) \in AC_0$.}
When we focus on restrictions to $NBV_0$, $C_0$ is the generator (in weak$^*$ sense) \cite{DiekmannBook} of the semigroup $\{S_0(t)\}_{t \geq 0}$ defined on $NBV$ by
$$ (S_0(t)\psi)(\theta) =
\begin{cases}
\psi(t+\theta), & t+\theta \leq 0, \\
0, & t+\theta>0.
\end{cases}
$$
The semigroup $\{S_0(t)\}_{t\geq 0}$ is not strongly continuous on $NBV$, but it is strongly continuous on the subspace $AC_0$.
To interpret the rule for extension \eqref{RE_b} as perturbation of the translation semigroup $\{S_0(t)\}_{t\geq 0}$, in analogy with the nonlinear theory treated in \cite{DGG2007} and the linear perturbation theory developed in \cite{TwinSemigroups}, we define $q \in NBV$ by
\begin{equation} \label{q}
q(\theta) = \begin{cases}
0, & \theta =0, \\
-1, & \theta \in [-\tau,0),
\end{cases}
\end{equation}
i.e., $q$ is the Heaviside function that represents the Dirac measure in $\theta=0$. Note that $q \notin AC_0$ because of the discontinuity in $0.$
\medskip
For $\psi:=V\phi \in AC_0$, \eqref{dt}\&\eqref{dtheta}\&\eqref{RE_b}
is formally equivalent to the abstract Cauchy problem for $v(t) \in AC_0$
\begin{align}
\frac{\mathop{}\!\mathrm{d} v(t)}{\mathop{}\!\mathrm{d} t} &= C_0 v(t) + q F(C_0 v(t)), \qquad t\geq 0, \label{ADE} \\
v(0) &= \psi, \label{IC_ADE}
\end{align}
in a sense that is made precise, for the linear case, in \cite{TwinSemigroups}.
In particular, given a (classical) solution $v$ of \eqref{ADE}\&\eqref{IC_ADE}, we reconstruct the solution of \eqref{RE_b}\&\eqref{IC_b} by defining
$b(t)=F(C_0 v(t))$ for $t>0$, \fra{and \eqref{Vbt} holds}.
It is then clear that the linear operator $C_0$ captures the translation, while the rule for extension is represented explicitly by the nonlinear perturbation $q F \circ C_0$ (where $\circ$ denotes composition).
\subsection{Reduction to ODE via pseudospectral discretization}
To approximate the solution $v(t)$ of \eqref{ADE}\&\eqref{IC_ADE}, we fix a \emph{discretization index} $M\in \mathbb{N}$ and consider a set of points $\Theta_M =\{\theta_1,\dots,\theta_M\} \subset [-\tau,0)$ and $\theta_0=0$, with
$$-\tau \leq \theta_M < \cdots < \theta_1 < \theta_0=0.$$
In the numerical simulations we consider the \emph{Chebyshev zeros}, i.e., the roots of the Chebyshev polynomial of the first kind of degree $M$ \cite{Xu2016}, transformed to the interval $(-\tau,0)$, which are given explicitly by
\begin{equation}\label{cheb_zeros}
\theta_j = \frac{\tau}{2} \left( \cos \left( \frac{2j-1}{2M}\pi \right) - 1 \right), \qquad j=1,\dots,M.
\end{equation}
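The nodes \eqref{cheb_zeros} are straightforward to generate; the following Python sketch (illustrative, not part of the authors' MATLAB codes) also verifies that they are the roots of the degree-$M$ Chebyshev polynomial $T_M(x) = \cos(M \arccos x)$ mapped to $(-\tau,0)$:

```python
import math

def cheb_zeros(M, tau):
    # Chebyshev zeros on (-1, 1), shifted to the interval (-tau, 0);
    # ordered so that -tau < theta_M < ... < theta_1 < 0.
    return [tau / 2.0 * (math.cos((2 * j - 1) * math.pi / (2 * M)) - 1.0)
            for j in range(1, M + 1)]

M, tau = 10, 3.0
nodes = cheb_zeros(M, tau)

# Map each node back to (-1, 1) and evaluate T_M: the result must vanish.
resid = max(abs(math.cos(M * math.acos(2.0 * th / tau + 1.0))) for th in nodes)
```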
Let $\ell_j$, $j=0,\dots,M$, be the Lagrange polynomials associated with $\{\theta_0=0\} \cup \Theta_M$, which are defined by
$$ \ell_j(\theta) := \prod_{\substack{k=0\\ k\neq j}}^M \frac{\theta-\theta_k}{\theta_j - \theta_k}, \qquad \theta \in [-\tau,0].$$
We recall that $\ell_j(\theta_k)=\delta_{jk}$, where $\delta_{jk}$ denotes the Kronecker symbol, and that, for all $\theta \in [-\tau,0]$, the Lagrange polynomials have the properties
\begin{align*}
\sum_{j=0}^M \ell_j(\theta) =1, \qquad
\sum_{j=0}^M \ell_j'(\theta)=0,
\end{align*}
and form a basis of the space of polynomials of degree less than or equal to $M$.
Using the Lagrange basis, the unique $M$-degree polynomial $p$ interpolating a function $\varphi$ with well-defined point values on the nodes $\{0\} \cup \Theta_M$ can be expressed by
$$ p(\theta) = \sum_{j=0}^M \varphi(\theta_j) \ell_j(\theta), \qquad \theta\in [-\tau,0].$$
To obtain a finite dimensional approximation of \eqref{ADE} we need to approximate the operator $C_0$ and the function $F$.
To do this, we introduce $P_M \colon \mathbb{R}^M \to NBV$, the interpolation operator on $\{0\} \cup \Theta_M$ with value zero in $\theta_0=0$, and $R_M \colon NBV \to \mathbb{R}^M$, the restriction on $\Theta_M$. More precisely,
\begin{align}
P_M y &:= \sum_{j=1}^M y_j \ell_j, \qquad y \in \mathbb{R}^M, \label{defP}\\
(R_M \varphi)_j &:= \varphi(\theta_j), \qquad \varphi \in NBV,\ j=1,\dots,M. \notag
\end{align}
The operator $C_0$ is approximated by pseudospectral differentiation, i.e., by the finite dimensional operator $D_M \colon \mathbb{R}^M \to \mathbb{R}^M$ defined by
$$ D_M := R_M C_0 P_M.$$
At this point, the choice of a mesh like \eqref{cheb_zeros}, which does not include the point $-\tau$, is even better justified, as the multi-valuedness of $C_0$ disappears in the pseudospectral approximation. If the point $-\tau$ belongs to $\Theta_M$, then we should define $C_0$ as the (single-valued) inverse of the restriction of $V$ on $NBV_0$, so that $D_M$ is well defined.
To write explicitly the operator $D_M$, we consider a vector $y=(y_1,\dots,y_M) \in \mathbb{R}^M$. Then $P_M y$ is given by \eqref{defP} and, for all $k=1,\dots,M$,
\begin{align*}
(D_M y)_k &= (C_0 P_M y)(\theta_k) \\
&= \sum_{j=1}^M y_j \ell_j'(\theta_k).
\end{align*}
The entries of the matrix $D_M$ are therefore given explicitly by
$$ (D_M)_{kj} := \ell_j'(\theta_k), \qquad k,j=1,\dots,M.$$
Note that the elements $(D_M)_{kj}$ are entries of the differentiation matrix associated with the mesh $\{0\} \cup \Theta_M$ \cite{Trefethen2000}.
More precisely, $D_M$ is the submatrix of the differentiation matrix accounting for the fact that functions are zero in zero.
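The matrix $D_M$ can be assembled directly from the barycentric form of $\ell_j'(\theta_k)$, using the standard ``negative sum'' trick for the diagonal entries. A Python sketch (illustrative, not the authors' implementation), checked on $\psi(\theta) = \theta^2$, which vanishes at $0$ and is differentiated exactly since its degree does not exceed $M$:

```python
import math

def diff_matrix_sub(M, tau):
    # Nodes: theta_0 = 0 together with the Chebyshev zeros on (-tau, 0).
    pts = [0.0] + [tau / 2.0 * (math.cos((2 * j - 1) * math.pi / (2 * M)) - 1.0)
                   for j in range(1, M + 1)]
    n = len(pts)
    # Barycentric weights w_j = 1 / prod_{m != j} (theta_j - theta_m)
    w = [1.0] * n
    for j in range(n):
        for m in range(n):
            if m != j:
                w[j] /= (pts[j] - pts[m])
    # Full differentiation matrix: D[k][j] = l_j'(theta_k)
    D = [[0.0] * n for _ in range(n)]
    for k in range(n):
        for j in range(n):
            if j != k:
                D[k][j] = (w[j] / w[k]) / (pts[k] - pts[j])
        # rows sum to zero (derivative of the constant 1 vanishes)
        D[k][k] = -sum(D[k][j] for j in range(n) if j != k)
    # D_M: drop the row and column of theta_0 = 0 (functions vanish there)
    return pts[1:], [row[1:] for row in D[1:]]

theta, DM = diff_matrix_sub(10, 3.0)

# psi(theta) = theta^2 vanishes at 0; its derivative at the nodes is 2*theta_k.
y = [t * t for t in theta]
dy = [sum(DM[k][j] * y[j] for j in range(len(y))) for k in range(len(y))]
err = max(abs(dy[k] - 2.0 * theta[k]) for k in range(len(theta)))
```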
\medskip
To approximate the perturbation $qF\circ C_0$, which captures the rule for extension, we let $\mathbf{1} \in \mathbb{R}^M$ denote the vector with all the entries equal to $1$ \fra{(note that the vector $-\mathbf{1}$ is the discretization of $q$)}, and introduce $F_M \colon \mathbb{R}^M \to \mathbb{R}$ by
$$ F_M := F \circ C_0 P_M.$$
Using \eqref{defP}, for $y \in \mathbb{R}^M$ we have
$$ F_M(y) = F(\sum_{j=1}^M \ell_j' y_j).$$
Then, $qF\circ C_0$ is approximated by $-\mathbf{1} F_M$.
\medskip
Putting everything together, we obtain the following finite dimensional ODE system
\begin{equation} \label{ODEM}
\frac{\mathop{}\!\mathrm{d} x}{\mathop{}\!\mathrm{d} t} = D_M x - F_M (x) \mathbf{1},
\end{equation}
for $x(t) \in \mathbb{R}^M$, $t \geq 0.$ By construction, the $(M-1)$-degree polynomial $C_0 P_Mx(t)= \sum_{j=1}^M \ell_j' x_j (t)$ furnishes an approximation of $b_t$, while $F_M (x(t))$ approximates $b(t)$ in \eqref{RE_b}. The computational advantage of \eqref{ODEM} with respect to the ODE system derived in \cite{SIADS2016} is evident: the nonlinear contribution appears explicitly as a perturbation of the linear system, and there is no need to solve a nonlinear equation to impose the domain condition.
The dynamics of \eqref{ODEM} can be investigated efficiently by using available software for numerical continuation and bifurcation study. It is therefore important to understand to which extent \eqref{ODEM} mimics the dynamics of the original infinite dimensional dynamical system described by the RE \eqref{RE_b}. Here we examine the equilibria and their stability properties.
\subsection{Correspondence of equilibria and linearized equations}
In what follows we assume that the operator $F$ in \eqref{RE_b} is continuously Fr\'echet differentiable and we focus on constant solutions and the corresponding linearized equations.
We slightly abuse notation by using the symbol $\overline b$ to represent both a real number and the constant function (defined on $[-\tau,0]$) taking that number as its one and only value.
\begin{theorem}
The equilibria of \eqref{RE_b} and \eqref{ODEM} are in one-to-one correspondence.
More precisely, if $\overline{b}=F(\overline{b})$, then $\overline{x} \in \mathbb{R}^M$ with
\begin{equation}\label{equil}
\overline{x}_j = \overline b \, \theta_j, \qquad j=1,\dots,M,
\end{equation}
satisfies $D_M \overline{x}- F_M(\overline{x}) \mathbf{1} = 0$ and so defines an equilibrium of \eqref{ODEM};
vice versa, if $D_M \overline{x} - F_M(\overline{x}) \mathbf{1} = 0$ then $\overline{b} = \overline{x}_j/\theta_j$ does not depend on $j$ and satisfies $\overline{b}=F(\overline{b})$.
\end{theorem}
\begin{proof}
We first derive a useful identity.
Since interpolation on the $M+1$ nodes $\{0\} \cup \Theta_M$ is exact on all polynomials of degree at most $M$, for the identity function we can write
$$ \theta = \sum_{j=1}^M \theta_j \ell_j(\theta), \qquad \theta \in [-\tau,0],
$$
where the term corresponding to the node $\theta_0=0$ vanishes.
By differentiation we then obtain
\begin{equation} \label{1_sum_der}
1 = \sum_{j=1}^M \theta_j \ell_j'.
\end{equation}
Assume first $\overline{b}=F(\overline b)$, and define $\overline x$ by \eqref{equil}. From \eqref{1_sum_der},
$$
\overline b = \overline b \sum_{j=1}^M \theta_j \ell_j'(\theta)$$
for all $\theta \in [-\tau,0]$ and, consequently,
$$(D_M \overline x)_k = \sum_{j=1}^M \ell_j'(\theta_k) \overline x_j = \overline b \sum_{j=1}^M \theta_j \ell_j'(\theta_k) = \overline b,$$
i.e., $D_M \overline x = \overline b \, \mathbf{1}$.
Therefore we conclude
$$ D_M \overline{x} - F_M(\overline{x})\mathbf{1} = D_M \overline x - F(\overline b \sum_{j=1}^M \theta_j \ell_j')\, \mathbf{1} = \overline b \, \mathbf{1} - F(\overline b)\, \mathbf{1} =0,$$
i.e., $\overline x$ is an equilibrium of \eqref{ODEM}.
Vice versa, assume that $\overline x \in \mathbb{R}^M$ satisfies
\begin{equation} \label{eq_ode}
D_M \overline x - F_M(\overline{x}) \,\mathbf{1} = 0,
\end{equation}
and define $\overline b := F_M(\overline{x}) = F(\overline{p}_{M-1})$, with $\overline{p}_{M-1}:= \sum_{j=1}^M \overline x_j \ell_j'$. Note that $\overline{p}_{M-1}$ is a polynomial of degree $M-1$ and, by definition, $(D_M \overline x)_k = \overline{p}_{M-1}(\theta_k)$, $k=1,\dots,M$.
By \eqref{eq_ode}, $\overline{p}_{M-1}$ takes the value $\overline{b}$ at $M$ distinct points, and therefore $\overline{p}_{M-1}$ is identically equal to $\overline b$. Since $\overline{p}_{M-1} = (P_M \overline x)'$ and $(P_M \overline x)(0)=0$, it follows that $(P_M \overline x)(\theta) = \overline b \, \theta$, hence $\overline x_j = \overline b \, \theta_j$ for all $j$. Finally, from the definition of $\overline{b}$ we obtain $\overline{b}=F(\overline{b})$.
\end{proof}
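The correspondence of equilibria can be checked numerically. The sketch below uses a hypothetical point-evaluation nonlinearity $F(\varphi) = \gamma\,\varphi(\theta_1)\bigl(1-\varphi(\theta_1)\bigr)$, chosen only because $F_M$ is then computable without extra interpolation machinery (recall that $p(\theta_k) = (D_M x)_k$ for $p = \sum_j x_j\ell_j'$); it is not one of the examples of Section \ref{s:examples}, and $\Theta_M$ is assumed to consist of Chebyshev zeros rescaled to $(-\tau,0)$.

```python
import numpy as np

def diff_matrix_DM(M, tau):
    # D_M on {0} U Theta_M, with Theta_M the (assumed) Chebyshev-zero mesh on (-tau, 0).
    j = np.arange(1, M + 1)
    theta = 0.5 * tau * (np.cos((2 * j - 1) * np.pi / (2 * M)) - 1.0)
    x = np.concatenate(([0.0], theta))
    dx = x[:, None] - x[None, :]
    np.fill_diagonal(dx, 1.0)
    w = 1.0 / dx.prod(axis=1)            # barycentric weights
    n = len(x)
    D = np.zeros((n, n))
    for i in range(n):
        for k in range(n):
            if i != k:
                D[i, k] = (w[k] / w[i]) / (x[i] - x[k])
        D[i, i] = -D[i].sum()
    return D[1:, 1:], theta

M, tau, gamma = 12, 1.0, 3.0
D, theta = diff_matrix_DM(M, tau)

def F_M(x):
    # F_M(x) = F(p) with p(theta_k) = (D_M x)_k; toy nonlinearity
    # F(p) = gamma * p(theta_1) * (1 - p(theta_1)) (hypothetical example).
    p1 = (D @ x)[0]
    return gamma * p1 * (1.0 - p1)

b_bar = 1.0 - 1.0 / gamma        # solves b = gamma * b * (1 - b)
x_bar = b_bar * theta            # discrete equilibrium, x_bar_j = b_bar * theta_j
residual = D @ x_bar - F_M(x_bar) * np.ones(M)
```

Up to roundoff, the residual vanishes, as predicted by the theorem.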
\begin{theorem}
The operations of linearization around an equilibrium and pseudospectral discretization commute.
\end{theorem}
\begin{proof}
Let $\overline b$ be an equilibrium of \eqref{RE_b}. The linearization around $\overline b$ reads
$$ b(t) = DF(\overline b) \, b_t,$$
and the corresponding approximating ODE system is
\begin{align}
\frac{\mathop{}\!\mathrm{d} x}{\mathop{}\!\mathrm{d} t} &= D_M x - \big[ DF(\overline b) \sum_{j=1}^M x_j \ell'_j \big] \, \mathbf{1}. \label{lin_discr}
\end{align}
On the other hand, let $\overline x$ be an equilibrium of \eqref{ODEM} with \eqref{equil}.
The linearization of \eqref{ODEM} around $\overline x$ reads
\begin{align*}
\frac{\mathop{}\!\mathrm{d} x}{\mathop{}\!\mathrm{d} t} &= D_M x - [DF_M(\overline{x}) x] \mathbf{1} \\
&= D_M x - \Big[ DF(\overline b \, \sum_{j=1}^M \theta_j \ell_j') \sum_{j=1}^M x_j \ell_j'\Big] \, \mathbf{1} \\
&=D_M x - \Big[ DF(\overline b) \sum_{j=1}^M x_j \ell_j'\Big] \,\mathbf{1},
\end{align*}
where in the last step we used \eqref{1_sum_der}. Hence the linearized system coincides with \eqref{lin_discr}.
\end{proof}
The principle of linearized stability \cite{DGG2007} ensures that the local stability properties of an equilibrium $\overline{b}$ of \eqref{RE_b} are determined by the stability properties of the zero solution of the system linearized around $\overline{b}$, if the equilibrium is hyperbolic.
Therefore the next step in order to study whether the finite dimensional ODE \eqref{ODEM} approximates the stability properties of the equilibria of \eqref{RE_b} is to focus on linear equations.
\section{The linear case: convergence analyses} \label{s:linear_conv}
In this section we focus on the linear case and study the convergence of exponential solutions, which are related to stability.
We consider the linear RE
\begin{equation} \label{linear_RE_b}
b(t) = \int_0^\tau k(a) b(t-a) \mathop{}\!\mathrm{d} a, \qquad t > 0,
\end{equation}
where $k$ is a bounded measurable function on $[0,\infty)$ with support in $[0,\tau]$.
\medskip
Following \cite{TwinSemigroups}, let $\langle \cdot , \cdot \rangle$ denote the pairing between bounded measurable functions and $NBV$, i.e.,
$$ \langle k, \psi \rangle := \int_{-\tau}^0 k(-a) \psi(\mathop{}\!\mathrm{d} a), \qquad \psi \in NBV.$$
In particular, if $\psi \in AC_0$, the pairing reads
$$ \langle k, \psi \rangle = \int_{-\tau}^0 k(-a) \psi'(a) \mathop{}\!\mathrm{d} a.$$
By defining $K_M$ as the row vector with components
\begin{equation} \label{KM}
(K_M)_j := \langle k, \ell_j \rangle = \int_{-\tau}^0 k(-a) \ell_j'(a) \mathop{}\!\mathrm{d} a, \qquad j=1,\dots,M,
\end{equation}
we can write the ODE system \eqref{ODEM} approximating \eqref{linear_RE_b} as
\begin{equation} \label{linear_ODEM}
\frac{\mathop{}\!\mathrm{d} x}{\mathop{}\!\mathrm{d} t} = D_M x - (K_M x) \, \mathbf{1}.
\end{equation}
We want to study in which sense the finite dimensional approximation \eqref{linear_ODEM} captures the stability properties of the zero equilibrium of \eqref{linear_RE_b} as $M\to \infty$.
The asymptotic stability of the zero equilibrium is determined by the roots of the characteristic equation, the so-called \emph{characteristic roots}.
Specifically, \eqref{linear_RE_b} has an exponential solution of the form $b(t)= \alpha \ee^{\lambda t}$ with $\alpha \neq 0$ if and only if $\lambda \in \mathbb{C}$ is a root of the characteristic equation
\begin{equation} \label{CE}
1 = \int_0^\tau k(a) \ee^{-\lambda a} \mathop{}\!\mathrm{d} a.
\end{equation}
The real parts of the characteristic roots $\lambda$ determine the stability of the zero solution of \eqref{linear_RE_b}: if $\Re\lambda <0$ for all $\lambda$, then the zero equilibrium is asymptotically stable; if there exists a root $\lambda$ with $\Re\lambda>0$, then the zero equilibrium is unstable.
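As a concrete illustration (our own toy computation, not one of the paper's examples), the characteristic roots of \eqref{linear_ODEM} are the eigenvalues of the matrix $D_M - \mathbf{1}K_M$. For a hypothetical constant kernel $k \equiv c$ on $[0,\tau]$, the entries \eqref{KM} reduce to $(K_M)_j = -c\,\ell_j(-\tau)$, since $\int_{-\tau}^0 \ell_j'(a)\mathop{}\!\mathrm{d} a = \ell_j(0) - \ell_j(-\tau)$ and $\ell_j(0)=0$. The sketch below (Chebyshev-zero mesh assumed) checks that the rightmost eigenvalue nearly satisfies the true characteristic equation $\lambda = c(1-\ee^{-\lambda\tau})$.

```python
import numpy as np

def nodes_weights(M, tau):
    # Full node set {0} U Theta_M (assumed Chebyshev-zero mesh) with barycentric weights.
    j = np.arange(1, M + 1)
    theta = 0.5 * tau * (np.cos((2 * j - 1) * np.pi / (2 * M)) - 1.0)
    x = np.concatenate(([0.0], theta))
    dx = x[:, None] - x[None, :]
    np.fill_diagonal(dx, 1.0)
    return x, 1.0 / dx.prod(axis=1)

def diff_matrix_DM(M, tau):
    x, w = nodes_weights(M, tau)
    n = len(x)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                D[i, j] = (w[j] / w[i]) / (x[i] - x[j])
        D[i, i] = -D[i].sum()
    return D[1:, 1:]

def KM_constant_kernel(M, tau, c):
    # (K_M)_j = -c * ell_j(-tau), evaluated via the barycentric second form.
    x, w = nodes_weights(M, tau)
    terms = w / (-tau - x)
    return -c * (terms / terms.sum())[1:]

M, tau, c = 20, 1.0, 2.0
A = diff_matrix_DM(M, tau) - np.outer(np.ones(M), KM_constant_kernel(M, tau, c))
lam = np.linalg.eigvals(A)
lam_star = lam[np.argmax(lam.real)]   # rightmost approximated characteristic root
```

Since $c\tau = 2 > 1$, the dominant root is real and positive, so the zero equilibrium of the corresponding linear RE is unstable.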
\medskip
From now on we assume that the set $\Theta_M$ is chosen such that the associated Lebesgue constant, defined as
$$\tilde\Lambda_M := \max_{\theta \in [-\tau,0]} \sum_{j=1}^M |\tilde \ell_j(\theta)|,$$
for $\tilde \ell_j$ Lagrange polynomials associated with $\Theta_M$, satisfies
\begin{equation} \label{cond_Lebesgue}
\lim_{M\to \infty} \frac{\tilde \Lambda_M}{M} = 0.
\end{equation}
We remark that \eqref{cond_Lebesgue} is true for instance for the nodes \eqref{cheb_zeros}, for which $\tilde \Lambda_M = O(\log M)$ \cite[Chapter 1.4.6]{MastroianniBook}.
We also remark that including additional nodes, for instance the extremum $-\tau$, does not affect the validity of \eqref{cond_Lebesgue}, \cite[Chapter 4.2]{MastroianniBook}.
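Condition \eqref{cond_Lebesgue} is easy to check numerically. The sketch below (our own illustration, assuming $\Theta_M$ consists of the Chebyshev zeros rescaled to $(-\tau,0)$) estimates $\tilde\Lambda_M$ on a fine grid and confirms the slow, logarithmic-type growth, so that $\tilde\Lambda_M/M \to 0$.

```python
import numpy as np

def lebesgue_constant(M, tau=1.0, ngrid=5000):
    # Lebesgue constant of the mesh Theta_M (assumed Chebyshev zeros rescaled
    # to (-tau, 0)) over [-tau, 0], via barycentric evaluation on a fine grid.
    j = np.arange(1, M + 1)
    theta = 0.5 * tau * (np.cos((2 * j - 1) * np.pi / (2 * M)) - 1.0)
    dx = theta[:, None] - theta[None, :]
    np.fill_diagonal(dx, 1.0)
    w = 1.0 / dx.prod(axis=1)                   # barycentric weights
    t = np.linspace(-tau, 0.0, ngrid)[:, None]  # grid points (endpoints are not nodes)
    terms = w / (t - theta)                     # w_j / (t - theta_j)
    # sum_j |ell_j(t)| = sum_j |terms_j| / |sum_k terms_k|
    L = np.abs(terms).sum(axis=1) / np.abs(terms.sum(axis=1))
    return L.max()

ratios = [lebesgue_constant(M) / M for M in (5, 10, 20, 40)]
```

The ratios $\tilde\Lambda_M/M$ decrease monotonically, in agreement with $\tilde\Lambda_M = O(\log M)$ for this mesh.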
The other important ingredient for convergence is the regularity of the function $\varphi$: the smoother a function, the faster the convergence of its interpolants.
More specifically, let $\LL_{M-1}$ denote the polynomial interpolation operator on $\Theta_M$, such that $\LL_{M-1} \varphi$ is the unique $(M-1)$-degree polynomial with $[\LL_{M-1}\varphi] (\theta_j) = \varphi(\theta_j)$, $j=1,\dots,M$.
Although $\LL_{M-1}$ can be defined also for bounded variation functions, and some results for interpolation in $L^p$ norm exist \cite{Mastroianni2010}, we here require that $\varphi$ is at least continuous, and use error bounds of polynomial interpolation in terms of the uniform norm.
In the following, let $c$ denote a constant (different from time to time) independent of $M$.
If $\varphi$ is Lipschitz continuous, the bound
\begin{equation} \label{interp_error_sup}
\| (I - \LL_{M-1})\varphi \|_{\infty} \leq c \, \frac{\tilde \Lambda_M}{M} \, \text{Lip}(\varphi)
\end{equation}
holds, where Lip$(\varphi)$ denotes the Lipschitz constant of $\varphi$; under the assumption \eqref{cond_Lebesgue}, the right-hand side of \eqref{interp_error_sup} tends to zero as $M \to \infty$.
The order of convergence depends on the regularity of the interpolated function. More precisely, the estimate
$$ \| (I - \LL_{M-1})\varphi \|_{\infty} \leq c \, \frac{\tilde \Lambda_M}{M^k} \, \|\varphi^{(k)}\|_\infty $$
holds for $\varphi \in C^k$,
see for instance \cite[Chapters 1.2 and 1.4]{MastroianniBook}.
For analytic functions, the uniform error of polynomial interpolation behaves as $O(\rho^{-M})$ for some $\rho>1$, a phenomenon called \emph{spectral accuracy}, or \emph{geometric convergence}, or \emph{exponential convergence} \cite{MastroianniBook,Trefethen2013,Trefethen2000}.
Finally we remark that the uniform convergence of polynomial interpolation holds also for absolutely continuous functions interpolated at Chebyshev zeros \cite{Krylov}.
Before studying the approximation error, we note that the numerical computation of the vector $K_M$ in \eqref{KM} may involve the use of quadrature formulas to approximate the integrals. These approximations can introduce numerical errors in addition to those due to the pseudospectral approximation. In the following, we ignore these errors by assuming that the integrals are computed exactly.
\subsection{Resolvent operators}
Before focusing on characteristic roots, we study more generally the convergence of the pseudospectral approximations of the resolvent operators of $C_0$. Although the convergence of resolvent operators is not directly used to prove the convergence of the stability criteria for equilibria, it lays the groundwork for a broader study of the approximation error, since solution operators are closely connected with the resolvents of their generators via the Laplace transform.
For every $\lambda \in \mathbb{C}$ and $\varphi \in NBV$, the resolvent operator of $C_0$ is given by
\begin{equation} \label{resolvent}
((\lambda I-C_0)^{-1}\varphi) (\theta) = \ee^{\lambda \theta} \int_{\theta}^0 \ee^{-\lambda s} \varphi(s) \mathop{}\!\mathrm{d} s, \quad \theta \in [-\tau,0].
\end{equation}
In the following analysis of the approximation error, we require that $\varphi \in NBV$ is continuous.
\begin{lemma} \label{l:res}
Let $\varphi \in NBV \cap C$, let $B$ be a bounded open subset of $\mathbb{C}$, and assume that \eqref{cond_Lebesgue} holds.
There exists $\overline{M}(B)$ such that, for any index $M \geq \overline{M}(B)$ and $\lambda \in B$, the polynomial
\begin{equation}
\label{pMcoll}
p_M :=P_M(\lambda I - D_M)^{-1} R_M \varphi
\end{equation}
is well defined, and, for $\psi := (\lambda I - C_0)^{-1} \varphi,$
\begin{equation} \label{psi-p}
\| \psi - p_M \|_{NBV} \leq c(B) \; \big\| r_M(\lambda,\varphi) \big\|_{\infty},
\end{equation}
where $ r_M(\lambda,\varphi) := (I-\LL_{M-1}) \psi'$ and
$$ c(B) = 2 \tau \, \sup_{\lambda \in \overline{B}} \left( 1 + |\lambda| \frac{1-\ee^{-\tau \Re \lambda}}{\Re \lambda} \right).
$$
\end{lemma}
\begin{proof}
The proof technique is similar to the one used in \cite[Proposition 5.1]{BMVBook} and \cite[Theorem 4.1]{DCDS2020Vermiglio}. We therefore only sketch the main steps.
The function $\psi$ satisfies
\begin{equation} \label{psi}
\begin{cases}
\psi'(\theta) = \lambda \psi(\theta) - \varphi(\theta), & \theta \in [-\tau,0], \\
\psi(0)=0.
\end{cases}
\end{equation}
Indeed, for given $\varphi \in NBV$, the function $\psi$ defined by \eqref{resolvent} satisfies \eqref{psi}; since here $\varphi$ is continuous, we have $\psi' \in C$ and the first identity also holds at $\theta=0$.
On the other hand, proving that \eqref{pMcoll} is well defined is equivalent to showing that the collocation problem
\begin{equation*}
\begin{cases}
p_M'(\theta) = \lambda p_M(\theta) - \varphi(\theta), & \theta=\theta_1,\dots,\theta_M, \\
p_M(0)=0
\end{cases}
\end{equation*}
admits a unique solution.
Since $p_M'$ has degree $M-1$, it can be expressed as interpolation polynomial on $\Theta_M$, i.e., as
\begin{equation} \label{pM'_gen}
p_M'
= \LL_{M-1} (\lambda p_M - \varphi).
\end{equation}
Define $e_M:= \psi-p_M$. Subtracting \eqref{pM'_gen} from the first equation in \eqref{psi} and writing $e_M = V e_M'$, we obtain
\begin{equation} \label{eq_error}
e_M' = \lambda \LL_{M-1} V e_M' + \lambda (I-\LL_{M-1}) \psi - (I-\LL_{M-1})\varphi.
\end{equation}
Note that, since $\varphi, \psi,\psi' \in C$, \eqref{eq_error} can be interpreted as an equation in $C$. We show that, for all $\lambda \in \mathbb{C}$, the operator $(I-\lambda \LL_{M-1} V)$ is invertible in $C$ by showing that it is a perturbation, in supremum norm, of (the restriction to $C$ of) the operator $(I-\lambda V)$. In fact, since $V\varphi$ is Lipschitz for all $\varphi \in C$, from \eqref{interp_error_sup} we can bound
$$ \| \lambda (I - \LL_{M-1}) V \|_\infty \leq c \, \frac{\tilde \Lambda_M}{M} |\lambda|,$$
with $c$ independent of $M$.
In $C$, $(I-\lambda V)$ is invertible with
\begin{equation*}
[(I-\lambda V)^{-1}\zeta](\theta) = \zeta(\theta) - \lambda \int_\theta^0 \ee^{\lambda (\theta-s)} \zeta(s) \mathop{}\!\mathrm{d} s, \qquad \zeta\in C,
\end{equation*}
and
$$
\|(I-\lambda V)^{-1} \|_{\infty} \leq
\begin{cases}
1 + |\lambda| \frac{1-\ee^{-\tau \Re \lambda}}{\Re \lambda}, & \text{if } \Re \lambda \neq 0, \\[6pt]
1 + \tau \, |\lambda | , & \text{if } \Re \lambda = 0.
\end{cases}
$$
From Banach's perturbation lemma (e.g., \cite[Theorem 10.1]{KressBook}), by taking $\tilde{M}(\lambda)$ such that $\|\lambda (I-\LL_{M-1})V\|_\infty \, \|(I-\lambda V)^{-1} \|_{\infty} < 1/2$ for all $M \geq \tilde{M}(\lambda)$, we conclude that $(I - \lambda \LL_{M-1}V)$ is invertible in $C$ for all $M \geq \tilde{M}(\lambda)$, and
\begin{equation} \label{bound_discr}
\| (I - \lambda \LL_{M-1} V)^{-1}\|_{\infty} \leq
\begin{cases}
2 \left(1 + |\lambda| \frac{1-\ee^{-\tau \Re \lambda}}{\Re \lambda}\right), & \text{if } \Re \lambda \neq 0, \\[6pt]
2 (1+\tau |\lambda |), & \text{if } \Re \lambda = 0.
\end{cases}
\end{equation}
Hence, from \eqref{eq_error},
\begin{equation*}
e_M' = (I - \lambda \LL_{M-1} V)^{-1} r_M(\lambda,\varphi),
\end{equation*}
with
\begin{align*}
r_M(\lambda,\varphi) := \lambda (I-\LL_{M-1}) \psi - (I-\LL_{M-1})\varphi
= (I-\LL_{M-1}) \psi'.
\end{align*}
The bound \eqref{psi-p} follows from the fact that $ e_M = V e_M' \in AC_0$ and therefore
$$ \| e_M \|_{NBV} \leq \tau \| e_M'\|_{\infty},$$
taking $\overline{M}(B) = \sup_{\lambda \in \overline{B}} \tilde{M}(\lambda)$ and using the fact that the right-hand side of \eqref{bound_discr} is continuous on $\mathbb{C}$.
\end{proof}
From \eqref{psi-p}, it is clear that $\|\psi-p_M\|_{NBV} \to 0$ whenever $\|r_M\|_\infty \to 0$.
If $\varphi \in AC_0$, then $\psi' \in AC_0$ and therefore $\|r_M\|_\infty \to 0$ when the grid consists of the Chebyshev zeros \eqref{cheb_zeros}, cf.~\cite{Krylov}.
If $\varphi \in D(C_0)$, then $\psi'' \in NBV$ and the convergence $\|r_M\|_\infty \to 0$ holds for any set of nodes satisfying \eqref{cond_Lebesgue}, since
$$ \|(I-\LL_{M-1})\psi'\|_\infty \leq \tilde \Lambda_M \mathcal{E}_{M-1}(\psi') \leq \frac{2 \tilde \Lambda_M }{\pi (M-2)} \| \psi'' \|_{NBV}, $$
where $\mathcal{E}_M$ denotes the error of the best uniform approximation in $C$ with polynomials of degree $M$ \cite[Chapter 7]{Trefethen2013}.
As already discussed, the order of convergence $\|r_M\|_{\infty} \to 0$ is higher if $\psi'$ has higher regularity (at least Lipschitz continuous, see \eqref{interp_error_sup} and following discussion).
\medskip
The proof of Lemma \ref{l:res} can be extended to functions $\varphi \in NBV$ that are continuous on $[-\tau,0)$ but have a jump discontinuity at $\theta=0$ (note that this class includes the function $q$).
Indeed, $(\lambda I-C_0)^{-1}$ maps $NBV$ into $D(C_0)$, hence $\psi = (\lambda I-C_0)^{-1}\varphi \in AC_0$.
From \eqref{psi}, if $\varphi$ is discontinuous at $\theta=0$, then $\psi'$ is discontinuous, too.
However, we can adapt the previous proof by considering $\varphi$ and $\psi'$ extended by continuity at $\theta=0$, hence working in the space $C$.
In particular, the following special case for the function $q\in NBV$ is relevant for the analysis of characteristic roots.
\begin{lemma} \label{l:res_1}
Let $B$ be an open bounded subset of $\mathbb{C}$ and assume that \eqref{cond_Lebesgue} holds.
There exists $\overline M (B)$ such that, for any index $M \geq \overline M(B)$ and $\lambda \in B$, the polynomial
\begin{equation*}
p_{M,\lambda}:= P_M(\lambda I - D_M)^{-1} (-\mathbf{1})
\end{equation*}
is well defined, and, for $\psi_\lambda := (\lambda I - C_0)^{-1}q$,
\begin{equation} \label{error_m_q}
\| \psi_\lambda - p_{M,\lambda} \|_{NBV} \leq C_2(B) \frac{C_1(B)^M}{M!},
\end{equation}
where
\begin{align*}
C_1(B) &= \tau \, \max_{\lambda \in \overline{B}} |\lambda|, \\
C_2(B) &= 2 \tau \, \sup_{\lambda \in \overline{B}} \left[ \left( 1 + |\lambda| \frac{1-\ee^{-\tau \Re \lambda} }{\Re \lambda} \right) \, \max \{ \ee^{-\tau \Re \lambda}, 1\} \right] .
\end{align*}
\end{lemma}
\begin{proof}
From \eqref{resolvent} we obtain, for $\theta \in [-\tau,0]$,
\begin{equation} \label{psi_lambda}
\psi_{\lambda}(\theta)=
\begin{cases}
\displaystyle{\frac{\ee^{\lambda \theta}-1}{\lambda}}, & \text{if } \lambda \neq 0, \\
\theta, & \text{if } \lambda = 0.
\end{cases}
\end{equation}
Consider now the function $\eta_\lambda(\theta) := \ee^{\lambda \theta}$, $\theta \in [-\tau,0]$, which satisfies
\begin{equation*}
\eta_\lambda(\theta) = \lambda \psi_\lambda(\theta) - q(\theta), \qquad \theta \in [-\tau,0),
\end{equation*}
(i.e., $\eta_\lambda = \psi_\lambda'$, cf.\ \eqref{psi}) and $\eta_\lambda$ is continuous on $[-\tau,0]$.
As in the proof of Lemma \ref{l:res}, let $p_{M,\lambda}$ be the solution of the collocation problem
\begin{equation*}
\begin{cases}
p_{M,\lambda}'(\theta) = \lambda p_{M,\lambda}(\theta) - q(\theta), & \theta = \theta_1,\dots,\theta_M, \\
p_{M,\lambda}(0)=0,
\end{cases}
\end{equation*}
and let $z_M := \eta_\lambda - p_{M,\lambda}'$. Then $e_M := \psi_\lambda -p_{M,\lambda} = V z_M$, and $z_M$ satisfies
\begin{equation} \label{eq_error_2}
z_M = \lambda \LL_{M-1} V z_M + \overline r_M(\lambda),
\end{equation}
for $\overline r_M(\lambda):= (I-\LL_{M-1}) \eta_\lambda$.
Since $\overline r_M(\lambda) \in C$, for $M$ sufficiently large we can invert equation \eqref{eq_error_2} using \eqref{bound_discr}.
Using the Cauchy interpolation remainder \cite[Theorem 3.1.1]{Davis1975}, we can bound
\begin{equation} \label{rm}
\| \overline r_M(\lambda) \|_\infty \leq \frac{\tau^M \| \eta_\lambda^{(M)}\|_\infty}{M!} \leq \frac{\tau^M |\lambda|^M \max \{ \ee^{-\tau \Re \lambda }, 1\}}{M!}.
\end{equation}
The error bound \eqref{error_m_q} then follows from $e_M = Vz_M$, using \eqref{eq_error_2}, \eqref{rm} and \eqref{bound_discr} for $\lambda \in \overline{B}$.
\end{proof}
\subsection{Characteristic roots} \label{s:conv_CE}
We first study the exponential solutions of \eqref{linear_ODEM} and specify the corresponding discrete characteristic equation.
Next, we prove the convergence of the roots of the characteristic equations and, in turn, of exponential solutions.
\begin{lemma}
Let $M\in \mathbb{N}$ and let $\lambda \in \mathbb{C}$, $\lambda \notin \sigma(D_M)$.
Then \eqref{linear_ODEM} has a solution of the form $x(t)=\ee^{\lambda t}y$ with nontrivial $y \in \mathbb{C}^M$ if and only if $\lambda$ is a root of the characteristic equation
\begin{equation} \label{discr_CE}
1=K_M (\lambda I-D_M)^{-1} (-\mathbf{1}).
\end{equation}
\end{lemma}
\begin{proof}
Substitution of $\ee^{\lambda t}y$ into \eqref{linear_ODEM} leads to
$$
\lambda y = D_M y-(K_My) \, \mathbf{1}.
$$
First note that $K_My \neq 0$ must hold: indeed, since $\lambda \notin \sigma(D_M)$, the equation $\lambda y = D_M y$ cannot hold for nontrivial $y$.
Next, we can invert the equation to obtain
$$y= (K_M y)(D_M-\lambda I)^{-1}\mathbf{1}.$$
Hence we should have
$$ K_M y = (K_M y)\, K_M (D_M-\lambda I)^{-1}\mathbf{1},$$
which amounts to \eqref{discr_CE} since $K_M y \neq 0$.
\smallskip
Vice versa, assume that \eqref{discr_CE} holds and take $y := (D_M-\lambda I)^{-1}\mathbf{1}$, which is well defined and nonzero since $\lambda \notin \sigma(D_M)$. By \eqref{discr_CE}, $K_My = K_M (D_M-\lambda I)^{-1}\mathbf{1} = 1$, hence
$$ (D_M-\lambda I)y = \mathbf{1} = (K_My)\, \mathbf{1},$$
i.e., $\lambda y = D_M y - (K_My)\,\mathbf{1}$, and therefore $\ee^{\lambda t}y$ is a solution of \eqref{linear_ODEM}.
\end{proof}
\bigskip
Let now
\begin{align*}
\chi(\lambda) &:= 1 - \int_0^\tau k(a) \ee^{-\lambda a} \mathop{}\!\mathrm{d} a = 1 - \langle k, \psi_\lambda \rangle, \\
\chi_M(\lambda) &:= 1 - K_M (D_M-\lambda I)^{-1} \mathbf{1} = 1 - \langle k, P_M (\lambda I-D_M)^{-1} (-\mathbf{1}) \rangle,
\end{align*}
with $\psi_\lambda$ defined in \eqref{psi_lambda},
so that the roots of \eqref{CE} and \eqref{discr_CE} coincide with the zeros of $\chi$ and $\chi_M$, respectively.
We want to prove the convergence of the zeros of $\chi_M$ to the zeros of $\chi$.
Lemma \ref{l:res_1} allows us to state the following result.
\begin{theorem} \label{th:conv_CE}
Let $B$ be an open bounded subset of $\mathbb{C}$, and let \eqref{cond_Lebesgue} hold.
There exists $\overline M(B)$ such that, for any index $M \geq \overline M(B)$ and $\lambda \in B$, $\lambda \notin \sigma(D_M)$,
$$ |\chi(\lambda)-\chi_M(\lambda)| \leq C_2(B) \frac{C_1(B)^M}{M!} \, \| k\|_\infty,$$
for $C_1(B)$ and $C_2(B)$ defined as in Lemma \ref{l:res_1}.
\end{theorem}
\begin{proof}
We have
\begin{align*}
\big|\chi(\lambda)-\chi_M(\lambda)\big|
&= \big| \langle k , \psi_{\lambda} \rangle - \langle k, P_M (\lambda I - D_M)^{-1} (-\mathbf{1}) \rangle \big| \\
&\leq \| k \|_{\infty} \| \psi_\lambda - P_M (\lambda I - D_M)^{-1} (-\mathbf{1})\|_{NBV},
\end{align*}
and the assertion follows from \eqref{error_m_q}.
\end{proof}
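Theorem \ref{th:conv_CE} can be observed numerically. In the sketch below (a toy check of ours, with a hypothetical constant kernel $k\equiv 2$ on $[0,1]$ and a Chebyshev-zero mesh assumed for $\Theta_M$), $\chi_M(\lambda) = 1 - K_M(\lambda I - D_M)^{-1}(-\mathbf{1})$ is evaluated directly at the sample point $\lambda = 1+2i$ and compared with the exact value $\chi(\lambda) = 1 - c(1-\ee^{-\lambda\tau})/\lambda$.

```python
import numpy as np

def chi_M(lam, M, tau, c):
    # chi_M(lambda) = 1 - K_M (lambda I - D_M)^{-1} (-1) on the assumed
    # Chebyshev-zero mesh; for k = c constant, (K_M)_j = -c * ell_j(-tau).
    j = np.arange(1, M + 1)
    theta = 0.5 * tau * (np.cos((2 * j - 1) * np.pi / (2 * M)) - 1.0)
    x = np.concatenate(([0.0], theta))
    dx = x[:, None] - x[None, :]
    np.fill_diagonal(dx, 1.0)
    w = 1.0 / dx.prod(axis=1)            # barycentric weights
    n = len(x)
    D = np.zeros((n, n))
    for i in range(n):
        for k in range(n):
            if i != k:
                D[i, k] = (w[k] / w[i]) / (x[i] - x[k])
        D[i, i] = -D[i].sum()
    DM = D[1:, 1:]
    terms = w / (-tau - x)
    K = -c * (terms / terms.sum())[1:]   # (K_M)_j = -c * ell_j(-tau)
    v = np.linalg.solve(lam * np.eye(M) - DM, -np.ones(M))
    return 1.0 - K @ v

tau, c, lam = 1.0, 2.0, 1.0 + 2.0j
chi = 1.0 - c * (1.0 - np.exp(-lam * tau)) / lam   # exact characteristic function
err = {M: abs(chi - chi_M(lam, M, tau, c)) for M in (5, 10, 20)}
```

The error decays at the factorial rate predicted by the theorem.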
We are now ready to conclude that roots of \eqref{CE} are approximated by roots of \eqref{discr_CE}.
The proof of the following result follows the lines of \cite[Section 5.3.2]{BMVBook}. We repeat the proof here for completeness.
\begin{theorem} \label{th:conv_lambda}
Let $\lambda$ be a root of \eqref{CE} with multiplicity $\nu$, and let $B$ be an open ball centered at $\lambda$ containing no other characteristic root of \eqref{CE}.
Under assumption \eqref{cond_Lebesgue}, there exists a positive integer $\overline{M}=\overline{M}(B)$ such that, for $M \geq \overline{M}$, there exist $\nu$ roots $\lambda_1,\dots,\lambda_\nu$ (each counted as many times as its multiplicity) of \eqref{discr_CE} with
$$ \max_{j=1,\dots,\nu} \big| \lambda - \lambda_j \big| \leq \left( \frac{2 \nu!}{|\chi^{(\nu)}(\lambda)|} \max_{z\in B\setminus\{\lambda\}} |\chi(z)-\chi_M(z)| \right)^{\frac{1}{\nu}}.$$
In particular, $ \max_{j=1,\dots,\nu} | \lambda - \lambda_j | \to 0 $ as $M\to \infty$.
\end{theorem}
\begin{proof}
We want to apply Rouch\'e's theorem from complex analysis, which states that, if $\chi$ and $\chi_M$ are continuous on a compact set $K$ and holomorphic in the interior of $K$, and $|\chi(z)-\chi_M(z)|<|\chi(z)|$ for all $z \in \partial K$, then $\chi$ and $\chi_M$ have the same number of zeros inside $K$, see \cite[Chapter 10]{RudinBookComplex} or \cite[Section 7.7]{PriestleyBook}.
Clearly, the complex-valued functions $\chi$ and $\chi_M$ are holomorphic in $B$.
Let
$$ \varepsilon_M(z) := \big| \chi(z) - \chi_M(z)\big|.$$
From Theorem \ref{th:conv_CE}, we have that $ \varepsilon_M(z) \to 0$ for all $z \in B\setminus \{\lambda\}$.
\medskip
Since $\lambda$ has multiplicity $\nu$, we have $\chi^{(n)}(\lambda) =0$ for $n=0,\dots,\nu-1$.
For $z$ in a neighborhood $U\subset B$ of $\lambda$, by Taylor expansion we have
\begin{equation} \label{chi_taylor}
\chi(z) = \frac{\chi^{(\nu)}(\lambda)}{\nu!} (z-\lambda)^\nu + R_\nu(z),
\end{equation}
with
$$ R_\nu(z) = \sum_{j=\nu+1}^\infty \frac{\chi^{(j)}(\lambda)}{j!} (z-\lambda)^j = \sum_{j=\nu+1}^\infty \frac{(z-\lambda)^j}{2\pi i} \int_{\partial B} \frac{\chi(w)}{(w-\lambda)^{j+1}} \mathop{}\!\mathrm{d} w.
$$
If $r$ denotes the radius of the ball $B$ centered at $\lambda$, we have
\begin{equation} \label{R-bound}
|R_\nu(z)| \leq M_B \sum_{j=\nu+1}^\infty \frac{|z-\lambda|^j}{r^j} = M_B \frac{\alpha(z)^{\nu+1}}{1-\alpha(z)},
\end{equation}
where $M_B := \sup_{z \in \partial B} |\chi(z)|$ and $\alpha(z) := \frac{|z-\lambda|}{r}<1$.
Take now $r_1=r_1(B)$ such that $B(\lambda,r_1) \subset B$ and, for all $z \in B(\lambda,r_1)$,
\begin{equation} \label{r1}
M_B \frac{\alpha(z)^{\nu+1}}{1-\alpha(z)} < \frac{1}{2} \frac{|\chi^{(\nu)}(\lambda)|}{\nu!} |z-\lambda|^\nu.
\end{equation}
From \eqref{chi_taylor}, \eqref{R-bound} and \eqref{r1} we have that,
for all $z \in B(\lambda,r_1)$,
\begin{equation} \label{error_2}
|\chi(z)| > \frac{1}{2} \frac{|\chi^{(\nu)}(\lambda)|}{\nu!} |z-\lambda|^\nu.
\end{equation}
Now take $\overline M= \overline M(B)$ large enough so that, for all $M\geq \overline M$,
\begin{equation} \label{error_1}
\max_{|z-\lambda|=r_1} \varepsilon_M(z) \leq \frac{1}{2} \frac{|\chi^{(\nu)}(\lambda)|}{\nu!} r_1^\nu,
\end{equation}
and define $r^*=r^*(M)$ by
\begin{equation} \label{r*}
r^* := \left( \frac{\max_{|z-\lambda|=r_1} \varepsilon_M(z)}{\frac{1}{2} \frac{|\chi^{(\nu)}(\lambda)|}{\nu!}} \right)^{\frac{1}{\nu}}.
\end{equation}
Note that, by \eqref{error_1}, $r^*\leq r_1$.
Then, by combining \eqref{error_2} and \eqref{error_1}, we have
\begin{align*}
\max_{|z-\lambda|=r^*} \big| \chi(z) - \chi_M(z) \big|
&\leq \max_{|z-\lambda|=r_1} \big| \chi(z) - \chi_M(z) \big| \\
&= \frac{1}{2} \frac{|\chi^{(\nu)}(\lambda)|}{\nu!} (r^*)^\nu \\
& < |\chi(z)|,
\end{align*}
where the last inequality holds for all $z$ such that $|z-\lambda| = r^*$.
Hence for all $M\geq \overline M$ we can apply Rouch\'e's theorem on $B(\lambda,r^*)$, and conclude that $\chi$ and $\chi_M$ have the same number of zeros in $B(\lambda,r^*)$, where each zero is counted as many times as its multiplicity.
More precisely, there exist $\lambda_1,\dots,\lambda_\nu$ such that, for all $j=1,\dots,\nu$, $ \chi_M(\lambda_j)=0$ and
$$ |\lambda - \lambda_j| < r^*.$$
Now note from \eqref{r*} that $r^*=r^*(M) \to 0$ as $M\to \infty$.
Hence we have proved the convergence of characteristic roots with their multiplicity.
\end{proof}
Thanks to the previous theorem we know that each ``true'' characteristic root is approximated as $M\to \infty$ with its multiplicity.
Finally, given a sequence $\{\lambda_M\}_{M}$ satisfying $\chi_M(\lambda_M)=0$ for $M\geq \overline{M}$ for some $\overline M \in \mathbb{N}$, and such that $\lambda_M \to \lambda \in \mathbb{C}$ as $M\to \infty$, from the continuity of $\chi$ together with Theorem \ref{th:conv_CE} we can conclude that $\chi(\lambda)=0$.
Combining the result of Theorem \ref{th:conv_lambda} with the bound in Lemma \ref{l:res}, it follows that the order of convergence of the characteristic roots depends on the interpolation error of $\psi_\lambda'$.
In particular, Cauchy's remainder theorem \cite[Theorem 3.1.1]{Davis1975} gives the more precise bound
\begin{equation} \label{bound_modulus}
\|(I-\mathcal{L}_{M-1}) \psi_\lambda' \|_\infty \leq \frac{\tau^{M} \| \psi_\lambda^{(M+1)}\|_{\infty}}{M!} \leq \frac{\tau^{M} |\lambda|^{M} \max\{ \ee^{- \tau \Re \lambda}, 1\} }{M!} .
\end{equation}
\end{equation}
Hence, the modulus of $\lambda$ affects the order of convergence of the roots of the discrete characteristic equation, as observed also in \cite{BMVBook,BMV2005}. More precisely, characteristic roots that are smaller in modulus are approximated with better accuracy for the same value of $M$. This can indeed be observed experimentally in Figure \ref{f:specialRE_eig}.
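To make the role of $M$ concrete, the sketch below (our own toy computation with a hypothetical constant kernel $k\equiv 2$ on $[0,1]$, outside the paper's examples, and with the Chebyshev-zero mesh assumed) measures the error of the rightmost approximated root against a Newton-refined reference value for increasing $M$; the factorial decay makes the error drop quickly.

```python
import numpy as np

def approx_rightmost_root(M, tau, c):
    # Rightmost eigenvalue of D_M - 1*K_M for k = c on [0, tau].
    j = np.arange(1, M + 1)
    theta = 0.5 * tau * (np.cos((2 * j - 1) * np.pi / (2 * M)) - 1.0)
    x = np.concatenate(([0.0], theta))
    dx = x[:, None] - x[None, :]
    np.fill_diagonal(dx, 1.0)
    w = 1.0 / dx.prod(axis=1)            # barycentric weights
    n = len(x)
    D = np.zeros((n, n))
    for i in range(n):
        for k in range(n):
            if i != k:
                D[i, k] = (w[k] / w[i]) / (x[i] - x[k])
        D[i, i] = -D[i].sum()
    terms = w / (-tau - x)
    K = -c * (terms / terms.sum())[1:]   # (K_M)_j = -c * ell_j(-tau)
    A = D[1:, 1:] - np.outer(np.ones(M), K)
    lam = np.linalg.eigvals(A)
    return lam[np.argmax(lam.real)]

tau, c = 1.0, 2.0
# Reference root: Newton iteration on f(l) = l - c*(1 - exp(-l*tau)).
lam_ref = 1.5
for _ in range(50):
    f = lam_ref - c * (1.0 - np.exp(-lam_ref * tau))
    fp = 1.0 - c * tau * np.exp(-lam_ref * tau)
    lam_ref -= f / fp

errors = [abs(approx_rightmost_root(M, tau, c) - lam_ref) for M in (5, 10, 15)]
```

Already for moderate $M$ the dominant root is resolved to high accuracy, consistent with the discussion above.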
\subsection{Numerical results}
\begin{figure}[p]
\centering
\includegraphics[width=\textwidth]{specialRE_eigs}
\caption{Equation \eqref{specialRE}. Eigenvalues associated with the positive equilibrium at $\log\gamma=2,2.57,3$, for $M=10,20,40$.
Larger (in modulus) eigenvalues to the left are not shown.
Note the rightmost pair of eigenvalues crossing the vertical axis in a Hopf bifurcation, as $\log\gamma$ increases.
}
\label{f:specialRE_eig}
\end{figure}
\begin{figure}[p]
\includegraphics[width=\textwidth]{specialRE_error}
\caption{Equation \eqref{specialRE}. Log-log plot of the error in the detection of the Hopf bifurcation point (left) and the first period doubling bifurcation (right), increasing $M$.
}
\label{f:specialRE_errors}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{specialRE_mults}
\caption{
Equation \eqref{specialRE}. Multipliers associated with the branch of periodic solutions emerging from Hopf, at $\log\gamma=3,3.8763,4$, for $M=10,20,40$.
Note that the periodic solutions lose stability via a period doubling bifurcation, with the leftmost multiplier exiting the unit circle through $-1$ as $\log\gamma$ increases.
For $M=10$ and $\log\gamma=4$ the leftmost multiplier is at approximately $\mu \approx -2.6793$ (not visible in the leftmost panel).
}
\label{f:specialRE_mult}
\end{figure}
To illustrate the convergence result proved in Theorem \ref{th:conv_lambda} we consider again equation \eqref{specialRE}, focusing on the approximation of the eigenvalues associated with the nontrivial equilibrium.
In the following analyses, we examine the approximated eigenvalues returned as output of the numerical continuation with MatCont and study experimentally the convergence as $M$ increases.
Similar analyses have been performed on the other examples presented in Section \ref{s:examples}, showing analogous results.
Explicit formulas for the differentiation matrix associated with \eqref{cheb_zeros} and $\theta_0=0$, as well as the barycentric weights for interpolation, are reported in \ref{app:cheb_zeros}.
Figure \ref{f:specialRE_eig} shows the approximated eigenvalues associated with the nontrivial equilibrium for $\log\gamma=2,3$, and at the detected Hopf bifurcation point, located at $\log\gamma \approx 2.5708$.
We plotted the approximated spectrum for several values of $M$, with $M=10,20,40$.
Note that the rightmost eigenvalues are well approximated already for $M=10$, whereas eigenvalues that are larger in modulus require higher values of $M$ to obtain a satisfactory approximation.
This is in agreement with the bound \eqref{bound_modulus}, since the modulus of $\lambda$ affects the speed of convergence.
The different panels show the rightmost pair of eigenvalues crossing the vertical axis from left to right in a Hopf bifurcation.
As an outlook on the approximation of bifurcation points of equilibria and periodic solutions, we have studied experimentally the error behavior of the bifurcation points detected by MatCont (Hopf and period doubling), see Figure \ref{f:specialRE_errors} (log-log plot).
The error is computed with respect to the values obtained with the current method and $M=40$.
Both plots show the typical spectral accuracy behavior, but a reliable approximation of the period doubling bifurcation requires a larger dimension of the approximating system.
The barrier of $10^{-7}$ evident in the left panel is likely due to the tolerance options imposed in MatCont (tolerance $10^{-10}$ for Newton's method and $10^{-6}$ for the calculation of the test functions for bifurcation points).
The computation of the period doubling was performed with tolerance options $10^{-6}$ in MatCont.
Although we did not study theoretically the convergence of multipliers, we can expect that, similarly as in Theorem \ref{th:conv_lambda} for eigenvalues, the multiplicity plays a role in the order of convergence.
The fact that the trivial multiplier $\mu=1$ has multiplicity 2 due to the integration of the state may in turn have consequences on the convergence rate of the multipliers corresponding to periodic solutions.
Regarding periodic solutions, we plot the approximated multipliers associated with the branch of periodic solutions emerging from Hopf, for $\log\gamma=3,4$, and at the period doubling bifurcation detected at $\log\gamma \approx 3.8763$, see Figure \ref{f:specialRE_mult}.
Note the leftmost multiplier exiting the unit circle via $-1$, indicating a period doubling bifurcation.
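The differentiation matrix and barycentric weights mentioned above admit a compact implementation. The following sketch (an illustration, not the paper's code: it uses plain Chebyshev zeros on $[-1,1]$ without the extra node $\theta_0=0$) builds the matrix from the standard barycentric formula with the negative-sum trick for the diagonal, and exploits the fact that differentiation is exact for polynomials of degree less than $M$.

```python
import math

def cheb_zeros(M):
    # Chebyshev zeros x_j = cos((2j-1)pi/(2M)), j = 1..M, on [-1, 1]
    return [math.cos((2*j - 1)*math.pi/(2*M)) for j in range(1, M + 1)]

def bary_weights(M):
    # Barycentric weights for Chebyshev points of the first kind
    # (up to an irrelevant common factor): w_j = (-1)^(j-1) sin((2j-1)pi/(2M))
    return [(-1)**(j - 1)*math.sin((2*j - 1)*math.pi/(2*M)) for j in range(1, M + 1)]

def diff_matrix(M):
    x, w = cheb_zeros(M), bary_weights(M)
    D = [[0.0]*M for _ in range(M)]
    for i in range(M):
        for j in range(M):
            if i != j:
                D[i][j] = (w[j]/w[i])/(x[i] - x[j])
        # negative-sum trick: rows of D annihilate constants
        D[i][i] = -sum(D[i][j] for j in range(M) if j != i)
    return D

# Differentiation is exact for polynomials of degree < M:
M = 6
x, D = cheb_zeros(M), diff_matrix(M)
p = [xi**3 for xi in x]                                      # p(x) = x^3
dp = [sum(D[i][j]*p[j] for j in range(M)) for i in range(M)]  # ~ 3 x^2 at the nodes
```

Since $p(x)=x^3$ has degree $3<M$, the interpolant coincides with $p$ and the computed nodal derivatives agree with $3x^2$ up to rounding.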
\section{Discussion and outlook} \label{s:outlook}
We have proposed a new approach for the approximation of a nonlinear RE with a system of ODE.
Similarly as in \cite{SIADS2016,EJQTDE2016}, the approach is based on the formulation of the problem as an abstract Cauchy problem on a space of functions and its approximation via pseudospectral techniques.
The novelty here is that the space of functions is chosen as the space $AC_0$, rather than $L^1$. This is done by first integrating the state and then considering the integrated variable as the state variable. A similar idea has been proposed for PDE in \cite{Vietnam2020}.
Compared with the method proposed in \cite{SIADS2016}, this approach returns an approximation that is naturally formulated as a system of ODE, avoiding the algebraic equation emerging from the rule for extension of the RE.
In terms of efficiency, the reduction in computational cost is approximately tenfold, as evidenced in Figure \ref{f:specialRE_time} and Table \ref{t:times}.
The attention here is restricted to the subspace $\fra{AC_0} \subset NBV$ consisting of absolutely continuous functions. In \cite{TwinSemigroups}, the variation-of-constants formula and the construction of the solution operators of linear equations are in fact extended to $NBV$, so one might wonder whether the pseudospectral approximation can be extended from $\fra{AC_0}$ to $NBV$. In the $NBV$ setting, solutions are allowed to make jumps as time proceeds.
To capture the jumps of $v(t)$, the discretized vector $x(t)$ should be allowed to have jumps as well. As a consequence, one cannot in general describe the dynamics of the vector in terms of just a differential equation.
In this case one could consider other techniques, such as spectral or finite element approximations.
Although the construction is carried out in the space $NBV$, we resorted to bounds of the interpolation error in supremum norm (hence in a subspace of $C$) when proving the error bounds in Lemma \ref{l:res}, since the literature about convergence of interpolation in supremum norm is traditionally more extensive than the corresponding one in the $L^1$-norm. We wonder if similar (or even stronger) conclusions can be stated in terms of the $NBV$ norm, possibly exploiting results about the interpolation of $NBV$ functions and bounds in terms of bounded variation norm \cite{MastroianniBook, Trefethen2013, Mastroianni2010}.
In our theoretical results, the condition \eqref{cond_Lebesgue} concerning the asymptotic (for the number of points going to infinity) behavior of the Lebesgue constant plays a crucial role. It is known that for Chebyshev zeros, cf.~\eqref{cheb_zeros}, the condition is satisfied, see \cite[Chapter 1.4.6]{MastroianniBook}. For other meshes the condition is not guaranteed. For instance, for Chebyshev extremal nodes without one or both endpoints it is only known that the quantity at the left-hand side of \eqref{cond_Lebesgue} is bounded for $M$ tending to infinity \cite[Chapter 4.2]{MastroianniBook}.
However, Chebyshev extrema are widely used for pseudospectral differentiation and integration in the bounded interval, and efficient numerical routines exist for the associated differentiation matrix, and interpolation and quadrature weights \cite{Trefethen2000,Weideman2000}.
Our experimental results (not included here) show comparable accuracy and computational times when using Chebyshev zeros or extrema. It would be interesting if either \eqref{cond_Lebesgue} could be verified for a large class of meshes, or the proof of convergence could be restructured such that only a weaker variant of \eqref{cond_Lebesgue}, known to hold for a large class of meshes, is needed.
We proved that every characteristic root of a linear(ized) RE is approximated by the characteristic roots of the corresponding pseudospectral approximation when the dimension $M$ is large enough.
Vice versa, the limit of every convergent sequence of discrete characteristic roots is a ``true'' characteristic root of the delay equation. An open problem is proving that the dimension of the unstable manifold is preserved for $M$ large enough, or alternatively that the infinite- and finite-dimensional problems have the same number of eigenvalues in any right half-plane, if $M$ is large enough.
We also remark that proving convergence of Hopf bifurcations requires not only the convergence of the eigenvalues, but also verifying that the transversality conditions are satisfied at the bifurcation point and that the direction of bifurcation is preserved as $M\to \infty$ \cite{DGG2007,DiekmannBook}.
For DDE, this is done in \cite{Babette}. To adapt the proofs to RE, one should obtain explicit formulas for the direction of Hopf and verify the convergence (see \cite[Remark 2.22]{DGG2007} and \cite{Babette}).
The natural next step beyond the study of equilibria is the approximation of periodic solutions and their convergence as $M\to \infty$. In this case, one should first ensure the convergence of the finite-time solution maps, and hence focus the attention on the initial value problems associated with the RE and its ODE approximation.
A study of the convergence of the solution operators is in the authors' pipeline.
The approximation of stability of the periodic solutions then relies on the approximation of the multipliers of the (time periodic) linearized equations, as formally proved in \cite{Breda2020}.
The analysis of Nicholson's blowflies equation, and in particular Figure \ref{f:Nich-fixed-M}, shows the impossibility of using a discretization of a truncated interval to approximate the behavior of an equation with infinite delay. In this case, techniques specific to the unbounded integration interval seem necessary, along the lines of \cite{AMC2018, IlariaThesis}.
To keep the notation simple, in Section \ref{s:method} we introduced the approximation approach for scalar equations. We stress however that the extension to systems of equations is quite straightforward and can be done along the lines of \cite{SIADS2016}. In fact, the combination of the current method for RE with the pseudospectral discretization of DDE \cite{SIADS2016,BredaSanchez2015} provides a strategy for approximating general systems where a RE is coupled with a DDE, which arise frequently in population dynamics, see e.g.\ \cite{DGM2010}.
\section*{Acknowledgements}
\noindent
\fra{The authors are thankful to two anonymous reviewers for their constructive comments that improved the manuscript.
The research of FS was supported by the NSERC-Sanofi Industrial Research Chair in Vaccine
Mathematics, Modelling and Manufacturing.
FS and RV are members of the INdAM Research group GNCS and of the UMI Research group ``Modellistica Socio-Epidemiologica''.}
\section{Introduction}
The chiral coupled channel approach, implementing exact unitarity in coupled channels
and using input from chiral Lagrangians, has made it possible to extend predictions beyond the
restricted range of energies of chiral perturbation theory and is having a
great impact on the study of the meson baryon interaction at low energies.
At the same time it has shown that many known resonances listed by the Particle Data
Group (PDG)~\cite{Eidelman:2004wy}
qualify as dynamically generated, or in simpler words,
they are quasibound states of a meson and a baryon. After early studies in this
direction showing that the $\Lambda (1405)S_{01}$ and the $N^*(1535)S_{11}$ were dynamically
generated resonances~\cite{Kaiser:1995cy,Kaiser:1996js,angels,Nacher:1999vg,oller,Inoue:2001ip,
bennhold,Garcia-Recio:2002td},
more
systematic studies have shown that there are two octets and one singlet of
resonances from the interaction of the octet of pseudoscalar mesons with the
octet of stable baryons~\cite{Jido:2003cb,Garcia-Recio:2003ks}.
Further work in this direction~\cite{lutz,decu_ss} has
shown that many of the $3/2^-$ low lying baryonic resonances appear as
dynamically generated from the interaction of the decuplet of baryons and the
octet of mesons. Clear peaks in the amplitudes and poles in the complex plane appear
for states that can be associated to the $\Delta(1700)D_{33}$, $\Sigma(1670)D_{13}$,
$\Sigma(1940)D_{13}$ and $\Xi(1820)D_{13}$, while the $N^*(1520)D_{13}$ and $\Lambda(1520)D_{03}$
are reproduced only qualitatively, hinting at the relevance of extra coupled channels.
In particular, the $\Lambda(1520)$ appears displaced in mass, around 1560 MeV in
\cite{lutz} and 1570 MeV in~\cite{decu_ss}.
In the chiral coupled channel approach of Refs.~\cite{lutz,decu_ss}
this resonance couples to the $\pi \Sigma^*(1385)$ and
$K \Xi^*(1530)P_{13}$ channels, particularly to
the former one. With the $\pi^+ \Sigma^{*-}$, $\pi^- \Sigma^{*+}$, $\pi^0 \Sigma^{*0}$ masses
7 MeV above, 2 MeV above and 1 MeV below the nominal $\Lambda(1520)$ mass and the
strong coupling of the resonance to $\pi \Sigma^*$, the state could qualify as a loosely
bound $\pi \Sigma^*$ state.
However, the lack of other relevant channels which couple
to the quantum numbers of the resonance makes the treatment
of~\cite{lutz,decu_ss} only semiquantitative. In
particular, the $\Lambda(1520)$ appears in~\cite{lutz,decu_ss}
at higher energy than the nominal one and with a large width of
about 130 MeV, nearly ten times larger than the physical width.
This large width is a necessary consequence of the large coupling to the
$\pi \Sigma^*$ channel and the fact that the pole appears at energies above
the $\pi \Sigma^*$ threshold. On the other hand, if we modify the
subtraction constants of the meson baryon loop function to bring the pole below
the $\pi \Sigma^*$ threshold, then the pole appears without imaginary part.
Since the width of the $\Lambda(1520)$ resonance comes basically from the
decay into
the $\bar{K} N$ and $\pi \Sigma(1193)$, the introduction of these channels is
mandatory to reproduce the shape of the $\Lambda(1520)$ resonance.
In the present work we include the $\bar{K} N$ and $\pi \Sigma$ channels into
the set of coupled channels which build up the $\Lambda(1520)$.
This is done phenomenologically with no links with chiral Lagrangians.
The novelty with
respect to the other channels already accounted for~\cite{lutz,decu_ss},
which couple in $s$-wave,
is that the new channels couple in $d$-waves. Fitting two parameters to the partial decay widths of the $\Lambda(1520)$
into $\bar{K} N$ and $\pi \Sigma$, a good shape for the $\Lambda(1520)$
dominated
amplitudes is obtained at the right position and with the proper experimental
width. The coupling of the $\Lambda(1520)$ to the $\pi \Sigma^*$
channel is a prediction of the theory and we use this to study
the reaction $K^- p \to \pi \Sigma(1385) (\pi^0 \Lambda)$,
which is closely related to the strength of
this coupling. We then compare with recent
experimental results measured above the $\Lambda(1520)$ energy. The
agreement with the data is good and the cross section is sizeable thanks to
the large coupling of the $\Lambda(1520)$ to the $\pi \Sigma^*$ channel. Other
standard mechanisms for the
$K^- p \to \pi^0 \pi^0 \Lambda$ reaction without the $\Lambda(1520)$
give too small a cross section compared
to experiment in a wide range of energies around the $\Lambda(1520)$ peak.
We also compare the invariant mass distributions for
$\pi^0 \Lambda$, where a distinct peak associated to the $\Sigma(1385)$ resonance
is seen, in good agreement with experiment.
We also make predictions for the cross section for $K^- p$ energies around
the $\Lambda(1520)$, where we find a large peak with the
$\Lambda(1520)$ shape, not measured so far, and which, if confirmed
experimentally, would
give a strong support to the idea of the $\Lambda(1520)$ as a dynamically
generated resonance.
Inclusion of the new channels $\bar{K} N$ and $\pi \Sigma$ into
the coupled channel approach allows one to calculate
the cross sections of other reactions
where the $\Lambda(1520)$ appears, much as has been the case of the
$\Lambda(1405)$~\cite{review}, where
a large variety of reactions could be studied within the chiral unitary approach
taking into account that the $\Lambda(1405)$ is dynamically generated from the
$\bar{K}N$ and coupled channels in $s$-wave.
\section{Decuplet octet interaction and the $\Lambda(1520)$}
Following~\cite{decu_ss}, we briefly recall how the $\Lambda(1520)$ is generated
dynamically in the $s$-wave interaction of the decuplet of baryons with the octet
of pseudoscalar mesons. We consider the lowest order term of the
chiral Lagrangian given by~\cite{manohar}
\begin{equation}
{\cal L}=-i\bar T^\mu {\cal D}\!\!\!\!/ T_\mu
\label{lag1}
\end{equation}
where $T^\mu_{abc}$ is the decuplet of Rarita Schwinger fields and ${\cal D}^{\nu}$ the covariant derivative
given by
\begin{equation}
{\cal D}^\nu T^\mu_{abc}=\partial^\nu T^\mu_{abc}+(\Gamma^\nu)^d_aT^\mu_{dbc}
+(\Gamma^\nu)^d_bT^\mu_{adc}+(\Gamma^\nu)^d_cT^\mu_{abd}
\end{equation}
with $\mu$ the Lorentz index and $a,b,c$ the $SU(3)$ indices.
The vector current $\Gamma^\nu$ is given by
\begin{equation}
\Gamma^\nu=\frac{1}{2}(\xi\partial^\nu \xi^\dagger+\xi^\dagger\partial^\nu \xi)
\end{equation}
with
\begin{equation}
\xi^2=U=e^{i\sqrt{2}\Phi/f}
\end{equation}
where $\Phi$ is the 3$\times$3 matrix of fields for the pseudoscalar
mesons~\cite{Gasser:1984gg} and $f=93$ MeV. Consideration of only the $s$-wave part
of the baryon meson interaction and the use of non-relativistic approximations
as described in detail in~\cite{decu_ss} allows for substantial technical
simplifications, and writing $T_\mu \equiv T u_\mu$, with $u_\mu$ the
Rarita Schwinger spinor, the Lagrangian can be written as the flavor trace
\begin{equation}
{\cal L}=3i\, Tr \left[ \bar{T}\cdot T\, \Gamma^{0T }\right]
\label{lag1xx}
\end{equation}
where
\begin{equation}
\left( \bar{T}\cdot T\right)^{a}_d=\sum_{b,c} \bar{T}^{abc}T_{dbc}
\end{equation}
and $\Gamma^{0T }$ is the transposed matrix of $\Gamma^{0 }$ with
$\Gamma^{\nu}$ given, up to two meson fields, by
\begin{equation}
\Gamma^{\nu} =\frac{1}{4 f^2}\left( \Phi\partial^{\nu}\Phi-\partial^{\nu}\Phi\Phi\right).
\end{equation}
From the Lagrangian of Eq. (\ref{lag1xx}) and with the ordinary correspondence of the
${T}^{abc}$ components to the decuplet fields used in \cite{decu_ss}
we obtain the $s$-wave transition amplitudes for a meson of
incoming and outgoing momentum $k$ and $k^{\,\prime}$ respectively as
\begin{equation}
V_{ij}=-\frac{1}{4f^2}C_{ij}(k^0+k^{^{\,\prime} 0}).
\label{poten}
\end{equation}
For the quantum numbers $S=-1$ and $I=0$ the relevant channels are
$\pi\Sigma^*$ and $K\Xi^*$. The corresponding coefficients $C_{ij}$
are shown in table~\ref{tabS-1I0} where we have used the isospin
states\footnote{we use $|\pi^+\rangle=-|1\ 1\rangle$}
\begin{eqnarray}
|\pi\Sigma^* ;\ I=0\rangle&=&\frac{1}{\sqrt{3}}\ |\pi^-\Sigma^{*+}\rangle
-\frac{1}{\sqrt{3}}\ |\pi^0\Sigma^{*0}\rangle
-\frac{1}{\sqrt{3}}\ |\pi^+\Sigma^{*-}\rangle\nonumber\\
|K\Xi^* ;\ I=0\rangle&=&-\frac{1}{\sqrt{2}}\ |K^0\Xi^{*0}\rangle
+\frac{1}{\sqrt{2}}\ |K^+\Xi^{*-}\rangle~.
\end{eqnarray}
\begin{table}[h]
\begin{center}
\begin{tabular}{c|cc}
\hline
& $\pi\Sigma^*$ & $K\Xi^*$ \\
\hline
$\pi\Sigma^*$ & 4 & $-\sqrt{6}$ \\
$K\Xi^*$ & $-\sqrt{6}$ & 3 \\
\hline
\end{tabular}
\caption{$C_{ij}$ coefficients for $S=-1$, $I=0$.}
\label{tabS-1I0}
\end{center}
\end{table}
The matrix $V$ is then used as the kernel of the Bethe-Salpeter equation to
obtain the unitary transition matrix~\cite{angels}. This results in the matrix equation
\begin{equation}
T=(1-VG)^{-1}V
\label{BS}
\end{equation}
where $G$ is a diagonal matrix representing the meson-baryon loop function
\begin{eqnarray}
G_{l}&=& i \, 2 M_l \int \frac{d^4 q}{(2 \pi)^4} \,
\frac{1}{(P-q)^2 - M_l^2 + i \epsilon} \, \frac{1}{q^2 - m^2_l + i
\epsilon} \nonumber \\
&=& \frac{2 M_l}{16 \pi^2} \left\{ a_l(\mu) + \ln
\frac{M_l^2}{\mu^2} + \frac{m_l^2-M_l^2 + s}{2s} \ln \frac{m_l^2}{M_l^2}
-2i\pi \frac{q_l}{\sqrt{s}}
\right. \nonumber \\ & & \phantom{\frac{2 M}{16 \pi^2}} +
\frac{q_l}{\sqrt{s}}
\left[
\ln(s-(M_l^2-m_l^2)+2 q_l\sqrt{s})+
\ln(s+(M_l^2-m_l^2)+2 q_l\sqrt{s}) \right. \nonumber \\
& & \left. \phantom{\frac{2 M}{16 \pi^2} +
\frac{q_l}{\sqrt{s}}}
\left. \hspace*{-0.3cm}- \ln(s-(M_l^2-m_l^2)-2 q_l\sqrt{s})-
\ln(s+(M_l^2-m_l^2)-2 q_l\sqrt{s}) \right]
\right\},
\label{propdr}
\end{eqnarray}
in which $M_l$ and $m_l$ are the masses of the baryons and mesons respectively,
$s=P^2$, with $P$ the total four momentum of the meson baryon system and $q_l$ denotes the
three-momentum of the meson or baryon in the center of mass frame.
In the second equality we have removed an infinity that one obtains, for instance, when
evaluating the integral with dimensional regularization. In getting
the finite expression at a regularization scale $\mu$, we are implicitly assuming that there is
a higher order counterterm that has canceled the infinity and provided a remnant finite part
which is the subtraction constant $a_l(\mu)$. Inasmuch as $a_l(\mu)$ will be a fit parameter
in the theory, there is no need to use the counterterm Lagrangians explicitly. Alternatively one can
think of a cut off regularization without using higher order terms. A cut off in the three momentum of
the order of 1 GeV is what we would call natural size in this case, and then it was proved in
\cite{oller} that this regularization procedure was equivalent to that of Eq.~(\ref{propdr})
using $\mu\approx 700$ MeV and $a_l(\mu)\approx -2$.
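As a numerical illustration (not part of the original text), the loop function of Eq.~(\ref{propdr}) can be evaluated directly; above threshold all logarithm arguments are positive, and the imaginary part reduces to $-M_l q_l/(4\pi\sqrt{s})$, which gives a simple unitarity check. The sketch below uses approximate $\pi\Sigma^*$ masses and the natural-size values $a=-2$, $\mu=700$ MeV.

```python
import math, cmath

def cm_momentum(sqs, M, m):
    # CM three-momentum from the Kallen function
    s = sqs**2
    return math.sqrt((s - (M + m)**2)*(s - (M - m)**2))/(2.0*sqs)

def G_loop(sqs, M, m, a=-2.0, mu=700.0):
    # Meson-baryon loop function of Eq. (propdr), real axis above threshold
    s = sqs**2
    q = cm_momentum(sqs, M, m)
    dM = M**2 - m**2
    br = (a + math.log(M**2/mu**2)
          + (m**2 - M**2 + s)/(2.0*s)*math.log(m**2/M**2)
          - 2j*math.pi*q/sqs
          + (q/sqs)*(cmath.log(s - dM + 2*q*sqs) + cmath.log(s + dM + 2*q*sqs)
                     - cmath.log(s - dM - 2*q*sqs) - cmath.log(s + dM - 2*q*sqs)))
    return 2.0*M/(16.0*math.pi**2)*br

# pi Sigma* channel (approximate masses, in MeV), above threshold
m_pi, M_sigstar = 138.0, 1385.0
sqs = 1600.0
G = G_loop(sqs, M_sigstar, m_pi)
q = cm_momentum(sqs, M_sigstar, m_pi)
# Unitarity of the loop: Im G = -M q / (4 pi sqrt(s))
```

The only imaginary contribution above threshold comes from the $-2i\pi q_l/\sqrt{s}$ term, so the check is exact up to rounding.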
One then looks for poles of the transition matrix $T$ in the complex $\sqrt{s}$
plane. The complex poles, $z_R$, appear in unphysical Riemann sheets.
The real part and twice the imaginary part of the complex pole position correspond well to the mass
and width, respectively, of the associated Breit-Wigner shape on the real axis when the pole is
evaluated in what we call the second Riemann sheet. This is defined by taking $G_l$ from Eq.~(\ref{propdr})
(which is the expression for $G$ for the first Riemann sheet) and substituting
\begin{equation}
G_l^{2nd}=G_l+2i\,\frac{q_l}{\sqrt{s}}\,\frac{M_l}{4\pi},
\label{defR2}
\end{equation}
where the variables on the right hand side of the above equation are
evaluated in the first (physical) Riemann sheet, for the channels which are above threshold
at an energy equal to $\mathrm{Re}(z)$. This prescription is equivalent to changing the sign of $q_l$
in Eq.~(\ref{propdr}) for those channels.
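The equivalence of the two prescriptions can be checked numerically: evaluating Eq.~(\ref{propdr}) with $q_l \to -q_l$ on the real axis above threshold reproduces $G_l + 2i\,q_l M_l/(4\pi\sqrt{s})$ of Eq.~(\ref{defR2}), since the sum of logarithms is invariant under the sign flip while the $-2i\pi q_l/\sqrt{s}$ term changes sign. A hedged sketch with illustrative $\pi\Sigma^*$ masses (not taken from the text):

```python
import math, cmath

def G_of_q(sqs, q, M, m, a=-2.0, mu=700.0):
    # Eq. (propdr) with the momentum passed explicitly, so its sign can be flipped
    s = sqs**2
    dM = M**2 - m**2
    br = (a + math.log(M**2/mu**2)
          + (m**2 - M**2 + s)/(2.0*s)*math.log(m**2/M**2)
          - 2j*math.pi*q/sqs
          + (q/sqs)*(cmath.log(s - dM + 2*q*sqs) + cmath.log(s + dM + 2*q*sqs)
                     - cmath.log(s - dM - 2*q*sqs) - cmath.log(s + dM - 2*q*sqs)))
    return 2.0*M/(16.0*math.pi**2)*br

m_pi, M_sigstar, sqs = 138.0, 1385.0, 1600.0
s = sqs**2
q = math.sqrt((s - (M_sigstar + m_pi)**2)*(s - (M_sigstar - m_pi)**2))/(2.0*sqs)

G1 = G_of_q(sqs, q, M_sigstar, m_pi)             # first (physical) sheet
G2 = G_of_q(sqs, -q, M_sigstar, m_pi)            # sign of q flipped
G2_def = G1 + 2j*q*M_sigstar/(4.0*math.pi*sqs)   # Eq. (defR2)
```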
\begin{figure}[h]
\includegraphics[width=0.5\textwidth]{lda_old_cplx.eps}
\includegraphics[width=0.5\textwidth]{lda_old_real.eps}
\caption{(Color online) Left: The $\Lambda(1520)$ pole as seen in the $\pi\Sigma^*\rightarrow\pi\Sigma^*$
amplitude in the complex $\sqrt{s}$ plane. Right: $|T_{\pi\Sigma^*\rightarrow\pi\Sigma^*}|^2$ in
units of MeV$^{-2}$.}
\label{polefig1}
\end{figure}
Using the natural size values~\cite{oller}
$a=-2$ and $\mu=700$ MeV, we find
a pole at $z_R=1550-i67$ as seen in fig.~\ref{polefig1},
which we can well associate with the 4-star resonance
$\Lambda(1520)$. The residue at this pole
indicates a strong coupling to the $\pi\Sigma^*$ channel~\cite{decu_ss}.
However, the experimental mass and width are lower and there are
also large branching ratios
of the $\Lambda(1520)$ to the $\bar KN$ and
$\pi\Sigma$ channels.
In the following section we will phenomenologically
add these channels to our coupled channel scheme.
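Before adding those channels, the two-channel model above can be put together in a short numerical sketch (an illustration with approximate masses $m_\pi=138$, $M_{\Sigma^*}=1385$, $m_K=496$, $M_{\Xi^*}=1533$ MeV, and $f=93$ MeV, $a=-2$, $\mu=700$ MeV, none of which are meant as a precise reproduction of the original computation). Since $T^{-1}=V^{-1}-G$ with $V$ real, coupled channel unitarity requires $\mathrm{Im}\,T^{-1}=\mathrm{diag}\big(q_l M_l/(4\pi\sqrt{s})\big)$ above both thresholds, which the sketch verifies.

```python
import math, cmath

PI = math.pi
F = 93.0                      # MeV, pseudoscalar decay constant
CH = [(138.0, 1385.0),        # (meson, baryon) masses: pi Sigma*
      (496.0, 1533.0)]        #                          K  Xi*
C = [[4.0, -math.sqrt(6.0)],  # C_ij coefficients of the table (S=-1, I=0)
     [-math.sqrt(6.0), 3.0]]

def q_cm(sqs, M, m):
    s = sqs**2
    return math.sqrt((s - (M + m)**2)*(s - (M - m)**2))/(2.0*sqs)

def G_l(sqs, M, m, a=-2.0, mu=700.0):
    # Loop function of Eq. (propdr), real axis above threshold
    s, q, dM = sqs**2, q_cm(sqs, M, m), M**2 - m**2
    br = (a + math.log(M**2/mu**2)
          + (m**2 - M**2 + s)/(2.0*s)*math.log(m**2/M**2)
          - 2j*PI*q/sqs
          + (q/sqs)*(cmath.log(s - dM + 2*q*sqs) + cmath.log(s + dM + 2*q*sqs)
                     - cmath.log(s - dM - 2*q*sqs) - cmath.log(s + dM - 2*q*sqs)))
    return 2.0*M/(16.0*PI**2)*br

def T_matrix(sqs):
    # s-wave kernel V_ij = -C_ij (k_i^0 + k_j^0)/(4 f^2), k^0 = meson CM energy
    w = [(sqs**2 + m**2 - M**2)/(2.0*sqs) for (m, M) in CH]
    V = [[-C[i][j]*(w[i] + w[j])/(4.0*F**2) for j in range(2)] for i in range(2)]
    G = [G_l(sqs, M, m) for (m, M) in CH]
    # T = (1 - V G)^{-1} V for the 2x2 system, inverted by hand
    A = [[(1.0 if i == j else 0.0) - V[i][j]*G[j] for j in range(2)] for i in range(2)]
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    Ainv = [[A[1][1]/det, -A[0][1]/det], [-A[1][0]/det, A[0][0]/det]]
    return [[sum(Ainv[i][k]*V[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

sqs = 2100.0                  # above both thresholds (1523 and 2029 MeV)
T = T_matrix(sqs)
detT = T[0][0]*T[1][1] - T[0][1]*T[1][0]
Tinv = [[T[1][1]/detT, -T[0][1]/detT], [-T[1][0]/detT, T[0][0]/detT]]
```

Scanning $|T_{11}|^2$ on the real axis with these inputs should display the broad enhancement of fig.~\ref{polefig1}; here we only assert the exact unitarity relation, which holds for any choice of the subtraction constants.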
\section{Introduction of the $\bar{K} N$ and $\pi\Sigma$ channels}
We will generate the resonance $\Lambda(1520)$ in coupled channels involving
the $\pi\Sigma^*$, $K\Xi^*$, $\bar{K} N$ and $\pi\Sigma$.
However,
we shall only couple the
$\bar{K} N$ and $\pi\Sigma$ channels to the dominant $\pi\Sigma^*$ channel as described below.
The lowest partial wave in which $\bar{K}N$ and $\pi\Sigma$ can couple to spin parity
$3/2^-$ is $L=2$ and thus we consider these states in $d$-wave. From the point of
view of strangeness and isospin other channels like $\eta \Lambda$ and $K \Xi$
would be allowed (and they are considered in the $s$-wave study of $\bar{K} N$ and
coupled channels in \cite{angels,oller,bennhold,Garcia-Recio:2002td,Jido:2003cb,Garcia-Recio:2003ks}.
However, their thresholds are at 1663 MeV and 1880 MeV respectively, such that their influence in the
region around 1520 MeV should be small. In any case, since the $\Lambda(1520)$ does not decay into these
channels, their influence could only be on the mass of the resonance, not on its width, and the mass
will be obtained by fine tuning the subtraction constant of the dominant $\pi\Sigma^*$ channel.
\begin{figure}[h]
\centerline{
\includegraphics[width=0.4\textwidth]{kbarn1.eps}
}
\caption{The $\bar{K} N\rightarrow \pi\Sigma^*$ vertex}
\label{point}
\end{figure}
Consider the transition $\bar{K} N$ ($d$-wave) to $\pi\Sigma^*$ ($s$-wave) as shown in
fig.~\ref{point}. We start with an amplitude of the form
\begin{equation}
-it_{\bar{K} N\rightarrow\pi\Sigma^*}=-i\beta_{\bar{K} N}\ |\vec k|^2 \left[T^{(2)\dagger}
\otimes Y_{2}(\hat{k})
\right]_{0\,0}
\end{equation}
where $ T^{(2)\, \dagger}$ is a (rank 2) spin transition operator
defined by
\[\langle 3/2\ M|\ T^{(2)\dagger}_\mu\ |1/2\ m\rangle
={\cal C}(1/2\ 2\
3/2;m \ \mu \ M)\ \langle 3/2||\ T^{(2) \dagger}\ ||1/2\rangle~,\]
$Y_2(\hat{k})$ is the spherical harmonic coupled to $ T^{(2) \dagger}$ to
produce a scalar, and
$\vec k$ is the momentum of the $\bar{K}$. The third components of spin
of the initial nucleon and the final $\Sigma^*$ are denoted by $m$
and $M$ respectively. Choosing the reduced matrix element appropriately,
we obtain
\begin{equation}
-it_{\bar{K} N\rightarrow\pi\Sigma^*}=-i\beta_{\bar{K} N}\ |\vec k|^2\ {\cal C}(1/2\ 2\
3/2;m,M-m)Y_{2,m-M}(\hat{k})(-1)^{M-m}\sqrt{4\pi}.
\end{equation}
In the same way we
write the amplitude for $\pi\Sigma$ ($d$-wave) to $\pi\Sigma^*$ ($s$-wave) as
\begin{equation}
-it_{\pi\Sigma\rightarrow\pi\Sigma^*}=-i\beta_{\pi\Sigma}\ |\vec k|^2\ {\cal C}(1/2\ 2\
3/2;m,M-m)Y_{2,m-M}(\hat{k})(-1)^{M-m}\sqrt{4\pi}.
\end{equation}
\begin{figure}
\centerline{
\includegraphics[width=0.4\textwidth]{kbarn1_loop.eps}
}
\caption{$\pi\Sigma^*\rightarrow\pi\Sigma^*$ through $\bar{K} N$ loop arising in the Bethe-Salpeter
series}
\label{loop}
\end{figure}
Now, let us consider fig.~\ref{loop}. The loop function $G$ involving
the $\bar{K}$ and $N$ is given by
\begin{eqnarray}
G&=&i\int\frac{d^4q}{(2\pi)^4}\ G_N \ D_{\bar{K}} \ 4\pi\nonumber\\
&&\beta_{\bar{K} N}|\vec{q}|^2\sum_m{\cal C}(1/2\ 2\
3/2;m,M^{\,\prime}-m)Y_{2,m-M^{\,\prime}}(\hat{q})(-1)^{M^{\,\prime}-m}\nonumber\\
&& \beta_{\bar{K} N}|\vec{q}|^2\ {\cal C}(1/2\ 2\
3/2;m,M-m)Y^*_{2,m-M}(\hat{q})(-1)^{M-m}
\label{gkn}
\end{eqnarray}
where $G_N$ and $D_{\bar{K}}$ are the propagators for the nucleon and the $\bar{K}$
respectively. Eq.~(\ref{gkn}) can be further simplified by performing the angular
integration of the two spherical harmonics, which gives $\delta_{MM^{\,\prime}}$, and
then using the orthogonality of the Clebsch-Gordan (CG) coefficients. We obtain
\begin{eqnarray}
G&=&i\ \delta_{MM^{\,\prime}}2 M_N \int\frac{dq^0}{2\pi}\ \int\frac{|\vec q|^2\ d|\vec q|}
{(2\pi)^3}\ 4\pi\
(\beta_{\bar{K} N}|\vec{q}|^2)^2\nonumber
\frac{1}{q^2-m_K^2+i\epsilon}\
\frac{1}{(P-q)^2-M_N^2+i\epsilon}\nonumber\\
&=&
i\ \delta_{MM^{\,\prime}}2 M_N \int \frac{d^4 q}{(2 \pi)^4}\ (\beta_{\bar{K} N}|\vec{q}|^2)^2 \
\frac{1}{q^2 - m^2_K + i \epsilon}\,
\frac{1}{(P-q)^2-M_N^2 + i \epsilon}~.
\label{gkn2}
\end{eqnarray}
A further simplification can be done in Eq. (\ref{gkn2}) by factorizing the vertex,
$ \beta_{\bar{K} N}|\vec{q}|^2$, on shell. This is done in the Bethe Salpeter
approach of Ref. \cite{angels} and justified there for $s$-waves, but one finds
a more general justification in the $N/D$ method as used in \cite{oller,nsd}, which we sketch
below. Unitarity states that, above threshold,
\begin{equation}
\left[ \textrm{Im}\, t^{-1}(s)\right]_{\alpha\beta}=-\frac{q_\alpha M_\alpha}{4\pi\sqrt{s}}\delta_{\alpha\beta}
\end{equation}
Since the right hand side equals $-\textrm{Im}\, G$, one can write a subtracted dispersion
relation for $t^{-1}(s)$ and obtain
\begin{equation}
t^{-1}(s)=G(s)+V^{-1}(s)
\label{w1w2}
\end{equation}
where $G(s)$ contains an arbitrary subtraction constant (like $a$) and $ V^{-1}(s)$
accounts for contact terms which remain at tree level when we remove the loops by
taking $G=0$. Eq.~(\ref{w1w2}) can be cast as
\begin{equation}
t(s)=\left[1-V(s)G(s)\right]^{-1} V(s) \;\Rightarrow \; t(s)= V(s)+ V(s) G(s)t(s)
\end{equation}
where the last equation is the Bethe-Salpeter equation, except that $V(s)$ and $t(s)$ factorize
outside the loop integral in the $VGt$ term. The caveat in the derivation is that we have only
included the right hand cut in the dispersion relation. In as much as the contribution of the left hand
cut
is negligible, which is the case in the meson baryon interaction since the energies of this cut
are very far from those in the real channel \cite{oller}, the on shell factorization is justified.
In fact the caveat is less restrictive because it is sufficient that the energy dependence of the
left cut contribution is negligible in the region of interest to justify the on shell prescription,
and the contribution of the left hand cut can be absorbed into the subtraction constants.
Factorizing the vertex, i.e. $|\vec{q}|^2$, on shell
results in the
simplification that we can use the transition matrix elements
\begin{eqnarray}
V_{\bar{K} N\rightarrow\pi\Sigma^*}&=&\beta_{\bar{K} N}|\vec{q}_{on}|^2\nonumber\\
V_{\pi\Sigma\rightarrow\pi\Sigma^*}&=&\beta_{\pi\Sigma}|\vec{q}_{on}^{\,\prime}|^2
\end{eqnarray}
where $\vec{q}_{on}$ and
$\vec{q}_{on}^{\,\prime}$ are the (on-shell) CM momenta of the $\bar{K}$ and $\pi$
respectively for a given value of $s$. After removing the factor $(\beta_{\bar{K} N}|\vec{q}|^2)^2$ in
Eq.~(\ref{gkn2}), the rest of the formula is the ordinary $G$ function for the
$s$-wave meson baryon interaction, Eq.~(\ref{propdr}). This allows us to use the same
formalism as in ordinary $s$-wave scattering assuming an effective transition
potential $\beta_{\bar{K} N}|\vec{q}_{on}|^2$ for $\pi\Sigma^*\rightarrow\bar{K} N$.
With the matrix $V$ now given by
\begin{equation}
V=\left|
\begin{array}{cccc}
V_{\pi\Sigma^*\rightarrow\pi\Sigma^*} & V_{\pi\Sigma^*\rightarrow K\Xi^*} & \beta_{\bar{K}
N}|\vec{q}_{on}|^2 & \beta_{\pi\Sigma}|\vec{q}^{\,\prime}_{on}|^2 \\
V_{K\Xi^*\rightarrow\pi\Sigma^*} & V_{K\Xi^*\rightarrow K\Xi^*} & 0 & 0 \\
\beta_{\bar{K} N}|\vec{q}_{on}|^2 & 0 & 0 & 0 \\
\beta_{\pi\Sigma}|\vec{q}^{\,\prime}_{on}|^2 & 0 & 0 & 0
\end{array}
\right|~,
\end{equation}
we solve Eq.~(\ref{BS}) to obtain
the amplitudes $T$.
The actual transition amplitudes are related to $T$ through the
following relations
\begin{eqnarray}
t_{\pi\Sigma^*\rightarrow\pi\Sigma^*}&=&T_{\pi\Sigma^*\rightarrow\pi\Sigma^*}\nonumber\\
t_{\bar{K} N\rightarrow\pi\Sigma^*}&=&T_{\bar{K} N\rightarrow\pi\Sigma^*}\ {\cal C}(1/2\ 2\
3/2;m,M-m)Y_{2,m-M}(\hat{k})(-1)^{M-m}\sqrt{4\pi}\nonumber\\
t_{\pi\Sigma\rightarrow\pi\Sigma^*}&=&T_{\pi\Sigma\rightarrow\pi\Sigma^*}\ {\cal C}(1/2\ 2\
3/2;m,M-m)Y_{2,m-M}(\hat{k})(-1)^{M-m}\sqrt{4\pi}\nonumber\\
t_{\bar{K} N\rightarrow\bar{K} N}&=&T_{\bar{K} N\rightarrow\bar{K} N}\ \sum_M{\cal C}(1/2\ 2\
3/2;m,M-m)Y_{2,m-M}(\hat{k})\nonumber\\
&& {\cal C}(1/2\ 2\
3/2;m^{\,\prime},M-m^{\,\prime})Y_{2,m^{\,\prime}-M}^*(\hat{k}^{\,\prime})(-1)^{m^{\,\prime}-m}\ 4\pi~.
\label{actual}
\end{eqnarray}
We then look for poles in the second Riemann sheet of the complex plane. Assuming
that the pole corresponding to the $\Lambda(1520)$ appears at $z=z_R$ where $z$ stands
for the (complex) CM energy, the amplitudes close to the pole
can be written as
\begin{eqnarray}
T_{\pi\Sigma^*\rightarrow\pi\Sigma^*}&=&\frac{g_{\pi\Sigma^*}^2}{z-z_R}\nonumber\\
T_{\bar{K} N\rightarrow\pi\Sigma^*}&=&\frac{g_{\pi\Sigma^*}\ g_{\bar{K} N}}{z-z_R}\nonumber\\
T_{\pi\Sigma\rightarrow\pi\Sigma^*}&=&\frac{g_{\pi\Sigma^*}\ g_{\pi\Sigma}}{z-z_R}
\end{eqnarray}
where the couplings $g_{\pi\Sigma^*}$, $g_{\bar{K} N}$ and $g_{\pi\Sigma}$
can be obtained from the residues at the pole.
Writing the amplitudes for the $\Lambda(1520)$ decay to $\bar{K} N$ and $\pi\Sigma$
respectively as,
\begin{eqnarray}
-it_{\Lambda(1520)\rightarrow\bar{K} N}=-ig_{\bar{K} N}\ {\cal C}(1/2\ 2\
3/2;m,M-m)Y^*_{2,m-M}(\hat{k})(-1)^{M-m}\sqrt{4\pi}\nonumber\\
-it_{\Lambda(1520)\rightarrow\pi\Sigma}=-ig_{\pi\Sigma}\ {\cal C}(1/2\ 2\
3/2;m,M-m)Y^*_{2,m-M}(\hat{k})(-1)^{M-m}\sqrt{4\pi}
\end{eqnarray}
the partial decay widths of the $\Lambda(1520)$ are obtained as,
\begin{eqnarray}
\Gamma_{\bar{K} N}&=&\frac{g_{\bar{K} N}^2}{2\pi}\ \frac{M_N}{M_{\Lambda}}\ k_{\bar{K}}\nonumber\\
\Gamma_{\pi\Sigma}&=&\frac{g_{\pi\Sigma}^2}{2\pi}\ \frac{M_{\Sigma}}{M_{\Lambda}}\ k_\pi
\end{eqnarray}
where $k_{\bar{K}}=|\vec{q}_{on}|=242$ MeV and $k_\pi=|\vec{q}_{on}^{\,\prime}|=263$ MeV.
The partial decay
width to the $\pi\Sigma^*$ channel is zero because the $\Lambda(1520)$ pole is below the threshold
for this channel. Note that $g_{\bar{K} N}$
and $g_{\pi\Sigma}$ automatically incorporate the $\beta_{\bar{K} N}|\vec{q}_{on}|^2$
and $\beta_{\pi\Sigma}|\vec{q}_{on}^{\,\prime}|^2$ of the transition potential since at
least one $\pi\Sigma^*\rightarrow\bar{K} N$ transition is needed in the Bethe Salpeter series.
Hence the term $|\vec{q}_{on}|^4\ k_{\bar{K}}=k_{\bar{K}}^5$ guarantees the $d$-wave character
of the decay.
We vary $\beta_{\bar{K} N}$ and $\beta_{\pi\Sigma}$
to reproduce the experimental partial decay widths of the $\Lambda(1520)$ into
$\bar{K} N$ ($45\%$) and $\pi\Sigma$ ($42\%$) out of a total width of 15.6 MeV, and
simultaneously tune the subtraction constant
$a$ in order to place the pole at the experimental $\Lambda(1520)$ mass. This exercise results in the values
\(|g_{\pi\Sigma^*}|=1.57, \ |g_{\bar{K} N}|=0.54\) and \(|g_{\pi\Sigma}|=0.45\) for the couplings
of the various channels to $\Lambda(1520)$ using
$\beta_{\bar{K} N}=2.4\times 10^{-7}$, $\beta_{\pi\Sigma}=1.7\times 10^{-7}$ in units of
MeV$^{-3}$ and $a=-2.5$, fixing $\mu=700$ MeV. With this we obtain the
$\Lambda(1520)$ pole at the position $z_R=1519.7-i7.9$ as seen in
fig.~\ref{polefig2}.
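The quoted couplings can be recovered by inverting the partial-width formulas above; the following quick check (an illustration, using approximate masses $M_\Lambda \simeq 1519.5$, $M_N \simeq 938.9$, $M_\Sigma \simeq 1193$ MeV together with the momenta quoted in the text) reproduces $|g_{\bar K N}| \approx 0.54$ and $|g_{\pi\Sigma}| \approx 0.45$.

```python
import math

# Partial widths from the branching ratios used in the fit
Gamma_tot = 15.6                                   # MeV
Gamma_KN, Gamma_piSig = 0.45*Gamma_tot, 0.42*Gamma_tot

M_Lambda, M_N, M_Sigma = 1519.5, 938.9, 1193.0     # MeV (approximate values)
k_K, k_pi = 242.0, 263.0                           # MeV, CM momenta from the text

# Invert Gamma = g^2/(2 pi) (M_B/M_Lambda) k for the coupling g
g_KN = math.sqrt(2.0*math.pi*Gamma_KN*M_Lambda/(M_N*k_K))
g_piSig = math.sqrt(2.0*math.pi*Gamma_piSig*M_Lambda/(M_Sigma*k_pi))
# g_KN ~ 0.54, g_piSig ~ 0.45, matching the couplings quoted in the text
```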
\begin{figure}[h]
\includegraphics[width=0.5\textwidth]{lda_new_cplx.eps}
\includegraphics[width=0.5\textwidth]{lda_new_real.eps}
\caption{(Color online) Left: The $\Lambda(1520)$ pole as seen in the $\bar{K} N\rightarrow\pi\Sigma^*$
amplitude in the complex $\sqrt{s}$ plane. Right: $|T_{\bar{K} N\rightarrow\pi\Sigma^*}|^2$ in
(MeV$^{-2}$).}
\label{polefig2}
\end{figure}
The isoscalar part of the amplitudes for specific charge channels can be
obtained using\footnote{we use $|K^-\rangle=-|\frac{1}{2}\,
\frac{1}{2}\rangle$ and $|\Sigma^+\rangle=-|1\,1\rangle$}
\begin{eqnarray}
|\pi\Sigma ;\ I=0\rangle&=&-\frac{1}{\sqrt{3}}\ |\pi^-\Sigma^{+}\rangle
-\frac{1}{\sqrt{3}}\ |\pi^0\Sigma^{0}\rangle
-\frac{1}{\sqrt{3}}\ |\pi^+\Sigma^{-}\rangle\nonumber\\
|\bar{K} N ;\ I=0\rangle&=&\frac{1}{\sqrt{2}}\ |\bar{K}^0 n\rangle
+\frac{1}{\sqrt{2}}\ |K^-p\rangle
\end{eqnarray}
and multiplying the $I=0$ amplitudes obtained above by the relevant CG coefficients.
It is to be noted that $\beta_{\bar{K} N}$ and $\beta_{\pi\Sigma}$ have been fitted to the partial
decay widths. Hence we are not making any prediction for these couplings, or
equivalently $g_{\bar{K} N}$ and $g_{\pi\Sigma}$. However, the
coupling $g_{\pi\Sigma^*}$ is a prediction of the theory, up to small changes
in the fine tuning of the subtraction constant.
\section{The reaction $K^-p\rightarrow\pi^0\Sigma^{*0}(1385)\rightarrow\pi^0\pi^0\Lambda(1116)$}
Here we evaluate the cross-section for the reaction
$K^-p\rightarrow\pi^0\Sigma^{*0}$ generated by the coupled channel scheme and the
subsequent decay of the $\Sigma^{*0}(1385)$ to $\pi^0\Lambda(1116)$
as shown in fig.~\ref{kpfig}.
To obtain the cross-section for $K^-p\rightarrow\pi^0\Sigma^{*0}$ in the $K^-p$ CM
frame we use the formula
\begin{equation}
\frac{d\sigma}{d\Omega}=\frac{1}{16\pi^2}\ \frac{M_NM_{\Sigma^*}}{s}\
\frac{|\vec p_1|}{k}\ \overline{\sum_i}\sum_f \ |t_{K^- p\rightarrow \pi^0 \Sigma^{*0}}|^2
\end{equation}
where $|\vec p_1|$ and $\vec
k=(0,0,k)$ denote the momenta of the outgoing pion and the incoming kaon
respectively.
Using Eq.~(\ref{actual}) and \(Y_{2,m-M}(\hat k)=\sqrt{\frac{5}{4\pi}}\
\delta_{mM}\) and taking into account the CG coefficients we find
\begin{equation}
t_{K^- p\rightarrow \pi^0 \Sigma^{*0}}=\sqrt{\frac{1}{3}}\ T_{\bar{K} N\rightarrow\pi\Sigma^*}\
\delta_{mM}\ \left\{\begin{array}{l}{-1~~~~~~m=+1/2}\\{+1~~~~~~m=-1/2}
\end{array}\right\}
\end{equation}
where $m$ is the spin projection of the proton and $M$ that of the $\Sigma^{*0}$.
The cross section is then given by
\begin{equation}
\sigma=\frac{1}{12\pi}\ \frac{M_NM_{\Sigma^*}}{s}\ \frac{|\vec p_1|}{k}\
|T_{\bar{K} N\rightarrow\pi\Sigma^*}|^2~.
\end{equation}
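The $1/(12\pi)$ prefactor follows from averaging over the two proton spin projections, summing over the $\Sigma^{*0}$ projections (only $M=m$ survives, with weight $1/3$ from the squared coefficient), and integrating the isotropic $\overline{\sum}\sum|t|^2=\frac{1}{3}|T|^2$ over the solid angle. A minimal sketch of this bookkeeping:

```python
import math

# Average over proton spins (factor 1/2); for each m only M = m contributes,
# and |t|^2 carries the factor 1/3 from the squared coefficient.
avg_sum = 0.5 * sum(1.0 / 3.0 for m in (+0.5, -0.5))   # = 1/3

# Isotropic dsigma/dOmega: integrate the 1/(16 pi^2) prefactor over 4 pi.
prefactor = 4.0 * math.pi / (16.0 * math.pi**2) * avg_sum
print(prefactor - 1.0 / (12.0 * math.pi))  # ~ 0
```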
To obtain the cross section for $K^-p\rightarrow\pi^0\Sigma^{*0}\rightarrow\pi^0\pi^0\Lambda$
we now evaluate the Feynman diagram of fig.~\ref{kpfig}
where the $\Sigma^{*0}$ appears as a particle propagator.
\begin{figure}[h]
\centerline{\includegraphics[width=0.5\textwidth]{kptopipilda.eps}}
\caption{Scheme for $K^-p\rightarrow\pi^0\Sigma^{*0}(1385)\rightarrow\pi^0\pi^0\Lambda(1116)$. The
blob indicates the unitarized vertex.}
\label{kpfig}
\end{figure}
The vertex $\Sigma^{*0}\rightarrow\pi^0\Lambda$ is given by~\cite{angels_phi}
\begin{equation}
-it_{\pi^0\Lambda\rightarrow\Sigma^{*0}}=-\frac{f_{\Sigma^*\pi\Lambda}}{m_\pi}\ \vec S^\dagger\cdot
\vec p_2^{\,\prime}
\end{equation}
where $\vec S^\dagger$ is the spin-$1/2$ to spin-$3/2$ transition operator
and the coupling $f_{\Sigma^*\pi\Lambda}$ is fitted to the partial decay width
of 32 MeV for $\Sigma^{*0}\rightarrow\pi^0\Lambda$. Using the $SU(3)$ arguments
of~\cite{angels_phi} one obtains $\frac{f_{\Sigma^*\pi\Lambda}}{m_\pi}
=\frac{6}{5}\frac{D+F}{2f}$.
The amplitude for
the process shown in fig.~\ref{kpfig} in the $K^-p$ CM is then obtained as
\begin{equation}
-it(\vec p_1,\vec p_2)=\frac{-iT_{\bar{K} N\rightarrow\pi\Sigma^*}}{3\sqrt{2}}\
\frac{f_{\Sigma^*\pi\Lambda}/m_\pi}{M_R-M_{\Sigma^*}+i\Gamma_{\Sigma^*}(M_R)/2}\
\left\{\begin{array}{l}{-2p_{2z}^{\,\prime}~~~~~~~~~m^{\,\prime}=+1/2}\\
{p_{2x}^{\,\prime}+ip_{2y}^{\,\prime}~~~~m^{\,\prime}=-1/2}
\end{array}\right\}
\end{equation}
where $m^{\,\prime}$ is the spin projection of the outgoing $\Lambda$.
Here and in the following, we take
the spin projection $m=+1/2$ for the proton.
The $p$-wave decay width of the propagating $\Sigma^*$ is
given by
\begin{equation}
\Gamma_{\Sigma^*}(M_R)=\frac{1}{6\pi}\frac{f_{\Sigma^*\pi\Lambda}^2}{m_\pi^2}\ \frac{M_\Lambda}{M_R}\
|\vec p_2^{\,\prime}|^3
\end{equation}
from where we obtain $f_{\Sigma^*\pi\Lambda}=1.3$ for $M_R=M_{\Sigma^*}$,
which only differs from the $SU(3)$ value given above by about $10\%$.
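This extraction can be reproduced numerically: inverting the width formula above at $\Gamma_{\Sigma^*}=32$ MeV gives $f_{\Sigma^*\pi\Lambda}\approx 1.25$--$1.3$, the exact value depending on the mass conventions adopted for the pion and baryons (the values below are approximate and are an assumption of this sketch).

```python
import math

# Approximate masses in MeV (an assumption; the text does not fix conventions)
m_pi, M_Lam, M_R = 138.0, 1115.7, 1385.0
Gamma = 32.0   # partial width Sigma* -> pi Lambda in MeV

# Two-body decay momentum of the pion in the Sigma* rest frame
lam = (M_R**2 - (M_Lam + m_pi)**2) * (M_R**2 - (M_Lam - m_pi)**2)
p = math.sqrt(lam) / (2.0 * M_R)

# Gamma = (1/(6 pi)) (f/m_pi)^2 (M_Lam/M_R) p^3  =>  solve for f
coeff = (1.0 / (6.0 * math.pi)) * (M_Lam / M_R) * p**3 / m_pi**2
f = math.sqrt(Gamma / coeff)
print(p, f)   # p ~ 208 MeV, f ~ 1.25
```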
The momentum $\vec p_2^{\,\prime}$ of the final pion in the rest system of
the $\Sigma^*$ is obtained as
\begin{equation}
\vec p_2^{\,\prime}=\left[\left(\frac{E_R}{M_R}-1\right)\ \frac{(\vec p_2\cdot\vec p_1)}
{|\vec p_1|^2}+\frac{p_2^0}{M_R}\right]\vec p_1
+\vec p_2
\label{boost}
\end{equation}
where $M_R^2=(p_2+p_\Lambda)^2$ and $E_R^2=\vec p_1^2+M_R^2$.
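Equation~(\ref{boost}) is simply the Lorentz boost of $\vec p_2$ into the rest frame of the $\Sigma^*$, which carries momentum $-\vec p_1$ in the CM and moves with $\gamma=E_R/M_R$. The sketch below (with illustrative numbers that are assumptions of the sketch) verifies the formula by building a configuration directly in the $\Sigma^*$ rest frame, boosting it to the CM by hand, and recovering the original momentum:

```python
import math

def boost_to_sigma_rest(p1, p2, p2_0, M_R):
    """Pion momentum in the Sigma* rest frame, as in the boost equation:
    p2' = [ (E_R/M_R - 1) (p2.p1)/|p1|^2 + p2^0/M_R ] p1 + p2."""
    E_R = math.sqrt(sum(c * c for c in p1) + M_R**2)
    coef = ((E_R / M_R - 1.0) * sum(a * b for a, b in zip(p2, p1))
            / sum(c * c for c in p1) + p2_0 / M_R)
    return [coef * a + b for a, b in zip(p1, p2)]

# Round trip: start from a pion momentum q in the Sigma* rest frame, boost it
# by hand to the CM frame (the Sigma* carries momentum -p1 there), recover q.
m_pi, M_R = 138.0, 1385.0
p1 = [0.0, 0.0, 300.0]            # first-pion CM momentum (illustrative)
q = [50.0, -80.0, 120.0]          # second-pion momentum in the Sigma* rest frame
E_q = math.sqrt(m_pi**2 + sum(c * c for c in q))
E_R = math.sqrt(300.0**2 + M_R**2)
gam, v = E_R / M_R, 300.0 / E_R
p2 = [q[0], q[1], gam * (q[2] - v * E_q)]   # boost along z (= p1 direction)
p2_0 = gam * (E_q - v * q[2])
print(boost_to_sigma_rest(p1, p2, p2_0, M_R))  # recovers q up to rounding
```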
\begin{figure}[h]
\centerline{\includegraphics[width=0.6\textwidth]{cross_chiral.eps}}
\caption{Cross-section as a function of the $K^-$ momentum.}
\label{sigfig1}
\end{figure}
In the next step, the total squared amplitude for $K^-p\rightarrow\pi^0\pi^0\Lambda$ is
symmetrized
in the momenta $\vec p_1$ and $\vec p_2$ to account for the two
$\pi^0$s in the final state so that,
\begin{equation}
|Amp|^2=\sum_{m^{\,\prime}}|t(\vec p_1,\vec p_2)+t(\vec p_2,\vec p_1)|^2~.
\end{equation}
The cross section is then obtained by integrating the above amplitude
over the three-particle phase space (with a factor
1/2 for the identity of the two pions). Details are discussed in the appendix.
The results are shown in fig.~\ref{sigfig1}. The peak in the cross section for
$K^-p\rightarrow\pi^0\pi^0\Lambda$ (solid line) corresponds to the $\Lambda(1520)$. We observe
a fair agreement with the experimental data~\cite{data} in the region of $K^-$
momenta up to about 600 MeV, beyond which other mechanisms for $\pi^0\pi^0\Lambda$ not tied to
$\pi^0\Sigma^{*0}$ production become more relevant, as we shall see.
The cross section for $K^-p\rightarrow\pi^0\Sigma^{*0}$ multiplied by
the $\Sigma^*\rightarrow\Lambda\pi$ branching ratio (=0.88)
is also shown for comparison (dashed line). Recall that the threshold for this reaction lies
just above the peak of the $\Lambda(1520)$. It is interesting to see that there is a
good agreement between the two methods of calculation when we are above the
$\pi^0\Sigma^{*0}$ threshold. However, evaluating the $K^-p\rightarrow\pi^0\Sigma^{*0}$
cross section treating the $\Sigma^{*0}$ as a stable particle gives no cross
section below the $\pi^0\Sigma^{*0}$ threshold; there, the explicit evaluation of
$K^-p\rightarrow\pi^0\pi^0\Lambda$ using the $\Sigma^{*0}$ propagator becomes mandatory and
provides strength below
this threshold. This feature is rather interesting because one can see the shape
of the $\Lambda(1520)$ in the cross section as a function of the $K^-$ momentum.
The strength of this peak is a genuine prediction of the theory, as well as
the strength predicted around
500--600 MeV/c $K^-$ momentum.
It would be instructive to get
data around the energy of the peak, since it would be a clean proof of the
link between the $\Lambda(1520)$ and the $\pi\Sigma^*$ channel which is the basic
prediction of the chiral unitary approach.
\begin{figure}[h]
\centerline{\includegraphics[width=0.8\textwidth]{kptopipilda_c1.eps}}
\caption{A conventional scheme for $K^-p\rightarrow\pi^0\pi^0\Lambda$}
\label{conv1}
\end{figure}
We will now consider other mechanisms, figs.~\ref{conv1} and \ref{conv2}, which
are not tied to the $\Lambda(1520)$ resonance. In
fig.~\ref{conv1} we separate the $K^-p\rightarrow\pi^0\Lambda$ interaction in $s$-wave (a)
and $p$-wave (b), this latter one dominated by the $\Sigma^*$
pole~\cite{jido_pwave}.
Since there is no $s$-wave resonance in $\pi^0\Lambda$ around the energies we investigate, it
is enough to take for $K^-p\rightarrow\pi^0\Lambda$ the lowest order chiral amplitude
in $s$-wave in fig.~\ref{conv1}(a), which we get from~\cite{angels}, and we obtain for the
amplitude of this diagram,
\begin{equation}
-it^{(s-wave)}=\frac{\sqrt{3}}{2}\ \frac{1}{4f^2}\frac{D+F}{2f}
\frac{k^{0\,\prime}+p_2^{0\,\prime}}{E_N(\vec k)-p_1^0-E_N(\vec k+\vec p_1)}
\left\{\begin{array}{l}{p_{1z}~~~~~~~~~~~~~m^{\,\prime}=+1/2}\\{p_{1x}+ip_{1y}~~~~m^{\,\prime}=-1/2}
\end{array}\right\}
\label{swave}
\end{equation}
where $k^{0\,\prime}$ and $p_2^{0\,\prime}$ are the energies of $\vec k$ and
$\vec p_2$ written in the $\pi^0\Lambda$ CM frame.
\begin{figure}[h]
\centerline{\includegraphics[width=.8\textwidth]{kptopipilda_c2.eps}}
\caption{A conventional scheme for $K^-p\rightarrow\pi^0\pi^0\Lambda$}
\label{conv2}
\end{figure}
The amplitude
corresponding to the diagram of fig.~\ref{conv1}(b) is given by
\begin{eqnarray}
-it^{(p-wave)}&=&-\frac{D+F}{2f}\ \frac{f_{\Sigma^*\pi\Lambda}}{m_\pi}\
\frac{f_{K^-p\Sigma^{*0}}}{m_\pi}\ \
\vec S^\dagger\cdot\vec p_2^{\,\prime}\ \vec S\cdot\vec k^{\,\prime}\ \vec\sigma\cdot\vec
p_1\nonumber\\
&&\times\frac{1}{M_R-M_{\Sigma^*}+i\Gamma_{\Sigma^*}(M_R)/2}\
\frac{1}{E_N(\vec k)-p_1^0-E_N(\vec k+\vec p_1)}
\end{eqnarray}
where $f_{K^-p\Sigma^{*0}}$ is given in~\cite{angels_phi} by
\begin{equation}
\frac{f_{K^-p\Sigma^{*0}}}{m_\pi}=-\frac{2\sqrt{3}}{5}\frac{D+F}{2f}
\end{equation}
and
\begin{eqnarray*}
&&
\vec S^\dagger\cdot\vec p_2^{\,\prime}\ \vec S\cdot\vec k^{\,\prime}\ \vec\sigma\cdot\vec
p_1=\nonumber\\
&&\left\{\begin{array}{l}
-\frac{i}{3}\, (\vec p_2^{\,\prime}\times \vec k^{\,\prime})\,\cdot\vec p_1
+\frac{2}{3}\, (\vec p_2^{\,\prime}\cdot\vec k^{\,\prime})\, p_{1z}
+\frac{1}{3}\, (\vec p_1\cdot\vec p_2^{\,\prime})\, k_z^{\,\prime}
-\frac{1}{3}\, (\vec p_1\cdot\vec k^{\,\prime})\, p_{2z}^{\,\prime}~~~~~~~~~~~m^{\,\prime}=+1/2\\
\frac{2}{3}\, (\vec p_2^{\,\prime}\cdot\vec k^{\,\prime})\, (p_{1x}+ip_{1y})
+\frac{1}{3}\, (\vec p_1\cdot\vec p_2^{\,\prime})\, (k_{x}^{\,\prime}+ik_{y}^{\,\prime})
-\frac{1}{3}\, (\vec p_1\cdot\vec k^{\,\prime})\, (p_{2x}^{\,\prime}+ip_{2y}^{\,\prime})~~~~m^{\,\prime}=-1/2
\end{array}\right\}
\end{eqnarray*}
where the boosted momenta $\vec p_2^{\,\prime}$ and $\vec k^{\,\prime}$ are obtained as in
Eq.~(\ref{boost}).
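The spin decomposition above can be checked numerically by reducing the transition operators with the standard identity $\vec S^{\,\dagger}\!\cdot\vec a\ \,\vec S\cdot\vec b=\frac{2}{3}\,\vec a\cdot\vec b-\frac{i}{3}\,\vec\sigma\cdot(\vec a\times\vec b)$ (the convention consistent with the expansion quoted above; its use here is an assumption of this sketch) and comparing against the closed-form matrix elements:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def sdot(v):
    return v[0] * sx + v[1] * sy + v[2] * sz

def st_s(a, b):
    # S^dag . a  S . b reduced to spin-1/2 space: (2/3) a.b - (i/3) sigma.(a x b)
    return (2.0 / 3.0) * np.dot(a, b) * np.eye(2) - (1j / 3.0) * sdot(np.cross(a, b))

rng = np.random.default_rng(1)
p2p, kp, p1 = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)
O = st_s(p2p, kp) @ sdot(p1)   # full operator between proton and Lambda spinors

# Closed-form matrix elements quoted in the text, for proton projection m = +1/2
up = (-1j / 3 * np.dot(np.cross(p2p, kp), p1)
      + 2 / 3 * np.dot(p2p, kp) * p1[2]
      + 1 / 3 * np.dot(p1, p2p) * kp[2]
      - 1 / 3 * np.dot(p1, kp) * p2p[2])
dn = (2 / 3 * np.dot(p2p, kp) * (p1[0] + 1j * p1[1])
      + 1 / 3 * np.dot(p1, p2p) * (kp[0] + 1j * kp[1])
      - 1 / 3 * np.dot(p1, kp) * (p2p[0] + 1j * p2p[1]))
print(abs(O[0, 0] - up), abs(O[1, 0] - dn))  # both ~ 0
```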
Next we study the amplitude corresponding to fig.~\ref{conv2}. As shown in~\cite{hyodo},
the contact term of fig.~\ref{conv2}(b) exactly cancels the part of
fig.~\ref{conv2}(a) which comes from the off-shell part of the meson-meson
amplitude. Hence, using the diagram of fig.~\ref{conv2}(a) with the meson-meson
amplitude calculated on shell accounts for the sum of the two diagrams. We
take the $K^-K^+\rightarrow\pi^0\pi^0$ amplitude from~\cite{ollernpa97} and the $K^-p\Lambda$
vertex from~\cite{angels} and we obtain
\begin{eqnarray}
-it^{(K-pole)}&=&-\frac{1}{4f^2}\left(-\frac{2}
{\sqrt{3}}\frac{D+F}{2f}+\frac{1}{\sqrt{3}}\frac{D-F}{2f}\right)
\frac{(p_1+p_2)^2}{(k-p_1-p_2)^2-m_K^2}\nonumber\\
&&\times\left\{\begin{array}{l}{(k-p_{1z}-p_{2z})~~~~~~~~~~~~~~~~~~~~m^{\,\prime}=+1/2}
\\{-(p_{1x}+p_{2x})-i(p_{1y}+p_{2y})~~~~m^{\,\prime}=-1/2}
\end{array}\right\}~.
\end{eqnarray}
\begin{figure}[h]
\centerline{\includegraphics[width=0.6\textwidth]{cross_all.eps}}
\caption{Cross-section as a function of the $K^-$ momentum. The dot-dashed
and dotted lines are
the contributions of the diagrams of figs.~\ref{conv1}(a) and \ref{conv1}(b)
respectively. The
dashed line shows the cross section with fig.~\ref{kpfig} only
and the solid line for a coherent sum of all these diagrams.}
\label{sigfig2}
\end{figure}
We add all these amplitudes symmetrized to the former ones and recalculate the
cross section. Note that the amplitude $t^{(K-pole)}$ is already symmetric with
respect to the momenta $p_1$ and $p_2$ and does not have to be symmetrized
again. The results are shown in
fig.~\ref{sigfig2}. We find that by themselves the new
mechanisms would give a cross section more than one order of magnitude smaller
than the experiment, up to 600 MeV/c, indicating that the dominant mechanism by far is the one
that we have investigated with the $\pi\Sigma^*$ tied to the $\Lambda(1520)$ resonance.
Added coherently to the dominant mechanism, these new processes
produce a negligible effect around the $\Lambda(1520)$ peak and they become more
visible far away from the resonance where they increase the cross section and
help to get a good agreement with the data.
Details on the new mechanisms are as follows:
a) The kaon pole term of fig.~\ref{conv2}(a) produces a negligible effect in the
cross section not visible in fig.~\ref{sigfig2}. The $K$ propagator reduces the
strength of the diagram and the factor $(p_1+p_2)^2$ from the
$K^+K^-\rightarrow\pi^0\pi^0$ amplitude also contributes to the small size of the term.
b) The term from the diagram of fig.~\ref{conv1}(a) involving the $s$-wave
$K^-p\rightarrow\pi^0\Lambda$ amplitude contributes about one fifth of the total cross
section at the highest energy of fig.~\ref{sigfig2} and adds practically
incoherently to the $K^-p\rightarrow\pi^0\Sigma^{*0}$ mechanism.
c) The term from the diagram of fig.~\ref{conv1}(b) involving the $p$-wave
$K^-p\rightarrow\pi^0\Lambda$ amplitude contributes about one half
of the total cross
section at the highest energy of fig.~\ref{sigfig2} and also adds almost
incoherently to the other mechanisms.
We also calculate the differential cross section $d\sigma/dM^2$ as a function of
the invariant mass of a $\pi^0\Lambda$ pair for two values of the $K^-$ momentum,
which we plot in fig.~\ref{dsdmfig}. We
find a good agreement with the experimental curves in~\cite{data}.
In the figure we can see the $\Sigma^*(1385)$ peak clearly. We also notice, as
in~\cite{data}, that the effect of symmetrization of the amplitudes with respect
to the two final pions is visible in the spectra. Indeed, we see that for
$p_K$=659 MeV/c some strength piles up on the left hand side of the resonance while
for $p_K$=750 MeV/c this strength is moved to higher energies and
produces a shoulder on the right hand side. These features are also clear in the
experimental data.
\begin{figure}[h]
\includegraphics[width=0.5\textwidth]{dsdm2_659.eps}
\includegraphics[width=0.5\textwidth]{dsdm2_750.eps}
\caption{$d\sigma/dM^2$ as a function
of the invariant mass of $\pi^0\Lambda$ for two values of the $K^-$ momentum in the CM frame;
Left: 659 MeV
and Right: 750 MeV. Solid lines represent our results. The dotted histograms are
the experimental results from~\cite{data} normalized to the total experimental
cross section. The dashed lines indicate the phase space normalized to the
theoretical cross section.}
\label{dsdmfig}
\end{figure}
\section{Conclusions}
We have extended the chiral unitary approach for the interaction of the decuplet
of baryons with the octet of mesons, for the case of meson baryon scattering in
the region of the $\Lambda(1520)$ resonance, by including the $\bar{K}N$ and $\pi
\Sigma$ channels which couple in $d$-wave to the main $s$-wave channels
$\pi \Sigma^*(1385)$ and $K \Xi^*(1533)$. The introduction of these channels allowed us
to obtain a more realistic description of the $\Lambda(1520)$ resonance and make
predictions for reactions which evidence the nature of this resonance as a
quasibound $\pi \Sigma^*(1385)$ state. We found a good example in the
$K^- p \to \pi \Sigma^*(1385) (\pi^0 \Lambda)$ reaction which has been measured
recently. We found that the strength of the cross section was well reproduced
in terms of the large coupling of the $\Lambda(1520)$ to $\pi \Sigma^*(1385)$,
which is a prediction of the chiral unitary approach. Both the total cross
sections as well as the invariant mass distributions of $\pi^0 \Lambda$ were
well reproduced. In addition the theory makes predictions for a large peak of
the total cross section of $K^- p \to \pi^0\pi^0 \Lambda$ for $K^- p$ energies around
the $\Lambda(1520)$, and hence below the $\pi \Sigma^*(1385)$ threshold. The
prediction for this cross section is related to the large coupling of the
$\Lambda(1520)$ to $\pi \Sigma^*(1385)$ in spite of the fact that the
$\pi \Sigma^*(1385)$ channel is kinematically closed. This region falls just below
the data measured in the reaction that we analyze. It is then clear that a
measurement of the reactions in this region becomes most advisable, and
confirmation of the quantitative predictions made here would support the
idea of the $\Lambda(1520)$ as a dynamically generated resonance and, by
extension, of the other resonances similarly generated from the interaction of the
decuplet of baryons with the octet of mesons.
\section*{Acknowledgments}
This work is partly supported by DGICYT contract number BFM2003-00856,
and the E.U. EURIDICE network contract no. HPRN-CT-2002-00311.
This research is part of the EU Integrated Infrastructure Initiative
Hadron Physics Project under contract number RII3-CT-2004-506078.
\section{Introduction}
In science and engineering, differential eigenvalue problems occur abundantly. They can arise, for instance, when partial differential equations are solved using the method of separation of variables, and many of them take the form of Sturm-Liouville (SL) differential eigenvalue problems \cite{Amrein2005}. For example, the solution of the wave equation can be expressed as a sum of standing waves, whose frequencies are precisely the eigenvalues of the corresponding Sturm-Liouville problem. Similarly, in quantum mechanics, the energy eigenvalues associated with a Hamiltonian operator are modelled using the time-independent Schr\"odinger equation, which is itself a special case of a Sturm-Liouville differential eigenvalue problem.
Recently, collocation and spectral methods have shown great promise for solving singular Sturm-Liouville differential eigenvalue problems~\cite{Auzinger2006,Chanane2007}. More specifically, the Sinc collocation method (SCM) \cite{Tharwat2013a,Tharwat2013,Jarratt1990a} has been shown to yield exponential convergence. During the last three decades, the SCM has been used extensively to solve many problems in numerical analysis. The applications include numerical integration, linear and non-linear ordinary differential equations, partial differential equations, interpolation and approximations to functions~\cite{Stenger1981,Stenger2000}. The SCM applied to Sturm-Liouville problems consists of expanding the solution of an SL problem using a basis of Sinc functions. By evaluating the resulting approximation at the Sinc collocation points separated by a fixed mesh size $h$, one obtains a matrix eigenvalue problem or generalized matrix eigenvalue problem for which the eigenvalues are approximations to the eigenvalues of the SL operator. In~\cite{Safouhi50-arXiv}, we used the double exponential Sinc collocation method (DESCM) to compute the eigenvalues of singular Sturm-Liouville boundary value problems. The DESCM leads to a generalized eigenvalue problem where the matrices are symmetric and positive-definite. In addition, we demonstrated that the convergence of the DESCM is of the rate ${\cal O} \left( \frac{N^{5/2}}{\log(N)^{2}} e^{-\kappa N/\log(N)}\right)$ for some $\kappa>0$ as $N\to \infty$, where $2N+1$ is the dimension of the resulting generalized eigenvalue system. The DESCM was also applied successfully to the Schr\"odinger equation with anharmonic oscillators~\cite{Safouhi49}.
In the present contribution, we show how a parity symmetry of the Sturm-Liouville operator can be preserved and exploited when converting our differential eigenvalue problem into a matrix eigenvalue problem. Indeed, given certain parity assumptions, the matrices resulting from the DESCM are not only symmetric and positive definite; they are also centrosymmetric. The study of centrosymmetry has a long history \cite{Saibel1942, Cantoni1976a, Cruse1977, A1984, Stuart1988, Hill1990, Muthiyalu1992, Nield1994}. However, the last two decades have seen much research focused on the properties and applications of centrosymmetric matrices, ranging from iterative methods for solving linear equations to least-squares problems to inverse eigenvalue problems~\cite{Melman2000,Tao2002,Lu2002,Abu-jeib2002,Zhou2003a,Zhou2003,Liu2003,Fassbender2003,Trench2004,Trench2004a, Zhongyun2005,Liu2005,Tian2007,Li2012,El-Mikkawy2013}.
Using the eigenspectrum properties of symmetric centrosymmetric matrices presented in \cite{Cantoni1976a}, we apply the DESCM algorithm to Sturm-Liouville eigenvalue problems and demonstrate that solving the resulting generalized eigensystem of dimension $(2N+1)\times(2N+1)$ is equivalent to solving two smaller eigensystems of dimension $N \times N$ and $(N+1) \times (N+1)$. Moreover, we also demonstrate that only $\frac{1}{N+1}$ of all components need to be stored at every iteration in order to obtain all generalized eigenvalues. To illustrate the gain in efficiency obtained by this method, we apply the DESCM to the time-independent Schr\"odinger equation with an anharmonic potential. Furthermore, it is worth mentioning that inverse eigenvalue problems in which the matrices are assumed centrosymmetric have been the subject of much research recently \cite{Trench2004a, Zhou2003a}. Consequently, the combination of these results and our findings could lead to a general approach for solving inverse Sturm-Liouville problems.
All calculations are performed using the programming language Julia and all the codes are available upon request.
\section{Definitions and basic properties}~\label{GenDef}
The sinc function, defined for all $z \in \mathbb{C}$, is given by:
\begin{equation} \label{formula: sinc functions}
\textrm{sinc}(z) = \left\{ \begin{array}{cc} \dfrac{\sin(\pi z)}{\pi z} &\quad \textrm{for} \quad z \neq 0 \\[0.3cm]
1 &\quad \textrm{for} \quad z=0. \end{array} \right.
\end{equation}
For $j \in \mathbb{Z}$ and $h$ a positive number, we define the Sinc function $S(j,h)(x)$ by:
\begin{equation}\label{formula: Sinc function}
S(j,h)(x) = \textrm{sinc}\left( \dfrac{x-jh}{h}\right) \quad \textrm{for} \quad x \in \mathbb{C}.
\end{equation}
The Sinc functions defined in \eqref{formula: Sinc function} form an interpolatory set of functions with the discrete orthogonality property:
\begin{equation}
S(j,h)(kh) = \delta_{j,k} \qquad \textrm{for} \qquad j,k \in \mathbb{Z},
\end{equation}
where $\delta_{j,k}$ is the Kronecker delta function.
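This interpolatory property is immediate from $\sin(\pi(k-j))=0$ for $j\neq k$; a minimal sketch:

```python
import math

def sinc(z):
    # sinc(z) = sin(pi z)/(pi z), with the removable singularity at z = 0
    return 1.0 if z == 0 else math.sin(math.pi * z) / (math.pi * z)

def S(j, h):
    # Sinc basis function S(j,h)(x) = sinc((x - j h)/h)
    return lambda x: sinc((x - j * h) / h)

h = 0.25
vals = [[S(j, h)(k * h) for k in range(-3, 4)] for j in range(-3, 4)]
print(vals)  # Kronecker-delta pattern: S(j,h)(kh) = delta_{j,k}
```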
\begin{definition}\cite{Stenger1981}
Given any function $v$ defined everywhere on the real line and any $h>0$, the symmetric truncated Sinc expansion of $v$ is defined by the following series:
\begin{equation}\label{formula: y in sinc function}
C_{N}(v,h)(x) = \displaystyle \sum_{j=-N}^{N} v_{j,h} \, S(j,h)(x),
\end{equation}
where $v_{j,h} = v(jh)$.
\end{definition}
The Sturm-Liouville (SL) equation in Liouville form is defined as follows:
\begin{align} \label{formula: sturm-liouville problem}
Lu(x) & = - u^{\prime \prime}(x) + q(x) u(x) \,=\, \lambda \rho(x)u(x) \nonumber \\
& \hskip -0.5cm a < x < b \qquad \qquad u(a) = u(b)=0,
\end{align}
where $ -\infty \leq a < b \leq \infty$. Moreover, we assume that the function $q(x)$ is non-negative and the weight function $\rho(x)$ is positive. The values $\lambda$ are known as the eigenvalues of the SL equation.
In \cite{Gaudreau2014a}, we applied the DESCM to obtain an approximation to the eigenvalues $\lambda$ of equation \eqref{formula: sturm-liouville problem}. We first applied Eggert et al.'s transformation to equation \eqref{formula: sturm-liouville problem}, since it was shown that the proposed change of variable results in a symmetric discretized system when using the Sinc collocation method \cite{Eggert1987}. The proposed change of variable is of the form~\cite[Definition 2.1]{Eggert1987}:
\begin{equation}
v(x) = \left(\sqrt{ (\phi^{-1})^{\prime} } \, u \right) \circ \phi(x) \qquad \Longrightarrow \qquad u(x) = \dfrac{ v \circ \phi^{-1}(x)}{\sqrt{ (\phi^{-1}(x))^{\prime}}},
\label{formula: EggertSub}
\end{equation}
where $\phi^{-1}(x)$ is a conformal map of a simply connected domain in the complex plane with boundary points $a\neq b$ such that $\phi^{-1}(a)=-\infty$ and $\phi^{-1}(b)=\infty$.
Applying the change of variable \eqref{formula: EggertSub} into equation~\eqref{formula: sturm-liouville problem}, one obtains~\cite{Eggert1987}:
\begin{equation}\label{formula: transformed sturm-liouville problem}
\mathcal{L} \, v(x) = - v^{\prime \prime}(x) + \tilde{q}(x) v(x) = \lambda \rho(\phi(x))(\phi^{\prime}(x))^{2} v(x)\quad \textrm{with} \quad \lim_{|x| \to \infty} v(x) = 0,
\end{equation}
where:
\begin{equation}\label{formula: q tilde}
\tilde{q}(x) = - \sqrt{\phi^{\prime}(x)} \, \dfrac{{\rm d}}{{\rm d} x} \left( \dfrac{1}{\phi^{\prime}(x)} \dfrac{{\rm d}}{{\rm d} x}( \sqrt{\phi^{\prime}(x)}) \right) + (\phi^{\prime}(x))^{2} q(\phi(x)).
\end{equation}
To implement the double exponential transformation, we use a conformal mapping $\phi(x)$ such that the solution to equation \eqref{formula: transformed sturm-liouville problem} decays double exponentially. In other words, we need to find a function $\phi(x)$ such that:
\begin{equation}
|v(x)| \leq A \exp \left( -B \exp \left( \gamma |x| \right) \right),
\end{equation}
for some positive constants $A,B,\gamma$. Examples of such mappings are given in~\cite{Gaudreau2014a, Mori2001}.
Applying the SCM method, we obtain the following generalized eigenvalue problem:
\begin{align}\label{formula: matrix solution}
\mathcal{L} \, {\bf C}_{N}(v,h) & = {\bf A}{\bf v} \,=\, \mu {\bf D}^{2}{\bf v} \quad \Longrightarrow \quad ({\bf A} - \mu {\bf D}^{2} ){\bf v} \,=\, 0,
\end{align}
where the vectors ${\bf v}$ and ${\bf C}_{N}(v,h)$ are given by:
\begin{equation}
{\bf v} = (v_{-N,h},\ldots, v_{N,h})^{T} \qquad \textrm{and} \qquad {\bf C}_{N}(v,h) = (C_{N}(v,h)(-Nh), \ldots, C_{N}(v,h)(Nh) )^{T},
\label{EQVECTORCN001}
\end{equation}
and $\mu$ are approximations of the eigenvalues $\lambda$ of equation \eqref{formula: transformed sturm-liouville problem}. For more details on the application of the SCM, we refer the readers to \cite{Safouhi50-arXiv}.
As in \cite{Stenger1997a}, we let $\delta^{(l)}_{j,k}$ denote the $l^{\rm th}$ Sinc differentiation matrix with unit mesh size:
\begin{equation}\label{formula: delta matrices}
\delta^{(l)}_{j,k} = h^{l} \left. \left( \dfrac{\rm d}{{\rm d}x} \right)^{l} S(j,h)(x) \right|_{x=kh}.
\end{equation}
The entries $A_{j,k}$ of the $(2N+1) \times (2N+1)$ matrix ${\bf A}$ are then given by:
\begin{equation}\label{formula: A components}
A_{j,k} = -\dfrac{1}{h^{2}} \, \delta^{(2)}_{j,k} + \tilde{q}(kh) \, \delta^{(0)}_{j,k} \qquad {\rm with} \qquad -N \leq j,k \leq N,
\end{equation}
and the entries $D^{2}_{j,k}$ of the $(2N+1) \times (2N+1)$ diagonal matrix ${\bf D}^{2}$ are given~by:
\begin{equation}\label{formula: D components}
D^{2}_{j,k} = (\phi^{\prime}(kh))^{2} \rho(\phi(kh)) \, \delta^{(0)}_{j,k} \qquad {\rm with} \qquad -N \leq j,k \leq N.
\end{equation}
As previously mentioned, Eggert et al.'s transformation leads to the matrices $\,{\bf A}$ and ${\bf D}^2$ to be symmetric and positive definite. However, as will be illustrated in the next section, given certain parity assumptions, these matrices yield even more symmetry.
\section{Centrosymmetric properties of the matrices ${\bf A}$ and ${\bf D}^2$}
In this section, we present some properties of the matrices ${\bf A}$ and ${\bf D}^2$ that will be beneficial in the computation of their eigenvalues. The matrices ${\bf A}$ and ${\bf D}^2$ are symmetric positive definite when equation \eqref{formula: transformed sturm-liouville problem} is discretized using the Sinc collocation method. Additionally, given certain parity assumptions on the functions $q(x)$, $\phi(x)$ and $\rho(x)$ in equation \eqref{formula: transformed sturm-liouville problem}, the matrices ${\bf A}$ and ${\bf D}^2$ will also be centrosymmetric.
\begin{definition}\cite[Section 5.10]{Bransden2000}
Let $\mathcal{J}$ denote the parity operator defined by:
\begin{equation}
\mathcal{J}f(x) = f(-x),
\end{equation}
where $f(x)$ is a well defined function being acted upon by $\mathcal{J}$.
\end{definition}
\begin{definition}
An operator $\mathcal{B}$ is said to commute with parity operator $\mathcal{J}$ if it satisfies the following relation:
\begin{equation}
\mathcal{B}\mathcal{J}f(x) = \mathcal{J}\mathcal{B}f(x).
\end{equation}
Equivalently, we can say that the commutator between $\mathcal{B}$ and $\mathcal{J}$ is zero, that is:
\begin{equation}
[\mathcal{B},\mathcal{J}] = \mathcal{B}\mathcal{J} - \mathcal{J}\mathcal{B} = 0.
\end{equation}
\end{definition}
\begin{definition}\cite[Definition 5]{Weaver1985}
An exchange matrix denoted by ${\bf J}$ is a square matrix with ones along the anti-diagonal and zeros everywhere else:
\begin{equation}
{\bf J} =
\begin{pmatrix}
\multicolumn{1}{c}{\emph{\text{\kern 0em\smash{\raisebox{-0.5ex}{\Large 0}}}}}& & 1 \\
& \iddots & \\
1 & & \multicolumn{1}{c}{\emph{\text{\kern0.5em\smash{\raisebox{0.5ex}{\Large 0}}}}}
\end{pmatrix}.
\end{equation}
\end{definition}
\begin{definition}\cite[Definition 2]{Weaver1985}
Let {\bf B} be a matrix of dimension $(2N+1) \times (2N+1)$ with components $B_{j,k}$ for $-N \leq j,k \leq N$. ${\bf B}$ is centrosymmetric if and only if ${\bf B}$ satisfies the following property:
\begin{equation}\label{formula: centrosymmetric B}
{\bf B}{\bf J}={\bf J}{\bf B},
\end{equation}
where ${\bf J}$ is an exchange matrix of dimension $(2N+1) \times (2N+1)$.
Writing equation~\eqref{formula: centrosymmetric B} in a component form, we have the following relation:
\begin{equation}\label{formula: centrosymmetric B component}
B_{-j,-k} = B_{j,k} \quad \textrm{for} \quad -N\leq j,k\leq N.
\end{equation}
\end{definition}
We now present the following Theorem establishing the connection between symmetries of the Sturm-Liouville operator and its resulting matrix approximation.
\begin{theorem}\label{theorem: H centrosymmetric}
Let $\mathcal{L}$ denote the operator of the transformed Sturm-Liouville problem in equation \eqref{formula: transformed sturm-liouville problem}:
\begin{equation}\label{formula: transformed sturm-liouville problem operator}
\mathcal{L} = \dfrac{1}{\rho(\phi(x))(\phi^{\prime}(x))^{2}} \left( - \dfrac{ d^{2}}{dx^2} + \tilde{q}(x) \right).
\end{equation}
If the commutator $[\mathcal{L}, \mathcal{J}] = 0$, where $\mathcal{J}$ is the parity operator, then the matrices $\,{\bf A}$ and ${\bf D}^2$ defined by equations \eqref{formula: A components} and \eqref{formula: D components} resulting from the DESCM are centrosymmetric.
\end{theorem}
\underline{\bf Proof} \hskip 0.25cm The commutator $[\mathcal{L}, \mathcal{J}] = 0$ if and only if $q(x)$ and $\rho(x)$ are even functions and $\phi(x)$ is an odd function.
If $\phi(x)$ is an odd function, then $\phi^{\prime}(x)$ is even, $ \phi^{\prime \prime}(x)$ is odd and $ \phi^{\prime \prime \prime}(x)$ is even. From this and equation \eqref{formula: q tilde}, it follows that $\tilde{q}(x)$ is even.
In order to show that the resulting matrices $\,{\bf A}$ and ${\bf D}^2$ are centrosymmetric, we demonstrate that both these matrices satisfy equation \eqref{formula: centrosymmetric B component}. Before doing so, it is important to notice that the $l^{\rm th}$ Sinc differentiation matrices defined in equation \eqref{formula: delta matrices} have the following symmetric properties:
\begin{align}
\delta^{(l)}_{-j,-k} & = h^{l} \left. \left( \dfrac{\rm d}{{\rm d}x} \right)^{l} S(-j,h)(x) \right|_{x=-kh} \,= \, \begin{cases}
~~\delta^{(l)}_{j,k} & \textrm{if} \quad \textrm{$l$ is even} \\[0.1cm]
-\delta^{(l)}_{j,k} & \textrm{if} \quad \textrm{$l$ is odd}.
\end{cases}
\end{align}
Hence, the $l^{\rm th}$ Sinc differentiation matrices are centrosymmetric if $l$ is even. It is worth noting that when $l$ is odd, the Sinc differentiation matrices are skew-centrosymmetric \cite{Trench2004}. Consequently, investigating the form for the components of the matrix ${\bf A}$ in equation \eqref{formula: A components}, we obtain:
\begin{eqnarray}
A_{-j,-k} & = & -\dfrac{1}{h^{2}} \, \delta^{(2)}_{-j,-k} + \tilde{q}(-kh) \, \delta^{(0)}_{-j,-k} \nonumber \\
& = & -\dfrac{1}{h^{2}} \, \delta^{(2)}_{j,k} + \tilde{q}(kh) \, \delta^{(0)}_{j,k} \nonumber\\
& = & A_{j,k}.
\end{eqnarray}
Similarly, investigating the form for the components of the matrix ${\bf D}^2$ in equation \eqref{formula: D components}, we obtain:
\begin{eqnarray}
D^{2}_{-j,-k} & = & (\phi^{\prime}(-kh))^{2} \rho(\phi(-kh)) \, \delta^{(0)}_{-j,-k} \nonumber \\
& = & (\phi^{\prime}(kh))^{2} \rho(\phi(kh)) \, \delta^{(0)}_{j,k} \nonumber \\
& = & D^{2}_{j,k}.
\end{eqnarray}
Both matrices ${\bf A}$ and ${\bf D}^2$ satisfy equation \eqref{formula: centrosymmetric B component}. From this it follows that ${\bf A}$ and ${\bf D}^2$ are centrosymmetric.
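The theorem above can be checked numerically. The following sketch (illustrative Python with hypothetical helper names; any linear-algebra environment, such as the Julia setup used later in this paper, would do) assembles ${\bf A}$ and ${\bf D}^2$ for an even $\tilde{q}$, even $\rho$ and odd $\phi$, and verifies centrosymmetry ${\bf J}{\bf A}{\bf J} = {\bf A}$ directly.

```python
import numpy as np

def sinc_descm_matrices(q_tilde, rho, phi, dphi, N, h):
    """Assemble A and D^2 of the DESCM on the symmetric grid x_k = k*h, -N <= k <= N."""
    k = np.arange(-N, N + 1)
    x = k * h
    # second Sinc differentiation matrix delta^{(2)}:
    # -pi^2/3 on the diagonal, -2(-1)^(j-k)/(j-k)^2 off the diagonal
    J = k[:, None] - k[None, :]
    with np.errstate(divide="ignore"):
        d2 = -2.0 * (-1.0) ** J / J.astype(float) ** 2
    np.fill_diagonal(d2, -np.pi ** 2 / 3.0)
    A = -d2 / h ** 2 + np.diag(q_tilde(x))      # components of A
    D2 = np.diag(dphi(x) ** 2 * rho(phi(x)))    # components of D^2
    return A, D2

# even q_tilde, even rho, odd phi  =>  A and D^2 centrosymmetric
A, D2 = sinc_descm_matrices(q_tilde=lambda x: x ** 2,
                            rho=lambda x: np.ones_like(x),
                            phi=np.sinh, dphi=np.cosh, N=8, h=0.3)
Jex = np.flipud(np.eye(A.shape[0]))             # exchange matrix J
print(np.allclose(Jex @ A @ Jex, A), np.allclose(Jex @ D2 @ Jex, D2))  # True True
```

Note that ${\bf A}$ is also symmetric here, since $\delta^{(2)}$ depends only on $|j-k|$.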
Theorem~\ref{theorem: H centrosymmetric} illustrates that Sinc basis functions preserve the parity property of the Sturm-Liouville operator when discretized. Hence, when the matrices $\,{\bf A}$ and ${\bf D}^2$ are symmetric centrosymmetric positive definite matrices, we can utilize these symmetries when solving for their generalized eigenvalues. In \cite{Cantoni1976a}, Cantoni et al. proved several properties of symmetric centrosymmetric matrices. In the following, we will utilize some of these properties to facilitate our task of obtaining approximations to the generalized eigenvalues of the matrices $\,{\bf A}$ and ${\bf D}^2$. The following lemma will demonstrate the internal block structure of symmetric centrosymmetric matrices.
\begin{lemma} \label{lemma: centrosymmetric, symmetric} \cite[Lemma 2]{Cantoni1976a}
If $\, {\bf H}$ is a square symmetric centrosymmetric matrix of dimension $\,(2N+1) \times (2N+1)$, then ${\bf H}$ can be written as:
\begin{equation}\label{formula: H matrix centrosymmetric}
{\bf H} = \left[\begin{array}{cccc} {\bf S} & {\bf x} & {\bf C^{T}} \\
{\bf x^{T}} & h & {\bf x^{T}}{\bf J}\\
{\bf C} & {\bf J}{\bf x} & {\bf J}{\bf S}{\,\bf J}\end{array}\right],
\end{equation}
where ${\bf S}, {\bf C}$ are matrices of size $N \times N$, $ {\bf J}$ is the exchange matrix of size $N \times N$, ${\bf x} $ is a column vector of length $N$ and $h$ is a scalar. In addition, ${\bf S^{T}} = {\bf S}$ and ${\bf C^{T}} = {\bf J}{\bf C}{\bf J}$.
\end{lemma}
The next lemma simplifies the calculation needed to solve for these eigenvalues.
\begin{lemma}\cite[Lemma 3]{Cantoni1976a}\label{lemma: Ortho similar}
Let ${\bf H}$ be a square symmetric centrosymmetric matrix as defined in Lemma \ref{lemma: centrosymmetric, symmetric} and let ${\bf V}$ be a square matrix of dimension $(2N+1) \times (2N+1)$ defined by:
\begin{equation}
{\bf V} = \left[\begin{array}{cccc} {\bf S-JC} & {\bf 0} & {\bf 0} \\
{\bf 0} & h & \sqrt{2}~{\bf x^{T}}\\
{\bf 0} & \sqrt{2}~{\bf x} & {\bf S+JC}\end{array}\right] ,
\end{equation}
then ${\bf H} $ and ${\bf V}$ are orthogonally similar. That is, the matrices ${\bf H} $ and ${\bf V}$ have the same Jordan normal form and thus the same eigenvalue spectrum.
\end{lemma}
Cantoni et al. use Lemmas~\ref{lemma: centrosymmetric, symmetric} and \ref{lemma: Ortho similar} to prove the following Theorem concerning a standard eigenvalue problem where the matrix is centrosymmetric.
\begin{theorem} \label{theorem: symmetric centrosymmetric}\cite[Theorem 2]{Cantoni1976a}
Let ${\bf H}$ be a square symmetric centrosymmetric matrix as defined in Lemma \ref{lemma: centrosymmetric, symmetric}, then solving the eigenvalue problem $\det( {\bf H} - \lambda {\bf I} )=0$ is equivalent to solving the two smaller eigenvalue problems:
\begin{equation}
\det({\bf S-JC} -\lambda {\bf I} ) = 0 \quad \textrm{and} \quad
\det \left( \left[ \begin{array}{cccc} h & \sqrt{2}~{\bf x^{T}}\\
\sqrt{2}~{\bf x} & {\bf S+JC}\end{array}\right] - \lambda \left[ \begin{array}{cccc} 1 & {\bf 0}\\
{\bf 0} & {\bf I}\end{array}\right] \right) = 0.
\end{equation}
\end{theorem}
Since our problem consists of solving a generalized eigenvalue problem in which one matrix is a full symmetric centrosymmetric matrix and the other is a diagonal centrosymmetric matrix, we propose the following theorem.
\begin{theorem}\label{theorem: symmetric centrosymmetric generalized}
Let ${\bf H}$ and ${\bf W}$ be square symmetric centrosymmetric matrices of the same size, such that:
\begin{equation}
{\bf H} = \left[\begin{array}{cccc} {\bf S} & {\bf x} & {\bf C^{T}} \\
{\bf x^{T}} & h & {\bf x^{T}}{\bf J}\\
{\bf C} & {\bf J}{\bf x} & {\bf J}{\bf S}{\,\bf J}\end{array}\right] \qquad \textrm{and} \qquad
{\bf W} = \left[\begin{array}{cccc} \diag({\bf w}) & {\bf 0} & {\bf 0} \\
{\bf 0} & w & {\bf 0}\\
{\bf 0} & {\bf 0} & {\bf J}\diag({\bf w}){\bf J}\end{array}\right] ,
\end{equation}
then solving the generalized eigenvalue problem $\det( {\bf H} - \lambda {\bf W} )=0$ is equivalent to solving the two smaller generalized eigenvalue problems:
\begin{equation}
\det({\bf S-JC} -\lambda \diag({\bf w}) ) = 0 \quad \textrm{and} \quad
\det \left( \left[ \begin{array}{cccc} h & \sqrt{2}~{\bf x^{T}}\\
\sqrt{2}~{\bf x} & {\bf S+JC}\end{array}\right] - \lambda \left[ \begin{array}{cccc} w & {\bf 0}\\
{\bf 0} & \diag({\bf w})\end{array}\right] \right) = 0.
\end{equation}
\end{theorem}
\underline{\bf Proof} \hskip 0.25cm This proof relies on the orthogonal transformation matrix presented in \cite[Lemma 3]{Cantoni1976a}:
\begin{equation}
{\bf K} = \dfrac{1}{\sqrt{2}}\left[\begin{array}{cccc} {\bf I} & {\bf 0} & -{\bf J} \\
{\bf 0} & \sqrt{2} & {\bf 0}\\
{\bf I} & {\bf 0} & {\bf J}\end{array}\right],
\end{equation}
where ${\bf I}$ is the identity matrix and ${\bf J}$ is the exchange matrix.
It is easy to verify that:
\begin{equation}
{\bf K}{\bf H}{\bf K^{T}} = {\bf V},
\end{equation}
where ${\bf V}$ is the matrix in Lemma \ref{lemma: Ortho similar}.
This result is analogous for the matrix ${\bf W}$ with a change in notation. Hence:
\begin{align}
0 = & \det( {\bf H} - \lambda {\bf W} ) \nonumber\\
= & \det( {\bf K} ) \det( {\bf H} - \lambda {\bf W} )\det( {\bf K^{T}} ) \nonumber \\
= & \det( {\bf KHK^{T}} - \lambda {\bf KWK^{T}} ) \nonumber \\
= & \det \left( \left[\begin{array}{cccc} {\bf S-JC} & {\bf 0} & {\bf 0} \\
{\bf 0} & h & \sqrt{2}~{\bf x^{T}}\\
{\bf 0} & \sqrt{2}~{\bf x} & {\bf S+JC}\end{array}\right]- \lambda \left[\begin{array}{cccc} \diag({\bf w}) & {\bf 0} & {\bf 0} \\
{\bf 0} & w & {\bf 0}\\
{\bf 0} & {\bf 0} & \diag({\bf w})\end{array}\right] \right) \nonumber \\
= & \det({\bf S-JC} -\lambda \diag({\bf w}) )\det \left( \left[ \begin{array}{cccc} h & \sqrt{2}~{\bf x^{T}}\\
\sqrt{2}~{\bf x} & {\bf S+JC}\end{array}\right] - \lambda \left[ \begin{array}{cccc} w & {\bf 0}\\
{\bf 0} & \diag({\bf w})\end{array}\right] \right),
\end{align}
from which the result follows.
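Theorem \ref{theorem: symmetric centrosymmetric generalized} can be verified numerically. The sketch below (illustrative Python with a hypothetical helper name; the block layout follows the lemma above, with index $0,\ldots,2N$ corresponding to $-N,\ldots,N$) splits a random symmetric centrosymmetric pencil and compares the resulting spectrum with that of the full problem.

```python
import numpy as np

def split_generalized_eigs(H, Wdiag):
    """Eigenvalues of det(H - lam*W) = 0 via the two smaller problems of the
    theorem, for H symmetric centrosymmetric and W = diag(Wdiag) centrosymmetric."""
    n = H.shape[0]
    N = (n - 1) // 2
    S, x, hc, C = H[:N, :N], H[:N, N], H[N, N], H[N + 1:, :N]
    J = np.flipud(np.eye(N))                       # exchange matrix
    w, w0 = Wdiag[:N], Wdiag[N]
    # first problem: det(S - JC - lam*diag(w)) = 0
    lam1 = np.linalg.eigvals(np.linalg.solve(np.diag(w), S - J @ C))
    # second problem: bordered (N+1) x (N+1) pencil
    M2 = np.block([[np.array([[hc]]), np.sqrt(2.0) * x[None, :]],
                   [np.sqrt(2.0) * x[:, None], S + J @ C]])
    lam2 = np.linalg.eigvals(np.linalg.solve(np.diag(np.r_[w0, w]), M2))
    return np.sort(np.real(np.r_[lam1, lam2]))

# random symmetric centrosymmetric H and positive centrosymmetric diagonal W
rng = np.random.default_rng(1)
N = 6; n = 2 * N + 1
G = rng.standard_normal((n, n)); G = (G + G.T) / 2
Jf = np.flipud(np.eye(n)); H = (G + Jf @ G @ Jf) / 2
a = rng.uniform(1.0, 2.0, N); Wdiag = np.r_[a, 1.5, a[::-1]]
full = np.sort(np.real(np.linalg.eigvals(np.linalg.solve(np.diag(Wdiag), H))))
print(np.allclose(full, split_generalized_eigs(H, Wdiag)))  # True
```

The two smaller eigenproblems together reproduce the full spectrum exactly (up to floating-point error), as the theorem asserts.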
Theorem \ref{theorem: symmetric centrosymmetric generalized} is very useful when $N$ is large, since it is less costly to solve two symmetric generalized eigensystems of dimensions $N \times N$ and $(N+1) \times (N+1)$ than one symmetric generalized eigensystem of dimension $(2N+1) \times (2N+1)$. Additionally, Lemma \ref{lemma: centrosymmetric, symmetric} has significant ramifications for storage. As is discussed in \cite{Stenger1997a}, the $l^{\rm th}$ Sinc differentiation matrices are symmetric Toeplitz matrices; hence, for a symmetric Toeplitz matrix of dimension $(2N+1) \times (2N+1)$, only $2N+1$ elements need to be stored. Investigating the definition of the matrix ${\bf A}$ in equation \eqref{formula: A components}, we can see that ${\bf A}$ is the sum of a symmetric Toeplitz matrix and a diagonal matrix. Moreover, from Lemma \ref{lemma: centrosymmetric, symmetric} and Theorem \ref{theorem: symmetric centrosymmetric generalized}, using only the antidiagonal and anti-upper triangular half of the matrix ${\bf C}$, the vector ${\bf x}$, the scalar $h$, the diagonal and lower triangular half of the matrix ${\bf S}$, the vector ${\bf w}$ and the scalar $w$, we can generate all the elements needed to solve for the generalized eigenvalues of the matrices ${\bf A}$ and ${\bf D}^2$. Hence, the proportion of entries that must be computed and stored at each iteration $N$ in order to solve for these eigenvalues is given by:
\begin{equation}
\textrm{Proportion of Entries Needed} \,=\, \dfrac{ \left(2N + N+1 \right) + \left(N +1\right) }{(2N+1)^2 + (2N+1)} \,=\, \dfrac{1}{N+1}.
\end{equation}
Thus, only $\frac{1}{N+1}$ of the entries need to be generated and stored at every iteration to obtain all of the generalized eigenvalues.
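For completeness, the arithmetic behind this ratio simplifies as follows:

```latex
\[
\frac{\left(2N + N + 1\right) + \left(N+1\right)}{(2N+1)^2 + (2N+1)}
 \,=\, \frac{4N+2}{(2N+1)(2N+2)}
 \,=\, \frac{2(2N+1)}{2(2N+1)(N+1)}
 \,=\, \frac{1}{N+1}.
\]
```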
In the following section, we will illustrate the gain in efficiency of these results by applying the DESCM to the Schr\"odinger equation with an anharmonic oscillator.
\section{The anharmonic oscillator}
The time independent Schr\"{o}dinger equation is given by:
\begin{equation}\label{formula:Schrodinger equation}
{\cal H} \, \psi(x) \, \,=\, E \, \psi(x) \qquad \textrm{with} \qquad \lim_{|x|\to \infty}\psi(x) =0.
\end{equation}
In equation \eqref{formula:Schrodinger equation}, the Hamiltonian is given by the following linear operator:
$$
{\cal H} = -\dfrac{{\rm d}^2}{{\rm d} x^2} +V(x),
$$
where $V(x)$ is the potential energy function and $E$ is the energy eigenvalue of the Hamiltonian operator ${\cal H}$. In our case, we are treating the anharmonic oscillator potential $V(x)$ defined by:
\begin{equation}\label{formula: anharmonic oscillator}
V(x)= \displaystyle \sum_{i=1}^{m}c_{i}x^{2i} \qquad \textrm{with} \qquad c_{m} > 0 \quad \textrm{and} \quad m \in \mathbb{N}\backslash\{1\}.
\end{equation}
In \cite{Safouhi49}, we successfully applied the DESCM to the time independent Schr\"{o}dinger equation with an anharmonic potential. As we can see, the time independent Schr\"{o}dinger equation \eqref{formula:Schrodinger equation} is a special case of a Sturm-Liouville equation with $q(x)=V(x)$ and $\rho(x)=1$. Applying Eggert et al.'s transformation and the DESCM with $\phi(x) =\sinh(x)$, we arrived at the following generalized eigenvalue problem:
\begin{equation}
\det({\bf A}-\mathcal{E}{\bf D}^{2}) = 0,
\label{formula: matrix solution-DET}
\end{equation}
where the values $\mathcal{E}$ are approximations of the energy eigenvalues $E$.
The matrices ${\bf A}$ and ${\bf D}^{2}$ defined by equations \eqref{formula: A components} and \eqref{formula: D components} are given by:
\begin{equation}\label{formula: A components Schrodinger}
A_{j,k} = -\left(\dfrac{1}{h^{2}}\right)\delta^{(2)}_{j,k} \, + \tilde{V}(kh)\delta^{(0)}_{j,k} \quad {\rm with} \quad -N \leq j,k \leq N,
\end{equation}
where:
\begin{equation}
\tilde{V}(x) \,=\, \dfrac{1}{4} - \dfrac{3}{4} \, \sech^{2}(x) + \cosh^{2}(x) \displaystyle \sum_{i=1}^{m}c_{i}\sinh^{2i}(x),
\end{equation}
and:
\begin{equation}\label{formula: D components Schrodinger}
D^2_{j,k} = \cosh^{2}(kh) \, \delta^{(0)}_{j,k} \qquad {\rm with} \qquad -N \leq j,k \leq N.
\end{equation}
Since the anharmonic potential $V(x)$ defined in \eqref{formula: anharmonic oscillator} is an even function, the function $\rho(x) =1$ is an even function and the conformal map $\phi(x) =\sinh(x)$ is an odd function, we know that Theorem \ref{theorem: H centrosymmetric} applies. Hence, the matrices ${\bf A}$ and ${\bf D}^{2}$ are symmetric centrosymmetric.
\section{Numerical Discussion}
In this section, we test the computational efficiency of the results obtained in Theorem \ref{theorem: symmetric centrosymmetric generalized}. All calculations are performed using the programming language Julia in double precision. The eigenvalue solvers in Julia use the linear algebra package {\it LAPACK}.
In \cite{Chaudhuri1991a}, Chaudhuri et al. presented several potentials with known analytic solutions for energy levels calculated using supersymmetric quantum mechanics, namely:
\begin{equation}\label{formula: true value energy}
\begin{array}{lllll}
V_{1}(x) & = & x^2 -4x^4+x^6 & \Rightarrow & E_{0} = -2 \\
V_{2}(x) & = & 4x^2 -6x^4+x^6 & \Rightarrow & E_{1} = -9 \\
V_{3}(x) & = & (105/64) x^2-(43/8)x^4 + x^6 -x^8 +x^{10} &\Rightarrow & E_{0} = 3/8 \\
V_{4}(x) & = & (169/64)x^2 -(59/8)x^4 + x^6 -x^8 + x^{10} &\Rightarrow & E_{1} = 9/8.
\end{array}
\end{equation}
Figure~\ref{figure: potentials} presents the absolute error between our approximation and the exact values given in~\eqref{formula: true value energy}. The absolute error is defined by:
\begin{equation}
{\rm Absolute \, \, error} = \left| \mathcal{E}_{l}(N) - {\rm Exact \, \, value} \right| \qquad \textrm{for} \qquad l =0,1.
\end{equation}
The optimal mesh size obtained in \cite{Safouhi49}:
\begin{equation}\label{formula: Anharmonic optimal mesh size}
h = \dfrac{W \left( \frac{2^{m} \pi^{2}(m+1)N}{\sqrt{c_{m}}} \right)}{(m+1)N},
\end{equation}
where $W(x)$ is the Lambert W function, is used in the calculation.
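The full DESCM recipe for the anharmonic oscillator can be sketched as follows. This is an illustrative Python version (the computations reported in this paper were performed in Julia); the function names are hypothetical, a simple Newton iteration stands in for a library Lambert W routine, and the pencil $({\bf A},{\bf D}^2)$ is reduced symmetrically via ${\bf D}^{-1}{\bf A}{\bf D}^{-1}$ since ${\bf D}^2$ is diagonal and positive definite. For $V_1(x)=x^2-4x^4+x^6$ the smallest generalized eigenvalue should approach the exact ground-state energy $E_0=-2$.

```python
import numpy as np

def lambert_w(z, tol=1e-14):
    # Newton iteration for the principal branch of W on z > 0
    w = np.log(1.0 + z)
    for _ in range(100):
        ew = np.exp(w)
        step = (w * ew - z) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

def descm_energies(V, m, c_m, N):
    """Generalized eigenvalues of (A, D^2) for the Schrodinger equation with
    phi(x) = sinh(x), using the optimal mesh size h = W(2^m pi^2 (m+1) N / sqrt(c_m)) / ((m+1) N)."""
    h = lambert_w(2.0 ** m * np.pi ** 2 * (m + 1) * N / np.sqrt(c_m)) / ((m + 1) * N)
    k = np.arange(-N, N + 1)
    x = k * h
    J = k[:, None] - k[None, :]
    with np.errstate(divide="ignore"):
        d2 = -2.0 * (-1.0) ** J / J.astype(float) ** 2
    np.fill_diagonal(d2, -np.pi ** 2 / 3.0)
    # transformed potential V~(x) = 1/4 - (3/4) sech^2(x) + cosh^2(x) V(sinh(x))
    Vt = 0.25 - 0.75 / np.cosh(x) ** 2 + np.cosh(x) ** 2 * V(np.sinh(x))
    A = -d2 / h ** 2 + np.diag(Vt)
    d = np.cosh(x)                      # D^2 = diag(cosh^2(kh))
    B = A / np.outer(d, d)              # symmetric reduction of the pencil (A, D^2)
    return np.sort(np.linalg.eigvalsh(B))

E = descm_energies(lambda s: s ** 2 - 4.0 * s ** 4 + s ** 6, m=3, c_m=1.0, N=40)
print(E[0])   # should be close to the exact ground-state energy E_0 = -2
```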
As can be seen from Figure~\ref{figure: potentials}, the DESCM converges rapidly for all four potentials, and exploiting the centrosymmetric property significantly reduces the cost of computing these eigenvalues.
\section{Conclusion}
Sturm-Liouville eigenvalue problems are abundant in scientific and engineering applications. In certain applications, these problems possess a symmetry structure which results in the Sturm-Liouville operator commuting with the parity operator. As was proven in Theorem \ref{theorem: H centrosymmetric}, applying the DESCM preserves this symmetry and results in a generalized eigenvalue problem whose matrices are symmetric centrosymmetric. The centrosymmetric property leads to a substantial reduction in the computational cost of computing the eigenvalues, by splitting the original eigenvalue problem of dimension $(2N+1)\times(2N+1)$ into two smaller generalized eigensystems of dimensions $N\times N$ and $(N+1) \times (N+1)$. Moreover, due to the internal block structure of the matrices obtained using the DESCM, we have shown that only $\dfrac{1}{N+1}$ of all entries need to be computed and stored at every iteration in order to find all of their eigenvalues. Numerical results are presented for the time independent Schr\"odinger equation \eqref{formula:Schrodinger equation} with an anharmonic oscillator potential \eqref{formula: anharmonic oscillator}. Four potentials with known exact eigenvalues are tested, and the results clearly demonstrate the reduction in complexity and the rapid convergence.
\section{Tables and Figures}
\begin{figure}[!ht]
\begin{center}
\begin{tabular}{cc} \includegraphics[width=0.35\textwidth]{AbsErrorP1} & \includegraphics[width=0.35\textwidth]{AbsErrorP2} \\
(a) & (b) \\
\includegraphics[width=0.35\textwidth]{AbsErrorP3} & \includegraphics[width=0.35\textwidth]{AbsErrorP4} \\
(c) & (d)
\end{tabular}
\caption{Absolute error for the potentials $V_{i}(x)$ for $i=1,2,3,4$ given by~\eqref{formula: true value energy} with $\phi(x) = \sinh(x)$. \newline
(a)~$V_{1}(x) = x^2 -4x^4+x^6$ with exact eigenvalue $E_{0} = -2$. (b)~$V_{2}(x) = 4x^2 -6x^4+x^6$ with exact eigenvalue $E_{1} = -9$. (c)~$V_{3}(x) = (105/64) x^2-(43/8)x^4 + x^6 -x^8 +x^{10}$ with exact eigenvalue $E_{0} = 3/8$. (d)~$V_{4}(x) = (169/64)x^2 -(59/8)x^4 + x^6 -x^8 + x^{10} $ with exact eigenvalue $E_{1} = 9/8$.}
\label{figure: potentials}
\end{center}
\end{figure}
\clearpage
\label{sec:1}
Morse functions on spheres with exactly two singular points are central objects in Reeb's theorem: a closed manifold admits such a function if and only if it is homeomorphic to a $k$-dimensional sphere for $k \neq 4$ or diffeomorphic to the $4$-dimensional unit sphere.
For general theory of Morse functions including these functions, see also \cite{milnor,milnor2}.
The class of special generic maps is a certain class of smooth maps whose codimensions are not positive; this class contains these functions and the canonical projections of unit spheres as the simplest examples.
More rigorously, a smooth map from an $m$-dimensional smooth manifold with no boundary into an $n$-dimensional manifold with no boundary is a special generic map if at each {\it singular} point
it is represented by the form\\
$(x_1, \cdots, x_m) \mapsto (x_1,\cdots,x_{n-1},\sum_{j=1}^{m-n+1} {x_{j+n-1}}^2)$ ($m \geq n \geq 1$)
for suitable local coordinates: a {\it singular} point of a differentiable map means a point in the manifold of the domain at which the rank of the differential is smaller than both the dimension of the domain and that of the target.
Note also that special generic maps are so-called {\it fold maps}. Morse functions are also fold maps. Related theory on singularities of differentiable functions and maps is presented systematically in \cite{golubitskyguillemin}, for example.
\cite{burletderham,calabi,furuyaporto} are pioneering studies on special generic maps. Since the 1990s, \cite{saeki} and related studies such as \cite{nishioka,saeki2,sakuma,sakuma2,saekisakuma,saekisakuma2,wrazidlo,wrazidlo2} have discovered interesting phenomena closely related to algebraic topology and differential topology of manifolds.
Reeb's theorem is a kind of characterization theorems of certain classes of (closed) manifolds. The present paper is on variants of this theorem.
${\mathbb{R}}^k$ denotes the $k$-dimensional Euclidean space (${\mathbb{R}}^1$ is denoted by $\mathbb{R}$ usually). It is a smooth manifold and it admits a natural Riemannian metric: the {\it standard Euclidean} metric. For $x \in {\mathbb{R}}^k$, $||x|| \geq 0$ denotes the distance between $x$ and the origin $0 \in {\mathbb{R}}^k$ where the underlying metric is the standard Euclidean metric. $S^k:=\{x \in {\mathbb{R}}^{k+1}\mid ||x||=1\}$ is the $k$-dimensional unit sphere and a $k$-dimensional compact smooth closed submanifold with no boundary in ${\mathbb{R}}^{k+1}$. $D^k:=\{x \in {\mathbb{R}}^{k}\mid ||x|| \leq 1\}$ is the $k$-dimensional unit disk and a $k$-dimensional connected and compact smooth submanifold in ${\mathbb{R}}^{k}$. A {\it homotopy sphere} means a smooth manifold which is homeomorphic to a (unit) sphere. If it is diffeomorphic to a unit sphere, then it is said to be a {\it standard} sphere. Known smooth manifolds homeomorphic to unit disks are in fact diffeomorphic to them.
Here a {\it smooth} bundle means a bundle whose fiber is a smooth manifold and whose structure group consists of smooth diffeomorphisms.
A {\it linear} bundle means a smooth bundle whose fiber is a Euclidean space, unit disk, or a unit sphere and whose structure group consists of (natural) linear transformations. For general theory of linear bundles and general bundles, see \cite{milnorstasheff,steenrod} for example.
Connected sums and boundary connected sums of manifolds are considered in the smooth category throughout the present paper.
We introduce some known results and Main Theorems.
\begin{Thm}
\label{thm:1}
Let $m \geq n \geq 1$ be integers.
\begin{enumerate}
\item {\rm (\cite{saeki})}
\label{thm:1.1}
Let $m \geq 2$. An $m$-dimensional closed and connected manifold $M$ admits a special generic map into ${\mathbb{R}}^2$ if and only if either of the following two holds.
\begin{enumerate}
\item $M$ is a homotopy sphere where $m \neq 4$ or a standard sphere where $m=4$.
\item $M$ is diffeomorphic to a manifold represented as a connected sum of the total spaces of smooth bundles over $S^1$ whose fibers are either of the following two.
\begin{enumerate}
\item An {\rm (}$m-1${\rm )}-dimensional homotopy sphere where $m \neq 5$.
\item A $4$-dimensional standard sphere where $m=5$.
\end{enumerate}
\end{enumerate}
\item {\rm (\cite{saeki})}
\label{thm:1.2}
Let $m=4,5,6$. An $m$-dimensional closed and simply-connected manifold $M$ admits a special generic map into ${\mathbb{R}}^3$ if and only if either of the following two holds.
\begin{enumerate}
\item $M$ is an $m$-dimensional standard sphere.
\item $M$ is diffeomorphic to a manifold represented as a connected sum of the total spaces of linear bundles over $S^2$ whose fibers are diffeomorphic to the {\rm (}$m-2${\rm )}-dimensional unit sphere.
\end{enumerate}
\item {\rm (\cite{nishioka})}
\label{thm:1.3}
Let $m=5$. An $m$-dimensional closed and simply-connected manifold $M$ admits a special generic map into ${\mathbb{R}}^4$ if and only if either of the following two holds.
\begin{enumerate}
\item $M$ is a $5$-dimensional standard sphere.
\item $M$ is diffeomorphic to a manifold represented as a connected sum of the total spaces of linear bundles over $S^2$ whose fibers are diffeomorphic to the $3$-dimensional unit sphere.
\end{enumerate}
\end{enumerate}
\end{Thm}
\begin{MThm}
\label{mthm:1}
A $6$-dimensional closed and simply-connected manifold $M$ admits a special generic map into ${\mathbb{R}}^4$ if and only if $M$ is either of the following two.
\begin{enumerate}
\item A $6$-dimensional standard sphere.
\item A manifold diffeomorphic to one represented as a connected sum of finitely many copies of the following manifolds.
\begin{enumerate}
\item $S^3 \times S^3$.
\item The total space of a linear bundle over $S^2$ whose fiber is diffeomorphic to the $4$-dimensional unit sphere.
\end{enumerate}
\end{enumerate}
\end{MThm}
For our further new results, we also need fundamental notions on algebraic topology such as homology groups, cohomology groups and cohomology rings. The fundamental terminologies and notions on algebraic topology will be reviewed in section \ref{sec:2} where we regard readers have related knowledge to some extent.
$\mathbb{Z} \subset \mathbb{R}$ denotes the ring of all integers. The notation on homology groups and cohomology groups is presented again later. The {\it singular set} of a differentiable map is the set of all singular points of the map. Propositions \ref{prop:1}, \ref{prop:2} and \ref{prop:3} explain about fundamental properties including properties on the singular sets of special generic maps.
As Theorem \ref{thm:2} shows, a special generic map $f:M \rightarrow {\mathbb{R}}^5$ on a $6$-dimensional closed and simply-connected manifold $M$ is represented as the composition of a smooth submersion $q_f$ onto a $5$-dimensional compact and simply-connected smooth manifold $W_f$ with a smooth immersion $\bar{f}:W_f \rightarrow {\mathbb{R}}^5$. Furthermore, as Proposition \ref{prop:2} (Proposition \ref{prop:3}) shows, the restriction of the map to the singular set is a diffeomorphism onto the boundary $\partial W_f$.
\begin{MThm}
\label{mthm:2}
A $6$-dimensional closed and simply-connected manifold $M$ admits a special generic map $f:M \rightarrow {\mathbb{R}}^5$ such that the 2nd homology group $H_2(W_f;\mathbb{Z})$ of the $5$-dimensional compact and simply-connected smooth manifold $W_f$ above is trivial if and only if $M$ is either of the following two.
\begin{enumerate}
\item A $6$-dimensional standard sphere.
\item A manifold diffeomorphic to one represented as a connected sum of finitely many copies of the following manifolds.
\begin{enumerate}
\item $S^3 \times S^3$.
\item $S^2 \times S^4$.
\end{enumerate}
\end{enumerate}
\end{MThm}
We also need some notions on characteristic classes and obstruction theory for linear bundles.
For {\it j-th Stiefel-Whitney classes} and {\it Pontrjagin classes} of (real) vector bundles, linear bundles, tangent bundles, normal bundles of submanifolds and smooth manifolds, for example, see \cite{milnorstasheff} as one of related well-known books. This presents related systematic expositions. $\mathbb{Z}/2\mathbb{Z}$ is the field of order $2$.
\begin{MThm}
\label{mthm:3}
A $6$-dimensional closed and simply-connected manifold $M$ such that the $2$nd homology group $H_2(M;\mathbb{Z})$ is finite and contains no element of even order admits a special generic map into ${\mathbb{R}}^5$ whose singular set is connected if and only if the following three conditions hold.
\begin{enumerate}
\item \label{mthm:3.1} The 2nd Stiefel-Whitney class of $M$, which is the uniquely defined element of the cohomology group $H^2(M;\mathbb{Z}/2\mathbb{Z})$, is the zero element.
\item \label{mthm:3.2} The 1st Pontrjagin class of $M$, which is the uniquely defined element of the cohomology group $H^4(M;\mathbb{Z})$ for an arbitrary oriented manifold $M$, is the zero element.
\item \label{mthm:3.3} The cup product $c_1 \cup c_2$ is the zero element for any pair $c_1,c_2 \in H^2(M;A)$ of cohomology classes in the cohomology group $H^2(M;A)$ for $A:=\mathbb{Z}$ and any finite field $A$.
\end{enumerate}
\end{MThm}
These results such as Main Theorems \ref{mthm:1}, \ref{mthm:2} and \ref{mthm:3} can be regarded as $6$-dimensional variants of Theorem \ref{thm:1} (\ref{thm:1.3}), for example. Note that for example, in some of Theorem \ref{thm:1}, explicit classifications of closed and simply-connected manifolds of certain classes such as \cite{barden,wall} are key ingredients.
In our new study, \cite{jupp,wall,zhubr,zhubr2} are key results on classifications of such manifolds.
We have another main theorem.
\begin{MThm}
\label{mthm:4}
\begin{enumerate}
\item
\label{mthm:4.1}
Assume that a closed and simply-connected manifold $M$ of dimension $m \geq 5$ admits a special generic map into ${\mathbb{R}}^4$. Then $M$ enjoys the following two properties.
\begin{enumerate}
\item \label{thm:4.1.1} The $j$-th Stiefel-Whitney class of $M$, which is the uniquely defined element of $H^j(M;\mathbb{Z}/2\mathbb{Z})$, is the zero element for any integer $j \geq 0$ except $j=2$.
\item \label{thm:4.1.2} The $j$-th Pontrjagin class of $M$, which is the uniquely defined element of $H^{4j}(M;\mathbb{Z})$, is the zero element for any integer $j \geq 0$ and an arbitrary oriented $M$.
\end{enumerate}
\item \label{mthm:4.2}
Let $M^{7,0}$ be a $7$-dimensional closed and simply-connected manifold diffeomorphic to a manifold represented as a connected sum of finitely many manifolds in the following two.
\begin{itemize}
\item The total space of a linear bundle over $S^2$ whose fiber is the $5$-dimensional unit sphere.
\item A copy of $S^3 \times S^4$.
\end{itemize}
Furthermore, the family of the finitely many manifolds contains at least one copy of $S^3 \times S^4$. Then there exists a family $\{M^{7,\lambda}\}_{\lambda \in \Lambda}$ of countably many $7$-dimensional closed and simply-connected manifolds satisfying the following two.
\begin{enumerate}
\item \label{mthm:4.2.1} $M^{7,{\lambda}_1}$ and $M^{7,{\lambda}_2}$ are not homeomorphic for distinct elements ${\lambda}_1, {\lambda}_2 \in \Lambda$.
\item \label{mthm:4.2.2} There exists an isomorphism ${\phi}_{\lambda}$ from the cohomology ring $H^{\ast}(M^{7,0};\mathbb{Z})$ of $M^{7,0}$ onto the cohomology ring $H^{\ast}(M^{7,\lambda};\mathbb{Z})$ of $M^{7,\lambda}$.
\item \label{mthm:4.2.3} For ${\phi}_{\lambda}$ before, by considering the natural quotient map from $\mathbb{Z}$ to $\mathbb{Z}/2\mathbb{Z}$, we canonically obtain an isomorphism ${\phi}_{\lambda,2}$ from the cohomology ring $H^{\ast}(M^{7,0};\mathbb{Z}/2\mathbb{Z})$ onto the cohomology ring $H^{\ast}(M^{7,\lambda};\mathbb{Z}/2\mathbb{Z})$. This maps the $j$-th Stiefel-Whitney class of $M^{7,0}$ to that of $M^{7,\lambda}${\rm :} the $j$-th Stiefel-Whitney classes are defined uniquely as elements of the cohomology groups as before, of course.
\item \label{mthm:4.2.4} $M^{7,\lambda}$ does not admit special generic maps into ${\mathbb{R}}^4$, whereas $M^{7,0}$ admits ones. $M^{7,\lambda}$ and $M^{7,0}$ admit special generic maps into ${\mathbb{R}}^5$.
\end{enumerate}
\end{enumerate}
\end{MThm}
This is closely related to Main Theorem \ref{mthm:1}. In addition, this extends some results of section 3 of \cite{saeki}.
Our new results and related facts explicitly show that special generic maps are attractive objects in the algebraic topology and differential topology of manifolds, although the class of special generic maps may not seem so wide in view of the definition.
In the next section we review fundamental properties of special generic maps. The third section is devoted to Main Theorems and Theorems \ref{thm:3}, \ref{thm:4} and \ref{thm:5}.
The fourth section is for concluding remarks.
\section{Fundamental properties of special generic maps.}
\label{sec:2}
\begin{Prop}
\label{prop:1}
Let $f$ be a special generic map from an $m$-dimensional manifold with no boundary into an $n$-dimensional manifold with no boundary where $m \geq n$. Then the following properties hold.
\begin{enumerate}
\item \label{prop:1.1}
The {\rm singular set} of $f$, defined as the set of all singular points of $f$, is an {\rm (}$n-1${\rm )}-dimensional smooth closed submanifold with no boundary. Furthermore, the restriction of $f$ there is a smooth immersion.
\item \label{prop:1.2}
$f$ is, for suitable local coordinates, represented as the product map of a Morse function and the identity map on a small open neighborhood of each singular point, where the open neighborhood is taken in the singular set and is of dimension $n-1$.
\end{enumerate}
\end{Prop}
\begin{Prop}[E. g. \cite{saeki}]
\label{prop:2}
Let $m \geq n \geq 1$ be integers.
For a special generic map $f:M \rightarrow N$ on an $m$-dimensional closed manifold $M$ into an $n$-dimensional manifold $N$ with no boundary, the following properties hold.
\begin{enumerate}
\item
\label{prop:2.1}
There exists some $n$-dimensional compact smooth manifold $W_f$ and some smooth immersion $\bar{f}:W_f \rightarrow N$. Furthermore, for example, $W_f$ can be taken as follows.
\begin{enumerate}
\item If the manifold $N$ of the target is orientable, then $W_f$ is taken as an orientable manifold.
\item If the manifold $M$ of the domain is connected, then $W_f$ is taken as a connected manifold.
\item If the manifold $N$ of the target is orientable and the manifold $M$ of the domain is connected, then $W_f$ is taken as a connected and orientable manifold.
\end{enumerate}
\item
\label{prop:2.2}
We have a smooth surjection $q_f:M \rightarrow W_f$ with the relation $f=\bar{f} \circ q_f$.
\item
\label{prop:2.3}
$q_f$ maps the singular set of $f$ onto the boundary $\partial W_f \subset W_f$. The restriction of $q_f$ there is also regarded as a diffeomorphism onto $\partial W_f$.
\item
\label{prop:2.4}
\begin{enumerate}
\item
\label{prop:2.4.1}
For some small collar neighborhood $N(\partial W_f) \subset W_f$, the composition of the map $q_f {\mid}_{{q_f}^{-1}(N(\partial W_f))}$ onto $N(\partial W_f)$ with the canonical projection to $\partial W_f$ can be regarded as the projection of a linear bundle whose fiber is the {\rm(}$m-n+1${\rm )}-dimensional unit disk. Note that this will be in Proposition \ref{prop:3} defined as a {\rm boundary linear bundle} of $f$.
\item
\label{prop:2.4.2}
The restriction of $q_f$ to the preimage of $W_f-{\rm Int}\ N(\partial W_f)$ can be regarded as the projection of a smooth bundle over $W_f-{\rm Int}\ N(\partial W_f)$ whose fiber is an {\rm (}$m-n${\rm )}-dimensional standard sphere. Note that this will be in Proposition \ref{prop:3} defined as an {\rm internal smooth bundle} of $f$.
\end{enumerate}
\end{enumerate}
\end{Prop}
This is presented only for the case $m>n \geq 1$ in \cite{saeki}. We can easily check it for the case $m=n$; related theory is also discussed in section 8 there.
Conversely, we have the following proposition.
\begin{Prop}[E. g. \cite{saeki}]
\label{prop:3}
Let $m \geq n \geq 1$ be integers. Let a smooth immersion $\bar{f}:W_f \rightarrow N$ of an $n$-dimensional compact smooth manifold $W_f$ into an $n$-dimensional manifold $N$ with no boundary be given. We also assume that the two bundles described in the first two conditions below exist and that the third condition holds.
\begin{itemize}
\item A linear bundle over $\partial W_f$ whose fiber is the {\rm (}$m-n+1${\rm )}-dimensional unit disk.
\item A smooth bundle over $W_f-{\rm Int}\ N(\partial W_f)$ whose fiber is an {\rm (}$m-n${\rm )}-dimensional standard sphere where a suitable small collar neighborhood $N(\partial W_{f}) \subset W_{f}$ is taken.
\item The subbundle of the former bundle whose fiber is $\partial D^{m-n+1}$ and the restriction of the latter bundle to the boundary are equivalent as smooth bundles over $N(\partial W_f)-\partial W_f$. Note that for the former bundle, we consider a natural identification between $\partial W_f$ and $N(\partial W_f)-\partial W_f$ defined naturally from the structure of the collar neighborhood $N(\partial W_f)$ in $W_f$.
\end{itemize}
We call the first bundle a {\rm boundary linear bundle over} $W_f$ and the second bundle an {\rm internal smooth bundle over} $W_f$. This follows the expositions in \cite{kitazawa6}.
Then we have a special generic map $f:M \rightarrow N$ on a suitable $m$-dimensional closed manifold $M$ into $N$ satisfying the following properties.
\begin{enumerate}
\item There exists a smooth surjection $q_{f}:M \rightarrow W_f$ satisfying the relation $f=\bar{f} \circ q_{f}$.
\item $q_{f}$ maps the singular set of $f$ onto the boundary $\partial W_f \subset W_f$, and the restriction there is regarded as a diffeomorphism.
\item The composition of the map $q_f {\mid}_{{q_f}^{-1}(N(\partial W_f))}$ onto $N(\partial W_f)$ with the canonical projection to $\partial W_f$ can be regarded as the projection of a linear bundle equivalent to the boundary linear bundle over $W_f$ before. We can canonically define such a bundle as a {\rm boundary linear bundle of $f$}.
\item The restriction of $q_f$ to the preimage of $W_f-{\rm Int}\ N(\partial W_f)$ can be regarded as the projection of a smooth bundle over $W_f-{\rm Int}\ N(\partial W_f)$ equivalent to the internal smooth bundle over $W_f$ before. We can canonically define such a bundle as an {\rm internal smooth bundle of $f$}.
\end{enumerate}
\end{Prop}
The simplest examples are obtained by taking the two bundles to be trivial.
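For example, a well-known special case is the following: if $W_f$ is the $n$-dimensional unit disk $D^n$, smoothly embedded in ${\mathbb{R}}^n$ in the canonical way, and both of the bundles are trivial, then we may take
$$M=\partial (D^n \times D^{m-n+1})=(\partial D^n \times D^{m-n+1}) \cup (D^n \times \partial D^{m-n+1})=S^m$$
and $f$ is, up to suitable diffeomorphisms, the restriction to $S^m \subset {\mathbb{R}}^{m+1}$ of the canonical projection of ${\mathbb{R}}^{m+1}$ onto ${\mathbb{R}}^n$.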
Smooth manifolds have canonical PL structures and are canonically regarded as polyhedra. This is important in Proposition \ref{prop:4} and throughout the present paper.
\begin{Prop}[E. g. \cite{saeki}]
\label{prop:4}
Let $m>n \geq 1$ be integers. Let $f:M \rightarrow N$ be a special generic map on an $m$-dimensional closed and connected manifold $M$ into an $n$-dimensional manifold $N$ with no boundary. Then we have an {\rm (}$m+1${\rm )}-dimensional compact and connected {\rm (PL)} manifold $W$ satisfying the following {\rm (}differential{\rm )} topological properties.
\begin{enumerate}
\item \label{prop:4.1} The boundary of $W$ is $M$, where we work in the topological or PL category.
\item \label{prop:4.2} $W$ collapses to $W_f$ where we abuse the notation in Propositions \ref{prop:2} and \ref{prop:3}.
\item \label{prop:4.3} $W_f$ can be also identified with a suitable CW subcomplex {\rm (}resp. subpolyhedron{\rm )} of $W$.
\item \label{prop:4.4} Let $i_M:M \rightarrow W$ denote the canonical inclusion. Then for a suitable continuous {\rm (}resp. {\rm PL)} map $r_f:W \rightarrow W_f$ giving a collapsing to $W_f$, we have the relation $q_f=r_f \circ i_M$. Furthermore, $r_f$ can be regarded as the projection of a bundle over $W_f$ whose fiber is diffeomorphic to $D^{m-n+1}$ and which may not be a smooth bundle. In addition, we may assume the following two properties, where we can abuse the notions, terminologies and notation in Proposition \ref{prop:3}.
\begin{enumerate}
\item Consider the restriction of $r_f$ to the preimage of $W_f-{\rm Int}\ N(\partial W_f)$. The subbundle obtained by restricting the fiber to $\partial D^{m-n+1} \subset D^{m-n+1}$ is equivalent to an internal smooth bundle of $f$.
\item Consider the restriction of $r_f$ to the preimage of $N(\partial W_f)$. Next consider the composition of this projection with the canonical projection to $\partial W_f$. This can be regarded as the projection of a bundle whose fiber is diffeomorphic to $D^{m-n+1} \times D^1$.
Furthermore, consider the subbundle whose fiber is $(\partial D^{m-n+1} \times D^1) \cup (D^{m-n+1} \times {S^0}_0)$ where ${S^0}_0 \subset S^0=\partial D^1$ denotes a one-point set. It is equivalent to a boundary linear bundle of $f$.
\end{enumerate}
\item \label{prop:4.5} In the case $m-n=1,2,3$ for example, $W$ and $r_f$ can be chosen as a smooth manifold and a smooth map and the bundle over $W_f$ can be regarded as a smooth bundle with the projection $r_f$.
\item \label{prop:4.6}
In the case where $M$ and $N$ are oriented, we can take $W$ as an oriented manifold whose boundary is $M$. As before, in the case $m-n=1,2,3$ for example, $W$ and $r_f$ can be chosen as a smooth manifold and a smooth map and the bundle over $W_f$ can be regarded as a smooth bundle with the projection $r_f$.
\end{enumerate}
\end{Prop}
For more general propositions, see \cite{saekisuzuoka}, the papers \cite{kitazawa0.1,kitazawa0.2,kitazawa0.3}, and several preprints by the author in References.
We introduce notation and terminology for {\it homology groups}, {\it cohomology groups}, {\it cohomology rings} and {\it homotopy groups}. For systematic expositions on such notions and fundamental algebraic topology, see \cite{hatcher} for example.
Let $(X,X^{\prime})$ be a pair of topological spaces satisfying $X^{\prime} \subset X$. We allow $X^{\prime}$ to be the empty set.
Let $A$ be a commutative ring.
$A$ is, for example, taken as the ring of all integers, denoted by $\mathbb{Z} \subset \mathbb{R}$.
The $j$-th {\it homology group} and {\it cohomology group} of the pair $(X,X^{\prime})$ of topological spaces satisfying $X^{\prime} \subset X$ are denoted by $H_{j}(X,X^{\prime};A)$ and $H^{j}(X,X^{\prime};A)$ where $A$ is the {\it coefficient ring}. If $X^{\prime}$ is empty, then we may omit ``$,X^{\prime}$'' in the notation, and the homology group and the cohomology group of the pair $(X,X^{\prime})$ are also called the {\it homology group} and the {\it cohomology group} of $X$. {\rm (}{\it Co}{\rm )}{\it homology classes} of $(X,X^{\prime})$ or $X$ mean elements of the {\rm (}resp. co{\rm )}homology groups.
The {\it $k$-th homotopy group} of a topological space $X$ is denoted by ${\pi}_k(X)$.
Let $(X_1,{X_1}^{\prime})$ and $(X_2,{X_2}^{\prime})$ be pairs of topological spaces satisfying ${X_1}^{\prime} \subset X_1$ and ${X_2}^{\prime} \subset X_2$ where the second topological spaces of these pairs are allowed to be the empty sets as before. For a continuous map $c:X_1 \rightarrow X_2$ satisfying $c({X_1}^{\prime}) \subset {X_2}^{\prime}$, $c_{\ast}:H_{\ast}(X_1,{X_1}^{\prime};A) \rightarrow H_{\ast}({X_2},{X_2}^{\prime};A)$ and $c^{\ast}:H^{\ast}({X_2},{X_2}^{\prime};A) \rightarrow H^{\ast}(X_1,{X_1}^{\prime};A)$ denote the natural homomorphisms defined canonically. For a continuous map $c:X_1 \rightarrow X_2$, $c_{\ast}:{\pi}_k(X_1) \rightarrow {\pi}_k(X_2)$ also denotes the natural homomorphism between the homotopy groups of degree $k$ which is also defined canonically.
Let $H^{\ast}(X;A)$ denote the direct sum ${\oplus}_{j=0}^{\infty} H^j(X;A)$ of the $j$-th cohomology groups over all integers $j \geq 0$. The cup product $c_1 \cup c_2$ of a pair $c_1,c_2 \in H^{\ast}(X;A)$ and the cup product ${\cup}_{j=1}^l c_j$ of a sequence $\{c_j\}_{j=1}^l \subset H^{\ast}(X;A)$ of $l>0$ cohomology classes are important; the former is the case $l=2$ of the latter.
This makes $H^{\ast}(X;A)$ a graded commutative algebra and we call this the {\it cohomology ring} of $X$ whose {\it coefficient ring} is $A$.
\section{The Proofs of our Main Theorems.}
\label{sec:3}
\begin{Thm}
\label{thm:2}
Let $m>n \geq 1$ be integers. Let $l>0$ be another integer.
Let $M$ be an $m$-dimensional closed and connected manifold.
Let $A$ be a commutative ring.
\begin{enumerate}
\item \label{thm:2.1} {\rm (E. g. \cite{saeki})}
Let $f:M \rightarrow N$ be a special generic map into an $n$-dimensional connected and non-closed manifold with no boundary.
Then the homomorphisms ${q_f}_{\ast}:H_j(M;A) \rightarrow H_j(W_f;A)$, ${q_f}^{\ast}:H^j(W_f;A) \rightarrow H^j(M;A)$ and ${q_f}_{\ast}:{\pi}_j(M) \rightarrow {\pi}_j(W_f)$ are isomorphisms for $0 \leq j \leq m-n$ where we abuse the notation in Propositions \ref{prop:2} and \ref{prop:3}.
\item \label{thm:2.2} {\rm (\cite{kitazawa})} Let there exist a sequence $\{a_j\}_{j=1}^l \subset H^{\ast}(M;A)$ satisfying the following two.
\begin{itemize}
\item The cup product ${\cup}_{j=1}^l a_j$ is not zero.
\item The degree of each cohomology class in $\{a_j\}_{j=1}^l$ is smaller than or equal to $m-n$. The sum of the degrees for all these $l$ cohomology classes is greater than or equal to $n$.
\end{itemize}
Then $M$ does not admit special generic maps into any $n$-dimensional connected and non-closed manifold which has no boundary.
\end{enumerate}
\end{Thm}
We review a proof, omitting expositions on {\it handles} and their {\it indices} for PL manifolds and general polyhedra.
\begin{proof}
We first show (\ref{thm:2.1}).
We can take an ($m+1$)-dimensional compact and connected PL manifold $W$ in Proposition \ref{prop:4}.
$W_f$ is a compact smooth manifold and smoothly immersed into $N$. It is simple homotopy equivalent to an ($n-1$)-dimensional compact and connected polyhedron.
$W$ is simple homotopy equivalent to $W_f$. $W$ is shown to be a PL manifold obtained by attaching handles to $M \times \{0\} \subset M \times [-1,0]$ whose indices are greater than $(m+1)-{\dim W_f}=m-n+1$. Here $M$ can be and is identified with $M \times \{-1\} \subset M \times [-1,0]$.
From this we have (\ref{thm:2.1}).
We show (\ref{thm:2.2}). Suppose that $M$ admits a special generic map into an $n$-dimensional connected and non-closed manifold $N$ with no boundary. We have an ($m+1$)-dimensional manifold $W$ as just before. By virtue of (\ref{thm:1.1}) and Proposition \ref{prop:4} (\ref{prop:4.4}), we can take a unique cohomology class $b_j \in H^{\ast}(W;A)$ satisfying $a_j={i_M}^{\ast}(b_j)$ where $i_M$ is as in Proposition \ref{prop:4}. $W$ has the simple homotopy type of an ($n-1$)-dimensional polyhedron, so that $H^j(W;A)$ vanishes for $j \geq n$. Since the sum of the degrees of the $l$ classes $b_j$ is at least $n$, the cup product ${\cup}_{j=1}^l b_j$ is zero, and hence the cup product ${\cup}_{j=1}^l a_j={i_M}^{\ast}({\cup}_{j=1}^l b_j)$ is also zero, which contradicts the assumption. This completes the proof.
\end{proof}
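For example, Theorem \ref{thm:2} (\ref{thm:2.2}) applies to $M:=S^2 \times S^2 \times S^2$ with $m=6$ and $n=4$: taking $a_1$ and $a_2$ as the classes in $H^2(M;\mathbb{Z})$ induced from the generators of the 2nd cohomology groups of the first and the second factors, the cup product $a_1 \cup a_2$ is not zero, the degree of each class is $2 \leq m-n=2$, and the sum of the degrees is $4 \geq n=4$. Thus $S^2 \times S^2 \times S^2$ admits no special generic maps into any $4$-dimensional connected and non-closed manifold with no boundary.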
We briefly review several notions on homology classes of manifolds.
Hereafter, $A$ is a commutative ring whose identity element is distinct from the zero element.
The {\it fundamental class} of a compact, connected and oriented smooth, PL, or more generally, topological manifold $Y$ is the canonically defined ($\dim Y$)-th homology class: it is the generator of $H_{\dim Y}(Y,\partial Y;A)$, which is isomorphic to $A$, canonically and uniquely determined by the orientation.
Let $i_{Y,X}:Y \rightarrow X$ be an embedding satisfying $i_{Y,X}(\partial Y) \subset \partial X$ and $i_{Y,X}({\rm Int}\ Y) \subset {\rm Int}\ X$ where we consider the embedding as a suitable one in the suitable category.
Let $h \in H_{j}(X,\partial X;A)$. If the value of the homomorphism ${i_{Y,X}}_{\ast}$ induced by the embedding $i_{Y,X}:Y \rightarrow X$ at the fundamental class of $Y$ is $h$, then $h$ is said to be {\it represented} by the oriented submanifold $Y$.
We explain the {\it Poincar\'e dual} and the {\it dual} to each element of a basis of a module without giving rigorous definitions. At various points of the present paper, we apply the Poincar\'e duality theorem for a compact and connected manifold and consider the {\it Poincar\'e dual} to a homology class or a cohomology class. Consider a basis of a module whose elements are not divisible by non-unit elements. For each element of the basis we have the {\it dual} to it, and this is also important. For a homology group or its subgroup and a basis as before, we naturally and uniquely obtain a cohomology class as the dual to each homology class of the basis; their degrees agree. This notion is different from that of a Poincar\'e dual.
For notions here, see \cite{hatcher} again for example.
\begin{proof}[A proof of Main Theorem \ref{mthm:1}]
We abuse the notation in Propositions \ref{prop:2} and \ref{prop:3} and Theorem \ref{thm:2} for example.
Suppose that a $6$-dimensional closed and simply-connected manifold $M$ admits a special generic map $f:M \rightarrow {\mathbb{R}}^4$.
According to Theorem \ref{thm:2} (\ref{thm:2.1}), $M$ and $W_f$ are simply-connected and $H_2(M;\mathbb{Z})$ is isomorphic to $H_2(W_f;\mathbb{Z})$.
According to \cite{nishioka}, $H_2(W_f;\mathbb{Z})$ is free, and $H_2(M;\mathbb{Z})$ is free as a result. Note that this is a key ingredient in showing Theorem \ref{thm:1} (\ref{thm:1.3}). We can take a basis of $H_2(W_f;\mathbb{Z})$ and $r \geq 0$ smoothly and disjointly embedded $2$-dimensional spheres in $W_f-{\rm Int}\ N(\partial W_f)$ enjoying the following two properties.
\begin{itemize}
\item $r$ denotes the common rank of $H_2(M;\mathbb{Z})$ and $H_2(W_f;\mathbb{Z})$.
\item Each of the $r$ elements of the basis is represented by one of the $r$ $2$-dimensional spheres in $W_f-{\rm Int}\ N(\partial W_f)$.
\end{itemize}
Furthermore, the restriction of the bundle over $W_f-{\rm Int}\ N(\partial W_f)$ in Proposition \ref{prop:3} to each of the $r \geq 0$ $2$-dimensional spheres admits a section by general theory of linear bundles. This bundle over $W_f-{\rm Int}\ N(\partial W_f)$ is a linear bundle whose fiber is diffeomorphic to $S^2$ and an internal smooth bundle of $f$. By Poincar\'e duality theorem, we have a basis of $H_2(W_f,\partial W_f;\mathbb{Z})$ each element of which is the Poincar\'e dual to the dual to each element of the basis of $H_2(W_f;\mathbb{Z})$.
Homology classes in $H_2(W_f,\partial W_f;\mathbb{Z})$ are represented by compact, connected and oriented surfaces. This is a very specific case of Thom's theory \cite{thom} on homology classes of compact manifolds represented by oriented submanifolds.
We thus have $r \geq 0$ compact, connected and oriented surfaces smoothly embedded in $W_f$ such that the $r$ Poincar\'e duals before are represented by them. Note also that their interiors are embedded in the interior ${\rm Int}\ W_f$ and that their boundaries are embedded in the boundary $\partial W_f$.
Furthermore, by the definition of a special generic map, Proposition \ref{prop:1} (\ref{prop:1.2}) and fundamental theory of so-called {\it generic} smooth embeddings and smooth maps, the preimages of the $r$ compact, connected and oriented surfaces are $4$-dimensional closed, connected and orientable manifolds, each of which is the domain of some special generic map $f_{\lambda}$ into an orientable surface with no boundary. Note that the orientability of these $4$-dimensional manifolds follows from the fact that (the total space of) an internal smooth bundle of each special generic map $f_{\lambda}$ is orientable for example.
We have exactly $r$ homology classes in $H_4(M;\mathbb{Z})$, each of which is represented by one of these $r$ $4$-dimensional closed, connected and suitably oriented manifolds. They can be regarded as the Poincar\'e duals to the duals to elements of a suitable basis of $H_2(M;\mathbb{Z})$. The set of all these homology classes can be regarded as a basis of $H_4(M;\mathbb{Z})$.
For example, we can choose a basis of $H_2(M;\mathbb{Z})$ each element of which is represented by (the image of) a section of one of the $r$ bundles over the $r$ $2$-dimensional spheres in ${\rm Int}\ W_f$ before.
We can see that these $r$ $4$-dimensional closed and oriented manifolds which are also smooth closed submanifolds in $M$ are oriented null-cobordant. This follows from Proposition \ref{prop:4} (\ref{prop:4.6}). Note that we apply this to the special generic map $f_{\lambda}$ on the $4$-dimensional manifold.
The special generic map $f_{\lambda}$ is regarded as a map into a non-closed, connected and orientable surface. By Propositions \ref{prop:2} and \ref{prop:3} and the fundamental fact that compact and orientable surfaces can be smoothly immersed into ${\mathbb{R}}^2$, we have a special generic map on the $4$-dimensional manifold into ${\mathbb{R}}^2$. Theorem \ref{thm:1} implies that the 2nd cohomology group of the $4$-dimensional manifold is zero where the coefficient ring is $\mathbb{Z}$. This means that a normal bundle of the $4$-dimensional manifold, which is a smooth submanifold in $M$, is trivial by a fundamental argument on linear bundles.
From another fundamental argument on linear bundles and characteristic classes, we have that the 1st Pontrjagin class of $M$, with an arbitrary orientation, is zero.
Theorem \ref{thm:2} yields that the cup product of any two 2nd cohomology classes of $M$ is always zero where the coefficient ring is $\mathbb{Z}$.
These two facts together with classification theorems of $6$-dimensional closed and simply-connected manifolds such as \cite{wall,zhubr,zhubr2} imply that $M$ is one of the following two.
\begin{itemize}
\item A $6$-dimensional standard sphere.
\item A closed manifold diffeomorphic to one represented as a connected sum of the following two manifolds.
\begin{itemize}
\item (A copy of) $S^3 \times S^3$.
\item The total space of a linear bundle over $S^2$ whose fiber is diffeomorphic to the $4$-dimensional unit sphere.
\end{itemize}
\end{itemize}
Every standard sphere admits a special generic map into any Euclidean space whose dimension is at most the dimension of the sphere. It is sufficient to consider canonical projections.
We can easily have a copy of $S^2 \times D^2$ smoothly embedded in ${\mathbb{R}}^4$. By applying Proposition \ref{prop:3}, we have a special generic map on an arbitrary manifold diffeomorphic to the total space of a linear bundle over $S^2$ whose fiber is diffeomorphic to the $4$-dimensional unit sphere.
It is important that the structure group of a linear bundle over $S^2$ whose fiber is the $k$-dimensional unit sphere $S^k$ ($k \geq 2$) can be reduced to the group consisting of linear transformations on $S^1 \subset S^k$, or the $2$-dimensional rotation group $SO(2)$, isomorphic to $S^1$ as a Lie group. Moreover, after inductively choosing equators starting from $S^k$, we have $S^1 \subset S^k$, and the actions by these linear transformations are naturally regarded as linear transformations on $S^k$. This is an elementary argument on linear bundles; for related studies, see \cite{milnorstasheff,steenrod} for example.
In addition, our construction of a special generic map here is also essentially the construction Nishioka has shown in \cite{nishioka} in the case where the manifolds are $5$-dimensional.
We consider the product map of a Morse function in Reeb's theorem on $S^3$ and the identity map on $S^3$, and we can smoothly embed the target manifold into ${\mathbb{R}}^4$. Thus $S^3 \times S^3$ admits a special generic map into ${\mathbb{R}}^4$.
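More explicitly, let $\tilde{f}:S^3 \rightarrow \mathbb{R}$ denote such a Morse function, with exactly two singular points. The product map
$$\tilde{f} \times {\rm id}_{S^3}:S^3 \times S^3 \rightarrow \mathbb{R} \times S^3$$
is a special generic map, and $\mathbb{R} \times S^3$ is smoothly embedded in ${\mathbb{R}}^4$ as (the interior of) a small tubular neighborhood of the unit sphere $S^3 \subset {\mathbb{R}}^4$, for example.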
According to \cite{saeki} for example, construction of special generic maps on manifolds represented as connected sums of manifolds admitting special generic maps into a fixed Euclidean space is easy; of course the resulting maps are into the fixed Euclidean space.
This completes the proof.
\end{proof}
\begin{Thm}
\label{thm:3} Suppose that a $6$-dimensional, closed and simply-connected manifold $M$ admits a special generic map $f:M \rightarrow {\mathbb{R}}^5$. Let $e_{\rm F}$ be an arbitrary element of the homology group $H_2(M;\mathbb{Z}/2\mathbb{Z})$ which is in the image of the canonically obtained homomorphism ${\phi}_{G,\mathbb{Z},\mathbb{Z}/2\mathbb{Z}}$ associated with some internal direct sum decomposition of the form $G_{\rm Free} \oplus G_{\rm Finite}$ of the homology group $H_2(M;\mathbb{Z})$ where $G_{\rm Free}$ and $G_{\rm Finite}$ are free and finite, respectively. Assume also the following conditions.
\begin{enumerate}
\item The homomorphism ${\phi}_{G,\mathbb{Z},\mathbb{Z}/2\mathbb{Z}}$ is defined on the summand $G_{\rm Finite}$.
\item The homomorphism ${\phi}_{G,\mathbb{Z},\mathbb{Z}/2\mathbb{Z}}$ is the restriction of the homomorphism from $H_2(M;\mathbb{Z})$ into $H_2(M;\mathbb{Z}/2\mathbb{Z})$ defined canonically from the natural quotient map from $\mathbb{Z}$ onto $\mathbb{Z}/2\mathbb{Z}$.
\item ${q_f}_{\ast}(e_{\rm F})$ is not the zero element where we abuse the notation $q_f$ as before.
\end{enumerate}
Then the value of the 2nd Stiefel-Whitney class of $M$, which is the uniquely defined element of the cohomology group $H^2(M;\mathbb{Z}/2\mathbb{Z})$, must be the zero element $0 \in \mathbb{Z}/2\mathbb{Z}$ at some element ${e_{\rm F}}^{\prime} \in H_2(M;\mathbb{Z}/2\mathbb{Z})$ satisfying ${q_f}_{\ast}({e_{\rm F}}^{\prime})={q_f}_{\ast}(e_{\rm F})$.
\end{Thm}
\begin{proof}
We abuse the notation $q_f$ and $W_f$ as before.
$M$ and $W_f$ are simply-connected from the assumption and Theorem \ref{thm:2} (\ref{thm:2.1}). This means that $e_{\rm F} \in H_2(M;\mathbb{Z}/2\mathbb{Z})$ is represented by a smoothly embedded $2$-dimensional sphere in $M$ and that $e_{W_f,{\rm F}}:={q_f}_{\ast}(e_{\rm F}) \in H_2(W_f;\mathbb{Z}/2\mathbb{Z})$ is represented by a smoothly embedded $2$-dimensional sphere in ${\rm Int}\ W_f$. If we restrict an internal smooth bundle of $f$ to the smoothly embedded $2$-dimensional sphere in ${\rm Int}\ W_f$, then it must be trivial. Note that $q_f$ can be regarded as the projection of this bundle over the $2$-dimensional sphere (after the restriction). This is due to the assumption that $W_f$ and $M$ can be oriented and a well-known classification theorem of smooth (linear) bundles whose fibers are circles. In short, smooth linear bundles over a fixed space whose fibers are circles and whose structure groups are reduced to ones consisting of orientation preserving linear transformations, or the $2$-dimensional rotation group, isomorphic to $S^1$ as a Lie group, are classified by the 2nd cohomology group of the base space whose coefficient ring is $\mathbb{Z}$.
$W_f$ is a $5$-dimensional compact manifold smoothly immersed into ${\mathbb{R}}^5$, which means that its tangent bundle is a trivial linear bundle. $e_{W_f,{\rm F}}$ is not the zero element.
By virtue of fundamental properties of Stiefel-Whitney classes, presented systematically in \cite{milnorstasheff} for example, applied to the tangent bundles of the manifolds $W_f$, $M$ and the $2$-dimensional sphere $S^2$, this completes the proof.
\end{proof}
\begin{Thm}
\label{thm:4}
Assume that a $6$-dimensional closed and simply-connected manifold $M$ admits a special generic map $f:M \rightarrow {\mathbb{R}}^5$ such that the 2nd homology group $H_2(W_f;\mathbb{Z})$ of $W_f$ is finite where we abuse $W_f$ as before. Then we have the following three.
\begin{enumerate}
\item \label{thm:4.1}
The 2nd Stiefel-Whitney class of $M$, which is the uniquely determined element of $H^2(M;\mathbb{Z}/2\mathbb{Z})$, is the zero element.
\item \label{thm:4.2}
The 1st Pontrjagin class of $M$, which is the uniquely determined element of $H^4(M;\mathbb{Z})$ where $M$ is oriented in an arbitrary way, is the zero element.
\item \label{thm:4.3}
The cup product of any two elements of $H^2(M;\mathbb{Z})$ is the zero element.
\end{enumerate}
\end{Thm}
\begin{proof}
We abuse the notation in the Propositions and (Main) Theorems in the present paper; for example, we also abuse ``$W$'' of Proposition \ref{prop:4} here.
We show (\ref{thm:4.1}) and (\ref{thm:4.2}). $H_2(W_f;\mathbb{Z})$ is finite. The classification theory of circle bundles presented in the proof of Theorem \ref{thm:3}
implies that an internal smooth bundle of $f$ and a boundary linear bundle of $f$ are regarded as trivial bundles here. $W_f$, as presented in the proof of Theorem \ref{thm:3}, has the trivial tangent bundle.
By fundamental arguments on characteristic classes, we have (\ref{thm:4.1}) and (\ref{thm:4.2}).
Proposition 3.10 of \cite{saeki} shows a useful homology exact sequence, and we use another homology exact sequence as well. Each ``$\cong$'' between two groups means that they are isomorphic. $0$ denotes the zero element. ${\partial}_{\ast}$ denotes the boundary homomorphism of an exact sequence.
$M$ and $W_f$ are simply-connected from the assumption and Theorem \ref{thm:2} (\ref{thm:2.1}).
Let $A$ be an arbitrary commutative ring.
The exact sequence in \cite{saeki} is
$$\xymatrix{ H_3(W;A) \ar[r]& H_3(W,M;A) \cong H^4(W;A) \cong H^4(W_f;A) \ar[r]^{{\partial}_{\ast}}& \\
H_2(M;A) \ar[r]^{{i_M}_{\ast}} & H_2(W;A) \cong H_2(W_f;A) \ar[r] & H_2(W,M;A) \cong H^5(W;A) \cong H^5(W_f;A) \cong \{0\} \ar[r]&}
$$
and we also have another homology exact sequence
$$\xymatrix{ H_2(\partial W_f;A) \ar[r]& H_2(W_f,\partial W_f;A) \ar[r]^{{\partial}_{\ast}}& \\
H_1(\partial W_f;A) \ar[r] & H_1(W_f;A) \cong \{0\} \ar[r] & H_1(W_f,\partial W_f;A) \cong H^4(W_f;A) \ar[r]& \\
H_0(\partial W_f;A) \cong A^l:={\oplus}_{j=1}^l A \ar[r]& H_0(W_f;A) \cong A \ar[r]& \{0\}&}
$$
where we apply Poincar\'e duality theorem for compact and simply-connected manifolds $W$ and $W_f$ and Proposition \ref{prop:4} (\ref{prop:4.2}) for several isomorphisms of groups (modules over $A$) for example. $A^l:={\oplus}_{j=1}^{l} A$ denotes the direct sum of exactly $l>0$ copies of $A$ where $l$ is the number of connected components of the singular set of $f$, diffeomorphic to $\partial W_f$ by Proposition \ref{prop:2} (\ref{prop:2.3}).
Let $A:=\mathbb{Z}$. $H_2(W;\mathbb{Z})$ is finite. The rank of $H_2(M;\mathbb{Z})$ is at most $l-1$ since $H^4(W_f;\mathbb{Z}) \cong H^4(W;\mathbb{Z}) \cong H_1(W_f,\partial W_f;\mathbb{Z})$ are of rank $l-1$ by the second sequence together with Poincar\'e duality theorem for $W_f$ and Proposition \ref{prop:4} (\ref{prop:4.2}). $H_4(W_f;\mathbb{Z})$ is isomorphic to $H^1(W_f,\partial W_f;\mathbb{Z})$ and $H_1(W_f,\partial W_f;\mathbb{Z})$, free and of rank $l-1$ by Poincar\'e duality theorem for $W_f$. Furthermore,
$H_1(W_f,\partial W_f;\mathbb{Z})$ has a basis each homology class of which is represented by a $1$-dimensional connected manifold $L_j$ diffeomorphic to a closed interval, enjoying the following properties.
\begin{itemize}
\item $L_j$ is smoothly embedded in $W_f$.
\item The boundary $\partial L_j$ consists of exactly two points and is in the boundary $\partial W_f$. The interior ${\rm Int}\ L_j$ is in the interior ${\rm Int}\ W_f$.
\item ${q_f}^{-1}(L_j)$ can be seen as a $2$-dimensional sphere $S_{L_j}$, the domain of a Morse function with exactly two singular points.
\item There exists a connected component $C_0$ of $\partial W_f$ which contains exactly one point of the boundary $\partial L_j$ for every $j$.
\item For any pair of distinct manifolds $L_{j_1}$ and $L_{j_2}$, the remaining boundary points lie in distinct connected components of $\partial W_f$, neither of which is the connected component $C_0$ before.
\end{itemize}
By fundamental arguments on Poincar\'e duality theorem or so-called intersection theory, we have the following facts.
\begin{itemize}
\item $H_4(M;\mathbb{Z})$ is of rank $l-1$.
\item $H_4(M;\mathbb{Z})$ is free since $M$ is closed and simply-connected. More precisely, $H^2(M;\mathbb{Z})$ must be free and $H_4(M;\mathbb{Z})$ is isomorphic to it by Poincar\'e duality theorem.
\item $H_4(M;\mathbb{Z})$ has a suitable basis each element of which is represented by some connected component of the singular set of $f$, mapped onto $\partial W_f \subset W_f$ by the restriction of $q_f$ there. In addition, remember that $q_f$ maps the singular set by a diffeomorphism onto $\partial W_f$ by Proposition \ref{prop:2} (\ref{prop:2.3}).
\item For each element of the basis of $H_4(M;\mathbb{Z})$ before, the Poincar\'e dual to it can be regarded as an element represented by the corresponding sphere $S_{L_j} \subset M$ before.
\end{itemize}
By fundamental arguments in \cite{saeki} or easy observations, the singular set of the map $f$ is a $4$-dimensional closed and orientable manifold, and its normal bundle (in $M$) is regarded as a $2$-dimensional linear bundle equivalent to the restriction of a boundary linear bundle; as a result it is trivial. We consider the Poincar\'e duals to homology classes represented by spheres in $\{S_{L_j}\} \subset M$. These Poincar\'e duals are represented by connected components of the singular set of $f$. We perturb a submanifold here, or each connected component of the singular set, so that the resulting submanifold and the original submanifold are disjoint. This means that the cup product of any two elements of $H^2(M;\mathbb{Z})$ is the zero element. Such arguments have appeared in \cite{kitazawa4,kitazawa5} for example and will be used in our proof of Theorem \ref{thm:5} for example. This completes the proof of (\ref{thm:4.3}).
\end{proof}
\begin{proof}[A proof of Main Theorem \ref{mthm:2}]
Suppose that a $6$-dimensional closed and simply-connected manifold $M$ admits a special generic map $f:M \rightarrow {\mathbb{R}}^5$ such that $H_2(W_f;\mathbb{Z})$ is trivial.
Suppose also that $H_2(M;\mathbb{Z})$ is not free. Then there exists an element of finite order in $H_2(M;\mathbb{Z})$ which is not the zero element.
Every connected component of the singular set of $f$ is mapped by $q_f$ onto some connected component of the boundary $\partial W_f$, and the restriction is a diffeomorphism onto the connected component. $H_4(W_f;\mathbb{Z})$ is free by the argument in the proof of Theorem \ref{thm:4} for example.
By a fundamental argument on intersection theory, this element is represented by a $2$-dimensional sphere smoothly embedded in $M$, and we may assume that this sphere and any connected component of the singular set of $f$, which is mapped by $q_f$ diffeomorphically onto some connected component of the boundary $\partial W_f \subset W_f$, do not intersect. This yields a contradiction, and thus $H_2(M;\mathbb{Z})$ is free.
From Theorem \ref{thm:4}, if a $6$-dimensional closed and simply-connected manifold $M$ admits a special generic map $f:M \rightarrow {\mathbb{R}}^5$ such that $H_2(W_f;\mathbb{Z})$ is trivial, then $M$ must be a manifold in Main Theorem \ref{mthm:1} by virtue of classifications of $6$-dimensional closed and simply-connected manifolds as before. In addition, by the fact that the 2nd Stiefel-Whitney class of $M$ is the zero element, the linear bundles over $S^2$ must be trivial and the total spaces must be diffeomorphic to $S^2 \times S^4$.
It suffices to construct a special generic map into ${\mathbb{R}}^5$ on a desired $6$-dimensional manifold.
We can prepare a compact and simply-connected smooth manifold represented as a boundary connected sum of copies of $S^3 \times D^2$ or $S^4 \times D^1$ and embed the manifold smoothly into ${\mathbb{R}}^5$. Applying Proposition \ref{prop:3} by taking the internal smooth bundle over the resulting manifold $W_f$ and the boundary linear bundle over $W_f$ as trivial bundles and checking the homology groups of the $5$-dimensional manifold $W_f$ and the resulting $6$-dimensional closed and simply-connected manifold $M$ complete the proof.
We give another precise exposition. We consider the product map of a Morse function with exactly two singular points on $S^2$ and the identity map on $S^4$, and the product map of a canonical projection of the unit sphere $S^3$ into ${\mathbb{R}}^2$ and the identity map on $S^3$. By embedding the target spaces suitably, we can regard these two maps naturally as special generic maps into ${\mathbb{R}}^5$ whose restrictions to the singular sets are embeddings. We can easily construct a special generic map on any desired manifold here, represented as a connected sum of these manifolds. Expositions on the construction have been presented at the end of the proof of Main Theorem \ref{mthm:1} for example.
\end{proof}
\begin{Thm}[\cite{kitazawa5} ($A:=\mathbb{Z}$)]
\label{thm:5}
Let $p={p_0}^l$ be a power of a prime $p_0$ where $l$ is some positive integer.
Let $A$ be the ring $\mathbb{Z}$ of all integers or the commutative ring $\mathbb{Z}/p\mathbb{Z}$, which is of order $p={p_0}^l$ and represented as the natural quotient ring of $\mathbb{Z}$ of order $p$. Note that for $l=1$ the latter ring is a finite field.
If a closed and simply-connected manifold $M$ of dimension $m=6$ admits a special generic map $f:M \rightarrow {\mathbb{R}}^5$ whose singular set is connected, then the cup product $c_1 \cup c_2$ is zero for any pair $c_1,c_2 \in H^2(M;A)$ of cohomology classes.
\end{Thm}
\begin{proof}[A proof of Theorem \ref{thm:5}]
We abuse the notation similarly.
$M$ and $W_f$ are simply-connected from the assumption and Theorem \ref{thm:2} (\ref{thm:2.1}).
Let $A$ be an arbitrary commutative ring.
We use the exact sequences
$$\xymatrix{ H_3(W;A) \ar[r]& H_3(W,M;A) \cong H^4(W;A) \cong H^4(W_f;A) \ar[r]^{{\partial}_{\ast}}& \\
H_2(M;A) \ar[r]^{{i_M}_{\ast}} & H_2(W;A) \cong H_2(W_f;A) \ar[r] & H_2(W,M;A) \cong H^5(W;A) \cong H^5(W_f;A) \cong \{0\} \ar[r]&}
$$
and
$$\xymatrix{ H_2(\partial W_f;A) \ar[r]& H_2(W_f,\partial W_f;A) \ar[r]^{{\partial}_{\ast}}& \\
H_1(\partial W_f;A) \ar[r] & H_1(W_f;A) \cong \{0\} \ar[r] & H_1(W_f,\partial W_f;A) \cong H^4(W_f;A) \ar[r]& \\
H_0(\partial W_f;A) \cong A \ar[r]& H_0(W_f;A) \cong A \ar[r]& \{0\}&}
$$
in our proof of Theorem \ref{thm:4} again where we apply Poincar\'e duality theorem for compact and simply-connected manifolds $W$ and $W_f$ for several isomorphisms of groups (modules over $A$) and Proposition \ref{prop:4} (\ref{prop:4.2}) for example.
The last homomorphism from $H_0(\partial W_f;A)$ into $H_0(W_f;A)$ induced by the inclusion of $\partial W_f$ into $W_f$ is an isomorphism since $\partial W_f$ is connected by Proposition \ref{prop:2} (\ref{prop:2.3}). We have $H_1(W_f,\partial W_f;A) \cong H^4(W_f;A) \cong \{0\}$ by this exact sequence and Poincar\'e duality theorem for the compact, connected and orientable manifold $W_f$. We can see that
${i_{M}}_{\ast}:H_2(M;A) \rightarrow H_2(W;A)$ is an isomorphism from the first sequence together with Poincar\'e duality theorem for the compact, connected and orientable manifold $W$ and Proposition \ref{prop:4} (\ref{prop:4.2}).
We set $A:=\mathbb{Z}/q\mathbb{Z}$, of order $q$, where $q={q_0}^{k}$ is an arbitrary power of some prime number $q_0$ with a positive integer $k>0$. We take a basis ${\mathcal{B}}_{M,q}$ of $H_2(M;\mathbb{Z}/q\mathbb{Z})$. Each element of ${\mathcal{B}}_{M,q} \subset H_2(M;\mathbb{Z}/q\mathbb{Z})$ is represented by a smoothly embedded $2$-dimensional sphere in $M$ mapped to $W_f$ satisfying the following two conditions.
\begin{itemize}
\item The restriction of $q_f$ to the $2$-dimensional sphere is a smooth embedding.
\item The intersection of the image of the $2$-dimensional sphere and $\partial W_f$ consists of finitely many points.
\end{itemize}
For each element $e_j \in {\mathcal{B}}_{M,q}$, we have the dual ${{q_f}_{\ast}(e_j)}^{\ast}$ to ${q_f}_{\ast}(e_j)$ as an element of $H^2(W_f;\mathbb{Z}/q\mathbb{Z})$ and its Poincar\'e dual ${\rm PD}({{q_f}_{\ast}(e_j)}^{\ast})$ as an element of $H_3(W_f,\partial W_f;\mathbb{Z}/q\mathbb{Z})$ (where $M$ and $W_f$ are oriented suitably). Note that ${q_f}_{\ast}:H_2(M;\mathbb{Z}/q\mathbb{Z}) \rightarrow H_2(W_f;\mathbb{Z}/q\mathbb{Z})$ is an isomorphism by virtue of the relation $q_f=r_f \circ i_M$ of Proposition \ref{prop:4} (\ref{prop:4.4}) and the fact that the ${i_{M}}_{\ast}:H_2(M;\mathbb{Z}/q\mathbb{Z}) \rightarrow H_2(W;\mathbb{Z}/q\mathbb{Z})$ and ${r_f}_{\ast}:H_2(W;\mathbb{Z}/q\mathbb{Z}) \rightarrow H_2(W_f;\mathbb{Z}/q\mathbb{Z})$ are isomorphisms. We have a basis of $H_2(W_f;\mathbb{Z}/q\mathbb{Z})$ canonically. As we do in the proof of Main Theorem 1 of \cite{kitazawa4}, we canonically have the Poincar\'e dual ${\rm PD}({e_j}^{\ast}) \in H_4(M;\mathbb{Z}/q\mathbb{Z})$ to the dual ${e_j}^{\ast} \in H^2(M;\mathbb{Z}/q\mathbb{Z})$ to $e_j$. More precisely, according to Proposition \ref{prop:4} (\ref{prop:4.4}), $r_f$ is the projection of a bundle over $W_f$ whose fiber is diffeomorphic to $D^{2}$ and we consider a kind of {\rm prism operators} or arguments on so-called {\it Thom classes} to obtain an element of $H_4(W,M;\mathbb{Z}/q\mathbb{Z})$ and take the value of the boundary homomorphism there to obtain a desired element.
Here we consider the intersection theory for $M$ and $W_f$ as we do in the proof of this theorem of \cite{kitazawa4,kitazawa5}. The degree of ${\rm PD}({{q_f}_{\ast}(e_j)}^{\ast})$ is $3$, the dimension of $W_f$ is $5$, and we have $3+3-5=1$. From this, we may regard the Poincar\'e dual to the cup product of two duals to elements of ${\mathcal{B}}_{M,q}$ as a sum of homology classes represented by the preimages of circles in ${\rm Int}\ W_f$ or of $1$-dimensional compact and connected manifolds diffeomorphic to a closed interval in $W_f$. Moreover, for these $1$-dimensional compact and connected manifolds diffeomorphic to a closed interval, we may assume the following properties. Note that a similar situation appears in the proof of Theorem \ref{thm:4}.
\begin{itemize}
\item The boundaries are embedded in $\partial W_f$.
\item The interiors are embedded in ${\rm Int}\ W_f$.
\item For the map $q_f$, the preimages are $2$-dimensional spheres where they are regarded as the manifolds of the domains of Morse functions with exactly two singular points.
\end{itemize}
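For the reader's convenience, the dimension count behind this intersection-theoretic step can be recorded explicitly (a routine computation, assuming the representing $3$-dimensional submanifolds dual to the classes are made transverse):

```latex
\dim \left( {\rm PD}({{q_f}_{\ast}(e_{j_1})}^{\ast}) \cap {\rm PD}({{q_f}_{\ast}(e_{j_2})}^{\ast}) \right)
= 3 + 3 - \dim W_f = 3 + 3 - 5 = 1.
```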
The preimages of the circles in ${\rm Int}\ W_f$ are regarded as the boundaries of the total spaces of trivial bundles over a copy of $D^2$ in ${\rm Int}\ W_f$ whose fibers are circles, given by $q_f$. Remember that $W_f$ is simply-connected and that its dimension is $5$ and sufficiently high.
We now investigate each $2$-dimensional sphere ${S^2}_{j^{\prime}}$ obtained above as the preimage of a closed interval in $W_f$.
$W_f$ is simply-connected and $\partial W_f$ is connected. This implies the existence of a (smooth) homotopy $H_{f,j^{\prime}}:{S^2}_{j^{\prime}} \times [0,1] \rightarrow W_f$ from the composition of the original embedding of each $2$-dimensional sphere ${S^2}_{j^{\prime}}$ here into $M$ with $q_f$ to a constant map whose image is a one-point set in $\partial W_f$. For $0 \leq t \leq 1$, we can define $S_{{\rm H},f,j^{\prime}}(t)$ as the set of all points in ${S^2}_{j^{\prime}}$ where the pairs of the points and $t$ are mapped into $\partial W_f$ by this homotopy. We may assume the property $S_{{\rm H},f,j^{\prime}}(t_1) \subset S_{{\rm H},f,j^{\prime}}(t_2)$ for any $0<t_1<t_2<1$. The original embedding of the $2$-dimensional sphere ${S^2}_{j^{\prime}}$ into $M$ is shown to be (smoothly) null-homotopic. This completes the proof of Theorem \ref{thm:5} for $A:=\mathbb{Z}/q\mathbb{Z}$.
Let $A:=\mathbb{Z}$. $H_2(M;\mathbb{Z})$ and $H_2(W_f;\mathbb{Z})$ are isomorphic as commutative groups, as in the previous case where $A$ is a finite field. They are isomorphic to the direct sum of a free commutative group $G_{\rm Free}$ and a finite commutative group $G_{\rm Finite}$. $G_{\rm Finite}$ is isomorphic to a direct sum of finitely many cyclic groups, each of which is of order $p_{j^{\prime \prime}}={{p_{j^{\prime \prime}}}_0}^{l_{j^{\prime \prime}}}$ for a suitable prime ${p_{j^{\prime \prime}}}_0$ and a suitable integer $l_{j^{\prime \prime}}>0$. This is due to a fundamental classification theorem for finitely generated commutative groups. Abusing the notation, we also write $G_{\rm Free}$ and $G_{\rm Finite}$ for the summands of suitable decompositions of $H_2(M;\mathbb{Z})$ and $H_2(W_f;\mathbb{Z})$ into internal direct sums. We can define a similar basis ${\mathcal{B}}_M:=\{e_j\}_j \subset G_{\rm Free} \subset H_2(M;\mathbb{Z})$ for the summand $G_{\rm Free} \subset H_2(M;\mathbb{Z})$ and we can argue similarly. This is also similar to a main argument in the proof of Main Theorem \ref{mthm:1}. We have the Poincar\'e dual ${\rm PD}({e_j}^{\ast}) \in H_4(M;\mathbb{Z})$ as before, where we abuse the previous notation. As in the proof of Main Theorem \ref{mthm:1}, this is represented by a $4$-dimensional, closed, connected and oriented manifold regarded as the manifold of the domain of a special generic map into a $3$-dimensional non-closed and orientable manifold with no boundary. This comes from the fact that
each homology class of $H_3(W_f,\partial W_f;\mathbb{Z})$ is represented by a $3$-dimensional compact, connected and oriented smooth manifold. This is also a very explicit case of \cite{thom}. We can argue as in the case where $A$ is a finite field to complete the proof of Theorem \ref{thm:5}. This also reviews some ingredients of the proof in \cite{kitazawa5}.
We can also prove Theorem \ref{thm:5} in the case where the dimension $m$ is greater than $6$. This is in \cite{kitazawa4}.
\end{proof}
\begin{proof}[A proof of Main Theorem \ref{mthm:3}]
We abuse the notation similarly.
According to our proof of Theorem \ref{thm:5}, the existence of a special generic map $f:M \rightarrow {\mathbb{R}}^5$ here yields the isomorphism $q_f:H_2(M;\mathbb{Z}) \rightarrow H_2(W_f;\mathbb{Z})$ together with Proposition \ref{prop:4} (\ref{prop:4.4}) and the relation $q_f=r_f \circ i_M$ for the map $r_f:W \rightarrow W_f$ giving the collapsing.
The 2nd Stiefel-Whitney class of $M$ is an element of $H^2(M;\mathbb{Z}/2\mathbb{Z})$ and Theorem \ref{thm:4} implies that it is zero. Furthermore, the 1st Pontrjagin class of $M$, which is oriented in an arbitrary way, is an element in $H^4(M;\mathbb{Z})$ and zero by Theorem \ref{thm:4}. The topology and the differentiable structure of this $6$-dimensional, closed and simply-connected manifold $M$ are determined by an element $\gamma \in H^4(M;\mathbb{Z})$. Furthermore, $4\gamma$ and the 1st Pontrjagin class of (an arbitrary oriented) $M$ agree. This is due to general theory of classifications of $6$-dimensional, closed and simply-connected manifolds. Consult \cite{jupp,wall,zhubr,zhubr2} again. By the assumption on elements of finite orders of the homology group $H_2(M;\mathbb{Z})$, $\gamma$ must be zero. Note that $H^4(M;\mathbb{Z})$ is isomorphic to $H_2(M;\mathbb{Z})$ by virtue of Poincar\'e duality theorem.
To complete the proof, it is sufficient to construct a special generic map $f:M \rightarrow {\mathbb{R}}^5$ on a $6$-dimensional closed and simply-connected manifold $M$ such that $H_2(M;\mathbb{Z})$ is isomorphic to an arbitrary finite commutative group $G$ and that $H_3(M;\mathbb{Z})$ is of rank $2l$ for an arbitrary non-negative integer $l \geq 0$. Note that $H_3(M;\mathbb{Z})$ is isomorphic to the direct sum of a free commutative group of rank $2l$ and a group isomorphic to $G$ by virtue of universal coefficient theorem and Poincar\'e duality theorem.
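The claimed structure of $H_3(M;\mathbb{Z})$ can be recorded as a short computation, combining Poincar\'e duality with the universal coefficient theorem under the hypotheses above (a sketch for the reader's convenience):

```latex
H_3(M;\mathbb{Z}) \cong H^3(M;\mathbb{Z})
\cong {\rm Hom}(H_3(M;\mathbb{Z}),\mathbb{Z}) \oplus {\rm Ext}(H_2(M;\mathbb{Z}),\mathbb{Z})
\cong \mathbb{Z}^{2l} \oplus G.
```

Here the Ext term is isomorphic to $G$ since $H_2(M;\mathbb{Z}) \cong G$ is finite, and the Hom term is the free part of $H_3(M;\mathbb{Z})$, of rank $2l$.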
We show an exact sequence as in the proof of Theorem \ref{thm:5}
$$\xymatrix{ H_4(W;\mathbb{Z}) \cong H_4(W_f;\mathbb{Z}) \cong H^1(W_f,\partial W_f;\mathbb{Z}) \cong \{0\} \ar[r]&\\ H_4(W,M;\mathbb{Z}) \cong H^3(W;\mathbb{Z}) \cong H^3(W_f;\mathbb{Z}) \ar[r]^{{\partial}_{\ast}}& \\
H_3(M;\mathbb{Z}) \ar[r]^{{i_M}_{\ast}} &\\ H_3(W;\mathbb{Z}) \cong H_3(W_f;\mathbb{Z}) \ar[r] &\\ H_3(W,M;\mathbb{Z}) \cong H^4(W;\mathbb{Z}) \cong H^4(W_f;\mathbb{Z}) \cong H_1(W_f,\partial W_f;\mathbb{Z}) \cong \{0\} \ar[r]&}
$$
where we abuse $W$ and apply some important propositions and theorems as before.
The rank of $H_3(M;\mathbb{Z})$ is shown to be twice the rank of $H_3(W_f;\mathbb{Z})$.
By a fundamental argument on handles in the smooth category, we easily obtain a $5$-dimensional, compact and simply-connected manifold $W_f$ as in Proposition \ref{prop:3} satisfying the following conditions.
\begin{itemize}
\item The boundary $\partial W_f$ is not empty and it is connected.
\item $H_2(W_f;\mathbb{Z})$ is isomorphic to $G$.
\item $H_3(W_f;\mathbb{Z})$ is of rank $l$.
\item $W_f$ collapses to a $3$-dimensional polyhedron and has the (simple) homotopy type of a $3$-dimensional polyhedron.
\item The tangent bundle of $W_f$ is trivial, and by well-known studies of smooth immersions by Hirsch for example, we can smoothly immerse $W_f$ into ${\mathbb{R}}^5$.
\end{itemize}
The classification theory tells us that the rank of the $3$rd homology group, with coefficient ring $\mathbb{Z}$, of a $6$-dimensional closed and simply-connected topological manifold must be even. In addition, it also tells us that the manifold is represented as a connected sum of another $6$-dimensional closed and simply-connected manifold whose 2nd homology group with coefficients in $\mathbb{Z}$ is finite and finitely many copies of $S^3 \times S^3$. Furthermore, more rigorously, the connected sum here is considered in the topology category if the manifold is a topological manifold, in the PL category if the manifold is a PL manifold, and in the smooth category if the manifold is a smooth manifold.
By applying Proposition \ref{prop:3} where the internal smooth bundle and the boundary linear bundle over $W_f$ are trivial, we have a desired special generic map on a desired $6$-dimensional closed and simply-connected manifold $M$.
This completes our proof.
\end{proof}
For other explicit constructions of special generic maps for Main Theorems \ref{mthm:2} and \ref{mthm:3}, and on a $6$-dimensional closed and simply-connected manifold $M$ such that $H_j(M;\mathbb{Z})$ is not free, apply Proposition \ref{prop:3} with some $5$-dimensional compact and connected manifolds whose boundaries are connected and which we can smoothly immerse into ${\mathbb{R}}^5$, presented in \cite{kitazawa2,kitazawa3}, or with ones we can easily obtain from these examples.
\begin{proof}[A proof of Main Theorem \ref{mthm:4}]
As the proofs of other (Main) Theorems, we abuse the notation.
We prove (\ref{mthm:4.1}). For $m=5,6$, this is already shown as Theorem \ref{thm:1} (\ref{thm:1.3}) and Main Theorem \ref{mthm:1}. Let $m \geq 7$. As in the proof of Main Theorem \ref{mthm:1}, we see that $W_f$ is a $4$-dimensional compact and simply-connected manifold smoothly immersed into ${\mathbb{R}}^4$. We also see that ${q_f}_{\ast}:H_2(M;\mathbb{Z}) \rightarrow H_2(W_f;\mathbb{Z})$ is an isomorphism and that these groups are free. Theorem \ref{thm:2} (\ref{thm:2.1}) also implies that ${q_f}_{\ast}:H_j(M;\mathbb{Z}) \rightarrow H_j(W_f;\mathbb{Z})$ is an isomorphism for $0 \leq j \leq m-4$.
$W_f$ has the homotopy type of a $3$-dimensional polyhedron.
Thus $H_j(M;\mathbb{Z})$ is the trivial group if $j \neq 0,2,3,m-3,m-2,m$ where we apply Poincar\'e duality theorem. It also follows that $H_3(M;\mathbb{Z})$ is free.
We can take a basis of $H_{j_0}(W_f;\mathbb{Z})$ for $j_0=2,3$. Each element of the basis is represented by a $j_0$-dimensional, closed, connected and oriented manifold smoothly embedded in ${\rm Int}\ W_f$. The Poincar\'e dual to the element is represented by a ($4-j_0$)-dimensional, compact, connected and oriented manifold smoothly embedded in $W_f$ satisfying the following condition: the boundary is embedded in the boundary $\partial W_f$ and the interior is embedded in the interior ${\rm Int}\ W_f$. The preimage ${q_f}^{-1}(Y)$ of the ($4-j_0$)-dimensional manifold $Y$ is regarded as the domain of a special generic map into a ($4-j_0$)-dimensional, non-closed and orientable manifold with no boundary by the definition and local properties on the structure of a special generic map. This is due to the related similar exposition in the proof of Main Theorem \ref{mthm:1}. As a result the special generic map is regarded as one into ${\mathbb{R}}^{4-j_0}$ by Propositions \ref{prop:2} and \ref{prop:3}. The (oriented) manifold ${q_f}^{-1}(Y)$ bounds an oriented compact manifold $W$ by Proposition \ref{prop:4} (\ref{prop:4.1}). The manifold $W$ may not be smooth. However, this causes no problem. The ($m-j_0$)-th homology class represented by this manifold is regarded as the Poincar\'e dual to the class $({q_f}^{\ast})^{-1}(e_Y)$ where $e_Y \in H_{j_0}(W_f;\mathbb{Z})$ denotes the original element of the basis.
$Y$ is embedded smoothly in $W_f$ with a trivial normal bundle by an explicit fundamental argument on linear bundles. As a result, the manifold ${q_f}^{-1}(Y)$ is, as a submanifold, embedded smoothly in $M$ with a trivial normal bundle. In fact, we need sophisticated theory of local properties of special generic maps and of more general smooth maps here, due to Thom for example: the so-called Thom isotopy theorem. ${q_f}^{-1}(Y)$ is a homotopy sphere or a manifold as in Theorem \ref{thm:1} (\ref{thm:1.1}) by Reeb's theorem and Theorem \ref{thm:1} (\ref{thm:1.1}). Note also that the set of all $({q_f}^{\ast})^{-1}(e_Y)$ here, obtained by considering every $e_Y$, is regarded as a basis of the group $H_{m-j_0}(M;\mathbb{Z})$.
Thus by several fundamental arguments on linear bundles, the ($m-j_0$)-th Stiefel-Whitney class of $M$ is the zero class. In the case where $m-j_0$ is divisible by $4$, for any oriented $M$, the ($\frac{m-j_0}{4}$)-th Pontrjagin class of $M$ is the zero class.
Remember that a similar and more explicit fact has been proved in the proof of Main Theorem \ref{mthm:1} and that we adopt an exposition which is a bit different from the original one.
The 3rd Stiefel-Whitney class of $M$ is the zero element since $H_2(M;\mathbb{Z})$ is free. This is due to general theory of linear bundles. $H_j(M;\mathbb{Z})$ has been shown to be the trivial group if $j \neq 0,2,3,m-3,m-2,m$. Corollary 3.18 of \cite{saeki} says that the $j$-th Stiefel-Whitney class of $M$ is the zero element for $j > m-4+1=m-3$ and that the $j$-th Pontrjagin class is the zero element for $j > \frac{m-4+1}{2}$ or $4j > 2(m-3)>m-3$. This completes the proof of (\ref{mthm:4.1}).
We prove (\ref{mthm:4.2}). We construct special generic maps on $M^{7,0}$ into ${\mathbb{R}}^n$ for $n=4,5$. We can easily prepare a smoothly embedded copy of $S^2 \times D^{n-2} \subset {\mathbb{R}}^n$ and one of $S^3 \times D^{n-3}$, as we do in several proofs in the present paper. We apply the construction from the end of the proof of Main Theorem \ref{mthm:1} to obtain the desired special generic maps, for example.
It follows from well-known explicit theory on linear bundles that there exists a linear bundle over $S^4$ whose fiber is the $3$-dimensional unit sphere and whose total space $M$ is a $7$-dimensional closed and simply-connected oriented manifold satisfying the following properties.
\begin{itemize}
\item $M$ is simply-connected and $H_j(M;\mathbb{Z})$ is isomorphic to $\mathbb{Z}$ for $j=0,3,4,7$ and the trivial group for $j=1,2,5,6$.
\item The $1$st Pontrjagin class, defined uniquely as an element of $H^4(M;\mathbb{Z})$, is $4k$ times a generator of $H^4(M;\mathbb{Z})$.
\end{itemize}
By the structure of the bundle before and Proposition \ref{prop:3}, we have a special generic map $f$ whose image is the $5$-dimensional manifold $W_f$, diffeomorphic to $S^4 \times D^1$.
As a fundamental general fact, we note that reversing the orientation of a given oriented manifold changes the sign of the $1$st Pontrjagin class of the manifold.
To obtain a desired family $\{M^{7,\lambda}\}_{\lambda \in \Lambda}$ and special generic maps on these manifolds into ${\mathbb{R}}^5$, we apply a similar method of construction. In other words, we use this new map and the previously presented maps into ${\mathbb{R}}^5$ whose images are copies of $S^2 \times D^3$ and $S^3 \times D^2$ smoothly embedded in ${\mathbb{R}}^5$. We consider connected sums of the $7$-dimensional manifolds in general.
This completes the proof.
\end{proof}
\section{Final remarks.}
\begin{Rem}
\label{rem:1}
Related to Theorem \ref{thm:1} (\ref{thm:1.2}), we can construct special generic maps into ${\mathbb{R}}^3$ on manifolds represented as connected sums of the total spaces of linear bundles over $S^2$ whose fibers are diffeomorphic to $S^k$ for $k \geq 2$. This can be checked through the original paper and is also a good exercise. Nishioka's construction in the proof of Main Theorem \ref{mthm:1} is also regarded as a higher dimensional version.
\cite{saeki} also shows that a closed and simply-connected manifold whose dimension is greater than $3$ admitting a special generic map into ${\mathbb{R}}^3$ must be represented as a connected sum of the total spaces of smooth bundles over $S^2$ whose fibers are either of the following two (where the connected sum is considered in the smooth category). We have encountered arguments of this type several times in the present paper.
\begin{itemize}
\item A homotopy sphere whose dimension is greater than $1$ and not $4$.
\item A $4$-dimensional standard sphere.
\end{itemize}
Note also that $W_f$ in Propositions \ref{prop:2} and \ref{prop:3} for a special generic map $f$ must be represented as a boundary connected sum of finitely many copies of $S^2 \times D^1$. Note also that in \cite{saeki}, the so-called Poincar\'e conjecture for $3$-dimensional spheres was regarded as unsolved, whereas we use the affirmative answer here.
\end{Rem}
\begin{Rem}
\label{rem:2}
Besides Main Theorem \ref{mthm:4}, we do not know variants of our Main Theorems for closed and simply-connected manifolds whose dimensions are greater than $6$. It seems difficult to find explicit classifications of closed and simply-connected manifolds of certain classes in general, and even where such classifications exist, it seems difficult to apply them. \cite{kreck} is a result for $7$-dimensional ones whose 2nd homology groups with coefficients in $\mathbb{Z}$ are free. We do not know how to use this to obtain similar results or variants of Main Theorems \ref{mthm:1}, \ref{mthm:2} and \ref{mthm:3}.
It is well-known that the differentiable structures of homotopy spheres admitting special generic maps into Euclidean spaces whose dimensions are sufficiently high and lower than the dimensions of the homotopy spheres are strongly restricted. For this see \cite{calabi}, (section 4 of) \cite{saeki}, \cite{saeki2} and \cite{wrazidlo} for example. Theorem \ref{thm:1} (\ref{thm:1.2}) is also regarded as a related fact in the case where the dimension of the manifold of the domain is $m=4$. For related examples of $4$-dimensional manifolds which are homeomorphic but not diffeomorphic to such manifolds, see also \cite{saekisakuma2} for example.
\end{Rem}
Related to some of Remark \ref{rem:2} and our main study, \cite{kitazawa4,kitazawa5} mainly concern some other restrictions on cohomology rings of closed and simply-connected manifolds of dimensions at least $6$. Some arguments in the present study are due to them.
\section{Acknowledgement.}
\label{sec:4}
The author is a member of the project JSPS KAKENHI Grant Number JP17H06128 ``Innovative research of geometric topology and singularities of differentiable mappings'' (Principal investigator: Osamu Saeki). The present study is supported by this project. We also declare that data supporting the present study are essentially all in the present paper.
\section{INTRODUCTION}
In the last two decades, robots have gained sufficient social trust and are extensively collaborating with humans in certain tasks that combine the cognitive abilities of humans with the precision and strength of robots \cite{c1}\cite{c2}. However, such social trust in robots requires endowing them with multi-sensory information, especially visuo-tactile feedback, to make instant decisions, detect obstacles, recognize human intervention and adapt to varying environments proactively \cite{c3}\cite{c4}. The visuo-tactile data thus enables the robot to understand non-verbal cues and to collaborate more intuitively from the human perspective.
Exploiting visuo-tactile information in cluttered environments for a human-robot joint carrying task, a framework is proposed in \cite{c5}. Under this approach, the robot tasks are defined using the standard stack-of-tasks (SoT) formulation but without taking into account human intuition and ergonomics. In the same way, a modified technique is presented in \cite{c6} for industrial assembly tasks, wherein adaptive gains and homotopy are introduced in conjunction with visuo-tactile data for switching human-robot roles smoothly according to task requirements; however, it does not consider progressive mutations in the agents' (i.e., robot and human) behavior and environment. Hence, to the best of our knowledge, there is no intuitive task formulation framework available that considers human ergonomics and task progress in planning hierarchical robot actions, thereby exploiting visuo-tactile perception for fine co-manipulation tasks. The results presented here are part of our recent works in \cite{c7}\cite{c8}.
\section{Research Methodology}
\begin{figure}[t]
\centering
\includegraphics[width=8.5cm]{Research_Methodology_New.png}
\caption{Proposed research methodology on formulating intuitive robot actions using the idea of SoT for co-manipulation, using visual data for human intuition estimation and object detection and tactile sensing for flexible object interaction. Human gestures are determined using a standard skeleton tracking algorithm. For object detection, the RANSAC algorithm together with an SVM classifier is used, while tactile sensing modulates the gripper's force profile to avoid object slippage based on the friction cone criterion.}
\label{three}
\vspace{-15pt}
\end{figure}
The robot tasks are defined in two groups, i.e., the Cartesian tasks accounting for the position and orientation of the end-effector and the force tasks ensuring flexible and adaptive interaction with the human and the objects in the environment. All the primary and secondary tasks are defined in a stack with hard and soft priorities assigned to each at different levels and are executed sequentially, following a standard hierarchical control formulation called SoT. The tasks in the SoT framework are Quadratic Programming (QP) problems and are formulated according to \cite{c9}\cite{c10}.
\begin{figure*}[h]
\centering
\includegraphics[width=17cm]{group5.png}
\caption{Visuo-tactile perception in formulating intuitive robotic tasks: (a) represents the aligned depth map with skeleton tracking of the human subject performing desired actions on the object in the scene, (b) is the point cloud of the scene with objects recognized and their poses enumerated locally, (c) is the 3D deformation output of the sponge-based tactile sensors attached to the gripper and the mapped force profile of the gripper using a shallow neural network for interaction modulation.\\}
\label{raw}
\vspace{-15pt}
\end{figure*}
\begin{figure*}[h]
\centering
\includegraphics[width=17cm]{third_group.png}
\caption{Robot collaborating with a human on detaching a marker from its cap: (a) represents the initial configuration of all the actors (human, robot and object), (b) illustrates the human gripping the marker from its bottom, in (c) the robot arm assumes a pre-grasping posture based on the pose of the active human arm wrist, (d) shows the robot arm grasping the cap at its centroid, in (e) the human pulls down the marker while the gripper holds the cap, in (f) the gripper releases the cap on detecting an open human palm, in (g) the robot arm returns to the homing position following its human partner. (Video: \url{https://www.youtube.com/watch?v=j41rgnEavx4}).}
\label{nine}
\vspace{-15pt}
\end{figure*}
The defined tasks are subsequently augmented with visual and tactile information for intuitive decision making. Hence, a standard skeleton tracking algorithm (i.e., deep CONVNETs) \cite{c11} is used for tracing the gestures of the active human arm using an aligned depth map captured with the RGBD tracking camera in Fig. \ref{raw} (a), which registers the poses of 18 human joints in the local camera frame. Next, a modified RANSAC algorithm together with a Support Vector Machine (SVM) classifier is used for object semantic segmentation and recognition respectively, and the poses of candidate objects are enumerated by computing the centroid of their processed point cloud captured with the RGBD detection camera in Fig. \ref{raw} (b). Moreover, for the interaction tasks defined in SoT, i.e., grasping and manipulation, the tactile feedback is explicitly used. The installed tactile sensors \cite{c12} provide a 3D deformation output which is mapped to the gripper's force profile using a shallow neural network (with 5 hidden neurons), in Fig. \ref{raw} (c). All the sensory observations are primarily in local reference frames and are transformed into the robot base frame using suitable transformations for homogeneous computations.
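The deformation-to-force mapping described above can be sketched as a one-hidden-layer regressor. This is a minimal illustration only: the five tanh hidden units mirror the 5-hidden-neuron network mentioned in the text, but the function name, the activation choice and the weights below are our own placeholders, not the trained parameters from the experiments.

```python
import math
import random

def gripper_force(deformation, W1, b1, W2, b2):
    """Map a 3D tactile deformation vector to a scalar force command."""
    # hidden layer: five tanh units, one per hidden neuron
    hidden = [math.tanh(sum(w * x for w, x in zip(row, deformation)) + b)
              for row, b in zip(W1, b1)]
    # linear output: a single scalar force command for the gripper
    return sum(w * h for w, h in zip(W2, hidden)) + b2

# random placeholder weights (3 inputs -> 5 hidden -> 1 output)
random.seed(0)
W1 = [[random.gauss(0.0, 1.0) for _ in range(3)] for _ in range(5)]
b1 = [0.0] * 5
W2 = [random.gauss(0.0, 1.0) for _ in range(5)]
force = gripper_force([0.1, -0.2, 0.05], W1, b1, W2, 0.0)
```

In the actual system the weights would of course be fitted to pairs of measured 3D deformations and force commands before deployment.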
\section{Results and Discussion}
To better evaluate the reliability and robustness of the proposed intuitive task formulation, an industrial test scenario, i.e., removing a marker from its cap, is considered, as shown in Fig. \ref{nine}. At first, all the actors, i.e., robot, human and environment, are in their initial configurations in Fig. \ref{nine} (a), and then the human tries to grip the marker from the bottom in Fig. \ref{nine} (b), which is determined by the tracking camera, and thus the robot arm assumes the pre-grasping posture (i.e., 40 cm above the active human arm wrist) following the human arm gesture in Fig. \ref{nine} (c). In this configuration, the detection camera recognizes the cap-marker pair in the scene and estimates their poses, which are used by the robot system to grasp the cap in Fig. \ref{nine} (d). Once the contact is established with the cap-marker pair, the robot arm lifts it up (20 cm) in Fig. \ref{nine} (e) to provide sufficient space for the human partner to perform the required action (pulling down) on the marker in Fig. \ref{nine} (f). After completing the required task, the robot arm returns to its homing position following the human gesture in Fig. \ref{nine} (g).
\section{CONCLUSIONS}
This research proposed to exploit visuo-tactile information in formulating robotic tasks in accordance with human intuitions. First, the visual feedback from the tracking camera (Intel RealSense D435), using an aligned depth map, estimated the gesture of the active human arm to guide the robot arm to cooperate accordingly, and then the detection camera (Intel RealSense D415) enumerated the object pose from the filtered point cloud, which was sent to the gripper for the respective grasping action. With the object grasped, as detected by the tactile sensors, the human executed the designated task while the tactile sensors modulated the gripper's force profile to maintain the desired continuous contact with the object consistently.
\section{Introduction}
A \textbf{tournament} $ T=(V(T), A(T)) $ is a directed graph obtained by orienting the edge set of a (possibly infinite) complete
undirected graph. A directed cycle is called a \textbf{dicycle} for short. We use some basic set theoretic conventions. We consider functions $ f $ as sets of ordered pairs where $ \left\langle x,y \right\rangle\in f $ and $\left\langle x,z \right\rangle\in f $ imply $ y=z $. For a finite or infinite cardinal $ \kappa $ let $ \mathsf{exp}_0(\kappa)= \kappa $ and let $\boldsymbol{\mathsf{exp}_{k+1}(\kappa)}=2^{\mathsf{exp}_{k}(\kappa)} $. Remember that a cardinal is the set of the ordinals that are smaller than itself, for example $ 3=\{ 0,1,2 \} $. A $ \boldsymbol{\kappa} $\textbf{-edge-colouring} of a tournament $T$ is a function $c:A(T)\rightarrow \kappa$.
A \textbf{monochromatic path} is a directed path (repetition of vertices is not allowed) with edges having the same colour. We call a dicycle \textbf{quasi-monochromatic} if all but at most one of its edges have the same colour.
Our investigation was motivated by the following conjecture of Erd\H{o}s \cite[p. 274]{sands1982monochromatic}.
\begin{conjecture}[Erd\H{o}s] \label{conj:erdos}
For every positive integer $k$ there is a (least) positive integer $f(k)$ so that every $k$-edge-coloured finite tournament admits a subset $S\subseteq V(T)$ of size at most $f(k)$ such that $S$ is reachable from every vertex by a monochromatic path.
\end{conjecture}
It is known that $f(1)=f(2)=1$, and there is an example showing that $f(3)\geq 3$ (see \cite{sands1982monochromatic}). However, there
is no known constant upper bound for $f(3)$, although it is conjectured to be $3$ by Erd\H{o}s. As a weakening of the original conjecture, we
consider source-sink pairs instead of one sink set $ S $. However, we may add bounds on the length of the monochromatic paths. More
precisely, a \textbf{king-serf duo by monochromatic paths} consists of disjoint vertex sets $K,S\subseteq V(T)$ so that every vertex $ v
$ has a monochromatic path of length at most two from $K$ to $v$ or from $v$ to $S$. The \textbf{size} of the duo is defined as
$|K|+|S|$. An edge $ uv $ of an edge-coloured tournament $T$ is called \textbf{forbidding} if there is no monochromatic path of length at
most two from $v$ to $ u $. Note that if $ T' $ is a subtournament of $ T $ containing a forbidding edge $ uv $, then $ uv $ is
a forbidding edge with respect to $ T' $ as well.
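Since the length bound is two, whether an edge is forbidding can be checked directly from the colouring. A minimal sketch of this check (our own encoding: the finite tournament is given as a dictionary `colour` mapping each oriented edge `(x, y)` to its colour, and the function name is hypothetical):

```python
def is_forbidding(u, v, vertices, colour):
    """Edge u -> v is forbidding iff there is no monochromatic directed
    path of length at most two from v back to u."""
    if (v, u) in colour:  # a path of length one
        return False
    for w in vertices:  # a path of length two through w
        if (v, w) in colour and (w, u) in colour \
                and colour[(v, w)] == colour[(w, u)]:
            return False
    return True

# the dicycle 0 -> 1 -> 2 -> 0: with two colours the edge 0 -> 1 is
# forbidding, with one colour the path 1 -> 2 -> 0 is monochromatic
two_colours = {(0, 1): 0, (1, 2): 1, (2, 0): 0}
one_colour = {(0, 1): 0, (1, 2): 0, (2, 0): 0}
```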
The main result of the paper is the following.
\begin{theorem}\label{thm:main}
For every (finite or infinite) cardinal $ \kappa $ there is a cardinal $ \lambda_\kappa \leq \mathsf{exp}_{10}(\kappa) $ such that in every $\kappa$-edge-coloured tournament there exists a king-serf duo by monochromatic paths of size at most $ \lambda_\kappa $. For finite $ \kappa $ one can guarantee $ \lambda_\kappa \leq \kappa^{62500\kappa} $.
\end{theorem}
The rest of the paper is organized as follows. In Section~\ref{sec:previous}, we give an overview of previous results. Theorem~\ref{thm:main} is then proved in Section~\ref{sec:main}.
\section{Previous work} \label{sec:previous}
Given a digraph $D=(V,A)$, an independent set $K\subseteq V$ is called a kernel if it is absorbing, that is,
there exists a directed edge from $K$ to $v$ for every $v\in V-K$. Kernels were introduced by Von Neumann and Morgenstern \cite{neumann44} in relation to game theory.
The concept of kernels was generalized by Galeana-S\'anchez \cite{galeana96} for edge-coloured digraphs. In the coloured case, independence and absorbency are only required with respect to monochromatic paths, hence these sets are called kernels by monochromatic paths. The existence of such kernels is widely studied, see \cite{galeana98}-\cite{galeana09b}, \cite{minggang88}. The case when $K$ is an absorbing set but not necessarily independent by monochromatic paths is also of interest. Since an absorbing set always exists in a $k$-coloured digraph, a natural problem is to find one with minimum size, which motivates the conjecture of Erd\H{o}s (Conjecture~\ref{conj:erdos}). In \cite{sands1982monochromatic}, Sands, Sauer and Woodrow proved that every $2$-edge-coloured tournament admits an absorbing vertex, and also presented a $3$-edge-coloured tournament in which the minimum size of an absorbing set is $3$. They conjectured that every $3$-edge-coloured tournament without polychromatic dicycles of length $3$ has an absorbing vertex. Minggang \cite{minggang88} verified a slightly different version of the conjecture, claiming that any $k$-edge-coloured tournament without polychromatic (not necessarily directed) cycles of length $3$ contains an absorbing vertex. Meanwhile, examples show that for every $k\geq 5$ there exists a $k$-edge-coloured tournament without polychromatic dicycles of length $3$ that has no absorbing vertex. Galeana-S\'anchez \cite{galeana96} proved that if each directed cycle of length at most $4$ in a $k$-edge-coloured tournament $T$ is quasi-monochromatic, then $T$ has an absorbing vertex. In his PhD thesis \cite{bland11}, Bland provided several sufficient conditions for the existence of an absorbing vertex in a $k$-edge-coloured tournament. He also gave a sufficient condition for the existence of an absorbing set of size $3$ in $3$-edge-coloured tournaments.
Quasi-kernels are possible weakenings of kernels. An independent set $K\subseteq V$ is a quasi-kernel if for each vertex $v\in V-K$ there exists a path of length at most $2$ from $K$ to $v$ (quasi-sink sets can be defined analogously). The fundamental theorem of Chv\'atal and Lov\'asz \cite{lovasz74} shows that every finite digraph contains a quasi-kernel. In \cite{soukup09}, P.L. Erd\H{o}s and Soukup studied the existence of quasi-kernels in infinite digraphs. As the plain generalization of the Chv\'atal-Lov\'asz theorem fails even for tournaments, they considered the problem of finding a partition $V=V_1\cup V_2$ of the vertex set such that the induced subgraph $D[V_1]$ has a quasi-kernel and $D[V_2]$ has a quasi-sink. The authors conjectured that such a partition exists for any (possibly infinite) digraph. They verified that every (possibly infinite) directed graph $D=(V,A)$ contains two disjoint, independent subsets $K$ and $S$ of $V$ such that for each node $v\in V$ there exists a path of length at most $2$ from $K$ to $v$ or from $v$ to $S$, but the conjecture is still open.
The motivation of our investigations was to combine the notion of absorbing sets by monochromatic paths with that of quasi-kernels and quasi-sinks, which led to the definition of a king-serf duo by monochromatic paths, and to prove an analogue of Conjecture~\ref{conj:erdos}.
\section{Proof of Theorem~\ref{thm:main}} \label{sec:main}
The proof relies on the following theorem due to Erd\H{o}s, Hajnal and P\'osa \cite{erdos1975strong} (finite case) and Hajnal \cite{hajnal1991embedding} (infinite case).
\begin{theorem}[Erd\H{o}s, Hajnal and P\'osa] \label{thm:hajnal}
For every finite simple graph $ H $ and cardinal $ \kappa>0 $ there is a simple graph $ G $ of size at most $ \mathsf{exp}_{\left|V(H)\right|+5}(\kappa) $ (at most $ \kappa^{500\left|V(H)\right|^{3}\kappa} $ in the finite case) such that in any $ \kappa $-edge-colouring of $ G $ one can find a monochromatic induced subgraph isomorphic to $ H $.
\end{theorem}
With the help of Theorem~\ref{thm:hajnal}, first we prove the following.
\begin{lemma}\label{lem:quasi}
For every cardinal $ \kappa>0 $ there exists a tournament $ T_\kappa $ of size at most $\mathsf{exp}_{10}(\kappa) $ (at most $ \kappa^{62500\kappa} $ in the finite case) such that in any $ \kappa $-edge-colouring of $ T_\kappa $ there exists a quasi-monochromatic dicycle of length three.
\end{lemma}
\begin{proof}
Pick a graph $G$ ensured by Theorem~\ref{thm:hajnal} for $\kappa$ and $H=C_5$, that is, a cycle of length $5$. Fix a well-ordering of $V(G)$. Let $T_\kappa$ denote the tournament obtained by orienting the edges of $G$ forward according to the ordering, and by adding all missing edges as backward edges. We claim that $T_\kappa$ satisfies the conditions of the lemma.
Take an arbitrary $ \kappa $-edge-colouring of $ T_\kappa$. The choice of $G$ implies that there is a monochromatic (not necessarily directed) cycle $C$ of length $5$ in the graph such that $A(C)$ consists of forward edges, and all the other edges induced by $ V(C) $ in $ T_\kappa $ are backward edges.
No matter how the edges of $C$ are oriented, we can always find a directed path of length two in $A(C)$. Take such a path, say $uv$ and $vw$. These edges together with $wu$ form a quasi-monochromatic dicycle, concluding the proof of the lemma.
\end{proof}
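The observation used in the proof, namely that the forward edges of any orientation of $ C_5 $ contain a directed path of length two, holds for every odd cycle (the edge orientations cannot alternate around a cycle of odd length). It can also be verified by brute force over all $ 2^5 $ orientations; the following standalone check is our own illustration, not part of the proof:

```python
from itertools import product

n = 5  # C5: edge i joins vertex i and vertex (i + 1) % n
for bits in product((0, 1), repeat=n):
    out_edges = {}  # tail vertex -> set of head vertices
    for i, flip in enumerate(bits):
        u, v = (i, (i + 1) % n) if not flip else ((i + 1) % n, i)
        out_edges.setdefault(u, set()).add(v)
    # A directed path of length two is some u -> v with v having an out-edge.
    has_path = any(v in out_edges for u in out_edges for v in out_edges[u])
    assert has_path, bits
print("every orientation of C5 contains a directed path of length two")
```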
Let $ T_\kappa $ be a tournament given by Lemma~\ref{lem:quasi}. We claim that $\lambda_\kappa := |V(T_\kappa)| $ satisfies the conditions of the theorem. Suppose to the contrary that there exists a $ \kappa $-edge-coloured tournament $T$ not containing a king-serf duo by monochromatic paths of size at most $ \lambda_\kappa $.
\begin{lemma}\label{lem:forbidden}
$T$ has a subtournament isomorphic to $ T_\kappa $ consisting of forbidding edges.
\end{lemma}
\begin{proof}
We build up the desired subtournament by transfinite recursion. Let $ V(T_\kappa)=\{ u_\gamma \}_{\gamma< \left|V(T_\kappa)\right|} $. Assume that for some $ \alpha<\left|V(T_\kappa)\right| $ we have already found a $\subset$-increasing chain $ \left\langle f_\beta: \beta<\alpha \right\rangle $ of partial $ T_\kappa \rightarrow T $ embeddings, where $ \mathsf{dom}(f_\beta)=\{ u_\gamma \}_{\gamma<\beta} $ and the images of the edges of $ T_\kappa $ are forbidding edges of $ T $; we construct $ f_\alpha $. If $ \alpha $ is a limit ordinal, we may simply take $ f_\alpha:= \bigcup_{\beta<\alpha}f_\beta $ to keep the conditions. Assume now that $ \alpha=\delta+1 $. Let $ O =\{\gamma<\delta: u_\delta u_\gamma\in A(T_\kappa) \} $. As $T$ is a counterexample, the sets $ K:= \{ f_\delta(u_\gamma) \}_{\gamma\in O} $ and $ S:= \{ f_\delta(u_\gamma) \}_{\gamma \in \delta \setminus O} $ cannot form a king-serf duo by monochromatic paths. Therefore there is a vertex $ v\in V(T) $ such that there is a forbidding edge from $v$ to every element of $ K $, and a forbidding edge from every element of $ S $ to $v$. But then $ f_\alpha = f_{\delta+1}:= f_\delta\cup \{ \left\langle u_\delta,v \right\rangle \} $ maintains the conditions. Finally, the image of $ f:=\bigcup_{\beta<\left|V(T_\kappa)\right|}f_\beta $ gives the desired copy of $ T_\kappa $.
\end{proof}
The $\kappa$-edge-colouring of $T$ defines a $ \kappa $-edge-colouring of its $ T_\kappa $ subtournament as well. Therefore, by the choice of $T_\kappa$, there is a quasi-monochromatic dicycle $ C $ of length three in this copy of $ T_\kappa $. Let $ uv $ denote the edge of $ C $ whose colour differs from that of the other two if $C$ contains two colours, and let $uv$ be an arbitrary edge of $C$ if $C$ is monochromatic. Then $ C-uv $ is a monochromatic path of length two from $ v $ to $ u $, contradicting $ uv $ being a forbidding edge of $ T$. This finishes the proof of Theorem~\ref{thm:main}.
\subsection*{Acknowledgements}
The authors are supported by the Hungarian National Research, Development and Innovation Office -- NKFIH
grants K109240.
\bibliographystyle{plain}
\section{Introduction}
Gravitational lensing is the physical phenomenon in which light is deflected by gravitational potentials along the line of sight, which results in the distortion and magnification of distant galaxy images. This phenomenon can be split into two regimes, strong and weak gravitational lensing. For strong gravitational lensing, observed galaxy images are visibly distorted and multiple images of the same source galaxy can be produced. In the case of weak gravitational lensing (WL), where image distortions are very small, the underlying lensing signal can be recovered by statistically correlating distortions in many source galaxy images over extended patches of the sky \citep{Bacon2000,Kaiser2000,VanWaerbeke2000,Wittman2000}. In particular, WL is sensitive to moderate variations in the mass distribution, such as the large-scale structure (LSS) of the Universe, and allows us to map the cosmic mass content over a large range of scales, from kiloparsecs to hundreds of Megaparsecs \citep[see][for a review]{Bartelmann2001,Kilbinger2015}.
WL represents a powerful cosmological probe because it is an unbiased tracer of the cosmic LSS, whose properties and evolution are governed by the underlying cosmological model, including the matter content in the Universe and the law of gravity. Thus, WL can be used to constrain cosmological parameters within the standard $\Lambda\rm{CDM}$ paradigm, as well as models beyond $\Lambda\rm{CDM}$ \citep{Albrecht2006,LSST2012,Amendola2013,Weinberg2013}. In order to achieve this, one must construct statistics which efficiently capture the cosmological information embedded within WL maps. This can be achieved through two-point statistics such as the power spectrum or the two-point correlation function. One such example is the shear-shear correlation function which has been used to provide constraints on cosmological parameters within $\Lambda$CDM \citep[e.g.][]{Schneider2002,Semboloni2006,Hoekstra2006,Fu2008,Heymans2012,Kilbinger2013,Hildebrandt2017}. The convergence power spectrum and shear-shear correlation have also been used to test modified gravity theories beyond $\Lambda\rm{CDM}$ \citep[e.g.][]{Schmidt2008,Tsujikawa2008,Huterer2010}.
The power spectrum encapsulates all the information required to describe a Gaussian random field, which is an accurate representation of the matter distribution in the Universe at early times. However, the growth of LSS is governed by gravity which induces non-Gaussian features due to nonlinear evolution at late times, when the power spectrum becomes an incomplete description of the underlying matter field. Therefore, for non-Gaussian observables such as WL maps, it is important to develop complementary statistics beyond the power spectrum in order to maximise the cosmological information that can be extracted.
A popular and simple alternative WL statistic that is complementary to the WL power spectrum is the abundance of WL peaks \citep{Jain2000,Pen2003,Dietrich2010}, which are usually defined as the local maxima in the convergence field. The strongest WL peaks are typically produced by the most massive structures in the universe, such as galaxy clusters \citep{Yang2011,X.Liu2015,J.Liu2016}, and so the abundance of these WL peaks is directly sensitive to the non-Gaussian features of the cosmic web. Furthermore, low amplitude WL peaks have been shown to contain useful cosmological information \citep{Dietrich2010, Kratochvil2010, Yang2011}, making the study of weak lensing peaks crucial for cosmological constraints. This complementary information contained in the abundance of WL peaks has been exploited to improve cosmological constraints on $\Lambda$CDM parameters \citep{Shan2012,VanWaerbeke2013,Shan2014,X.Liu2015}, modified gravity \citep{Cardone2013,X.Liu2016,Higuchi2016,Shirasaki:2016twn,Peel2018}, dark energy \citep{Giocoli2018}, and the sum of neutrino masses \citep{Li2018}. Additional WL peak statistics, such as the two point correlation function, have also been shown to be sensitive to the $\Lambda\rm{CDM}$ parameters \citep{Davies2019}.
There are multiple other WL statistics beyond the power spectrum that have been utilised to constrain cosmology, and we briefly mention a few here. The first is Minkowski functionals, which can provide additional constraints on the dark energy equation of state parameter \citep{Kratochvil2012,Petri2013,Ling2015,Marques2018}. The WL bispectrum, which is sensitive to non-Gaussianity by definition, has been shown to be a useful statistic for future surveys \citep{Cooray2001,Rizzato2019,Munshi2019}, and can be used to improve parameter constraints, such as on neutrino masses \citep{Coulton2018}. And finally, WL minima, local minima in the convergence field, are less sensitive to baryonic effects, and offer certain advantages over WL peaks \citep{Coulton2019}. Every such novel statistic offers its own unique advantages, which makes the study of novel statistics crucial.
The goal of this paper is to explore the properties of another of such statistic, WL voids, first introduced
in \cite{Davies2018}. Typically voids are identified in the full 3D distribution of the LSS, as regions with low densities of matter or tracers. The void abundance, their radial profiles and shapes contain higher order clustering information (and hence non-Gaussian information; \citealt{White1979,Fry1986,Biswas2010,Bos2012,Lavaux2012}).
Most studies have focused on galaxy voids, which corresponds to underdensities in the galaxy distribution \citep[e.g.][]{Paz2013,Sutter2014,Cautun2016,Nadathur2016}. The statistics of galaxy voids contain complementary information to the galaxy power spectrum and baryonic acoustic oscillations \citep[e.g.][]{Pisani2015,Hamaus2016,Nadathur2019}. One useful void statistic is their WL profiles, which have been argued to represent a powerful cosmological probe \citep{Cai2015,Barreira2015,Falck2018}.
Compared with galaxy voids, WL voids have been shown to correspond to deeper line-of-sight projected underdensities, and thus they have a larger tangential shear signal \citep{Davies2018}. This potentially makes WL voids better cosmological probes than galaxy voids. This has been exemplified by \citet{Davies2019b} in the context of a class of modified gravity models, which can be considerably better constrained with 2D WL voids than with galaxy voids.
The total SNR of void lensing profiles depends on the number of voids and the amplitude of the lensing profile. Depending on how voids are identified, either fewer or more 2D voids can be obtained relative to 3D voids. However, most importantly, the 2D void lensing profiles have amplitudes roughly an order of magnitude larger than those of 3D voids \citep{Cautun2018,Davies2018}. This is the most important factor that contributes to higher SNR for 2D WL voids compared to 3D voids in the cosmic web.
\citet{Davies2018} focused on a particular class of WL voids, called VOLEs (VOids from LEnsing), where the voids are identified as circles devoid of weak lensing peaks. However, as for 3D voids, the definition and therefore the finding algorithm of 2D voids are not unique. There are multiple methods of finding underdensities, and thus multiple approaches to define voids \citep[e.g.][]{Colberg2008,Cautun2018}. This ambiguity can lead to systematic differences in void observables among the various void finders. However, this ambiguity can also be exploited by picking the void-finding algorithm that best suits the intended purpose. In our case, we want to maximise the amplitude of the WL void lensing profiles (or similarly the SNR of the WL void lensing profiles), whilst also limiting the impact of observational noise on the resulting void statistics. To this end, we will present WL void statistics for a range of void-finding algorithms, and discuss the limitations and advantages of each void finder.
Here, we compare seven different void definitions. These can be split into two classes. The first, and seemingly the most natural, approach consists of the methods which identify voids directly from the WL convergence field. In the following, we denote the convergence with $\kappa$. The simplest objects that can be considered as WL voids are the WL minima (i.e., local minima in the $\kappa$ field), where the deepest minima have been shown to correspond to large supervoids along the line of sight \citep{Chang2018}. More advanced void definitions include the watershed void finder (WVF; \citealt{Platen2007}), which identifies voids as the watershed basins of the convergence field, the spherical void finder (SVF; e.g., \citealt{Padilla2005}) applied to the convergence field (which we denote as SVF $\kappa$), which finds the largest circles whose mean $\kappa$ is below a given threshold, and troughs (denoted with Troughs $\kappa$; \citealt{Gruen2015}), which consist of fixed-size circles whose mean convergence is below a given threshold.
By construction, the number and properties of voids identified in the convergence field are sensitive to the lowest $\kappa$ values. These regions are the ones affected the most by galaxy shape noise (GSN). For this reason we consider a second class of void finders, which consists of methods that identify voids using a distribution of tracers, which we take to be the peaks of the convergence field (as we shall discuss, the peaks are less affected by GSN). We study three methods in this class: the `tunnel' algorithm \citep{Cautun2018} employed in \citet{Davies2018}, which identifies voids as the largest circles devoid of tracers, the SVF but now applied to the peak distribution (hereafter referred to as `SVF peak'), and troughs identified in the peak distribution (denoted with `Troughs peak'), which consist of fixed-size circles that enclose fewer than a given number of peaks. A detailed description of each WL void finder is presented in Section \ref{sec:void finders}.
The content of the paper is as follows: in Section \ref{sec:Theory} we present the relevant WL theory. The numerical simulations and galaxy shape noise prescription used in this study are presented in Section \ref{sec:Weak lensing maps}, along with the basic WL map statistics which will help the interpretation of results from different WL void finders. The void finders studied here are presented in Section \ref{sec:void finders}, and the statistics describing the WL voids associated to each WL void finder are presented and discussed in Section \ref{sec:void statistics}. We then compare useful properties of the WL void finders in Section \ref{sec:comparison}, with the discussion and conclusions in Section \ref{sec:discussion and conclusions}. We also present the correlation matrices of the tangential shear profiles for different void finders in Appendix \ref{app:correlation}. In Appendix \ref{app:WL voids in GSN maps} we test how WL voids behave in WL maps with only GSN, i.e., WL maps with no physical signal, and discuss how WL voids are sensitive to the physical information in WL maps.
\section{Theory}\label{sec:Theory}
For a gravitationally lensed image, the lens equation is given by
\begin{equation}
\pmb{\beta} = \pmb{\theta} - \pmb{\alpha} \, ,
\end{equation}
where $\pmb{\theta}$ is the observed position of the lensed image, $\pmb{\beta}$ is the true position of the source on the sky, and $\pmb{\alpha}$ is the deflection angle.
The deformation matrix \textbf{A} can be defined as
\begin{equation}
A_{ij} = \frac{\partial \beta_{i}}{\partial \theta_{j}} = \delta_{ij} - \frac{\partial \alpha_{i}}{\partial \theta_{j}} \, ,
\label{eq:amp mat}
\end{equation}
while, under the Born approximation and neglecting lens-lens coupling, the deflection angle can be expressed as the gradient of a 2D lensing potential, $\psi$, which is given by
\begin{equation}
\psi(\pmb{\theta},\chi) = \frac{2}{c^2} \int_0^{\chi} \frac{\chi - \chi'}{\chi \chi'} \Phi(\chi' \pmb{\theta},\chi') d\chi' \, .
\label{eq:lensing potential}
\end{equation}
Here, $\chi$ is the comoving distance to the source, $\chi'$ is the comoving distance to the lens, $c$ is the speed of light and $\Phi$ is the 3D lensing potential of the lens. In the absence of the anisotropic stress, which means that the two gravitational potentials in the Newtonian gauge are both equal to $\Phi$, $\Phi$ is related to the non-relativistic matter density contrast, $\delta$, through the Poisson equation
\begin{equation}
\nabla^2\Phi = 4 \pi G a^2 \bar{\rho} \delta \, ,
\label{eq:Poisson equation}
\end{equation}
where $G$ is the gravitational constant, $a$ is the scale factor, $\bar{\rho}$ is the mean matter density of the universe, and $\delta = \rho/\bar{\rho} - 1$. Eq.~\eqref{eq:lensing potential} shows that the WL signal is produced by the matter distribution along the entire line of sight from the source to the observer.
Using $\pmb{\alpha} = \pmb{\nabla}\psi$ allows Eq. \eqref{eq:amp mat} to be expressed in terms of $\psi$
\begin{equation}
A_{ij} = \delta_{ij} - \partial_{i} \partial_{j} \psi \, ,
\end{equation}
where partial derivatives are taken with respect to $\pmb{\theta}$. The $\pmb{A}$ matrix can be parameterised in terms of convergence, $\kappa$, and shear, $\gamma = \gamma_1 + i\gamma_2$, as
\begin{equation}
\pmb{A} =
\begin{pmatrix}
1 - \kappa -\gamma_1 & -\gamma_2\\
-\gamma_2 & 1-\kappa+\gamma_1
\end{pmatrix}
\, ,
\end{equation}
where the convergence and shear are related to the lensing potential via
\begin{equation}
\kappa \equiv \frac{1}{2} \nabla^2_{\pmb{\theta}} \psi \, ,
\label{eq:convergence}
\end{equation}
\begin{equation}
\gamma_1 \equiv \frac{1}{2}\left(\nabla_{\pmb{\theta}_1}\nabla_{\pmb{\theta}_1}-\nabla_{\pmb{\theta}_2}\nabla_{\pmb{\theta}_2}\right)\psi,
\quad\quad\quad
\gamma_2 \equiv \nabla_{\pmb{\theta}_1}\nabla_{\pmb{\theta}_2}\psi,
\label{eq:shear}
\end{equation}
where $\nabla_{\pmb{\theta}} \equiv (\chi')^{-1}\nabla$. Eq.~\eqref{eq:convergence} can be interpreted as a 2D Poisson equation, and so by substituting Eq.~\eqref{eq:Poisson equation} and Eq.~\eqref{eq:lensing potential} into Eq.~\eqref{eq:convergence}, the convergence can be expressed in terms of the matter overdensity
\begin{equation}
\kappa(\pmb{\theta},\chi) = \frac{3H_0^2\Omega_{\rm{m}}}{2c^2}\int_0^{\chi}\frac{\chi - \chi'}{\chi} \chi' \frac{\delta(\chi'\pmb{\theta},\chi')}{a(\chi')} d\chi' \, .
\label{eq:conv source}
\end{equation}
This shows that the observed WL convergence can be interpreted as the projected density along the line of sight, weighted by the lensing efficiency factor $(\chi-\chi')\chi'/\chi$.
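This projection along the line of sight can be discretised into a sum over lens planes. The sketch below is our own toy example (the cosmological constants match the simulation values quoted later, but the plane spacing, the scale-factor relation and the single overdense slab are invented for illustration, and a single source plane is assumed):

```python
import numpy as np

c = 299792.458   # speed of light [km/s]
H0 = 70.0        # Hubble constant [km/s/Mpc]
Omega_m = 0.279
prefac = 3.0 * H0**2 * Omega_m / (2.0 * c**2)  # [Mpc^-2]

def kappa_los(delta, chi, chi_s, a):
    """Discretised kappa = prefac * sum over lens planes of
    (chi_s - chi') chi' / chi_s * delta / a * dchi'."""
    efficiency = (chi_s - chi) * chi / chi_s   # lensing efficiency factor
    return prefac * np.sum(efficiency * delta / a * np.gradient(chi))

# Toy line of sight: one overdense slab halfway to a source at chi_s = 2300 Mpc.
chi = np.linspace(50.0, 2250.0, 45)
a = 1.0 / (1.0 + chi / 3000.0)              # crude a(chi), illustration only
delta = np.where(np.abs(chi - 1150.0) < 150.0, 5.0, 0.0)
print(kappa_los(delta, chi, 2300.0, a))
```

The efficiency factor peaks halfway between observer and source, which is why structures at intermediate distances contribute most to the convergence.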
In WL observations, the source galaxies do not occupy a single plane at a fixed distance from the observer. The observed catalogue of source galaxies has a probability distribution $n(\chi)$, and Eq. \eqref{eq:conv source} must be weighted by this source galaxy distribution in order to obtain $\kappa(\pmb{\theta})$ \citep[see, e.g.,][for a more detailed discussion]{Kilbinger2015}:
\begin{equation}
\kappa(\pmb{\theta}) = \int_0^{\chi} n(\chi') \kappa(\pmb{\theta},\chi') d\chi' \, .
\end{equation}
Finally, we can relate the radial convergence profile of an object $\kappa(r)$ to its radial tangential shear profile through
\begin{equation}
\gamma_{\rm{t}}(r) = \bar{\kappa}(< r) - \kappa (r)
\label{eq:gamma_t} \;,
\end{equation}
where
\begin{equation}
\bar{\kappa}(< r) = \frac{1}{\pi r^2}\int_0^{r} 2 \pi r' \kappa(r') dr'
\;
\end{equation}
is the mean enclosed convergence within radius $r$. Notice that here and throughout this paper we use $r$ rather than $\theta$ to represent the 2D distance from the void centre.
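A binned version of these two relations can be written down directly. The following sketch (our own, with an invented toy void-like profile) estimates $\bar{\kappa}(<r)$ with the trapezoidal rule, approximating the innermost disc by a constant:

```python
import numpy as np

def tangential_shear(r, kappa_r):
    """gamma_t(r) = kappa_bar(<r) - kappa(r) for a binned radial profile.

    r       : increasing bin-centre radii
    kappa_r : azimuthally averaged convergence kappa(r)
    """
    integrand = 2.0 * np.pi * r * kappa_r
    # int_0^r 2 pi r' kappa dr': innermost disc taken as constant kappa_r[0],
    # trapezoidal rule between the bin centres.
    inner = np.pi * r[0] ** 2 * kappa_r[0]
    steps = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r)
    cumulative = inner + np.concatenate(([0.0], np.cumsum(steps)))
    kappa_bar = cumulative / (np.pi * r ** 2)
    return kappa_bar - kappa_r

# Toy void-like profile: underdense centre, converging to the mean outside.
r = np.linspace(0.1, 2.0, 50)
kappa_r = -0.01 * np.exp(-r ** 2)
gamma_t = tangential_shear(r, kappa_r)
print(gamma_t[0], gamma_t[-1])
```

For such an underdense profile the resulting $\gamma_{\rm t}$ is negative at large radii, the characteristic signature of a void.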
In addition to the convergence profiles of WL voids, it is useful to also study the tangential shear profiles, since the tangential shear is the quantity directly measured by observations.
\section{Weak lensing maps}
\label{sec:Weak lensing maps}
In this section, we briefly outline the numerical simulations and the weak lensing maps used in this study, describe our prescription for including galaxy shape noise in our analysis, and discuss the relevant WL statistics that will inform the interpretation of our results from different void finders.
\subsection{Numerical simulations}
To study WL voids we use WL maps generated from N-body simulations taken from \cite{Takahashi2017} (herein \citetalias{Takahashi2017}) which provide publicly-available all-sky WL convergence maps. The WL maps are generated with the ray tracing algorithm from \citet{Hamana2015} \citep[see also][]{Shirasaki2015}. These WL convergence maps have a HEALPix resolution of $N_{\rm{side}}=16384$, and a source redshift of $z_{\rm{s}}=1$. The N-body simulations have a particle number of $2048^3$, and the particle mass varies with the box size ranging from $8.2\times10^8$ to $2.3\times10^{12}M_\odot$ (see Table 1 of \citetalias{Takahashi2017} for more details). To avoid repeating structures along the line-of-sight, \citetalias{Takahashi2017} constructed the light cone by stacking cubic simulation boxes of increasing size, with comoving sizes $L,2L,3L,\cdots,14L$, where $L=450h^{-1}$Mpc. These boxes are then duplicated 8 times and nested around the observer, where nests of larger boxes contain nests of smaller boxes at their centres. The matter distribution of these nested boxes is projected onto the nearest spherical shell centered on the observer, where the shells have radii of $ N \times 150 \ensuremath{~h^{-1}\mathrm{Mpc}}$ with $N=1,\cdots,14$ (see \citetalias{Takahashi2017} for illustration). The cosmological parameters used for these WL maps corresponds to a flat universe with $\Omega_{\rm{m}} = 0.279$, $\Omega_\Lambda=0.721$, $\sigma_8 = 0.820$ and $h = 0.7$, where $h = H_0 / 100$ km s$^{-1}$ Mpc$^{-1}$.
We split the all sky WL convergence maps into 192 \map{10} maps and then extend the map boundaries by a further 5 deg on all sides giving us 192 \map{20} maps with a resolution of $4096^2$ pixels. This approach results in maps where the central \map{10} region of each map does not overlap with the central \map{10} region of any of the remaining 191 maps. The use of the 192 smaller maps allows us to stick to the flat sky approximation. Void detection is carried out on the full \map{20} and voids with centres outside of the central \map{10} are discarded. Additionally, voids that are within twice their radius from the map boundary are discarded when calculating the void lensing profiles. This approach guarantees that void identification is not biased away from large voids due to boundary effects. For more details on our projection method, see Appendix A of \cite{Davies2019}.
\subsection{Galaxy shape noise}
\label{sec:GSN prescription}
The observed correlation in galaxy shapes induced by gravitational lensing is entirely dominated by the random shapes and orientations of galaxies, which are referred to as galaxy shape noise (GSN). As shown by \cite{VanWaerbeke2000b}, GSN can be modelled by adding random values drawn from a Gaussian distribution to each pixel of our simulated WL maps. The standard deviation of this distribution is given by
\begin{equation}
\sigma_{\rm{pix}}^2 = \frac{\sigma_{\rm{int}}^2}{2 \theta_{\rm{pix}}^2 n_{\rm{gal}} }
\;,
\label{eq: GSN gaussian}
\end{equation}
where $\sigma_{\rm{int}}$ is the intrinsic ellipticity dispersion of the source galaxies, $\theta_{\rm{pix}}$ is the width of each pixel, and $n_{\rm{gal}}$ is the measured source galaxy number density. We use $\sigma_{\rm{int}} = 0.4$ and $n_{\rm{gal}} = 40 $ arcmin$^{-2}$, which match $\textsc{lsst}$ specifications \citep{LSST2009}.
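This noise model amounts to drawing one Gaussian deviate per pixel. A minimal sketch (the map size and random seed are arbitrary choices of ours; the pixel area $\theta_{\rm pix}^2$ enters so that the expression is dimensionless):

```python
import numpy as np

sigma_int = 0.4      # intrinsic ellipticity dispersion
n_gal = 40.0         # source galaxy density [arcmin^-2], LSST-like
map_deg = 10.0       # map side length [deg]
n_pix = 2048         # pixels per side (for the sketch)

theta_pix = map_deg * 60.0 / n_pix                       # pixel width [arcmin]
sigma_pix = sigma_int / np.sqrt(2.0 * theta_pix**2 * n_gal)

rng = np.random.default_rng(0)
gsn = rng.normal(0.0, sigma_pix, size=(n_pix, n_pix))    # add this to kappa
print(round(sigma_pix, 4), round(gsn.std(), 4))
```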
The inclusion of GSN results in noise-dominated WL maps. Nevertheless, the noise can be suppressed by smoothing with a (usually Gaussian) filter with smoothing length $\theta_{\rm{s}}$. Using a small value for $\theta_{\rm{s}}$ allows a given WL statistic to probe the smallest scales and maximise the information gained; however, this also leaves significant contamination from GSN. Using larger $\theta_{\rm{s}}$ values reduces the GSN contamination, but suppresses the small-scale information within the WL maps. This means that a trade-off must be struck between sufficiently suppressing GSN and retaining WL information on small scales. Additionally, the analysis carried out here relies on WL maps generated from dark-matter-only simulations, which do not include baryonic physics. To suppress the differences between dark-matter-only and full hydrodynamic simulations, \cite{Weiss2019} found that very large smoothing scales must be used. Furthermore, \cite{J.Liu2015} found that constraints on cosmological parameters from WL peaks are improved when multiple smoothing scales are used. These imply that there is no single best choice of smoothing scale that fits all purposes when analysing WL statistics. In order to explore this fully, all statistics in this work will be shown for multiple smoothing scales, $\theta_{\rm{s}} = 1$ (blue), $2.5$ (orange), and $5$ (green) arcmin, both in the presence (dashed) and absence (solid) of GSN.
By presenting all statistics for multiple smoothing scales, with and without GSN, we will be able to identify the void finders that are the least affected by GSN. However, at this point the impact of GSN on cosmological parameter constraints from WL voids is not known. It is possible that the inclusion of GSN may improve cosmological parameter constraints from WL voids by increasing the signal-to-noise ratio (SNR) relative to the case where GSN is not included, as has been found with WL peaks \citep{Yang2011}. However, GSN could also bias or degrade the cosmological parameter constraints from WL voids. We leave such an investigation to future work and focus on identifying void finders that are the least affected by GSN in this paper.
For the analysis of WL peaks it is useful to define the amplitude of a given peak relative to the r.m.s.~fluctuation of the added GSN component of the WL field. As such $\nu$ is defined as
\begin{equation}
\nu \equiv \frac{\kappa}{\sigma_{\rm{GSN}}(\theta_{\rm{s}})} \, ,
\end{equation}
where $\sigma_{\rm{GSN}}(\theta_{\rm{s}})$ is the standard deviation of the smoothed GSN map (without contributions from the physical WL convergence map, i.e., noise only), which varies with the smoothing scale used to identify the WL peaks: $\sigma_{\rm{GSN}} = 0.0126, 0.0051$ and $0.0025$ for $\theta_{\rm{s}} = 1, 2.5$ and $5$ arcmin respectively.
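These $\sigma_{\rm GSN}(\theta_{\rm s})$ values can be reproduced approximately by smoothing a pure-noise map. The sketch below is our own (it uses FFT-based periodic smoothing, an assumed \map{10}, $4096^2$-pixel geometry, and a reduced map size for speed); for white noise the result should follow $\sigma_{\rm pix}\,\theta_{\rm pix}/(2\sqrt{\pi}\,\theta_{\rm s})$:

```python
import numpy as np

def gaussian_smooth(field, sigma_pixels):
    """Smooth a periodic 2D field with a Gaussian filter in Fourier space."""
    k = 2.0 * np.pi * np.fft.fftfreq(field.shape[0])
    kx, ky = np.meshgrid(k, k, indexing='ij')
    window = np.exp(-0.5 * sigma_pixels**2 * (kx**2 + ky**2))
    return np.fft.ifft2(np.fft.fft2(field) * window).real

theta_pix = 10.0 * 60.0 / 4096                 # arcmin per pixel (assumed map)
sigma_pix = 0.4 / np.sqrt(2.0 * theta_pix**2 * 40.0)
noise = np.random.default_rng(1).normal(0.0, sigma_pix, size=(1024, 1024))

sigma_gsn = {}
for theta_s in (1.0, 2.5, 5.0):                # smoothing scales [arcmin]
    sigma_gsn[theta_s] = gaussian_smooth(noise, theta_s / theta_pix).std()
    print(theta_s, round(sigma_gsn[theta_s], 4))
```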
\subsection{Convergence PDF and WL peak abundance}
In order to aid the interpretation of the various WL void statistics, we first present some simple statistics that describe the information given to the WL void finders. In the case of void finders applied directly to the convergence field, this is the WL convergence probability distribution function (PDF), shown in the left panel of Fig.~\ref{fig:kappa pdf and WL peak abundance}, and for the void finders that use weak lensing peaks as tracers, this is the WL peak abundance, shown in the right panel of Fig.~\ref{fig:kappa pdf and WL peak abundance}. Note that we define a WL peak as a pixel with a convergence value larger than that of its eight neighbours.
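This peak definition (a pixel whose value exceeds that of all eight neighbours) translates directly into code. A minimal sketch of our own, ignoring map borders:

```python
import numpy as np

def find_peaks(kappa):
    """Return (rows, cols) of pixels larger than all 8 neighbours.

    Border pixels are excluded for simplicity.
    """
    inner = kappa[1:-1, 1:-1]
    is_peak = np.ones(inner.shape, dtype=bool)
    nrow, ncol = kappa.shape
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            neighbour = kappa[1 + dr:nrow - 1 + dr, 1 + dc:ncol - 1 + dc]
            is_peak &= inner > neighbour
    rows, cols = np.nonzero(is_peak)
    return rows + 1, cols + 1  # back to full-map pixel coordinates

# Toy map: a single bright pixel is the only peak.
m = np.zeros((5, 5))
m[2, 2] = 1.0
print(find_peaks(m))  # (array([2]), array([2]))
```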
\begin{figure*}
\centering
\includegraphics[width=2\columnwidth]{Figures/kappa_pdf_and_WL_peak_abundance_2.pdf}
\caption{Left panels: the probability distribution function (PDF) of the WL convergence field, $\kappa$. Right panels: The differential abundance of WL peaks as a function of peak height $\nu$. The results shown here are obtained using a ${\sim}19,000\ensuremath{~\mathrm{deg}^2}$ area with the shaded regions denoting the one sigma error bars (most of the time the errors are smaller than the line thickness). The dashed and solid lines correspond to the WL convergence maps with and without GSN respectively. The colours correspond to different smoothing scales of the $\kappa$ field: $1.0$ (blue), $2.5$ (orange) and $5.0$ (green) arcmin. The relative differences between the cases with and without GSN are shown in the lower sub-panels.
\newline
}
\label{fig:kappa pdf and WL peak abundance}
\end{figure*}
The left panel of Fig.~\ref{fig:kappa pdf and WL peak abundance} shows the WL convergence PDF for the three smoothing scales (1, 2.5 and 5 arcmin), for cases with and without the inclusion of GSN (dashed and solid). The convergence PDF is well described by a log-normal distribution convolved with a Gaussian when GSN is included \citep{Clerkin2017}. The different colours show that as the smoothing scale increases, the width of the distribution decreases, suppressing the non-Gaussian structures within the WL map, and the agreement between the cases with and without GSN improves. The relative differences in the convergence PDF between the no-GSN and the GSN-added cases are larger for $\kappa<0$ than for $\kappa>0$, as can be seen more clearly in the lower panel. Therefore, in order to find agreement between WL void statistics with and without the inclusion of GSN, we will likely require larger smoothing scales than are needed to obtain the same agreement for WL peak statistics. Finally, for a smoothing scale of 1 arcmin (blue curves), the inclusion of GSN introduces a significant number of negative convergence values that are much lower than the lowest convergence values found in the WL maps without GSN. This indicates that 1 arcmin smoothing might be too small for void finders applied directly to the convergence field to yield consistent results before and after GSN is added. However, the agreement between the two cases improves considerably once the smoothing scale is increased to 2.5 or 5 arcmin.
The differential WL peak abundances identified from WL maps with and without GSN smoothed over the three smoothing scales (1, 2.5 and 5 arcmin) are displayed in the right panel of Fig.~\ref{fig:kappa pdf and WL peak abundance}. By adding GSN, the peak of the distribution is shifted to the right, and more peaks are created. The addition of these spurious peaks from GSN will lead to the identification of spurious voids for void finders that find voids in the WL peak distribution. The differences between WL peak catalogues for maps with and without GSN are suppressed as the smoothing scale increases, but this also decreases the overall abundance of the WL peaks. It can also be seen that, as $\kappa$ increases, the differences between the maps with and without GSN decrease. This is because the largest WL peaks are less affected by GSN, since the physical peak signal dominates over the noise.
The right panel of Fig.~\ref{fig:kappa pdf and WL peak abundance} also shows that there are many WL peaks with negative convergence values, which are local maxima in underdense regions of the WL convergence maps. This is as expected, since most regions have $\kappa<0$ (see left panel in Fig.~\ref{fig:kappa pdf and WL peak abundance}) and thus many local maxima will have heights $\kappa<0$. As we will discuss in Section~\ref{sec:void finders}, the void finders based on the peak distribution identify the voids as the regions that are largely devoid of peaks. Including all the WL peaks in our analysis can raise two problems. Firstly, it reduces the contrast in peak number density between overdense and underdense regions, and thus makes it difficult to robustly identify the underdense regions. Secondly, the locations and heights of $\kappa\lesssim0$ peaks are much more affected by GSN than those of the high $\kappa$ peaks. This defeats the main reason for identifying voids using the WL peaks, which is to mitigate the effect of GSN on the WL void population. Therefore, to deal with these two issues, we proceed by imposing a peak height cut on the WL peak catalogues, and remove all peaks below a given threshold. This adds a free parameter to the analysis and thus, for the void finders that use WL peaks as tracers, we will present results for peak catalogues with peak heights of $\nu > 2$ and $\nu > 4$.
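The peak-height cut can be sketched as below, using the $\sigma_{\rm{GSN}}(\theta_{\rm{s}})$ values quoted earlier; the function name and interface are illustrative only.

```python
import numpy as np

# sigma_GSN for each smoothing scale in arcmin (values quoted in the text)
SIGMA_GSN = {1.0: 0.0126, 2.5: 0.0051, 5.0: 0.0025}

def select_peaks(peak_kappa, theta_s, nu_min):
    """Keep only peaks with height nu = kappa / sigma_GSN(theta_s) > nu_min."""
    peak_kappa = np.asarray(peak_kappa, dtype=float)
    nu = peak_kappa / SIGMA_GSN[theta_s]
    return peak_kappa[nu > nu_min]
```

For example, with $\theta_{\rm{s}} = 2.5$ arcmin a peak of $\kappa = 0.02$ has $\nu \approx 3.9$ and survives a $\nu > 2$ cut.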
\section{Void finders}
\label{sec:void finders}
In this section, we describe the implementation of each WL void finder used in this paper. These void finders were originally developed to identify voids in a 3D galaxy or matter distribution, which means that some must be modified slightly to identify 2D WL voids. In each case we try to minimise the extent of the modification so that the interpretation of results can remain as similar as possible to the interpretation of 3D voids. Furthermore, where possible, we apply each void finder to both the WL peak distribution and the WL convergence field to see which approach provides the most information (in terms of the signal-to-noise ratio, SNR) and which is least affected by GSN. Finally, all void identification is carried out on the full \map{20} maps, while the voids whose centres reside outside of the central \map{10} are discarded. This ensures that the void identification process is not contaminated by edge effects, and that we do not bias our results away from large voids, since larger voids are more likely to intersect the map boundary.
\subsection{Minima}
Weak lensing minima are the simplest objects that can be interpreted as WL voids, corresponding to the most underdense lines of sight within the WL convergence maps. Here we define a WL minimum as a local minimum in the convergence field, i.e. a pixel whose $\kappa$ value is lower than those of its eight neighbours. We identify WL minima in the smoothed convergence field and discard all minima with a positive $\kappa$ value, because a positive $\kappa$ value indicates that the minimum and its neighbours reside within a local overdensity. This allows us to remain consistent with the general definition of a WL void, which is an underdense patch of the WL convergence map.
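The minima selection, including the $\kappa < 0$ cut, mirrors the peak definition with the inequality reversed. A minimal NumPy sketch (illustrative, not the code used in this work):

```python
import numpy as np

def find_minima(kappa):
    """Local minima (pixel below all eight neighbours) with kappa < 0."""
    c = kappa[1:-1, 1:-1]
    mask = np.ones_like(c, dtype=bool)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if (di, dj) != (0, 0):
                mask &= c < kappa[1 + di:kappa.shape[0] - 1 + di,
                                  1 + dj:kappa.shape[1] - 1 + dj]
    mask &= c < 0.0  # discard minima that sit inside local overdensities
    rows, cols = np.nonzero(mask)
    return rows + 1, cols + 1, c[rows, cols]
```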
\subsection{Troughs}
Troughs \citep{Gruen2015} are underdense circles of a fixed size. Typically troughs are identified by randomly placing circles of that fixed size in a projected galaxy field and discarding the circles that contain the most galaxies, leaving only those that contain the least galaxies. Here we adapt the trough algorithm and apply it to both the WL peak field and the WL convergence field.
For troughs applied directly to the convergence field (Troughs $\kappa$), we first place $5000$ circles\footnote{We have also run the trough algorithm with 10 times as many randomly placed circles, and find that this does not change the SNR values of the trough tangential shear profiles. Therefore in this paper we stick to 5000.} randomly such that their centres fall into the central \map{10} of the WL convergence map. For each of these circles, the mean enclosed convergence is calculated. The trough catalogue consists of the $20\%$ of the circles with the lowest mean enclosed convergence. The above procedure is carried out for circles with radii of $10$, $20$ and $30$ arcmin, which correspond to the typical values used in previous studies \citep[e.g.,][]{Barreira2017,Gruen2018}.
For troughs identified in the WL peak distribution (Troughs peak), the same steps are repeated except that, rather than calculating the mean enclosed convergence, we count the number of enclosed peaks, and keep the $20\%$ of circles which contain the fewest peaks. Again, these steps are repeated for circles of radii of $10$, $20$ and $30$ arcmin.
The number of randomly placed circles and the upper fraction of circles to be discarded are both free parameters. However, to keep the analysis in this work simple we do not vary these parameters, and their values above have been chosen to match the typical abundances of WL voids produced by the other algorithms for a fair comparison.
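The trough selection on the convergence field (randomly placed circles, keeping the bottom $20\%$ by mean enclosed convergence) can be sketched as follows on a pixelised map; the radius is given in pixels and the implementation is illustrative rather than the one used in this work.

```python
import numpy as np

def find_troughs(kappa, radius_pix, n_circles=5000, keep_frac=0.2, rng=None):
    """Randomly place circles on a pixelised kappa map and keep the
    keep_frac fraction with the lowest mean enclosed convergence."""
    rng = np.random.default_rng(rng)
    ny, nx = kappa.shape
    # restrict centres so that the full circle fits on the map
    cy = rng.integers(radius_pix, ny - radius_pix, n_circles)
    cx = rng.integers(radius_pix, nx - radius_pix, n_circles)
    yy, xx = np.mgrid[-radius_pix:radius_pix + 1, -radius_pix:radius_pix + 1]
    disk = yy**2 + xx**2 <= radius_pix**2
    means = np.array([kappa[y - radius_pix:y + radius_pix + 1,
                            x - radius_pix:x + radius_pix + 1][disk].mean()
                      for y, x in zip(cy, cx)])
    keep = np.argsort(means)[:int(keep_frac * n_circles)]
    return cy[keep], cx[keep], means[keep]
```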
\subsection{Watershed void finder (WVF)}
The watershed void finder \citep[][WVF]{Platen2007} defines voids as the watershed basins, which are analogous to water basins formed from rain running down a landscape. To identify the watershed basins, each pixel of the convergence map is connected to its neighbour with the lowest density, and this process is repeated for successive neighbours until a local minimum is reached. All pixels connected to the same minimum then belong to the same watershed basin. This results in ridges of local overdensities along the basin boundaries.
To mitigate the impact of GSN, we compare the average amplitude of each basin boundary with the amplitude of their corresponding minima. If the absolute difference in amplitude between the two is less than $h_{\rm{boundary}}$, we merge that basin with its neighbour, which creates a single larger basin. In this analysis we choose $h_{\rm{boundary}} = \sigma_{\rm{GSN}} / 2$, which allows watershed basins that have been artificially split by spurious structures introduced by GSN to be re-merged. Adding the basin merge criterion means that $h_{\rm{boundary}}$ is an additional free parameter in the WVF algorithm. We have tested the impact of varying $h_{\rm{boundary}}$ and find that it has little impact on our results. We choose $h_{\rm{boundary}} = \sigma_{\rm{GSN}} / 2$ as a compromise between mitigating the impacts of GSN on watershed basins and over merging, which would on average flatten out void lensing profiles.
This algorithm generates irregular basins which span the entire area of the WL convergence map. In order to calculate the stacked lensing profiles of the voids, we must define their void centres and radii using the information of the corresponding basins. We take the void centres as the area-weighted barycentre of all the pixels in each basin and define an effective void radius of $R_{\rm{v}} = (A/\pi)^{1/2}$ (which is the radius of a circle with the same area $A$ as the irregular basin) when calculating the WVF lensing profiles.
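Given a label map assigning each pixel to a watershed basin, the void centres and effective radii defined above, $R_{\rm{v}} = (A/\pi)^{1/2}$, can be computed as in this illustrative sketch (the basin labelling itself is assumed to come from a watershed routine and is not shown):

```python
import numpy as np

def basin_voids(labels, pix_arcmin):
    """Barycentre (arcmin) and effective radius R_v = sqrt(A/pi)
    for each watershed basin in a 2D integer label map."""
    voids = []
    for lab in np.unique(labels):
        ys, xs = np.nonzero(labels == lab)
        area = ys.size * pix_arcmin**2            # basin area in arcmin^2
        voids.append((ys.mean() * pix_arcmin,     # barycentre y
                      xs.mean() * pix_arcmin,     # barycentre x
                      np.sqrt(area / np.pi)))     # effective radius
    return voids
```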
When the watershed algorithm is applied to the galaxy distribution to find 3D voids in the LSS, the galaxies are first used as tracers to construct an estimate of the underlying density field using a Delaunay tessellation field estimation (DTFE) \citep{Schaap2000,Cautun2011}. This in principle means that WL peaks could also be used to identify WL voids with the watershed algorithm, by using the WL peaks to construct a WL peak density field. However, we find that the usual DTFE approach is insufficient, since it results in WL voids identified from the WL peak distribution that bear little correlation to underdensities in the original convergence map. While it may be possible to improve this procedure by using information about the WL peak heights in the DTFE reconstruction, this is beyond the scope of this work, and we thus instead choose to only study voids identified by applying the watershed algorithm to the WL convergence field.
\subsection{Spherical void finder (SVF)}
The spherical void finder (SVF) \citep[e.g.,][]{Padilla2005} identifies underdense spherical regions in the galaxy distribution, by growing spheres around regions that are empty of galaxies. When applied to find WL voids, the SVF identifies circular regions in the WL convergence or peak fields that are below a specified `density' threshold. In practice, in order to allow SVF voids to `grow' as large as possible, circles are shrunk from some arbitrarily large size around candidate void centres until the threshold is met.
For the SVF applied directly to the WL convergence map (SVF $\kappa$), local minima are considered as prospective void centres. Starting from a large radius, circles are then shrunk around these void centres until the mean enclosed convergence reaches a predefined threshold, $\kappa_{\rm{thresh}}$. Here, larger values of $\kappa_{\rm{thresh}}$ result in larger voids, and note that we require $\kappa_{\rm{thresh}}$ to be negative so that the SVF finder identifies regions that enclose underdense sections of the convergence map. We have tested a range of values for $\kappa_{\rm{thresh}}$, and as a compromise between identifying the most underdense regions and allowing voids to grow as large as possible, we set $\kappa_{\rm{thresh}} = -0.01$ in this analysis. Once all prospective voids are shrunk until their mean convergence is $\kappa_{\rm{thresh}}$, we proceed to remove the objects that overlap significantly. That is, if the distance between any two prospective voids is less than half the sum of their radii, we discard the smaller of the two. Finally, we remove all voids with radii less than twice the smoothing scale that is applied to the convergence map ($\theta_{\rm{s}}$) to reduce the number of spurious voids.
For the SVF applied to the WL peak distribution (SVF peak), a Delaunay triangulation of the peak field is performed, and the circumcentres associated with each triangle are considered as potential void centres. Starting from a large radius, circles around those centres are shrunk, until the mean enclosed WL peak number density reaches a predefined fraction of the mean WL peak number density. We find that the resulting void catalogues are somewhat insensitive to the exact choice of this threshold value, and therefore pick $40\%$ as a good compromise between allowing SVF voids to grow as large as possible and ensuring these voids correspond to underdense regions of the WL convergence maps. Next, we randomly shift the void centre positions within the void radius, in order to check whether the void can `grow' a bit more (i.e., reach the density threshold at a slightly larger radius). Finally, if the centres of two voids are separated by less than half of the sum of their radii, we remove the smaller of the two.
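The circle-shrinking step of the SVF applied to the convergence field can be sketched as below for a single candidate centre; radii are in pixels, the overlap removal and minimum-size cut are omitted, and the implementation is illustrative only.

```python
import numpy as np

def shrink_void(kappa, centre, r_max_pix, kappa_thresh=-0.01):
    """Shrink a circle around `centre` (row, col) until the mean enclosed
    convergence first drops to kappa_thresh; returns the radius in pixels,
    or None if even the smallest circle stays above the threshold."""
    cy, cx = centre
    yy, xx = np.indices(kappa.shape)
    dist2 = (yy - cy)**2 + (xx - cx)**2
    for r in range(r_max_pix, 0, -1):
        enclosed = kappa[dist2 <= r * r]
        if enclosed.mean() <= kappa_thresh:
            return r
    return None
```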
\subsection{Tunnels}
The tunnel algorithm \citep{Cautun2018} identifies the largest circles in a 2D tracer catalogue that are empty of tracers. Initially, a Delaunay tessellation with WL peaks as the vertices is constructed. This produces a tessellation of Delaunay triangles, with a WL peak at the corner of each triangle, and no WL peaks within the triangles. Each Delaunay triangle is then used to construct its corresponding circumcircle, which is the circle that resides directly on top of the Delaunay triangle, with the three vertices of the triangle falling on the circumcircle's circumference. This unique tessellation, by definition, produces circles which do not enclose any WL peaks. To avoid highly overlapping objects, we discard any tunnels whose centres reside within a larger tunnel. Recently, \citet{Davies2018} have studied tunnels in the context of WL maps and \citet{Davies2019b} have shown that they are better at constraining a modified gravity model than tunnels identified in the projected galaxy distribution.
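The circumcircle construction at the heart of the tunnel algorithm can be sketched with `scipy.spatial.Delaunay`; the pruning of tunnels whose centres fall inside larger tunnels is omitted, and the implementation is a sketch rather than the authors' code.

```python
import numpy as np
from scipy.spatial import Delaunay

def tunnels(points):
    """Circumcircles of the Delaunay triangulation of 2D peak positions;
    by construction each circle is empty of peaks.  Returns (centres, radii)
    without the extra pruning of highly overlapping tunnels."""
    tri = Delaunay(points)
    centres, radii = [], []
    for simplex in tri.simplices:
        a, b, c = points[simplex]
        # circumcentre from the perpendicular-bisector equations
        d = 2.0 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1])
                   + c[0] * (a[1] - b[1]))
        ux = ((a @ a) * (b[1] - c[1]) + (b @ b) * (c[1] - a[1])
              + (c @ c) * (a[1] - b[1])) / d
        uy = ((a @ a) * (c[0] - b[0]) + (b @ b) * (a[0] - c[0])
              + (c @ c) * (b[0] - a[0])) / d
        centres.append((ux, uy))
        radii.append(np.hypot(ux - a[0], uy - a[1]))
    return np.array(centres), np.array(radii)
```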
\subsection{Visualisation}
\begin{figure*}
\centering
\includegraphics[width=2\columnwidth]{Figures/visualisation.pdf}
\caption{A visualisation of the weak lensing void finders discussed in this work. The convergence field is shown by the background colour map in each panel, with the convergence values illustrated by the colour-bar at the top of the figure. Here the brightest (orange) colours correspond to high $\kappa$ values and the darkest (purple) colours show low $\kappa$ regions. The results presented here are for a Gaussian smoothing scale, $\theta_{\rm{s}} = 2.5$ arcmin. The top eight panels are for WL maps with no GSN (1A to 1H), and the bottom eight panels are for WL maps with GSN (2A to 2H). Panels 1A and 2A show only the convergence fields as a reference point. The panels 1B to 1E and 2B to 2E show voids identified in the convergence field and correspond to: WVF, troughs and SVF applied to the $\kappa$ field, and minima. The remaining panels (1F to 1H and 2F to 2H) show voids identified using WL peaks with height, $\nu > 2$, and correspond to: tunnels, troughs and SVF applied to the peak distribution. Only the central \map{6} of the convergence maps are shown to avoid overcrowding.
}
\label{fig:visualisation}
\end{figure*}
Fig.~\ref{fig:visualisation} shows a visualisation of each of the void finders studied in this work. The eight panels in the top section (1A -- 1H) show results for WL maps without GSN and the eight panels in the bottom section (2A -- 2H) are results for WL maps with GSN. Each panel corresponds to a different void finder, apart from the first panels of each section (panels 1A and 2A), which show only the WL convergence field for reference. Only the central \map{6} of one of the maps are shown, to avoid overcrowding whilst still displaying a fair sample of each void catalogue. The results shown here are for a smoothing scale of $\theta_{\rm{s}} = 2.5$ arcmin and for peak catalogues with WL peak heights of $\nu > 2$ (where applicable). The top row of each section (panels 1A - 1D and 2A - 2D) corresponds to voids identified in the WL convergence maps and the bottom rows (panels 1E - 1H and 2E - 2H) correspond to voids identified in the WL peak distribution. The WL peaks are shown by the green points, while the WL minima are shown by the cyan points.
Panels 1B and 2B of Fig.~\ref{fig:visualisation} show the WVF voids identified in the WL convergence map. These voids tend to avoid the more overdense patches of the convergence map, since these more overdense regions reside at the watershed basin boundaries. The WVF voids occupy most of the area of the WL convergence map, which is due to every pixel within the map being assigned to a watershed basin. In some cases, the largest voids enclose smaller voids, as can be seen towards the top left of Panel 1B. The overlap is an artefact of illustrating the WVF as circles when actually these voids have highly non-circular and non-overlapping shapes \citep[e.g. see][]{Platen2007,Cautun2016}. By adding GSN, the size of the WVF voids is reduced and their abundance is increased.
Troughs identified directly on the convergence map are shown in Panels 1C and 2C, where it can be seen that these troughs trace only the most underdense regions of the convergence maps, which is by construction. The consequence of this algorithm is that many troughs significantly (or nearly entirely) overlap with other troughs, with very few troughs existing in isolation from other troughs. This will lead to highly correlated information in the statistics describing these troughs, as will be seen in their correlation matrices in Appendix \ref{app:correlation}. Panel 2C shows how adding GSN can change the spatial distribution of the troughs, although the degree of overlap between neighbouring troughs remains similar to the no-GSN case in panel 1C.
Panels 1D and 2D show the SVF voids identified in the convergence field. As can be seen there, the abundance of these voids is significantly lower than for the other void finders, and the voids that are found are predominantly small. However, these voids trace the underdensities of the convergence map reasonably well, as can be seen by their dark interiors. There are more voids in panel 2D, indicating that GSN increases the abundance of these voids.
The WL minima are displayed in Panels 1E and 2E. We remind the reader that we only study underdense minima, i.e., $\nu < 0$, and so only these minima are shown in the figure. These panels illustrate that the WL minima are slightly different from the typical WL void definition used in this work, since they have no size or radius, which has the advantage of simplicity. In later sections we will discuss the abundance of WL minima as a function of their amplitude, rather than as a function of their size, and the abundance of WL minima has been shown to provide complementary cosmological information to the WL peak abundance \citep{Coulton2019}. We also discuss, for the first time, the potential for the radial lensing profiles of WL minima to be used in a cosmological analysis. There are more WL minima in panel 2E compared to 1E, indicating that there are more spurious minima created by GSN than physical minima removed by GSN.
A visualisation of the tunnel algorithm is shown in Panels 1F and 2F. The WL peaks used to identify the tunnels are shown by the green points, highlighting that the tunnels do not enclose any WL peaks, and that the peaks only reside on the void boundaries. Like the WVF, the tunnels occupy most of the area of the convergence map, however the tunnel algorithm identifies a wider range of void sizes, producing more large voids than those identified in the convergence maps. Smaller tunnels tend to cluster more than the larger ones, with the former appearing more in the overdense parts of the convergence map. Also similar to the WVF voids, panel 2F contains more tunnels which are on average smaller than the tunnels in panel 1F. This is because the spurious WL peaks created by GSN break up the larger tunnels in panel 1F into the multiple smaller tunnels seen in panel 2F.
Panels 1G and 2G show the troughs identified in the WL peak distribution. The troughs identified in this way still have a significant degree of overlap, however the overlap in this case is much weaker than for the troughs identified in the convergence maps. There are underdense patches in which no troughs have been placed, whilst many overlapping troughs can be seen in other regions. This highlights the inefficiency of the trough algorithm when applied to a WL peak distribution. This may be solved by increasing the number of troughs that are placed, however this will also increase the number of significantly overlapping troughs. As with the troughs applied to the convergence map, the troughs identified in the WL peak distribution trace different regions of the WL maps when GSN is added, and the degree of overlap between neighbouring troughs appears similar in both panels 1G and 2G.
Finally, Panels 1H and 2H show the SVF voids identified in the WL peak distribution. This algorithm produces the largest voids of all void finders and, similar to the WVF and tunnel algorithms, populates most of the area of the convergence map with voids. Also similar to the tunnels, the largest voids are in underdense regions and the smaller voids cluster in the overdense patches. It is interesting to note that in some cases, the tunnels and SVF identify the same voids in the WL peak distribution, as can be seen towards the top left of panels 1F and 1H. Panel 2H shows that the SVF voids identified in the WL peak distribution respond to GSN in the same way as tunnels and WVF, where these voids become smaller and more abundant in the presence of GSN.
\section{Void statistics}
\label{sec:void statistics}
In this section we discuss the statistics of each of the seven void populations analysed here and study how the physical signal is affected by GSN in each case. We also investigate the impact of varying the smoothing scale to quantify how this mitigates the impact of GSN. For each void type we present the abundance, convergence profiles and tangential shear profiles. Then, in Section \ref{sec:comparison}, we will compare the different void populations and investigate which type of void is least affected by GSN while giving rise to the strongest tangential shear signature.
\subsection{Minima}
\label{sec:results:minima}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figures/minima_stats_2.pdf}
\vskip -.2cm
\caption{The statistics describing the properties of WL minima depicted in panel E of Fig. \ref{fig:visualisation}. Solid lines show the properties of WL minima identified in WL maps with no GSN, while dashed lines show the properties of WL minima identified in WL maps with GSN. Different colours correspond to different smoothing scales applied to the convergence maps before identifying the minima, with blue, orange and green for $\theta_{\rm{s}}=1$, $2.5$ and $5$ arcmin respectively. One sigma standard error bars corresponding to the uncertainties associated with our analysis (which makes use of a 19200 deg$^2$ sky area) are given by the shaded coloured regions around each curve, however in most cases these error bars are a similar thickness to the curves. The top panel shows the WL minima abundance as a function of their WL convergence amplitude $\kappa$, and the shaded grey region indicates the minima that are used to calculate the lensing profiles. The middle panel shows the radial WL convergence profiles of the WL minima out to 12 arcmin, and the bottom panel shows the corresponding WL tangential shear profiles. The lower sub-panel in the top (bottom) panel shows the
relative (absolute) difference between the minimum abundances (tangential shear profiles) measured in WL maps with and without GSN.
}
\label{fig:minima statistics}
\end{figure}
Fig.~\ref{fig:minima statistics} shows the statistics of the WL minima depicted in Panels 1E and 2E of Fig.~\ref{fig:visualisation} with and without GSN (dashed and solid lines respectively) for three smoothing scales, $1$, $2.5$ and $5$ arcmin (blue, orange and green respectively).
The top panel shows the differential WL minima abundance as a function of amplitude $\kappa$. The distribution peaks at $\kappa\sim-0.01$, with the peak shifting closer to $0$ as the smoothing scale increases. The distributions are also positively skewed, highlighting the non-Gaussian properties of WL minima. When GSN is included, the abundance of minima is significantly contaminated, especially for small smoothing scales. For $\theta_{\rm{s}} = 1$ arcmin, GSN introduces a large number of spurious negative minima, while minima with such low negative amplitudes do not exist in the no GSN case. This is shown by the steep cutoff at $\kappa = -0.03$ for the no GSN case, while the minima abundance is still steadily decreasing below $\kappa = -0.03$ in the GSN-added case. A non-negligible number of spurious positive minima are also added by GSN, however this effect is less extreme than for negative minima. The creation of spurious minima due to GSN is suppressed as the smoothing scale increases, however even with $\theta_{\rm{s}} = 5$ arcmin, there is still a noticeable amount of spurious negative minima. Comparing with the right panel of Fig.~\ref{fig:kappa pdf and WL peak abundance}, it can be seen that for each smoothing scale the WL minima are significantly more impacted by GSN than the WL peaks.
Lensing profiles are calculated from minima with amplitudes $\kappa < 0$, as indicated by the shaded grey region in the top panel. The middle panel shows the mean stacked radial convergence profiles around the WL minima out to $12$ arcmin. For $\theta_{\rm{s}}=1$ arcmin, by comparing the blue solid and dashed lines, it can be seen that the addition of GSN artificially boosts the depth of the convergence profile at $r\sim0$ by over a factor of $3$. This is caused by the creation of a significant number of spurious minima with unphysically deep negative $\kappa$ values, as shown by the minima abundance in the top panel. For the GSN case, the minima convergence profile briefly becomes positive between $\sim1.5$ and $3$ arcmin, which is possibly due to the creation of spurious (negative) minima in local overdensities from GSN. In contrast, for the no GSN case, the convergence profile gradually approaches the mean background value of $\kappa = 0$. For larger smoothing scales, similar behaviour is still present, with the $\kappa$ amplitude at $r = 0$ still artificially boosted by GSN, however this boost decreases with increasing smoothing scale.
The bottom panel shows the tangential shear profiles around the WL minima, $\gamma_{\rm{t}}(r)$, calculated from $\kappa(r)$ using Eq. \eqref{eq:gamma_t}. As the smoothing scale increases, the peak of the tangential shear profile moves to outer radii, whereas the inclusion of GSN shifts the peak to inner radii relative to the no GSN case. The difference in amplitude between the no GSN and GSN cases for the tangential shear profiles is smaller than for the convergence profiles, but significant contamination due to GSN still remains. For the no GSN maps, the height of the peak of the tangential shear profiles is somewhat insensitive to the smoothing scale, whilst increasing $\theta_{\rm{s}}$ quickly suppresses the peak in the tangential shear profiles for the GSN-added maps.
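For an azimuthally averaged profile, the conversion from $\kappa(r)$ to $\gamma_{\rm{t}}(r)$ can be sketched assuming the standard relation $\gamma_{\rm{t}}(r) = \bar{\kappa}(<r) - \kappa(r)$, where $\bar{\kappa}(<r)$ is the mean convergence enclosed within radius $r$; this is a sketch of that relation, not necessarily the exact numerical scheme used in this work.

```python
import numpy as np

def tangential_shear(r, kappa_r):
    """Tangential shear from an azimuthally averaged convergence profile,
    gamma_t(r) = kappa_bar(<r) - kappa(r), with the enclosed mean computed
    by trapezoidal integration: kappa_bar = (2/r^2) int_0^r kappa(r') r' dr'."""
    r = np.asarray(r, dtype=float)
    kappa_r = np.asarray(kappa_r, dtype=float)
    integrand = kappa_r * r
    cum = np.concatenate(([0.0], np.cumsum(
        0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))))
    kappa_bar = np.where(r > 0,
                         2.0 * cum / np.maximum(r, 1e-30)**2,
                         kappa_r)  # at r = 0 the enclosed mean equals kappa(0)
    return kappa_bar - kappa_r
```

As a sanity check, a constant convergence profile yields zero tangential shear at all radii.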
These statistics in Fig.~\ref{fig:minima statistics} show that the WL minima are significantly affected by GSN and are more susceptible to GSN than WL peaks.
\subsection{Troughs in the convergence map}
\label{sec:results:troughs_kappa}
\begin{figure*}
\centering
\includegraphics[width=2\columnwidth]{Figures/Troughs_convField_stats_2.pdf}
\caption{The statistics describing troughs identified directly in the convergence field. For the meaning of line colours and line types see the legend and, for more details, the caption of Figure \ref{fig:minima statistics}. The top row shows the PDF of the mean enclosed convergence within all randomly placed circles. The shaded grey region indicates the circles we define as troughs, that is the ones with a mean enclosed convergence in the bottom $20\%$ of all circles
(here we show the threshold for maps without GSN and for $\theta_{\rm{s}} = 2.5$ arcmin; the exact threshold depends slightly on smoothing scale and if GSN is included). The middle row shows the mean convergence profiles and the bottom row shows the mean tangential shear profiles. The three columns correspond to troughs of different sizes: $10$ (left), $20$ (centre) and $30$ (right) arcmin. The lower sub-panels in the top (bottom) row shows the
relative (absolute) difference between the trough $\kappa$ PDFs (tangential shear profiles) measured in WL maps with and without GSN.}
\label{fig:Trough ConvField statistics}
\end{figure*}
Fig.~\ref{fig:Trough ConvField statistics} shows the statistics of troughs identified directly in the convergence field. The top row shows the probability distribution function (PDF) of the mean enclosed convergence within all randomly placed circles, and the three columns (from left to right) are for trough radius $R_{\rm{v}}$ equal to $10$, $20$ and $30$ arcmin respectively. The shaded grey regions show the circles with a mean enclosed convergence in the bottom $20\%$ of all circles, which are the troughs that are used to calculate the trough lensing profiles. For a fixed trough radius, the $\kappa$ value above which circles are discarded depends on the smoothing scale and whether or not the WL maps include GSN. For simplicity the shaded grey regions shown here are for $\theta_{\rm{s}} = 2.5$ arcmin in WL maps without GSN.
Increasing the smoothing scale $\theta_{\rm{s}}$ decreases the width of the PDFs, and improves the agreement between the no GSN and GSN maps. As with the minima abundances, the largest differences between the no GSN and GSN maps are found at the negative-$\kappa$ regions of the PDF. As the trough radius increases, the agreement between the no GSN and GSN maps improves, and so does the agreement between different smoothing scales. These PDFs are all positively skewed indicating that the troughs identify more underdense regions than overdense regions.
The middle row shows the mean stacked convergence profiles of the troughs for different radii. The troughs have very underdense centres, and $\kappa$ gradually increases with $r$. This increase gets sharper near $r = R_{\rm{v}}$ and then slows down further outside the trough radius. The depth of the convergence profiles is larger for the GSN maps, and the smoothing scale has a relatively small impact. As the trough radius increases, the overall depth of the convergence profiles decreases, however the shapes of the convergence profiles remain the same. The impact of GSN on the convergence profile decreases with $R_{\rm{v}}$, with the case $R_{\rm{v}}=30$ arcmin showing little difference between the GSN and no GSN cases.
The bottom row shows the tangential shear profiles of troughs, which are characterised by a maximum amplitude that is roughly an order of magnitude smaller than that of the WL minima. The inclusion of GSN has little impact on the trough tangential shear profiles for $r\lesssim R_{\rm{v}}$ (especially for the 20 and 30 arcmin troughs). At larger distances, GSN leads to an increase in tangential shear which persists even up to $r=2R_{\rm{v}}$. The difference between the maximum tangential shear amplitude for the no GSN and GSN maps is very small relative to the same feature in the WL minima. The difference between the no GSN and GSN maps is somewhat insensitive to the smoothing scale, and depends more strongly on the trough radius. As the trough radius increases, the amplitude of the tangential shear profiles decreases slightly and so does the difference between the no GSN and GSN maps.
The statistics describing the troughs identified directly in the convergence maps are significantly less contaminated by the inclusion of GSN than the WL minima. However, the overall amplitude of the tangential shear profile of troughs is also significantly lower, which, as we shall see in Section~\ref{sec:comparison}, implies that we need a larger survey to measure trough profiles with the same SNR as the minima profiles.
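The trough selection summarised above (randomly placed circles ranked by their mean enclosed convergence, with only the lowest $20\%$ retained) can be sketched in a few lines. The map construction, circle count and trough radius below are illustrative assumptions, not the settings of our pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative convergence map: white noise crudely smoothed by averaging
# shifted copies (a stand-in for the Gaussian smoothing of the real maps).
npix = 128
kappa = rng.normal(size=(npix, npix))
for shift in (1, 2, 4):
    kappa = 0.5 * kappa + 0.25 * (np.roll(kappa, shift, 0) + np.roll(kappa, shift, 1))
kappa -= kappa.mean()

def mean_enclosed_kappa(kappa, cx, cy, rv):
    """Mean convergence inside a circle of radius rv pixels centred on (cx, cy)."""
    y, x = np.indices(kappa.shape)
    mask = (x - cx) ** 2 + (y - cy) ** 2 <= rv ** 2
    return kappa[mask].mean()

# Place random circles, rank them by mean enclosed convergence, and keep the
# bottom 20%: these are the troughs (the top 80% are discarded).
rv = 8                                   # trough radius in pixels (assumed)
ncirc = 500
centres = rng.integers(rv, npix - rv, size=(ncirc, 2))
kbar = np.array([mean_enclosed_kappa(kappa, cx, cy, rv) for cx, cy in centres])
threshold = np.quantile(kbar, 0.2)
troughs = centres[kbar <= threshold]
```

The PDF of the mean enclosed convergence shown in the top row then corresponds to a histogram of kbar for the retained circles only.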
\subsection{Troughs in the peak distribution}
\label{sec:results:troughs_peak}
\begin{figure*}
\centering
\includegraphics[width=2\columnwidth]{Figures/Troughs_peakField_stats_2.pdf}
\caption{The statistics for troughs identified in the distribution of WL peaks. For the meanings of line colours and line types see the legend and, for more details, the caption of Figure \ref{fig:minima statistics}. The top row shows the PDF of the mean enclosed convergence within the troughs, the middle row shows the mean convergence profiles of the troughs and the bottom row shows the mean tangential shear profiles of the troughs. All results shown here are for a fixed trough size of $r = 30$ arcmin. We identify troughs using only the high WL peaks and we show results for two peak height selections: $\nu>2$ (left column) and $\nu>4$ (right column). The lower sub-panel in the top (bottom) panel shows the
relative (absolute) difference between the trough $\kappa$ PDFs (tangential shear profiles) measured in WL maps with and without GSN.
}
\label{fig:Trough PeakField statistics}
\end{figure*}
We next study the troughs identified in the distribution of WL peaks. Before identifying troughs, we first remove all peaks below a predetermined $\nu$ threshold from the peak catalogue. This reduces the impact of GSN by discarding peaks with low height. This approach adds another free parameter to the void identification process compared to troughs identified in the convergence field, the $\nu$ threshold for peak heights. In Fig.~\ref{fig:Trough PeakField statistics}, we present results for two $\nu$ thresholds, $\nu > 2$ and $\nu > 4$, to test the impact of this threshold on the resulting trough statistics. To improve clarity, in Fig. \ref{fig:Trough PeakField statistics} all results are presented for a fixed trough size of $R_{\rm{v}} = 30$ arcmin, which is chosen because it is the trough radius at which results for the troughs agree best between the no GSN and GSN maps.
The top row of Fig.~\ref{fig:Trough PeakField statistics} shows the PDFs of the mean enclosed convergence for troughs identified from WL peak catalogues with heights $\nu > 2$ and $\nu > 4$. Note that this is the trough PDF, which is calculated after the randomly placed circles with $\overline{\kappa}(<R_{\rm{v}})$ in the top $80\%$ are discarded, unlike in Fig.~\ref{fig:Trough ConvField statistics}. Away from the peak of the PDF, the results from the no GSN and GSN maps disagree for all smoothing scales for both peak thresholds. However, the agreement between the no GSN and GSN maps is good near the positive-$\kappa$ end of the PDF for all smoothing scales in the $\nu > 4$ catalogue. For the $\nu > 2$ catalogue, the PDFs are positively skewed, indicating that the trough algorithm is preferentially selecting underdense regions, however for the $\nu > 4$ catalogues the PDFs are more symmetric. This is due to the sparsity of tracers at this threshold, where the low number density of WL peaks implies that the $\nu > 4$ catalogue does not give an accurate representation of the underlying convergence field since, for example, many overdense regions of the convergence map do not have peaks with $\nu>4$. Despite this, the maximum of the PDF is still below zero indicating that we predominantly select underdense regions.
The middle row shows the radial convergence profiles of the troughs identified in the WL peak distribution. These profiles have a similar shape to those of the troughs identified in the WL convergence maps. For the $\nu > 2$ catalogue, agreement between the no GSN and GSN maps improves as the smoothing scale increases, and the two convergence profiles are within the one sigma standard error for $\theta_{\rm{s}} = 5$ arcmin. Here, the overall depth of the convergence profiles also decreases with increasing smoothing scale. However, for the $\nu > 4$ catalogues, increasing the smoothing scale only slightly improves the agreement between the no GSN and GSN maps, and there is no trend between smoothing scale and convergence profile depth, since $\theta_{\rm{s}} = 2.5$ arcmin produces the deepest convergence profile. This is due to the sparsity of WL peaks for $\nu > 4$, which results in the troughs more randomly tracing the underlying convergence field when compared to a lower $\nu$ threshold. This is evident from the fact that the convergence profiles are not as deep in the $\nu > 4$ catalogue when compared to the $\nu > 2$ catalogue.
The bottom row shows the radial tangential shear profiles of the troughs identified in the WL peak distribution. For all smoothing scales and both the no GSN and the GSN maps, the tangential shear profiles agree with each other reasonably well below $r = R_{\rm{v}}$, for both $\nu$ thresholds. This is due to the consistent shape of the convergence profiles (with only constant shifts with respect to each other) in all cases, which is the main feature that the tangential shear profile is sensitive to. The tangential shear profiles peak at $r \sim 1.2 R_{\rm{v}}$, which is where results from the different smoothing scales separate. The difference between the no GSN maps and the GSN maps is largest at the peak of the tangential shear, and slowly reduces out to larger radii. These tangential shear profiles are also noisier than for other void finders -- this is due to the larger scatter in the locations of the troughs identified in the peak distribution, as can be seen in Panel G of Fig.~\ref{fig:visualisation}, which results in a larger scatter of convergence profiles.
Compared to troughs found directly in the $\kappa$ map, troughs identified using peaks have tangential shear profiles that have slightly lower amplitudes, however the agreement between the no GSN and GSN cases is better, which is a consequence of the fact that the WL peaks are less affected by GSN than the convergence field in the low $\kappa$ regions of the WL map.
\subsection{WVF voids}
\label{sec:results:WVF}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figures/WVF_stats_2.pdf}
\caption{The abundance (top row), and the convergence (middle row) and tangential shear (bottom row) profiles of WVF voids. For the meanings of line colours and line types see the legend and, for more details, the caption of Figure \ref{fig:minima statistics}. The lower sub-panel in the top (bottom) panel shows the
relative (absolute) difference between the WVF void abundances (tangential shear profiles) measured in WL maps with and without GSN.
}
\label{fig:WVF statistics}
\end{figure}
Fig.~\ref{fig:WVF statistics} shows the properties of the WVF voids. The top panel shows the differential void abundance as a function of void radius $R_{\rm{v}}$. For the smallest smoothing scale, the largest void that is identified is $0.2$ deg, and as the smoothing scale increases the sizes of the voids also increase, which reduces the total number of voids. The size distributions of the voids are significantly different between the no GSN and GSN maps, where including GSN increases the total number of voids and reduces their size. This is due to GSN adding spurious features to the convergence field such as artificial ridges and minima, which results in the production of spurious voids. Since the WVF voids fill the entire area of the convergence map, having more voids implies that the average void size decreases. Even for $\theta_{\rm{s}} = 5$ arcmin, there is still a disagreement in the size distribution between the no GSN and GSN maps, and this disagreement is much larger than the one-sigma standard error bars (shown by the shaded regions around the curves).
The convergence profiles of WVF voids are shown in the middle panel. They have a smooth shape, with negative convergence values at $r = 0$, gradually increasing outwards and crossing $\kappa = 0$ at $r \sim 0.7R_{\rm{v}}$. The convergence profiles continue to smoothly increase until $r = R_{\rm{v}}$, at which point they start to decrease and return to the mean background value of $\kappa = 0$ far outside of the void radius. At $r \sim 1.5 R_{\rm{v}}$ some of the void profiles briefly become underdense, which is because the boundary of each void is also the boundary of one of its neighbouring voids, which has an underdense interior. This feature is exaggerated for the smaller voids since averages are taken over smaller areas.
In the absence of GSN, the convergence profiles are very similar for different $\theta_{\rm{s}}$ values. However, after adding GSN, the convergence profiles depend heavily on the chosen smoothing scale. For $\theta_{\rm{s}} = 1$ arcmin, the addition of GSN significantly reduces the $\kappa$ value at $r\sim0$, which is very similar to the behaviour seen in the WL minima convergence profiles. The similarity between the two is due to the fact that each watershed basin is connected to a local minimum, which on average resides close to the centre of the void, and GSN produces a large number of spurious local minima, which can often be deeper than true minima (Fig.~\ref{fig:minima statistics}, top panel). This same feature will be seen below for SVF voids found from the $\kappa$ field. Furthermore, the amplitude of the convergence profile in the positive regions is also boosted by GSN, which makes the peak at $r = R_{\rm{v}}$ significantly higher. This behaviour occurs because the boundary of WVF voids consists of ridges in the $\kappa$ field and positive GSN values can move and enhance the height of these ridges (the algorithm chooses the highest local ridge and thus preferentially selects the regions with positive GSN values). This is more apparent for smaller smoothing scales, where GSN has not been sufficiently suppressed. The differences between the no GSN and GSN convergence profiles are quickly suppressed with increasing $\theta_{\rm{s}}$.
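The watershed mechanics invoked above (each basin grown from a local minimum, with basin boundaries tracing the ridges of the field) can be illustrated with a minimal steepest-descent implementation in pure numpy. Periodic boundaries and eight-connectivity are simplifying assumptions of this sketch, not features of the actual WVF code:

```python
import numpy as np

def watershed_basins(kappa):
    """Label each pixel with the flat index of the local minimum reached by
    steepest descent. Each basin is one candidate WVF void; basin boundaries
    trace the ridges of the kappa field. Periodic boundaries are assumed."""
    ny, nx = kappa.shape
    idx0 = np.arange(ny * nx).reshape(ny, nx)
    best = idx0.copy()          # index of the lowest neighbour (self if minimum)
    best_val = kappa.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            nbr_val = np.roll(np.roll(kappa, dy, axis=0), dx, axis=1)
            nbr_idx = np.roll(np.roll(idx0, dy, axis=0), dx, axis=1)
            lower = nbr_val < best_val
            best_val[lower] = nbr_val[lower]
            best[lower] = nbr_idx[lower]
    # Follow the descent pointers until every pixel reaches a fixed point
    # (a local minimum); the chains are acyclic since kappa strictly decreases.
    flat = best.ravel()
    labels = flat.copy()
    while True:
        nxt = flat[labels]
        if np.array_equal(nxt, labels):
            return labels.reshape(ny, nx)
        labels = nxt
```

One common convention then assigns each basin an effective radius $R_{\rm{v}} = \sqrt{A/\pi}$ from its area $A$.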
The bottom panel shows the tangential shear profiles for the watershed voids, which peak at $r\sim0.85R_{\rm{v}}$ and converge to $\gamma_{\rm{t}}\simeq0$ at large distances. Again, the $\gamma_{\rm{t}}$ profiles are significantly boosted by GSN, and quickly converge back to the no GSN counterparts as the smoothing scale increases. However, a visible difference remains even with $\theta_{\rm{s}}=5$ arcmin.
\subsection{SVF in the convergence map}
\label{sec:results:SVF_kappa}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figures/SVF_kappaField_stats_2.pdf}
\caption{The statistics describing the SVF applied directly to the convergence maps: the abundance (top row), and the convergence (middle row) and tangential shear (bottom row) profiles of SVF $\kappa$ voids. For the meanings of line colours and line types see the legend and, for more details, the caption of Figure \ref{fig:minima statistics}. The lower sub-panel in the top (bottom) panel shows the
relative (absolute) difference between the SVF-$\kappa$ void abundances (tangential shear profiles) measured in WL maps with and without GSN.
}
\vspace{-1em}
\label{fig:SVF kappa statistics}
\end{figure}
Fig.~\ref{fig:SVF kappa statistics} shows the statistics for SVF voids identified directly in the convergence field (SVF $\kappa$). The shape of the void abundance function is different from the other void finders, declining faster with void radius than for other void types. Additionally, there is no turning point at the small-radius part of the distribution. For example, the WVF finds few very small voids: its abundance briefly increases with void radius before reaching the peak of the distribution. This is not the case for the abundance of SVF $\kappa$ voids, which does not reach a peak even at the smallest radii plotted. This is because the SVF identifies voids with sizes down to the pixel resolution. As mentioned above, in this work we remove very small voids by imposing a minimum void size, $R_{\rm{v}} \geq 2\theta_{\rm{s}}$.
The abundance of voids is systematically larger for the GSN maps than for the no GSN maps, for all smoothing scales. In the case of the WVF, GSN increases the abundance of small voids but decreases the abundance of large voids, due to spurious structures introduced by GSN splitting the larger voids into smaller objects. For the SVF, the abundance of large voids is much lower to start with, and the voids populate the convergence maps much more sparsely, as shown in Panel D of Fig.~\ref{fig:visualisation}. This means that the spurious structures introduced by GSN contribute less to the degradation of true voids and largely only produce spurious voids, which stem from the spurious minima added by GSN (Fig.~\ref{fig:minima statistics}, top panel) that act as the seeds for the SVF $\kappa$ voids; this can be seen by comparing panels 1D and 2D in Fig.~\ref{fig:visualisation}. Also, note that the abundance of SVF $\kappa$ voids decreases for all void radii when $\theta_{\rm{s}}$ increases, which is because the abundance of WL minima decreases with increasing $\theta_{\rm{s}}$, as shown by the top panel of Fig.~\ref{fig:minima statistics}.
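The circle-growing step that seeds SVF $\kappa$ voids on the WL minima can be sketched as follows. The stopping criterion used here (grow until the mean enclosed convergence first exceeds a threshold) and the threshold value are illustrative assumptions rather than the exact pipeline criterion:

```python
import numpy as np

def grow_void(kappa, cx, cy, kappa_thresh=0.0, r_max=None):
    """Grow a circle around a candidate centre (a WL minimum) until the mean
    enclosed convergence first exceeds kappa_thresh, and return that radius
    in pixels. The stopping criterion is an illustrative assumption."""
    y, x = np.indices(kappa.shape)
    r2 = (x - cx) ** 2 + (y - cy) ** 2
    if r_max is None:
        r_max = min(kappa.shape) // 2
    for rv in range(1, r_max):
        if kappa[r2 <= rv ** 2].mean() > kappa_thresh:
            return rv
    return r_max
```

A full finder would additionally handle overlapping candidates grown around different minima; only the growth step is shown here.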
The middle panel shows the mean radial convergence profiles of the SVF $\kappa$ voids. These voids are very deep at $r\sim0$, similar to the WL minima, and the convergence increases continuously out to $r=2R_{\rm{v}}$. As in the WVF case, the convergence profiles in the no-GSN maps are somewhat insensitive to the chosen smoothing scale, whereas the depth of the profiles for the GSN maps is quickly suppressed with increasing $\theta_{\rm{s}}$. The depth of the convergence profiles at $r\sim0$ is artificially boosted when GSN is included (e.g. by a factor of 3 for $\theta_{\rm{s}}=1$ arcmin), which is again due to the creation of spurious minima with very low $\kappa$ values. However, by $r=0.5R_{\rm{v}}$ the no GSN and GSN maps agree reasonably well, apart from the voids in the GSN-added map for $\theta_{\rm{s}}=1$ arcmin, whose convergence profile returns to $\kappa=0$ faster than the other voids.
The bottom panel shows the tangential shear profiles for the SVF $\kappa$ voids. For all other void finders, the inclusion of GSN boosts the amplitude of the tangential shear profile, and in some cases also slightly changes the radius where the signal reaches its maximum. For the SVF $\kappa$ voids, the $\gamma_{\rm{t}}$ signal, which is maximal at $r\sim1.1R_{\rm{v}}$, is also boosted in the GSN maps relative to the no GSN maps. But here we find a secondary peak of $\gamma_{\rm{t}}$ at $r/R_{\rm{v}}\sim0.15$, which is particularly strong for small smoothing scales and when GSN is included. This is due to the flattening of the $\kappa$ profile at $0.3\lesssim r/R_{\rm{v}}\lesssim 0.8$ following a steep increase at $r/R_{\rm{v}}\lesssim0.3$. Such a large inner gradient of the $\kappa$ profile is due to these voids being centred on local WL minima, and this is more true in the GSN maps, for which many of the SVF void centres correspond to spurious WL minima that are typically considerably deeper than the physical minima, as can be seen from the abundance of WL minima shown in the top panel of Fig.~\ref{fig:minima statistics} (and also its middle panel). These spurious minima, on average, have much lower $\kappa$ values than their surroundings, which manifests as a strong $\kappa$ gradient and explains why the secondary peak is more pronounced for the GSN maps.
The agreement between the tangential shear profiles in the no-GSN maps and the GSN maps improves slightly as the smoothing scale increases. However, a significant difference remains even for $\theta_{\rm{s}}=5$ arcmin, as in the case of WVF voids, highlighting that the impact of GSN is hard to eliminate completely for voids identified from the WL convergence map.
\subsection{SVF in the peak distribution}
\label{sec:results:SVF_peak}
\begin{figure*}
\centering
\includegraphics[width=2\columnwidth]{Figures/SVF_peak_field_stats_2.pdf}
\caption{The statistics describing the SVF applied to the WL peak distribution: the abundance (top row), and the convergence (middle row) and tangential shear (bottom row) profiles of SVF peak voids. For the meanings of line colours and line types see the legend and, for more details, the caption of Figure \ref{fig:minima statistics}. Each column corresponds to voids identified in a different WL peak catalogue, $\nu>2$ on the left and $\nu>4$ on the right. The lower sub-panel in the top (bottom) panel shows the relative (absolute) difference between the SVF-peak void abundances (tangential shear profiles) measured in WL maps with and without GSN.
}
\label{fig:SVF peak statistics}
\end{figure*}
Fig.~\ref{fig:SVF peak statistics} shows the statistics for SVF voids identified in the WL peak distribution (SVF peak). The top panel shows the differential void abundance. The SVF peak algorithm identifies the largest voids of all the void finders studied in this work, with some voids as large as two degrees in radius. Here larger smoothing scales reduce the total number of voids but create larger voids, and including GSN adds spurious small voids and reduces the abundance of large voids. This is due to the generation of spurious WL peaks from the addition of GSN, where a higher number density of tracers splits large voids into multiple smaller ones. Fewer voids are detected overall in the $\nu > 4$ catalogue compared to the $\nu > 2$ catalogue, however these voids are larger than their counterparts in the $\nu > 2$ catalogue. This is again due to the reduced number density of WL peaks that are used as tracers in the void identification. Apart from this, the abundances of the voids in the two catalogues appear qualitatively similar.
The middle row shows the convergence profile for the SVF peak voids, which are underdense close to the void centre and overdense near the void boundary. Outside of the void radius the convergence gradually approaches the background value of $\kappa=0$. The depths of the void centres and amplitudes at the void radius are boosted in the GSN maps, however the difference between the void convergence profiles in the no-GSN and GSN added maps is quickly suppressed as the smoothing scale increases, and at $\theta_{\rm{s}}=5$ arcmin the difference is small. The depth close to the void centres and the peak at the void boundary also decrease when the smoothing scale increases. These voids are less underdense than most of the other void types.
The bottom row presents the tangential shear profiles for the SVF peak voids. These profiles have a sharp peak at $r = R_{\rm{v}}$ and the amplitude of these peaks is large despite the shallow convergence profiles near the void centres. This is due to the rapid increase in $\kappa(r)$ seen in the range $r/R_{\rm{v}}\in[0.7,1.0]$, with the $\gamma_{\rm{t}}(r)$ amplitude being largest when $\kappa(r)$ changes rapidly. This highlights that identifying the deepest underdensities is not the most important criterion when the tangential shear profile is the observable of main interest. Similar to the other void finders, the peak of the tangential shear profiles is boosted in the GSN maps, however, as with the convergence profiles this difference is quickly suppressed as $\theta_{\rm{s}}$ increases, with most of the difference removed for $\theta_{\rm{s}} = 5$ arcmin. The amplitude of the tangential shear profiles is slightly smaller for the peak catalogue with a larger $\nu$ threshold, indicating that it does not depend strongly on the $\nu$ threshold used for WL peak selection. The main difference comes from the fact that having a higher $\nu$ threshold results in fewer voids that, as we shall see in Section~\ref{sec:comparison}, means a lower SNR when measuring the shear profiles of these voids for a given sky area.
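The link between the $\kappa(r)$ and $\gamma_{\rm{t}}(r)$ profiles invoked here is the standard relation for azimuthally averaged profiles, $\gamma_{\rm{t}}(r) = \bar{\kappa}(<r) - \kappa(r)$, which makes explicit why $\gamma_{\rm{t}}$ is largest in amplitude where $\kappa(r)$ changes most rapidly. A small numerical sketch (the binning and integration scheme are assumptions):

```python
import numpy as np

def tangential_shear(r, kappa_r):
    """gamma_t(r) = kbar(<r) - kappa(r), with kbar(<r) the mean convergence
    enclosed within radius r, obtained by trapezoidal integration of
    2*pi*r'*kappa(r'). Assumes r[0] == 0 and monotonically increasing r."""
    integrand = 2.0 * np.pi * r * kappa_r
    segments = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r)
    enclosed = np.concatenate(([0.0], np.cumsum(segments)))
    kbar = np.empty(len(r), dtype=float)
    kbar[0] = kappa_r[0]                 # limiting value of kbar as r -> 0
    kbar[1:] = enclosed[1:] / (np.pi * r[1:] ** 2)
    return kbar - kappa_r
```

Because $\gamma_{\rm{t}} = \bar{\kappa}(<r) - \kappa(r)$, a constant offset in $\kappa(r)$ leaves $\gamma_{\rm{t}}$ unchanged, which is why convergence profiles with the same shape but constant shifts with respect to each other give very similar shear signals.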
\subsection{Tunnels}
\label{sec:results:tunnels}
\begin{figure*}
\centering
\includegraphics[width=2\columnwidth]{Figures/Tunnels_stats_2.pdf}
\caption{The statistics describing the Tunnels identified in the WL peak distribution: the abundance (top row), and the convergence (middle row) and tangential shear (bottom row) profiles of tunnels. For the meanings of line colours and line types see the legend and, for more details, the caption of Figure \ref{fig:minima statistics}. The left and right columns correspond to tunnels identified in WL peak catalogues with heights $\nu >2$ and $\nu > 4$ respectively. The lower sub-panel in the top (bottom) panel shows the relative (absolute) difference between the tunnel abundances (tangential shear profiles) measured in WL maps with and without GSN.}
\label{fig:tunnels statistics}
\end{figure*}
Fig.~\ref{fig:tunnels statistics} shows the statistics of voids identified in the WL peak distribution using the tunnel algorithm, where the left and right columns correspond to tunnels identified in WL peak catalogues with heights $\nu > 2$ and $\nu > 4$ respectively. The top row shows the differential void abundance of the tunnels. The tunnel algorithm also identifies some of the largest voids studied in this work, although the largest SVF peak voids are larger than the largest tunnels. Consistent with other void finders, the tunnel algorithm identifies more voids in total in the maps that include GSN, and fewer large voids. The abundance of the tunnels decreases, and the size of the tunnels increases, with increasing $\theta_{\rm{s}}$. The differences in the void abundances between the no-GSN and GSN maps decrease with increasing $\theta_{\rm{s}}$ and the difference becomes small at $\theta_{\rm{s}}=5$ arcmin.
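Each tunnel corresponds to the circumcircle of a Delaunay triangle of the WL peak distribution, which by construction contains no peaks in its interior. The triangulation itself would come from a standard routine (e.g. scipy.spatial.Delaunay), so only the circumcircle step is sketched here:

```python
import numpy as np

def circumcircle(a, b, c):
    """Centre and radius of the circle through three 2D points. In the tunnel
    algorithm, each Delaunay triangle of the WL peak distribution defines one
    such circle, empty of peaks by construction."""
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    # Solve |p - a|^2 = |p - b|^2 and |p - a|^2 = |p - c|^2 for the centre p:
    # a linear 2x2 system 2*(b - a) . p = |b|^2 - |a|^2 (and similarly for c).
    m = 2.0 * np.array([b - a, c - a])
    rhs = np.array([b @ b - a @ a, c @ c - a @ a])
    centre = np.linalg.solve(m, rhs)
    radius = np.linalg.norm(centre - a)
    return centre, radius
```

For the right triangle with vertices $(0,0)$, $(2,0)$ and $(0,2)$ this returns centre $(1,1)$ and radius $\sqrt{2}$; the tunnel radius $R_{\rm{v}}$ is this circumradius.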
The middle row shows the tunnel convergence profiles, which have a very similar shape to that of the SVF peak voids. This is to be expected as in some cases both of these algorithms identify the same voids. Beyond their similarities, the tunnel algorithm identifies voids with slightly deeper convergence profiles near the centre and more overdense ridges at the boundary. This is because the tunnels by definition do not enclose any WL peaks but instead only have peaks residing at their boundaries, whereas the SVF peak algorithm allows WL peaks to reside within voids, which can lead to higher $\kappa$ values inside SVF peak voids than inside tunnels. Similar to other void types, adding GSN leads to lower $\kappa$ values at the tunnel centres and a higher overdensity at the tunnel boundaries. This difference is again strongly suppressed for $\theta_{\rm{s}}=5$ arcmin. The tunnels behave similarly to the SVF peak voids when the $\nu$ threshold of the WL peak catalogue is increased, slightly reducing the depth of $\kappa$ profiles at the void centre and the peak at the void boundary, whilst the peak becomes sharper.
The bottom row shows the tangential shear profiles, which are qualitatively similar to the results of SVF peak voids, except that the tunnels have a higher peak at $r = R_{\rm{v}}$. The difference between the no-GSN and GSN-added maps responds to the chosen smoothing scale in the same way as the convergence profile, with little difference remaining when $\theta_{\rm{s}}$ increases to $5$ arcmin. Changes in the tangential shear in response to increasing the $\nu$ threshold are also the same as in the convergence profiles. Here we note that for the $\nu > 4$ WL peak catalogue, the convergence and tangential shear profiles for all smoothing scales, and for maps with and without GSN, are all very similar and follow each other closely, overlapping in some places. The main difference between the different curves can be seen at the peak of the profiles where most of the information in terms of SNR is contained \citep{Cautun2018}.
\section{Comparison of different void definitions}
\label{sec:comparison}
In this section we quantify the relative merit of each void finder. There are many criteria that one could use to quantify the suitability of a specific void finder for a given purpose \citep[e.g. see][]{Cautun2018,Paillas2019}. Here we are interested in a rather general comparison of the various methods that identify WL voids. We choose to do so by answering two questions: i) Which void populations are least affected by GSN? and ii) Which void types have the highest tangential shear signal, as quantified in terms of SNR? These questions are motivated by the goal of using WL voids to constrain cosmological parameters and alternative cosmological models. To a first approximation, we expect that the constraints derived from voids will be maximal when their signal, such as $\gamma_{\rm{t}}$ profiles, can be measured with low uncertainties (i.e., high SNR) and when the effects of GSN are minimised \citep[e.g. see][]{Cautun2018,Paillas2019}. This might not always be the case as we discuss later on, but nonetheless is a good starting point for a general comparison.
\subsection{Impact of GSN}
\label{sec:comparison:GSN}
GSN is the leading contribution to noise that contaminates the observed WL signal, and for this reason it is important to understand how the void finders respond to GSN, before the statistics developed here can be used to constrain cosmological parameters. As we saw in Section~\ref{sec:void statistics}, GSN can lead to the identification of spurious voids and to the breaking of physical voids into more objects. This could potentially degrade the cosmological information contained in the statistics of voids, and thus lower the cosmological constraints that can be inferred using WL voids.
To assess the effect of GSN, we proceed by comparing voids in maps with and without GSN. Such a test requires us to choose a WL void statistic to measure the impact of GSN. Up to now, we have studied the abundances and $\gamma_{\rm{t}}$ profiles with and without GSN, and here we choose to focus on the tangential shear profile, which has been shown to provide tighter cosmological constraints, such as when testing modified gravity models \citep[e.g.][]{Davies2019b}. We measure the change in the amplitude of the $\gamma_{\rm{t}}$ signal when GSN is added, as a means to quantify the impact of GSN on the lensing profile. Typically, for the void $\gamma_{\rm{t}}$ profiles most of the cosmological constraining power comes from the bins where the amplitude of the signal is maximal \citep[e.g.,][]{Cai2015,Barreira2015,Cautun2018,Davies2019b} and, as such, we measure the impact of GSN at this location.
\begin{figure*}
\centering
\includegraphics[width=2\columnwidth]{Figures/diff_and_SNR_2.pdf}
\caption{Comparisons of the seven void populations studied here in terms of the impact of GSN and in terms of the SNR associated with the tangential shear measurement for an \textsc{lsst}{}-like survey.
\textit{Left panel:} the relative difference between $\gamma_{\rm{t}}$ in the GSN-added and no-GSN convergence maps, at the radius at which the amplitude of $\gamma_{\rm{t}}$ in the no-GSN maps is highest ($\gamma_{\rm{t}}$ is lowest).
\textit{Right panel:} An \textsc{lsst}{} forecast of the total SNR with which the $\gamma_{\rm{t}}(r)$ profile will be measured for each void type. All results in both panels are for all void finders studied in this work (x-axis).
A yellow background indicates results for void finders applied to the WL peak distribution and a blue background indicates results for void finders applied directly to the WL convergence maps. Circles correspond to results from voids identified in WL peak catalogues with $\nu > 2$, triangles are for $\nu > 4$, and squares are from voids identified directly in convergence maps. Blue, orange and green markers indicate
different smoothing scales, with $\theta_{\rm{s}} = 1, 2.5$ and $5$ arcmin, respectively. In the right panel solid markers indicate results from no GSN maps, and empty markers show results for WL maps with GSN added. Here we plot troughs with a radius of $R_{\rm{v}}=10$ and $30$ arcmin (labelled as $r10^\prime$ and $r30^\prime$, respectively), to show the impact of changing the trough radius.
}
\label{fig:diff_and_SNR}
\end{figure*}
The left panel of Fig.~\ref{fig:diff_and_SNR} shows the relative difference, $|\gamma_{\rm{t}}^{\rm GSN} - \gamma_{\rm{t}}^{\rm no-GSN}|/ |\gamma_{\rm{t}}^{\rm no-GSN}|$, between $\gamma_{\rm{t}}$ in the GSN-added and no-GSN convergence maps, at the radius at which the amplitude of $\gamma_{\rm{t}}$ in the no-GSN maps is maximal (i.e., where $\gamma_{\rm{t}}$ has the most negative value), for all void finders studied in this work. Here, lower values correspond to a small relative impact on the $\gamma_{\rm{t}}$ amplitude from GSN while large values indicate that GSN is significantly boosting the $\gamma_{\rm{t}}$ amplitude (for all void populations studied here, GSN always increases the amplitude of the $\gamma_{\rm{t}}$ signal; see Appendix~\ref{app:WL voids in GSN maps} for a discussion of the reason behind that).
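The statistic shown in this panel can be written compactly; the profile values below are made-up numbers purely to illustrate the definition:

```python
import numpy as np

def gsn_impact(gt_no_gsn, gt_gsn):
    """Relative change of the tangential shear amplitude due to GSN, evaluated
    at the radial bin where |gamma_t| of the no-GSN profile is maximal."""
    i = np.argmax(np.abs(gt_no_gsn))
    return abs(gt_gsn[i] - gt_no_gsn[i]) / abs(gt_no_gsn[i])
```

For instance, a profile whose deepest bin moves from $-0.004$ without GSN to $-0.006$ with GSN has a relative impact of $0.5$.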
We find that GSN has the largest impact on the $\gamma_{\rm{t}}$ profiles of WL minima. This is due to the fact that GSN creates more spurious minima than spurious structures in the other void finders, which is one drawback of the simplicity of the WL minima definition. The boost from GSN is somewhat decreased for the minima when larger smoothing scales are applied. However, in many cases the boost to the minima $\gamma_{\rm{t}}$ profiles from GSN with $\theta_{\rm{s}} = 5$ arcmin (about $55\%$) is larger than the $\gamma_{\rm{t}}$ boost from GSN for other void finders with $\theta_{\rm{s}}=1$ arcmin. The $\gamma_{\rm{t}}$ signal for SVF $\kappa$ is also boosted by GSN by a similar (relative) amount as for the WL minima, which is due to the minima being used as prospective void centres at the start of the SVF $\kappa$ void identification process. For SVF $\kappa$ the relative difference between the no-GSN and GSN $\gamma_{\rm{t}}$ amplitudes is more quickly suppressed by increasing $\theta_{\rm{s}}$ than for the WL minima, reaching $\sim20\%$ for $\theta_{\rm{s}}=5$ arcmin. The WVF voids also respond to GSN in a similar way to the WL minima and SVF $\kappa$, although the amplitude of the boost due to GSN is slightly lower. Finally, of all the void finders applied directly to the convergence maps, troughs $\kappa$ appear to be the least impacted by GSN, and increasing $\theta_{\rm{s}}$ also has the smallest effect on their agreement between the no-GSN and GSN maps, as can also be seen in Fig.~\ref{fig:Trough ConvField statistics}.
The void populations that are the least impacted by GSN are those identified in the distribution of WL peaks. This is due to high amplitude WL peaks (Fig.~\ref{fig:kappa pdf and WL peak abundance}, right panel) being more resilient to GSN than underdense regions, i.e., $\kappa<0$, which are the ones determining most of the properties of voids identified directly in the convergence field.
We find that both the tunnels and SVF peak voids respond to GSN in very similar ways and that the impact of GSN is reduced for voids identified in peak catalogues with larger $\nu$ thresholds. Finally, the trough peak void finder is the most resilient to GSN of all the methods that employ WL peaks, however in contrast to the tunnels and SVF peak, the impact of GSN increases when the $\nu$ threshold increases, which is because troughs peak is more sensitive to tracer sparsity than tunnels and SVF peak.
Both of the trough algorithms are the least impacted by GSN for $R_{\rm v} = 30$ arcmin; however, for a trough radius of $10$ arcmin, the impact of GSN on the tangential shear profiles for both trough peak and trough $\kappa$ voids becomes worse than for tunnels and SVF peak.
\subsection{The SNR of tangential shear profiles}
\label{sec:comparison:SNR}
Next we investigate the signal-to-noise ratio (SNR) with which we can measure the tangential shear signal of WL voids. Our goal is to assess which void type has the largest SNR since potentially those voids are the most promising to use for cosmological constraints. For example, \citet{Cautun2018} and \citet{Paillas2019} have studied the signature of modified gravity models in the void population identified using multiple void finders. For 2D voids, they have found that all methods show roughly equal fractional differences in the void shear profiles when comparing modified gravity with the standard model, and thus the optimal void type to constrain such alternative cosmological models is the one in which the $\gamma_{\rm{t}}$ profile can be measured with the highest SNR.
We define the SNR with which we can measure the tangential shear profile of voids as:
\begin{equation}
{\rm{SNR}}^2 \equiv \sum_{i,j} \gamma_{\rm{t}}(i) \,\, \alpha \,\, {{\rm Cov}}^{-1}(i,j) \,\, \gamma_{\rm{t}}(j)
\label{eq:SNR} \;,
\end{equation}
where the sum is over all bins of $r/R_{\rm{v}}\in[0,2]$, $i$ and $j$ denote the bins to be summed over, and ${\rm Cov}^{-1}$ is the inverse of the covariance matrix for the tangential shear measurements. Here $\gamma_{\rm{t}}$ is the mean tangential shear measured from all voids from all 192 maps used in this study and $\alpha$ is the Anderson-Hartlap factor \citep{Anderson2003, Hartlap2007} which we use to compensate for the bias introduced by inverting a noisy covariance matrix. The $\alpha$ factor is given by
\begin{equation}
\alpha = \frac{ N - N_{\rm{bin}} - 2 }{N - 1}
\; ,
\end{equation}
where $N = 192$ is the number of realisations used to calculate the covariance matrix, and $N_{\rm{bin}} = 50$ is the number of radial bins. We calculate the covariance matrix using the central \map{10} region of the 192 maps described in Section~\ref{sec:Weak lensing maps}. We then rescale the SNR values by $\sqrt{ A_{ \rm{LSST} } / {A} } = 13.4$ in order to present a forecast for an \textsc{lsst}{} like survey that has a sky coverage, $A_{ \rm{LSST}} = 18,000$ deg$^2$.
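As an illustration, the computation in Eq.~\eqref{eq:SNR} can be sketched in a few lines. This is a toy example: the synthetic profiles stand in for the measured ones and their shape and noise level are placeholders, while $N$, $N_{\rm{bin}}$, the Anderson-Hartlap factor and the area rescaling (taking $A = 100$ deg$^2$ for the central $10\times10$ deg$^2$ region) follow the text.

```python
import numpy as np

rng = np.random.default_rng(0)
N, N_bin = 192, 50   # number of realisations and radial bins, as in the text

# Synthetic stand-in for the per-realisation tangential shear profiles (N x N_bin);
# the real measurement would come from the 192 convergence maps.
signal = -0.01 * np.sin(np.pi * np.linspace(0.0, 1.0, N_bin))
profiles = signal + 1e-3 * rng.standard_normal((N, N_bin))

gamma_t = profiles.mean(axis=0)          # mean tangential shear profile
cov = np.cov(profiles, rowvar=False)     # (N_bin x N_bin) covariance matrix

alpha = (N - N_bin - 2) / (N - 1)        # Anderson-Hartlap debiasing factor
snr = np.sqrt(alpha * gamma_t @ np.linalg.inv(cov) @ gamma_t)

# Rescale to an LSST-like area: sqrt(A_LSST / A) = sqrt(18000 / 100) ~ 13.4
snr_lsst = np.sqrt(18000.0 / 100.0) * snr
```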
The right panel of Fig.~\ref{fig:diff_and_SNR} shows the SNR (see Eq.~\eqref{eq:SNR}) for the tangential shear profiles from each void finder we have studied. The coloured symbols indicate the results for the three smoothing scales we have studied and we present the SNR values for convergence maps with (open symbols) and without (filled symbols) GSN. This allows us to characterise how the SNR changes when identifying voids in noisy maps.
For all void types, we find that increasing the $\theta_{\rm{s}}$ smoothing length decreases the SNR; the only exceptions are the troughs peak ($R_{\rm{v}} = 30$ arcmin) and troughs $\kappa$ voids ($R_{\rm{v}} = 10$ and $30$ arcmin), for which the SNR is roughly the same for all three smoothing scales that we used. For the voids found in the peak distribution, increasing the peak threshold leads to lower SNR. Thus, the SNR is maximised for small smoothing scales and for peak catalogues with small $\nu$ thresholds.
The right panel of Fig.~\ref{fig:diff_and_SNR} reveals a rather interesting result, which is surprising at first. All void types (except SVF $\kappa$) identified in the maps with GSN show a larger SNR than the voids found in the map without GSN. This might be counter-intuitive since, as we discussed, GSN fragments large voids into two or more components and adds spurious objects to the sample, which potentially reduces the sensitivity of voids to cosmology. The answer is given by the fact that the SNR we calculate describes how well we can measure the $\gamma_{\rm{t}}$ signal of a void and not the amount of cosmological information it contains.
The SNR of WL voids in maps with GSN is higher than for the maps without GSN due to two factors: i) adding GSN increases the amplitude of the mean $\gamma_{\rm{t}}$ profile, and ii) it leads to identifying more voids, as shown in Figs.~\ref{fig:minima statistics}-\ref{fig:tunnels statistics}. The change in void shear profiles and abundance is an artificial one and it is due to using the same noisy map to identify voids and calculate their profiles. For example, adding a negative GSN value to a pixel makes it more likely to be associated with the interior of a void, and, as a result, the interior of voids is deeper for maps with GSN since it is more likely to contain regions with negative GSN contributions than positive ones. The opposite holds true for the void boundaries. A pixel with a positive GSN value is more likely to be identified as part of a void's edge, and thus the void boundaries in maps with GSN contain a higher fraction of pixels with positive GSN values, which artificially boosts the mean $\kappa$ value at the void boundary. These two effects lead to an artificially stronger tangential shear profile for voids in GSN maps (for a more detailed discussion and examples see Appendix~\ref{app:WL voids in GSN maps}).
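The selection bias described above can be demonstrated with a toy experiment (the field and noise amplitudes below are arbitrary placeholders, not calibrated to our maps): pixels selected as most underdense in a noisy map carry, on average, negative noise, so voids identified and measured in the same noisy map appear artificially deeper.

```python
import numpy as np

rng = np.random.default_rng(42)
kappa = 0.01 * rng.standard_normal((512, 512))   # toy "physical" convergence field
noise = 0.02 * rng.standard_normal((512, 512))   # toy GSN: zero-mean, uncorrelated
noisy = kappa + noise

# Take the 5 per cent most underdense pixels of the *noisy* map as "void interiors".
interior = noisy <= np.quantile(noisy, 0.05)

# The selection prefers pixels where the noise happens to be negative, so the
# measured interior depth in the noisy map is deeper than in the true field.
noise_bias = noise[interior].mean()   # negative
depth_noisy = noisy[interior].mean()
depth_true = kappa[interior].mean()
```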
We find that the WL minima tangential shear profiles have the largest SNR in both the no-GSN maps and the GSN-added maps, which indicates that they are promising cosmological probes. The WVF has the second highest SNR in the GSN-added maps, but is beaten by SVF $\kappa$ in the no-GSN maps. Both of the trough algorithms give the lowest SNR values despite being the least affected by GSN in the left panel of Fig.~\ref{fig:diff_and_SNR}. SVF peak gives reasonable SNR values, but in almost all cases fares slightly worse than tunnels, which gives SNR values comparable to the void finders applied directly to the WL convergence maps.
\subsection{Which void definition is best?}
\label{sec:comparison:best_void}
Ideally, the optimal void finder would be the one least affected by GSN while having the largest SNR for its tangential shear profile. Fig.~\ref{fig:diff_and_SNR} shows that these two requirements are not compatible: the void finders least affected by GSN (either troughs peak or troughs $\kappa$) have the lowest SNR for $\gamma_{\rm{t}}$, while the voids with the highest SNR (WL minima) are strongly impacted by GSN. The same behaviour is seen when varying the void parameters studied here. Increasing the $\kappa$ smoothing length, $\theta_{\rm{s}}$, used to identify voids, while lowering the impact of GSN, also decreases the SNR for tangential shear. For voids identified in the peak distribution, increasing the $\nu$ threshold used for selecting the peak catalogue mitigates the effect of GSN, but again reduces the $\gamma_{\rm{t}}$ SNR. Therefore, there is no clear choice for the best void finder or the best selection of void finding parameters, such as $\theta_{\rm{s}}$ or WL peak $\nu$ threshold.
In general, we find that the void finders that use WL peaks as tracers are less impacted by GSN, while the void finders applied directly to the WL convergence maps give higher SNR values. The void finder that generally offers a good compromise between minimal impact from GSN and a high SNR value is the tunnel algorithm. It has a $\gamma_{\rm{t}}$ SNR similar to that of the SVF and WVF $\kappa$ field void finders while being the second least affected by GSN, after troughs.
We would also like to point out that GSN does not necessarily decrease the amount of cosmological information contained by a probe, and that in some special circumstances it can help make this information more easily accessible. For example, this has been pointed out by \citet{Yang2011}, who have shown that the abundance of WL peaks in maps that include GSN provides better cosmological constraints than for maps without GSN. \citeauthor{Yang2011} have attributed this effect to stochastic resonance, a well-studied phenomenon \citep{Gammaitoni1998} in which a signal in a physical system may be boosted when a source of noise is added, under certain conditions. The conditions required for stochastic resonance to take place within a system are: i) a form of a threshold, ii) a weak coherent input, and iii) a source of noise that adds to the coherent input. All three of these conditions apply to WL peaks, as discussed in \citeauthor{Yang2011}, and hence they also apply to WL voids. The first requirement is a form of threshold, which in the context of WL voids is the criterion that all void finders identify underdense regions through one means or another. The second requirement is a weak coherent input, which in this context is the WL convergence map. The WL map can be considered weakly coherent because GSN dominates the signal (before smoothing), but it contains coherent information due to physical correlations in the map induced by gravitational collapse. Finally, stochastic resonance requires a source of noise that is added to the coherent input, which exactly matches our prescription for modelling GSN.
In the case of WL voids, stochastic resonance occurs because the void finders are designed to identify underdense regions, or underdense regions enclosed by overdense regions, etc. The inclusion of GSN exaggerates some underdense regions and some overdense regions. However, since GSN is random and uncorrelated (neglecting higher order effects such as intrinsic alignment), it could also make some underdense and overdense regions flatter (i.e., smoothed out). Because all void finders fulfil a set of criteria when identifying voids, they will preferentially select the regions that have been exaggerated by GSN and neglect the regions that have been flattened by GSN. Furthermore, distinct deep voids in the physical maps (without GSN) are less likely to be removed by GSN, because the physical signal will dominate the GSN. However, less distinct voids that might be missed in the physical maps have a chance to be randomly boosted by GSN, which will result in their detection in the GSN-added maps. These are competing factors with the consequence that GSN can affect true voids and generate spurious fake voids, though true voids are rarely destroyed by GSN but instead are most commonly split up into smaller voids (e.g., as discussed for the tunnel algorithm). It is currently unclear whether or not the boost in SNR from GSN seen in Fig.~\ref{fig:diff_and_SNR} will translate to improved parameter constraints relative to the case without GSN (which is unobservable); we leave this to a future study. For this reason, we have focused on identifying the void finder that is the least impacted by GSN, whilst still producing high SNR values.
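The threshold-plus-noise ingredient of stochastic resonance can be illustrated with a minimal toy simulation (the signal amplitude, threshold and noise level below are arbitrary): a coherent input that never crosses the detection threshold on its own produces threshold crossings once noise is added, and those crossings cluster where the coherent input is strongest.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 20.0 * np.pi, 5000)
signal = 0.8 * np.sin(t)     # weak coherent input, always below the threshold
threshold = 1.0

# Without noise the signal never crosses the threshold: zero detections.
detections_clean = int(np.sum(signal > threshold))

# With moderate noise, crossings occur ...
noisy = signal + 0.4 * rng.standard_normal(t.size)
crossings = noisy > threshold
detections_noisy = int(np.sum(crossings))

# ... and they occur preferentially near the peaks of the coherent signal,
# so the mean signal value at the crossing times is strongly positive.
mean_signal_at_crossings = float(signal[crossings].mean())
```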
\section{Discussion and conclusions}\label{sec:discussion and conclusions}
In this paper we have presented a comparison of different void finders used to identify WL voids within WL convergence or peak fields. The void finders discussed in this work are modified versions of popular void finders that are typically applied to the galaxy distribution. We have shown how each void finder can be modified such that it can be applied to WL maps and have discussed the impact of varying each free parameter associated with the void finders (see Section \ref{sec:void finders}). The WL void finders have been split broadly into two classes: i) those that can identify voids directly in the WL convergence maps, and ii) those that require WL peaks as tracers in order to define the voids. We have found that both void classes offer useful information.
We investigate the WL void abundances, convergence profiles and tangential shear profiles for all void finders (where applicable) in Section \ref{sec:void statistics}. The average void convergence profile consists of an underdense region (i.e. $\kappa<0$) for $r\lesssim R_{\rm{v}}$ (with $R_{\rm{v}}$ the void radius), an overdensity at $r\sim R_{\rm{v}}$ (not present for troughs), followed by a slow convergence to the background expectation of $\kappa=0$ at large radial distances. This translates into a negative tangential shear profile for voids, with the amplitude of $\gamma_{\rm{t}}$ being maximal at $r\simeq R_{\rm{v}}$. We found that WL minima and SVF $\kappa$ produce the deepest (most underdense) convergence profiles at $r = 0$, and the $\gamma_{\rm{t}}$ profiles with the largest amplitudes are produced by tunnels (without GSN) and WL minima (with GSN).
To differentiate the various void finders, we have studied, for each void type, the impact of GSN and the SNR with which their tangential shear profiles can be measured in an \textsc{lsst}{} like survey. In general, voids identified directly in the convergence field have the highest $\gamma_{\rm{t}}$ SNR but are also most severely affected by GSN. The void finders based on the peak distribution have moderate SNR and are less affected by GSN. Troughs with large sizes are least impacted by GSN but are also the ones with the lowest $\gamma_{\rm{t}}$ SNR. Increasing the smoothing length or the peak threshold used to identify voids, while it lowers the impact of GSN, also decreases the SNR with which the void tangential shear profile can be measured. The tunnel algorithm provides a good compromise between mitigating the impact from GSN and producing objects with a large $\gamma_{\rm{t}}$ SNR.
In a future work we will use WL voids to provide cosmological parameter constraints and investigate how WL void statistics can be used in a manner that is complementary to constraints from other probes such as WL peaks and the convergence power spectrum. This will be especially interesting in the context of the $\Omega_{\rm{m}} -\sigma_8$ degeneracy. Both galaxy voids and WL peaks have been shown to be able to help break this parameter degeneracy \citep{Nadathur2019,Dietrich2010,Davies2019}, and WL voids may offer another promising avenue to do so.
For parameter constraints, tunnels may prove useful, since we have found it to be the best WL void finder working in the WL peak distribution, in terms of both large SNR value and small impact from GSN, followed closely by SVF peaks. The high SNR values from the WL minima and WVF tangential shear profiles make these WL void definitions viable candidates for parameter constraints as well. It is possible that void finders applied directly to the convergence field may be complementary to those that use WL peaks, since they are sensitive to different aspects of the WL convergence maps when identifying voids.
Additionally, some of the void finders have high SNR values for all smoothing scales studied here. This makes combining different smoothing scales a possible and potentially useful approach when applied to cosmological parameter constraints, since it has been shown that constraints from WL peaks are improved when multiple smoothing scales are used \citep{J.Liu2015}. Finally, in this work we discuss the merit of a given WL void in terms of their tangential shear profiles, however other WL void statistics such as the void abundance and void correlation functions may also provide useful cosmological information.
When considering the impact of baryons on the WL void statistics, sufficiently large smoothing scales must be used in order to get agreement between hydro simulations and dark matter only simulations, as is the case with other WL statistics \citep{Weiss2019}. \cite{Paillas2017} have shown that voids in the LSS are less impacted by baryons, and \cite{Coulton2019} have shown that WL minima are more robust to baryons than WL peaks. Therefore, given that \cite{Chang2018} have also shown that the deepest WL minima correspond to large supervoids, confirming that the underdense regions of the WL convergence maps are due to underdensities along the line of sight, it is reasonable to expect that the WL voids identified directly in the convergence maps may be more resilient to baryonic physics. However, the void finders which use WL peaks as tracers will be more affected since WL peaks are more sensitive to baryons \citep{Osato2015,Weiss2019,Coulton2019}, and changes to the WL peak distribution could impact the resulting void catalogues. More detailed studies, potentially with the aid of cosmological hydrodynamic simulations, are needed to better understand these issues.
\section*{Acknowledgements}
CTD is funded by a UK Science and Technology Facilities Council (STFC) PhD studentship through grant ST/R504725/1. EP is supported by CONICYT-PCHA/Doctorado Nacional (2017-21170093) and also acknowledges support from CONICYT project Basal AFB-170002. MC is supported by the EU Horizon 2020 research and innovation programme under a Marie Sk{\l}odowska-Curie grant agreement 794474 (DancingGalaxies). BL is supported by an ERC Starting Grant, ERC-StG-PUNCA-716532, and additionally supported by the STFC Consolidated Grants [ST/P000541/1, ST/T000244/1].
This work used the DiRAC@Durham facility managed by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The equipment was funded by BEIS capital funding via STFC capital grants ST/K00042X/1, ST/P002293/1, ST/R002371/1 and ST/S002502/1, Durham University and STFC operations grant ST/R000832/1. DiRAC is part of the National e-Infrastructure.
\section{Data Availability}
The data used in this work is publicly available at \href{http://cosmo.phys.hirosaki-u.ac.jp/takahasi/allsky_raytracing/}{http://cosmo.phys.hirosaki-u.ac.jp/takahasi/allsky\_raytracing/}
\bibliographystyle{mnras}
\section{Introduction}
Cross-view image synthesis aims to translate images between two distinct views, such as synthesizing ground images from aerial images, and vice versa. This problem has aroused great interest in the computer vision and virtual reality communities, and it has been widely studied in recent years~\cite{2,3,4,5,6,7,8,9}. Earlier work used encoder-decoder convolutional neural networks (CNNs) to study the viewpoint code included in the bottleneck representation for urban scene synthesis~\cite{10} and 3D object transformations~\cite{11}. The task becomes more challenging when the fields of view have little overlap, when objects are occluded, or when similar objects in one view look completely different from the other view (i.e., the view invariance issue). For example, the aerial view of a building (i.e., the roof) tells very little about the color and design of the building seen from the street view. The generation process is generally easier when the image contains a single object on a uniform background. In contrast, when the scene contains multiple objects, generating the other view becomes much more challenging, due to the increase in underlying parameters that contribute to the variations (e.g., occlusions, shadows, etc.). An example scenario, addressed here, is generating the street-view (a.k.a.\ ground-level) image of a location from its aerial (a.k.a.\ overhead) image. Fig.~\ref{fig0} illustrates some corresponding images in the two views.
To solve this challenging problem, Krishna and Ali~\cite{6} proposed a conditional GAN model that jointly learns the generation in both the image domain and the corresponding semantic domain, where the semantic predictions are further utilized to supervise the image generation. Although this method is an interesting exploration, the generated scene structure and details remain unsatisfactory.
Moreover, Tang et al.~\cite{12} recently proposed the multi-channel attention selection generative adversarial network (SelectionGAN), which learns conditional images and target semantic maps jointly, and uses the automatically learned uncertainty map to guide the pixel loss for better network optimization. However, we observe that the generated scene structure and details are still unsatisfactory; for example, the outline boundaries of some objects show obvious artifacts and remain unclear.
To tackle this problem, we embed deformable convolutions in the U-net to improve the network's ability to extract features of objects at different scales. At the same time, we use an attention mechanism~\cite{13} to refine the feature maps, obtaining more detailed features for generating more realistic images. Extensive experiments show that our model produces better results than state-of-the-art models, i.e., Pix2Pix~\cite{2}, X-Fork~\cite{6}, X-Seq~\cite{6} and SelectionGAN~\cite{12}.
\begin{figure}[!t]
\centerline{\includegraphics[width=0.8\linewidth]{fig0.eps}}
\caption{Example images in overhead/aerial view (left) and street-view/ground-level (right). The images reflect the great diversity and richness of features in two views implying that the network needs to learn a lot for meaningful cross-view generation.}
\label{fig0}
\end{figure}
In summary, our contributions of this paper are as follows:
\begin{itemize}
\item We employ an attention mechanism to refine the feature maps, generating more realistic images for the challenging cross-view image translation task.
\item We embed deformable convolutions in the U-net to improve the network's ability to extract features of objects at different scales.
\item We add an additional loss function to improve network training, achieving a more stable optimization process.
\end{itemize}
\section{Related work}
Existing work on viewpoint transformation has been performed to synthesize novel views of the same object~\cite{14,15,16}.
For example, Zhou et al.~\cite{16} proposed models that learn to copy pixel information from the input view and use it to retain the identity and structure of the object when generating a new view. Tatarchenko et al.~\cite{15} trained an encoder-decoder network to obtain 3D representation models of cars and chairs, which were subsequently used to generate different views of unseen car or chair images. Dosovitskiy et al.~\cite{14} learned generative models by training on 3D renderings of cars, chairs, and tables, and synthesized intermediate views and objects by interpolating between views and models. Zhai et al.~\cite{17} explored predicting the semantic layout of ground images from their corresponding aerial images, and synthesized ground panoramas using the predicted layouts. Previous work on aerial and ground images has also addressed issues such as cross-view co-localization~\cite{18,19}, ground-to-aerial geo-localization~\cite{20} and geo-tagging of cross-view images~\cite{21}.
Compared with existing methods such as Restricted Boltzmann Machines~\cite{22} and Deep Boltzmann Machines~\cite{23}, generative adversarial networks (GANs)~\cite{24} have shown the ability to generate better quality images~\cite{25,26,27,28}. The vanilla GAN model~\cite{24} has two important components, i.e., the generator $G$ and the discriminator $D$.
The generator $G$ aims to generate realistic images from a noise vector, while $D$ tries to distinguish between real images and the images generated by $G$. Although GANs have been successfully used to generate high visual fidelity images~\cite{26,29,30,31}, there are still some challenges, such as how to control the image generation process under specific settings. To generate domain-specific images, the conditional GAN (CGAN)~\cite{28} has been proposed. CGAN usually combines a vanilla GAN with some external information.
Krishna and Ali~\cite{6} proposed two architectures (X-Fork and X-Seq) based on conditional GANs to solve the aerial-to-street-view image translation task using additional semantic segmentation maps. Moreover, Tang et al.~\cite{12} proposed the multi-channel attention selection generative adversarial network (SelectionGAN), which consists of two generation stages. The first stage is a cycled semantic-guided generation sub-network that receives an image and a conditional semantic map in one view and synthesizes the image and semantic map in the other view. The second stage takes the coarse predictions and the learned deep semantic features of the first stage, and performs fine-grained generation using the proposed multi-channel attention selection module.
\section{Network Design}
The network structure we proposed is based on the SelectionGAN model, which consists of three generators (i.e., $G_i$, $G_a$, $G_s$), two discriminators (i.e., $D_1$, $D_2$), and an attention mechanism module. The network structure can be divided into two stages, as shown in Fig.~\ref{fig2}.
In the first stage, an image $I_{a}$ of one perspective and a semantic map $S_{g}$ of another perspective are input to the generator $G_i$ to generate an image $I_{g}^{\prime}$ of another perspective and the feature map $F_{i}$ of the last convolution layer.
Then the generated image $I_{g}^{\prime}$ is input into the generator $G_s$ to generate the corresponding semantic map $S_{g}^{\prime}$.
\begin{figure}[!t]
\centerline{\includegraphics[width=0.8\linewidth]{fig2.eps}}
\caption{Architecture of the proposed network.}
\label{fig2}
\end{figure}
In the second stage, the feature maps $F_{i}$ and $F_{s}$ generated in the first stage are refined through the attention mechanism module to obtain the refined feature maps $F_{i}^{\prime}$ and $F_{s}^{\prime}$.
Next, they are combined with the image $I_{a}$ and the generated image $I_{g}^{\prime}$ and inputted to the generator $G_a$ to generate a refined image $I_{g}^{\prime \prime}$ as the final output.
This refined image $I_{g}^{\prime \prime}$ is then input to the generator $G_s$ to generate the corresponding semantic map $S_{g}^{\prime \prime}$.
Note that we use only one generator $G_s$ in both the first and second stages, since the purpose is to generate a corresponding semantic image from an image.
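The data flow between the two stages can be summarised with stub generators (the generator internals below are placeholders, as are the feature channel counts; only the wiring between $G_i$, $G_s$, the attention module and $G_a$ follows the description above):

```python
import numpy as np

H = W = 256   # image resolution used in the experiments

def G_i(I_a, S_g):
    """Stage-1 generator stub: input-view image + target semantic map ->
    coarse target-view image and the last-layer feature map F_i."""
    return np.zeros((3, H, W)), np.zeros((64, H, W))

def G_s(img):
    """Shared semantic generator stub: image -> semantic map and feature map F_s."""
    return np.zeros((3, H, W)), np.zeros((64, H, W))

def attention(F):
    """Attention-refinement stub (channel + spatial attention in the full model)."""
    return F

def G_a(I_a, I_g1, F_i_ref, F_s_ref):
    """Stage-2 generator stub: conditional inputs + refined features -> refined image."""
    return np.zeros((3, H, W))

I_a = np.zeros((3, H, W))   # input-view image
S_g = np.zeros((3, H, W))   # target-view semantic map

# Stage 1: coarse generation and its semantic map
I_g1, F_i = G_i(I_a, S_g)
S_g1, F_s = G_s(I_g1)

# Stage 2: refine the features, generate the final image, reuse the same G_s
F_i_ref, F_s_ref = attention(F_i), attention(F_s)
I_g2 = G_a(I_a, I_g1, F_i_ref, F_s_ref)
S_g2, _ = G_s(I_g2)
```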
\subsection{Attention Mechanism}
\label{ssec:subhead}
Since the SelectionGAN model feeds coarse feature maps into its second stage, we use an attention mechanism to refine these feature maps before they are input into the generator $G_a$.
The attention mechanism consists of a Channel Attention Module and a Spatial Attention Module, as shown in Fig.~\ref{fig3}. Given an intermediate feature map, the attention mechanism infers attention maps along two separate dimensions, and the attention maps are then multiplied with the input feature map to obtain adaptively refined features. Experiments show that adding the attention mechanism indeed improves the generation performance.
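A simplified sketch of such a channel-then-spatial attention refinement is given below. The weights are random placeholders, and, for brevity, the spatial branch combines the channel-pooled maps per pixel instead of applying a learned convolution; it is an illustration of the mechanism, not our implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x, w1, w2):
    """x: (C, H, W). A shared two-layer MLP acts on the average- and
    max-pooled channel descriptors; the result gates each channel."""
    avg = x.mean(axis=(1, 2))
    mx = x.max(axis=(1, 2))
    a = sigmoid(w2 @ np.maximum(w1 @ avg, 0) + w2 @ np.maximum(w1 @ mx, 0))  # (C,)
    return x * a[:, None, None]

def spatial_attention(x, wa, wb):
    """Pool across channels, combine the two maps, and gate each pixel."""
    avg = x.mean(axis=0)
    mx = x.max(axis=0)
    a = sigmoid(wa * avg + wb * mx)                                          # (H, W)
    return x * a[None, :, :]

rng = np.random.default_rng(0)
C, H, W = 64, 8, 8
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // 16, C)) / np.sqrt(C)        # reduction ratio 16
w2 = rng.standard_normal((C, C // 16)) / np.sqrt(C // 16)

y = spatial_attention(channel_attention(x, w1, w2), 0.5, 0.5)
```

Because both gates are sigmoids, every element of the output is attenuated relative to the input, i.e., the module re-weights rather than amplifies features.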
\begin{figure}[t]
\centerline{\includegraphics[width=0.8\linewidth]{fig3.eps}}
\caption{Attention Mechanism Module.}
\label{fig3}
\end{figure}
\subsection{Deformable Convolution}
\label{ssec:subhead}
Deformable convolution~\cite{32} augments the spatial sampling locations with additional offsets, which are learned from the target task without extra supervision. The new module can easily replace its plain counterpart in existing CNNs, and extensive experiments have verified that it learns dense spatial transformations in deep CNNs and is effective for complex visual tasks such as object detection and semantic segmentation.
Therefore, we embed deformable convolutions into the outermost layers of the U-net, so that the network can better extract features from the input maps. The network structure is shown in Fig.~\ref{fig4}.
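The core operation of deformable convolution, sampling the input at fractionally offset positions via bilinear interpolation, can be sketched for a single output location as follows (the offsets here are random stand-ins for the ones a real implementation would predict with an extra convolutional branch):

```python
import numpy as np

def bilinear(img, y, x):
    """Bilinearly interpolate img (H, W) at a fractional position (y, x)."""
    H, W = img.shape
    y0 = int(np.clip(np.floor(y), 0, H - 2))
    x0 = int(np.clip(np.floor(x), 0, W - 2))
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * img[y0, x0] + (1 - dy) * dx * img[y0, x0 + 1]
            + dy * (1 - dx) * img[y0 + 1, x0] + dy * dx * img[y0 + 1, x0 + 1])

def deform_conv_at(img, weights, offsets, cy, cx):
    """3x3 deformable convolution evaluated at output position (cy, cx).

    weights: (3, 3) kernel; offsets: (3, 3, 2) per-tap (dy, dx) shifts."""
    out = 0.0
    for i in range(3):
        for j in range(3):
            py = cy + (i - 1) + offsets[i, j, 0]
            px = cx + (j - 1) + offsets[i, j, 1]
            out += weights[i, j] * bilinear(img, py, px)
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((16, 16))
w = np.full((3, 3), 1.0 / 9.0)

# With zero offsets this reduces to an ordinary 3x3 convolution tap ...
plain = deform_conv_at(img, w, np.zeros((3, 3, 2)), 8, 8)
# ... while nonzero offsets deform the sampling grid.
shifted = deform_conv_at(img, w, 0.5 * rng.standard_normal((3, 3, 2)), 8, 8)
```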
\begin{figure}[t]
\centerline{\includegraphics[width=0.8\linewidth]{fig4.eps}}
\caption{Network structure of the proposed generator. BN means batch-normalization layer.}
\label{fig4}
\end{figure}
\subsection{Overall Optimization Objective}
\noindent \textbf{Adversarial Loss.} SelectionGAN~\cite{12} uses a single discriminator $D_1$ for the images generated in both stages.
$D_1$ takes the conditional input and the generated fake image as input; however, the semantic map is not taken into consideration.
Therefore, we propose a new discriminator $D_2$, which also takes the semantic map as input. The proposed semantic-guided adversarial losses can be expressed as follows,
\begin{equation}
\begin{aligned}
& \mathcal{L}_{cGAN}\left(I_{a}\textcircled{+}S_{g}, I_{g}^{\prime}\textcircled{+} S_{g}^{\prime}\right) \\
= & \mathbb{E}\left[\log D_2\left(I_{a}\textcircled{+} S_{g}, I_{g}\textcircled{+} S_{g}\right)\right] \\
+ & \mathbb{E}\left[\log \left(1-D_2\left(I_{a}\textcircled{+} S_{g}, I_{g}^{\prime}\textcircled{+} S_{g}^{\prime}\right)\right)\right],
\end{aligned}
\label{eq:1}
\end{equation}
\begin{equation}
\begin{aligned}
& \mathcal{L}_{cGAN}\left(I_{a}\textcircled{+} S_{g}, I_{g}^{\prime \prime}\textcircled{+} S_{g}^{\prime \prime}\right) \\
= & \mathbb{E}\left[\log D_2\left(I_{a}\textcircled{+} S_{g}, I_{g}\textcircled{+} S_{g}\right)\right] \\
+ & \mathbb{E}\left[\log \left(1-D_2\left(I_{a}\textcircled{+}S_{g}, I_{g}^{\prime \prime}\textcircled{+}S_{g}^{\prime \prime}\right)\right)\right],
\end{aligned}
\label{eq:2}
\end{equation}
where the symbol $\textcircled{+}$ denotes the channel-wise concatenation operation.
Thus, the total adversarial loss can be formulated as follows,
\begin{equation}
\begin{aligned}
\mathcal{L}_{cGAN}= & \mathcal{L}_{cGAN}\left(I_{a}, I_{g}^{\prime}\right)+\lambda \mathcal{L}_{cGAN}\left(I_{a}, I_{g}^{\prime \prime}\right) \\
+ & \mathcal{L}_{cGAN}\left(I_{a}\textcircled{+} S_{g}, I_{g}^{\prime} \textcircled{+} S_{g}^{\prime}\right) \\
+ & \lambda \mathcal{L}_{cGAN}\left(I_{a}\textcircled{+}S_{g}, I_{g}^{\prime \prime}\textcircled{+}S_{g}^{\prime \prime}\right),
\end{aligned}
\label{eq:adv_loss}
\end{equation}
where $\mathcal{L}_{cGAN}\left(I_{a}, I_{g}^{\prime}\right)$ and $\mathcal{L}_{cGAN}\left(I_{a}, I_{g}^{\prime \prime}\right)$ are the adversarial losses defined in SelectionGAN.
\noindent \textbf{Overall Loss.} The total optimization loss is a weighted sum of several losses. The generators $G_i$, $G_s$, $G_a$ and discriminators $D_1$, $D_2$ are trained in an end-to-end fashion optimizing the following min-max function,
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
\min _{\left\{G_{i}, G_{s}, G_{a}\right\}} \underset{\{D_1, D_2\}}{\max} \mathcal{L}=\sum_{i=1}^{4} \lambda_{i} \mathcal{L}_{p}^{i}+\mathcal{L}_{c G A N}+\lambda_{t v} \mathcal{L}_{t v},
\label{eq:loss}
\end{equation}
where $\mathcal{L}_{p}^{i}$ uses the $L1$ reconstruction to separately calculate the pixel loss between the generated images $I_{g}^{\prime}$, $S_{g}^{\prime}$, $I_{g}^{\prime \prime}$ and $S_{g}^{\prime \prime}$ and the corresponding real ones. $\mathcal{L}_{tv}$ is the total variation regularization on the final synthesized image $I_{g}^{\prime \prime}$. $\lambda_{i}$ and $\lambda_{tv}$ are the trade-off parameters to control the relative importance of different objectives.
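The reconstruction and regularization terms of Eq.~\eqref{eq:loss} can be sketched as follows (the adversarial term is omitted here, and the trade-off weights and tensors are placeholders, not the values used in our experiments):

```python
import numpy as np

def l1_pixel_loss(pred, target):
    """L1 reconstruction loss between a generated image and the ground truth."""
    return np.mean(np.abs(pred - target))

def tv_loss(img):
    """Total variation of an image (C, H, W): mean absolute neighbour differences."""
    return (np.abs(np.diff(img, axis=1)).mean()
            + np.abs(np.diff(img, axis=2)).mean())

rng = np.random.default_rng(0)
real = rng.standard_normal((3, 64, 64))
# Stand-ins for the four generated outputs I'_g, S'_g, I''_g, S''_g,
# each compared against its corresponding ground truth (here the same array).
fakes = [real + 0.1 * rng.standard_normal((3, 64, 64)) for _ in range(4)]

lambdas = [1.0, 1.0, 4.0, 1.0]   # placeholder trade-off weights lambda_i
lambda_tv = 1e-6                 # placeholder weight for the TV term

loss = sum(l * l1_pixel_loss(f, real) for l, f in zip(lambdas, fakes))
loss += lambda_tv * tv_loss(fakes[2])   # TV acts on the final image I''_g
```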
\section{Experiments}
\noindent \textbf{Datasets.}
We follow~\cite{6,12,33} and perform extensive experiments on the challenging Dayton dataset in a2g (aerial-to-ground) and g2a (ground-to-aerial) directions with two different image resolutions (i.e., $256 {\times} 256$ and $64 {\times} 64$).
Specifically, we select 76,048 images and create a train/test split of 55,000/21,048 pairs. The images in the original dataset have $354 {\times} 354$ resolution.
We then resize them to $256 {\times} 256$.
\begin{table}[!t]
\centering
\caption{Accuracies of different methods.}
\begin{tabular}{cccccccccc}
\toprule
\multirow{1}{*} Dir &\multirow{3}{*}{Method} &\multicolumn{4}{c}{Dayton (64$\times$64)} &\multicolumn{4}{c}{Dayton (256$\times$256)} \\ \cmidrule(lr){3-6} \cmidrule(lr){7-10}
\multirow{2}{*}{$\rightleftharpoons$}& &\multicolumn{2}{c}{Top-1} &\multicolumn{2}{c}{Top-5} &\multicolumn{2}{c}{Top-1} &\multicolumn{2}{c}{Top-5} \\
& &\multicolumn{2}{c}{Accuracy(\%)} &\multicolumn{2}{c}{Accuracy(\%)}
&\multicolumn{2}{c}{Accuracy(\%)}
&\multicolumn{2}{c}{Accuracy(\%)} \\
\hline
\multirow{5}{*}{a2g} & Pix2pix~\cite{2} & 7.90 & 15.33 &27.61 &39.07 &6.80 &9.15 &23.55 &27.00 \\
& X-Fork~\cite{6} & 16.63 & 34.73 & 46.35 & 70.01 & 30.00 & 48.68 & 61.57 & 78.84 \\
& X-Seq~\cite{6} & 4.83 & 5.56 & 19.55 & 24.96 & 30.16 & 49.85 & 62.59 & 80.70 \\
& SelectionGAN~\cite{12} & 45.37 & 79.00 & 83.48 & 97.74 & 42.11 & 68.12 & 77.74 & 92.89 \\
&Ours & \multicolumn{1}{l}{{\bf 47.61}} & \multicolumn{1}{l}{{\bf 81.24}} & \multicolumn{1}{l}{{\bf 86.12}} & \multicolumn{1}{l}{{\bf 98.44}} & \multicolumn{1}{l}{{\bf 45.07}} & \multicolumn{1}{l}{{\bf 77.12}} & \multicolumn{1}{l}{{\bf 80.04}} & \multicolumn{1}{l}{{\bf 94.54}} \\
\hline
\multirow{5}{*}{g2a} & Pix2pix~\cite{2} & 1.65 & 2.24 &7.49 &12.68 &10.23 &16.02 &30.90 &40.49 \\
& X-Fork~\cite{6} &4.00 & 16.41 & 15.42 & 35.82 & 10.54
& 15.29 & 30.76 & 37.32 \\
& X-Seq~\cite{6} & 1.55 & 2.99 & 6.27 & 8.96 &12.30 & 19.62 & 35.95 & 45.94 \\
&SelectionGAN~\cite{12} & 14.12 & 51.81 & 39.45 & 74.70 & 20.66 & 33.70 & 51.01 & 63.03 \\
&Ours & \multicolumn{1}{l}{{\bf 14.26}} & \multicolumn{1}{l}{{\bf 52.17}} & \multicolumn{1}{l}{{\bf 52.55}} & \multicolumn{1}{l}{{\bf 78.72}} & \multicolumn{1}{l}{{\bf 20.81}} & \multicolumn{1}{l}{{\bf 38.41}} & \multicolumn{1}{l}{{\bf 55.51}} & \multicolumn{1}{l}{{\bf 65.84}} \\
\hline\\
\end{tabular}
\label{tb1:table1}
\end{table}
\begin{table}[!t]
\centering
\caption{SSIM, PSNR, and KL score of different methods.}
\begin{tabular}
{cccccccc}
\toprule
\multirow{1}{*}{Dir} &\multirow{2}{*}{Method} &\multicolumn{3}{c}{Dayton(64$\times$64)} &\multicolumn{3}{c}{Dayton(256$\times$256)} \\ \cmidrule(lr){3-5} \cmidrule(lr){6-8}
\multirow{1}{*}{$\rightleftharpoons$}& &\multicolumn{1}{c}{SSIM} &\multicolumn{1}{c}{PSNR} &\multicolumn{1}{c}{KL} &\multicolumn{1}{c}{SSIM} &\multicolumn{1}{c}{PSNR} &\multicolumn{1}{c}{KL}\\
\hline
\multirow{5}{*}{a2g} & Pix2pix~\cite{2} & 0.4808 &19.4919 & 6.29$\pm$0.80 &0.4180 &17.6291 &38.26$\pm$1.88 \\
& X-Fork~\cite{6} & 0.4921 & 19.6273 & 3.42$\pm$0.72 & 0.4963 & 19.8928 & 6.00$\pm$1.28 \\
& X-Seq~\cite{6} & 0.5171 & 20.1049 & 6.22$\pm$0.87 & 0.5031 & 20.2803 & 5.93$\pm$1.32 \\
& SelectionGAN~\cite{12} & 0.6865 & 24.6143 & 1.70$\pm$0.45 & 0.5938 & 23.8874 & 2.74 $\pm$0.86 \\
&Ours & \multicolumn{1}{l}{{\bf 0.7100}} & \multicolumn{1}{l}{{\bf 24.9674}} & \multicolumn{1}{l}{{\bf 1.55$\pm$0.51}} & \multicolumn{1}{l}{{\bf 0.6524}} & \multicolumn{1}{l}{{\bf 24.4012}} & \multicolumn{1}{l}{{\bf 2.47$\pm$0.76}} \\
\hline
\multirow{5}{*}{g2a} & Pix2pix~\cite{2} & 0.3675 &20.5135 & 6.39$\pm$0.90 &0.2693 &20.2177 &$7.88\pm$1.24 \\
& X-Fork~\cite{6} & 0.3682 & 20.6933 & 4.55$\pm$0.84 & 0.2763 & 20.5978 & 6.92$\pm$1.15 \\
& X-Seq~\cite{6} & 0.3663 & 20.4239 & 7.20$\pm$0.92 & 0.2725 & 20.2925 & 7.07$\pm$1.19 \\
&SelectionGAN~\cite{12} & 0.5118 & 23.2657 & 2.25$\pm$0.56 & 0.3284 & 21.8066 & 3.55$\pm$0.87 \\
&Ours & \multicolumn{1}{l}{{\bf 0.6116}} & \multicolumn{1}{l}{{\bf 24.5445}} & \multicolumn{1}{l}{{\bf 2.13$\pm$0.48}} & \multicolumn{1}{l}{{\bf 0.3924}} & \multicolumn{1}{l}{{\bf 22.7143}} & \multicolumn{1}{l}{{\bf 3.17$\pm$0.82}} \\
\bottomrule\\
\end{tabular}
\label{tb2:table2}
\end{table}
\begin{figure}[!t]
\centerline{\includegraphics[width=0.8\linewidth]{fig1.eps}}
\caption{Results generated by the proposed method and SelectionGAN~\cite{12} in $64 {\times} 64$ resolution in both a2g (top) and g2a (bottom) directions on the Dayton dataset.}
\label{fig1}
\end{figure}
\noindent \textbf{Parameter Settings.}
Similar to~\cite{12}, the low resolution ($64 {\times} 64$) experiments on the Dayton dataset are carried out for 100 epochs with a batch size of 16, whereas the high resolution ($256 {\times} 256$) experiments for this dataset are trained for 35 epochs with a batch size of 4. We also set $\lambda_{1}{=}100$, $\lambda_{2}{=}1$, $\lambda_{3}{=}200$, $\lambda_{4}{=}2$ and $\lambda_{tv}{=}10^{-6}$ in Eq.~\eqref{eq:loss}, and $\lambda{=}4$ in Eq.~\eqref{eq:adv_loss}.
\noindent \textbf{Evaluation Protocol.}
We employ KL Score and top-k prediction accuracy as the evaluation metrics.
These metrics evaluate the generated images from a high-level feature space. We also employ pixel-level similarity metrics to evaluate our method, i.e., Structural Similarity (SSIM) and Peak Signal-to-Noise Ratio (PSNR).
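For concreteness, the two pixel/distribution-level metrics can be sketched in a few lines of pure Python following their standard definitions. This is only an illustration: the function names and toy inputs are ours, and the paper's exact KL protocol (which classifier distributions it compares) is not restated here.

```python
import math

def psnr(img_a, img_b, max_val=255.0):
    # Peak Signal-to-Noise Ratio between two equal-length pixel sequences.
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

def kl_score(p, q, eps=1e-12):
    # KL(p || q) between two discrete probability distributions,
    # e.g. classifier outputs on generated vs. real images.
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))
```

Higher PSNR and lower KL indicate generated images closer to the real ones; SSIM additionally accounts for local structure and is computed over sliding windows.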
\noindent \textbf{State-of-the-art Comparisons.}
We compare the proposed model with existing cross-view image translation methods, i.e., Pix2pix~\cite{2}, X-Fork~\cite{6}, X-Seq~\cite{6} and SelectionGAN~\cite{12}.
Quantitative results of different metrics are shown in Tables~\ref{tb1:table1} and \ref{tb2:table2}.
We compute top-1 and top-5 accuracies in Table \ref{tb1:table1}.
As we can see, for lower resolution images ($64 {\times} 64$) our method outperforms the existing leading cross-view image translation methods.
For higher resolution images ($256 {\times} 256$), our method also achieves the best results on top-1 and top-5 accuracies. This shows the effectiveness of our method and the necessity of the proposed modules.
Moreover, we provide results of SSIM, PSNR, and KL scores in Table \ref{tb2:table2}.
We observe that the proposed method is consistently superior to other leading methods, validating the effectiveness of the proposed method.
\noindent \textbf{Qualitative Evaluation.} Qualitative results compared with the most related work, i.e., SelectionGAN~\cite{12}, are shown in Figs.~\ref{fig5} and \ref{fig6}.
We can see that our method generates sharper details than SelectionGAN on objects/scenes, e.g., houses, buildings, roads, clouds, and cars.
For example, we can see that the houses generated by our method are more natural than those generated by SelectionGAN as shown in Fig. \ref{fig5}.
\begin{figure}[!t]
\centerline{\includegraphics[width=0.8\linewidth]{fig5.eps}}
\caption{Results generated by the proposed method and SelectionGAN~\cite{12} in $256 {\times} 256$ resolution in a2g direction on the Dayton dataset.}
\label{fig5}
\end{figure}
\begin{figure}[!t]
\centerline{
\includegraphics[width=0.8\linewidth]{fig6.eps}}
\caption{Results generated by the proposed method and SelectionGAN~\cite{12} in $256 {\times} 256$ resolution in g2a direction on the Dayton dataset.}
\label{fig6}
\end{figure}
\begin{table}[!t]
\centering
\caption{Ablations study of the proposed method.}
\begin{tabular}{cccc}
\toprule
Baseline & Method & PSNR & SSIM\\
\midrule
A & SGAN~\cite{12} & 23.9310 & 0.6176 \\
B & SGAN + AM & 24.0539 & 0.6309 \\
C & SGAN + AM + DC & 24.3345 & 0.6507 \\
D & SGAN + AM + DC + LS & {\bf 24.6421} & {\bf 0.6927} \\
\bottomrule
\end{tabular}
\label{tb3:table3}
\end{table}
\noindent \textbf{Ablation Study.}
We also conduct an ablation study in a2g (aerial-to-ground) direction on the Dayton dataset. To reduce the training time, we follow SelectionGAN and randomly select 1/3 samples from the whole 55,000/21,048 samples, i.e., around 18,334 samples for training and 7,017 samples for testing.
The proposed model consists of 4 baselines (A, B, C, D) as shown in Table~\ref{tb3:table3}.
Baseline A uses SelectionGAN (SGAN). Baseline B combines SGAN and the proposed attention mechanism (AM). Baseline C employs deformable convolution (DC) on baseline B.
Baseline D adopts the proposed loss function (LS).
It is obvious that as each module is added, we obtain better results on both SSIM and PSNR metrics.
This means that the proposed attention mechanism, deformable convolution, and the proposed loss function each further boost the overall performance.
\section{Conclusion}
In this paper, we propose a novel generative adversarial network based on deformable convolution and attention mechanisms for solving the challenging cross-view image generation task. We propose a novel attention mechanism to refine the feature maps, thus improving the ability of feature representation. We also embed deformable convolutions in our generator to improve the network's ability to extract object features at different scales.
Moreover, a novel semantic-guided adversarial loss is proposed to improve the training of the whole network, thus achieving a more robust and stable optimization.
Extensive experimental results show that the proposed method obtains better results than state-of-the-art methods.
\section{Introduction}
In this paper we are concerned with certain avoidance properties
of finite and infinite words.
Recall that a word $x$ is said to be a {\it factor} of a word
$w$ if there exist words $y,z$ such that $w = yxz$. For example,
the word {\tt act} is a factor of the word
{\tt factor}. Another term for {\it factor}
is {\it subword}, although this latter term sometimes refers to
a different concept entirely.
For $\ell \geq 1$
define the property $P_\ell (w)$ of a word $w$ as follows:
\begin{displaymath}
\forall {\text{ factors $x$ of $w$ }}
(|x| \geq \ell) \implies \text{ ($x^R$ is not a factor
of $w$).}
\end{displaymath}
If $P_\ell(w)$ holds, then we say {\it $w$ avoids reversed
factors of length $\geq \ell$}. In particular, if $P_\ell (w)$ holds,
then $w$ has no palindromes of length $\geq \ell$.
Clearly $P_1(w)$ holds only for
$w = \varepsilon$, the empty word, so in what follows we always
assume $\ell \geq 2$.
Define $L_\ell(\Sigma_k) = \{ w \in \Sigma_k^* \ : \ P_\ell (w) \text{ holds} \} $,
the set of all words over the finite alphabet $\Sigma_k$
avoiding reversed factors of length $\geq \ell$.
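Membership in $L_\ell(\Sigma_k)$ is easy to test directly. The following Python sketch (the function name is ours) inspects only the factors of length exactly $\ell$; this suffices, because any offending factor of length greater than $\ell$ contains an offending factor of length exactly $\ell$.

```python
def avoids_reversed_factors(w, ell):
    # True iff P_ell(w) holds: no factor x of w with |x| >= ell
    # has its reversal x^R also occurring as a factor of w.
    # It suffices to test the factors of length exactly ell.
    factors = {w[i:i + ell] for i in range(len(w) - ell + 1)}
    return all(x[::-1] not in factors for x in factors)
```

For example, $012012$ lies in $L_2(\Sigma_3)$, while $0110$ does not, since it contains both $01$ and its reversal $10$.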
In 2005, Rampersad and the second author \cite{Rampersad&Shallit:2005}
proved a number of theorems about $L_\ell(\Sigma_k)$ and related infinite
words. These results were proved mostly by case-based arguments.
In this paper, we revisit these results, using a new method,
based on finite automata. Our method is able to prove most of
the results in the previous paper, and more, using a unified
approach.
A companion paper is \cite{Fleischer&Shallit:2019b}, which
explores the same theme with regard to palindromes.
\section{The language of words avoiding reversed factors is regular}
We define $\Sigma_k = \{ 0, 1, \ldots, k-1 \}$.
The crucial observation is contained in this section.
We show that for every $\ell \geq 2$ and every $k \geq 1$,
the language $L_\ell(\Sigma_k)$ is regular.
\begin{theorem}
\begin{equation}
\overline{L_\ell(\Sigma_k)}
= \bigcup_{x \in \Sigma_k^\ell} \left( \, \Sigma_k^* \, x \, \Sigma_k^* \cap \Sigma_k^* \, x^R \, \Sigma_k^* \, \right) .
\label{lprime}
\end{equation}
\label{one}
\end{theorem}
\begin{proof}
Suppose $w \not\in L_\ell(\Sigma_k)$. Then $w$ contains $z$ and $z^R$
as factors for some $z$ with $|z| \geq \ell$.
Writing $z = xy$ with $|x| = \ell$, we
see that $w$ also contains $x$ and $x^R$ as factors, and hence
$w \in \Sigma_k^* \, x \, \Sigma_k^* \cap \Sigma_k^* \, x^R \, \Sigma_k^*$.
On the other hand, suppose $w \in \bigcup_{x \in \Sigma_k^\ell}
( \Sigma_k^* \, x \, \Sigma_k^* \cap \Sigma_k^* \, x^R \, \Sigma_k^*) $.
Then there exists some $x$ of length $\ell$ such
that $w \in \Sigma_k^* \, x \, \Sigma_k^* \cap \Sigma_k^* \, x^R \, \Sigma_k^*$.
Hence $w$ contains both $x$ and $x^R$ as length-$\ell$ factors,
and so $w \not\in L_\ell(\Sigma_k)$.
\end{proof}
\begin{corollary}
The language $L_\ell (\Sigma_k)$ is regular.
\label{two}
\end{corollary}
\begin{proof}
Theorem~\ref{one} shows that $\overline{L_\ell(\Sigma_k)}$ is regular, as
it is the union of regular languages. So $L_\ell(\Sigma_k)$ is regular.
\end{proof}
Corollary~\ref{two} provides an algorithmic way to characterize all
finite words avoiding reversed factors: namely, just compute the
minimal DFA $A$ for $L_\ell (\Sigma_k)$.
It also provides a way to characterize the (one-sided) infinite
words avoiding reversed factors:
since $L_\ell(\Sigma_k)$
is clearly factor-closed (that is, every factor of a word of
$L_\ell(\Sigma_k)$ is also a word of $L_\ell(\Sigma_k)$), it follows that
$A$ has only one non-accepting state, which is necessarily a dead
state. Without loss of generality, then, we can delete this
dead state, obtaining an automaton $A'$ where every path is labeled
with a word of $L_\ell(\Sigma_k)$ and all words are so represented.
Hence all {\it infinite} words avoiding reversed factors (if any
exist) are given
precisely by the infinite paths through $A'$. We can characterize
these using the results in Section~\ref{sec3}.
\section{Periodicity}
\label{persec}
Let $\Sigma^\omega$ denote the set of all one-sided infinite words
over the alphabet $\Sigma$. For a finite nonempty word $x$,
let $x^\omega$ denote the infinite word $xxx\cdots$. We say
that an element $\bf w$ of $\Sigma^\omega$ is {\it ultimately periodic}
if there exist finite words $y,x$ with $x \not= \varepsilon$ such that
${\bf w} = yx^\omega$. Otherwise we say $\bf w$ is {\it aperiodic}.
In the expression of an ultimately periodic word in the form
$y x^\omega$, we call $|y|$ the {\it preperiod\/} and $|x|$ the
{\it period}.
\begin{theorem}
Let $w_0, w_1$ be two noncommuting finite words (that is,
$w_0 w_1 \not= w_1 w_0$). Define a morphism
$\gamma(i) = w_i$ for $i \in \{ 0, 1 \}$.
Then ${\bf a} \in \{0,1\}^\omega$ is ultimately periodic
iff $\gamma({\bf a})$ is ultimately periodic.
\label{per}
\end{theorem}
\begin{proof}
Suppose ${\bf a} \in \{0,1\}^\omega$ is ultimately periodic, say
${\bf a} = y z^\omega$. Then $\gamma({\bf a}) = \gamma(y) \gamma(z)^\omega$,
which shows that $\gamma({\bf a})$ is ultimately periodic with preperiod
$|\gamma(y)|$ and period $|\gamma(z)|$.
For the other direction, let ${\bf a} = a_0 a_1 a_2 \cdots$ and
suppose $\gamma({\bf a}) = {\bf b} = b_0 b_1 b_2 \cdots$ is ultimately periodic,
with preperiod $r$ and period $p$. Thus $b_i = b_{i+p}$ for all
$i \geq r$.
Now think of ${\bf b}$ as a concatenation of blocks, each of which
is either $w_0$ or $w_1$.
Define $d(i) := |\gamma(a_0 a_1 \cdots a_{i-1})|$, and note
that the starting position in $\bf b$ of the $i$'th block,
for $i \geq 0$, is at index $d(i)$.
Let $s$ be the least integer such that $d(s) \geq r$.
By the infinite pigeonhole principle, there must be two integers
$j, k \geq s$, with $j < k$, such that
\begin{equation}
d(j) \equiv \modd{d(k)} {p}.
\label{mod1}
\end{equation}
The $j$'th block begins at $b_{d(j)}$,
and the $k$'th block begins at $b_{d(k)}$.
The congruence~\eqref{mod1}, together with
the fact that $\bf b$ has period $p$, and
the inequality $d(j), d(k) \geq r$, show that
the two infinite words
$\gamma(a_j a_{j+1} a_{j+2} \cdots) = b_{d(j)} b_{d(j) + 1} b_{d(j) + 2} \cdots$ and
$\gamma(a_k a_{k+1} a_{k+2} \cdots) = b_{d(k)} b_{d(k) + 1} b_{d(k) +2} \cdots $
are identical.
There are now two cases: either the infinite words $a_j a_{j+1} a_{j+2} \cdots$
and $a_k a_{k+1} a_{k+2} \cdots$ differ, or they are identical.
In the former case, let $i \geq 0$ be the least index such that
$a_{j+i} \not= a_{k+i}$.
Then $a_{j+\ell} = a_{k+\ell}$ for $0 \leq \ell < i$, and so it
follows that $d(j+i) \equiv \modd{d(k+i)} {p}$.
Thus $b_{d(j+i)} b_{d(j+i)+1} b_{d(j+i)+2} \cdots =
b_{d(k+i)} b_{d(k+i)+1} b_{d(k+i)+2} \cdots $, and so
we have two infinite words,
\begin{displaymath}
{\bf y} = a_{j+i} a_{j+i+1} \cdots \quad \text{ and } \quad
{\bf z} = a_{k+i} a_{k+i+1} \cdots,
\end{displaymath}
one beginning with $0$ and the other beginning with $1$, such that
$\gamma({\bf y}) = \gamma({\bf z})$. By
\cite[Thm.~2.3.5]{Shallit:2009}, it follows that $w_0$ and $w_1$
commute, a contradiction.
So $a_{j+i} = a_{k+i}$ for all $i \geq 0$, and hence $\bf a$
is ultimately periodic with period $k-j$.
\end{proof}
\section{Adherences}
\label{sec3}
The {\it adherence} $\adh(L)$ of a language is defined as follows:
$$ \adh(L) = \{ {\bf x} \in \Sigma^\omega \ : \
\text{every prefix of $\bf x$ is a prefix of some word of $L$} \} .$$
For example, see \cite{Nivat:1978}.
\begin{theorem}
Let $L$ be a regular language.
\begin{itemize}
\item[(a)] If $L$ is finite then $\adh(L)$ is empty.
\item[(b)] If $L$ is infinite,
but has polynomial growth (that is, there exists a fixed integer $k$
such that the number of length-$n$ words in $L$ is $O(n^k)$),
then $\adh(L)$ is nonempty, but is countable and
contains only ultimately periodic words.
\item[(c)] If $L$ does not have polynomial growth (informally,
$L$ has exponential growth), then $\adh(L)$ is uncountable and
contains uncountably many aperiodic words.
\end{itemize}
\label{adh-thm}
\end{theorem}
\begin{proof}
\leavevmode
\begin{itemize}
\item[(a)] Trivial.
\item[(b)] By combining \cite[Prop.~3]{Lecomte&Rigo:2002} with
\cite[Lemma 2.2]{Bell&Hare&Shallit:2018}, we see that $\adh(L)$ is
countable iff $L$ has polynomial growth. Furthermore, the proof
of \cite[Prop.~3]{Lecomte&Rigo:2002} (specifically, the displayed line
following Eq.~(6) on p.~20 of that paper) actually shows that
$\adh(L)$ consists only of ultimately periodic words.
\item[(c)] By combining \cite[Prop.~3]{Lecomte&Rigo:2002} with
\cite[Lemma 2.3]{Bell&Hare&Shallit:2018}, we see that
$\adh(L)$ is uncountable iff $L$ has exponential growth.
Since there are only a countable number of ultimately periodic
words, it follows that $\adh(L)$ contains uncountably many
aperiodic words.
\end{itemize}
\end{proof}
\section{Applications}
Let us now turn to reproving the principal theorems from
\cite{Rampersad&Shallit:2005}. For many of these theorems, we can
employ the following strategy:
use {\tt Grail}, a software package for manipulating automata
\cite{Raymond&Wood:1994},
to construct a DFA $M$ corresponding to the regular expression
in Eq.~\eqref{lprime}, and from this obtain a DFA $M'$
for $L_\ell(\Sigma_k)$. The
infinite words avoiding reversed
factors of length $\geq \ell$ are then given
through the digraph of the transition diagram of $M'$.
Using Theorem~\ref{adh-thm}, we can characterize the infinite words.
Using depth-first search, the finiteness of $L(M')$ can be determined
trivially. The distinction between polynomial and exponential
growth can be determined efficiently using the methods
detailed in \cite{Gawrychoski&Krieger&Rampersad&Shallit:2010}:
call a state $q$ {\it birecurrent} if there are at least two distinct
noncommuting words, $x_0$ and $x_1$, taking state $q$ to $q$.
By Theorem~\ref{per}, if there is
a birecurrent state, we can find an explicit example of an aperiodic
infinite word labeled by an infinite
path through the automaton by replacing the $0$'s (resp., the $1$'s)
in any aperiodic binary word with $x_0$ (resp., $x_1$).
For example,
we can take ${\bf w} = {\bf t} = 01101001 \cdots$, the Thue-Morse
word \cite{Thue:1912,Berstel:1995}.
On the other hand,
if $L(M')$ has polynomial growth, then there are no birecurrent states.
In this case, only periodic infinite words with the
given avoidance properties exist.
In practice, creating the DFA from the regular expression in Eq.~\eqref{lprime}
is not completely straightforward, however,
as exponential blowup is observed
in some formulations. By experimenting, we found that the
following technique works: using de Morgan's law,
we rewrite Eq.~\eqref{lprime} as
$$
L_\ell(\Sigma_k)
= \bigcap_{x \in \Sigma_k^\ell} \left( \, \overline{\Sigma_k^* \, x \, \Sigma_k^*} \cup
\overline{\Sigma_k^* \, x^R \, \Sigma_k^*} \, \right) ,
$$
and construct minimal DFA's for each individual term of the intersection.
Clearly it suffices to perform the intersection only for those
$x$ for which $x$ is lexicographically equal to or smaller than
$x^R$.
We then iteratively intersect the resulting DFA's term-by-term.
Although intermediate results can be quite large (thousands
of states), the final
DFA so produced is relatively small.
We used a short program written in
Dyalog APL to create a Linux shell script with
the individual {\tt Grail} commands.
We used {\tt Grail}, version 3.3.4 \cite{Campeanu:2019}.
Running this script
creates a text file describing a DFA for $L_\ell (\Sigma_k)$.
We identify the unique nonaccepting
state in the result, and delete lines referencing this state from the
text file. We then used another
Dyalog APL program to convert this text file to a file in
GraphViz format that can be used to display the automaton.
Since we explicitly construct the DFA for $L_\ell (\Sigma_k)$, another
benefit to our approach is as follows. Using standard
techniques (e.g., \cite[\S 3.8]{Shallit:2009}), we can enumerate
the number of words of length $n$ in the language. We briefly
sketch how this can be done.
Once the automaton $A = (Q, \Sigma, \delta, q_0, F)$ for $L_\ell(\Sigma_k)$
is known, we can create a useful
matrix $r \times r$ matrix $M$ from it as follows (where
$Q = \{q_0, \ldots, q_{r-1} \}$ and $r = |Q|$):
$$M[i,j] = | \{ a \in \Sigma_k \ : \ \delta(q_i, a) = q_j \} |.$$
This matrix $M$ has the property that $M^n[i,j]$ is the number of
length-$n$ words taking $A$ from state $q_i$ to state $q_j$.
The minimal polynomial of $M$ then gives a recurrence for
the number of length-$n$ words that $A$ accepts. For the
details, see \cite{Fleischer&Shallit:2019b}. Thus our method
allows an automated way to obtain the number of length-$n$
words in $L_\ell(\Sigma_k)$ and its asymptotic growth rate.
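The counting step can be sketched in a few lines of Python. The DFA below is only a toy illustration (binary words with no factor $11$, whose counts are Fibonacci numbers), not one of the automata of this paper, but the transfer-matrix machinery is the same.

```python
def transfer_matrix(delta, num_states):
    # M[i][j] = number of input letters taking state i to state j.
    M = [[0] * num_states for _ in range(num_states)]
    for (i, _a), j in delta.items():
        M[i][j] += 1
    return M

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def count_accepted(delta, num_states, start, finals, n):
    # Number of length-n accepted words: sum over final j of (M^n)[start][j].
    M = transfer_matrix(delta, num_states)
    P = [[int(i == j) for j in range(num_states)] for i in range(num_states)]
    for _ in range(n):
        P = mat_mul(P, M)
    return sum(P[start][j] for j in finals)

# Toy DFA: binary words avoiding the factor 11 (state 2 is the dead state).
delta = {(0, "0"): 0, (0, "1"): 1, (1, "0"): 0, (1, "1"): 2,
         (2, "0"): 2, (2, "1"): 2}
```

For this toy automaton the counts $1, 2, 3, 5, 8, \ldots$ are the Fibonacci numbers, as the minimal polynomial of its matrix predicts.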
We now reprove the theorems from \cite{Rampersad&Shallit:2005}.
\subsection{Alphabet size 3}
\begin{theorem}
\label{tern_per}
There exists an infinite word $\mathbf{w}$ over $\Sigma_3$ such that if
$x$ is a factor of $\mathbf{w}$ and $|x| \geq 2$, then $x^R$ is not a
factor of $\mathbf{w}$. Furthermore, $\mathbf{w}$ is unique up to
permutation of the alphabet symbols.
\end{theorem}
\begin{proof}
We use the following Linux shell script to create the automaton:
{\footnotesize
\begin{verbatim}
# making automaton for K = 3; N = 2
echo "00"
echo "(0+1+2)*00(0+1+2)*" | ./retofm | ./fmdeterm | ./fmmin | ./fmcment > d0
./fmstats d0
echo "01"
echo "(0+1+2)*01(0+1+2)*" | ./retofm | ./fmdeterm | ./fmmin | ./fmcment > a1
echo "(0+1+2)*10(0+1+2)*" | ./retofm | ./fmdeterm | ./fmmin | ./fmcment > b1
./fmunion a1 b1 | ./fmdeterm | ./fmmin > c1
./fmcross d0 c1 | ./fmdeterm | ./fmmin > d1
./fmstats d1
echo "02"
echo "(0+1+2)*02(0+1+2)*" | ./retofm | ./fmdeterm | ./fmmin | ./fmcment > a2
echo "(0+1+2)*20(0+1+2)*" | ./retofm | ./fmdeterm | ./fmmin | ./fmcment > b2
./fmunion a2 b2 | ./fmdeterm | ./fmmin > c2
./fmcross d1 c2 | ./fmdeterm | ./fmmin > d2
./fmstats d2
echo "11"
echo "(0+1+2)*11(0+1+2)*" | ./retofm | ./fmdeterm | ./fmmin | ./fmcment > c3
./fmcross d2 c3 | ./fmdeterm | ./fmmin > d3
./fmstats d3
echo "12"
echo "(0+1+2)*12(0+1+2)*" | ./retofm | ./fmdeterm | ./fmmin | ./fmcment > a4
echo "(0+1+2)*21(0+1+2)*" | ./retofm | ./fmdeterm | ./fmmin | ./fmcment > b4
./fmunion a4 b4 | ./fmdeterm | ./fmmin > c4
./fmcross d3 c4 | ./fmdeterm | ./fmmin > d4
./fmstats d4
echo "22"
echo "(0+1+2)*22(0+1+2)*" | ./retofm | ./fmdeterm | ./fmmin | ./fmcment > c5
./fmcross d4 c5 | ./fmdeterm | ./fmmin > d5
./fmstats d5
cp d5 aut32.txt
\end{verbatim}
}
\noindent which, after deleting lines corresponding to the dead state numbered
4, gives the following {\tt Grail} output:
\begin{verbatim}
(START) |- 0
0 0 1
0 1 2
0 2 3
1 1 5
1 2 6
2 0 7
2 2 8
3 0 9
3 1 10
5 2 8
6 1 10
7 2 6
8 0 9
9 1 5
10 0 7
0 -| (FINAL)
1 -| (FINAL)
2 -| (FINAL)
3 -| (FINAL)
5 -| (FINAL)
6 -| (FINAL)
7 -| (FINAL)
8 -| (FINAL)
9 -| (FINAL)
10 -| (FINAL)
\end{verbatim}
which is depicted below.
\begin{center}
\begin{figure}[H]
\includegraphics[width=5in]{aut32.pdf}
\caption{Automaton for $L_2(\Sigma_3)$. Dead state, numbered 4, omitted.}
\end{figure}
\end{center}
As the reader can now easily verify, the words accepted are
precisely the prefixes of
$$(012)^* + (021)^* + (102)^* + (120)^* + (201)^* + (210)^* .$$
The corresponding set of infinite words is then
$$(012)^\omega + (021)^\omega + (102)^\omega + (120)^\omega + (201)^\omega + (210)^\omega .$$
\end{proof}
In subsequent theorems, we omit providing the shell scripts and outputs
from {\tt Grail}, but the reader can obtain them from
the second author's web page \\
\centerline{\url{https://cs.uwaterloo.ca/~shallit/papers.html} \ .}
\begin{theorem}
There exists an aperiodic infinite word $\mathbf{w}$ over
$\Sigma_3$ such that if $x$ is a factor of $\mathbf{w}$ and $|x| \geq 3$,
then $x^R$ is not a factor of $\mathbf{w}$.
\label{thm2}
\end{theorem}
\begin{proof}
As above, we create the DFA for $L_3(\Sigma_3)$.
Although the intermediate automata have as many as 1033 states,
the final automaton has only 20 states (including the dead state).
It is depicted below.
\begin{center}
\begin{figure}[H]
\includegraphics[width=6in]{aut33.pdf}
\caption{Automaton for $L_3(\Sigma_3)$. Dead state numbered 13, omitted.}
\end{figure}
\end{center}
Then, for example, state $9$ is a birecurrent state, with the
corresponding cycles labeled by $x_0 = 0012$ and $x_1 = 0112$.
It follows that every word in $\{ 0012, 0112 \}^\omega$ avoids
reversed factors of length $\ell \geq 3$, and uncountably many of these
are aperiodic.
\end{proof}
Let the Fibonacci numbers be defined, as usual,
by the recurrence $F_n = F_{n-1} + F_{n-2}$, together
with the initial conditions $F_0 = 0$ and $F_1 = 1$.
\begin{theorem}
The number $r_{33}(n)$ of length-$n$
words in $L_3(\Sigma_3)$
is $6F_{n+1}$ for $n \geq 3$.
\label{m33}
\end{theorem}
\begin{proof}
We create the $20 \times 20$ matrix $M$
corresponding to the transitions of
$L_3 (\Sigma_3)$. Its minimal polynomial is
$p(X) = X^3 (X-3)(X^2 - X - 1)(X^4 + X^3 + 2X^2 + 2X+ 1)$.
It now follows that $r_{33} (n)$ can be expressed as
a linear combination of $n$'th powers of the zeros of
$p(X)$. We can determine the coefficients of this
linear combination by solving a linear system, using
the computed values of the first 10 terms of $r_{33} (n)$.
From this, the result easily follows.
\end{proof}
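The closed form can be cross-checked by brute force on small lengths. The following sketch (function names ours) enumerates $\Sigma_3^n$ directly and compares against $6F_{n+1}$; as in Theorem~\ref{one}, it suffices to inspect the factors of length exactly $3$.

```python
from itertools import product

def avoids(w, ell):
    # P_ell(w), tested via the factors of length exactly ell.
    factors = {w[i:i + ell] for i in range(len(w) - ell + 1)}
    return all(x[::-1] not in factors for x in factors)

def r33(n):
    # Number of length-n words over {0,1,2} avoiding reversed factors
    # of length >= 3.
    return sum(1 for t in product("012", repeat=n) if avoids("".join(t), 3))

# Fibonacci numbers F_0, F_1, F_2, ... = 0, 1, 1, 2, 3, 5, ...
fib = [0, 1]
while len(fib) < 10:
    fib.append(fib[-1] + fib[-2])
```

For instance, $r_{33}(3) = 18 = 6F_4$ (exactly the non-palindromes of length $3$) and $r_{33}(4) = 30 = 6F_5$.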
\subsection{Alphabet size 2}
\begin{theorem}
Let $\ell \leq 4$ and let $w$ be a word over $\Sigma_2$ such that
if $x$ is a factor of $w$ and $|x| \geq \ell$, then $x^R$ is not
a factor of $w$. Then $|w| \leq 8$.
\end{theorem}
\begin{proof}
As above, we create the DFA for $L_4(\Sigma_2)$.
It is depicted below, and we easily see that the longest words
accepted are of length $8$.
\begin{center}
\begin{figure}[H]
\includegraphics[width=6in]{aut24.pdf}
\caption{Automaton for $L_4(\Sigma_2)$. Dead state, numbered 15, omitted.}
\end{figure}
\end{center}
\end{proof}
\begin{theorem}
\label{geq5}
There exists an infinite word $\mathbf{w}$ over $\Sigma_2$ such that if
$x$ is a factor of $\mathbf{w}$ and $|x| \geq 5$, then $x^R$ is not a
factor of $\mathbf{w}$.
\end{theorem}
\begin{proof}
As above, we create the DFA for $L_5(\Sigma_2)$.
Although the intermediate automata produced have as many as 598
states, the final DFA has only 59 states (including the dead state).
\begin{center}
\begin{figure}[H]
\includegraphics[width=6in]{aut25.pdf}
\caption{Automaton for $L_5(\Sigma_2)$. Dead state, numbered 27, omitted.}
\end{figure}
\end{center}
Then, for example, $000011(010011)^\omega$ labels an infinite path in this
DFA.
\end{proof}
\begin{remark}
We can see by inspection that there are no birecurrent states in this
automaton. Hence all infinite words satisfying the property of
Theorem~\ref{geq5} are periodic.
\end{remark}
\begin{theorem}
The number $r_{25} (m)$ of length-$m$ words in $L_5 (\Sigma_2)$
is given by
$$
r_{25} (m) = \begin{cases}
30, & \text{if $m \equiv \modd{0} {6}$}; \\
32, & \text{if $m \equiv \modd{1,2,3} {6}$ and $m \geq 7$}; \\
34, & \text{if $m \equiv \modd{4} {6}$ and $m \geq 10$}; \\
36, & \text{if $m \equiv \modd{5} {6}$ and $m \geq 11$} .
\end{cases}
$$
\end{theorem}
\begin{proof}
As in the proof of Theorem~\ref{m33}, we can build the $59 \times 59$
matrix corresponding to the automaton, and determine its minimal
polynomial $p(X) = X^6 (X^6-1)(X-2)$. As before we can express
$r_{25} (m)$ as a linear combination of the $m$'th powers of the
zeros of $p$. The result now easily follows.
\end{proof}
\begin{theorem}
There exists an aperiodic infinite word $\mathbf{w}$ over
$\Sigma_2$ such that if $x$ is a factor of $\mathbf{w}$ and $|x| \geq 6$,
then $x^R$ is not a factor of $\mathbf{w}$.
\label{thm26}
\end{theorem}
Here our previous approach does not succeed in a reasonable length
of time, because the intermediate automata grow too large
(at least hundreds of thousands of states).
We describe an alternative approach that produces the desired DFA
for $L_6(\Sigma_2)$.
We can construct a DFA for $L_\ell(\Sigma_k)$ directly as follows:
it suffices to record, in the state, the subset of length-$\ell$
factors seen so far, and the last $\ell-1$ symbols seen (or shorter
prefix, if $\ell-1$ symbols have not yet been seen). Upon reading
a new symbol, the DFA updates the subset of factors and the last
$\ell-1$ symbols seen. So the total number of states
is $2^{k^\ell} \cdot (1+k +k^2 + \cdots + k^{\ell-1})$.
The final states correspond to those
subsets not containing both a word and its reversal.
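A sketch of this direct construction in Python (names ours): a state is a pair (set of length-$\ell$ factors seen, last at most $\ell-1$ symbols), the dead state is represented by {\tt None}, and only states reachable from the start state are ever created.

```python
from collections import deque

def build_dfa(k, ell):
    # Reachable part of the DFA for L_ell(Sigma_k).  A state is
    # (frozenset of length-ell factors seen, tuple of last <= ell-1 symbols);
    # None plays the role of the dead state.
    start = (frozenset(), ())
    delta = {}
    queue = deque([start])
    seen = {start}
    while queue:
        state = queue.popleft()
        facs, suffix = state
        for a in range(k):
            word = suffix + (a,)
            if len(word) < ell:                 # no new length-ell factor yet
                nxt = (facs, word)
            else:
                rev = word[::-1]
                if rev in facs or rev == word:  # reversal (or palindrome) seen
                    delta[(state, a)] = None
                    continue
                nxt = (facs | {word}, word[1:])
            delta[(state, a)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return start, delta

def accepts(start, delta, w):
    state = start
    for a in w:
        state = delta.get((state, a))
        if state is None:
            return False
    return True
```

Every non-dead state is accepting, since $L_\ell(\Sigma_k)$ is factor-closed; comparing the resulting automaton against a brute-force check of $P_\ell$ on all short words gives a quick sanity test.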
For our particular case of $k = 2$, $\ell = 6$, this gives a
DFA with $63 \cdot 2^{64}$ states, which is evidently too large to
manipulate effectively. However, many of these states will
be unreachable from the
start state. Instead, we can construct the reachable states
in a breadth-first manner, using a queue. We wrote a Dyalog APL
program to construct the automaton; it has 63705 states (not
including the dead state). We then minimized this automaton
using {\tt Grail}, and we obtained an automaton $A$ with 7761 states
(not including the dead state). This automaton is much too big to
display here, but can be obtained from the website of the second
author.
State 980 is a birecurrent state, with the corresponding cycles
labeled by $0001011$ and $1001011$. Now we can complete
the proof of Theorem~\ref{thm26}.
\begin{proof}
As before, we can produce an explicit example of an aperiodic infinite
word satisfying the given conditions by applying the morphism
$0 \rightarrow 0001011$, $1 \rightarrow 1001011$ to any aperiodic
binary word, such as the Thue-Morse word.
\end{proof}
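This construction is easy to check empirically. The sketch below (helper names ours) applies the morphism to a prefix of the Thue-Morse word and tests, via the length-$6$ factors, that the image avoids reversed factors of length $\geq 6$.

```python
def thue_morse(n):
    # First n symbols of the Thue-Morse word t = 01101001...
    return [bin(i).count("1") % 2 for i in range(n)]

def avoids(w, ell):
    # P_ell(w), tested via the factors of length exactly ell.
    factors = {w[i:i + ell] for i in range(len(w) - ell + 1)}
    return all(x[::-1] not in factors for x in factors)

# Image of the Thue-Morse prefix under 0 -> 0001011, 1 -> 1001011.
image = "".join("0001011" if b == 0 else "1001011" for b in thue_morse(80))
```

Here `avoids(image, 6)` holds on this 560-symbol prefix, as the theorem predicts.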
As suggested by the size of the minimal automaton $A$,
it turns out that the structure of the language
$L_6(\Sigma_2)$ is very complicated. A natural problem is to
give a recurrence enumerating the number $r_{26}(n)$ of length-$n$ words
in $L_6(\Sigma_2)$. Even this is not so easy; it turns out that
$r_{26}(n)$ satisfies a linear recurrence of order 195.
We describe how this can be proved. The first step is to compute
the minimal polynomial of the matrix $M$ corresponding to $A$.
We were not able to compute this with {\tt Maple} 2017 (X86 64 LINUX),
so we turned to the software {\tt LinBox} \cite{Dumas:2019}. It computed
the minimal polynomial as the following polynomial of degree 239:
\begin{align*}
& X^{18} (X - 2) (X - 1) (X + 1) (X^2 + 1) (X^4 + 1) (X^2 - X + 1) (X^2 + X + 1)(X^4 - X^2 + 1) \times \\
&(X^6 + X^3 + 1) (X^8 - X^2 - 1) (X^8 + X^2 - 1) (X^9 - X^2 - 1) (X^{10} - X^2 - 1) (X^{12} - X^2 - 1) \times \\
&(X^{12} - X^3 - 1) (X^{12} - X^4 - 1) (X^{12} - X^5 - 1) (X^{12} - X^6 - 1) (X^4 - X^3 + X^2 - X + 1) \times \\
&(X^4 + X^3 + X^2 + X + 1) (X^7 - X^6 + X^4 - X^3 - 1)\times \\
& (X^{10} - X^3 - X^2 - X - 1)
(X^{10} - X^8 + X^6 - X^4 - 1) (X^{16} - X^9 - X^7 - X^4 + 1) \times \\
& (X^{16} - X^{10} - X^6 - X^4 + 1)
(X^{10} - X^4 - 2 X^3 - 2 X^2 - 2 X - 1)
(X^{10} - X^8 + X^6 - 2 X^4 + X^2 - 1) \times \\
&(X^6 + X^5 + X^4 + X^3 + X^2 + X + 1) (X^{10} - X^8 + X^6 - X^4 - X^3 + X^2 - 1).
\end{align*}
From this one can compute a linear recurrence of order $239$ for the
sequence $r_{26}(n)$. However, using the techniques from
\cite{Fleischer&Shallit:2019b}, we can find the optimal linear
recurrence, which arises from the following
degree-$195$
divisor of the
minimal polynomial:
\begin{align*}
& (X - 1) (X^2 + 1) (X^2 - X + 1) (X^2 + X + 1) (X^4 - X^2 + 1) (X^8 - X^2 - 1) (X^8 + X^2 - 1) \times \\
& (X^9 - X^2 - 1) (X^{10} - X^2 - 1) (X^{12} - X^2 - 1) (X^{12} - X^3 - 1) (X^{12} - X^4 - 1) (X^{12} - X^5 - 1) \times \\
& (X^{12} - X^6 - 1) (X^7 - X^6 + X^4 - X^3 - 1) (X^{10} - X^3 - X^2 - X - 1) (X^{10} - X^8 + X^6 - X^4 - 1) \times \\
& (X^{16} - X^9 - X^7 - X^4 + 1) (X^{16} - X^{10} - X^6 - X^4 + 1) (X^{10} - X^4 - 2 X^3 - 2 X^2 - 2 X - 1) \times \\
& (X^{10} - X^8 + X^6 - 2 X^4 + X^2 - 1) (X^{10} - X^8 + X^6 - X^4 - X^3 + X^2 - 1) .
\end{align*}
The largest real zero of this polynomial is
$\alpha \doteq 1.305429354041958520199761719029$,
where $\alpha$ is the positive real zero of
$X^{10} - X^4 - 2 X^3 - 2 X^2 - 2 X - 1$.
It follows that $r_{26}(n) \sim c \alpha^n$, where $c \doteq 15.0313407$.
\begin{remark}
The sequence $r_{26} (n)$ is sequence
\seqnum{A330012} in the {\it On-Line Encyclopedia of Integer
Sequences} (OEIS) \cite{Sloane:2019}.
\end{remark}
\subsection{Alphabet size 4}
Inexplicably, the paper \cite{Rampersad&Shallit:2005} did not handle
the case of alphabet size $4$ (or more precisely, it only
considered the case of squarefree words). We consider the alphabet
size $4$ case now.
\begin{theorem}
There are uncountably many infinite words over $\Sigma_4$ avoiding reversed
factors of length $\ell \geq 2$.
\end{theorem}
\begin{proof}
We construct the automaton as in Theorem~\ref{thm26}.
The resulting automaton has
449 states and is minimal. State 360 is birecurrent, with
paths $x_0 = 0123$ and $x_1 = 0120123$.
\end{proof}
\begin{corollary}
Let $r_{42} (n)$ denote the number of length-$n$
words over $\Sigma_4$ avoiding reversed
factors of length $\ell \geq 2$. Then
\begin{align*}
& (r_{42}(0), r_{42}(1), \ldots, r_{42}(16)) = \\
& \quad\quad\quad (1,4,12,24,48,96,168,264,456,720,1056,1656,2520,3600,5352,7944,11256)
\end{align*}
and
\begin{align*}
r_{42}(n) &= r_{42}(n-1) + 5r_{42}(n-3) - 3r_{42}(n-4) - 2r_{42}(n-5) - 8r_{42}(n-6) + r_{42}(n-7) + \\
& \quad 6r_{42}(n-8) + 5r_{42}(n-9) + 2r_{42}(n-10) - 4r_{42}(n-11) - 2r_{42}(n-12)
\end{align*}
for $n \geq 17$.
Asymptotically we have $r_{42}(n) \sim C \cdot \alpha^n$, where
$\alpha \doteq 1.395336944$ is the largest
real zero of $X^4 - 2X - 1$ and $C \doteq 71.2145756$.
\end{corollary}
\begin{proof}
We computed the minimal polynomial of the associated matrix as
above, using {\tt Maple}. It is
$$X^5(X-1)(X-4)(X+1)(X^2+1)(X^3-2)(X^4-2X-1)(X^2+X+1)(X^4-X-1).$$
Using a technique discussed in
\cite{Fleischer&Shallit:2019b}, we can find the annihilator
for the sequence, which is
$$ (X-1)(X^3 - 2)(X^4 - 2X - 1)(X^4 - X - 1).$$
Expanding the coefficients of this polynomial gives us
the recurrence. The largest real root is that of $X^4 - 2X-1$.
\end{proof}
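The expansion step can be cross-checked mechanically (a Python sketch; the computation reported above was done in {\tt Maple}): multiplying out the annihilator and reading off the recurrence coefficients from the monic degree-$12$ characteristic polynomial.

```python
def polymul(a, b):
    # multiply two polynomials given as coefficient lists, constant term first
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

# factors of the annihilator (X-1)(X^3-2)(X^4-2X-1)(X^4-X-1), constant term first
factors = [[-1, 1], [-2, 0, 0, 1], [-1, -2, 0, 0, 1], [-1, -1, 0, 0, 1]]
poly = [1]
for f in factors:
    poly = polymul(poly, f)

# r(n) = sum_k c[k-1] * r(n-k): the recurrence coefficients are minus the
# coefficients of X^{12-k} in the monic characteristic polynomial
c = [-poly[12 - k] for k in range(1, 13)]
print(c)  # [1, 0, 5, -3, -2, -8, 1, 6, 5, 2, -4, -2]
```

These are exactly the coefficients in the corollary; the $r_{42}(n-2)$ term is absent because its coefficient is $0$.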
\begin{remark}
The sequence $r_{42} (n)$ is sequence \seqnum{A330011} in the
OEIS.
\end{remark}
\section{Code}
All of the shell scripts, {\tt Maple} code, {\tt LinBox} code,
and automata discussed in the paper
are available at the website of the second author,\\
\centerline{\url{https://cs.uwaterloo.ca/~shallit/papers.html} \ . }
\section{INTRODUCTION}
The principle underlying most of the slow-light experiments is to
exploit the steep normal dispersion of the refractive index associated
with a pronounced peak in the transmission of the medium and the correlative
reduction of the group velocity. The situation where the resulting
time-delay of the light-pulse is large compared to its duration and
can be controlled by an external laser field is of special importance
for potential applications, especially in the domain of high-speed
all-optical signal-processing. Harris and his co-workers \cite{ref1,ref2}
opened the way to such experiments by exploiting the phenomenon of
electromagnetically induced transparency (EIT) allowing one to create
a narrow transparency-window in an otherwise optically thick atomic
vapour. Using a true-shape detection of the pulses, they demonstrated
propagation velocities as slow as $c/165$ and group delays $\tau_{g}$
as long as $4.6\:\tau_{in}$ where $c$ and $\tau_{in}$ are respectively
the velocity of the light in vacuum and the full width at half-maximum
(FWHM) of the intensity-profile of the incident pulse. Much slower
velocities have been attained in subsequent EIT experiments (for reviews
see, e.g., \cite{ref3,ref4,ref5}) and in experiments involving coherent population
oscillations \cite{ref6} or other processes to induce a transparency-window
in an absorbing medium. It is however worth noticing that only few
of these experiments, all using EIT, have succeeded in giving \emph{direct}
demonstrations of fractional delays $\tau_{g}/\tau_{in}$ exceeding
unity \cite{ref2,ref7,ref8}. Theoretical discussions on the maximum time-delays
attainable in such experiments can be found in \cite{ref1,ref9,ref10}. A different
way to achieve a system with a controllable transmission-peak is to
optically induce a resonant gain in a transparent medium \cite{ref11}.
Initially proposed by Gauthier \cite{ref12}, the arrangement involving
stimulated Brillouin scattering \cite{ref13,ref14,ref15,ref16} seems particularly attractive
from the viewpoint of the above mentioned applications. The Brillouin
gain is indeed directly implemented on an optical fibre and there
are no severe constraints in the choice of the operating wavelength.
The group delay $\tau_{g}$ has already been controlled
on a range of $3.6\:\tau_{in}$ by this technique \cite{ref15}. Note that
preliminary experiments using a Raman fibre amplifier have also been
achieved \cite{ref17}.
The purpose of our paper is to provide analytical results on the propagation
of arbitrarily shaped pulses, the central frequency of which coincides with
that of a pronounced maximum in the medium-transmission. We examine
more specifically the case where the resulting time-delays of the
pulses are large compared to their duration. Our study applies in
particular but not exclusively to the above mentioned systems. Our
approach follows in part and extends that of Bukhman \cite{ref18} with
a special attention paid to the connection of the theoretical results
with the experiments.
\section{GENERAL ANALYSIS }
We denote by $e_{in}(t)$ and $e_{out}(t)$ the slowly-varying envelopes
of the incident and transmitted pulses and by $E_{in}(\Omega)=\int_{-\infty}^{\infty}e_{in}(t)\exp(-i\Omega t)dt$
and $E_{out}(\Omega)$ their Fourier transforms. The slow-light medium
is characterised by its impulse response $h(t)$ or by its transfer
function $H(\Omega)$, Fourier transform of $h(t)$. The input/output
relation or transfer equation reads $e_{out}(t)=h(t)\otimes e_{in}(t)$
in the time-domain or $E_{out}(\Omega)=H(\Omega)E_{in}(\Omega)$ in
the frequency-domain \cite{ref19}. We assume that the incident pulse has
a finite energy, that it is not chirped ($e_{in}(t)$ real and positive)
and that $h(t)$ is also real. The local response of the medium is
characterised by the complex gain-factor $\Gamma(\Omega)=\ln\left[H(\Omega)\right]$
whose real part $F(\Omega)$ and imaginary part $\Phi(\Omega)$ are
respectively the logarithm of the medium amplitude-gain $\left|H(\Omega)\right|$
and the induced phase shift. The condition imposed to $h(t)$ implies
that $H(-\Omega)=H^{*}(\Omega)$ and thus that $F(\Omega)$ and $\Phi(\Omega)$
are respectively even and odd functions of $\Omega$. This has the
advantage of eliminating the lowest-order pulse-distortions resulting
from the gain-slope and from the group velocity dispersion at the
frequency $\omega_{0}$ of the optical carrier ($\Omega=0$). Moreover
the medium is then entirely characterised by the single real function
$h(t)$. In order to have simple expressions we use for $e_{in}(t)$
a time origin located at the pulse centre-of-gravity and for $e_{out}(t)$
a time origin retarded by the transit time at the group velocity outside
the frequency-domain of high-dispersion (local time picture). The
time delays considered hereafter are thus only those originating in
the high-dispersion region.
General properties of the transmitted pulse can be derived by Fourier
analysis. Let $x(t)$ be any of the real functions $e_{in}(t)$, $h(t)$
or $e_{out}(t)$ and $X(\Omega)$ its Fourier transform. We remark
that $X(0)=\int_{-\infty}^{\infty}x(t)dt$ and, following a standard
procedure in probability theory \cite{ref20}, we characterise $X(\Omega)$
by its cumulants $\kappa_{n}$, such that
\begin{equation}
X(\Omega)=X(0)\exp\left(\sum_{n=1}^{\infty}\frac{\kappa_{n}}{n!}(-i\Omega)^{n}\right).\label{EQ1}
\end{equation}
For $H(\Omega)$, we see that the cumulants are simply related to
the coefficients of the series expansion of $\Gamma(\Omega)$ in powers
of $-i\Omega$ and, in particular, that $\kappa_{1}$ coincides with
the group delay $\tau_{g}=-\frac{d\Phi}{d\Omega}\mid_{\Omega=0}$.
We incidentally recall that, due to the causality principle, $\tau_{g}$
can be related to the gain profile \cite{ref21,ref22}. Within our assumptions,
this relation reads
\begin{equation}
\tau_{g}=P\int_{-\infty}^{\infty}\frac{\ln H_{0}-\ln\left|H(\Omega)\right|}{\pi\Omega^{2}}d\Omega \label{EQ2}
\end{equation}
where $H_{0}=H(0)$ is the amplitude-gain of the medium at the frequency
of the optical carrier. This confirms that large group delays are
achieved when the gain $\left|H(\Omega)\right|$ has a pronounced
maximum at $\Omega=0$. We have then $\kappa_{2}>0$.
Coming back to the general problem, we characterise the time function $x(t)$ by its area
$S=\int_{-\infty}^{\infty}x(t)dt$ and its three lowest order moments, namely the mean value
$\left\langle t\right\rangle =\frac{1}{S}\int_{-\infty}^{\infty}t\: x(t)dt$, the variance
$\sigma^{2}=\frac{1}{S}\int_{-\infty}^{\infty}(t-\left\langle t\right\rangle )^{2}x(t)dt$
and the $3^{rd}$ order centred moment $a=\frac{1}{S}\int_{-\infty}^{\infty}(t-\left\langle t\right\rangle )^{3}x(t)dt$ .
We recognise in $\left\langle t\right\rangle$ (resp. $\sigma$) the location of the centre-of-gravity
(resp. the \emph{rms} duration) of the function $x(t$). Its asymmetry may be characterised by the dimensionless
parameter $\xi=a/\sigma^{3}$, the so-called skewness \cite{ref20}. For a Gaussian function, $\sigma=\tau/(2\sqrt{\ln2})$
where $\tau $ is the FWHM of the energy profile $x^{2}(t)$. An important result \cite{ref20} is that the
moments $\left\langle t\right\rangle$ , $\sigma^{2}$ and $a$ of $x(t)$ are equal to the cumulants $\kappa_{1},\kappa_{2}$ and
$\kappa_{3}$ of $X(\Omega)$. Moreover the transfer equation immediately leads to the relations $E_{out}(0)=H_{0}E_{in}(0)$ and
$\kappa_{n,out}=\kappa_{n,in}+\kappa_{n}$ where, as in all our paper, the indexes $in$, $out$ and the absence of index
respectively refer to the incident pulse, the transmitted pulse and the transfer-function or impulse response of the medium.
With our choice of time origin, $\left\langle t_{in}\right\rangle =0$. By combining the previous results, we finally obtain the four equations
$S_{out}=H_{0}S_{in}$, $\left\langle t_{out}\right\rangle =\tau_{g}$, $\sigma_{out}^{2}=\sigma_{in}^{2}+\sigma^{2}>\sigma_{in}^{2}$
and $a_{out}=a_{in}+a$. In the studies of the linear pulse propagation, the first equation which relates the areas of the transmitted
and incident pulses is known as the area theorem \cite{ref23}. The second equation expresses that the time-delay of the pulse
centre-of-gravity equals the group delay \cite{ref22}. The two last ones specify how the \emph{rms} duration and the asymmetry of the
incident pulse are modified by the medium. All these results are valid provided that the involved moments are finite \cite{ref19}.
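These additivity relations can be illustrated on a discrete analogue of the transfer equation (a minimal Python sketch with arbitrary sample sequences; for integer-indexed sequences the additivity of the moments under convolution holds exactly):

```python
def moments(x):
    # area, mean, variance and 3rd-order centred moment of a sampled signal
    S = sum(x)
    m = sum(t * v for t, v in enumerate(x)) / S
    var = sum((t - m)**2 * v for t, v in enumerate(x)) / S
    a = sum((t - m)**3 * v for t, v in enumerate(x)) / S
    return S, m, var, a

def convolve(x, h):
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

e_in = [0.2, 1.0, 0.7, 0.1]        # arbitrary "incident pulse" samples
h = [0.1, 0.5, 0.9, 0.4, 0.05]     # arbitrary "impulse response" samples
e_out = convolve(e_in, h)

S_i, m_i, v_i, a_i = moments(e_in)
S_h, m_h, v_h, a_h = moments(h)
S_o, m_o, v_o, a_o = moments(e_out)

print(abs(S_o - S_h * S_i))        # area theorem: S_out = H0 * S_in
print(abs(m_o - (m_i + m_h)))      # centre-of-gravity delays add
print(abs(v_o - (v_i + v_h)))      # variances add
print(abs(a_o - (a_i + a_h)))      # 3rd-order centred moments add
```

All four differences vanish up to rounding, mirroring the relations $S_{out}=H_{0}S_{in}$, $\left\langle t_{out}\right\rangle =\tau_{g}$, $\sigma_{out}^{2}=\sigma_{in}^{2}+\sigma^{2}$ and $a_{out}=a_{in}+a$.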
\section{ANALYTIC EXPRESSIONS OF THE MEDIUM IMPULSE-RESPONSE}
In order to obtain a complete information on the shape and the amplitude
of the transmitted pulse, we obviously have to specify the complex
gain-factor $\Gamma(\Omega)$ of the medium. For the medium with a
resonant gain, it reads
\begin{equation}
\Gamma(\Omega)=p_{N}(\Omega)G/2-A/2 \label{EQ3}
\end{equation}
where $p_{N}(\Omega)$ is the normalised complex profile of the gain-line ($p_{N}(0)=1$), $G$
is the gain parameter for the intensity \cite{ref14} and $A$ stands for
the attenuation introduced to reduce the effects of the amplified
spontaneous emission \cite{ref15} and/or to normalise the overall gain
of the system. $G=g_{0}L$ where $g_{0}$ (resp. $L$) is the resonance
gain-coefficient (resp. the thickness) of the medium. The previous
expression of $\Gamma(\Omega)$ also holds for an absorbing medium
with a transparency-window when the absorption background is assumed
to be infinitely wide. We then get $\Gamma(\Omega)=-[1-f\: p_{N}(\Omega)]\alpha_{0}L/2$
where $\alpha_{0}$ is the background absorption-coefficient, $f\leq1$
specifies the depth of the transparency-window \cite{ref9} and $p_{N}(\Omega)$
is the normalised complex profile of the line associated with the transparency-window.
By putting $G=f\:\alpha_{0}L$ and $A=\alpha_{0}L$ , we actually
retrieve the expression of $\Gamma(\Omega)$ obtained for a gain medium.
In both types of experiments, $G$ and $A$ are generally comparable
in order that the resonance gain $H_{0}=\textrm{e}^{G/2-A/2}$ is close to
1 or, at least, does not differ too strongly from 1. Anyway the intensity-transmission
on resonance exceeds its value far from resonance by the factor $\textrm{e}^{G}$.
To go beyond, it seems necessary to specify the profile $p_{N}(\Omega)$.
We first consider the reference case where $p_{N}(\Omega)$ is associated
with a Lorentzian line. It then reads $p_{N}(\Omega)=1/(1+i\Omega/\gamma)$
where $\gamma$ is the half-width of the line \cite{ref22} and we immediately
get $\kappa_{n}=Gn!/(2\gamma^{n})$ with in particular $\kappa_{1}=\tau_{g}=G/(2\gamma)$,
$\kappa_{2}=\sigma^{2}=G/\gamma^{2}$ and thus $\tau_{g}=\sigma\sqrt{G}/2$.
The last relation shows that achieving substantial fractional delays
$\tau_{g}/\sigma$ requires that $G\gg1$. A quite remarkable property
of the Lorentzian case is that the impulse response has an exact analytical
expression. This result has been obtained by Crisp in a general study
on the propagation of small-area pulses in absorbing and amplifying
media \cite{ref23} but it can easily be retrieved from $H(\Omega)$ by
using standard procedures of Laplace transforms \cite{ref20}. One gets
\begin{equation}
h(t)=\textrm{e}^{-A/2}\delta(t)+\textrm{e}^{-A/2}\gamma G\:\frac{I_{1}(\sqrt{2G\gamma t})}{\sqrt{2G\gamma t}}\: \textrm{e}^{-\gamma t}U(t)\label{EQ4}
\end{equation}
where $\delta(t)$, $I_{1}(u)$ and $U(t)$ respectively designate
the Dirac function, the $1^{st}$ order modified Bessel function and
the unit step function. The $1^{st}$ term $h_{i}(t)$ in $h(t)$
results from the constant value $\textrm{e}^{-A/2}$ of $H(\Omega)$ far from
resonance. This part of the response is instantaneous in our local
time picture and only the $2^{nd}$ term $h_{d}(t)$, directly associated
with the transmission peak, contributes to the delay. The areas of
$h_{i}(t)$ and $h_{d}(t)$ are respectively $\textrm{e}^{-A/2}$ and $H_{0}-\textrm{e}^{-A/2}$,
that is, in a very small ratio ($\approx \textrm{e}^{-G/2}$) for the large
values of $G$ required to achieve substantial delays (see above).
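This analytical form can be checked numerically (a Python sketch with illustrative values $G=A=4$, $\gamma=1$, not taken from any particular experiment; the Bessel function is summed from its power series):

```python
import math

def i1(x):
    # modified Bessel function I1(x) from its power series
    term = x / 2.0
    s = term
    for k in range(1, 200):
        term *= (x / 2.0)**2 / (k * (k + 1))
        s += term
        if abs(term) < 1e-16 * abs(s):
            break
    return s

G, A, gamma = 4.0, 4.0, 1.0
H0 = math.exp(G / 2 - A / 2)

def h_d(t):
    # delayed part of the impulse response
    u = math.sqrt(2 * G * gamma * t)
    ratio = 0.5 if u == 0.0 else i1(u) / u   # I1(u)/u -> 1/2 as u -> 0
    return math.exp(-A / 2) * gamma * G * ratio * math.exp(-gamma * t)

# trapezoidal quadrature of the delayed response
dt, n = 1e-3, 60000
area = dt * (0.5 * h_d(0.0) + sum(h_d(k * dt) for k in range(1, n)) + 0.5 * h_d(n * dt))
print(area, H0 - math.exp(-A / 2))   # both ≈ 0.8647
```

The integrated area of $h_{d}(t)$ indeed reproduces $H_{0}-\textrm{e}^{-A/2}$, and $h_{d}(0^{+})$ reproduces the discontinuity $H_{0}\textrm{e}^{-G/2}\gamma G/2$ mentioned below.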
\begin{figure}[ht]
\begin{center}
\includegraphics[angle=0,width=8cm]{fig1.eps}
\caption{Analytical form of the impulse response for a Lorentzian line. From the left to the right the gain
parameter $G$ (resp. the fractional delay $\tau_{g}/\sigma $) is 4, 9, 16, 25, 36, 49 and 64 (resp. 1, 1.5, 2, 2.5, 3, 3.5 and 4).
The horizontal (resp. vertical) unit is $\sigma$ (resp. $H_{0}/\sigma \sqrt{2 \pi}$ ). \label{fig1}}
\end{center}
\end{figure}
The effect of the instantaneous response then becomes negligible.
Fig.\ref{fig1} shows the delayed response obtained for increasing values of
the gain and thus of the fractional group-delay. We see that the curves,
first strongly asymmetric, become more and more symmetric as $G$
increases and that the location of their maximum then approaches the
group delay. They have a discontinuity $H_{0}\textrm{e}^{-G/2}\gamma G/2$
at $t=0$, the relative amplitude of which becomes negligible when
$G\gg1$. From the asymptotic behaviour of $I_{1}(u)$ \cite{ref20}, we
then get
\begin{equation}
h_{d}(t)\approx \frac{H_{0}}{\sigma\sqrt{2\pi}}\left(1-\frac{3\theta}{4\tau_{g}}\right)\exp\left( -\frac{\theta^{2}}{2\sigma^{2}}\right)\label{EQ5}
\end{equation}
with $\theta=t-\tau_{g}$. The maximum of $h_{d}(t)$ occurs at the instant
$\tau_{g}-\Delta t$ with $\Delta t\approx3\sigma^{2}/(4\tau_{g})$.
When $G\rightarrow\infty$,
\begin{equation}
h(t)\rightarrow h^{(2)}(t)=\frac{H_{0}}{\sigma\sqrt{2\pi}}\exp\left(-\frac{\theta^{2}}{2\sigma^{2}}\right).\label{EQ6}
\end{equation}
This Gaussian form is that of the normal distribution derived by means of
the central limit theorem in probability theory. This theorem can
also be used for an approximate evaluation of the convolution of $n$
deterministic functions \cite{ref19}. It applies to our case by splitting
the medium into $n$ cascaded sections, $h(t)$ then being the convolution
of the impulse responses of each section. According to this analysis,
one may expect that the normal form $h^{(2)}(t)$ is universal. From
the frequency viewpoint, it originates in the fact that, when $G\gg1$
, the transmission peak is roughly $\sqrt{G}$ times narrower than
the line. In the region where the relative gain $\left|H(\Omega)\right|/H_{0}$
is not negligible, the curves $\Phi(\Omega)$ vs $\Omega$ (phase-shift)
and $F(\Omega)$ vs $\Omega$ (line-profile) are well approximated
respectively by a straight line and a parabola (Fig.\ref{fig2}).
\begin{figure}[ht]
\begin{center}
\includegraphics[angle=0,width=7cm]{fig2.eps}
\caption{ $ \Phi( \Omega)$ and $F( \Omega)$ associated with a Lorentzian line (full line) and a Gaussian line (dashed line).
The physical parameters are chosen in order that $H_{0}=0.6$ and $\tau_{g}/\sigma = 3$ in both cases. The resulting gain profiles
$ \lvert H(\Omega) \rvert$ are indistinguishable at the figure scale. The frequency unit is $1/\sigma$ (angular frequency). \label{fig2}}
\end{center}
\end{figure}
This means that only the first two cumulants $\kappa_{1}=\tau_{g}$ and $\kappa_{2}=\sigma^{2}$
play a significant role. We then get $H(\Omega)\approx H^{(2)}(\Omega)=H_{0}\exp(-i\Omega\tau_{g}-\sigma^{2}\Omega^{2}/2)$
and thus $h(t)\approx h^{(2)}(t)$ irrespective of the line-profile.
This confirms the universality of the normal form of $h(t)$ when
$G\rightarrow\infty$. In fact, $h^{(2)}(t)$ is a good approximation
of the exact impulse-response for the gains currently achieved in
the experiments. Fig.\ref{fig3} shows the result obtained with Lorentzian and
Gaussian line-profiles when $\tau_{g}=3\sigma$. In the second case,
$p_{N}(\Omega)=\exp(-\Omega^{2}/\gamma^{2})-2iD(\Omega/\gamma)/\sqrt{\pi}$
with $D(u)=\textrm{e}^{-u^{2}}\int_{0}^{u}\textrm{e}^{v^{2}}dv$ \cite{ref24} and the first
cumulants read $\kappa_{1}=G/(\gamma\sqrt{\pi})$, $\kappa_{2}=G/\gamma^{2}$
and $\kappa_{3}=4G/(\gamma^{3}\sqrt{\pi})$. The parameters
$A$, $G$ and $\gamma$ are chosen such that $H_{0}$, $\tau_{g}$ and
$\sigma$ have the same values in both cases. Though the line-profiles
are quite different (see Fig.\ref{fig2}), the impulse responses are both close
to the normal form $h^{(2)}(t)$. Similar results (not shown) are
obtained with other line-profiles, including the EIT profile (see
hereafter).
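The quoted cumulants of the Gaussian line can be cross-checked numerically (a Python sketch using finite differences of $F$ and $\Phi$; the Dawson function is summed from its Maclaurin series, which is accurate for the small arguments needed here):

```python
import math

def dawson(u):
    # D(u) = e^{-u^2} int_0^u e^{v^2} dv; from D' = 1 - 2uD:
    # D(u) = sum_k a_k u^{2k+1}, with a_0 = 1, a_k = -2 a_{k-1} / (2k+1)
    term, s = u, u
    for k in range(1, 60):
        term *= -2.0 * u * u / (2 * k + 1)
        s += term
    return s

G, gamma = 25.0, 1.0   # illustrative values

def Phi(w):   # phase shift, imaginary part of Gamma, for the Gaussian line
    return -G * dawson(w / gamma) / math.sqrt(math.pi)

def F(w):     # log amplitude-gain, up to the additive constant -A/2
    return 0.5 * G * math.exp(-(w / gamma)**2)

h = 1e-2
kappa1 = -(Phi(h) - Phi(-h)) / (2 * h)                                # tau_g
kappa2 = -(F(h) - 2 * F(0.0) + F(-h)) / h**2                          # sigma^2
kappa3 = (Phi(2*h) - 2*Phi(h) + 2*Phi(-h) - Phi(-2*h)) / (2 * h**3)   # a

print(kappa1, G / (gamma * math.sqrt(math.pi)))         # ≈ 14.105
print(kappa2, G / gamma**2)                             # ≈ 25
print(kappa3, 4 * G / (gamma**3 * math.sqrt(math.pi)))  # ≈ 56.42
```

The finite-difference values agree with $\kappa_{1}=G/(\gamma\sqrt{\pi})$, $\kappa_{2}=G/\gamma^{2}$ and $\kappa_{3}=4G/(\gamma^{3}\sqrt{\pi})$.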
When the gain parameter is large but not very large, a better approximation
of the impulse response is obtained by considering the effect of the
$3^{rd}$ cumulant $\kappa_{3}$, equal to the asymmetry parameter
$a$ of $h(t)$. Provided that this effect may be considered as a
small perturbation, $H(\Omega)\approx H^{(3)}(\Omega)\approx\left(1+i\kappa_{3}\Omega^{3}/3!\right)H^{(2)}(\Omega)$.
From the correspondence $i\Omega\leftrightarrow d/dt$, we finally get
\begin{equation}
h(t)\approx h^{(3)}(t)\approx\left(1-\frac{a\theta}{2\sigma^{4}}\right)h^{(2)}(t). \label{EQ7}
\end{equation}
This result generalises that obtained in the Lorentzian case where
$a=3G/\gamma^{3}$ and $a/(2\sigma^{4})=3/(4\tau_{g})$.
In the Gaussian case, we find $a/(2\sigma^{4})=2/(\pi\tau_{g})$,
a value not far from the previous one. This explains why the two impulse
responses are very close (see Fig.\ref{fig3}). Anyway they are very well approximated
by $h^{(3)}(t)$ in each case. Quite generally the maximum of $h^{(3)}(t)$
occurs at $\tau_{g}-\Delta t$ with $\Delta t\approx a/(2\sigma^{2})=\xi\sigma/2$
where $\xi$ is the skewness of $h(t)$. In all the above calculations
we have implicitly assumed that $\Delta t$ is small compared to $\sigma$.
This implies that $\left|\xi\right|\ll2$ but we checked that $h^{(3)}(t)$
remains a fairly good approximation of $h(t)$ for skewnesses up to $1$.
\begin{figure}[ht]
\begin{center}
\includegraphics[angle=0,width=8cm]{fig3.eps}
\caption{Comparison of the exact impulse-responses for the Lorentzian profile (full line) and the Gaussian profile
(dashed line) to the normal form $h^{(2)}(t)$ (dotted line). The fit by the improved forms $h^{(3)}(t)$ (not shown for clarity)
is nearly perfect. Parameters as in Fig.\ref{fig2}. Units as in Fig.\ref{fig1}. \label{fig3}}
\end{center}
\end{figure}
\section{NORMAL FORM OF THE TRANSMITTED PULSE}
The impulse response being known, the envelope of the transmitted
pulse is given by the relation $e_{out}(t)=h(t)\otimes e_{in}(t)$
and will generally differ from that of the incident pulse. However
the distortion will be negligible if the duration of $h(t)$ is small
compared to that of the pulse. We then get $h(t)\approx\delta(t-\tau_{g})\int_{-\infty}^{+\infty}h(t)dt$
and thus $e_{out}(t)\approx H_{0}\: e_{in}(t-\tau_{g})$. Since we
are interested in the situations where the time delay is large compared
to the pulse duration, we obtain the double condition $\sigma\ll\sigma_{in}\ll\tau_{g}$
which can only be met with extremely large gain parameters. Taking
for example $\sigma=\sigma_{in}/7$ and $\tau_{g}=7\sigma_{in}$ that
is $\tau_{g}=49\sigma$, we get $G\approx9600$ in the Lorentzian
case. Fig.\ref{fig4} shows the results obtained with these parameters. As expected
the pulse distortion is small, even in the sensitive case of a square-shaped
pulse. Note that the gain parameter considered is not unrealistic.
It is comparable to that used by Harris and co-workers in their pioneering
EIT experiment where $G\approx A\approx6000$ \cite{ref2}.
\begin{figure}[ht]
\begin{center}
\includegraphics[angle=0,width=8cm]{fig4.eps}
\caption{Propagation of a square-shaped (full line) and a Gaussian-shaped (dashed line) light-pulse with large delay
and low distortion. The parameters are chosen in order that $\sigma = \sigma_{in}/7$, $\tau_{g} = 7 \sigma_{in}$
and $H_{0} = 1$. The envelopes of the input pulses are given for reference. The time unit is the common \emph{rms}
duration $\sigma_{in}$ of the two incident pulses. As expected, the distortion of the Gaussian-shaped pulse
is negligible and there is only a slight softening of the rise and of the fall of the square-shaped pulse. \label{fig4}}
\end{center}
\end{figure}
However most of the direct demonstrations of large fractional pulse
delays have been achieved with smaller gain parameters, typically
ranging from 10 to 100, and with incident pulses whose \emph{rms}
duration $\sigma_{in}$ is comparable to and often smaller than $\sigma$.
Substantial pulse-reshaping is then expected. Suppose first that the
normal form $h^{(2)}(t)$ provides a good approximation of the impulse
response and that the incident pulse is Gaussian-shaped. We have then
$e_{in}(t)=\exp(-t^{2}/2\sigma^{2})$ and $e_{out}(t)$, convolution
of two Gaussian functions, is itself Gaussian.
It reads $e_{out}(t)=H_{0}(\sigma_{in}/\sigma_{out})\exp(-\theta^{2}/2\sigma_{out}^{2})$,
where $\sigma_{out}^{2}=\sigma_{in}^{2}+\sigma^{2}$ . The effect
of the medium on the light-pulse is simply to delay its maximum exactly
by the group delay, to broaden it by the factor $\sqrt{1+\sigma^{2}/\sigma_{in}^{2}}$
\cite{ref1,ref9} and to modify its amplitude accordingly in order to respect
the area theorem \cite{ref23}. Since this point is often overlooked, we
stress that the broadening mechanism radically differs from that occurring
in standard optical fibres \cite{ref25}. It originates in the $2^{nd}$
order gain-dispersion instead of in the group-velocity dispersion
and the pulse envelope remains real (no phase modulation or frequency
chirping). In fact, provided that $\sigma_{in}$ is smaller than or
comparable to $\sigma$ and that $\left|\xi_{in}\right|<1$, $e_{out}(t)$
is well approximated by a Gaussian function whatever the shape of
the incident pulse is. This is again a consequence of the central
limit theorem, the response $e_{out}(t)$ being obtained by an extra
convolution added to those used to build $h(t)$. The conditions on
$\sigma_{in}$ and $\left|\xi_{in}\right|$ originate in the requirement
that all the terms to convolute should have moments of the same order
of magnitude. We then obtain $e_{out}(t)\approx e_{out}^{(2)}(t)$
where $e_{out}^{(2)}(t)$ has the normal (Gaussian) form
\begin{equation}
e_{out}^{(2)}(t)=H_{0}\frac{S_{in}}{\sigma_{out}\sqrt{2\pi}}\exp\left(- \frac{\theta^{2}}{2\sigma_{out}^{2}}\right). \label{EQ8}
\end{equation}
This result extends the previous one and shows that incident pulses
having different shapes but the same area $S_{in}$ and the same variance
$\sigma_{in}$ are reshaped in the medium to give approximately the
same Gaussian-shaped pulse (Fig.\ref{fig5}). From an experimental viewpoint,
the dramatic reshaping of a square-shaped pulse has been clearly demonstrated
(but not commented on) by Turukhin \emph{et al}. \cite{ref8} in their EIT experiment
in a solid (see their figure 2c for 0 probe detuning). Pulse reshaping
is also apparent in the Brillouin scattering experiment by Song \emph{et
al}. \cite{ref13} where a flat-topped pulse is actually transformed into
a Gaussian-like pulse (see their figure 4 and compare the shapes obtained
for gains 0dB and 30dB).
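This reshaping towards the normal form can be reproduced by a direct discretisation of the transfer equation (a Python sketch with the parameters of Fig.\ref{fig5}: $\sigma=\sigma_{in}$, $\tau_{g}=4\sigma$, $H_{0}=1$ and a square-shaped input):

```python
import math

dt = 0.01
H0, sigma, tau_g = 1.0, 1.0, 4.0
sigma_in = 1.0
w = math.sqrt(12.0) * sigma_in       # square pulse whose rms duration is sigma_in

# square-shaped incident pulse centred on t = 0, sampled on [-3, 3]
t_in = [-3.0 + dt * k for k in range(601)]
e_in = [1.0 if abs(t) <= w / 2 else 0.0 for t in t_in]

# normal-form impulse response h2(t) sampled over +/- 6 sigma around tau_g
t_h = [tau_g - 6.0 + dt * k for k in range(1201)]
h2 = [H0 / (sigma * math.sqrt(2 * math.pi)) *
      math.exp(-(t - tau_g)**2 / (2 * sigma**2)) for t in t_h]

# discretised convolution e_out = h2 (x) e_in
e_out = [0.0] * (len(e_in) + len(h2) - 1)
for i, a in enumerate(e_in):
    if a:
        for j, b in enumerate(h2):
            e_out[i + j] += a * b * dt
t_out = [t_in[0] + t_h[0] + dt * k for k in range(len(e_out))]

S_in, S_out = sum(e_in) * dt, sum(e_out) * dt
mean = sum(t * v for t, v in zip(t_out, e_out)) * dt / S_out
var = sum((t - mean)**2 * v for t, v in zip(t_out, e_out)) * dt / S_out

print(S_out / S_in)   # ≈ 1 : area theorem, S_out = H0 S_in
print(mean)           # ≈ 4 : centre of gravity delayed by tau_g
print(var)            # ≈ 2 : sigma_out^2 = sigma_in^2 + sigma^2
```

The transmitted profile is close to the Gaussian $e_{out}^{(2)}(t)$ even though the input is square-shaped, with the area, delay and broadening predicted above.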
\begin{figure}[ht]
\begin{center}
\includegraphics[angle=0,width=8cm]{fig5.eps}
\caption{Example of pulse reshaping and broadening. The square-shaped (full line) and Gaussian-shaped (dashed line)
incident pulses originate nearly identical transmitted pulses, respectively close and very close to the normal form
$e_{out}^{(2)}(t)$ (dotted line). The time unit is $\sigma_{in}$ and the parameters are such that $\sigma = \sigma_{in}$,
$\tau_{g}/\sigma = 4 $ and $H_{0} = 1$. With this choice of $H_{0}$ and $\sigma$, the transmitted pulses have the
same area as the incident ones (area theorem) and an \emph{rms} duration $\sqrt{2}$ times larger.\label{fig5}}
\end{center}
\end{figure}
A more precise approximation $e_{out}^{(3)}(t)$ of $e_{out}(t)$
can be obtained by taking into account the effect of the $3^{rd}$
order cumulants. Using the approach already used to determine $h^{(3)}(t)$,
we get
\begin{equation}
e_{out}^{(3)}(t)\approx \left(1-\frac{a_{out} \theta}{2 \sigma_{out}^{4}}\right ) e_{out}^{(2)}(t) \label{EQ9}
\end{equation}
with $a_{out}=a_{in} + a$. When the incident pulse is symmetric ($a_{in}=0$
) as in most experiments, the skewness $\xi_{out}$ of the transmitted
pulse reads $\xi_{out}=a/\left(\sigma^{2}+\sigma_{in}^{2}\right)^{3/2}$
and the pulse maximum occurs at $\tau_{g}-\Delta t_{out}$ with $\Delta t_{out}=a/\left[2\left(\sigma^{2}+\sigma_{in}^{2}\right)\right]$.
Since $\left|\xi_{out}\right|<\left|\xi\right|$, the transmitted
pulse is closer to a normal form than the impulse response of the
medium. The previous results hold without restriction to the value
of $\sigma_{in}$ when $e_{in}(t)$ is Gaussian. In the case of a
Lorentzian line-profile, we easily get
\begin{equation}
\Delta t_{out}=\frac{3}{2\gamma\left(1+\sigma_{in}^{2}\gamma^{2}/G\right)}. \label{EQ10}
\end{equation}
We have compared the theoretical delay of the pulse maximum, namely
$\tau_{g}- \Delta t_{out}$, with the delay actually observed by Okawachi
\emph{et al.} in their Brillouin scattering experiment \cite{ref14}. Fig.\ref{fig6}
shows this delay as a function of the gain parameter $G$ for two
values of the pulse duration, respectively $\tau_{in} = 63\: ns$ ($\sigma_{in}\approx 38\: ns$)
and $\tau_{in}=15\: ns$ ($\sigma_{in}\approx 9\: ns$ ), with $\gamma = 0.22\: ns^{-1}$
($\gamma$ is the half of the full Brillouin linewidth $\Gamma_{B}$).
Without any adjustment of parameters, our analytical results satisfactorily
fit the observations. Note that the shifts $\Delta t_{out}$ are negligible
for the longer pulse but significant for the shorter one.
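The equivalence between the expression of $\Delta t_{out}$ above and the general form $\Delta t_{out}=a/\left[2\left(\sigma^{2}+\sigma_{in}^{2}\right)\right]$ with the Lorentzian cumulants is easily verified (a Python sketch using the Brillouin parameters quoted above; the gain values are illustrative):

```python
gamma = 0.22   # ns^-1, half of the full Brillouin linewidth Gamma_B

def delta_t_moments(G, sigma_in):
    # general form a / (2 (sigma^2 + sigma_in^2)) with Lorentzian cumulants
    a = 3 * G / gamma**3
    sigma2 = G / gamma**2
    return a / (2 * (sigma2 + sigma_in**2))

def delta_t_lorentzian(G, sigma_in):
    # closed form quoted in the text
    return 3 / (2 * gamma * (1 + sigma_in**2 * gamma**2 / G))

for sigma_in in (38.0, 9.0):     # rms durations (ns) of the two pulses
    print(delta_t_moments(11.0, sigma_in), delta_t_lorentzian(11.0, sigma_in))
```

The two expressions coincide identically, and the shift $\Delta t_{out}$ is seen to be much smaller for the longer pulse.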
\begin{figure}[ht]
\begin{center}
\includegraphics[angle=0,width=8cm]{fig6.eps}
\caption{Comparison of the delays of the maximum of the transmitted pulse observed in a Brillouin scattering
experiment \cite{ref14} (filled squares and circles) with our analytical predictions (full lines).\label{fig6}}
\end{center}
\end{figure}
\section{EFFECT OF THE TRANSMISSION BACKGROUND}
In the previous calculations, we have not taken into account the effect
of the instantaneous part $h_{i}(t)=\textrm{e}^{-A/2}\delta(t)$ of the impulse
response, arguing that its area is small compared to that of the delayed
part. As a matter of fact, $h_{i}(t)$ gives rise to a contribution $\textrm{e}^{-A/2}e_{in}(t)$
to $e_{out}(t)$, the amplitude of which is roughly $\textrm{e}^{G/2}\sigma_{in}/\sigma_{out}$
times smaller than that of the main part and is thus actually negligible
in every case of substantial delay ($G\gg1$, $\sigma_{out}$ and
$\sigma_{in}$ of the same order of magnitude). We should however remark
that this result rests on the assumption that the transmission peak
sits on a uniform background.
We will now examine the case where the transmission background is
not uniform. This happens in all the experiments where a transparency-window
is induced in a absorption-profile of finite width, in particular
in the EIT experiments. As an illustrative example we consider the
simplest $\Lambda$ arrangement with a resonant control field. From
the results given in \cite{ref4}, we easily get
\begin{equation}
\Gamma(\Omega)=-\frac{\gamma_{ba}\left(i\Omega+\gamma_{ca}\right)A/2}{\left(i\Omega+\gamma_{ba}\right)\left(i\Omega+\gamma_{ca}\right)+\Omega_{s}^{2}/4} \label{EQ11}
\end{equation}
where $\gamma_{ba}$ (resp. $\gamma_{ca}\ll\gamma_{ba}$) is the coherence
relaxation-rate for the probe transition (resp. for the forbidden
transition), $\Omega_{s}$ is the modulus of the Rabi frequency associated
with the control field and $A=\alpha_{0}L\gg1$ is the resonance optical
thickness in the absence of control field. The control field makes
the resonance gain rise from $\textrm{e}^{-A/2}\approx0$ to
$H_{0}=\exp\left(-A\gamma_{ba}\gamma_{ca}/\left[2\left(\gamma_{ba}\gamma_{ca}+\Omega_{s}^{2}/4\right)\right]\right)$
and a good transparency is induced when $\Omega_{s}$ is larger than
or comparable to $\sqrt{\gamma_{ba}\gamma_{ca}A}$. The width of the transparency-window
($\propto$ $\Omega_{s}$/$\sqrt{A}$ ) is then much smaller than that
of the absorption background ($\propto\gamma_{ba}\sqrt{A}$).
Without any approximation, the partial fraction decomposition of $\Gamma(\Omega)$
allows us to write the transfer-function of the medium as a product
of simpler functions, namely $H(\Omega)=H_{1}(\Omega)H_{2}(\Omega)$
with $H_{j}(\Omega)=\exp\left[C_{j}/\left(i\Omega+\gamma_{j}\right)\right]$.
According to the control power, the parameters $C_{1}$, $C_{2}$,
$\gamma_{1}$ and $\gamma_{2}$ are real or complex. When $\Omega_{s}<(\gamma_{ba}-\gamma_{ca})$,
$\gamma_{1}$ and $\gamma_{2}$ are real and positive whereas $C_{1}$
and $C_{2}$ are also real but of opposite sign. The EIT medium is
then equivalent to a medium with two Lorentzian lines both centred
at $\Omega=0$, respectively an absorption-line and a narrower gain-line.
It is also equivalent to a cascade of a gain medium and an absorbing
medium. When $\Omega_{s}>(\gamma_{ba}-\gamma_{ca})$, all the parameters
are complex with $\gamma_{2}=\gamma_{1}^{*}$ and $C_{2}=C_{1}^{*}$
. The two lines are now located at $\Omega=\pm\textrm{Im}(\gamma_{1})$.
They have the same intensity and the same width, but they are hybrid
in the sense that, due to the complex nature of $C_{1}$ and $C_{2}$,
their absorption and dispersion profiles are both the sum of an absorption-like
and a dispersion-like profile. We incidentally note that the parameters
used to obtain figure 8 of \cite{ref4} correspond to such a situation.
In all cases, the impulse responses associated with $H_{1}(\Omega)$
and $H_{2}(\Omega)$ have analytical expressions \cite{ref23,ref26} and
the impulse response $h(t)$ of the medium is their convolution product.
This general analysis is satisfactory from a formal viewpoint. It
provides some physical insight into the EIT mechanisms but is not
really operational to determine the shape of the transmitted pulse.
From this viewpoint, a fruitful approach consists in exploiting the
fact that the medium is opaque except in the narrow region of induced
transparency and in the far wings of the background absorption-line
(Fig.\ref{fig7}).
\begin{figure}[ht]
\begin{center}
\includegraphics[angle=0,width=8cm]{fig7.eps}
\caption{Gain profile in an EIT experiment and expanded view of its central part (upper scale). In the wings
as in the transparency window, the exact gain $ \lvert H(\Omega) \rvert$ (full line) is scarcely distinguishable
from the approximate form $ \lvert H^{(2)}(\Omega)+ H_{off}(\Omega)\rvert$ (dashed line).
Recall that $ \lvert H(\Omega) \rvert \rightarrow 1$ in the far wings. The parameters are $A=63$, $\gamma_{ba}/2\pi=5$ MHz,
$\gamma_{ca}/\gamma_{ba}= 2.2\times10^{-3}$ and $\Omega_{s}/\gamma_{ba}= 0.73$. The frequency unit is $10^{6}$ rad/s
(angular frequency).\label{fig7}}
\end{center}
\end{figure}
In the first region, $H(\Omega)$ is well approximated by
the forms $H^{(2)}(\Omega)$ or, if necessary, $H^{(3)}(\Omega)$,
obtained by keeping only the first two or three cumulants of $H(\Omega)$
as in the case of a uniform background. We only give here the simplified
expressions of these cumulants when $\gamma_{ca}\ll\gamma_{ba}$ and
$\Omega_{s}^{2}\gg\gamma_{ca}\gamma_{ba}$ (conditions of good induced
transparency). We then get $\kappa_{1}=\tau_{g}\approx2A\gamma_{ba}$/$\Omega_{s}^{2}$,
$\kappa_{2}=\sigma^{2}\approx16A\gamma_{ba}^{2}$/$\Omega_{s}^{4}$
and $\kappa_{3}=a\approx48A\gamma_{ba}(4\gamma_{ba}^{2}-\Omega_{s}^{2})$/$\Omega_{s}^{6}$
with $H_{0}\approx\exp\left(-2A\gamma_{ba}\gamma_{ca}/\Omega_{s}^{2}\right)$.
In the far wings $\left|\Omega\right|\gg\Omega_{s}$ and $H(\Omega)\approx H_{off}(\Omega)$,
where $H_{off}(\Omega)=\exp\left(-A/[2(1+i\Omega/\gamma_{ba})]\right)$
is the transfer-function when the control field is off. Finally we
get the relation $H(\Omega)\approx H^{(p)}(\Omega)+H_{off}(\Omega)$
with $p=2$ or $3$, valid at every frequency. Fig.\ref{fig7}, obtained for
typical physical parameters, shows that $\left|H^{(2)}(\Omega)+H_{off}(\Omega)\right|$
already provides a good approximation of the exact gain. Now reduced
to a simple sum instead of a convolution product, the medium impulse-response
reads $h(t)=h^{(p)}(t)+h_{off}(t)$ where $h_{off}(t)$, associated
with a Lorentzian absorption-line, has an analytical expression \cite{ref23}.
As in the case of a gain-line, this expression can be retrieved from
$H_{off}(\Omega)$ by using standard procedures of Laplace transforms
\cite{ref20}. It reads
\begin{equation}
h_{off}(t)=\delta(t)-\gamma_{ba}A\frac{J_{1}(\sqrt{2A\gamma_{ba}t})}{\sqrt{2A\gamma_{ba}t}}\exp\left(-\gamma_{ba}t\right)U(t)\label{EQ12}
\end{equation}
where $J_{1}(u)$ designates the ordinary $1^{st}$ order Bessel function.
The envelope $e_{out}(t)$ of the transmitted pulse will be thus the
sum of two terms. The first one is the approximate solution $e_{out}^{(p)}(t)$
obtained by the cumulants procedure. The second one reads $e_{off}(t)=h_{off}(t)\otimes e_{in}(t)$.
It is worth remarking that $h_{off}(t)$ is rapidly oscillating (characteristic
time $\propto1/A\gamma_{ba}$) and that its area is very small ($\int_{-\infty}^{+\infty}h_{off}(t)dt=H_{off}(0)=\textrm{e}^{-A/2}$
). This entails that $e_{off}(t)$ will have a negligible amplitude
($\propto e^{-A/2}$) when $e_{in}(t)$ is smooth enough so that the
far wings of its Fourier spectrum do not overlap those of the absorption-line.
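As a consistency check (ours, not part of the original analysis), the area relation $\int_{-\infty}^{+\infty}h_{off}(t)dt=\textrm{e}^{-A/2}$ can be verified numerically. With the substitution $u=\sqrt{2A\gamma_{ba}t}$, the Bessel term of Eq.~(\ref{EQ12}) integrates to $\int_{0}^{\infty}J_{1}(u)\,\textrm{e}^{-u^{2}/2A}du=1-\textrm{e}^{-A/2}$, independently of $\gamma_{ba}$. The sketch below evaluates this with a power-series expansion of $J_{1}$; the value $A=4$ is an arbitrary illustrative choice:

```python
import math

# Power series of the ordinary Bessel function:
# J1(u) = sum_m (-1)^m / (m! (m+1)!) * (u/2)^(2m+1)
_COEF = [(-1)**m / (math.factorial(m) * math.factorial(m + 1)) for m in range(40)]

def J1(u):
    half, s, p = u / 2.0, 0.0, u / 2.0
    for c in _COEF:
        s += c * p
        p *= half * half
    return s

def hoff_area(A, umax=20.0, n=8000):
    # Area of h_off(t): the delta term contributes 1; substituting
    # u = sqrt(2*A*gba*t) in the Bessel term leaves
    #   1 - integral_0^inf J1(u) exp(-u^2/(2A)) du,
    # evaluated here by the trapezoidal rule.
    du = umax / n
    s = 0.0
    for i in range(n + 1):
        u = i * du
        w = 0.5 if i in (0, n) else 1.0
        s += w * J1(u) * math.exp(-u * u / (2.0 * A)) * du
    return 1.0 - s

A = 4.0
print(hoff_area(A), math.exp(-A / 2.0))  # both ~0.135
```

The agreement confirms that the small area $\textrm{e}^{-A/2}$ is entirely fixed by the on-resonance transmission $H_{off}(0)$.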
\begin{figure}[ht]
\begin{center}
\includegraphics[angle=0,width=8cm]{fig8.eps}
\caption{Propagation of a Gaussian-shaped pulse in the EIT experiment. The parameters are as in Fig.\ref{fig7} with
$ \sigma_{in}=1.5 \: \mu s $. The full and dashed lines respectively are the exact intensity-profile of the pulse and the
normal form. The locations of the two maxima differ by $ 0.2 \: \mu s $ in agreement with
the relation $ \Delta t_{out}=a/2 \sigma_{out}^{2} $. Inset : the corresponding pulse-envelopes
for $-5 \: \mu s \leq t \leq 15 \: \mu s$ .\label{fig8}}
\end{center}
\end{figure}
Fig.\ref{fig8} shows the result obtained with a Gaussian-shaped incident-pulse.
With $A$, $\gamma_{ba}$ and $\sigma_{in}$ given, we have chosen the
other parameters in order to reproduce the location and the amplitude
of the maximum of the transmitted pulse in the celebrated experiment
by Hau \emph{et al.} \cite{ref7}. We then get $\sigma_{out}/\sigma_{in}\approx1.6$,
a broadening consistent with the observations, and $\xi_{out}\approx0.16$.
The asymmetry being very slight, the delay of the maximum is very
close to $\tau_{g}$ and $e_{out}(t)$ is well fitted by the normal
(Gaussian) form $e_{out}^{(2)}(t)$. A perfect fit is obtained by
using the improved form $e_{out}^{(3)}(t)$. The contribution $e_{off}(t)$
associated with $h_{off}(t)$ is actually too small to be visible.
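The numbers quoted above and in the caption of Fig.~\ref{fig8} follow directly from the simplified cumulants $\tau_{g}$, $\sigma^{2}$ and $a$ given earlier, since cumulants add under convolution and a Gaussian input has zero skewness. The following sketch (our own check) reproduces them with the parameters of Fig.~\ref{fig7} and $\sigma_{in}=1.5\:\mu$s:

```python
import math

gba = 2.0 * math.pi * 5e6        # gamma_ba (rad/s), i.e. gamma_ba/2pi = 5 MHz
A = 63.0
Os = 0.73 * gba                  # Omega_s
sig_in = 1.5e-6                  # rms duration of the incident pulse (s)

tau_g  = 2.0 * A * gba / Os**2                          # kappa_1 (group delay)
sigma2 = 16.0 * A * gba**2 / Os**4                      # kappa_2
a      = 48.0 * A * gba * (4.0*gba**2 - Os**2) / Os**6  # kappa_3

# Cumulants of the output = cumulants of input + cumulants of the medium.
sig_out = math.sqrt(sig_in**2 + sigma2)
print(sig_out / sig_in)          # broadening ~1.6
print(a / sig_out**3)            # xi_out ~0.16
print(a / (2.0 * sig_out**2))    # shift of the maximum ~0.2e-6 s
```

These match the broadening $\sigma_{out}/\sigma_{in}\approx1.6$, the skewness $\xi_{out}\approx0.16$ and the $0.2\:\mu$s shift quoted for Fig.~\ref{fig8}.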
Conversely $h_{off}(t)$ will be responsible for the generation of
short transients when $e_{in}(t)$ comprises localised defects. As
expected and recently discussed about the EIT experiments \cite{ref27},
the front of these transients will propagate at the velocity $c$
(instantaneously in our local time picture). Their peak amplitude
will be especially large when the defects consist of discontinuities.
Consider again a square-shaped incident-pulse, the total duration
$2\tau_{p}$ and the amplitude $\eta$ of which are such that its area and its
variance equal those of the Gaussian-shaped pulse (Fig.\ref{fig9}).
\begin{figure}[ht]
\begin{center}
\includegraphics[angle=0,width=8cm]{fig9.eps}
\caption{Propagation of a square-shaped pulse in the EIT experiment. Parameters as in the two previous figures. The upper
and lower curves are the envelopes of the incident and transmitted pulses. The peak amplitude of the transients is slightly
smaller than its theoretical value due to the finite time-resolution of the computations (0.4 ns). Inset : the first transient
expanded on a $0.2 \mu s$ time-interval (bottom) and the same ($\times 10$) after passage through a $ 1^{st}$ order filter of
time-constant $ \sigma_{out}/100$ (top). Such a filtering does not significantly affect the smooth part of the pulse which keeps very
close to that obtained with an incident Gaussian-shaped pulse but reduces the amplitude (resp. the intensity) of the transient by a
factor of about 30 (resp. 900). \label{fig9}}
\end{center}
\end{figure}
$e_{off}(t)$ is then easily derived from $h_{off}(t)$. It reads
\begin{equation}
e_{off}(t)=\eta\left[f(t+\tau_{p})-f(t-\tau_{p})\right]\label{EQ13}
\end{equation}
with
\begin{equation}
f(t')=U(t')-\gamma_{ba}A\int_{0}^{t'}\frac{J_{1}(\sqrt{2A\gamma_{ba}x})}{\sqrt{2A\gamma_{ba}x}}\exp\left(-\gamma_{ba}x\right)dx. \label{EQ14}
\end{equation}
Each discontinuity in $e_{in}(t)$ actually originates a large transient.
Its initial amplitude is equal to that of the incident pulse and its
successive intensity maxima occur at the instants $j_{1n}^{2}/2A\gamma_{ba}$
later, $j_{1n}$ being the $n^{th}$ zero of $J_{1}(u)$ \cite{ref28}.
Note that the amplitude of the transients exceeds that of the smooth
part of $e_{out}(t)$. In a real experiment however the finite values
of the rise and fall times of the incident pulse and of the detection
bandwidth will generally limit the importance of the transients. By
a deliberate reduction of the detection bandwidth, it is even possible
to bring their intensity to a very low level without significantly
affecting the delayed Gaussian-like part (see inset of Fig.\ref{fig9}).
The results obtained on the model EIT-arrangement hold for an extended
class of systems having a transparency-window in a wide absorption
profile. They rest on only three assumptions: (i) $\Gamma(\Omega)$
is Lorentzian in the far wings of the absorption profile, (ii) the
opaque regions are much wider than the transparency-window, (iii) the
transfer-function does not significantly deviate from the normal form
in the transparency-window. The first condition (i) is generally met
even when $\Gamma(\Omega)$ is not Lorentzian in its central part.
In any case, it is not essential. If it is not met, the detailed shape
of the transients is modified but not their main features (instantaneous
transmission, duration proportional to the inverse of the spectral
width of the opaque regions). The conditions (ii) and (iii), which
are closely related, are met in the EIT experiments when the medium
transmission is good at the frequency of the optical carrier (see
before) but this is not always sufficient. As a counter-example we
consider the experiment performed by Tanaka \emph{et al}. in an atomic vapour
with a natural transparency-window between two strong absorption lines
\cite{ref29}. The complex gain-factor reads
\begin{equation}
\Gamma(\Omega)=-\frac{A}{2}\left[\frac{1}{1+i(\Omega+\Delta)/\gamma}+\frac{1}{1+i(\Omega-\Delta)/\gamma}\right]\label{EQ15}
\end{equation}
where $2\Delta$ is the doublet splitting. Despite an apparent similarity,
the associated transfer-function dramatically differs from that obtained
in EIT when $\Omega_{s}>(\gamma_{ba}-\gamma_{ca})$. Indeed the two
involved lines are here purely Lorentzian (not hybrid) and a good
transparency at $\Omega=0$ is achieved only if $\Delta\gg\gamma$.
We then get $H_{0}\approx\exp\left(-A\gamma^{2}/\Delta^{2}\right)$,
$\tau_{g}\approx A\gamma/\Delta^{2}$ , $\sigma^{2}\approx6A\gamma^{2}/\Delta^{4}$,
$a\approx-6A\gamma/\Delta^{4}$ and $\xi\approx-\Delta^{2}/(\gamma^{2}\sqrt{6A})$.
Choosing the physical parameters such that $H_{0}$, $\tau_{g}$ and
$\sigma^{2}$ equal their values in the EIT experiment, we actually
obtain a quite different gain profile with opaque regions whose width
is smaller than that of the transparency-window (Fig.\ref{fig10}).
\begin{figure}[ht]
\begin{center}
\includegraphics[angle=0,width=8cm]{fig10.eps}
\caption{Gain-profile $ \lvert H(\Omega) \rvert$ for the two absorption-lines arrangement (full line) compared to the
corresponding normal gain-profile $ \lvert H^{(2)}(\Omega) \rvert$ (dotted line). $H_{0}$, $\tau_{g}$, $\sigma$ and frequency
unit as in Fig.\ref{fig7}. Inset : envelope of the transmitted pulse for a Gaussian-shaped incident pulse (full line) compared
to that obtained with the EIT-arrangement (dotted line). $\sigma_{in}=1.5\:\mu s$ \label{fig10}}
\end{center}
\end{figure}
This entails that the approximation $\left|H(\Omega)\right|\approx\left|H^{(2)}(\Omega)\right|$
only works in the immediate vicinity of $\Omega=0$. The same holds
for the law $\Phi(\Omega)\propto\Omega$, the skewness being very
large ($\xi\approx-7.6$). In such conditions, even a Gaussian-shaped
pulse is strongly distorted (see inset of Fig.\ref{fig10}). In fact the two
absorption-lines arrangement allows one to attain large fractional
delays $\tau_{g}/\sigma_{in}$ with moderate distortion and broadening,
but this requires much larger absorption parameters and forces one
to accept a lower transmission. For example, Tanaka \emph{et al.} succeeded
in obtaining $\tau_{g}/\sigma_{in}\approx13$ with Gaussian-like pulses
(see Fig.4c in \cite{ref29}) but the peak-intensity of the transmitted
pulse was 75 times smaller than that of the incident pulse. Their
results are well reproduced with our two-lines model by taking $A=2.6\times10^{4}$ and
$\Delta/\gamma=110$. We then get $\sigma^{2}/\sigma_{in}^{2}\approx1/25\ll1$
and we actually are in a case of low distortion as previously discussed
(see Fig.\ref{fig4}). However, pulses with discontinuities are excluded. Since
$H(\infty)\gg H_{0}$, the resulting transients (not delayed) would
indeed be much larger than the delayed part of the transmitted pulse
and would obscure it. Moreover, due to the narrowness of the opaque
regions, the time scales of the transients and of the delayed part
do not considerably differ and it is thus impossible to filter out
the former without denaturing the latter.
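The figures quoted for the two-lines arrangement can be checked numerically (our own sketch; the matching conditions are those stated above). Imposing that $H_{0}$, $\tau_{g}$ and $\sigma^{2}$ equal their EIT values fixes $A$ and $\Delta/\gamma$, and hence the skewness $\xi$; the Tanaka \emph{et al.} parameters similarly give the variance ratio:

```python
import math

# EIT reference values (parameters of Fig. 7), in units of 1/gamma_ba
A_eit, r, os = 63.0, 2.2e-3, 0.73       # A, gamma_ca/gamma_ba, Omega_s/gamma_ba
lnH0 = 2.0 * A_eit * r / os**2          # -ln(H0)
tg   = 2.0 * A_eit / os**2              # tau_g
s2   = 16.0 * A_eit / os**4             # sigma^2

# Two-lines model: tau_g = A g/D^2, sigma^2 = 6 A g^2/D^4, -ln H0 = A g^2/D^2
A  = 6.0 * tg**2 / s2                   # from sigma^2 / tau_g^2 = 6/A
D2 = A / lnH0                           # (Delta/gamma)^2
xi = -D2 / math.sqrt(6.0 * A)
print(A, math.sqrt(D2), xi)             # ~94, ~13.5, xi ~ -7.6

# Tanaka et al.: A = 2.6e4, Delta/gamma = 110, fractional delay tau_g/sigma_in = 13
A_t, D_t = 2.6e4, 110.0
tg_t   = A_t / D_t**2                   # in units of 1/gamma
s2_t   = 6.0 * A_t / D_t**4
sig_in = tg_t / 13.0
print(s2_t / sig_in**2)                 # ~1/25
```

Both the quoted skewness $\xi\approx-7.6$ and the ratio $\sigma^{2}/\sigma_{in}^{2}\approx1/25$ are recovered.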
\section{SUMMARY AND DISCUSSION}
Privileging the time-domain analysis, we have studied the linear propagation
of light pulses, the frequency of which coincides with that of a pronounced
maximum in the transmission of the medium. An important point is that
substantial pulse-delays are only attained when the corresponding
transmission exceeds the minimum one by a very large factor $C$ (contrast).
The impulse response of the medium then tends to a normal (Gaussian)
form, irrespective of the line profile associated with the transmission
peak. The propagation of arbitrarily shaped light-pulses with significant
delays and low distortion is possible when the \emph{rms} duration
$\sigma$ of the medium impulse-response is small compared to that
of the incident pulse ($\sigma_{in}$), which should itself be small
compared to the group delay $\tau_{g}$. The fulfilment of this double
condition requires systems where $C$ is extremely large, typically
several tens of thousands of $dB$ in a logarithmic scale. Systems
with such contrast have actually been used \cite{ref2} but, in most slow-light
experiments, $C$ ranges from 30 to 600 dB. Significant fractional
time-delays $\tau_{g}/\sigma_{in}$ remain attainable with such values
of $C$ by using incident pulses whose duration is comparable
to or smaller than $\sigma$. Like the medium impulse-response, the
transmitted pulse then tends to acquire a normal (Gaussian) shape
whatever its initial shape is. This reshaping is particularly striking
when the incident pulse is square-shaped but is reduced to a simple
broadening when the latter is itself Gaussian-shaped. Despite its
asymptotic character, the normal form generally provides a good approximation
of the shape of the transmitted pulse. More precise shapes are obtained
by a perturbation method, allowing us in particular to specify how
much the delay of the pulse-maximum deviates from the group delay.
All these results were first established by assuming that the
transmission peak sits on a uniform background. We have shown that
they also apply when the transmission peak is associated with a transparency
window in an absorption-profile of finite width. This however requires
that the nearly opaque regions flanking the transparency window be
considerably wider than the latter. Other things being equal, there
are then no differences between the cases of uniform and non-uniform
transmission-backgrounds, at least when the envelope of the incident
pulse is smooth. Conversely localised defects in this envelope will
be responsible for the generation of very short transients which complement
the normal (Gaussian) part of the signal. The front of the transients
is instantaneously propagated in our local time picture (that is at
the velocity $c$ in a dilute sample). In extreme cases, their amplitude
may be comparable to that of the delayed signal but, due to their
location and their duration, they can easily be eliminated without
altering the latter.
The slow and fast light experiments have a common feature. In both
cases, the observation of significant effects requires media with a very large
contrast between the maximum and the minimum of transmission. This results from the
causality principle and implies severe limits to the effects attainable in fast-light
experiments, whatever the involved system is \cite{ref30}. From this viewpoint the slow-light case is obviously less pathological and the
constraints, although real, are much softer.
\section{ACKNOWLEDGEMENTS}
Laboratoire PhLAM is Unit\'{e} Mixte de Recherche de l'Universit\'{e} de Lille I et du CNRS (UMR 8523).
CERLA is F\'{e}d\'{e}ration de Recherche du CNRS
(FR 2416).
\section{I. Non-reciprocal topological modes in 1D}
In the main text, we have made references to topological modes in non-reciprocal systems, where a competition exists between topological localization and the skin effect. To illustrate this, and also to understand the 4-band model in the main text more deeply, consider the non-reciprocal Su-Schrieffer-Heeger (SSH) model
\begin{equation}
H_{\text{SSH}}(k)=(t+t'\cos k)\sigma_x+(i\delta+t'\sin k)\sigma_y
\end{equation}
with $\sigma_x,\sigma_y$ the Pauli matrices. The effect of non-reciprocity is to make the SSH chain ``look different'' from either end. Without $\delta$, a Hermitian SSH chain has a topological boundary mode at both ends if $|t|<|t'|$ (assuming sublattices with ABAB...AB termination), with both left/right boundary modes having a decay length $L=-\left[\log|\frac{t}{t'}|\right]^{-1}$. But when the non-reciprocal hopping $\delta$ is present, the non-Hermitian SSH chain becomes topologically nontrivial if~\cite{yin2018geometrical,yao2018edge,lee2018anatomy} $\sqrt{t^2-\delta^2}<|t'|$, with generically unequal left/right decay lengths
\begin{eqnarray}
L_{\text{left}}&=& -\left[\log\left|\frac{t-\delta}{t'}\right|\right]^{-1}\notag\\
L_{\text{right}}&=& -\left[\log\left|\frac{t+\delta}{t'}\right|\right]^{-1}
\end{eqnarray}
as shown in Fig.~\ref{fig:corner1}.
At small non-reciprocity $\delta$, we have $L^{-1}_{\text{left}/\text{right}}\approx \mp \log |t/t'|$, i.e. almost identically decaying modes on each boundary.
But at large $\delta$, both modes can accumulate on the \emph{same} boundary: the right/left boundary mode disappears and reappears on the left/right when $\pm t\,\delta>0$ and $|t\pm \delta|>|t'|$.
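A quick numerical illustration of these decay lengths (our own sketch, with arbitrary illustrative parameter values):

```python
import math

def decay_lengths(t, tp, delta):
    """Left/right decay lengths of the non-reciprocal SSH boundary modes,
    L_left = -1/log|(t - delta)/t'| and L_right = -1/log|(t + delta)/t'|.
    A negative value signals that the mode has migrated to the opposite edge."""
    L_left  = -1.0 / math.log(abs((t - delta) / tp))
    L_right = -1.0 / math.log(abs((t + delta) / tp))
    return L_left, L_right

# Small non-reciprocity: nearly identical decay on the two edges
Ll, Lr = decay_lengths(t=0.5, tp=1.0, delta=0.01)
print(Ll, Lr)            # ~1.40, ~1.49

# Large non-reciprocity: |t + delta| > |t'|, the right mode flips to the left edge
Ll2, Lr2 = decay_lengths(t=0.5, tp=1.0, delta=1.0)
print(Ll2, Lr2)          # second value negative
```

This reproduces the behaviour of Fig.~\ref{fig:corner1}: almost symmetric localization at small $\delta$, and accumulation of both modes on the same boundary at large $\delta$.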
\begin{figure}[H]
\centering
\begin{minipage}{.65\linewidth}
\subfloat[]{\includegraphics[width=.34\linewidth]{corner_gamma_001.jpg}}
\subfloat[]{\includegraphics[width=.33\linewidth]{corner_gamma_050.jpg}}
\subfloat[]{\includegraphics[width=.31\linewidth]{corner_gamma_100.jpg}}
\end{minipage}
\caption{a-c) Plots of $L_{\text{left}}$ (blue) and $L_{\text{right}}$ (red) as a function of $t$ for $\delta=0.01,0.5,1$ and $t'=1$, corresponding to subfigures a, b and c respectively. The topological region is shaded in yellow. For very small $\delta$, the left and right modes have almost equal decay lengths. As $\delta$ grows, the decay length can become negative, which physically corresponds to a positive decay length on the other edge. At large $\delta$, i.e. $\delta=1$ (c), both edge modes exist on the left, leaving none on the right, even in the topological (yellow) region. }
\label{fig:corner1}
\end{figure}
\section{II. Second-order skin modes}
Consider a 2D non-reciprocal Hamiltonian $H({\bm k})$ with ${\bm k}=(k_x, k_y)$.
1D edge skin modes are obtained by taking OBCs in one direction, say $\hat{x}$, and can be effectively represented by a complex non-Bloch momentum mode with inverse decay length $\kappa_{x}({\bm k})$ determined by Eq.~(2) in the main text. $\kappa_{x}({\bm k})$ generically depends on $\bm k$ because the PBC modes at different $\bm k$ may ``collapse'' into skin modes under different amounts of imaginary flux.
The effective Hamiltonian for $x$-OBCs is hence given by $H\left(k_x+i\kappa_x({\bm k}), k_y\right)$.
Likewise, if OBCs are also taken in the $y$-direction, resultant skin corner modes will be governed by the effective Hamiltonian
\begin{equation}
H(\tilde{\bm k})=H\left(k_x+i\kappa_x({\bm k}), k_y+i\kappa_y({\bm k})\right)
\label{H2nd}
\end{equation}
with the inverse decay length $\kappa_y({\bm k})$ similarly determined by Eq.~(2) in the main text.
In higher dimensions, this procedure can be repeated ad infinitum until 0D corner modes with inverse decay lengths $\kappa_x({\bm k})$, $\kappa_y({\bm k})\,...$ are obtained.
Here, we provide a more detailed derivation of the second-order skin mode results of the simplest illustrative model from Fig.~1(b) of the main text. It is a 2D monoatomic lattice model given by
\begin{equation}
H_{\text{2D skin}}(\bold k)=t^x_+e^{-ik_x}+t^x_-e^{ik_x}+t^y_+e^{-ik_y}+t^y_-e^{ik_y}.
\label{skin2Dapp}
\end{equation}
As explained, under PBCs along both directions (double PBCs), the spectrum consists of a series of closed $k_x$ spectral loops parametrized by $k_y$. Equivalently, it can also be considered as a series of closed $k_y$ spectral loops parametrized by $k_x$. As such, it traces out a projection of a torus in the complex $E$ plane. This is illustrated in the center panel of Fig.~\ref{fig:skin2app}, with model parameters slightly deformed from the main text for additional graphical clarity.
When taking boundary conditions, the system can be taken as a 1D model in the direction of the OBC, with the other momentum taken as parameters. From the main text on the 1D 1-band model, OBCs in the $x$-direction ($x$-OBC/$y$-PBC) yield $k_y$-dependent skin modes in 1D given by
\begin{align}
E'&=E-t^y_+e^{-ik_y}-t^y_-e^{ik_y}\in \mathbb{R},\notag\\
|E'|&<2\sqrt{t^x_+t^x_-}.
\end{align}
As a parameter, $k_y$ modifies the effective energy $E'$ but does not affect $\kappa_x$. This can be seen in the top panel of Fig.~\ref{fig:skin2app}, where the $x$-OBC/$y$-PBC spectrum (gray) consists of straight lines (effective 1D skin modes) with centers displaced by $t^y_+e^{-ik_y}+t^y_-e^{ik_y}$. Further introducing OBCs also in the $y$-direction (double OBC), these effective 1D skin modes will accumulate at the top or bottom edges, forming the 0D skin corner modes. Geometrically, each eigenmode on the gray straight lines lies in an ellipse parametrized by $k_y$, and can thus still undergo another iteration of the skin effect pumping even though they already belong to an $x$-OBC spectra.
\begin{figure}
\centering
\subfloat[]{\includegraphics[width=.6 \linewidth]{2D_skin.jpg}}
\caption{Higher-order spectrum of $H_{\text{2D skin}}$ under different boundary conditions. Parameters are $t^x_+=0.9,t^x_-=0.7,t^y_+=0.3,t^y_-=1$. In the bottom panel, the skin modes (yellow) are skin modes of the skin modes (gray) of the PBC modes (brown), and lie on the real axis due to reflection symmetry about it.}
\label{fig:skin2app}
\end{figure}
Repeating an almost exact computation as in the 1D derivation, but with $E$ replaced by $E'$, we obtain
\begin{align}
|E|&<2(\sqrt{t^x_+t^x_-}+\sqrt{t^y_+t^y_-}),\qquad E\in \mathbb{R}\end{align}
as the loci of the skin corner modes, which lies in the interior of the $x$-OBC/$y$-PBC loops [yellow line in the bottom panel of Fig.~\ref{fig:skin2app}]. Their spatial location depends on the sign of the decay lengths $L^w_{\text{2D skin}}=\kappa^{-1}_w=\left[\log\sqrt{|t^w_+/t^w_-|}\right]^{-1}$, $w=x,y$: e.g. skin modes accumulate on the top right corner if $L^x_{\text{2D skin}}>0$ and $L^y_{\text{2D skin}}>0$.
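For this separable one-band model, the reality and the bound of the corner-mode spectrum can be made explicit: under double OBC, the gauge transformation $\psi(x,y)\rightarrow r_x^{\,x}r_y^{\,y}\psi(x,y)$ with $r_w=\sqrt{t^w_+/t^w_-}$ maps each direction onto a Hermitian open chain (a standard Hatano--Nelson argument, valid for $t^w_+t^w_->0$), so the eigenvalues are sums of two real cosine bands. A sketch of this check (ours, not from the main text):

```python
import math

def obc_spectrum(txp, txm, typ, tym, Nx, Ny):
    # Double-OBC eigenvalues of H_2Dskin on an Nx x Ny lattice after the
    # similarity transformation r_w = sqrt(t^w_+ / t^w_-):
    #   E = 2 sqrt(txp*txm) cos(m pi/(Nx+1)) + 2 sqrt(typ*tym) cos(n pi/(Ny+1))
    ex = [2.0*math.sqrt(txp*txm)*math.cos(m*math.pi/(Nx+1)) for m in range(1, Nx+1)]
    ey = [2.0*math.sqrt(typ*tym)*math.cos(n*math.pi/(Ny+1)) for n in range(1, Ny+1)]
    return [a + b for a in ex for b in ey]

# Parameters of Fig. S2: t^x_+ = 0.9, t^x_- = 0.7, t^y_+ = 0.3, t^y_- = 1
E = obc_spectrum(0.9, 0.7, 0.3, 1.0, 20, 20)
bound = 2.0 * (math.sqrt(0.9*0.7) + math.sqrt(0.3*1.0))
print(max(abs(e) for e in E), bound)   # real spectrum filling |E| < 2.68...
```

All eigenvalues are real and fill the segment $|E|<2(\sqrt{t^x_+t^x_-}+\sqrt{t^y_+t^y_-})$ as the lattice size grows, in agreement with the loci derived above.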
Note that while the inverse decay lengths $\kappa_v$ are independent of Bloch momentum $\bold k$ in this simple model, in general they are not. Suppose we add further couplings beyond the nearest-neighbor terms $e^{\pm ik}$, or multiple bands, such that the PBC loops do not possess any geometric symmetry. Obviously then, their interior complex flux threading trajectories will no longer be symmetrical, and will terminate at degenerate arcs at different complex distances ($\kappa(\bold k)$) that are dependent on the original PBC starting point $\bold k$. An example with more than 1 band is shown in Fig.~\ref{fig:2ndorderapp}, which features the 4-band model $H_\text{4-band}$ from the main text with second-order skin corner modes, $t'$ adjusted to $0.5$ for greater graphical clarity. Its PBC loops cannot be easily deformed into rotationally-symmetric shapes, unlike ellipses, hence resulting in $\bold k$-dependent $\kappa_x(\bold k)$ and $\kappa_y(\bold k)$.
\begin{figure}[H]
\centering
\subfloat[]{\includegraphics[width=.75 \linewidth]{fig9_tt=05.pdf}}
\caption{Double PBC (brown), $x$-OBC/$y$-PBC (gray) and double OBC (yellow) spectra of $H_\text{4-band}$ of the main text with $t'=0.5$, $t_x=t_y=1$ and $\delta_{1,2,3,4}=0.4,-0.4,-0.8,0.8$. The second-order skin effect is manifestly presented too, but with $\bold k$-dependent $\kappa_x(\bold k)$ and $\kappa_y(\bold k)$. }
\label{fig:2ndorderapp}
\end{figure}
\section{III. PBC-OBC interpolations for $H_\text{4-band}$}
We now explicitly demonstrate how the PBC-OBC interpolation can be visualized using the technique of imaginary flux evolution, as first put forth by Ref.~\cite{lee2018anatomy}. To apply it, the key concept to understand is that the PBC-OBC evolution entails PBC eigenmodes moving from the PBC loop (with real momentum) into its interior, terminating only when the loop degenerates into arcs, lines or points. During this evolution, the eigenmodes necessarily become spatially localized (non-Bloch), because all extended Bloch states must lie on the PBC loop, where $\text{Im}\,\tilde k=\kappa=0$. Note that this evolution is only nontrivial in non-reciprocal lattices, since a reciprocal spectrum satisfies $E(k)=E(-k)$ and hence necessarily retraces back onto itself in an arc/line, which has no interior.
\begin{figure}[H]
\centering
\subfloat[]{\includegraphics[width=.6 \linewidth]{fig9_tt=2_flow.jpg}}\\
\subfloat[]{\includegraphics[width=.6 \linewidth]{fig10_tt=2_flow.jpg}}\\
\subfloat[]{\includegraphics[width=.6 \linewidth]{fig12_tt=2_flow.jpg}}
\caption{The $x$-OBC/$y$-PBC (gray) to double OBC (yellow) spectral flows (blue-purple) for the cases in Figs.~2d (a), 2g (b) and Fig.~3d (c) of the main text. Here the flow is taken with respect to the $y$-direction OBC, with the effective system being the 1D $y$-direction chain of $x$-OBC supercells. Notably, only the 1D topological edge modes (gray loops) exhibit imaginary spectra flow (skin effect) into the loop interior, leading to hybrid skin-topological modes.}
\label{fig:flows}
\end{figure}
\section{IV: Topological characterization of hybrid skin-topological modes}
The existence of hybrid skin-topological modes requires both topological effect and non-Hermitian skin effect along the boundaries. In the case of Fig. 3 in the main text, the topological aspect of the hybrid corner modes arises from the 1D topological boundary modes under x-OBC/y-PBC. With some further analysis, we find that there exist at least two types of transitions as shown in Fig. \ref{fig:transition1} and \ref{fig:transition2}, where we fix the intracell hoppings and the strength of non-reciprocity, and change the value of the intercell hopping $t'$.
When $t'$ is small, the four bands are separated in the complex plane with no boundary mode connecting them, as shown in Fig. \ref{fig:transition1}(a). Increasing $t'$ induces a topological phase transition at $t'\simeq 0.2$ for the parameters we choose, where the energy bands touch each other on the real axis when $k_y=0$. After this transition, the system develops a pair of chiral-like 1D boundary modes in its imaginary spectrum [Fig. \ref{fig:transition1}(c2)], which then become the hybrid skin-topological corner modes shown in Fig. \ref{fig:transition1}(c4).
Defining the Chern number $C_n$ for the $n$th band as
\begin{eqnarray}
C_n=\frac{1}{2\pi}\iint d k_x dk_y (\partial_{k_x}A_{n,k_y}-\partial_{k_y}A_{n,k_x}),~A_{n,\alpha}=\frac{i\langle \psi_n^{L} |\partial_\alpha| \psi_n^{R} \rangle}{\langle \psi_n^{L} | \psi_n^{R} \rangle},
\end{eqnarray}
we find that $C_n$ takes the value of $\pm 1$ in the presence of the above boundary modes, as indicated on the figure panel.
On the other hand, we have $C_n=0$ for each band when topological 1D boundary states are absent and hence hybrid skin-topological modes are also absent. This is illustrated in Fig. \ref{fig:transition1}(a4).
\begin{figure}[H]
\centering
\includegraphics[width=0.9 \linewidth]{transition_1.jpg}
\caption{The first column shows the x-OBC/y-PBC and double OBC spectra with gray and black colors respectively.
The second (third) column shows the imaginary (real) part of the x-OBC/y-PBC spectrum as a function of $k_y$.
The fourth column shows the summed squared eigenmode amplitude $\rho(x,y)$ under double OBC. The parameters are (a) $t'=0.1$, (b) $t'=0.2$, and (c) $t'=0.3$, with other parameters being $\delta_1=\delta_2=0.4$, $\delta_3=\delta_4=-0.6$, $t_x=t_y=1$. In (c1) we indicate the Chern number $C_n$ for each band.}
\label{fig:transition1}
\end{figure}
The second topological phase transition occurs at $t'\simeq 1$, where the energy bands touch each other on the imaginary axis when $k_y=\pi$, as shown in Fig. \ref{fig:transition2}. After this transition, a pair of chiral-like 1D boundary modes emerges in the real spectrum [Fig. \ref{fig:transition2}(c3)]. The Chern number is found to be $C_n=0$ for each band in this case, suggesting that the two pairs of boundary modes have opposite chiralities and are no longer protected by a Chern topology.
Nevertheless, in this model, the above-mentioned topological phase transitions are seen to occur only at $k_y=0,\pi$, where the possible boundary modes cross each other at zero (imaginary or real) energy. Such crossing boundary modes can be characterized by a Berry phase defined for an effective 1D Hamiltonian with $k_y$ taken as a parameter,
\begin{eqnarray}
\gamma_n(k_y)=i\int dk \frac{\langle \psi_n^{L} |\partial_{k_x}| \psi_n^{R} \rangle}{\langle \psi_n^{L} | \psi_n^{R} \rangle},
\end{eqnarray}
at $k_y=0$ and $\pi$. A $\pi$ Berry phase at either of these two points ensures the existence of 1D boundary states, and hence hybrid skin-topological modes under double OBC.
In Fig. \ref{fig:phase_diagram} we illustrate the phase diagram of our model regarding $C_n$, $\gamma_n(0)$, and $\gamma_n(\pi)$, which clearly indicates the above two topological phase transitions. Note that the second transition does not eliminate the 1D boundary modes under x-OBC/y-PBC, therefore it does not affect the hybrid skin-topological modes qualitatively.
\begin{figure}[H]
\centering
\includegraphics[width=0.9 \linewidth]{transition_2.jpg}
\caption{The first column shows the x-OBC/y-PBC and double OBC spectra with gray and black colors respectively.
The second (third) column shows the imaginary (real) part of the x-OBC/y-PBC spectrum as a function of $k_y$.
The fourth column shows the summed squared eigenmode amplitude $\rho(x,y)$ under double OBC. The parameters are (a) $t'=0.6$, (b) $t'=1$, and (c) $t'=1.4$, with other parameters being $\delta_1=\delta_2=0.4$, $\delta_3=\delta_4=-0.6$, $t_x=t_y=1$.}
\label{fig:transition2}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.8 \linewidth]{phase_diagram.jpg}
\caption{The Chern number $C_n$ and Berry phase $\gamma_n(k_y)$ at $k_y=0$ and $\pi$, as functions of $t'$. In our model the Berry phase always takes the same value for different band index $n$. Other parameters are $\delta_1=\delta_2=0.4$, $\delta_3=\delta_4=-0.6$, $t_x=t_y=1$.}
\label{fig:phase_diagram}
\end{figure}
\section{V: SS0 and SSS modes of the 3D model}
To realize a higher-order SS0 skin-skin hinge mode, we stack 2D layers with skin edge modes (S0) from before [Fig.~2(e-g)], with symmetrized non-reciprocities $\delta_{1,2,4}=-\delta_3=0.8$. The 1D edge modes from each layer combine to form 2D ($\hat x$-$\hat z$) surface modes in the 3D system, and are pumped into 1D hinge modes by the surface skin effect induced by the nonzero net surface non-reciprocity. Indeed in Fig.~\ref{fig:3Dskin}(a), surface modes are driven toward the hinge at $(x,y)=(1,10)$, while those already on the hinge are also driven by the edge skin effect toward a corner, reminiscent of the cases in Fig.~4(b,c).
Similarly, the SSS skin modes can be realized by stacking 2D layers with skin corner modes (SS) of Fig.~2(b-d), as shown in Fig.~\ref{fig:3Dskin}(b). Note that in order to have a skin effect along each direction, the net non-reciprocity along the $z$ direction must not destructively interfere in this case.
\begin{figure}[H]
\centering
\includegraphics[width=.6 \linewidth]{fig_S.pdf}
\caption{The total site-resolved density $\rho(x,y,z)$ of 3D skin modes with $t_{x,y}=1$ and $t'=2$ in the model of $H_{3D}$, with darker and larger circles indicating larger normalized amplitudes, and color indicating sublattice localization.
(a) SS0 hinge modes from stacks of S0 skin edge modes, with $\delta_{1,2,4}=-\delta_{3}=0.8$, and $\hat z$ couplings given by $t_\alpha=1$, $\delta_{a,c}=-\delta_{b,d}=0.8$; and (b) SSS corner modes from stacks of SS skin corner modes, with $\delta_{1,4}=-\delta_{2,3}=0.8$, $\hat z$ couplings given by $t_\alpha=1$, $\delta_{a,b,c,d}=0.8$.
}
\label{fig:3Dskin}
\end{figure}
\end{document}
\section{Introduction}
Recent astrophysical and cosmological observations
give a precise determination of the relic
density of cold dark matter in the range~\cite{Spergel:2006hy}
$\abundcdm =0.104 \pm 0.009 $.
A well-motivated candidate for dark matter (DM) is the weakly interacting
massive particle (WIMP), especially the lightest neutralino.
the Large Hadron Collider has already started to
reveal the secret of TeV energy
and is expected to find several superpartners and to determine their
properties. In particular, the feasibility was investigated to determine
the neutralino's mass $m_\chi$~\cite{mass,Barr:2008ba} and relic abundance
$\abundchi$~\cite{oh2atlhc-cmssm,oh2atlhc-mssm} from LHC
measurements.
An analogous study has also been done in the
context of the Linear Collider, where the accuracy of a similar
determination would be much better~\cite{oh2atilc}.
However, a neutralino that appears as a stable state in LHC detectors
may not be the true LSP
and therefore not the DM in the Universe. Instead, it could decay
outside the detector into an even lighter and weakly interacting state, the real LSP.
Confirmation therefore requires an independent
measurement of the neutralino by direct or indirect detection experiments, and
the mass and relic density obtained in both ways should be consistent with each other.
Moreover, the neutralino relic abundance, as determined at the LHC,
may come out convincingly outside the WMAP range.
If $\abundchi$ comes out below the WMAP range, several
solutions have been suggested which invoke non-standard cosmology,
e.g. quintessence-driven kination,
while preserving the neutralino as the DM in the Universe.
However, if at
the same time direct and indirect DM searches bring null results,
or even worse, the lightest superparticle turns out to be a charged particle
such as the scalar tau,
this will provide a
strong indication against the neutralino nature of DM.
In fact, these inconsistencies can be perfectly explained
with axino or gravitino cold dark matter, dubbed
E-WIMPs~\cite{Choi:2005vq}.
This framework therefore gives us an opportunity to probe the features
of the early Universe, since the relic density of E-WIMPs depends on
the reheating temperature $\treh$.
The cosmic lithium problems can also be solved with gravitino
DM~\cite{Jedamzik:2005dh}.
In this talk, based on ref.~\cite{Choi:2007rh}, we investigate the
determination of the reheating temperature in the E-WIMP scenario using
possible collider measurements of the mass and relic density of the NLSP,
such as the neutralino or the stau.
\section{E-WIMPs and Reheating temperature $\treh$}
The spin-$1/2$ axino (the
fermionic superpartner of an axion) and the spin-$3/2$ gravitino (the
fermionic superpartner of a graviton) are both well-motivated E-WIMPs.
The former arises in SUSY extensions of models incorporating the
Peccei-Quinn solution to the strong CP problem. The latter is an
inherent ingredient of the particle spectrum of supergravity models.
The characteristic
strength of their interactions with ordinary matter is strongly
suppressed by a large mass scale, the Peccei-Quinn scale $\fa\sim
10^{11}\gev$ in the case of axinos and the (reduced) Planck scale
$\mplanck\simeq 2.4\times 10^{18} \gev$ for gravitinos. Their masses
are very model dependent and can vary from keV up to TeV~\cite{ckn}
for the axino and from eV to TeV for the gravitino.
In this work we want to remain as
model-independent as possible and will treat $\maxino$ and
$\mgravitino$ as free parameters.
The possibility of
axinos as cold DM was pointed out in~\cite{ckr,ckkr},
while axinos as warm DM was considered in~\cite{rtw}.
The heavy axino was studied in~\cite{Choi:2008zq}.
The gravitino as a cosmological relic was
extensively studied in the literature. For more references refer
to~\cite{Choi:2007rh}.
There are two generic ways to produce axinos or gravitinos.
One proceeds via scatterings and decay processes of ordinary particles
and sparticles in thermal bath.
Its efficiency is proportional to their density in
the plasma which is a function of $\treh$ ({\em thermal production}).
The other comes from (out-of-equilibrium) decays of
the NLSPs, after their freeze-out, to E-WIMPs
({\em non-thermal production}).
The thermal production (TP) of axinos and gravitinos is a function of
$\treh$~\cite{ckkr}.
For high $\treh$, both are almost proportional to $\treh$.
For axino~\cite{bs04}
\beqa{
\abundatp\simeq 5.5\, g_s^6 \ln \left(\frac{1.108}{g_s} \right)
\left(\frac{\maxino}{0.1\gev} \right)
\left(\frac{10^{11}\gev}{\fa}\right)^2 \left(\frac{\treh}{10^4 \gev} \right),
\label{eq:TP_axino_bs}
}
where $g_s$ is the temperature-dependent strong coupling constant, which
in the above expression is evaluated at $\treh$. Note that
$\yaxinotp\propto\treh/\fa^2$,
and for gravitino~\cite{bbb00}
\begin{equation}
\abundgtp\simeq 0.27 \left(\frac{\treh}{10^{10}\gev}\right)
\left(\frac{100\gev}{\mgravitino}\right)
\left(\frac{\mgluino(\mu)}{1\tev}\right)^2,
\label{eq:abundgbbb}
\end{equation}
where $\mgluino(\mu)$ stands for the gluino
mass evaluated at a scale $\mu\simeq 1\tev$.
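As a rough numerical illustration (ours, not from the paper's numerics), the two thermal-production formulas above can be coded directly; the default values of $g_s$ at $\treh$ and of the gluino mass below are assumptions chosen only for the sketch.

```python
import math

def omega_axino_tp(m_axino_gev, T_R_gev, f_a_gev=1e11, g_s=0.9):
    """Axino TP abundance, eq. (1); g_s(T_R) = 0.9 is an assumed rough value."""
    return (5.5 * g_s**6 * math.log(1.108 / g_s)
            * (m_axino_gev / 0.1)
            * (1e11 / f_a_gev) ** 2
            * (T_R_gev / 1e4))

def omega_gravitino_tp(m_gravitino_gev, T_R_gev, m_gluino_gev=1000.0):
    """Gravitino TP abundance, eq. (2), with gluino mass at mu ~ 1 TeV."""
    return (0.27 * (T_R_gev / 1e10)
            * (100.0 / m_gravitino_gev)
            * (m_gluino_gev / 1000.0) ** 2)
```

Both expressions are linear in $\treh$ at high reheating temperature, which is what makes the inversion for $\treh$ in the next section straightforward.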
In the axino case, there is a sharp drop-off below
$\treh\sim1\tev$ due to the Boltzmann suppression factor
$\exp{(-m/T)}$, with $m$ denoting here the squark and gluino mass;
at lower $\treh$ superpartner decay processes become dominant but are less
efficient~\cite{ckkr}.
For the non-thermal production (NTP), the relic abundance is
simply given by
\beq
\abundlspntp = \frac{\mlsp}{\mnlsp} \abundnlsp.
\label{eq:ntp}
\eeq
The total abundance of the LSPs is the sum of both thermal and non-thermal
production contributions
and it is natural to expect that the LSP makes up most of CDM in the
Universe, thus we can write
\beqa{
\abundlsptp\left(\treh,\mlsp,\mgluino,\mnlsp,\ldots \right) +
\frac{\mlsp}{\mnlsp} \abundnlsp = \abundlsp = \abundcdm\simeq 0.1.
\label{eq:oh2relation}
}
Once the neutralino NLSP is discovered and its mass is determined at
the LHC with some precision, and so also $\abundnlsp=\abundchi$, then
eq.~(\ref{eq:oh2relation}) will provide a relation between $\treh$ and
$\mlsp$.
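The inversion of eq.~(\ref{eq:oh2relation}) for $\treh$ can be sketched numerically for the gravitino case, using the linear TP formula of eq.~(\ref{eq:abundgbbb}); all input values below are hypothetical, and the gluino mass is an assumption of the sketch.

```python
def reheat_temp_gravitino(m_gravitino, m_nlsp, omega_nlsp,
                          m_gluino=1000.0, omega_cdm=0.104):
    """Solve Omega_TP(T_R) + (m_LSP/m_NLSP)*Omega_NLSP = Omega_CDM for T_R."""
    omega_ntp = (m_gravitino / m_nlsp) * omega_nlsp   # NTP term, eq. (3)
    omega_tp = omega_cdm - omega_ntp                  # TP share still needed
    if omega_tp <= 0:
        return None  # NTP alone already saturates (or exceeds) Omega_CDM
    # Invert eq. (2), Omega_TP = 0.27 * (T_R/1e10) * (100/m_G) * (m_gluino/1e3)^2:
    return 1e10 * omega_tp / (0.27 * (100.0 / m_gravitino)
                              * (m_gluino / 1000.0) ** 2)

# Hypothetical inputs: m_gravitino = 1 GeV, m_NLSP = 300 GeV, Omega_NLSP = 0.1
T_R = reheat_temp_gravitino(1.0, 300.0, 0.1)
```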
\section{Axino dark matter}
\begin{figure*}[t!]
\vspace*{-0.2in}
\begin{tabular}{c c}
\includegraphics[width=0.45\textwidth]{crr1-TR_Oh2_a.eps}
&
\includegraphics[width=0.45\textwidth]{crr1-TR_ma.eps}
\end{tabular}
\caption{Left panel: $\treh$ vs. $\abundnlsp$ for $\mnlsp=300\gev$ and
for $\maxino=0.01\gev$ (solid blue) and $\maxino=1\gev$ (dashed
red). The bands correspond to the upper and lower limits of the dark
matter density from WMAP. Right panel: $\treh$
vs. $\maxino$ for $\abundnlsp=100$ (dashed blue), 0.1 (solid red) and
0.01 (dotted black). To the right of the solid vertical line the axino
is no longer the LSP. In both panels we set $\fa=10^{11}\gev$. }
\label{fig:axinotr}
\end{figure*}
\begin{figure}[!t]
\begin{tabular}{c c}
\includegraphics[width=0.45\textwidth]{crr1-Oh2_mnlsp_ma1gev.eps}
&
\includegraphics[width=0.45\textwidth]{crr1-mlspmax_Oh2.eps}
\end{tabular}
\caption{Left: Contours of the reheating temperature in the plane of
$\mnlsp$ and $\abundnlsp$ such that $\abunda=\abundcdm=0.104$. The axino
mass is assumed to be $1\gev$.
Right: Maximum values of $\mlsp$ as a function of $\abundnlsp$ for
representative values of $\mnlsp$. Once both $\abundnlsp$ and
$\mnlsp$ are determined from experiment, the upper bound on $\mlsp$
can be derived. The plot applies both to the axino and to the
gravitino LSP.}
\label{contour_axino1}
\end{figure}
First we consider axino as the LSP dark matter.
Using (\ref{eq:oh2relation}), we can find the relations between the parameters,
$\treh$, $m_{LSP}$, $m_{NLSP}$, and $\abundnlsp$.
For fixed two parameters, we plot the contour on the space of the other
two parameters in figures~\ref{fig:axinotr} and \ref{contour_axino1}.
From figure~\ref{fig:axinotr} (right panel), when $\maxino$ is small,
TP dominates, $\abundatp \simeq \abundcdm$, hence we find
$\treh\propto \fa^2/\maxino$.
This relation allows one to derive an {\em
upper bound} on $\treh$ if we use the fact that axinos have to be
heavy enough in order to constitute CDM. Assuming conservatively that
$\maxino\gsim100\kev$~\cite{ckkr}, we find $\trehmax<4.9\times10^5\gev$.
At larger $\maxino$ the NTP contribution becomes dominant and the
dependence on $\treh$ is lost,
but in this regime the LSP mass reaches its largest values, which
allows one to derive an {\em upper bound} on $\maxino$.
This is shown in fig.~\ref{contour_axino1} (right panel).
Note that fig.~\ref{contour_axino1} (right panel) applies to both
the axino and the gravitino LSP since it follows from
eq.~(\ref{eq:ntp}).
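Since the NTP term in eq.~(\ref{eq:oh2relation}) alone must not exceed $\abundcdm$, the upper bound on $\mlsp$ shown in the right panel follows from a one-line estimate; a minimal sketch, neglecting the TP contribution:

```python
def m_lsp_max(m_nlsp, omega_nlsp, omega_cdm=0.104):
    """Upper bound on the LSP mass from (m_LSP/m_NLSP)*Omega_NLSP <= Omega_CDM."""
    return m_nlsp * omega_cdm / omega_nlsp

# E.g. a 300 GeV NLSP with Omega_NLSP = 10 bounds the LSP mass to 3.12 GeV.
```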
\section{Gravitino dark matter}
\begin{figure*}[t!]
\vspace*{-0.2in}
\begin{tabular}{c c}
\includegraphics[width=0.45\textwidth]{crr1-TR_Oh2_G.eps}
&
\includegraphics[width=0.45\textwidth]{crr1-TR_mG.eps}
\end{tabular}
\caption{Left panel: $\treh$ vs. $\abundnlsp$ for $\mnlsp=300\gev$ and
for $\mgravitino=0.01\gev$ (solid blue) and $\mgravitino=1\gev$
(dashed red). The bands correspond to the upper and lower limits of
dark matter density from WMAP. Right panel: $\treh$
vs. $\mgravitino$ for $\abundnlsp=100$ (dashed blue), 0.1 (solid red)
and 0.01 (dotted black). To the right of the solid vertical line the
gravitino is no longer the LSP.}
\label{fig:gravitinotr}
\end{figure*}
\begin{figure*}[t!]
\vspace*{-0.2in}
\begin{tabular}{c c}
\includegraphics[width=0.45\textwidth]{crr1-Oh2_mnlsp_mG100gev.eps}
&
\includegraphics[width=0.45\textwidth]{crr1-TRmax_Oh2_G.eps}
\end{tabular}
\caption{Left: Contours of the reheating temperature in the plane of
$\mnlsp$ and $\abundnlsp$ such that $\abundg=\abundcdm=0.104$. The gravitino
mass is assumed to be $100\gev$.
Right: Maximum reheating temperature $\trehmax$ vs. NLSP relic density
$\abundnlsp$ with gravitino DM for NLSP mass $\mnlsp = 100 \gev$
(dashed red) and $300 \gev$ (solid blue).}
\label{contour_gravitino1}
\end{figure*}
For gravitino LSP, the analogous plots are shown in
figures~\ref{fig:gravitinotr} and~\ref{contour_gravitino1}.
For the gravitino LSP, Big Bang Nucleosynthesis (BBN) gives a strong constraint,
since the lifetime of the NLSP ranges from about $1$ sec to $10^{12}$ sec due to its
suppressed interaction. The neutralino NLSP is almost excluded for
$\mgravitino\gsim 1 \gev$ by BBN~\cite{fengetal,ccjrr}.
While $\yaxinotp$ is independent
of the axino mass, in the gravitino case $\ygravitinotp\propto
1/\mgravitino^2$. Thus $\abundatp\propto \maxino\treh$ while
$\abundgtp\propto \treh/\mgravitino$. In other words, if TP dominates,
$\abundgtp\simeq0.1$, we find
$\treh\propto\mgravitino$.
When NTP dominates instead, $\treh$ drops down, as shown
in figure~\ref{fig:gravitinotr} (right panel).
The turnover between TP and NTP dominance
allows one to derive a conservative {\em
upper bound} $\trehmax$ which, unlike in the axino CDM case, holds even without
knowing the gravitino mass. This is plotted in
fig.~\ref{contour_gravitino1} (right panel).
For the stau NLSP case, in our numerical example in the right panel of
fig.~\ref{fig:gravitinotr}, with $\mstau=300\gev$, where we have also
taken $\mchi=477\gev$, the condition $\tau_{\tilde\tau}>10^3\sec$ implies
$\mgravitino\lsim2\gev$ and $\treh\lsim 9\times 10^6\gev$. Increasing
$\mstau$ to $1\tev$ and $\mchi$ to $1.5\tev$ leads to
$\mgravitino\lsim40\gev$ and $\treh\lsim 4\times 10^8\gev$.
A more detailed study of the $\treh$ bound with a stau NLSP, considering only
thermal production and using the constraint from bound-state
effects, can be found in~\cite{Steffen:2008bt}.
\section{Summary}
We studied the possible determination of the reheating temperature
with axino or gravitino LSP dark matter.
We find that once the mass of the NLSP and the other parameters determining
its relic abundance are known from collider data, we can determine the
reheating temperature as a function of the LSP mass.
Even if the relic abundance of the NLSP is not measured precisely,
as long as its order of magnitude is much smaller or larger than the WMAP range,
we can obtain a conservative bound on the reheating temperature.
Note Added: Recently, and long after our work was published, a paper
appeared~\cite{Steffen:2008bt} which, using a different set of variables
(NLSP stau lifetime and mass, instead of our gravitino mass and NLSP mass)
and neglecting the non-thermal contribution to $\abund$,
rederived several of our results.
Specifically, with stau mass around $1 \tev$, an upper limit on the
reheating temperature of $\treh \lesssim 10^8 \gev$ was obtained, in
agreement with ours.
\begin{theacknowledgments}
K.-Y.C. is supported
by the Ministerio de Educacion y Ciencia of Spain under
Proyecto Nacional FPA2006-05423 and by the Comunidad de Madrid under
Proyecto HEPHACOS, Ayudas de I+D S-0505/ESP-0346.
L.R is partially supported by the EC
6th Framework Programmes MRTN-CT-2004-503369 and
MRTN-CT-2006-035505. R.RdA is supported by the program ``Juan de la
Cierva'' of the Ministerio de Educaci\'{o}n y Ciencia of Spain.
\end{theacknowledgments}
\section{Algorithm}
\label{sec:app_algo}
${\textsc{RepDIB}}$ can be integrated with any existing self-supervised representation learning objective to reap the benefits of its compositional structure, either for exploration during pre-training or for downstream tasks. In this work, we mostly show the benefits of ${\textsc{RepDIB}}$ integrated on top of the Proto-RL algorithm \cite{yarats2021protorl}. Algorithm \ref{algo:algo} shows (changes in blue) the additional steps required to integrate the variational information bottleneck (VIB) and the discretization bottleneck into the Proto-RL baseline. Furthermore, for structured exploration using the learnt latent representations, algorithm \ref{algo:algo_intrin_reward} shows how the intrinsic reward based on entropy maximization can be given additional structure, on top of the exploration objective used in Proto-RL.
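The grouped (factorial) discretization bottleneck can be sketched as follows; this is a minimal NumPy illustration with shapes and names of our choosing, not the released code. The embedding is split into $G$ groups and each group is snapped to its nearest code in a per-group codebook.

```python
import numpy as np

def discretize(z, codebooks):
    """z: (D,) embedding; codebooks: (G, N, D//G) per-group codebooks."""
    G, N, d = codebooks.shape
    groups = z.reshape(G, d)                                         # factorize
    dists = np.linalg.norm(codebooks - groups[:, None, :], axis=-1)  # (G, N)
    idx = dists.argmin(axis=1)                                       # code per group
    z_q = codebooks[np.arange(G), idx]                               # quantized groups
    return z_q.reshape(-1), idx

# Toy call: D = 16 features, G = 4 groups, N = 50 codes per group.
rng = np.random.default_rng(0)
z_q, idx = discretize(rng.normal(size=16), rng.normal(size=(4, 50, 4)))
```

In training, the codebooks would be learnt jointly with the encoder through the straight-through estimator and a commitment loss, which this sketch omits.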
\begin{algorithm}[t]
\caption{ProtoRL with Vector Quantization and Variational Information Bottleneck}
\label{algo:algo}
\begin{algorithmic}[1]
\STATE Initialize replay buffer $D_\text{explore}$, $D_\text{task}$
\STATE Initialize critic network $Q_\phi$, explore actor network $\pi_{\theta}$, encoder $f_{\xi}$, and predictor $f_{p}$
\STATE (Stage I: Pretraining VQ and VIB modules)
\textcolor{blue}{
\FOR {k = 1, 2, ..., m}
\STATE Rollout with policy $\pi_{\theta}$
\STATE Collect experience in $D_\text{explore}$ using $\pi_\theta$ in the environment
\STATE Sample batch $(s, s') \sim D_\text{explore}$
\STATE Output representations $z_{s}=f_p(f_{\xi}(s))$ and $z_{s'}=f_p(f_{\xi}(s'))$
\IF {use Variational Information Bottleneck}
\STATE Divide state representation $z_s$ into two parts: $\mu_s$ and $\sigma_s$
\STATE Use reparameterization trick to get variational approximation:
\begin{equation}
\hat{z}_{s}=\mu_s + \sigma_s \epsilon, \epsilon\sim \mathcal{N}(0,1)
\end{equation}
\STATE Output embedding $\hat{z}_{s}=f(\hat{z}_{s})$ with a linear layer $f$, do the same for representation $z_{s'}$ and output embedding $\hat{z}_{s'}$
\STATE Compute VIB loss $\mathcal{L}_{\mathrm{gaussian}}=D_\text{KL}(p(\hat{z}|z)||q(\hat{z}|z))$, where we set $q(\hat{z}|z)$ as unit Gaussian.
\ENDIF
\STATE Output discrete state embedding $\tilde{z}_s$ and $\tilde{z}_{s'}$ using Discretization Bottleneck
\STATE Compute the loss function and update the corresponding modules:
\begin{equation}
\mathcal{L}=\mathcal{L}_{\mathrm{discretization}} + \mathcal{L}_{\mathrm{gaussian}}
\end{equation}
\ENDFOR}
\STATE (Stage II: Pretraining encoder modules)
\FOR {k = m, m+1, ..., n}
\STATE Collect experience in $D_\text{explore}$ using $\pi_\theta$ in the environment
\STATE Sample batch $(s_t, a_t, r_t, s_{t+1}) \sim D_\text{explore}$
\textcolor{blue}{ \IF {use Variational Information Bottleneck}
\STATE Do the same as in Stage I, output embedding $\hat{z}_{s}$ and $\hat{z}_{s'}$
\ENDIF}
\STATE Output discrete state embedding $\tilde{z}_s$ and $\tilde{z}_{s'}$ using (Discretization Bottleneck)
\STATE Compute ProtoRL loss $\mathcal{L}_{\mathrm{proto}}=-q_{s'}\log p_{s}$, where $q_{s'}$ is a target probability vector using prototypes, $p_{s}$ is a probability vector over prototypes
\STATE Update representation with total loss: $\mathcal{L} = \mathcal{L}_{\mathrm{proto}}\textcolor{blue}{ + \mathcal{L}_{\mathrm{discretization}} + \mathcal{L}_{\mathrm{gaussian}}}$
\STATE Compute intrinsic reward $r_\text{int}$ with Algorithm~\ref{algo:algo_intrin_reward}
\STATE Update $Q_{\phi}$ and $\pi_{\theta}$ with intrinsic reward $r_\text{int}$ only
\ENDFOR
\STATE (Stage III: Finetuning on downstream tasks)
\FOR {k = n, n+1, ..., K}
\STATE Collect experience in $D_\text{task}$ using $\pi_\theta$ in the environment
\STATE Sample batch $(s, a, r, s') \sim D_\text{task}$
\textcolor{blue}{ \IF {use Variational Information Bottleneck}
\STATE Output embedding $\hat{z}_{s}$ and $\hat{z}_{s'}$
\ENDIF}
\STATE \textcolor{blue}{Output discrete state embedding $\tilde{z}_s$ and $\tilde{z}_{s'}$ using fixed pre-trained discretization bottleneck}
\STATE Update $Q_{\phi}$ and $\pi_{\theta}$ with external environment reward $r$ using any RL algorithm
\ENDFOR
\end{algorithmic}
\end{algorithm}
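The VIB step of Stage I can be sketched in NumPy as follows; splitting $z_s$ into mean and log-standard-deviation halves is our assumed parameterization for the sketch, and the actual training code is given in the implementation section below.

```python
import numpy as np

def vib(z_s, rng):
    """Reparameterized sample and KL to a unit Gaussian for one embedding."""
    mu, log_sigma = np.split(z_s, 2)          # assumed (mu, log sigma) split
    sigma = np.exp(log_sigma)
    z_hat = mu + sigma * rng.normal(size=mu.shape)   # reparameterization trick
    # Closed-form KL( N(mu, sigma^2) || N(0, 1) ), summed over dimensions:
    kl = 0.5 * np.sum(sigma**2 + mu**2 - 1.0 - 2.0 * log_sigma)
    return z_hat, kl

rng = np.random.default_rng(0)
z_hat, kl = vib(rng.normal(size=8), rng)
```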
\begin{algorithm}[t]
\caption{Compute Intrinsic reward with Vector Quantization and Variational Information Bottleneck}
\label{algo:algo_intrin_reward}
\begin{algorithmic}[1]
\STATE Input: current state representation $z_s$
\textcolor{blue}{\IF {use Variational Information Bottleneck}
\STATE Divide state representation $z_s$ into two parts: $\mu_s$ and $\sigma_s$
\STATE Output deterministic embedding $\hat{z}_{s}=\mu_s$ and pass it to a parameterized linear layer $f_l$: $\hat{z}_{s}=f_l(\hat{z}_{s})$
\ENDIF
\STATE Output discrete state embedding $\tilde{z}_s$ using Discretization Bottleneck}
\STATE Compute Nearest Neighbor (NN) based entropy estimator as intrinsic reward:
\begin{equation}
\hat{r}=\left\|\tilde{z}_{s}-\mathrm{NN}_{k, \boldsymbol{Q}}\left(\tilde{z}_{s}\right)\right\|
\end{equation}
\STATE Output: intrinsic reward $\hat{r}$
\end{algorithmic}
\end{algorithm}
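The nearest-neighbor entropy estimator of Algorithm \ref{algo:algo_intrin_reward} amounts to the following one-liner; the queue layout and the value of $k$ are illustrative choices, not the exact Proto-RL settings.

```python
import numpy as np

def intrinsic_reward(z_tilde, Q, k=3):
    """Distance from embedding z_tilde to its k-th nearest neighbor in queue Q."""
    dists = np.linalg.norm(Q - z_tilde[None, :], axis=1)
    return np.sort(dists)[k - 1]

# Toy queue of recent 2D embeddings; reward is the distance to the 2nd nearest.
Q = np.array([[0.0, 0.0], [1.0, 0.0], [3.0, 0.0]])
r = intrinsic_reward(np.array([0.0, 0.0]), Q, k=2)  # -> 1.0
```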
\clearpage
\section{Pseudocode and Implementation}
(a) code snippet of gaussian bottleneck:
\lstinputlisting[language=Python]{codes/gb.py}
(b) code snippet of computing intrinsic reward:
\lstinputlisting[language=Python]{codes/compute_int_r.py}
(c) code snippet of update proto:
\lstinputlisting[language=Python]{codes/update_proto.py}
\section{Visualization}
In our experiments, we consistently find that the use of a variational information bottleneck (VIB) prior to the discretization bottleneck significantly helps the performance of ${\textsc{RepDIB}}$.
\begin{figure}[ht]
\centering
\subfigure{
\includegraphics[
width=0.33\textwidth]{figures/tsne/baseline_top_right_figure.jpg}
}
\hspace{-0.8cm}
\subfigure{
\includegraphics[
width=0.33\textwidth]{figures/tsne/vq_top_right_figure.jpg}
}
\hspace{-0.8cm}
\subfigure{
\includegraphics[
width=0.33\textwidth]{figures/tsne/vq_gaussian_top_right_figure.jpg}
}
\caption{\textbf{Comparing Visualizations with and without bottleneck representations} t-SNE of latent spaces in the JacoReachTopRight task learned with Proto-RL (left t-SNE), ${\textsc{RepDIB}}$ (middle t-SNE), and ${\textsc{RepDIB}}$ with VIB (right t-SNE) after training has completed, color-coded with predicted state values (higher value yellow, lower value purple).}
\label{fig:vib_kl_weightings}
\end{figure}
\section{Hyperparameter Details}
\begin{table}[h]
\caption{\label{tab:hyper_maze} A set of hyper-parameters used in maze navigation tasks.}
\centering
\begin{tabular}{lc}
\hline
Hyper-parameter & Value \\
\hline
\: Size of Maze & $6 \times 6$ \\
\: Mini-batch size & $128$\\
\: Discount ($\gamma$) & $0.99$ \\
\: Optimizer & Adam \\
\: Learning rate & $3\times 10^{-3}$ \\
\: Critic target EMA rate ($\tau_Q$) & $0.01$ \\
\: Features dim. & $128$\\
\: Hidden dim. & $128$ \\
\: Number pre-training frames & $1\times 10^4$ \\
\: Number of discrete codes & $50$\\
\: Number of groups & $8, 16, 32$ \\
\: VIB coefficient & $0.01$ \\
\hline
\end{tabular}
\end{table}
\begin{table}[h]
\caption{\label{tab:hyper_cont} A set of hyper-parameters used in continuous control tasks.}
\centering
\begin{tabular}{lc}
\hline
Common hyper-parameter & Value \\
\hline
\: Replay buffer capacity & $10^6$ \\
\: Seed frames & $4000$ \\
\: $n$-step returns & $3$ \\
\: Mini-batch size & $1024$\\
\: Seed frames & $4000$ \\
\: Discount ($\gamma$) & $0.99$ \\
\: Optimizer & Adam \\
\: Learning rate & $10^{-4}$ \\
\: Agent update frequency & $2$ \\
\: Critic target EMA rate ($\tau_Q$) & $0.01$ \\
\: Features dim. & $1024$\\
\: Hidden dim. & $1024$ \\
\: Exploration stddev clip & $0.3$ \\
\: Exploration stddev value & $0.2$ \\
\: Number pre-training frames & up to $2\times 10^6$ \\
\: Number fine-turning frames & up to $2\times 10^6$ \\
\: Number of discrete codes & $50$\\
\: Number of groups & $8, 16, 32$ \\
\: VIB coefficient & $0.01$ \\
\hline
\end{tabular}
\end{table}
\begin{table}[h]
\caption{\label{tab:hyper_offline} A set of hyper-parameters used in offline tasks.}
\centering
\begin{tabular}{lc}
\hline
Common hyper-parameter & Value \\
\hline
\: $n$-step returns & $3$ \\
\: Mini-batch size & $256$\\
\: Seed frames & $4000$ \\
\: Discount ($\gamma$) & $0.99$ \\
\: Optimizer & Adam \\
\: Learning rate & $3\times 10^{-4}$ \\
\: Critic target EMA rate ($\tau_Q$) & $0.01$ \\
\: Features dim. & $256$\\
\: Hidden dim. & $1024$ \\
\: Number pre-training frames & $1\times 10^5$ \\
\: Number fine-turning frames & $1\times 10^5$ \\
\: Number of discrete codes & $512$\\
\: Number of groups & $4, 8, 16, 32$ \\
\: VIB coefficient & $0.01$ \\
\hline
\end{tabular}
\end{table}
\section{Additional Experiment Results and Details}
\subsection{Visual Offline RL with Exogenous Observations in Datasets}
\label{sec:visual_offline_exps}
\begin{figure}[!htbp]
\centering
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/offline_vd4rl/visualizations/correlated/1.png}
}
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/offline_vd4rl/visualizations/correlated/2.png}
}
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/offline_vd4rl/visualizations/correlated/3.png}
}
\caption{Sample observations from the visual offline datasets with exogenous time correlated images in the background. The exogenous background image changes per episode during offline data collection}
\label{fig:offline_correlated_viz}
\end{figure}
\begin{figure}[!htbp]
\centering
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/offline_vd4rl/visualizations/change_video/1.png}
}
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/offline_vd4rl/visualizations/change_video/2.png}
}
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/offline_vd4rl/visualizations/change_video/3.png}
}
\caption{Sample observations from the visual offline datasets with exogenous changing video in the background. The exogenous background video distractor changes per episode during offline data collection}
\label{fig:offline_video_viz}
\end{figure}
\textbf{Experiment Setup and Details:} We evaluate ${\textsc{RepDIB}}$ on visual offline datasets from the v-d4rl benchmark \cite{vd4rl}. Data collection details for the different domains are provided in \cite{vd4rl}. In addition, we also consider an extension of the v-d4rl benchmark, where we re-collect the data with additional \textit{exogenous noise} present in the observations. We follow the same data collection procedure as in v-d4rl, except that two variations of exogenous noise are considered during data collection. $1.$ We first consider a time-correlated exogenous noise setting where, at each episode of data collection, the agent sees the environment observation along with an additional background image from the CIFAR dataset. This image changes per episode, and we introduce it so that learning robust representations by avoiding the distractors plays an important role in policy learning. $2.$ We then consider a setting where, instead of an image that changes per episode, a video distractor changes at every episode of data collection. This is considered an even harder setting, since the agent sees the observations while unrelated video data plays in the background.
For the baseline policy optimization RL algorithms, we follow the same experiment pipeline as in \cite{vd4rl}. The major difference is that we additionally train the encoders with a representation learning objective, pre-training them for a fixed $100k$ timesteps. Following that, the learnt representations are kept fixed and we fine-tune the downstream policy learning algorithms on top of the fixed pre-trained representations. For the RL algorithm, as in \cite{vd4rl}, we use the TD3+BC algorithm, since it has recently been shown to achieve state-of-the-art performance on offline control tasks.
We provide additional results evaluating ${\textsc{RepDIB}}$ on top of learnt representations in the visual pixel-based offline RL setting. We implement ${\textsc{RepDIB}}$ on top of the multi-step inverse dynamics objective \cite{lamb2022guaranteed}, the 1-step inverse dynamics objective \cite{pathak2017curiosity} and the temporal contrastive learning based DRIML objective \cite{MazoureCDBH20}. We show that ${\textsc{RepDIB}}$ additionally compresses the learnt latent representations using the factorial bottlenecks, which makes the method quite effective and robust, especially when there is additional exogenous information present in the observations \cite{Efroni2021ppe}. We use different types of distractors in the offline datasets, where exogenous information can be either in the form of correlated background images or changing video distractors playing in the background during data collection. Figures \ref{fig:offline_correlated_viz} and \ref{fig:offline_video_viz} show sample observations from the offline datasets.
\begin{figure}[!htbp]
\centering
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/offline_vd4rl/correlated/acro/cheetah_run_expert.pdf}
}
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/offline_vd4rl/correlated/acro/walker_walk_expert.pdf}
}
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/offline_vd4rl/correlated/one_step/cheetah_run_expert.pdf}
}
\\
\centering
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/offline_vd4rl/correlated/one_step/walker_walk_expert.pdf}
}
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/offline_vd4rl/correlated/driml/cheetah_run_expert.pdf}
}
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/offline_vd4rl/correlated/driml/walker_walk_expert.pdf}
}
\caption{\textbf{Time-correlated changing background image exogenous noise in offline datasets}. We consider a setting where the observations in the pixel-based offline data contain changing image distractors in the background. The background exogenous noise is introduced during the data collection procedure. Such a setting requires learning robust representations that are invariant to the exogenous images. We consider 3 different representation learning objectives, (a) Multi-Step Inverse, (b) One-Step Inverse and (c) DRIML, where the encoders are pre-trained with these self-supervised objectives, followed by ${\textsc{RepDIB}}$. We show that for factorial representations based on groups of $4, 8, 16, 32$ factors, the ability of these methods to learn robust representations significantly increases due to ${\textsc{RepDIB}}$, making them more robust on the exogenous offline datasets.}
\label{fig:offline_correlated_appendix}
\end{figure}
\textbf{Experiment Results:} Our experiments show that existing representation learning methods can suffer in the presence of this exogenous noise, since the representations cannot fully avoid the distractors. In contrast, when adding ${\textsc{RepDIB}}$ on top of the learnt representations, we find that compressed bottleneck representations can help in avoiding the distractors, improving the overall performance on the downstream offline RL tasks with visual observations. We evaluate ${\textsc{RepDIB}}$ on top of representations learnt with a 1-step inverse dynamics objective \cite{pathak2017curiosity}, a multi-step inverse dynamics objective \cite{lamb2022guaranteed, Efroni2021ppe} and the temporal contrastive learning based DRIML objective \cite{MazoureCDBH20}. Our results show that especially when exogenous noise is present in the observations, existing state-of-the-art representation learning methods can suffer dramatically, leading to an overall degradation of performance. In contrast, adding ${\textsc{RepDIB}}$ can lead to improved performance due to bottlenecks that capture factorial representations while avoiding the exogenous distractors.
\begin{figure}[!htbp]
\centering
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/offline_vd4rl/change_video/acro/cheetah_run_expert.pdf}
}
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/offline_vd4rl/change_video/acro/walker_walk_expert.pdf}
}
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/offline_vd4rl/change_video/one_step/cheetah_run_expert.pdf}
}
\\
\centering
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/offline_vd4rl/change_video/one_step/walker_walk_expert.pdf}
}
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/offline_vd4rl/change_video/driml/cheetah_run_expert.pdf}
}
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/offline_vd4rl/change_video/driml/walker_walk_expert.pdf}
}
\caption{\textbf{Changing video exogenous noise in offline datasets}. We then consider a setting with background video distractors that change per episode during data collection. Using ${\textsc{RepDIB}}$ on top of the learnt representations from the same 3 representation objectives, we find that, in particular, the multi-step and one-step inverse models learn more robust representations compared to the DRIML objective. The changing background video distractor is considered a hard offline setting, since there is time-correlated exogenous information continuously changing and playing in the background. We show that ${\textsc{RepDIB}}$ improves the sample efficiency and overall performance of these methods when used on visual offline data where learning robust representations plays a key role.}
\label{fig:offline_change_video_appendix}
\end{figure}
\begin{figure}[!htbp]
\centering
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/offline_vd4rl/no_distractor/acro/cheetah_run_expert.pdf}
}
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/offline_vd4rl/no_distractor/acro/walker_walk_expert.pdf}
}
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/offline_vd4rl/no_distractor/one_step/cheetah_run_expert.pdf}
}
\\
\centering
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/offline_vd4rl/no_distractor/one_step/walker_walk_expert.pdf}
}
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/offline_vd4rl/no_distractor/driml/cheetah_run_expert.pdf}
}
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/offline_vd4rl/no_distractor/driml/walker_walk_expert.pdf}
}
\caption{\textbf{Visual offline datasets from the v-d4rl benchmark \cite{vd4rl} without any additional exogenous distractors}. We showed that ${\textsc{RepDIB}}$ can learn robust representations in the presence of correlated exogenous noise, as in figures \ref{fig:offline_correlated_appendix} and \ref{fig:offline_change_video_appendix}. Here we show that without any distractors present, ${\textsc{RepDIB}}$ does not necessarily always outperform the baselines, as shown in the results with the DRIML objective. This validates our claim that ${\textsc{RepDIB}}$-based bottlenecks are particularly effective when observations contain exogenous information, so that ${\textsc{RepDIB}}$ can be used to learn more robustly. Without any distractors, ${\textsc{RepDIB}}$-based representations will not always outperform baselines without bottlenecks.}
\label{fig:offline_no_distractor_appendix}
\end{figure}
\section{Significance of VIB and DIB for {\textsc{RepDIB}}}
\label{rebuttal:vib_dib_comparison}
In this section, we include additional results based on ablation studies of the {\textsc{RepDIB}} objective. In figures \ref{fig:rebuttal_ablation_image} and \ref{fig:rebuttal_ablation_video} we compare {\textsc{RepDIB}} with using \textit{only} the discrete information bottleneck (DIB) and with using \textit{only} the variational information bottleneck (VIB), on top of several existing representation objectives as described in section \ref{sec:visual_offline_exps}. The experimental results show that the performance improvement of {\textsc{RepDIB}} is primarily achieved when the VIB bottleneck is applied prior to the DIB bottleneck, as explained in the main draft. Without the combination of the two, using either bottleneck alone does not yield the expected performance improvements.
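The two-stage combination described above can be sketched as follows; this is a minimal numpy illustration, assuming a Gaussian noise step for the VIB followed by nearest-codebook quantization for the DIB, not the exact implementation used in our experiments:

```python
import numpy as np

def vib_then_dib(x, codebook, rng, noise_scale=0.1):
    """Illustrative sketch of the combined RepDIB bottleneck: a variational
    information bottleneck (Gaussian noise on the embedding) applied BEFORE
    a discrete bottleneck (nearest-codebook quantization). Function name,
    shapes and the fixed noise scale are assumptions."""
    # VIB step: treat x as the mean of a Gaussian posterior and sample from it
    z = x + noise_scale * rng.standard_normal(x.shape)
    # DIB step: snap each noisy embedding to its nearest codebook vector
    dists = np.linalg.norm(z[:, None, :] - codebook[None, :, :], axis=-1)
    idx = dists.argmin(axis=1)
    return codebook[idx], idx
```

Applying the VIB noise first smooths the embedding before quantization; ablating either step removes the effect we report above.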
\begin{figure}[!htbp]
\centering
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/rebuttal/repdib_ablation/image/ac_state_walker_walk_expert.pdf}
}
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/rebuttal/repdib_ablation/image/driml_cheetah_run_expert.pdf}
}
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/rebuttal/repdib_ablation/image/icm_walker_walk_expert.pdf}
}
\caption{Ablation studies on the {\textsc{RepDIB}} bottleneck on time correlated exogenous distractors in the observations of offline datasets, as per the setup described in section \ref{sec:offline_exo}}
\label{fig:rebuttal_ablation_image}
\end{figure}
\begin{figure}[!htbp]
\centering
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/rebuttal/repdib_ablation/video/ac_state_cheetah_run_expert.pdf}
}
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/rebuttal/repdib_ablation/video/ac_state_walker_walk_expert.pdf}
}
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/rebuttal/repdib_ablation/video/icm_walker_walk_expert.pdf}
}
\caption{Ablation studies on the {\textsc{RepDIB}} bottleneck on changing background video based exogenous distractors in the observations of offline datasets, as per the setup described in section \ref{sec:offline_exo}}
\label{fig:rebuttal_ablation_video}
\end{figure}
\subsection{Generalization on Continuous Control Tasks using URLB Benchmark}
\begin{figure}[!htbp]
\centering
\includegraphics[
width=0.9\textwidth]{figures/envs/DMC_env.png}
\caption{The three domains (walker, quadruped, jaco arm) and twelve downstream tasks.}
\label{fig:dmc_env}
\end{figure}
\textbf{Experiment Setup and Details : } We follow the same set of domains and downstream tasks as in \cite{URLB} (see Figure \ref{fig:dmc_env}). From easiest to hardest, the domains and tasks are: \textbf{Walker} (\textit{Stand}, \textit{Walk}, \textit{Flip}, \textit{Run}): an improved planar walker based on the one introduced in \cite{LillicrapHPHETS15}. In the \textit{Stand} task, the reward combines terms encouraging an upright torso and a minimal torso height; in the \textit{Walk} and \textit{Run} tasks, the reward is proportional to forward velocity; and in the \textit{Flip} task, it is proportional to angular velocity. \textbf{Quadruped} (\textit{Stand}, \textit{Walk}, \textit{Jump}, \textit{Run}): a quadruped in a 3D space. The reward functions are similar to those of walker, but quadruped is harder due to its high-dimensional state and action spaces and the 3D environment. \textbf{Jaco Arm} (\textit{Reach top left}, \textit{Reach top right}, \textit{Reach bottom left}, \textit{Reach bottom right}): Jaco Arm is a 6-DOF robotic arm with a three-finger gripper that tests the ability to control the arm to perform simple manipulation tasks. For a more detailed explanation, refer to \cite{URLB}. In Table \ref{tab:hyper_cont} we present the set of hyper-parameters used in continuous control tasks.
\textbf{Experiment Results : } For the online continuous control tasks, we test for generalization using the URLB benchmark \cite{URLB}, on 12 different environments as shown in figure \ref{fig:dmc_env}. In these experiments, representations are pre-trained on one environment for $100k$ pre-training steps, followed by fine-tuning both the RL algorithm and the encoder in a different environment. Existing baselines such as ProtoRL \cite{yarats2021protorl} have already shown impressive performance compared to other baselines on the URLB benchmark. For more details on the experiment setup and comparisons of ProtoRL with other baselines, see \cite{URLB}. In this task, we take the open-source code of the ProtoRL baseline and simply integrate ${\textsc{RepDIB}}$ on top of the encoders, with different numbers of factors for learning representations. The goal of the experiments is to show that when using compressed representations that are structured and factorial in nature, the compression applied on the pre-training task helps fine-tuning on other tasks when the same bottleneck is applied again. Figure \ref{fig:urlb_ablations} summarizes the ablation studies of ${\textsc{RepDIB}}$ built on top of the ProtoRL baseline. Our results in figure \ref{fig:urlb_ablations} show that fine-tuning performance is mostly improved compared to the baseline, typically for higher numbers of factors.
\begin{figure*}[!htbp]
\centering
\subfigure{
\includegraphics[
width=0.33\textwidth]{figures/urlb/ablations/walker_flip.pdf}
}
\hspace{-0.8cm}
\subfigure{
\includegraphics[
width=0.33\textwidth]{figures/urlb/ablations/walker_run.pdf}
}
\hspace{-0.8cm}
\subfigure{
\includegraphics[
width=0.33\textwidth]{figures/urlb/ablations/quadruped_jump.pdf}
}\\
\hspace{-0.8cm}
\subfigure{
\includegraphics[
width=0.33\textwidth]{figures/urlb/ablations/quadruped_run.pdf}
}
\hspace{-0.8cm}
\subfigure{
\includegraphics[
width=0.33\textwidth]{figures/urlb/ablations/quadruped_stand.pdf}
}
\hspace{-0.8cm}
\subfigure{
\includegraphics[
width=0.33\textwidth]{figures/urlb/ablations/quadruped_walk.pdf}
}\\
\hspace{-0.8cm}
\subfigure{
\includegraphics[
width=0.33\textwidth]{figures/urlb/ablations/jaco_reach_bottom_left.pdf}
}
\hspace{-0.8cm}
\subfigure{
\includegraphics[
width=0.33\textwidth]{figures/urlb/ablations/jaco_reach_bottom_right.pdf}
}
\hspace{-0.8cm}
\subfigure{
\includegraphics[
width=0.33\textwidth]{figures/urlb/ablations/jaco_reach_top_left.pdf}
}
\caption{\textbf{URLB Benchmark for Continuous Control} Ablation analysis on the URLB benchmark, integrating ${\textsc{RepDIB}}$ on top of the ProtoRL baseline with different factorizations of the discrete bottleneck. Our experiment results show that the factorization in representation, depending on the number of factors, can play a vital role in improving the performance on the generalization task.}
\label{fig:urlb_ablations}
\end{figure*}
\subsection{Robot Arm Experiment}
\begin{figure}[!htbp]
\centering
\subfigure{
\includegraphics[
width=0.33\textwidth]{figures/robot_data_1.jpg}
}
\hspace{-0.8cm}
\subfigure{
\includegraphics[
width=0.33\textwidth]{figures/robot_data_2.jpg}
}
\hspace{-0.8cm}
\subfigure{
\includegraphics[
width=0.33\textwidth]{figures/robot_data_3.jpg}
}
\caption{Experiment setup for robot data collection in the presence of exogenous or irrelevant background information. The figure shows three images of the robot arm with varying background information, which are part of the collected dataset.}
\label{fig:robot_arm}
\end{figure}
\paragraph{Robot Arm Experiment with Background Video Exogenous Distractors:} The robot arm in our experiments moves in a grid with $9$ different positions. We use two cameras to take images for the dataset, one from the front side of the robot and the other with a top-down view from above. We collect an image after each action is taken. The robot has 5 actions: move forward, backward, right, left, or stay in the current state. We use an episode length of $500$, i.e., the robot arm moves for $500$ steps after which we re-calibrate. The robot arm dataset is collected with a uniformly random policy over a total of $6$ hours, yielding $14000$ samples.
For learning the representation $\phi$ given the images, we use a small convolutional neural network to obtain an estimate $\phi(x)$ from the images $x$. In addition to the CNN, we further train the latent state representation with a multi-step inverse dynamics model $p(a \mid \phi(x), \phi(x_k))$, which predicts actions given the current representation $\phi(x)$ and a future representation $\phi(x_k)$. The model is trained with a cross-entropy loss against the ground-truth actions available in the dataset. We use classification accuracy as the metric for evaluating the performance of ${\textsc{RepDIB}}$.
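The multi-step inverse objective above can be sketched as a linear action head on the concatenated current and future representations, trained with cross-entropy; the weight matrix and all shapes here are illustrative assumptions, not our actual architecture:

```python
import numpy as np

def multistep_inverse_loss(phi_t, phi_tk, W, actions):
    """Hedged sketch of the multi-step inverse model p(a | phi(x), phi(x_k)):
    a linear head on the concatenated current and future representations,
    trained with cross-entropy against the logged actions. The weight
    matrix W and all shapes are illustrative assumptions."""
    feats = np.concatenate([phi_t, phi_tk], axis=1)      # (B, 2d)
    logits = feats @ W                                   # (B, n_actions)
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # negative log-likelihood of the ground-truth actions
    return -log_probs[np.arange(len(actions)), actions].mean()
```

Minimizing this loss forces $\phi$ to retain the action-relevant (endogenous) information while the bottleneck discards the rest.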
\begin{figure}
\centering
\subfigure[ Seaquest]{\includegraphics[width = 6cm]{figures/Seaquest.pdf}}
\subfigure[ Breakout]{\includegraphics[width = 6cm]{figures/Breakout.pdf}}
\caption{Here we show the effect of the number of discretization factors on the model performance for 2 different Atari games. On the ALE benchmark, we find that factor $4$ usually outperforms other factors when learning representations with factorial structure.}
\label{fig:factors}
\end{figure}
\subsection{Atari Benchmark with Exogenous Observations}
\textbf{Experiment Setup : } We follow the experiment setup of decision transformers on the Atari domain following \cite{NEURIPS2021DT}. However, in addition to the environment observations from Atari games, we augment the observations with exogenous noise on the side. For this, we use CIFAR images placed on the side of environment observations as exogenous noise. In Figure \ref{fig:atari_example_observations}, we show example observations from Atari games with exogenous noise added. The goal is to study the effect of ${\textsc{RepDIB}}$ when integrated on top of a multi-step inverse dynamics objective for learning robust representations \cite{lamb2022guaranteed}. We keep most of the hyperparameter details the same as in \cite{NEURIPS2021DT}. They use episodes of fixed length during training, also referred to as the \textit{context length}. We use a context length of 30 for Seaquest and Breakout. Similar to \cite{NEURIPS2021DT}, we consider one observation to be a stack of 4 Atari frames. To implement the multi-step inverse objective, we sample 8 different values for $k$ and calculate the objective for each value of $k$, obtaining the final loss by taking the sum across all the sampled values of $k$. We do not feed the embedding for $k$ into the MLP that predicts the action while computing the multi-step inverse objective.
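The aggregation over sampled offsets described above can be sketched as follows; a minimal Python illustration assuming a generic per-$k$ loss callable, not the decision-transformer codebase:

```python
import numpy as np

def summed_multistep_objective(phi, actions, per_k_loss, rng,
                               n_samples=8, context_len=30):
    """Sketch of the aggregation described above: sample 8 values of the
    offset k within the context length, evaluate the per-k inverse loss,
    and sum over the sampled offsets. `per_k_loss` and all shapes are
    assumed placeholders."""
    T = len(actions)
    total = 0.0
    for k in rng.integers(1, min(context_len, T - 1) + 1, size=n_samples):
        k = int(k)
        # pair phi(x_t) with phi(x_{t+k}) and the action a_t taken at time t
        total += per_k_loss(phi[:-k], phi[k:], actions[:-k])
    return total
```

Summing over several sampled $k$ values, rather than fixing one horizon, is what makes the objective multi-step.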
\begin{figure}
\centering
\subfigure[ Pong]{\includegraphics{figures/pong.png}}
\subfigure[ Qbert]{\includegraphics{figures/qbert.png}}
\subfigure[ Seaquest]{\includegraphics{figures/seaquest.png}}
\subfigure[ Breakout]{\includegraphics{figures/breakout.png}}
\caption{Example observations from 4 different Atari games, with exogenous images placed on the side of environment observations. We add exogenous noise to show the importance of learning robust representations using an information bottleneck following ${\textsc{RepDIB}}$. }
\label{fig:atari_example_observations}
\end{figure}
Figure \ref{fig:factors} shows the effect of the number of factors used in the discrete information bottleneck of ${\textsc{RepDIB}}$ on model performance. The effect of the number of factors on the overall performance of the Decision Transformer can vary depending on the game and domain.
\begin{figure}[!t]
\centering
\includegraphics[width=\textwidth]{figures/multi_modal/sample_mmact.png}
\caption{Sample data of human activities from the MMAct dataset with five modalities: visual views 1 \& 2, gyroscope, orientation, and acceleration.}
\label{fig:mmact_sample}
\end{figure}
\subsection{Multi-Modal Representation Learning on Human Activity Recognition Task}
\begin{table}[!t]
\centering
\small
\begin{tabular}{lc}
\toprule
Method & F1-Score (\%) \\ \midrule
SVM+HOG \cite{ofli2013berkeley_mhad} & 46.52 \\
TSN (RGB) \cite{wang2016temporal_tsn} & 69.20 \\
TSN (Optical-Flow) \cite{wang2016temporal_tsn} & 72.57 \\
MMAD \cite{kong2019mmact} & 74.58 \\
TSN (Fusion) \cite{wang2016temporal_tsn} & 77.09 \\
MMAD (Fusion) \cite{kong2019mmact} & 78.82\\
Keyless \cite{keyless} & 81.11 \\
HAMLET \cite{islam2020hamlet} & 83.89 \\
RepDIB+MM(Keyless) & 71.35 \\
\textbf{RepDIB+MM(RepDIB+Uni)} & \textbf{84.96} \\
\bottomrule
\end{tabular}
\caption{\textbf{Cross-session performance} comparison (F1-Score) of multimodal learning methods on MMAct dataset}
\label{tab:mmact_session}
\end{table}
\paragraph{Dataset: } The MMAct dataset contains 37 activities (e.g., carrying objects, falling, kicking, talking on the phone, jumping, using PCs, sitting). Twenty people performed each activity five times, resulting in $37k$ data samples. All activities are captured with seven modalities: four RGB views, acceleration, gyroscope, and orientation. We used data from two opposing RGB visual views and the acceleration, gyroscope, and orientation modalities to train and test. The MMAct dataset contains visually occluded data samples, which allows evaluating the effectiveness of HAR approaches in real-world settings. Sample human activity data are depicted in Figure~\ref{fig:mmact_sample}.
\paragraph{Experimental Setup for Multimodal Model Evaluation in Cross-Session Setting: } In this supervised learning task, the model uses multimodal sensor data to recognize human activities. We extend state-of-the-art multimodal representation learning models to extract salient representations using the RepDIB information bottleneck. We extended the baseline multi-modal models in two ways to incorporate the VQ bottleneck: \textbf{{\textsc{RepDIB}}+MM: } We extract multi-modal representations using existing models (e.g. Keyless \cite{keyless} and HAMLET \cite{islam2020hamlet}) and then apply the VQ bottleneck on the fused multi-modal representations. \textbf{{\textsc{RepDIB}}+MM({\textsc{RepDIB}}+Uni):} We apply the VQ bottleneck in two steps. First, we extract unimodal representations and apply the VQ bottleneck to produce discretized unimodal representations. These discretized representations are fused and passed through a VQ bottleneck to produce task representations for activity recognition.
In the baselines, we used five modalities: two viewpoints of RGB videos and three wearable sensors (acceleration, gyroscope, and orientation). We evaluated all the baselines on the MMAct dataset in a cross-session evaluation setting and report the F1-score of the activity recognition task \cite{kong2019mmact}. In the cross-session evaluation setting, the training and testing datasets can contain data from the same human subjects.
We train these models using cross-entropy loss. We use the Adam optimizer with weight decay regularization and a cosine annealing warm restarts learning rate scheduler, where the initial learning rate is set to $3e^{-4}$. To train the learning model on the MMAct dataset, we set the cycle length ($T_0$) and cycle multiplier ($T_{mult}$) to $30$ and $2$, respectively. We trained the models for $210$ epochs in a distributed GPU cluster environment, where each node contains $8$ A100 GPUs. We used the Pytorch and Pytorch-Lightning frameworks to implement all the models. To ensure reproducibility we use a fixed seed.
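For reference, the schedule above can be restated in pure Python; this is a sketch of the standard cosine-annealing-with-warm-restarts formula with our settings ($T_0=30$, $T_{mult}=2$, initial lr $3e^{-4}$), not our training script:

```python
import math

def cosine_warm_restart_lr(epoch, base_lr=3e-4, T0=30, Tmult=2, eta_min=0.0):
    """Pure-Python restatement of the cosine-annealing-with-warm-restarts
    schedule (initial lr 3e-4, T_0 = 30, T_mult = 2). A sketch of the
    standard schedule, not the authors' training code."""
    # locate the current cycle: lengths are T0, T0*Tmult, T0*Tmult^2, ...
    t, T = epoch, T0
    while t >= T:
        t -= T
        T *= Tmult
    # cosine decay from base_lr down to eta_min within the current cycle
    return eta_min + 0.5 * (base_lr - eta_min) * (1 + math.cos(math.pi * t / T))
```

At the start of each cycle the learning rate restarts at $3e^{-4}$ and decays along a cosine curve; each cycle is twice as long as the previous one.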
\paragraph{Experimental Results: } The results in Table~\ref{tab:mmact_session} suggest that incorporating the VQ bottleneck on the existing multi-modal learning model (Keyless) degrades the F1-score of activity recognition from $81.11\%$ to $71.35\%$. However, applying the VQ bottleneck on both the unimodal and multimodal representations improves performance compared to models that do not use ${\textsc{RepDIB}}$ or use ${\textsc{RepDIB}}$ only on the multimodal representations. For example, the ${\textsc{RepDIB}}+MM({\textsc{RepDIB}}+Uni)$ model uses the same HAMLET model and applies ${\textsc{RepDIB}}$ on both the unimodal and multimodal representations. ${\textsc{RepDIB}}+MM({\textsc{RepDIB}}+Uni)$ improves the F1-score of activity recognition to $84.96\%$ and outperforms all the evaluated multimodal models. Thus, hierarchical VQ bottlenecks can help extract salient multimodal representations for accurately recognizing activities.
\subsection{Maze Navigation Tasks}
\begin{figure}[!htbp]
\centering
\subfigure[GridWorld]{
\includegraphics[
width=0.3\textwidth]{figures/envs/grid_env.png}
}
\hspace{-0.8cm}
\subfigure[LoopWorld]{
\includegraphics[
width=0.3\textwidth]{figures/envs/loop_env.png}
}
\hspace{-0.8cm}
\subfigure[SpiralWorld]{
\includegraphics[
width=0.3\textwidth]{figures/envs/spiral_env.png}
}
\caption{Three environments in maze navigation tasks. The blue lines represent the walls.}
\label{fig:maze_env}
\end{figure}
\paragraph{Maze Navigation Tasks} We develop three environments for maze navigation tasks: \textit{GridWorld}, \textit{SpiralWorld}, and \textit{LoopWorld} (see Figure \ref{fig:maze_env}). All of these environments share the same action space and state space, but their dynamics differ slightly. \textit{GridWorld} is the easiest task, without any walls, so the agent can go wherever it wants. \textit{SpiralWorld} is the hardest: spiral-shaped walls block the path of the agent, which can only navigate along the spiral grid. \textit{LoopWorld} is a variant of \textit{SpiralWorld} in which the agent can pass through a vacancy in the bottom-right corner of the spiral-shaped wall. At each timestep the agent chooses one of four directions to travel in. The reward is -1 at every step until the agent reaches the goal, where it receives a reward of 0 and the episode terminates. During the pre-training stage, we learn state representations on \textit{GridWorld} with data collected by a random policy. During the fine-tuning stage, the agent is trained to reach a goal from a small finite set of training goals, and is tasked with reaching a fixed goal at the center of the maze during evaluation. In Table \ref{tab:hyper_maze} we present the set of hyper-parameters used in maze navigation tasks.
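The dynamics and reward described above can be sketched as a single step function; this is a minimal illustration of the wall-free \textit{GridWorld} case, with the grid size and action encoding as assumptions (wall handling for \textit{SpiralWorld}/\textit{LoopWorld} is omitted):

```python
def grid_step(pos, action, goal, size=9):
    """Minimal sketch of the maze dynamics and reward described above:
    four movement actions, reward -1 per step, and reward 0 with
    termination at the goal. Grid size and action encoding are assumed;
    this is the wall-free GridWorld case."""
    moves = {0: (0, 1), 1: (0, -1), 2: (1, 0), 3: (-1, 0)}
    dx, dy = moves[action]
    # clip the move to the grid boundary
    new_pos = (min(max(pos[0] + dx, 0), size - 1),
               min(max(pos[1] + dy, 0), size - 1))
    if new_pos == goal:
        return new_pos, 0.0, True   # goal reached: reward 0, episode ends
    return new_pos, -1.0, False     # otherwise: reward -1, continue
```

The walled variants would differ only in rejecting moves that cross a wall segment.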
\section{Demonstrating Factorial Representation}
\label{rebuttal:factorial_representation}
\begin{figure*}
\centering
\subfigure[Brightness and Details]{\includegraphics[width=0.9\textwidth]{figures/rebuttal/reconstruction_165.png}}
\subfigure[Different colors]{\includegraphics[width=0.9\textwidth]{figures/rebuttal/reconstruction_250.png}}
\caption{PACS-cartoon-elephant dataset example to demonstrate factorized representations. Top row: original image; second row: reconstructed image without substitution; third row: reconstructed image with one group of discrete codes substituted by zero vectors; last row: reconstructed image with the other group of discrete codes substituted by zero vectors.}
\label{fig:pacs}
\end{figure*}
We demonstrate that with the discrete factorial information bottleneck, the agent is capable of learning factorial representations on real-world data. We provide more details as follows.
\paragraph{Experiment details} To investigate whether the agent can learn semantically factorial representations with {\textsc{RepDIB}}, we use the cartoon-domain images from the PACS benchmark dataset~\cite{PACS}, where only the elephant category is used for training and evaluation, for the purpose of intuitive illustration. The pixel-based input, of size 224x224, is first passed through an encoder (CNN layers with resnet blocks) to obtain a latent representation of dimension 32; the latent representation is then quantized into two groups of discrete codes, with a codebook size of 512. The two groups of discrete codes are concatenated to form the representation, which is finally passed through a decoder network (CNN layers with resnet blocks). We use a reconstruction loss (MSE) combined with the vector quantization loss to train the network. To visualize the semantic meaning of the different groups, we randomly sample 25 pictures from the dataset and pass them through the network to obtain reconstructions. Ideally, we would like to know whether different groups capture different semantic aspects of an image. For this purpose, we substitute one group of discrete codes with a zero vector and obtain the reconstructed image by concatenating it with the other group of discrete codes. As a consequence, we have three reconstructed images in total, as shown in Figure~\ref{fig:pacs}.
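The substitution probe described above can be sketched in a few lines; the flat concatenated layout and equal group sizes are assumptions made for illustration:

```python
import numpy as np

def zero_out_factor(codes, factor_idx, n_factors=2):
    """Sketch of the visualization probe described above: substitute one
    group of (quantized) codes with zeros before decoding, keeping the
    other group intact. The flat concatenated layout and equal group
    sizes are assumptions."""
    probed = codes.copy()
    d = codes.shape[-1] // n_factors
    probed[..., factor_idx * d:(factor_idx + 1) * d] = 0.0
    return probed
```

Decoding `zero_out_factor(codes, 0)` and `zero_out_factor(codes, 1)` yields the third and last rows of Figure~\ref{fig:pacs}, revealing what each factor contributes.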
\paragraph{Experiment Results} Figure~\ref{fig:pacs} shows the reconstructed images from a trained decoder operating on a discretized 2-factor representation. We find that different factors capture different semantic information. For example, there are 4 elephants in the fifth column of Figure~\ref{fig:pacs}(a), where the elephant at the top and the elephant at the bottom-middle are brighter than the other two. For this input, factor 1 tends to capture only the shape of the elephants without the brightness, while factor 2 captures the specific details of each elephant. A similar observation can be made in the 16th column, where factor 1 captures the ``shadow'' in the picture and factor 2 captures the brightness of the elephants' skin. Another example is Figure~\ref{fig:pacs}(b), where the two factors learn ``green'' and ``purple'' separately for reconstructing ``black'', and ``pink'' and ``orange'' separately for reconstructing ``red''.
\section{Explanation and Significance of {\textsc{RepDIB}}}
\label{rebuttal:significance_work}
We would like to provide further clarification about the significance of our work. In this work, we do not propose any new representation learning objective; rather, we propose that discrete information bottlenecks can be significant when it comes to learning representations. Moreover, an approach based on {\textsc{RepDIB}} is demonstrated to be even more impactful when the learnt representation needs to discard exogenous or irrelevant information from the observations. We demonstrate this across a range of experiments, not only in RL, but also in other tasks such as human activity recognition. Our experiments are primarily based on RL benchmarks, where we demonstrate that {\textsc{RepDIB}} can be easily applied on top of any learnt representations. To do this, we take existing baseline approaches proposing representation learning objectives and demonstrate the ease with which {\textsc{RepDIB}} can be integrated on top of the learnt representations.
We emphasize that although information bottlenecks have been studied extensively in past literature, the use of discrete information bottlenecks is relatively new; moreover, applying bottlenecks on top of representation learning objectives, especially to discard exogenous information, has been little studied in the past. Our aim is to propose an information bottleneck that not only captures factorial or compositional representations, but also plays a key role in extracting only the relevant latent representation; most importantly, it can be applied on top of any deep RL algorithm relying on an additional representation learning module.
\section{Discussion}
\vspace{-3mm}
\label{sec:discussion}
\textbf{Conclusion}. Representation learning methods in RL have been extensively studied in the recent past. However, when learning directly from observations consisting of exogenous information, learning robust representations becomes vital. To this end, we propose ${\textsc{RepDIB}}$, which learns robust representations by inducing a factorized structure in the embedding space. Our work shows that discrete bottleneck representations that compress the relevant information from observations can lead to substantial improvements on downstream tasks, as shown in our experimental results.
\textbf{Limitations and Future Work}. Whether bottlenecks with different factors \textit{truly} lead to a compositional representation space that can disentangle different factors of variation in observations is an interesting avenue for future work. While we enforce discrete factorization, we provide no theoretical proof that this corresponds to an actual factorization structure in the data. We believe that inducing such compositional structure can shape the path towards truly achieving better generalization capabilities in RL agents. How to achieve and leverage a compositional representation space for better generalization remains an interesting question, both theoretically and empirically.
\section{Experiments : Representations with Information Bottleneck}
\label{sec:experiments}
\vspace{-3mm}
We seek to understand the effectiveness of information compression in representations. We emphasize that ${\textsc{RepDIB}}$ can be applied as a plug-in approach, on top of any existing framework that learns representations with a self-supervised objective. Through our experiments, we answer the following questions:
\textbf{Does inducing structure in representation space help with exploration?} We first demonstrate on simple toy tasks that learning representations with {{\textsc{RepDIB}}} can induce a factorized structure that leads to effective exploration. By using discrete information bottlenecks, we can recover the underlying discrete latent states while also learning a factorized embedding space, which leads to better exploration in maze tasks with a simple DQN agent.
\textbf{Do factorized representations with {\textsc{RepDIB}} help learn task-agnostic pre-trained representations, for better generalization capabilities? } We evaluate ${\textsc{RepDIB}}$ on several complex control tasks using the URLB benchmark \cite{URLB} for testing generalization capabilities. In this setting, we pre-train representations in a reward-free manner on a given task, followed by fine-tuning on different downstream tasks. Most importantly, we show that the sample efficiency of ${\textsc{RepDIB}}$ can further be improved as a function of pre-training steps, where ${\textsc{RepDIB}}$ can improve downstream performance with only a few pre-training steps.
\textbf{Does the information bottleneck help learn parsimonious representations in a real robot arm task, while ignoring background distractors? } To answer this, we use data collected with a real robot arm, with temporally structured background noise from lighting, TV, and video. In this setting, we show that ${\textsc{RepDIB}}$ can capture the relevant factors of variation and ignore irrelevant distractors through the use of the information and discretization bottleneck.
\textbf{Do bottleneck representations help in sequence modelling problems from offline datasets?} We evaluate ${\textsc{RepDIB}}$ in the offline Atari benchmark, where environment observations consist of additional exogenous information, using the Decision Transformer \cite{NEURIPS2021DT}, and find that pre-trained representations with ${\textsc{RepDIB}}$ are robust to distractors.
\textbf{What is the impact of the VQ information bottleneck for extracting unimodal and fusing multi-modal representations?} We study ${\textsc{RepDIB}}$ on an existing human activity recognition dataset in a multi-modal learning setting. We show that compressing each single-modality representation with an information bottleneck before fusion, followed by compressing the resulting multi-modal representation, helps ${\textsc{RepDIB}}$ achieve performance improvements over existing baselines that apply an information bottleneck only on the multi-modal representations for activity recognition tasks.
\begin{figure}[!htbp]
\centering
\subfigure{
\includegraphics[
width=0.4\textwidth]{figures/visualizations/heatmap_baseline.png}
}
\subfigure{
\includegraphics[
width=0.4\textwidth]{figures/visualizations/heatmap_repdib.png}
}
\caption{\textbf{Relative Distance in Representation Space}. (Left): Baseline DQN agent. (Right): DQN agent with ${\textsc{RepDIB}}$. We show that ${\textsc{RepDIB}}$ learns representations that better capture the underlying topology of how the agent can move in the maze. The darkness of each position shows how similar its representation is to that of the center point.}
\label{fig:topology}
\end{figure}
\vspace{-3.5mm}
\subsection{Maze Navigation Tasks}
\vspace{-2mm}
\begin{figure*}[htbp]
\includegraphics[width=0.7\textwidth]{figures/architecture.png}
\caption{\textbf{Summary of ${\textsc{RepDIB}}$} integrated on top of the ProtoRL baseline \cite{yarats2021protorl} for testing generalization capability in continuous control tasks from URLB benchmark \cite{URLB}. We find that ${\textsc{RepDIB}}$ improves intrinsically-motivated exploration (left). An information bottleneck is used to encourage the discrete codes to be parsimonious. The reward-based fine-tuning stage remains unchanged when using ${\textsc{RepDIB}}$ (right). }
\label{fig:archit2}%
\end{figure*}
\textbf{Experiment Details}. We use three kinds of maze navigation tasks to evaluate the effectiveness of learning parsimonious representations with ${\textsc{RepDIB}}$: \textit{GridWorld}, \textit{SpiralWorld}, \textit{LoopWorld}. We first learn the state representations on \textit{GridWorld} with data collected by a random policy, and then adapt the pre-trained representations to all these three tasks to learn the end-task policy.
\textbf{Experiment Results}. We study the spiral and loop world maze navigation tasks with a baseline DQN agent. During fine-tuning based on pre-trained representations from an empty gridworld, we simultaneously update both the representations and the DQN agent, given pixel-based observations. We find that the induced factorized representation structure leads to better coverage of the state space, as demonstrated in Figure~\ref{fig:coverage}, while also capturing the topology of the maze in representation space, as shown in figure \ref{fig:topology}. For the self-supervised representation learning objective, we use DRIML \cite{MazoureCDBH20} for reward-free representation learning, with ${\textsc{RepDIB}}$ integrated on top of the encoder. Experiment results in figure \ref{fig:maze_results} show that the induced structure, based on different group factors of the discretization bottleneck, leads to improved performance compared to a baseline DQN agent.
\begin{figure}[!htbp]
\centering
\subfigure{
\includegraphics[
width=0.4\textwidth]{figures/visualizations/baseline_visitation.png}
}
\subfigure{
\includegraphics[
width=0.4\textwidth]{figures/visualizations/repdib_visitation.png}
}
\caption{\textbf{State Space Coverage} comparing DQN agent (left) and DQN agent with ${\textsc{RepDIB}}$ (right). Factorized Representations with ${\textsc{RepDIB}}$ leads to better state space coverage in reward-free exploration. \vspace{-5mm}}
\label{fig:coverage}
\end{figure}
\vspace{-3.5mm}
\subsection{Generalization in Continuous Control}
\vspace{-2mm}
\begin{figure*}[htbp]
\centering
\hspace{-0.8cm}
\subfigure{
\includegraphics[
width=0.25\textwidth]{figures/urlb/quadruped_run.pdf}
}
\hspace{-0.8cm}
\subfigure{
\includegraphics[
width=0.25\textwidth]{figures/urlb/quadruped_walk.pdf}
}
\hspace{-0.8cm}
\subfigure{
\includegraphics[
width=0.25\textwidth]{figures/urlb/walker_flip.pdf}
}
\hspace{-0.8cm}
\subfigure{
\includegraphics[
width=0.25\textwidth]{figures/urlb/walker_run.pdf}
}
\caption{\textbf{Fine-Tuning performance} on different domains, with pre-trained representations learnt with ${\textsc{RepDIB}}$ and comparison with ProtoRL baseline. \vspace{-4mm} }
\label{fig:dmc_results_1}
\end{figure*}
${\textsc{RepDIB}}$ is then evaluated on a range of continuous control tasks with visual observations. We integrate ${\textsc{RepDIB}}$ into the state-of-the-art Proto-RL baseline \cite{yarats2021protorl}, as shown in Figure~\ref{fig:archit2}, which has been shown to learn good representations from pre-training for better fine-tuning performance. We follow the experiment setup from the URLB benchmark \cite{URLB}, explained below, and compare ${\textsc{RepDIB}}$ with a baseline Proto-RL agent, since Proto-RL has been shown to outperform other baselines that learn self-supervised representations.
\begin{figure*}[htbp]
\centering
\subfigure{
\hspace{-0.3cm}
\includegraphics[
width=0.49\textwidth]{figures/barplot-walker.png}
}
\subfigure{
\includegraphics[
width=0.49\textwidth]{figures/barplot.png}
}\\
\subfigure{
\hspace{-0.3cm}
\includegraphics[
width=0.49\textwidth]{figures/barplot-walker-wo.png}
}
\subfigure{
\includegraphics[
width=0.49\textwidth]{figures/barplot_wo.png}
}
\caption{\textbf{Downstream impact} of varying the number of pre-training steps ($100K$, $500K$ or $1M$ timesteps). The x-axis shows different fine-tuning steps. With the use of the VIB and a discretization bottleneck, there is a gradual improvement in performance (\textbf{top row}); however, fine-tuning performance can degrade as a function of pre-training steps when a ${\textsc{RepDIB}}$-based bottleneck is not used in the baseline Proto-RL agent (\textbf{bottom row}).}
\label{fig:pretrained_models}
\end{figure*}
\textbf{Experiment Details}. The key to Proto-RL is to learn a set of prototypical vectors by projecting embeddings onto clusters, referred to as prototypes. To ensure exploration, an approximation to the entropy of the latent state distribution is optimized, based on the learnt prototypical representations. This forms the basis for an intrinsic reward function that ensures sufficient coverage during a task-agnostic, reward-free pre-training phase. In contrast to Proto-RL, the key to our approach, ${\textsc{RepDIB}}$, is to learn discrete prototypes that ensure a factorized structure in the latent representation. We refer to this as \textit{structured exploration}, since ${\textsc{RepDIB}}$ exploits the information bottleneck and vector quantization to induce a factorial structure in the embeddings.
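As an illustration, the Proto-RL-style soft assignment of embeddings to prototypes can be sketched as follows (a minimal numpy sketch; function and variable names are our own, not taken from the Proto-RL codebase):

```python
import numpy as np

def prototype_assign(z, prototypes, tau=0.1):
    """Soft-assign L2-normalised embeddings to learnt prototypes via a
    temperature softmax over cosine similarities."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    logits = z @ p.T / tau                        # (B, num_prototypes)
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum(axis=1, keepdims=True)
```

Each row of the output is a probability distribution over prototypes; embeddings close to a prototype concentrate their mass on it.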
We use a total of 12 continuous control tasks of varying difficulty (3 domains with 4 downstream tasks per domain): \textit{Walker}, \textit{Quadruped} and \textit{Jaco Arm}. The agent is pre-trained in a given domain, and then adapted to the downstream tasks within that domain. We follow the same experiment pipeline as in the URLB benchmark~\cite{URLB}. We checkpoint the agent at 100k, 500k, 1M and 2M time-steps during pre-training, and then evaluate the adaptation ability of the method by fine-tuning the pre-trained policy on downstream tasks.
\textbf{Pre-Training:} During the pre-training phase, we train the ${\textsc{RepDIB}}$ agent in a task-agnostic, reward-free setting. The goal here is to encourage the agent to reach unseen regions and collect more diverse data, which can further help in learning better representations. For this, we follow a similar procedure to Proto-RL, where the agent is trained to maximize coverage by estimating an approximation to the entropy of the latent state distribution \cite{yarats2021protorl}. Instead of estimating entropy based on an unstructured representation, ${\textsc{RepDIB}}$ utilizes the factorized structure in the representation space, obtained through the information bottleneck followed by the discretization module. ${\textsc{RepDIB}}$ therefore computes the intrinsic reward based on the discrete factorial embeddings, for more efficient structured exploration.
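The intrinsic reward described above can be sketched with a particle-based $k$-nearest-neighbour entropy proxy over the embeddings (a hypothetical minimal sketch in numpy, not the exact Proto-RL estimator):

```python
import numpy as np

def knn_intrinsic_reward(embeddings, k=3):
    """Reward each embedding by the log-distance to its k-th nearest
    neighbour in the batch: embeddings in sparsely visited regions of
    latent space receive higher intrinsic reward."""
    diffs = embeddings[:, None, :] - embeddings[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)   # (B, B), diagonal is 0
    kth = np.sort(dists, axis=1)[:, k]       # k-th non-self neighbour
    return np.log(1.0 + kth)
```

Maximising this reward pushes the agent toward rarely visited latent states, approximating entropy maximisation over the latent state distribution.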
\textbf{Fine-Tuning}. In the second phase, the agent is fine-tuned to solve new tasks, testing its generalization capability. We use the learnt representation to collect a dataset, which is then used by any standard off-policy RL algorithm such as soft actor-critic (SAC) \cite{SAC}. Since the compositional structure is mostly exploited during the pre-training phase, we freeze the learnt representation during fine-tuning to study the effectiveness of the reward-free representation.
\textbf{Experiment Results on Control Tasks}. Since Proto-RL has been shown to outperform existing baselines, including random exploration with DrQ \cite{DrQ}, curiosity-based exploration with ICM \cite{ICM} and unsupervised active pre-training (APT) \cite{APT}, in this work we mostly compare to the state-of-the-art Proto-RL baseline. We provide comparisons for ${\textsc{RepDIB}}$ with the variational information bottleneck and for ${\textsc{RepDIB}}$ with only the discretization bottleneck (denoted ${\textsc{RepDIB}}$ only). Having pre-trained the representation encoder with and without a bottleneck in the reward-free setting, we test the fine-tuning performance of the RL algorithm based on the fixed, learnt representation. Figure~\ref{fig:dmc_results_1} demonstrates the significance of the ${\textsc{RepDIB}}$ algorithm. We find that using the information bottleneck prior to discretization can significantly improve sample efficiency during fine-tuning. We further examine the significance of the information bottleneck with different KL weightings in Appendix Figure~\ref{fig:vib_kl_weightings}.
\textbf{Fine-Tuning Performance as a Function of Pre-Trained Unsupervised Representation Steps}. Downstream performance should monotonically improve with more steps of pre-training. However, it has been found that downstream performance sometimes degrades with more pre-training steps and that this counter-intuitive failure mode is common to all of the most widely used unsupervised RL algorithms \cite{URLB}. We reproduced this phenomenon in our prototypical-RL baseline. We found that {\textsc{RepDIB}} alleviates this problem, resulting in monotonic improvements in downstream performance with more pre-training steps (Figure \ref{fig:pretrained_models}).
\vspace{-2mm}
\subsection{Offline Experiments with Exogenous Distractors}
\vspace{-2mm}
\label{sec:offline_exo}
\textbf{Experiment Setup with Atari using Decision Transformer}. We first consider the reward-conditioned behavior cloning setup with decision transformers \cite{NEURIPS2021DT}, where the goal is to use ${\textsc{RepDIB}}$ to learn representations that ignore noisy or background information not relevant to the task. We consider the 4 games used in \cite{NEURIPS2021DT} (Pong, Breakout, Seaquest, Qbert), using the offline dataset of \cite{agarwal2020optimistic} for training. The model is trained with a sequence modeling objective to predict the next action given the past states, actions, and returns-to-go.
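The returns-to-go conditioning used by the Decision Transformer can be computed as a simple suffix sum over the episode's rewards (a minimal sketch):

```python
def returns_to_go(rewards):
    """Suffix sums of rewards: the return-to-go token that conditions
    the Decision Transformer at each timestep."""
    rtg, running = [], 0.0
    for r in reversed(rewards):
        running += r
        rtg.append(running)
    return rtg[::-1]
```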
To add exogenous information to the observation space, we append a randomly sampled CIFAR-10 \cite{Krizhevsky_2009_17719} image to each frame. We keep the CIFAR image fixed within an episode but use a different image across episodes. We first pre-train our convolutional encoder with the multi-step inverse objective introduced in \cite{lamb2022guaranteed}. We then train the Decision Transformer for action prediction, keeping the convolutional encoder fixed. For the proposed approach, we discretize the output of the encoder as described in Section \ref{sec:variational} before applying the multi-step inverse objective.
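A minimal numpy sketch of the multi-step inverse objective follows (a linear head stands in for the actual prediction network; names and shapes are illustrative):

```python
import numpy as np

def multi_step_inverse_loss(z_t, z_tk, actions, W):
    """Cross-entropy for predicting the first action a_t from the pair
    (z_t, z_{t+k}).  Exogenous noise carries no information about a_t,
    so minimising this loss encourages the encoder to discard it."""
    pair = np.concatenate([z_t, z_tk], axis=1)      # (B, 2d)
    logits = pair @ W                                # (B, n_actions)
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(actions)), actions].mean()
```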
\begin{table}[]
\caption{\textbf{Atari Results}. We compare the proposed \textsc{Multi-Step Inverse + {\textsc{RepDIB}}} to \textsc{Multi-Step Inverse} on 4 Atari games using the Decision Transformer setup. The proposed approach outperforms the baseline in all cases. Results are averaged across 5 seeds. \vspace{-4mm}}
\label{tab:atari_results}
\scriptsize
\tablestyle{4pt}{1.2}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{l| c c}
\textsc{Game} & \textsc{Multi-Step Inverse} & \textsc{Multi-Step Inverse + {\textsc{RepDIB}}} \\
\shline
\textsc{Pong} & \g{11.4}{2.653} & \highlight{\g{12.8}{2.561}} \\
\textsc{Qbert} & \g{878.6}{745.146} & \highlight{\g{1100.0}{898.499}} \\
\textsc{Breakout} & \g{19.8}{3.059} & \highlight{\g{41.8}{7.305}} \\
\textsc{Seaquest} & \g{915.2}{126.368} & \highlight{\g{1058.4}{116.629}} \\
\end{tabular}
}
\end{table}
\textbf{Experiment Results}. Table \ref{tab:atari_results} summarizes the Atari results: \textsc{Multi-Step Inverse + ${\textsc{RepDIB}}$} outperforms \textsc{Multi-Step Inverse} in all games, showing the effectiveness of the VQ bottleneck. We use a discretization module with 32 factors for all games. Additional results analysing the effect of the number of discretization factors are presented in the appendix.\\
\textbf{Experiments with Visual Offline RL.} We then consider the visual pixel-based offline dataset for control \cite{vd4rl}, where we learn representations using a \textsc{Multi-Step Inverse} model \cite{lamb2022guaranteed}. We consider two settings: one with no visual background distractors and another where we add time-correlated exogenous image distractors in the background. Figure \ref{fig:vd4rl} summarizes the results: in the presence of exogenous image distractors, ${\textsc{RepDIB}}$ learns more robust representations during pre-training, whereas performance is similar in the setting without additional exogenous distractors.
\begin{figure*}[!htb]
\centering
\hspace{-0.8cm}
\subfigure{
\includegraphics[
width=0.33\textwidth]{figures/offline_vd4rl/walker_walk_nodist.pdf}
}
\hspace{-0.8cm}
\subfigure{
\includegraphics[
width=0.33\textwidth]{figures/offline_vd4rl/cheetah_run_corr.pdf}
}
\hspace{-0.8cm}
\subfigure{
\includegraphics[
width=0.33\textwidth]{figures/offline_vd4rl/walker_walk_corr.pdf}
}
\caption{\textbf{${\textsc{RepDIB}}$ can learn more robust representations} due to the information bottleneck, in the presence of background exogenous distractors, when using the offline visual control setup from \cite{vd4rl}. In contrast, performance is almost identical in settings without distractors. }
\label{fig:vd4rl}
\end{figure*}
\paragraph{Comparisons with Other Information Bottleneck Approaches:} We now compare {\textsc{RepDIB}} with several other bottleneck baselines in the pixel-based offline RL setup. We follow the same experiment setup as described in Section \ref{sec:offline_exo} and integrate information bottleneck approaches on top of three existing representation learning objectives, namely AC-State \cite{lamb2022guaranteed}, one-step inverse dynamics \cite{pathak2017curiosity} and DRIML \cite{MazoureCDBH20}. We compare with \textbf{three different baselines}, along with variations of the {\textsc{RepDIB}} bottleneck.
Note that the baselines we compare with are all based on approximations of a mutual information (MI) based objective. In contrast, {\textsc{RepDIB}} does not require any MI-based approximations. We mainly compare with EMI (with MINE objectives) \cite{kimICML19}, DB (Dynamic Bottleneck) \cite{BaiDB} and SVIB \cite{SVIBa,SVIBb}, and show in Figures \ref{fig:rebuttal_comparison_bottlenecks_images} and \ref{fig:rebuttal_comparison_bottlenecks_video} how {\textsc{RepDIB}} compares with these baselines. Specifically, EMI proposes to maximize the mutual information between state embedding representations and action embedding representations by maximizing estimated lower bounds on both. DB follows the information bottleneck principle to learn a dynamics-relevant representation by maximizing the mutual information $I(Z_t; S_{t+1})$ while minimizing the mutual information $I([S_t, A_t];Z_t)$, where $Z_t$ is a compressed latent representation of $(S_t, A_t)$, and $S_t, A_t$ are the current state and action respectively. SVIB uses the mutual information between the observation and its representation as an additional penalty term on the standard RL loss, optimizing all networks by Stein Variational Gradient Descent (SVGD). Notably, for a fair comparison with the other bottlenecks, we update all networks using the Adam optimizer instead of SVGD. We emphasize that, compared to the baselines, {\textsc{RepDIB}} is easy to integrate since it only requires adding a VQ-VAE based factorization with a variational information bottleneck.
\begin{figure*}[!htbp]
\centering
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/rebuttal/comparisons/image/icm_walker_walk_expert.pdf}
}
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/rebuttal/comparisons/image/ac_state_walker_walk_expert.pdf}
}
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/rebuttal/comparisons/image/driml_cheetah_run_expert.pdf}
}
\caption{\textbf{Time-correlated exogenous images in the background.} Comparison of {\textsc{RepDIB}} with other approaches based on information bottleneck approximations in the offline RL setup. Following our previous results, we compare different bottleneck-based approaches on top of existing representation learning objectives. }
\label{fig:rebuttal_comparison_bottlenecks_images}
\end{figure*}
\begin{figure*}[!htbp]
\centering
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/rebuttal/comparisons/video/ac_state_cheetah_run_expert.pdf}
}
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/rebuttal/comparisons/video/icm_cheetah_run_expert.pdf}
}
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/rebuttal/comparisons/video/icm_walker_walk_expert.pdf}
}
\caption{\textbf{Changing video distractors as exogenous noise in the background}. We now consider a slightly more difficult setup with changing video distractors in the background. We again compare {\textsc{RepDIB}} with other information bottleneck based approaches, integrated on top of existing representation learning objectives.}
\label{fig:rebuttal_comparison_bottlenecks_video}
\end{figure*}
\subsection{Robot Arm Experiment in Presence of Irrelevant Background Information}
\begin{figure*}[htbp]
\centering
\includegraphics[width=0.28\linewidth,trim={0 0 18.1cm 0},clip]{figures/robot/img_1001.png}
\includegraphics[width=0.28\linewidth,trim={0 0 18.1cm 0},clip]{figures/robot/img_1002.png}
\includegraphics[width=0.28\linewidth,trim={0 0 18.1cm 0},clip]{figures/robot/img_1003.png}
\tablestyle{5pt}{1.2}
\begin{tabular}{c|c c c}
\textbf{Bottleneck} & \textbf{None} & \textbf{{\textsc{RepDIB}} (No VIB)} & \textbf{{\textsc{RepDIB}}} \\
\shline
Temporal Noise Relative Error & 1.0 & 0.7043 & \highlight{0.6650} \\
Visual Noise Relative Error & 1.0 & 0.9725 & \highlight{0.9713} \\
State Estimation Relative Error & 1.0 & 1.001 & \highlight{0.9988} \\
\end{tabular}
\caption{\textbf{Representations learned from videos of a real robotic arm} (with various distractors such as a TV and color-changing lights). We evaluate the representation quality with various types of bottlenecks. ${\textsc{RepDIB}}$ is best able to remove noise from the representation without removing information about the true state of the robot.}
\label{fig:robot}
\end{figure*}
\textbf{Experiment Details}. We then evaluate ${\textsc{RepDIB}}$ on a challenging real-robot dataset, containing high-resolution video of a robot arm in the presence of rich temporal background noise \cite{lamb2022guaranteed}. To learn a latent state representation of the images, we use a multi-step inverse model \cite{lamb2022guaranteed,Efroni2021ppe}, and integrate the information and discretization bottlenecks of ${\textsc{RepDIB}}$ on the learnt representation. In this task, the robot arm moves over a grid layout containing $9$ different positions, which we denote the \textit{true states}. We collect a dataset containing only pixel-based observations, where the images consist of the robot arm along with the background distractors. Inspired by the exogenous noise setup of \cite{Efroni2021ppe}, we run the robot task while a TV plays a video in the background, with other flashing lights nearby. The offline dataset consists of $6$ hours of robot data, with $14000$ samples from the arm taking high-level actions of move left, right, up and down. An image is collected after each action, and the background distractors change significantly due to the video and lighting in the background. The goal of the experiment is to accurately predict the ground-truth state position by learning latent representations with ${\textsc{RepDIB}}$.
\textbf{Experiment Results}. We evaluate the ability of ${\textsc{RepDIB}}$ to accurately reconstruct the image, learning the latent state representation while ignoring the background distractors. This is denoted \textit{Visual Noise} in Figure~\ref{fig:robot}, where we compare ${\textsc{RepDIB}}$ with and without the VIB, alongside a baseline agent that learns a representation with no bottleneck. For learning latent representations, we use a multi-step inverse dynamics model \cite{Efroni2021ppe}. In addition, we evaluate the ability of ${\textsc{RepDIB}}$ to accurately predict the ground-truth states solely from the observations (\textit{State Estimation}), as a classification task. This is challenging since the learnt representation needs to predict ground-truth states while ignoring the irrelevant background information. Furthermore, with the learnt model we predict the time-step of each observation as an additional metric. The time-step is an indicator of the background noise appearing in each sample; with \textit{Temporal Noise}, we evaluate the ability of ${\textsc{RepDIB}}$ to predict the time-step while ignoring irrelevant information in the observations. The results in Figure~\ref{fig:robot} show that the use of the VIB improves the ability of ${\textsc{RepDIB}}$ to remove noise from the representation, while almost perfectly predicting the ground-truth state of the robot.
\vspace{-2mm}
\subsection{Multi-Modal Representation Learning with Information Bottleneck}
\vspace{-1mm}
\textbf{Experiment Details}. We evaluate the impact of ${\textsc{RepDIB}}$ on learning multi-modal representations for a human activity recognition task. We extend the baseline multi-modal models in two ways to incorporate the VQ bottleneck. \textbf{{\textsc{RepDIB}}+MM:} we extract multi-modal representations using existing models (e.g. Keyless \cite{keyless} and HAMLET \cite{islam2020hamlet}) and then apply the VQ bottleneck on the fused multi-modal representations. \textbf{{\textsc{RepDIB}}+MM({\textsc{RepDIB}}+Uni):} we apply the VQ bottleneck in two steps. First, we extract unimodal representations and apply the VQ bottleneck to produce discretized unimodal representations. These discretized representations are then fused and passed through a VQ bottleneck to produce task representations for activity recognition.
In the baselines, we use five modalities: two viewpoints of RGB videos and three wearable sensors (acceleration, gyroscope, and orientation). We evaluate all baselines on the MMAct dataset in a cross-subject evaluation setting and report the F1-Score of the activity recognition task \cite{kong2019mmact}.
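The two-step scheme can be sketched as follows, with concatenation standing in for the learnt 1D-CNN fusion (a minimal numpy sketch; codebooks and shapes are illustrative):

```python
import numpy as np

def quantize(z, codebook):
    """Single-factor VQ: snap each row of z to its nearest code."""
    d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return codebook[d.argmin(axis=1)]

def repdib_mm_uni(unimodal_feats, uni_codebooks, fused_codebook):
    """RepDIB+MM(RepDIB+Uni): quantise each unimodal representation,
    fuse the discretised codes, then quantise the fused representation."""
    discrete = [quantize(z, cb) for z, cb in zip(unimodal_feats, uni_codebooks)]
    fused = np.concatenate(discrete, axis=1)
    return quantize(fused, fused_codebook)
```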
\begin{table}[!t]
\caption{\textbf{Cross-subject performance} comparison (F1-Score) of multi-modal learning models on the MMAct dataset \vspace{-6mm}}
\label{tab:mmact_subject}
\tablestyle{4pt}{1.2}
\begin{tabular}{cc}
\shline
Method & F1-Score (\%) \\ \shline
SMD \cite{hinton2015distilling} & 63.89 \\
Multi-Teachers \cite{kong2019mmact} & 62.67 \\
MMAD \cite{kong2019mmact} & 66.45\\
HAMLET \cite{islam2020hamlet} & 69.35 \\
Keyless \cite{keyless} & 71.83 \\
{\textsc{RepDIB}}+MM(HAMLET) & 57.47 \\
{\textsc{RepDIB}}+MM(Keyless) & 63.22 \\
{\textsc{RepDIB}}+MM({\textsc{RepDIB}}+Uni) & 69.39 \\
\shline
\end{tabular}
\end{table}
\textbf{Experiment Results}. The results in Table~\ref{tab:mmact_subject} suggest that applying the VQ bottleneck only on the fused multi-modal representations degrades the performance of multi-modal models for the activity recognition task. For example, applying the VQ bottleneck on multi-modal representations from the HAMLET model $({\textsc{RepDIB}}+MM(HAMLET))$ significantly degrades the F1-Score from $69.35\%$ to $57.47\%$. In these models, non-discretized unimodal representations are fused to produce a compressed, non-discretized multi-modal representation. The results suggest that applying the VQ bottleneck on a non-discretized multi-modal representation cannot ensure retaining salient representations for task learning.
On the other hand, applying the VQ bottleneck on both the unimodal and multi-modal representations improves performance compared to models that do not use ${\textsc{RepDIB}}$ or use it only on the multi-modal representations. For example, the ${\textsc{RepDIB}}+MM({\textsc{RepDIB}}+Uni)$ model uses the same HAMLET model, applies ${\textsc{RepDIB}}$ on both the unimodal and multi-modal representations, and slightly improves on HAMLET's performance. ${\textsc{RepDIB}}+MM({\textsc{RepDIB}}+Uni)$ fuses the discretized unimodal representations using a modality-weighting approach, modeled as a 1D-CNN. As several works on multi-modal representation learning have shown that the way unimodal representations are fused can impact downstream task performance \cite{mumu,maven,liang2022foundations}, there is room for improvement by more effectively fusing the discretized unimodal representations. Moreover, as a couple of hyper-parameters in the VQ bottleneck impact model performance, such as the number of groups and the number of embeddings, tuning these hyper-parameters can further improve performance. Thus, our experimental results point to a crucial future avenue of research in utilizing the ${\textsc{RepDIB}}$ information bottleneck for extracting salient multi-modal representations.
\section{Introduction}
\vspace{-2mm}
In the most general reinforcement learning (RL) setting, an agent is tasked with discovering a policy that achieves high long-term reward \cite{sutton2018reinforcement,mnih2013playing}. One of the key challenges of the RL setting is that credit assignment, exploration, and generalization \cite{sutton2018reinforcement} must be addressed even when the agent has seen very little data and thus has low quality representations \cite{kakade2003sample,foster2021sample}. When the representations are low quality, determining a desirable state to reach and finding a policy to reach that state are both difficult \cite{huang2021sample}. Intuitively, learning a compressed representation should help to address these challenges. If extraneous information can be removed, it should be easier to generalize to new samples from the environment.
\begin{figure*}
\includegraphics[width=0.8\textwidth]{figures/mainfig_aistats.pdf}
\caption{\textbf{Illustration} of the generic approach of ${\textsc{RepDIB}}$, where we learn representations with variational and discrete factorial bottlenecks. We show that pre-training representations with a Discrete Information Bottleneck (${\textsc{RepDIB}}$) leads to learning robust representations, especially when observations consist of irrelevant and exogenous information.}
\label{fig:archit}
\end{figure*}
Approaches from the RL theory literature have shown benefits from compressed representations in the discrete latent state setting \cite{misra2020kinematic,Efroni2021ppe,du2019provable,xiong2021randomized}. The HOMER algorithm \cite{misra2020kinematic} explores by trying to reach the frontier of discrete latent state-action pairs with the lowest counts. While these algorithms give strong theoretical guarantees \cite{efroni2022colt}, planning and exploring with them does not scale beyond a small number of discrete states.
We explore the intersection between theoretically-grounded representation learning in small tabular MDPs and representations for the deep reinforcement learning setting. We seek to retain the expressiveness of factorial representations while keeping the representation compressed \cite{liu2021dvnc, liu2022adaptive}. In our proposed method (Figure \ref{fig:archit}), Representations for RL with Discrete Information Bottleneck (${\textsc{RepDIB}}$), we make the representations discrete and factorial, while also encouraging them to be parsimonious through a Gaussian variational information bottleneck \cite{alemi2016deep, InfoBotGoyal, goyal2019reinforcement, goyal2020variational}. These representations are expressive enough to model complicated environments, yet avoid the unbounded complexity of unstructured continuous representations.
This work studies the effectiveness of learning compressed representations for reinforcement learning. We find that by using an information bottleneck that induces a factorial structure in the embedding space, ${\textsc{RepDIB}}$ can learn more robust representations. This improvement is especially pronounced in settings where the observation contains exogenous noise \cite{Efroni2021ppe, efroni2022colt}, i.e., any information unrelated to the agent's actions. We propose an easy-to-use approach that is effective for improving downstream performance in settings with irrelevant background information. Our work offers the following contributions: (a) learning representations that more closely match the salient attributes of the environment, with improved robustness from factorial representations that can ignore irrelevant information in a practical robot arm task; (b) improved sample efficiency due to structured representations, for better generalization in continuous control; (c) bottlenecked representations that can improve robustness in offline RL in the presence of exogenous distractors. Through a range of experiments, we show that ${\textsc{RepDIB}}$ learns compressed representations, which help in exploration and reward-free pre-training of representations to improve efficiency and robustness on downstream tasks.
\subsubsection*{\bibname}}
\input{math_commands.tex}
\usepackage{hyperref}
\usepackage{url}
\usepackage{graphics}
\usepackage{graphicx}
\usepackage{multirow}
\usepackage{subfigure}
\usepackage{xcolor}
\usepackage{booktabs}
\usepackage{wrapfig}
\usepackage{sidecap}
\usepackage{etoc}
\newcommand{{\mathcal A}}{{\mathcal A}}
\newcommand{{\mathcal B}}{{\mathcal B}}
\newcommand{{\mathcal C}}{{\mathcal C}}
\newcommand{{\mathcal D}}{{\mathcal D}}
\newcommand{{\mathcal F}}{{\mathcal F}}
\newcommand{{\mathcal G}}{{\mathcal G}}
\newcommand{{\mathcal H}}{{\mathcal H}}
\newcommand{{\mathcal I}}{{\mathcal I}}
\newcommand{{\mathcal J}}{{\mathcal J}}
\newcommand{{\mathcal K}}{{\mathcal K}}
\newcommand{{\mathcal L}}{{\mathcal L}}
\newcommand{{\mathcal M}}{{\mathcal M}}
\newcommand{{\mathcal N}}{{\mathcal N}}
\newcommand{{\mathcal O}}{{\mathcal O}}
\newcommand{{\mathcal P}}{{\mathcal P}}
\newcommand{{\mathcal Q}}{{\mathcal Q}}
\newcommand{{\mathcal R}}{{\mathcal R}}
\newcommand{{\mathcal S}}{{\mathcal S}}
\newcommand{{\mathcal T}}{{\mathcal T}}
\newcommand{{\mathcal V}}{{\mathcal V}}
\newcommand{{\mathcal W}}{{\mathcal W}}
\newcommand{{\mathcal X}}{{\mathcal X}}
\newcommand{{\mathcal Z}}{{\mathcal Z}}
\newcommand{{\bm{Z}}}{{\bm{Z}}}
\newcommand{{\bm{M}}}{{\bm{M}}}
\newcommand{{\bm{N}}}{{\bm{N}}}
\newcommand{{\bm{T}}}{{\bm{T}}}
\newcommand{{\bm{O}}}{{\bm{O}}}
\newcommand{{\bm{X}}}{{\bm{X}}}
\newcommand{{\bm{W}}}{{\bm{W}}}
\newcommand{{\bm{Q}}}{{\bm{Q}}}
\newcommand{{\bm{\kappa}}}{{\bm{\kappa}}}
\newcommand{{\bm{V}}}{{\bm{V}}}
\newcommand{{\bm{S}}}{{\bm{S}}}
\newcommand{\bm{R}}{\bm{R}}
\newcommand{\bm{P}}{\bm{P}}
\newcommand{{\bm{A}}}{{\bm{A}}}
\newcommand{{\bm{H}}}{{\bm{H}}}
\newcommand{{\bm{K}}}{{\bm{K}}}
\newcommand{{\bm{x}_\mathrm{aug}}}{{\bm{x}_\mathrm{aug}}}
\newcommand{{\mathcal T}}{{\mathcal T}}
\newcommand{{\mathcal G}}{{\mathcal G}}
\newcommand{{\mathbb P}}{{\mathbb P}}
\usepackage{pifont
\newcommand{\cmark}{\ding{51}}%
\newcommand{\xmark}{\ding{55}}%
\newcommand{{\mathbb R}}{{\mathbb R}}
\newcommand{{\mathbb Z}}{{\mathbb Z}}
\newcommand{{\mathbb S}}{{\mathbb S}}
\newcommand{{\mathfrak O}}{{\mathfrak O}}
\newcommand{{\mathfrak o}}{{\mathfrak o}}
\newcommand{\highlight}[1]{\colorbox{blue!10}{#1}}
\definecolor{mygray}{gray}{0.4}
\newcommand{\g}[2]{#1\textsubscript{\textcolor{mygray}{$\pm$#2}}}
\newcommand{{\textsc{RepDIB}}}{{\textsc{RepDIB}}}
\newcommand{\riashat}[1]{\textcolor{red}{\{Riashat: #1\}}}
\newcommand{\anirudh}[1]{\textcolor{blue}{\{Anirudh: #1\}}}
\newcommand{\alex}[1]{\textcolor{green}{\{Alex: #1\}}}
\newcommand{\hongyu}[1]{\textcolor{purple}{\{hongyu: #1\}}}
\newcommand{\tablestyle}[2]{\setlength{\tabcolsep}{#1}\renewcommand{\arraystretch}{#2}\centering\footnotesize}
\newlength\savewidth\newcommand\shline{\noalign{\global\savewidth\arrayrulewidth
\global\arrayrulewidth 1pt}\hline\noalign{\global\arrayrulewidth\savewidth}}
\usepackage{comment}
\usepackage{todonotes}
\usepackage{floatrow}
\newfloatcommand{capbtabbox}{table}[][\FBwidth]
\usepackage{blindtext}
\newcommand{{\hat f}}{{\hat f}}
\newcommand{{\hat \Fcal}}{{\hat \Fcal}}
\newcommand{{\hat \Theta}}{{\hat \Theta}}
\newcommand{{\hat \theta}}{{\hat \theta}}
\newcommand{{\hat x}}{{\hat x}}
\newcommand{{\hat y}}{{\hat y}}
\newcommand{{\hat \delta}}{{\hat \delta}}
\newcommand{{\check x}}{{\check x}}
\newcommand{{\hat \Lcal}}{{\hat \Lcal}}
\newcommand{{\tilde f}}{{\tilde f}}
\newcommand{{\tilde \phi}}{{\tilde \phi}}
\newcommand{{\bar \Ccal}}{{\bar \Ccal}}
\newcommand{{\tilde \Ccal}}{{\tilde \Ccal}}
\newcommand{{\tilde \psi}}{{\tilde \psi}}
\newcommand{{\tilde d}}{{\tilde d}}
\newcommand{{\tilde \Lcal}}{{\tilde \Lcal}}
\newcommand{a_{\text{max}}}{a_{\text{max}}}
\newcommand{{\bar p}}{{\bar p}}
\newcommand{{\tilde h}}{{\tilde h}}
\newcommand{{\hat c}}{{\hat c}}
\newcommand{{\bar \delta}}{{\bar \delta}}
\newcommand{{\hat \EE}}{{\hat \EE}}
\newcommand{{\bar k}}{{\bar k}}
\newcommand{{\bar i}}{{\bar i}}
\newcommand{{\bar j}}{{\bar j}}
\newcommand{{\bar y}}{{\bar y}}
\newcommand{{\bar x}}{{\bar x}}
\newcommand{{\tilde x}}{{\tilde x}}
\newcommand{\kk}[2][]{\todo[inline,linecolor=orange,backgroundcolor=orange!25,bordercolor=orange,#1]{KK: #2}}
\definecolor{mygray}{gray}{0.4}
\usepackage{makecell}
\usepackage{algorithm}
\usepackage{algorithmic}
\usepackage{mathabx}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{listings}
\usepackage{xcolor}
\definecolor{codegreen}{rgb}{0,0.6,0}
\definecolor{codegray}{rgb}{0.5,0.5,0.5}
\definecolor{codepurple}{rgb}{0.58,0,0.82}
\definecolor{backcolour}{rgb}{0.95,0.95,0.92}
\lstdefinestyle{mystyle}{
backgroundcolor=\color{backcolour},
commentstyle=\color{codegreen},
keywordstyle=\color{magenta},
numberstyle=\tiny\color{codegray},
stringstyle=\color{codepurple},
basicstyle=\ttfamily\footnotesize,
breakatwhitespace=false,
breaklines=true,
captionpos=b,
keepspaces=true,
numbers=left,
numbersep=5pt,
showspaces=false,
showstringspaces=false,
showtabs=false,
tabsize=2
}
\lstset{style=mystyle}
\begin{document}
\twocolumn[
\vspace{-4mm}
\aistatstitle{Representation Learning in Deep RL via Discrete Information Bottleneck}
\aistatsauthor{Riashat Islam* \And Hongyu Zang* \And Manan Tomar \And Aniket Didolkar
}
\aistatsaddress{Mila, McGill University\\
Microsoft Research Montreal \And Beijing Institute of Technology \And AMII, University of Alberta\\
Microsoft Research Montreal \And
Mila, University of Montreal}
\aistatsauthor{Md Mofijul Islam \And Samin Yeasar Arnob \And Tariq Iqbal \And Xin Li
}
\aistatsaddress{University of Virginia \And Mila, McGill University \And University of Virginia \And
Beijing Institute of Technology}
\aistatsauthor{Anirudh Goyal \And Nicolas Heess \And Alex Lamb
}
\aistatsaddress{Google DeepMind \And Google DeepMind \And
Microsoft Research NYC}
]
\input{abstract}
\input{introduction}
\input{related}
\input{method}
\input{experiments}
\input{discussion}
\subsubsection*{Acknowledgements}
The authors would like to thank Remi Tachet Des Combes, Romain Laroche, Harm Van Seijen, and Doina Precup for valuable feedback on the draft. Hongyu Zang and Xin Li were partially supported by NSFC under Grant 62276024.
\section{Discrete Factorial Information Bottlenecks in Representation Learning}
\label{sec:variational}
\vspace{-2mm}
The goal of this work is to study the effectiveness of variational and discrete information bottlenecks in representation learning. While several prior works have studied representation learning for RL, we show that, especially when observations contain irrelevant information, adding simple bottlenecks can lead to effective, robust representations that improve performance on downstream tasks. Through a range of experiments, as in section \ref{sec:experiments}, we show that {\textsc{RepDIB}} learns a structured representation space, via the use of discrete information bottlenecks \cite{liu2022adaptive}, that can be quite effective for downstream learning. In this section, we briefly describe our approach for learning robust representations with information bottlenecks.
The ${\textsc{RepDIB}}$ technique begins with a hidden representation ${\mathbf{z}} \in \mathbb{R}^m$ for a rich observation $x$. This ${\mathbf{z}}$ could be the output of a convolutional neural network, a recurrent neural network, a transformer, or any other expressive neural model. We induce a compositional structure in the learnt representation space by using a vector quantization discretization bottleneck \cite{van2017neural}. This is achieved by using a discretization module with $G$ factors, each with $L$ codes, so the total number of discrete states that we can express is $L^G$. We learn embeddings for the $G$ factors and concatenate them into a single embedding $\hat{{\mathbf{z}}} = \phi({\mathbf{z}})$ with $\phi : \mathbb{R}^{m} \xrightarrow{} \mathbb{R}^m$. Thus the discretization bottleneck $\phi$ preserves the size of the hidden representation.
While the compositional structure can be achieved through the discretization bottleneck alone, we additionally add a Gaussian information bottleneck \cite{alemi2016deep}, applied directly before the discretization function $\phi$, to encourage more parsimonious discrete representations. Adding an information bottleneck that captures sufficient representations means that we can achieve better compositionality by using \textit{fewer} discrete codes. Figure \ref{fig:kmeans} shows the learnt compositional structure in the latent embedding space extracted by ${\textsc{RepDIB}}$, while no apparent structure exists in the latent space for a baseline without any bottleneck.
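As a concrete illustration, the Gaussian bottleneck can be realized with the standard reparameterization trick. The sketch below is a minimal NumPy version; the linear heads \texttt{w\_mu} and \texttt{w\_logvar} and all shapes are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def vib_sample(z, w_mu, w_logvar):
    """Gaussian variational bottleneck via the reparameterization trick.

    Hypothetical linear heads map the encoder output z to the mean and
    log-variance of a diagonal Gaussian; the KL term to the unit-Gaussian
    prior penalizes the information carried by the sample.
    """
    mu = z @ w_mu
    logvar = z @ w_logvar
    eps = rng.standard_normal(mu.shape)
    sample = mu + np.exp(0.5 * logvar) * eps  # reparameterized draw
    # KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over latent dimensions
    kl = 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar, axis=-1)
    return sample, kl
```

During training the KL term would be added to the representation loss with a small coefficient, so that the subsequent discretization operates on an already compressed input.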
\begin{figure}[t]
\centering
\includegraphics[width=0.49\textwidth]{figures/kmeans/baseline_representation.png}
\includegraphics[width=0.49\textwidth]{figures/kmeans/repdib_representation.png}
\caption{\textbf{T-SNE analysis} comparing representation embeddings. We take the ProtoRL \cite{yarats2021protorl} setup for learning representations in continuous control RL, where {\textsc{RepDIB}} based information bottlenecks are applied on top of the learnt representations from ProtoRL. \textbf{Left}. Latent representations from ProtoRL with discrete prototypes. \textbf{Right}. Factorized latent representations with ProtoRL $+$ {\textsc{RepDIB}}, which learns better structure in the representation space when we apply a variational (Gaussian) information bottleneck followed by discrete information bottlenecks. \vspace{-5mm}}
\label{fig:kmeans}
\end{figure}
Following the learnt embeddings, we then apply the VQ discretization bottleneck with different grouping factors. To apply the discretization bottleneck, we quantize the output of the projector layer into a group-based discrete latent embedding. Concretely, instead of assigning each continuous embedding ${\mathbf{z}}_e$ to a single discrete vector, we first divide each continuous state representation into $G$ different groups as ${\mathbf{z}}_e=\text{concat}({\mathbf{c}}_1,{\mathbf{c}}_2,\cdots,{\mathbf{c}}_G)$, then we assign each segment ${\mathbf{c}}_i\in\mathbb{R}^{\frac{m}{G}}$ separately to a discrete vector ${\mathbf{e}}\in \mathbb{R}^{L\times\frac{m}{G}}$ using a nearest neighbour look-up: ${\mathbf{e}}_{{\mathbf{o}}_i}=\textsc{discretize}({\mathbf{c}}_i), \quad \text{ where }\quad {\mathbf{o}}_i=\operatorname{argmin}_{j \in \{1,\dots,L\}} ||{\mathbf{c}}_{i}-{\mathbf{e}}_j||$, where $L$ is the size of the discrete latent space (i.e., an $L$-way categorical variable). After that, we concatenate all segments to obtain the discrete embedding ${\mathbf{z}}_{q}=\textsc{concatenate}(\textsc{discretize}({\mathbf{c}}_1), \cdots,\textsc{discretize}({\mathbf{c}}_G))$.
This process results in a compositional latent representation under an information bottleneck.
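The grouping and nearest-neighbour lookup described above can be sketched as follows (an illustrative NumPy fragment, not the paper's code; codebook shapes follow the notation in the text):

```python
import numpy as np

def discretize_factored(z_e, codebooks):
    """Factored nearest-neighbour quantization (a sketch of the VQ bottleneck).

    z_e: continuous embedding of dimension m, split into G equal segments.
    codebooks: list of G arrays, each of shape (L, m // G), one per factor.
    Returns the concatenated quantized embedding and the G code indices.
    """
    G = len(codebooks)
    segments = np.split(z_e, G)  # z_e = concat(c_1, ..., c_G)
    quantized, indices = [], []
    for c_i, book in zip(segments, codebooks):
        o_i = np.argmin(np.linalg.norm(book - c_i, axis=1))  # nearest code
        quantized.append(book[o_i])
        indices.append(int(o_i))
    return np.concatenate(quantized), indices
```

Each factor thus selects one of $L$ codes independently, giving $L^G$ expressible discrete states while each lookup only searches $L$ entries.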
\begin{figure*}
\centering
\subfigure{
\includegraphics[
width=0.33\textwidth]{figures/mazes/spiralworld.pdf}
}
\hspace{-0.8cm}
\subfigure{
\includegraphics[
width=0.33\textwidth]{figures/mazes/loopworld.pdf}
}
\hspace{-0.8cm}
\subfigure{
\includegraphics[
width=0.33\textwidth]{figures/mazes/gridworld.pdf}
}
\caption{\textbf{Performance} comparison on 3 different maze navigation tasks, with ${\textsc{RepDIB}}$, using different factors $8, 16, 32$ in the learnt representation, integrated on a baseline DQN agent.}
\label{fig:maze_results}
\end{figure*}
\textbf{${\textsc{RepDIB}}$ Implementation Details}. We provide technical details of how our approach can be implemented on any existing self-supervised reinforcement learning objective (Figure~\ref{fig:archit}). To enable factorial structure in the representation space, we can integrate a vector quantization discretization bottleneck on top of any encoder that learns a latent state representation. Given an encoder that maps observations $o$ to latent representation $\phi(\cdot)$, we first use a variational information bottleneck (VIB) based on reparameterization, with a uniform Gaussian prior. We then quantize the continuous representation from an information bottleneck into discrete latent variables, generalizing vector quantization in VQ-VAE.
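Putting the two bottlenecks together, the integration described above can be sketched end-to-end. This is a hypothetical minimal NumPy pipeline; parameter names and shapes are our own assumptions, and the straight-through gradient estimation needed for actual training is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def repdib_bottleneck(z, G, codebooks, w_mu, w_logvar):
    """Sketch of the RepDIB pipeline: Gaussian VIB followed by factored VQ.

    z is the encoder output; the variational bottleneck compresses it, and
    the sample is split into G segments, each snapped to its nearest code.
    (No straight-through estimator here -- inference-time sketch only.)
    """
    mu, logvar = z @ w_mu, z @ w_logvar
    s = mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)  # VIB sample
    kl = 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)
    parts = np.split(s, G)
    z_q = np.concatenate(
        [book[np.argmin(np.linalg.norm(book - p, axis=1))]
         for p, book in zip(parts, codebooks)])
    return z_q, kl
```

The output $z_q$ then feeds the downstream policy or value network, with the KL term and the VQ commitment losses added to whichever representation objective is being used.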
\part*{Author Rebuttal}
We would like to thank the reviewers for their detailed feedback on our work. We first provide a general response to all reviewers, since we believe this will help all reviewers further clarify their understanding of our work. The general response addresses common issues that may have arisen across multiple reviewers.
We then also provide individual responses to each reviewer based on the feedback, and direct to the general response whenever needed. We hope our detailed responses, along with additional experimental results and clarifications, will help reviewers re-evaluate the score for our paper.
\part*{General Response to All Reviewers}
\section{Comparisons with Other Information Bottleneck Approaches}
\label{rebuttal:comparisons_other_bottlenecks}
\begin{figure}[!htbp]
\centering
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/rebuttal/comparisons/image/icm_walker_walk_expert.pdf}
}
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/rebuttal/comparisons/image/ac_state_walker_walk_expert.pdf}
}
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/rebuttal/comparisons/image/driml_cheetah_run_expert.pdf}
}
\caption{\textbf{Time correlated exogenous images in the background.} Comparison of {\textsc{RepDIB}} with other approaches based on information bottleneck approximations in the offline RL setup. Following our previous results in the main paper, we now compare different bottleneck based approaches on top of existing representation learning objectives.}
\label{fig:rebuttal_comparison_bottlenecks_images}
\end{figure}
\begin{figure}[!htbp]
\centering
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/rebuttal/comparisons/video/ac_state_cheetah_run_expert.pdf}
}
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/rebuttal/comparisons/video/icm_cheetah_run_expert.pdf}
}
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/rebuttal/comparisons/video/icm_walker_walk_expert.pdf}
}
\caption{\textbf{Changing video distractors as exogenous noise in the background}. We now consider a slightly more difficult setup with changing video distractors in the background. We again compare {\textsc{RepDIB}} with other information bottleneck based approaches, when integrated on top of existing representation learning objectives.}
\label{fig:rebuttal_comparison_bottlenecks_video}
\end{figure}
Since several reviewers asked for comparisons with other information bottleneck based approaches, we now compare {\textsc{RepDIB}} with several other bottleneck baselines in the pixel based offline RL setup. We follow the same experiment setup as described in section \ref{sec:offline_exo} and integrate information bottleneck approaches on top of three existing representation learning objectives, namely AC-State \cite{lamb2022guaranteed}, one-step inverse dynamics \cite{pathak2017curiosity} and DRIML \cite{MazoureCDBH20}. We compare with \textbf{three different baselines}, along with comparisons of variations of the {\textsc{RepDIB}} bottleneck itself.
Note that the other baselines we compare with are all based on approximations of a mutual information based objective. In contrast, {\textsc{RepDIB}} does not require any MI based approximations. We mainly compare with EMI (with MINE objectives) \cite{kimICML19}, DB (Dynamic Bottleneck) \cite{BaiDB} and SVIB \cite{SVIBa,SVIBb}, as reviewers have pointed out, and show in figures \ref{fig:rebuttal_comparison_bottlenecks_images} and \ref{fig:rebuttal_comparison_bottlenecks_video} how {\textsc{RepDIB}} compares with these baselines. Specifically, EMI proposes to maximize the mutual information of state embedding representations and action embedding representations by maximizing estimated lower bounds on both mutual information terms. DB follows the Information Bottleneck principle to learn a dynamics-relevant representation by maximizing the mutual information $I(Z_t; S_{t+1})$ while minimizing the mutual information $I([S_t, A_t];Z_t)$, where $Z_t$ is a compressed latent representation of $(S_t,A_t)$, and $S_t, A_t$ are the current state and action, respectively. SVIB uses the mutual information between the observation and its corresponding representation as an additional penalty term in the standard RL loss function, optimizing all networks by Stein Variational Gradient Descent (SVGD). Notably, for a fair comparison with the other bottlenecks, we update all networks using the Adam optimizer instead of SVGD.
We emphasize that compared to the baselines, {\textsc{RepDIB}} is easy to integrate since it only requires adding a VQ-VAE based factorization with a variational information bottleneck.
\section{Significance of VIB and DIB for {\textsc{RepDIB}}}
\label{rebuttal:vib_dib_comparison}
We thank the reviewers for asking for ablations and comparisons of the {\textsc{RepDIB}} bottleneck. In figures \ref{fig:rebuttal_ablation_image} and \ref{fig:rebuttal_ablation_video} we include ablation studies where we compare {\textsc{RepDIB}} with \textit{only} using the discrete information bottleneck (DIB) and with \textit{only} using the variational information bottleneck (VIB). We do this on top of several existing representation objectives, as described in section \ref{rebuttal:comparisons_other_bottlenecks}. Experimental results show that the performance improvement of {\textsc{RepDIB}} is primarily achieved when we use the VIB bottleneck prior to the DIB bottleneck, as we explained in the main draft. Without the combination of the two, simply using one of the bottlenecks does not lead to the expected performance improvements.
\begin{figure}[!htbp]
\centering
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/rebuttal/repdib_ablation/image/ac_state_walker_walk_expert.pdf}
}
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/rebuttal/repdib_ablation/image/driml_cheetah_run_expert.pdf}
}
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/rebuttal/repdib_ablation/image/icm_walker_walk_expert.pdf}
}
\caption{Ablation studies on the {\textsc{RepDIB}} bottleneck on time correlated exogenous distractors in the observations of offline datasets, as per the setup described in section \ref{sec:offline_exo}}
\label{fig:rebuttal_ablation_image}
\end{figure}
\begin{figure}[!htbp]
\centering
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/rebuttal/repdib_ablation/video/ac_state_cheetah_run_expert.pdf}
}
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/rebuttal/repdib_ablation/video/ac_state_walker_walk_expert.pdf}
}
\subfigure{
\includegraphics[
trim=1cm 0cm 1cm 0cm, clip=true,
width=0.3\textwidth]{figures/rebuttal/repdib_ablation/video/icm_walker_walk_expert.pdf}
}
\caption{Ablation studies on the {\textsc{RepDIB}} bottleneck on changing background video based exogenous distractors in the observations of offline datasets, as per the setup described in section \ref{sec:offline_exo}}
\label{fig:rebuttal_ablation_video}
\end{figure}
\section{Comparisons with Prior Related Works}
\label{rebuttal:related_works}
An information bottleneck aiming at minimal sufficient representations can be implemented in various ways, including a variational approach (VIB) and architectural choices such as reducing the dimension of deeper layers or discretizing layers. Bai et al. directly apply the information bottleneck to the dynamics of the system, whereas RepDIB applies it to different downstream targets, such as DQN targets or inverse model targets. RepDIB also combines both kinds of bottlenecks, i.e., architectural ones (discrete bottlenecks in particular) and variational ones. Previous works in reinforcement learning that enforce bottlenecks have used either type independently. Dreamer-v2 and similar variants have included discretization for pixel-level model-based learning. In this paper, we take a zoomed-out perspective on the efficacy of bottlenecks in learning representations for reinforcement learning.
\section{Explanation, Clarification and Significance of our Approach}
\label{rebuttal:significance_work}
We would like to provide further clarification to all reviewers about the significance of our work. In this work, we do not propose any new representation learning objective; rather, we propose that discrete information bottlenecks can be significant when it comes to learning representations. Moreover, an approach based on {\textsc{RepDIB}} is demonstrated to be even more impactful when the learnt representation needs to discard exogenous or irrelevant information from the observations. We demonstrate this across a range of experiments, not only in RL, but also in other tasks such as human activity recognition. Our experiments, however, are primarily based on RL benchmarks, where we demonstrate that {\textsc{RepDIB}} can easily be applied on top of any learnt representations. To do this, we take existing baseline approaches proposing representation learning objectives and demonstrate the ease with which {\textsc{RepDIB}} can be integrated on top of the learnt representations.
We thank the reviewers for asking clarification questions about this, along with the discussions of prior works related to {\textsc{RepDIB}}. During the rebuttal phase, we also included clarifications on how easily {\textsc{RepDIB}} can be applied on top of existing experimental benchmarks. We emphasize that although information bottlenecks have been studied extensively in past literature, the use of a discrete information bottleneck is rather new; moreover, applying bottlenecks on top of representation learning objectives, especially to discard exogenous information, has been little studied in the past. We hope this clarification will help reviewers re-evaluate the score of the paper. Our aim is to propose an information bottleneck that not only captures factorial or compositional representations, but also plays a key role in extracting only the relevant latent representation, and, most importantly, can be applied to any deep RL algorithm that relies on an additional representation learning module.
\section{Additional Results for Demonstrating Factorial Representation}
\label{rebuttal:factorial_representation}
\begin{figure}
\centering
\subfigure[Brightness and Details]{\includegraphics[width=0.9\textwidth]{figures/rebuttal/reconstruction_165.png}}
\subfigure[Different colors]{\includegraphics[width=0.9\textwidth]{figures/rebuttal/reconstruction_250.png}}
\caption{PACS-cartoon-elephant dataset example to demonstrate factorized representations. Top row: original image; second row: reconstructed image without substitution; third row: reconstructed image with one group of discrete codes substituted by zero vectors; last row: reconstructed image with the other group of discrete codes substituted by zero vectors.}
\label{fig:pacs}
\end{figure}
We demonstrate that with the discrete factorial information bottleneck, the agent is capable of learning factorial representations on real world data. We provide more details as follows.
\paragraph{Experiment details} To investigate whether the agent has the ability to learn semantic factorial representations with {\textsc{RepDIB}}, we use the cartoon domain images from the benchmark dataset PACS~\cite{PACS}, where only the elephant category is utilized for training and evaluation, for the purpose of intuitive illustration. The pixel-based input, of size $224\times 224$, is first passed through an encoder (consisting of CNN layers with residual blocks) to obtain a latent representation of dimension 32; the latent representation is then quantized into two groups of discrete codes, with a codebook size of 512. After that, the two groups of discrete codes are concatenated to obtain the representation, which is finally passed through a decoder network (consisting of CNN layers with residual blocks). We train the network with a reconstruction loss (MSE) combined with the vector quantization loss. To visualize the semantic meaning of the different groups, we randomly sample 25 pictures from the dataset and pass the images through the network to obtain their reconstructions. Ideally, we would like to know whether different groups capture different semantic meanings of one image. For this purpose, we substitute one group of the discrete codes with a zero vector and acquire the reconstructed image by concatenating it with the other group of discrete codes. As a consequence, we have three reconstructed images in total, as shown in Figure~\ref{fig:pacs}.
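The zero-substitution probe used for this visualization can be sketched as follows (illustrative NumPy; in the experiment, the substituted representation is then passed through the trained decoder):

```python
import numpy as np

def substitute_group(z_q, group, G):
    """Zero out one factor's discrete codes before decoding (a sketch).

    Used to probe what each of the G groups encodes: the decoder is run on
    the representation with one group replaced by a zero vector.
    """
    parts = np.split(z_q.copy(), G)   # split into the G factor segments
    parts[group] = np.zeros_like(parts[group])  # ablate the chosen factor
    return np.concatenate(parts)
```

Decoding the original representation plus the two ablated variants yields the three reconstructions per image shown in the figure.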
\paragraph{Experiment Results} Figure~\ref{fig:pacs} shows the reconstructed images from a trained decoder operating on a discretized 2-factor representation. We find that different factors capture different semantic information. As an example, there are 4 elephants in the fifth column of Figure~\ref{fig:pacs}(a), where the elephant at the top and the elephant at the bottom-middle are brighter than the other two. For this image, factor 1 tends to capture only the shape of the elephant without the brightness, while factor 2 captures specific details of each elephant. A similar observation can be made in the 16th column, where factor 1 captures the ``shadow'' in the picture and factor 2 captures the brightness of the elephant's skin. Another example is shown in Figure~\ref{fig:pacs}(b): the two factors learn ``green'' and ``purple'' separately for reconstructing ``black'', and learn ``pink'' and ``orange'' separately for reconstructing ``red''.
\part*{Individual Responses to Reviewers}
\section{Response to Reviewer 1}
\subsection{Do we propose a representation learning method?}
\paragraph{Feedback:}\textit{ This paper proposed a representation learning method for deep reinforcement learning.}
\paragraph{Response:} We would like to provide a clarification; please also see the general response in section \ref{rebuttal:significance_work}. We would like to emphasize that we are only proposing the use of a discrete information bottleneck, which can achieve factorial representations, and are not proposing a new representation learning method. In fact, our proposed RepDIB approach can be integrated on top of any existing representation learning method.
\subsection{Comparison to SOTA Baselines and Other Bottleneck Baselines}
\paragraph{Feedback:} \textit{The author should compare their method with current SOTA baselines, instead of just variants of their method.}
\paragraph{Feedback:} \textit{Comparison to SOTA bottleneck baseline?}
\paragraph{Response:} Please see the generic response to all reviewers in section \ref{rebuttal:comparisons_other_bottlenecks}. We want to highlight that there is no single SOTA baseline in this case. We take existing representation learning approaches, which already perform well on existing benchmarks, and show that the use of an information bottleneck to discard irrelevant information from observations can further significantly improve the performance of existing representation objectives. As for SOTA baselines, this can be any task and any method that we take into account, while showing that RepDIB can further be impactful in learning compressed representations that help the RL algorithm (policy or value learning).
\subsection{Novelty of Using Information Bottleneck}
\paragraph{Feedback:} \textit{The novelty is limited, as using the information bottleneck for representation is not a new idea[1].}
\paragraph{Response:} We would like to emphasize here the significance of our work. We acknowledge, and also mentioned in the paper, that the use of the information bottleneck is not novel. Indeed, the contribution of this work is to show the importance of the information bottleneck in representation learning. In recent literature, many works have proposed self-supervised representations in RL; however, little to no work has meaningfully used information bottlenecks on top of the learnt representations. Our first contribution is to show that, even if the idea of the information bottleneck is not novel, its application in the context of representation learning is important. Our second primary contribution, which itself can be considered novel, is the significance of using a discrete information bottleneck. Prior literature has used continuous bottlenecks in other cases, such as those based on mutual information or the variational information bottleneck. In this work, we show that discrete bottlenecks can be useful, especially when we use a bottleneck that naturally learns a factorized representation due to the use of different embedding or grouping factors, which can be achieved through a VQ bottleneck.
\subsection{More Experiments and Benchmarks}
\paragraph{Feedback:} \textit{Verify methods on SOTA baselines and more benchmarks?}
\paragraph{Response:} We would like to emphasize, as already mentioned by two other reviewers, that we have already performed experiments on several benchmarks and baselines. As two other reviewers have pointed out, we conduct “comprehensive experiments to demonstrate advantage of RepDIB in many scenarios”. Furthermore, as R6 has pointed out, we have “extensive experimental results that covers a wide range of aspects regarding the effectiveness of the representation. For example, this paper tests the effect of structure in the representation via toy experiments, the generalization performance on control benchmarks, robustness against real distractors on real robot arm data and offline Atari benchmarks, scalability to rich observation environments, etc.” We hope that our existing empirical results are already convincing in demonstrating the significance of RepDIB. If there are any other baselines or benchmarks you would prefer us to run experiments on, please let us know; we will try to the best of our abilities, as it can further improve the quality of our paper.
\section{Response to Reviewer 2}
Thank you for your useful and detailed feedback on our work. These clarification questions are certainly helpful, and answering them also plays a vital part in improving the overall presentation and clarity of our work.
\paragraph{Feedback:} \textit{This paper adapts VQ-VAE as a representation learning module for reinforcement learning and has empirically shows the performance improvement over baselines}
\paragraph{Response:} We want to clarify that the key contribution of our work is to show the significance of the discretization bottleneck, based on VQ-VAE, and that it can be integrated on top of any existing representation learning objective. We do not explicitly propose a new objective for learning representations; rather, we show that for RL tasks in general, discrete bottlenecks, through which we can achieve compressed representations, can help significantly across many benchmark tasks. The RepDIB algorithm/method can be used as a plug-in approach on top of any existing benchmark or baseline.
\subsection{Do we need to estimate Mutual Information - Answer : No}
\paragraph{Question:} \textit{Mutual information is known to be unstable hard to estimate. The paper should explicitly discuss the exact methods it uses to estimate the mutual information as well as the discretization part.}
\paragraph{Response:} It is correct that mutual information based bottlenecks are usually hard to estimate, and past works have attempted to do so. In this work, as an advantage of our proposed method, we \textbf{do not need to estimate any MI terms}; we use a VQ bottleneck only, as we have mentioned in the paper. This bottleneck can be integrated into any representation objective, and we therefore focus primarily on the factorial discretization part.
\subsection{Significance of Information Bottleneck}
\paragraph{Feedback:} \textit{The paper is well motivated yet the discussion on the information bottleneck is not so concise. The statement on the contribution is a bit chaotic and lacks sufficient logic/reasoning, even as a pure empirical paper.}
\paragraph{Response:} Thank you for providing feedback and asking clarification question on the contribution and motivation for using information bottleneck. We included a general response to all reviewers in section \ref{rebuttal:significance_work}, explaining the significance and impact of our work.
Additionally, we would like to clarify the logic of using {\textsc{RepDIB}}. The logic is simply that when we pre-train representations for downstream RL, it is advantageous for these representations to be discrete. We can further improve the representation by encouraging the discrete space to be \textit{small}, by adding the VIB regularizer prior to discretization. Moreover, the discrete representation must have multiple factors, so that it is sufficiently expressive to handle complex problems. Our experiments support both of these aspects of the approach, across a variety of RL tasks.
\subsection{Comparisons with Prior Works}
\paragraph{Feedback:} \textit{Comparisons with prior works - Lists few papers for comparison - or at least should be mentioned in related works. }
\paragraph{Response:} Please see generic responses to all reviewers in section \ref{rebuttal:comparisons_other_bottlenecks} and \ref{rebuttal:related_works} where we provide empirical comparisons with other approaches, and also describe in detail some of the differences of {\textsc{RepDIB}} compared to prior works. These sections would also further clarify the above question on whether we need to estimate any mutual information related terms for the information bottleneck quantity.
\subsection{Discarding Irrelevant Information and Robustness of Learnt Representations}
\paragraph{Feedback:} \textit{When discussing concepts like "irrelevant information in representation" or so, you should more rigorously define what exactly you are referring to. This reference is a good example.}
\paragraph{Response:} Thank you for your feedback. We would like to clarify what we mean by discarding irrelevant information in the representation. Firstly, our experiment setup in section \ref{sec:offline_exo} is based on pixel based observations which may contain exogenous information (i.e., time correlated background images or changing video distractors in the background). This is primarily inspired by previous theoretical works on learning representations in the presence of exogenous information \cite{Efroni2021ppe, efroni2022colt, lamb2022guaranteed}, which argue theoretically for the need to learn robust representations. In section \ref{sec:offline_exo}, we draw inspiration from these past works and learn representations from pixel based offline data containing exogenous noise. Our results (and additionally new results in the general responses) show that when dealing with exogenous information, simply learning representations might not be enough. Rather, we require some form of information compression, such that the exogenous noise can be discarded. Our results show that this is where information bottlenecks can be useful: {\textsc{RepDIB}} outperforms several other baselines when integrated on top of learnt representations. In addition, we have shown such results in the practical robot arm setup in figure \ref{fig:robot}, where we found that {\textsc{RepDIB}} based bottlenecks play a vital role in the reconstruction, justifying that compressed representations are required to discard exogenous noise. Our hypothesis of whether {\textsc{RepDIB}} can discard irrelevant information in the representation is therefore primarily based on past theoretical works studying latent state decoding in the presence of exogenous information \cite{efroni2022colt}. We hope this makes our claims from the experimental results section much clearer.
\section{Response to Reviewer 3}
\subsection{Comparing Variational and Discrete Information Bottleneck (VIB vs DIB)}
\paragraph{Feedback:} \textit{However, the implementation is a combination of variational IB (VIB) and DIB, which makes it a bit difficult to tell which design plays the key role. The only ablation results I found are in Figure 9. It seems that RepDIB without VIB is only marginally better except quadruped-walk (RepDIB without VIB seems even under-perform in walker-flip). To demonstrate the effectiveness, it would be ideal to also have ablation experiments without DIB but with VIB. Besides, it was only provided in the fine-tuning subsection. It would be ideal to include ablation studies for more scenarios}
\paragraph{Additional Feedback:} \textit{I am adequately happy with the demonstrations (those with background distractors) that RepDIB reduces extraneous information. However, it remains unclear to me whether DIB is the key part. Therefore, the claims are only partially correct to me until the ablation study concern is addressed.}
\paragraph{Response:} Please see the general response in section \ref{rebuttal:vib_dib_comparison}, where we include additional results for ablation studies comparing VIB only, DIB only, and {\textsc{RepDIB}}. We compare how {\textsc{RepDIB}} performs with and without the variational bottleneck, and have added additional results comparing using only the discrete bottleneck (DIB) with using only the VIB. In our previous results, we showed that adding the VIB to the DIB often improves performance, although not always consistently. This finding shows that, as proposed in RepDIB, the DIB is the key bottleneck which leads to the primary performance improvements.
\subsection{Pre-Training and Fine-Tuning Performance}
\paragraph{Feedback:} \textit{Figure 8: If my understanding is correct, Figure 8 (bottom row) attempts to demonstrate that longer pre-training is not necessarily leading better fine-tune performance. However, as the pre-training curves are not provided, it is difficult to see whether this phenomenon is caused by under-fitting or over-fitting. Besides, it is likely that Proto-RL and RepDIB take different amounts of pre-training steps to converge. Therefore, it is possible that for the same amount of pre-training steps, one is converged but the other is not. It is important to show that these variables are controlled, to demonstrate the advantage of RepDIB.}
\paragraph{Response:} We want to clarify the findings shown in Figure 8. Figure 8 shows that when we pre-train the ProtoRL baseline, with or without a bottleneck, the performance during the fine-tuning phase varies significantly. Both methods are pre-trained for the same number of timesteps, and the fine-tuning task is based on the pre-trained representation from a similar domain. The bar plots show results at different stages of fine-tuning with the same pre-trained representation. We primarily show that when pre-trained with a bottleneck like {\textsc{RepDIB}}, the fine-tuning performance improves monotonically as the number of fine-tuning steps increases. Without the bottleneck, the fine-tuning performance is not necessarily monotonic, which suggests that some form of over-fitting is indeed happening. You are correct that this phenomenon is likely due to over-fitting, although whether it truly is over-fitting is difficult to quantify. Our point, following the original ProtoRL and URLB papers, is that this over-fitting during the fine-tuning phase with pre-trained representations can essentially be avoided by using a representation bottleneck.
\subsection{Factorial Representations}
\paragraph{Feedback:} \textit{Factorial representation: Although factorial representation is emphasized as one key advantage of DIB. Unfortunately, the exposition is slightly lacking. I wonder if is there a way to control all other factors but only compare factorized representations versus non-factorized ones. (Please correct me if I misunderstood or overlooked.)}
\paragraph{Response:} One interesting experiment on this topic is on the real robotic arm (figure \ref{fig:robot}), where we had proxies that allowed us to estimate how much information about the background (unrelated to the agent) was kept in the representation after adding DIB. We found that after DIB, the representation kept all of the information about the robot arm, but substantially less information about the visual distractors in the data. A factorial representation is important for a discrete representation because it makes the representation expressive enough to handle complex data. In our experiments with a single-factor DIB, results were generally worse than when using multiple factors.
Please also see section \ref{rebuttal:factorial_representation} and figure \ref{fig:pacs} where we included additional results demonstrating that {\textsc{RepDIB}} learns factorial representations.
\subsection{Intrinsic Motivation and Exploration with {\textsc{RepDIB}}}
\paragraph{Feedback:} \textit{Figure 5: "We find that REPDIB improves intrinsically-motivated exploration
(left)." I am not sure how to interpret the left figure to arrive this conclusion.}
\paragraph{Response:} Thank you for asking for clarification on this. In the gridworld toy maze experiments, we show that one advantage of a DIB is that, in addition to the compressed embeddings from the VQ-VAE bottleneck, we also obtain the corresponding discrete codes or prototypes from the clusters of the embedding space. This means that for each observation, as a by-product of the continuous latent, {\textsc{RepDIB}} also gives the corresponding discrete latent code. Using these codes, we can maintain count-based estimates over state-action pairs and use them as an exploration bonus. We refer to this experiment to show that {\textsc{RepDIB}} can also serve as a means of intrinsic exploration, since the discrete latent codes can be used in a meaningful way. We hope this makes the claim clear.
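To make the count-based use of the discrete codes concrete, here is a minimal Python sketch (the class name and interface are ours, not the paper's implementation): each observation's discrete latent code indexes a visitation counter, and the intrinsic bonus decays as $1/\sqrt{\mathrm{count}}$.

```python
import math
from collections import defaultdict

class CountBonus:
    """Count-based intrinsic reward over the discrete codes produced by a
    VQ bottleneck (hypothetical interface; the paper's agent differs in detail)."""

    def __init__(self):
        # visitation counts, keyed by the tuple of discrete indices (one per factor)
        self.counts = defaultdict(int)

    def bonus(self, code):
        # increment the count for this discrete code and return 1/sqrt(count)
        self.counts[code] += 1
        return 1.0 / math.sqrt(self.counts[code])
```

The same counter can be keyed by (code, action) pairs to obtain the state-action counts mentioned above.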
\section{Response to Reviewer 4}
\subsection{Clarification on Experimental Section}
\paragraph{Feedback:} \textit{Lack of clarity - Especially experimental section is very confusing and difficult to follow. Also it claims good structure of embedding space. I would like to see how the structure looks like compared to other approaches.}
\paragraph{Response:} Thank you for your feedback and comment. We visualize what the structure of the embedding space looks like in Figure 2, which shows t-SNE plots of the embeddings learnt by the ProtoRL baseline compared with ProtoRL plus {\textsc{RepDIB}}. The t-SNE plot shows distinct clusters induced by the vector quantization. We also include additional t-SNE plots in another domain in Figure 23 in the appendix, where we compare the visualizations under VIB, DIB, and no bottleneck.
\paragraph{Response:} Clarification of the experimental section: we will clarify the contributions of the experiments section in the updated draft. In the experiments section, we first ask what key benefits we expect from the {\textsc{RepDIB}} bottleneck, for example, whether bottlenecks can help with representations learnt from offline datasets, or whether they make it easier to learn parsimonious representations and discard exogenous information. Each subsequent subsection then verifies one of these questions through experimental results: we first describe the experimental setup and details (with further details in the appendix) and then explain our findings. We understand that the way this section is structured might have been difficult to follow, and we will improve its clarity in the updated manuscript.
\subsection{Comparisons with Other Bottleneck Baselines}
\paragraph{Feedback:} \textit{Comparisons with prior works using bottlenecks : E.g. it is not clear why exactly this structure of bottlenecks is the best. Authors could compare various types of structures.
Differentiating clearly former work (general bottleneck approach) and own work is crucial.}
\paragraph{Response:} Please see our general response to all reviewers in sections \ref{rebuttal:comparisons_other_bottlenecks} and \ref{rebuttal:related_works}, where we include additional discussion of related prior works, clearly stating how our work differs from earlier representation bottlenecks. In addition, we include more experimental results comparing {\textsc{RepDIB}} with other baseline bottleneck methods: comparisons with several other information bottleneck baselines (figures \ref{fig:rebuttal_comparison_bottlenecks_images} and \ref{fig:rebuttal_comparison_bottlenecks_video}), along with ablation studies of {\textsc{RepDIB}} itself (figures \ref{fig:rebuttal_ablation_image} and \ref{fig:rebuttal_ablation_video}). We hope the additional experimental results help clarify the significance of {\textsc{RepDIB}} on top of existing representation learning objectives.
\subsection{Clarity on Experimental Section}
\paragraph{Feedback:} \textit{I find the most confusing the Experiments section. You start with introducing the experiments based on the questions that they answer. Why not then organize the experiments sections in the same order? Also, it is not clear at first sight which of the tasks are your own and which were benchmarks from previous works (i.e. Section 4.5, why wait until the end of the paragraph to say that you used the MMact dataset?).
}
\paragraph{Response:} Thank you for your feedback. We will restructure the experiments section in the updated version of the manuscript. You are absolutely correct that it would read more clearly if the experiments were organized in the same order as the questions they answer, with each subsection stating up front what it tries to answer.
About the MMAct dataset, we include a brief clarification of the results in section 4.5. We evaluated the impact of {\textsc{RepDIB}} on learning multi-modal representations for a human activity recognition task. We trained all baseline and {\textsc{RepDIB}} models on the MMAct dataset using five modalities (two visual views, accelerometer, gyroscope, and orientation). Following the previous benchmark \cite{kong2019mmact}, we evaluate the models on the test split and compare performance (F1-score) on the human activity recognition task.
\subsection{Clarification on the use of VQ-VAE}
\paragraph{Feedback:} \textit{“Moreover, in the REPDIB Implementation Details paragraph in Section 3, you mention using VQ-VAE - this is however the only mention in the whole paper, the reader does not know what you are referring to or what the abbreviation means. Also the VQ abbreviation itself (probably for vector quantization) used in the VQ bottleneck term was not properly introduced. Same with baseline DQN agent”
}
\paragraph{Response:} We will clarify this in the updated manuscript. Yes, we indeed use a vector quantization (VQ) bottleneck; this is shown in the illustration of the {\textsc{RepDIB}} approach, and the method description in section 3 includes details of how the VQ-VAE approach works. We will introduce the VQ-VAE abbreviation properly, cite the VQ paper where it is first used, and state clearly what we do here and how we achieve the factorial representations for {\textsc{RepDIB}}.
\subsection{Prior Related Works}
\paragraph{Feedback:} \textit{“The authors briefly cover certain works from the areas of Self-supervised representation learning in RL, Learning representations with information bottlenecks and Information bottlenecks for exploration in deep reinforcement learning. Although they mention that some of the works are similar to their solution (such as Yarats et al., 2021), they do not mention the difference or in what way is their solution more suitable. Only later, in the Experiments Section 4.2., the work from Yarats et al. is suddenly referred to as the Proto-RL baseline and is described in detail including the differences from REPDIB. I would suggest putting this comparison into the Related Work as it is confusing to introduce a baseline halfway through the Experiments section.Overall, I think the Related work section could be rewritten to better explain where does the proposed algorithm stand compared to the state of the art.}
\paragraph{Response:} Please see the general responses to all reviewers in sections \ref{rebuttal:related_works} and \ref{rebuttal:comparisons_other_bottlenecks}, where we include details on prior related works and how {\textsc{RepDIB}} compares to them, along with additional experimental results comparing {\textsc{RepDIB}} with other bottleneck approaches. We mention the ProtoRL baseline \cite{yarats2021protorl} only in that specific section because we use ProtoRL solely for the experiments on the URLB benchmark \cite{URLB}, where it is an existing state-of-the-art representation learning technique. We would like to clarify that ProtoRL itself does not use any information bottleneck based approach, other than a clustering technique from which it learns the prototypes; in contrast, {\textsc{RepDIB}} is based on a VQ-VAE based approximation. Therefore, we did not include ProtoRL as a general related work, but only discussed it when we used the URLB benchmark for the control tasks. To clarify how related works compare to {\textsc{RepDIB}}, we have provided additional discussion in section \ref{rebuttal:related_works} of the general responses to all reviewers.
\section{Response to Reviewer 6}
\subsection{Clarification on {\textsc{RepDIB}} Implementation}
\paragraph{Feedback:} \textit{Section 3 is a little bit hard to understand given the current presentation. After reading, I am still confused how the information bottleneck is implemented and I need to dive into literature to help me understand the technique of this paper. Perhaps this is because I am not familiar with the information bottleneck literature. But I do suggest that authors make the introduction of the technique more self-contained. For example, if the space in the main text is limited, then more detailed explanation or/and pseudo-codes can be added in the appendix.}
\paragraph{Feedback:} \textit{I think the introduction of the detail of the proposed technique can be more self-contained.}
\paragraph{Response:} The implementation is very simple. We will update the appendix with an algorithm block, but it looks something like:
\begin{verbatim}
def bottleneck(h, n_factors):
    h1, L1 = VIB(h)
    h2, L2 = VQ_bottleneck(h1, n_factors)
    return h2, L1 + L2
\end{verbatim}
This bottleneck is simply called at the end of the encoder network, and the extra-losses L1+L2 are added to the loss to be optimized. The computational cost of the bottleneck is negligible compared to the part of the encoder that must process the rich observation space in the inputs.
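For completeness, a runnable pure-Python sketch of the two stages is below. This is an illustrative mock-up, not the paper's code: the Gaussian VIB head (first half of $h$ as the mean, second half as log-variance), the per-factor nearest-neighbour quantization, and the shared codebook across factors are all simplifying assumptions.

```python
import math, random

def vib(h, noise_scale=1.0):
    """Gaussian variational bottleneck: treat the first/second halves of h as
    mean and log-variance, sample z via reparameterization, and return z plus
    the KL(q || N(0, I)) penalty."""
    d = len(h) // 2
    mu, logvar = h[:d], h[d:]
    z = [m + math.exp(0.5 * lv) * random.gauss(0, noise_scale)
         for m, lv in zip(mu, logvar)]
    kl = 0.5 * sum(math.exp(lv) + m * m - 1.0 - lv for m, lv in zip(mu, logvar))
    return z, kl

def vq_bottleneck(z, codebook, n_factors):
    """Split z into n_factors chunks and snap each chunk to its nearest
    codebook vector; the summed squared distance acts as a commitment loss."""
    d = len(z) // n_factors
    out, loss = [], 0.0
    for g in range(n_factors):
        chunk = z[g * d:(g + 1) * d]
        best = min(codebook,
                   key=lambda e: sum((a - b) ** 2 for a, b in zip(chunk, e)))
        loss += sum((a - b) ** 2 for a, b in zip(chunk, best))
        out.extend(best)
    return out, loss

def bottleneck(h, codebook, n_factors):
    # VIB first, then the discrete (VQ) bottleneck; losses are summed
    z, l1 = vib(h)
    q, l2 = vq_bottleneck(z, codebook, n_factors)
    return q, l1 + l2
```

In a real implementation the codebook is learnt and gradients flow through a straight-through estimator; here the point is only the control flow of `VIB` followed by `VQ_bottleneck`.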
\subsection{Clarifications on Factorial Representations}
\paragraph{Feedback:} \textit{I am curious why the factorized structure works. Why is this structure superior? Is there any theory or intuition behind this}
\paragraph{Response:} Thank you for asking this question. We would like to provide further clarification on how {\textsc{RepDIB}} learns factorial representations.
If there are $L$ codes per factor and $G$ factors, then the maximum number of distinct encoded values is $L^G$. If $G=1$ and $L$ is moderate (say 500), this bottleneck is extremely tight and results are adversely affected; even $G=4$ or $G=8$ makes the bottleneck much looser and able to work on complex RL tasks.
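As a quick illustration of this capacity argument (function name ours):

```python
def codebook_capacity(codes_per_factor, n_factors):
    """Number of distinct discrete representations with G independent
    factors of L codes each: L ** G."""
    return codes_per_factor ** n_factors

# A single factor with 500 codes admits only 500 distinct representations,
# while 4 factors of 500 codes admit 500**4 = 62.5 billion.
```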
In addition, we included additional results in the general response to all reviewers, demonstrating how {\textsc{RepDIB}} learns factorial representations; please see section \ref{rebuttal:factorial_representation} and figure \ref{fig:pacs}.
\subsection{Robustness of Learnt Representations}
\paragraph{Feedback:} \textit{“This paper mentions many times the robustness of representation. How to measure the robustness of a certain representation? Is it measured by the good performance under the existence of exogenous distractors?”}
\paragraph{Response:} Thank you for asking this question; we would like to clarify this point, since it helps in understanding the paper. Our measure of robustness is indeed based on performance; to the best of our knowledge, there is no other established metric for robustness in the literature. The reason we claim that {\textsc{RepDIB}} helps achieve robust representations is the following: our experiments, especially in the offline domains, are based on observations that contain time-correlated exogenous noise in the background (changing videos or time-correlated fixed images). If the representations learnt from these observations still depend on the exogenous components, this eventually impacts policy learning performance; in other words, the representation could not discard the irrelevant information, and the policy depends on factors other than the most informative states of the MDP. When we use {\textsc{RepDIB}}, policy performance improves significantly, whatever the underlying representation objective on top of which we integrate it. This suggests that {\textsc{RepDIB}} helps discard the irrelevant components of the observations, and robustness is therefore measured through overall policy learning performance: if representations can discard exogenous noise completely, policy improvement is significant, whereas policies that depend on exogenous factors are unlikely to solve the task well. We will include this explanation in the final version of the paper, since the question you raise is an important one. Thank you for your valuable feedback.
\section{Related Work}
\label{sec:related_work}
\vspace{-2mm}
\textbf{Self-Supervised Representation Learning in RL}. Several prior works have studied representation learning in the context of RL, ranging from online to offline settings \cite{NachumRepMatters, KostrikovOfflineCritic, NachumImitation}, including the ability to recover underlying latent states that capture environment dynamics \cite{lamb2022guaranteed, BallLPR21}. Most of these works learn representations from high-dimensional observations, which may contain irrelevant information; the theoretical RL community formalizes this as learning under irrelevant exogenous information \cite{Efroni2021ppe, efroni2022colt}. In this work, we show the effectiveness of information bottlenecks with {\textsc{RepDIB}} when learning under exogenous information, and show that bottlenecks can filter out irrelevant information from observations. Empirically, prior works have studied regularized objectives for learning robust representations \cite{MazoureCDBH20, JaderbergMCSLSK17}, while others have exploited empowerment-based objectives \cite{mohamed2015variational}. Self-supervised objectives have been shown to achieve large performance improvements when used for pre-training representations \cite{CURL, ATC, SchwarzerAGHCB21, SchwarzerRNACHB21}, and to yield better exploratory objectives when representations are fine-tuned~\cite{yarats2021protorl}.
\textbf{Learning Minimal Representations with Information Bottleneck}. In this work, we argue that information bottleneck based representations with {\textsc{RepDIB}} can be an effective approach for learning robust representations in RL in the presence of exogenous information. The information bottleneck principle~\cite{wang2022rethinking,tishby2015deep,shwartz2017opening,tishby2000information} advocates learning minimal sufficient representations, i.e., those which contain \textit{only} the information needed for the downstream task: an optimal representation of $X$ retains just the information relevant for predicting $Y$, making it parsimonious for the task. Several approaches have been proposed to build information bottlenecks into deep learning models, such as variational bottlenecks \cite{sun2022graph,alemi2016deep} and discrete representation bottlenecks \cite{discrete_bottleneck}. Most prominently, Alemi et al.~\cite{alemi2016deep} introduced a variational approximation to the mutual information objective of the information bottleneck and applied it to deep neural networks.
\textbf{Information Bottleneck for Exploration in Deep Reinforcement Learning}. The exploration problem is inherently coupled with the representation learning problem, since discovering the underlying latent structure of the world determines which unseen frontiers of the observation space the agent can learn to reach. While several recent works have studied representation learning in RL for improving downstream task performance~\cite{SchwarzerAGHCB21, SchwarzerRNACHB21}, the closest to our work is learning with prototypical representations~\cite{yarats2021protorl}, which studies the coupled problem of representation learning and exploration. \cite{InfoBotGoyal, goyal2020variational} previously studied exploration based on identifying latent bottleneck states, but do not learn an explicit representation with a self-supervised objective, and \cite{DropBottleneck} studied bottlenecks for inducing exploration in RL. On the theoretical side, \cite{misra2020kinematic} grounds representation learning and exploration with theoretical guarantees, but cannot scale to rich observation environments. Several exploration algorithms have been proposed for large observation spaces, such as pseudo-counts \cite{OstrovskiBOM17, BellemareSOSSM16}, optimism-driven exploration \cite{OsbandRRW19}, intrinsic motivation \cite{OudeyerK09}, random network distillation \cite{burda2018exploration}, and curiosity-based exploration with prediction errors \cite{pathak2017curiosity}. While these algorithms address exploration in complex high-dimensional tasks, they do not necessarily learn or exploit any structure in the representation space.
\textbf{Comparisons with Prior Related Works:} An information bottleneck aiming at minimal sufficient representations can be implemented in various ways, including a variational approach (VIB) and architectural choices such as reducing the dimension of deeper layers or discretizing them. Chenjia et al.~\cite{BaiDB} apply the information bottleneck directly to the dynamics of the system, whereas {\textsc{RepDIB}} applies it to different downstream targets, such as DQN targets or inverse model targets. {\textsc{RepDIB}} also combines both kinds of bottlenecks, i.e., architectural (discrete bottlenecks in particular) and variational ones, whereas previous work in reinforcement learning that enforces bottlenecks has used either type independently. Dreamer-v2 and similar variants have included discretization for pixel-level model-based learning. In this paper, we take a zoomed-out perspective on the efficacy of bottlenecks in learning representations for reinforcement learning.
\onecolumn
\aistatstitle{Supplementary Materials}
\section*{Appendix}
\input{app_exp_results}
\input{app_significance}
\input{app_exp_details}
\clearpage
\newpage
\section{Introduction} \label{sec:one}
\begin{figure}[b]
\bigskip
\begin{center}
\includegraphics[width = 60 mm]{twophoton.pdf}
\caption{The box diagram for the $\mathcal O(\alpha^5m^4)$ corrections. The graph in which the photons cross is also included.
}
\label{fig:lambbox}
\end{center}
\end{figure}
The proton radius puzzle is one of the most perplexing physics issues of recent times. The
extremely precise extraction of the proton radius~\cite{pohl} from the measured
energy difference between the $2P_{3/2}^{F=2}$ and $2S_{1/2}^{F=1}$ states of muonic hydrogen disagrees with that
extracted from electronic hydrogen.
The extracted value of the proton radius is smaller than
the CODATA~\cite{codata} value (based mainly on electronic H) by about 4\% or 5.0
standard deviations. This implies~\cite{pohl} that either the Rydberg constant has to be
shifted by 4.9 standard deviations or that
present QED calculations for hydrogen are insufficient.
The Rydberg constant is extremely well measured and the
QED calculations seem to be very extensive and highly accurate, so the muonic H finding is
a significant puzzle for the entire physics community.
\newcommand{\be}{\begin{eqnarray}}
\newcommand{\ee}{\end{eqnarray}}
Pohl {\it et al.} show that
the energy difference
between the $2P_{3/2}^{F=2}$ and $2S_{1/2}^{F=1}$ states, $\Delta\widetilde{E}$ is given by
\begin{eqnarray}
\Delta\widetilde{E}=209.9779(49)-5.2262r_p^2+0.0347 r_p^3 \;{\rm meV},\label{rad}
\end{eqnarray}
where $r_p$ is given in units of fm. Using
this equation and the experimentally measured value $\Delta\widetilde{E}=206.2949$ meV, one can see that the difference between the Pohl and CODATA values of the proton radius
would be removed by an increase of the first term on the rhs of Eq.~(1)
by 0.31 meV=$3.1\times 10^{-10}$ MeV.
This proton radius puzzle has been attacked from many different directions~\cite{Jaeckel:2010xx}-\cite{Miller:2011yw}.
The present communication investigates the hypothesis that the proton polarizability contributions entering the two-photon exchange term,
see Fig.~\ref{fig:lambbox}, can account for the 0.31 meV.
This idea is worthy of consideration because the computed effect is proportional to the lepton mass to the fourth power, and
so is capable of being relevant for muonic atoms, but irrelevant for electronic atoms.
\section{$\Delta E^{subt} $ and its Evaluation } \label{sec:two}
The basic idea is that the two-photon exchange term depends on the forward virtual Compton scattering amplitude $T^{\mu\nu}(\nu, q^2)$, where $q^2$ is the square of the four-momentum $q^\mu$ of the virtual photon and $\nu$ is its time component. One uses symmetries to decompose $T^{\mu\nu}(\nu, q^2)$ into a linear combination of two terms, $T_{1,2}(\nu,q^2)$. The imaginary parts of
$T_{1,2}(\nu,q^2)$ are related to structure functions $F_{1,2}$ measured in electron- or muon-proton scattering, so that
$T_{1,2}$ can be expressed in terms of $F_{1,2}$ through dispersion relations. However,
$F_1(\nu,Q^2)$ falls off too slowly for large values of $\nu$ for the dispersion relation to converge. Hence, one makes a
subtraction at $\nu=0$, requiring that an additional function of $Q^2$ (the subtraction function) be introduced. One accounts for the nucleon Born terms, and the remainder of the unknown subtraction function is
written as $\overline T_1(0,Q^2)$~\cite{Carlson:2011zd}. This term is
handled by making a power series expansion around $Q^2=0$, and then using effective field theory
to determine the coefficients of the series. The problem with using this expansion is that this contribution to the energy is determined by an integral over all values of $Q^2$.
We proceed by elaborating the consequences of the behavior of $\overline T_1(0,Q^2)$ for large values of $Q^2$. This is followed by the development of an alternate effective field theory approach to the muon-proton scattering amplitude. In either case, one can account for the needed Lamb shift, while also providing consequences for the two-photon exchange contribution to the scattering amplitude that can be tested in an upcoming experiment~\cite{Arrington:2012}.
The contribution to the Lamb shift that is caused by
$\overline T_1(0,Q^2)$ is denoted as $\Delta E^{subt} $ and is given
by~\cite{Pachucki:1996zza,Pachucki:1999zza,Martynenko:2005rc,Carlson:2011zd,Carlson:2011dz}
\begin{align}&\Delta E^{subt} = \frac{\alpha^2}{m} \phi^2(0) \int_0^\infty {dQ^2\over Q^2}\,
h\!\left({Q^2\over 4m^2}\right)
\,\overline T_1(0,Q^2),
\label{one}\end{align}
where
$
\phi^2(0)={\alpha^3 m_r^3\over 8\pi}$ for the 2S state
with $m, (m_r)$ as the lepton (reduced) mass, and
\begin{align}
h(t)&= (1-{2t}) \Big((1+{1\over t})^{1/2} - 1\Big) + 1 .\label{three}
\end{align}
The function $h(t) $ is monotonically falling, approaching $1/\sqrt{t}$ for small values of $t$ and
falling as $3/(4t)$ for large values of $t$.
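These limits are easy to check numerically; the following short Python snippet (ours, not part of the paper) evaluates the weighting function $h(t)$ of \eq{three} at extreme arguments:

```python
import math

def h(t):
    """The weighting function of Eq. (3): h(t) = (1-2t)(sqrt(1+1/t)-1) + 1."""
    return (1 - 2*t) * (math.sqrt(1 + 1/t) - 1) + 1
```

Evaluating at $t \ll 1$ and $t \gg 1$ reproduces the $1/\sqrt{t}$ and $3/(4t)$ behaviors, and intermediate values confirm that $h$ is monotonically falling.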
The subtraction function $\overline T_1(0,Q^2)$ is not available from experimental measurements, except at the real photon point $Q^2 = 0$. It comes from the excitation of the proton, and can be described, at small values of $Q^2$, in terms of the electric ($\alpha_E$) and magnetic ($\beta_M$) polarizabilities. For small values of $Q$ and $\nu=0$
one sees~\cite{Pachucki:1996zza}
$
\lim_{\nu^2,Q^2\to 0} \overline T_1(0,Q^2) =
\frac{Q^2}{\alpha} \beta_M. $
Using this simple linear $Q^2$-dependence in \eq{one} shows that the integral over $\overline T_1(0,Q^2)$ converges at the lower limit, but {\bf diverges logarithmically} at the upper limit. Thus obtaining a finite result depends on including an arbitrary form factor that cuts off the integrand at large values of $Q^2$, or on some other renormalization procedure.
We note that $\lim_{Q^2\to\infty}\bar{T}_1(0,Q^2)$ can be obtained from the operator product expansion~\cite{Collins:1978hi,WalkerLoud:2012bg}. Using Eq. (2.18) of Ref.~\cite{Collins:1978hi}, neglecting the term proportional to light quark masses, and accounting for different conventions yields $\bar{T}_1(0,Q^2)\sim2.1\; {\rm fm}^{-1}/Q^2$. This $1/Q^2$ behavior removes the putative logarithmic divergence of the integral, but this function is far
from determined.
We follow the previous literature by including a
form factor defined as $F_{\rm loop}$. Then
\be
\overline T_1(0,Q^2) = \frac{\beta_M}{ \alpha} Q^2 F_{\rm loop}(Q^2) \,.\label{six}
\ee
Using Eqs.~(\ref{one},\ref{three},\ref{six}) one finds the energy shift to be
\begin{align}
&\Delta E^{subt} = \frac{\alpha^2 \phi^2(0)\; }{ m} {\beta_M\over\alpha} \int_0^\infty {d Q^2}\left[(1-2 Q^2/(4m^2))\left(\sqrt{1+{4m^2\over Q^2}}-1\right)+1\right]F_{\rm loop}(Q^2).
\label{de3} \end{align}
The issue here is the arbitrary nature of the function $F_{\rm loop}(Q^2)$.
Pachucki~\cite{Pachucki:1999zza} used the dipole form, $\sim 1/Q^4$, often used to characterize
the proton electromagnetic form factors. But the subtraction function should not be computed from the proton form factors, because
virtual Compton scattering includes a term in which the photon is absorbed and emitted from the same quark~\cite{Brodsky:1971zh}.
Carlson and Vanderhaeghen~\cite{Carlson:2011zd} evaluated a loop diagram using a specific model and
found a form factor $\sim 1/Q^2\log Q^2$, leading to a larger contribution to the subtraction term than previous authors.
Birse \& McGovern~\cite{Birse:2012eb} use terms up to fourth-order in chiral perturbation theory to find
\begin{eqnarray}
\label{eq:bm}
\overline T_1^{BM}(0,Q^2) \simeq\frac{\beta_M}{ \alpha} Q^2 \left(1- {Q^2\over M_\beta^2} +{\cal O}(Q^4)\right) \;\to
\frac{\beta_M}{ \alpha} Q^2 {1\over\left(1+ {Q^2\over 2M_\beta^2}\right)^2}, \label{bm1}
\end{eqnarray}
with $M_\beta=460 \pm 50 $ MeV.
They also use the most recent evaluation of $\beta_M$, based on a fit to real
Compton scattering~\cite{Griesshammer:2012we} that finds
\be
\beta_M = (3.1 \pm 0.5) \times 10^{-4} {\rm\ fm}^3,\label{betam}
\ee
where only statistical and Baldin Sum Rule errors are included. Their result is a negligible
$\Delta E^{subt}= 4.1\;\mu{\rm eV}$~\cite{Birse:2012eb}.
The form \eq{bm1} achieves the correct $1/Q^2$ asymptotic behavior of $ \overline T_1(0,Q^2) $ but the coefficient
$\beta_M/\alpha$ is not the same as obtained from the operator product expansion. The coefficient of \eq{bm1} is about twice the asymptotic limit obtained by Collins~\cite{Collins:1978hi}.
Previous authors~\cite{Carlson:2011zd,Birse:2012eb} noted the sensitivity of the integrand of \eq{de3} to large values of $Q^2$.
Our aim here is to more fully explore the uncertainty in the subtraction term that arises from the logarithmic divergence.
We shall use a form of $F_{\rm loop}(Q^2)$ that is consistent with the constraint on the $Q^4$ term found by Birse \& McGovern~\cite{Birse:2012eb}. This is done by postulating a term that begins at order $Q^6$ in \eq{six}, such as
\be
F_{\rm loop}(Q^2)=\left({Q^2\over M_0^2}\right)^n{ 1\over (1+ a Q^2)^N },\; n\ge2,\;N\ge n+3,\label{mine}\ee
where $M_0,a$ are parameters to be determined. With \eq{mine}
the low $Q^2$ behavior of $\bar{T}_1(0,Q^2)$ is of order $Q^6$ or greater and it falls as $1/Q^4$ or greater for large values of $Q^2$.
So far as we know, there are no constraints on the coefficients of the $Q^6$ and $1/Q^4$ terms. However, we shall determine the subtraction term's contribution to the Lamb shift as a general function of $n,N$. We note that $\beta_M$ is anomalously small due to a cancellation between pion cloud and intermediate $\Delta$ terms~\cite{Thomas:2001kw}, so that one can use a value ten times larger than appears in \eq{betam} to set the overall scale of the subtraction term.
Thus we replace the term $\beta_M$ of \eq{six} by a general form of the same dimensions $\beta$:
$ \beta_M\rightarrow \beta.$
The use of \eq{mine} in \eq{de3} allows one to state the expression for the energy shift in closed form as a general function of
$n,N.$ We find
\begin{eqnarray}&& \Delta E^{subt} = \frac{\alpha^2 \phi^2(0)\; }{ m} {\beta\over\alpha} \left({1\over a M_0^2}\right)^n J_{n,N}(m^2a), \\&&
J_{n,N}(m^2a)\equiv {1\over a} \int_0^\infty\;dx\;{x^n\over (1+x)^N}\left[\left(1-{x\over 2m^2 a}\right)\left((1+{4m^2a\over x})^{1/2}-1\right)+1\right] .
\label{jeq}\end{eqnarray}
The integral over $x$ can be obtained in a closed form in terms of hypergeometric functions. However, a much more understandable expression can be obtained by replacing the bracketed expression in \eq{jeq} by its large argument limit $(3m^2a/x)$. This approximation is valid
over the entire range of the integrand because of the presence of the factor $x^n$ with $n\ge 2$.
Then one obtains
\begin{eqnarray} J_{n,N}(m^2a)\approx 3m^2
{\Gamma(N-n)\Gamma(n)\over \Gamma(N)}=3m^2 B(N,n),
\label{jeq2}\end{eqnarray}
so that
\begin{eqnarray} \Delta E^{subt} \approx 3 {\alpha^2 m\phi^2(0)\; } {\beta \over\alpha} \lambda^n B(N,n),\;\lambda\equiv{1\over M_0^2a}.\label{myde}\end{eqnarray}
Numerical evaluations show that the approximation is accurate to better than a quarter of a percent.
The expression \eq{myde} makes clear the $m^4$ dependence of the contribution to the Lamb shift.
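The accuracy of the Beta-function approximation can be checked numerically. The following sketch (not part of the original analysis; the value of $m^2a$ and the Simpson quadrature are illustrative choices) compares the exact integral of \eq{jeq} with the closed form of \eq{jeq2} for $n=2$, $N=5$:

```python
import math

def bracket(x, A):
    # Kernel of Eq. (jeq): (1 - x/(2A)) * (sqrt(1 + 4A/x) - 1) + 1, with A = m^2 a
    return (1.0 - x / (2.0 * A)) * (math.sqrt(1.0 + 4.0 * A / x) - 1.0) + 1.0

def J_over_m2(n, N, A, steps=20000):
    # (1/m^2) * J_{n,N}: integrate x^n/(1+x)^N * bracket(x)/A over x in (0, inf),
    # using the substitution x = t/(1-t) and composite Simpson on t in (0, 1).
    def g(t):
        if t <= 0.0 or t >= 1.0:
            return 0.0  # integrand vanishes at both endpoints after substitution
        x = t / (1.0 - t)
        return x**n / (1.0 + x)**N * bracket(x, A) / A / (1.0 - t)**2
    h = 1.0 / steps
    s = g(0.0) + g(1.0)
    for i in range(1, steps):
        s += (4.0 if i % 2 else 2.0) * g(i * h)
    return s * h / 3.0

def beta_approx(n, N):
    # 3 * Gamma(N-n) * Gamma(n) / Gamma(N), the closed form of Eq. (jeq2) over m^2
    return 3.0 * math.gamma(N - n) * math.gamma(n) / math.gamma(N)

A = 7.3e-4  # illustrative m^2 a, from m = m_mu and a^{-1} = 15.4 GeV^2
exact = J_over_m2(2, 5, A)
approx = beta_approx(2, 5)
print(exact, approx, abs(exact / approx - 1.0))
```

For this small value of $m^2a$ the two results agree at the sub-percent level, consistent with the accuracy quoted above.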
The numerical value of the term $ \Delta E^{subt} $ depends on $(n,N),\beta$ and the combination $M_0^2a\equiv \lambda^{-1}$:
\begin{eqnarray} \Delta E=3.91 {\rm meV \;fm^3} \beta \lambda^n B(N,n).\end{eqnarray}
If we take $N=5,n=2$ so that $B(5,2)=1/12$, and $\beta =10^{-3}$ fm$^{-3}$, a value of $\lambda= 30.9$ reproduces
$\Delta E=0.31\; {\rm meV}$. If we take $M_0=0.5 $ GeV (as in \cite{Birse:2012eb}), then $a^{-1}=15.4 \;{\rm GeV}^2$, so that the contribution to the integral comes from the region of very high values of $Q^2$. Other values of
$n,N$ and $\lambda$ could be used to get the identical contribution to the Lamb shift.
Chiral perturbation theory could be used to determine the terms of order $Q^6$ and higher in $ \overline T_1^{BM}(0,Q^2) $,
but this procedure is always limited to a finite number of terms. Indeed one could use values of $n$ greater than 2, and still reproduce the needed contribution to the Lamb shift.
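As a quick arithmetic check (a sketch; $\beta$ and $\lambda$ as quoted above, with units as in the preceding equation), the stated parameter choice indeed reproduces the required shift:

```python
import math

def B(N, n):
    # Beta-type factor Gamma(N-n) * Gamma(n) / Gamma(N) appearing in Eq. (jeq2)
    return math.gamma(N - n) * math.gamma(n) / math.gamma(N)

beta = 1.0e-3   # value of beta quoted in the text (units as quoted there)
lam = 30.9      # lambda = 1/(M_0^2 a)
dE = 3.91 * beta * lam**2 * B(5, 2)  # meV, for n = 2, N = 5
print(dE)  # ≈ 0.31 meV
```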
The above discussion shows that the current procedure used to estimate the size of the subtraction term is rather arbitrary. This arises because the chiral EFT is being applied to the virtual-photon nucleon scattering amplitude. Another technique would be to develop an effective field theory to determine the short-distance lepton-nucleon amplitude implied by the subtraction term.
\section{Effective field theory for the $\mu p$ interaction}
The previous considerations show that the value of $ \Delta E^{subt} $ depends heavily on assumptions about the behavior of $\overline T_1(0,Q^2)$ (i.e., of $F_{\rm loop}$) for large values of $Q^2$. This is true even though the leading $1/Q^2$ term is known. The underlying cause of this uncertainty is the would-be logarithmic divergence in the integral of \eq{de3} for the case $F_{\rm loop}=1$. This is a symptom that some other technique could be used~\cite{Georgi:1994qn}.
Another way to proceed is to use an effective field theory (EFT) for the
lepton-proton interaction~\cite{Caswell:1985ui}.
In EFT, logarithmic divergences identified through dimensional regularization are renormalized away by including a lepton-proton contact interaction in the Lagrangian.
We may handle the divergence using
standard dimensional regularization
(DR) techniques by evaluating the scattering amplitude of Fig.~1.
The term of interest is obtained by including only $\overline T_1(0,Q^2)$ of \eq{three} with $F_{\rm loop}=1$.
We evaluate the loop integral in $d=4-\epsilon$ dimensions and obtain the result:
\begin{eqnarray} {\cal M}_2^{DR}({\rm loop}) ={3\over2}i\; \alpha^2 m{ \beta_M\over \alpha}\big[{2\over \epsilon}+\log {{\mu^2\over m^2}}+{5\over 6}-\gamma_E+\log 4\pi\big]\overline u_f u_i \overline U_f U_i ,\label{res1}\end{eqnarray}
where lower case spinors represent leptons of mass $m$, and upper case proton of mass $M$, $q$ is momentum transferred to the proton, and $\gamma_E$ is Euler's constant, 0.577216$\cdots\;$.
The result \eq{res1} corresponds to an infinite contribution to the Lamb shift in the limit that $\epsilon$ goes to zero. In EFT one removes the divergent piece by adding a lepton-proton contact interaction to the Lagrangian that removes the divergence, replacing it by an unknown finite part.
The finite part is obtained by fitting to a relevant piece of data.
Here the only relevant data is the 0.31 meV needed to account for the proton radius puzzle.
The low energy term contributes
\begin{eqnarray} {\cal M}_2^{DR}({\rm LET})=i C(\mu), \label{LET}\end{eqnarray} where $C(\mu)$ is chosen such that the sum of the terms of
\eq{res1} and \eq{LET}, $ \equiv {\cal M}_2^{DR}$, is finite and independent of the value of $\mu$.
Thus we write the resulting scattering amplitude as
\begin{eqnarray} {\cal M}_2^{DR} =i\; \alpha^2 m{ \beta_M\over \alpha}(\lambda +5/4)\;\overline u_f u_i \overline U_f U_i ,\label{mdr}
\end{eqnarray}
where $\lambda $ is determined by fitting to the Lamb shift. \eq{mdr} corresponds to using the $\overline{MS}$ scheme because
the term $\log (4\pi)-\gamma_E$ is absorbed into $\lambda$.
The corresponding contribution to the Lamb shift is
given by
\begin{eqnarray} \Delta E^{DR}=\alpha^2 m{ \beta_M\over \alpha}\phi^2(0)(\lambda +5/4).\label{de4}\end{eqnarray}
Setting $ \Delta E^{DR}$ to 0.31 meV in the above equation requires that $\lambda =769$, which seems like a large number. However, as noted above, $\beta_M$ is extraordinarily small~\cite{Thomas:2001kw}. The natural units of polarizability are ${\beta_M\over \alpha}\sim
4\pi/\Lambda_\chi^3$~\cite{Butler:1992ci},
where $\Lambda_\chi \equiv 4\pi f_\pi$ ($f_\pi$ is the pion decay constant).
Then \eq{mdr} becomes
\begin{eqnarray} {\cal M}_2^{DR} =i\; 3.95 \;\alpha^2 m {4\pi\over \Lambda_\chi^3 }\overline u_f u_i \overline U_f U_i .\label{mdr1}\end{eqnarray}
The coefficient 3.95 is of natural size.
Thus standard EFT techniques result in an effective lepton-proton interaction of natural size that is proportional to the lepton mass.
The form of \eq{mdr1} is not unique. There are other possible operators that reduce to that form in the low-energy, low-momentum regime
of relevance here.
The present results, \eq{myde} and \eq{de4} represent an assumption that there is a lepton-proton interaction of standard-model origin, caused by the
high-momentum behavior of the virtual scattering amplitude, that is sufficiently large to account for the proton radius puzzle.
Fortunately, our hypothesis can be tested in an upcoming low-energy $\mu^\pm p, e^\pm p$ scattering experiment~\cite{Arrington:2012} planned to occur at PSI.
\section{Lepton-Proton Scattering at Low Energies}
Our aim is to determine the consequences of the particular two-photon exchange term for lepton-proton scattering at low energies.
Our previous attempt~\cite{Miller:2011yw} implied very large corrections to quasi-elastic electron-nucleus scattering that are severely in disagreement with experiment~\cite{Miller:2012ht}. It is necessary to check that a similar large unwanted contribution does not
appear here. Thus we
provide a prediction for the PSI experiment. It is well-known that two-photon exchange effects in electron-proton scattering are small
at low energies. Our contact interaction is proportional to the lepton mass, so it could provide a measurable effect for muon-proton scattering but be ignorable for electron-proton scattering.
We shall investigate the consequences of two approaches: using form factors (FF) and using effective field theory (DR).
\newcommand{\not\hspace{-0.7mm}k}\newcommand{\lslash}{\not\hspace{-0.7mm}l}{\not\hspace{-0.7mm}k}\newcommand{\lslash}{\not\hspace{-0.7mm}l}
\newcommand{\not\hspace{-0.7mm}p}{\not\hspace{-0.7mm}p}
\newcommand{\not\hspace{-0.7mm}s}{\not\hspace{-0.7mm}s}
The invariant amplitude is given as $ {\cal M}_{fi}\equiv {\cal M}_{fi}^{(1)}+ {\cal M}_{fi}^{(2)}$
where the superscripts denote the number of photons exchanged.
The first term is given by
\begin{eqnarray} {\cal M}_{fi}^{(1)} =\mp i {e^2\over q^2+i0}\overline u_f\gamma_\mu u_i \overline U_f \Gamma^\mu U_i,
\end{eqnarray}
where $u_{i,f}$ represent leptons of mass $m$, $U_{i,f}$ represent the proton of mass $M$, and $q$ is momentum transferred to the proton.
The minus sign holds for negatively charged muons, and the plus sign for positively charged muons.
\begin{eqnarray} \Gamma^\mu=\gamma^\mu F_1(q^2)+ i{\sigma^\mn \over 2M}q_\nu F_2(q^2),\end{eqnarray}
with $P_f=P_i+q$.
The second-order term arising from the use of \eq{six} and \eq{mine}, which uses form factors (FF), is given by
\begin{eqnarray} {\cal M}_{fi}^{(2)} (FF)=-{e^4\over (2\pi)^4}\int d^4 k{1\over k^2+i0} {\overline T_1(0,-k^2)\over (k-q)^2+i0}\overline u_f L u_i \overline U_f U_i, \label{m1}
\end{eqnarray}
where
\begin{eqnarray}&&
L
= {2mk^2+4p_i\cdot k\;\not\hspace{-0.7mm}k}\newcommand{\lslash}{\not\hspace{-0.7mm}l \over -4(p_i\cdot k)^2 +k^4+i0}.
\label{l1}
\end{eqnarray}
The second-order amplitude arising from the use of EFT is given above in \eq{mdr}.
The cross section depends upon the average over initial and sum over final fermion spins, as denoted by an overline. We first obtain $ \overline{ \left | {\cal M}_{fi}^{(1)}\right|^2}$. Standard text-book expressions use the Mott form, obtained by
ignoring the lepton mass. This is not a good approximation for the muons of the experiment~\cite{Arrington:2012} which
have momenta ranging between 100 and 200 MeV/c, and our terms of \eq{m1} and \eq{mdr} would vanish in that approximation.
We find
\begin{eqnarray}&&
\overline{ \left| {\cal M}_{fi}^{(1)}\right|^2}
=[16M^2 (\varepsilon\varepsilon'+q^2/4) (F_1^2-{q^2\over 4M^2}F_2^2 ) +4G_M^2(q^4/2+q^2m^2)]
\left({e^2\over q^2+i0 }\right)^2, \nonumber \\&&
\label{m1sq}\end{eqnarray}
where $\theta$ is the laboratory scattering angle, and $\varepsilon (\varepsilon')$ is the incident (final) lepton laboratory total energy.
\begin{figure}[t]
\begin{center}
\includegraphics[width = 160 mm]{ratios.pdf}
\caption{The ratio $R$ obtained using form factor (FF) regularization or dimensional regularization (DR).
The solid curves show the results for muon laboratory momentum of 200 MeV/c and the dashed curves show the results for 100 MeV/c.}
\label{fig:R}
\end{center}
\end{figure}
The present interference term of interest $\Delta\equiv 2{\rm Real}\;\overline{ [( {\cal M}_{fi}^{(1)})^*({\cal M}_{fi}^{(2)})]}$
is obtained using standard trace algebra. We find
\begin{eqnarray}&&\Delta(FF)=
{8M G_E(q^2)}{ \mp i e^2\over q^2+i0}{e^4\over (2\pi)^4}\int d^4 k{1\over k^2+i0} {\overline T_1(0,-k^2)\over (k-q)^2+i0} {1\over(-4(p_i\cdot k)^2 +k^4+i0)} \nonumber \\&&\times[2m^2k^2(4\varepsilon M +q^2) +4p_i\cdot k\;k\cdot(P_i+P_f)q^2/2
+2p_i\cdot kk\cdot(p_i+p_f)(4\varepsilon M+q^2 )].
\end{eqnarray}
We have seen that the integrand is dominated by large values of $k$; therefore, we neglect $q$ in the integrand. This allows considerable simplification, so that we find
\begin{eqnarray}&&\Delta(FF)={\mp96G_E(q^2)}M^2{ e^2\over q^2+i0} \varepsilon\alpha {\beta}m^2 \lambda^n B(N,n) \label{dff}\end{eqnarray}
where the terms $\lambda,B(N,n)$ appear in \eq{myde}.
A negligible term proportional to the square of the incident lepton momentum has been dropped.
The term $\Delta$ adds to the square of the lowest order term for $\mu^- p$ interactions, as expected from an attractive interaction that increases the Lamb shift. The computed value of $\Delta(FF)$ does not depend on $n,N,\lambda$ for those values that
reproduce the needed Lamb shift via \eq{myde}.
For EFT the contribution to the cross section via interference can be worked out using \eq{mdr1} to be
\begin{eqnarray}&&\Delta^{DR}={\mp 8 [4\varepsilon M+q^2] \alpha ( \lambda+{5\over4}) m^2 \beta_M G_E(q^2)}M{ e^2\over q^2+i0}.\label{ddr}\end{eqnarray}
We are now prepared to display the effects of our two-photon exchange term on $\mu^--p$ scattering at low energies.
The size of the effect is represented by the ratio $R$, with
\begin{eqnarray}
R\equiv {\Delta\over \overline{ \left| {\cal M}_{fi}^{(1)}\right|^2}}.\label{rat}\end{eqnarray}
The ratio $R>0$ for $\mu-p $ scattering. The numerator of \eq{rat} is obtained from either \eq{dff} (FF) or \eq{ddr} (DR).
The ratio $R$ is proportional to the square of the lepton mass, which is negligible for $e^\pm-p$ scattering.
We consider two muon momenta 100 and 200 MeV/c. The results are shown in Fig.~\ref{fig:R}. The angular dependence is dominated by the $Q^2=-q^2$ term inherent in \eq{rat}. The two sets of curves are very similar because the size of the effect
is constrained by the required energy shift of 0.31 meV. The size of the effect should be detectable within the expected sub-1 \% accuracy of the PSI experiment. We emphasize that our calculation is valid only at low muon laboratory energies.
\section{Summary and Discussion} \label{sec:end}
The findings of this paper can be summarized with a few statements:
\begin{itemize}
\item The integrand (see \eq{one}) that determines the value of $\Delta E^{subt}$ falls off slowly at large values of $Q^2$,
causing the uncertainty in the evaluation to be large enough to account for the proton radius puzzle.
\item The integrand can be evaluated using one of an infinite set of possible form factors or a dimensional regularization procedure.
\item Either method can be used to account for the proton radius puzzle and predict an observable effect of a few percent
for low energy $\mu-p$ scattering.
\end{itemize}
The literature \cite{pohl}-\cite{Miller:2011yw} poses several explanations for the proton radius puzzle: the electronic-hydrogen experiments might not be as accurate as previously reported, $\mu-e$ universality might be violated, or a strong-interaction effect entering in a loop diagram might be important for muonic hydrogen but not for electronic hydrogen. It is beyond the scope of the present paper to argue for the unique correctness of any one of these ideas. The strong-interaction effect discussed here is large enough to be testable experimentally.
\begin{acknowledgments}
I thank M. J. Savage, B. Long, R. Pohl, S. J. Brodsky, G. Paz, R. Hill, M. Birse, R. Gilman, and A. W. Thomas for useful discussions. This research was supported by the United States Department of Energy, grant FG02-97ER41014. I gratefully acknowledge the support and gracious hospitality of the University of Adelaide during the formative stages of this work.
\end{acknowledgments}
\section{Introduction}
A classic problem of viscous friction acting on an object receives growing
attention in various contexts. In microfluidics \cite{SquiresQuake2005},
promising for applications in chemistry, biology, medicine and pharmaceutical
industry, viscous friction plays an important role in various geometries, owing to the confinement of the liquid. Bioconvection, resulting from the swimming of a large number of small objects, has been studied in biology and has recently received considerable attention in active matter
\cite{bees2020advances,kage2013drastic,ramaswamy2010mechanics}. The central
issue of this problem is flow at low Reynolds numbers governed by viscous friction.
Relevant classic studies are motion of a bubble in a capillary tube
\cite{Bretherton} and in a Hele-Shaw cell \cite{SaffmanTaylor}. Our focus in
the present study is on the latter. A number of studies have been performed in
the Hele-Shaw geometry. However, until recently, the existence of lubricating
film between the cell surface and bubble surface have not been highlighted,
and related studies have been studied mainly with a forced flow and/or with
nearly horizontal geometries
\cite{Tanveer1986,Maxworthy1986,Kopf-SillHomsy1988,MaruvadaPark1996}.
We have recently focused on the lubricating film, using a vertically standing
Hele-Shaw cell of millimeter thickness, filled with viscous liquid. As a
result, we have established a number of scaling regimes for drag friction
acting on fluids surrounded by another immiscible fluid
\cite{EriSoftMat2011,yahashi2016,okumura2017AdvCI,murano2020rising}.
Furthermore, other groups have explored closely related issues, using cells
with smaller scales \cite{keiser2018dynamics,keiser2019motion} and comparing
numerical results with experiment \cite{shukla2019film}.
In this study, we explore a seemingly simpler case of drag friction acting on
a solid disk in a Hele-Shaw cell of millimeter thickness, where similar
lubricating films exist between the cell wall and disk surface. Our principal
interest is in the case in which the thickness of the lubrication film is
smaller than the disk thickness when the disk falls in the direction perpendicular to its axis. As far as we know, this case has not been explored in the literature, although it is closely related to a study of transport of
strongly confined disks under flow in microfluidic devices
\cite{uspal2013engineering}. As a result, we identified a clear scaling
regime, which can be interpreted based on physical arguments. We also discuss
the limitation of this scaling regime.
The present experiment is an extreme case of a Stokes drag friction $F$ for a
solid disk moving in a direction perpendicular to the axis, i.e., in a
direction of the disk plane. For later convenience, we briefly review relevant
previous studies starting from a non-confined case. In this context, there are
two important geometrical parameters. One is the ratio of the radius of the
disk $R$ to the distance of the cell plates parallel to the disk plane $D$, or
a measure of confinement:
\begin{equation}
C=R/D.
\end{equation}
The other is the thickness $D_{0}$ to radius $R$, or the aspect ratio of the
disk:
\begin{equation}
A=D_{0}/R.
\end{equation}
An early study by Oberbeck on the issue in 1876 gave a Stokes drag friction
$F_{O}$ under no confinement ($C=0$) for a disk of zero thickness ($A=0$):
\begin{equation}
F_{O}=32\eta VR/3\label{eq00}%
\end{equation}
with $\eta$ the viscosity of surrounding liquid and $V$ the velocity of the
disk \cite{Oberbeck}. In 1962, Brenner considered the effect of a weak
confinement, i.e., $C\ll1$, for the zero thickness ($A=0$), to give a
first-order correction to the drag coefficient
\begin{equation}
K=F/F_{O}%
\end{equation}
in the expansion in terms of $C$ \cite{brenner1962effect} (note that $K\geq1$
due to confinement). In 1991, Davis developed analytical theory for stronger
confinement in the case of zero thickness ($A=0$) (see Sec. 4 of
\cite{davis1991slow}), and numerically presented the value of the drag
coefficient $K$ up to a large value ($C=4$), based on the analytical
expression. In 1984 and 1999, Roger et al. theoretically considered the case in which the disk thickness is finite \cite{trahan1999velocity} but under no confinement ($C=0$), and compared the predictions with experimental results for finite thickness. In 2006,
Trahan conducted experiments using disks of finite thickness under weak
confinement for very thin disks, to give an empirical fitting formula, which
gives the coefficient $K$ as a function of $A$ and $C$ \cite{trahan2006stokes}%
. Although the fitting function well captures their data, the function
contains six fitting parameters and the physical origin of the complex form of
the function was not justified. It is stressed here that in the literature
there are no analytical expressions based on physical justification when both
$A$ and $C$ are finite. In fact, Trahan's principle aim was to
experimentally\ justify Davis' result \cite{davis1991slow}, which is the case
of $A=0$. In addition, Trahan did not deal with the case the thickness of
lubricant film $h$ defined below is smaller than the disk thickness $D_{0}$,
which is the main focus of the present study.
\section{Experiment}
We filled a Hele-Shaw cell, stood vertically, with a silicone oil
(polydimethylsiloxane, PDMS) as shown in Fig. 1 (a). We inserted a metal disk
of radius $R=10$ mm at the top of the cell with a zero initial speed. We
recorded the ensuing falling motion of the metal disk with a camera after the disk
began going down in the cell at a constant speed. As indicated in the side
view (Fig. 1 (a)), we expect two lubricating films of thickness $h$ are formed
between the cell inner surface and the disk surface (see the details below).
The width and the height of the cell were 90 and 160 mm, respectively. We checked the thickness $D$ of the cell using a laser sensor (ZS-HLDS5, Omron) and its
controller (ZS-HLDC11, Omron) to find $D$ was in the range of 2.2 to 7 mm (we
also provided the data for larger $D$ to show the breakdown of the regime of
our principal interest). The viscosity $\eta$ of PDMS was in the range of
0.490 to 9.57 Pa$\cdot$s. This corresponds to the range of the kinematic
viscosity $\nu$, 1000 to 10000 cS. The metal disk is a stainless-steel 403 of
density 7.70 g/cm$^{3}$, which can be manipulated by a magnet placed on the
cell surface. The thickness $D_{0}$ ($<D$) of the disk was either 0.952, 1.88,
2.87, or 3.90 mm (these four thicknesses will be labeled as $D_{0}=1,2,3,$
and $4$ mm, for convenience, in the following). We used a digital camera
(EX-F1, Casio) setting the time interval in the range of 1 to 1/60 second. The
digital images were analyzed with a software, Image J.
\begin{figure}[h]
\includegraphics[width=\textwidth]{Fig1.eps}\caption{(a) Illustration of
experiment. A metal disk of thickness $D_{0}$ is dropped in the cell of width
$D$ filled with a viscous oil of kinematic viscosity $\nu$. The side view
suggests the existence of two lubricating films of thickness $h$. (b) Vertical
position $x$ vs time $t$. Results of three falling experiments are shown for
each parameter set: $\nu$, $D,$ and $D_{0}$. The data labeled $3000$, 5000,
and 10000 cS are performed for $(\nu,D,D_{0})=(3000,2.61,1.88),$
$(5000,2.59,1.88)$ and $(10000,2.52,1.88)$, respectively, with $D$ and $D_{0}$
given in mm.}%
\label{Fig1}%
\end{figure}
\section{Results}
\subsection{Falling motion}
As demonstrated in Fig. 1 (b), the velocity of falling motion of the disk
reached a constant speed, which was reasonably reproducible at a given
experimental parameter set, $\eta,$ $D_{0},$ and $h$ with $2h=D-D_{0}$. Here,
$h$ is the thickness of two viscous liquid films, each of which is sandwiched
by an inner surface of cell plate and a surface of the disk. To obtain
reproducible data, we need to satisfy the condition $2h=D-D_{0}$ at the entry:
the disk should be vertical and the disk center should match that of the cell (in the thickness direction). Experiments were performed carefully to satisfy these entry conditions. For this purpose, it was helpful to set a gate
of width comparable to the disk thickness $D_{0}$ at the entry point and to
keep in mind that the falling speed should be maximized when the condition is
well satisfied. As a result, errors for velocity were typically less than 10
per cent.
\begin{figure}[h]
\includegraphics[width=\textwidth]{Fig2.pdf} \caption{(a)-(c): $V$ vs $h$ for
$h<D_{0}$. (a) $\eta=1000$ cS. (b) $\eta=3000$ cS. (c) $\eta=5000$ and 10000
cS. (d) $V$ vs $h$ at $\eta=3000$ cS for $h>D_{0}$. Colors and marks
differentiate $D_{0}$ and $\eta$, respectively. When $h>D_{0}$ (only for
$\eta=3000$ cS), we assigned different marks. Open squares and diamonds
correspond to the data satisfying $D_{0}<h<R$ and $h>R$, respectively (note
that $R>D_{0}$). The labels for $D_{0}$ and $\eta$ are given in mm and cS (as
for $D_{0}$, the corresponding precise values are given in the text).}%
\label{Fig2}%
\end{figure}
\subsection{The falling velocity at different parameters}
In Fig. 2 (a)-(c), we present the falling velocity $V$, obtained as a slope in
the $x-t$ plot as in Fig. 1 (b), as a function of the film thickness $h$ for
$h<D_{0}$. Details of the data presentation are as follows. One data point,
without error bars, corresponds to each falling experiment. We repeated
falling experiments typically several times (in the range twice to more than
10 times) for a single parameter set and we showed all the results in the
plots. As a result, in some cases data points closely overlap, which indicates the size of the errors in the measurements.
In Fig. 2 (a)-(c), we observe a coherent and systematic parameter dependence:
$V$ increases with $h$ and $D_{0}$ but decreases with $\eta$, showing viscous
origin of the dynamics. Note that differences in color and mark correspond to
those in $D_{0}$ and $\eta$, respectively.
In Fig. 2 (d), we explore a wider range of $h$ for $\eta=3000$ cS to see the
limitation of the regime of our principal interest. Here, we use the same
color convention, but assign different marks as specified in the caption.
\subsection{Dimensional analysis}
The viscosity dependence of the velocity indicates that the dynamics is
governed by gravity opposed by viscosity. The gravitational energy gain per
time is trivially given as $\sim\Delta\rho gR^{2}D_{0}V$, corresponding to a
gravitational force, $\Delta\rho gR^{2}D_{0}$. As for the viscous dissipation,
we here consider the extreme case of $h\ll R$. In this limit, the viscous
dissipation may be characterized by the smallest length scale $h$ of the
problem: the dissipation in the unit volume per time scales as $\eta(V/h)^{2}%
$. This occurs inside the two viscous liquid films, each covering the disk
surface of area $\sim R^{2}$. The total dissipation per time amounts to
$\eta(V/h)^{2}R^{2}h$. Balance of the gravitational energy gain with this
dissipation gives%
\begin{equation}
V\sim\Delta\rho gD_{0}h/\eta\label{eq1a}%
\end{equation}
This corresponds to the following viscous drag and the drag coefficient:%
\begin{align}
F & \sim\eta(V/h)R^{2}\\
K & \sim R/h
\end{align}
\begin{figure}[h]
\includegraphics[width=\textwidth]{Fig3.pdf}\caption{(a) $V$ vs $h$ on a
log-log scale. All the data in Fig. 2 are shown. (b) Renormalized version of
(a), which confirms the first regime is governed by Eq. (\ref{eq1a}).}%
\end{figure}
\subsection{Agreement between experiment and theory}
In Fig. 3 (a), we collect all the data in Fig. 2 on a single plot on a log-log
scale. The total number of the data points shown in Fig. 3 (a) are 367 and
they are obtained for 74 different parameter set $(D_{0},\eta,h)$.
In Fig. 3 (b) we plot the renormalized velocity $\eta V/(\Delta\rho gD_{0}%
^{2})$ as a function of the renormalized thickness $h/D_{0}$, using all the
data in Fig. 3 (a): since Eq. (\ref{eq1a}) can be expressed as $\eta
V/(\Delta\rho gD_{0}^{2})\sim h/D_{0}$, the data described by Eq. (\ref{eq1a})
should collapse onto a master curve on this plot. This is what we observe in
Fig. 3 (b): the data for $h<D_{0}$ (all the data except for open squares and
diamonds), which well satisfy $h\ll R$ (note $D_{0}\ll R$), collapse well on the
master curve.
However, the agreement is not perfect. As a result of numerical fitting, we
find that the collapsed data, i.e., all the data in Fig. 3 except for open
squares and diamonds, are well described by the following expression%
\begin{equation}
\eta V/(\Delta\rho gD_{0}^{2})=k_{1}(h/D_{0})^{\alpha} \label{eqA}%
\end{equation}
with $k_{1}=0.127\pm0.001$ and $\alpha=0.716\pm0.008$. Note that the exponent
$\alpha$ deviates from the theoretical prediction $\alpha=1$.
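For illustration, the fitted law can be turned into a small terminal-velocity estimator. The sketch below assumes sample parameter values (an oil density of about 970 kg/m$^3$ to convert kinematic to dynamic viscosity, and one parameter set from Fig. 1 (b)); it is not a substitute for the measured data:

```python
# Terminal velocity from the empirical fit eta*V/(drho*g*D0^2) = k1*(h/D0)^alpha.
K1, ALPHA = 0.127, 0.716      # fitted constants quoted in the text
G = 9.81                      # m/s^2

def terminal_velocity(h, D0, eta, drho):
    """h, D0 in m; eta in Pa*s; drho in kg/m^3; returns V in m/s."""
    return K1 * (h / D0)**ALPHA * drho * G * D0**2 / eta

# Illustrative numbers: nu = 3000 cS PDMS (density assumed ~970 kg/m^3),
# steel disk (7700 kg/m^3), D0 = 1.88 mm, D = 2.61 mm so h = (D - D0)/2.
eta = 3.0e-3 * 970.0          # Pa*s
drho = 7700.0 - 970.0         # kg/m^3
V = terminal_velocity(0.5 * (2.61e-3 - 1.88e-3), 1.88e-3, eta, drho)
print(V)  # of order a few mm/s
```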
\section{Discussion}
\subsection{Slight deviation from the prediction}
The reason $\alpha$ is slightly less than unity in Eq. (\ref{eqA}) should be
explored in the future. In the light of Eq. (\ref{eq1a}), $V$ should linearly
approach zero as $h$ decreases: scaling arguments predict $\alpha=1$. However,
all the data in Fig. 2 (a)-(c) are well described by Eq. (\ref{eqA}) with
$\alpha$ slightly less than unity. In fact, the data of the same color in Fig.
2 (b) and (c) do not seem to lie on a straight line, but rather seem to smoothly go
to the origin $(h,V)=(0,0)$ by extrapolation, which is consistent with the
fitting with $\alpha$ slightly less than unity.
The data in Fig. 2 (a) may seem to be on the straight line, but may also seem
to smoothly go to the origin by extrapolation. However, the latter view is
supported by the overall fitting. In addition, the former view implies a
slip length of the order of 0.1 mm (note that the simple linear extrapolation
of the data of each color in Fig. 2 (a) may intersect the horizontal axis at a
value around $h=-0.1$ to $-0.3$ mm). This order of magnitude is too large for
the slip length. This characteristic length has been discussed for polymer
liquids \cite{de2005soft} and its typical order estimated by the reptation
model \cite{de2005soft,de1979scaling} is monomer scale multiplied by the ratio
of melt viscosity to monomer viscosity. This suggests that, in the present
case, the slip length may be at most $10\;\mu$m. This means that the present
experiment is not appropriate to discuss the slip length. For this purpose, we
need to explore the case in which $h$ is even smaller, which is technically
challenging.\begin{figure}[h]
\includegraphics[width=\textwidth]{Fig4.pdf}\caption{(a) Range of the aspect
ratio $A/2=D_{0}/(2R)$ and the confinement parameter $2C=2R/D$ of the present
study for $h>D_{0}$. (b) The drag coefficient $K$ as a function of
$A/2=D_{0}/(2R)$. Solid lines show Eq. (\ref{Trahan}) for four different
values of $2C=2R/D$ (see the text for the details). (c) The same plot with (b)
but with a factor 1.31 multiplied for solid lines. (d) $K$ obtained from
velocity vs fit based on Eq. (\ref{eqK2}).}%
\label{Fig4}%
\end{figure}
\subsection{Data for $h>D_{0}$}
In Fig. 2 (d), we provided the data for $h>D_{0}$ and showed these data do not
collapse well on the master curve, on which the data for $h<D_{0}$ collapse
well. In this case of $h>D_{0}$, Trahan \cite{trahan2006stokes} provided the
data in the range $0.025<A/2=D_{0}/2R<0.0759$ and $0.1623<2C=2R/D<3.890$ (as
seen in Fig. 4 (a), compared with the present case, the aspect ratio $A$ in
\cite{trahan2006stokes} is generally much smaller). Trahan showed that the
data are quantitatively well described by the following expression containing
six fitting parameters:%
\begin{equation}
K=1+a(2C)^{b}+(f+g(2C))^{2}(A/2)^{1-k\exp(-p(2C))}\label{Trahan}%
\end{equation}
with $a=0.8160,$ $b=1.0754,$ $f=1.0418,$ $g=1.3312,$ $k=0.2269$, and $p=1.51$.
This complex form was proposed only on the basis that a fit in a much simpler
form of $K=b^{\prime}(A/2)^{c^{\prime}}$ with $b^{\prime}$ and $c^{\prime}$
constants worked well in the case of $C=0$.
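For reference, the six-parameter fit can be coded directly (a sketch using the parameter values quoted above; the function name is ours):

```python
import math

def K_trahan(two_C, half_A,
             a=0.8160, b=1.0754, f=1.0418, g=1.3312, k=0.2269, p=1.51):
    """Empirical drag coefficient K(2C, A/2) of trahan2006stokes."""
    return (1.0 + a * two_C**b
            + (f + g * two_C)**2 * half_A**(1.0 - k * math.exp(-p * two_C)))

print(K_trahan(0.0, 0.0))     # -> 1.0, the unconfined zero-thickness (Oberbeck) limit
print(K_trahan(0.66, 0.048))  # red-diamond parameter set discussed below
```

As expected from the form of the fit, $K$ grows with both the confinement $2C$ and the aspect ratio $A/2$.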
As explained below, we can confirm the expression in Eq. (\ref{Trahan}) is
valid only qualitatively for the present data for $h>D_{0}$: $K$ increases
with $A/2$ and the increase becomes significant as $2C$ increases.
In Fig. 4 (b), we draw four straight lines, which represent Eq.
(\ref{Trahan}) for four values of $2C$ to compare them with the present data:
$2C$ $=20/60$ $\simeq0.33$, 20/30.3 $\simeq0.66$, 20/20.2 $\simeq0.99$, and
20/10.2 $\simeq1.96$. The lowest and highest straight lines correspond to $2C=20/60$ and $20/10.2$, respectively. Note here that $K$ in the present case
can be obtained from the measured velocity $V$ using the relation $(32/3)\eta
VRK=\Delta\rho g\pi R^{2}D_{0}$.
In Fig. 4 (b), if Eq. (\ref{Trahan}) describes well the present data, all the
open squares above the highest straight line should be on this line, which is
not the case. In addition, if Eq. (\ref{Trahan}) is valid, the lowest open
diamonds of each color should be on the lowest straight line, which is also
not the case. In other words, Eq. (\ref{Trahan}) fails to describe the present
data in a quantitative manner.
As mentioned above, Trahan generally investigated cases of smaller $A$ than
in the present study. Accordingly, only the red diamonds at
$(A/2,2C)=(0.952/20,20/30.2)\simeq(0.048,0.66)$ and the red squares at
$(A/2,2C)=(0.952/20,20/20.2)\simeq(0.048,0.99)$ in Fig. 4 (a) are in the
overlapping region. In Fig. 4 (b), these two data sets appear as the red
squares and diamonds between the highest and second highest straight lines;
if Eq. (\ref{Trahan}) were valid, they would lie on the second and third
highest straight lines, respectively, which is not the case.
Even if we determine a prefactor for Eq. (\ref{Trahan}) such that the line
representing Eq. (\ref{Trahan}) with this factor passes through the red
diamonds, and replot the four straight lines with the same factor, qualitative
agreement is not obtained, as seen in Fig. 4 (c).
To physically understand the present data with $h>D_{0}$, we consider three
dissipations per unit time: $\eta(V/R)^{2}R^{3}$, $\eta(V/D)^{2}R^{2}D$, and
$\eta(V/R)^{2}R^{2}D_{0}$ (note that the first term corresponds to the
Oberbeck case). When the sum of the three dissipations, with numerical factors
$a$, $b$, and $c$, is balanced with the gravitational energy gain $\Delta\rho
gR^{2}D_{0}V$, we obtain a physically motivated form:%
\begin{equation}
K=a+b(2C)+c(A/2)\label{eqK2}%
\end{equation}
With this function, we fit the data marked with diamonds, which are the data
satisfying $h>D_{0}$ with both $2C=2R/D$ and $A/2=D_{0}/(2R)$
smaller than one (we expect Eq. (\ref{eqK2}) to be valid for small
$C$ and $A$ in light of previous studies such as \cite{brenner1962effect}
and \cite{trahan1999velocity}). The result of the fit is shown in Fig. 4 (c)
where $a=0.920\pm0.063,$ $b=3.84\pm0.19,$ and $c=5.31\pm0.20$. All the
diamonds are reasonably well represented by the fitting line, which supports
Eq. (\ref{eqK2}). (Trahan proposed the expression $K=1+0.8160(2C)^{1.0754}$ in
the case of $A=0$, which is close to Eq. (\ref{eqK2}) with $a=1$ and
$b=0.8160$. Compared with this, the value we obtained for $b$ (3.84) is
larger, which may be because we are dealing with the case of finite $A$.)
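The linear form of Eq. (\ref{eqK2}) can be fitted by ordinary least squares. The sketch below recovers the coefficients quoted above from synthetic $(2C, A/2, K)$ triples; it is an assumption-labeled illustration, not the actual experimental data set.

```python
import numpy as np

rng = np.random.default_rng(0)
two_C = rng.uniform(0.1, 1.0, 50)          # 2C = 2R/D, below one as in the text
half_A = rng.uniform(0.05, 0.9, 50)        # A/2 = D0/(2R), below one
a_true, b_true, c_true = 0.92, 3.84, 5.31  # central values quoted in the text
K = a_true + b_true * two_C + c_true * half_A   # noiseless synthetic K

# design matrix for K = a + b*(2C) + c*(A/2), Eq. (eqK2)
M = np.column_stack([np.ones_like(two_C), two_C, half_A])
(a, b, c), *_ = np.linalg.lstsq(M, K, rcond=None)
print(a, b, c)   # recovers 0.92, 3.84, 5.31
```

With real data one would additionally propagate the measurement errors to obtain the quoted uncertainties on $a$, $b$, and $c$.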
\subsection{Difference from the case of a fluid "disk" falling in a confined
viscous liquid}
The scaling regime governed by Eq. (\ref{eq1a}) proposed in the present study
is inherently different from the case in which the solid disk is replaced with
a fluid "disk." The latter case was studied in our previous work
\cite{yahashi2016}, in which the thickness of the lubricating film $h$ and the
disk thickness (that is, the shape of the fluid drop) are dynamically
determined, with the former governed by the law of Landau, Levich, and
Derjaguin (LLD) \cite{LandauLevich,Derjaguin1943}. As a result, $h$ depends on
the falling velocity $V$, which leads to an unusual nonlinear drag force $F$:
the drag force in the fluid case scales with $V^{1/3}$. In the present case,
$h$ is fixed by the (fixed) disk thickness, which leads to a usual linear drag
force $F$ scaling simply with $V$.
In microfluidic applications, fluid drops of various kinds have been utilized,
including armored bubbles or drops, gelified drops, and bubbles and drops with
rigid or mobile surfactants. Such cases could be closer to the bubble case
than to the solid case in that the lubricating film thickness is dynamically
determined. However, the detailed boundary condition could be more delicate
than in the two limiting cases of a simple bubble or a solid disk and is thus
worth studying in the future. Note that even if the thickness of the
lubricating film is determined dynamically, the dissipation in the film can be
irrelevant to the drag force, as shown in \cite{EriSoftMat2011}.
\section{Conclusion}
In the present study, we investigated the falling velocity of a solid disk in
a viscous liquid in a confined space and the drag friction acting on the disk.
We successfully identified a scaling regime, proposing scaling laws for the
velocity and the drag friction. Although the data collapsed well, the scaling
exponent $\alpha$ is slightly smaller than the predicted value $\alpha=1$;
resolving this issue may require a separate study. In the Discussion, we also
examined the data obtained for the weakly confined case and provided a
detailed comparison with a previous study.
The fundamental understanding of fluid flow at low Reynolds numbers provided
in the present study should remain valid for small objects in less viscous
liquids such as water. Accordingly, the present results are relevant to
various fundamental issues and applications, for example in microfluidics,
bioconvection, and active matter, in which the viscous friction acting on
small objects is highly important.
\subsubsection*{Conflicts of interest}
There are no conflicts of interest to declare.
\begin{acknowledgments}
This work was partly supported by JSPS KAKENHI Grant Number JP19H01859.
\end{acknowledgments}
\section{Introduction}
The geometric phases are usually analyzed in the framework of
first quantization by using the adiabatic
approximation~\cite{berry}-\cite{hasegawa2}, though
a non-adiabatic treatment has been considered in, for example,
\cite{aharonov} and the (non-adiabatic) correction to the
geometric phases has been analyzed in~\cite{berry2}.
The Hamiltonian, which contains a set of slowly varying external
parameters, has no obvious singularity by itself. But
a singularity reminiscent of the magnetic monopole is induced
at the level crossing point, which is controlled by the movement
of the external parameters, and the associated geometric
phases appear in the adiabatic approximation. A remarkable fact
is that the geometric phase factors thus introduced are rather
universal independently of detailed physical processes. The
topological properties are considered to be responsible for
this universal behavior. Also, interesting mathematical ideas
such as parallel transport and holonomy are often
used~\cite{simon} in the framework of adiabatic approximation.
The geometric phases revealed the importance of
hitherto unrecognized phase factors in the adiabatic
approximation. It may then be interesting to investigate how
those phases appear in the exact formulation.
The purpose of the present paper is to formulate the level
crossing problem by using the second quantization technique, which works both in the path integral and operator formulations.
We thus derive a convenient exact formula for geometric terms,
including the off-diagonal terms as well as the conventional
diagonal terms. In this formulation, the analysis of geometric
phases is reduced to the familiar diagonalization of the
Hamiltonian. Namely, all the information concerning the
extra phase factors is contained in the effective Hamiltonian.
In Ref.~\cite{berry2}, the fact that the geometric phases can be
interpreted as parts of the Hamiltonian has been noted, though
only the diagonal geometric terms have been analyzed in the
adiabatic picture. Our formulation is more general and does not
assume the adiabatic picture.
When one diagonalizes the
Hamiltonian in a very specific limit, one recovers the
conventional geometric phases defined in the adiabatic
approximation. One can thus analyze the
geometric phases in the present formulation without using the
mathematical notions such as parallel transport and holonomy.
Instead, a hidden local gauge symmetry plays an important role
in our formulation. If one diagonalizes the
Hamiltonian in the other extreme limit, namely, in the
infinitesimal neighborhood of level crossing for any fixed
finite time interval $T$, one can show that the geometric
phases become trivial, and thus no monopole-like singularity
appears. At the level crossing point, the conventional
energy eigenvalues become degenerate, but the degeneracy is
lifted if one diagonalizes the geometric terms.
Since the time interval involved in practical physical processes is always
finite, our analysis implies an important change in our understanding of the
qualitative aspects of geometric phases. For example, our analysis
implies that the topological interpretation~\cite{stone, berry}
of geometric phases, such as the topological proof of the
Longuet-Higgins phase-change rule~\cite{higgins}, fails in the
practical Born-Oppenheimer approximation, where a large but finite
ratio of two time scales is involved and $T$ is identified with
the period of the slower system.
In our analysis, it is important to distinguish the precise
adiabatic approximation, where the time interval $T$ measured in
units of the shorter time scale is taken to be
$T\rightarrow\infty$~\cite{simon}, from the practical
Born-Oppenheimer approximation where a large but finite ratio
of two time scales is involved and the variables with the slower
time scale are approximately treated as external c-number
parameters. Our analysis shows that the
integrability of the Schr\"{o}dinger equation for a regular
Hamiltonian and the appearance of the seemingly
``non-integrable phases'' are consistent: To be precise, the
integrability of the Schr\"{o}dinger equation becomes relevant
when the slowly varying external parameters are promoted to
the dynamical variables of a more fundamental regular
Hamiltonian.
We also clarify the difference between the geometric phases
associated with level crossing and the exact topological object
such as the Aharonov-Bohm phase. A crucial difference
between the quantum anomaly and the geometric phases associated
with level crossing is also noted.
The basic idea involved in the present formulation has been
reported elsewhere~\cite{fujikawa}, and we here present further
details of the analyses.
\section{Second quantized formulation and geometric phases}
We start with the generic (hermitian) Hamiltonian
\begin{equation}
\hat{H}=\hat{H}(\hat{\vec{p}},\hat{\vec{x}},X(t))
\end{equation}
for a single particle theory in a slowly varying background
variable $X(t)=(X_{1}(t),X_{2}(t),...)$.
The path integral for this theory for the time interval
$0\leq t\leq T$ in the second quantized
formulation is given by
\begin{eqnarray}
Z&=&\int{\cal D}\psi^{\star}{\cal D}\psi
\exp\{\frac{i}{\hbar}\int_{0}^{T}dtd^{3}x[
\psi^{\star}(t,\vec{x})i\hbar\frac{\partial}{\partial t}
\psi(t,\vec{x})\nonumber\\
&&-\psi^{\star}(t,\vec{x})
\hat{H}(\frac{\hbar}{i}\frac{\partial}{\partial\vec{x}},
\vec{x},X(t))\psi(t,\vec{x})] \}.
\end{eqnarray}
We then define a complete set of eigenfunctions
\begin{eqnarray}
&&\hat{H}(\frac{\hbar}{i}\frac{\partial}{\partial\vec{x}},
\vec{x},X(0))u_{n}(\vec{x},X(0))
=\lambda_{n}u_{n}(\vec{x},X(0)), \nonumber\\
&&\int d^{3}xu_{n}^{\star}(\vec{x},X(0))u_{m}(\vec{x},X(0))=
\delta_{nm},
\end{eqnarray}
and expand
\begin{eqnarray}
\psi(t,\vec{x})=\sum_{n}a_{n}(t)u_{n}(\vec{x},X(0)).
\end{eqnarray}
We then have
\begin{eqnarray}
{\cal D}\psi^{\star}{\cal D}\psi=\prod_{n}{\cal D}a_{n}^{\star}
{\cal D}a_{n}
\end{eqnarray}
and the path integral is written as
\begin{eqnarray}
Z&=&\int \prod_{n}{\cal D}a_{n}^{\star}
{\cal D}a_{n}
\exp\{\frac{i}{\hbar}\int_{0}^{T}dt[
\sum_{n}a_{n}^{\star}(t)i\hbar\frac{\partial}{\partial t}
a_{n}(t)\nonumber\\
&&-\sum_{n,m}a_{n}^{\star}(t)E_{nm}(X(t))a_{m}(t)] \}
\end{eqnarray}
where
\begin{eqnarray}
E_{nm}(X(t))=\int d^{3}x u_{n}^{\star}(\vec{x},X(0))
\hat{H}(\frac{\hbar}{i}\frac{\partial}{\partial\vec{x}},
\vec{x},X(t))u_{m}(\vec{x},X(0)).
\end{eqnarray}
We next perform a unitary transformation
\begin{eqnarray}
a_{n}=\sum_{m}U(X(t))_{nm}b_{m}
\end{eqnarray}
where
\begin{eqnarray}
U(X(t))_{nm}=\int d^{3}x u^{\star}_{n}(\vec{x},X(0))
v_{m}(\vec{x},X(t))
\end{eqnarray}
with the instantaneous eigenfunctions of the Hamiltonian
\begin{eqnarray}
&&\hat{H}(\frac{\hbar}{i}\frac{\partial}{\partial\vec{x}},
\vec{x},X(t))v_{n}(\vec{x},X(t))
={\cal E}_{n}(X(t))v_{n}(\vec{x},X(t)), \nonumber\\
&&\int d^{3}x v^{\star}_{n}(\vec{x},X(t))v_{m}(\vec{x},X(t))
=\delta_{n,m}.
\end{eqnarray}
We emphasize that $U(X(t))$ is a unit matrix both at $t=0$ and
$t=T$ if $X(T)=X(0)$, and thus
\begin{eqnarray}
\{a_{n}\}=\{b_{n}\}
\end{eqnarray}
both at $t=0$ and $t=T$. We take the time $T$ as a period of the
slowly varying variable $X(t)$.
We can thus re-write the path integral as
\begin{eqnarray}
&&Z=\int \prod_{n}{\cal D}b_{n}^{\star}{\cal D}b_{n}
\exp\{\frac{i}{\hbar}\int_{0}^{T}dt[
\sum_{n}b_{n}^{\star}(t)i\hbar\frac{\partial}{\partial t}
b_{n}(t)\nonumber\\
&&+\sum_{n,m}b_{n}^{\star}(t)
\langle n|i\hbar\frac{\partial}{\partial t}|m\rangle
b_{m}(t)-\sum_{n}b_{n}^{\star}(t){\cal E}_{n}(X(t))b_{n}(t)] \}
\end{eqnarray}
where the second term in the action stands for the term
commonly referred to as Berry's phase\cite{berry} and its
off-diagonal {\em generalization}.
The second term in (2.12) is defined by
\begin{eqnarray}
(U(X(t))^{\dagger}i\hbar\frac{\partial}{\partial t}U(X(t)))_{nm}
&=&\int d^{3}x v^{\star}_{n}(\vec{x},X(t))
i\hbar\frac{\partial}{\partial t}v_{m}(\vec{x},X(t))\nonumber\\
&\equiv& \langle n|i\hbar\frac{\partial}{\partial t}|m\rangle.
\end{eqnarray}
The path integral (2.12) is also derived directly by expanding $\psi(t,\vec{x})
=\sum_{n}b_{n}(t)v_{n}(\vec{x},X(t))$ in terms of the
instantaneous eigenfunctions in (2.10). As for the phase choice
of $v_{n}(\vec{x},X(t))$ in (2.10), it will be discussed in
detail later in connection with the hidden local gauge symmetry.
As we already mentioned, the
fact that Berry's phase can be understood as a part of the
Hamiltonian, i.e., as {\em dynamical}, has been noted in an
adiabatic picture~\cite{berry2}.
Our formula does not assume the adiabatic approximation, and
thus it gives a generalization.
In the operator formulation of the second quantized theory,
we thus obtain the effective Hamiltonian (depending on Bose or
Fermi statistics)
\begin{eqnarray}
\hat{H}_{eff}(t)&=&\sum_{n}\hat{b}_{n}^{\dagger}(t)
{\cal E}_{n}(X(t))\hat{b}_{n}(t)\nonumber\\
&&-\sum_{n,m}\hat{b}_{n}^{\dagger}(t)
\langle n|i\hbar\frac{\partial}{\partial t}|m\rangle
\hat{b}_{m}(t)
\end{eqnarray}
with
\begin{eqnarray}
[\hat{b}_{n}(t), \hat{b}^{\dagger}_{m}(t)]_{\mp}=\delta_{n,m}.
\end{eqnarray}
Note that these formulas (2.6), (2.12) and (2.14) are exact and,
to our knowledge, the formulas (2.12) and (2.14) have not been
analyzed before~\footnote{It is possible to write the
Schr\"{o}dinger equation in the first quantization in a form
equivalent to (2.14) by expanding the Schr\"{o}dinger
amplitude $\psi(t,\vec{x})=\sum_{n}b_{n}(t)v_{n}(\vec{x},X(t))$
in terms of the instantaneous
eigenfunctions in (2.10); one then deals with simultaneous
equations for the variables $\{b_{n}(t) \}$. However, the
second quantization
provides a natural universal formulation for both of the path
integral and the operator formalism.}. See, however, eq.(2)
in ref.\cite{anandan}. The off-diagonal
geometric terms in (2.14),
which are crucial in the analysis below, are missing in the
usual adiabatic approximation in the first quantization. The use
of the instantaneous eigenfunctions in (2.12) is a common
feature shared with the adiabatic approximation. In our picture,
all the information about geometric phases is included in
the effective Hamiltonian, and for this reason we use the
terminology ``geometric terms'' for those general terms
appearing in the Hamiltonian. The ``geometric phases'' are used
when these terms are interpreted as phase factors of a specific
state vector.
Since our formulation starts with the path integral
representation (2.2), the equivalence of the present exact
formulation to the more conventional representation is expected.
It may however be nice to check this equivalence explicitly.
We define the ``Schr\"{o}dinger'' picture by noting the
Heisenberg equation of motion
\begin{eqnarray}
&&i\hbar\frac{\partial}{\partial t}\hat{b}_{n}(t)=
[\hat{b}_{n}(t), \hat{H}_{eff}(t)]
\end{eqnarray}
and thus introducing a unitary operator $U(t)$ by
\begin{eqnarray}
&&i\hbar\frac{\partial}{\partial t}U(t)= - \hat{H}_{eff}(t)U(t)
\end{eqnarray}
with $U(0)=1$.
We then have
\begin{eqnarray}
\hat{b}_{n}(t)&=&U(t)\hat{b}_{n}(0)U(t)^{\dagger},\nonumber\\
\hat{{\cal H}}_{eff}(t)&\equiv&
U(t)^{\dagger}\hat{H}_{eff}(t)U(t)
\nonumber\\
&=&\sum_{n}\hat{b}_{n}^{\dagger}(0)
{\cal E}_{n}(X(t))\hat{b}_{n}(0)
-\sum_{n,m}\hat{b}_{n}^{\dagger}(0)
\langle n|i\hbar\frac{\partial}{\partial t}|m\rangle
\hat{b}_{m}(0).
\end{eqnarray}
We note that the state vectors in the Heisenberg and
Schr\"{o}dinger pictures are related by
\begin{eqnarray}
\Psi_{H}(0)=U(t)\Psi_{S}(t)
\end{eqnarray}
and thus
\begin{eqnarray}
i\hbar\frac{\partial}{\partial t}\Psi_{S}(t)
=U^{\dagger}(t)\hat{H}_{eff}(t)U(t)U^{\dagger}(t)\Psi_{H}(0)
=\hat{{\cal H}}_{eff}(t)\Psi_{S}(t).
\end{eqnarray}
The second quantization formula for the evolution operator then
gives rise to
\begin{eqnarray}
&&\langle n|T^{\star}\exp\{-\frac{i}{\hbar}\int_{0}^{T}
\hat{{\cal H}}_{eff}(t)
dt\}|n\rangle \nonumber\\
&=&\langle n|T^{\star}\exp\{-\frac{i}{\hbar}\int_{0}^{T}dt
[\sum_{n}\hat{b}_{n}^{\dagger}(0){\cal E}_{n}(X(t))\hat{b}_{n}(0)
-\sum_{n,m}\hat{b}_{n}^{\dagger}(0)
\langle n|i\hbar\frac{\partial}{\partial t}|m\rangle
\hat{b}_{m}(0)] \}|n\rangle
\nonumber\\
&=&\sum_{n_{1},n_{2}, ....,n_{N}}\nonumber\\
&&\langle n|\exp\{-\frac{i\epsilon}{\hbar}
[\sum_{n}\hat{b}_{n}^{\dagger}(0){\cal E}_{n}(X(T))\hat{b}_{n}(0)
-\sum_{n,m}\hat{b}_{n}^{\dagger}(0)
\langle n|i\hbar\frac{\partial}{\partial T}|m\rangle
\hat{b}_{m}(0)] \}|n_{1}\rangle
\nonumber\\
&\times&\langle n_{1}|\exp\{-\frac{i\epsilon}{\hbar}
[\sum_{n}\hat{b}_{n}^{\dagger}(0){\cal E}_{n}(X(t_{1}))
\hat{b}_{n}(0)
-\sum_{n,m}\hat{b}_{n}^{\dagger}(0)
\langle n|i\hbar\frac{\partial}{\partial t_{1}}|m\rangle
\hat{b}_{m}(0)] \}|n_{2}\rangle
\nonumber\\
&\times&
\langle n_{2}|\exp\{-\frac{i\epsilon}{\hbar}
[\sum_{n}\hat{b}_{n}^{\dagger}(0){\cal E}_{n}(X(t_{2}))
\hat{b}_{n}(0)
-\sum_{n,m}\hat{b}_{n}^{\dagger}(0)
\langle n|i\hbar\frac{\partial}{\partial t_{2}}|m\rangle
\hat{b}_{m}(0)] \}|n_{3}\rangle
\nonumber\\
&\times& ..... \nonumber\\
&\times&
\langle n_{N}|\exp\{-\frac{i\epsilon}{\hbar}
[\sum_{n}\hat{b}_{n}^{\dagger}(0){\cal E}_{n}(X(t_{N}))
\hat{b}_{n}(0)
-\sum_{n,m}\hat{b}_{n}^{\dagger}(0)
\langle n|i\hbar\frac{\partial}{\partial t_{N}}|m\rangle
\hat{b}_{m}(0)] \}|n\rangle \nonumber\\
\end{eqnarray}
where $T^{\star}$ stands for the time ordering operation and
$\epsilon=T/(N+1)$, and the state vectors in the second
quantization are defined by
\begin{eqnarray}
|n\rangle=\hat{b}_{n}^{\dagger}(0)|0\rangle.
\end{eqnarray}
This formula is re-written as
\begin{eqnarray}
&&\sum_{n_{1},n_{2}, ....,n_{N}}
[\exp\{-\frac{i\epsilon}{\hbar}[
{\cal E}_{n}(X(T))-\langle n|i\hbar\frac{\partial}{\partial T}|n_{1}\rangle]\}\delta_{n,n_{1}}
+\frac{i\epsilon}{\hbar}
\langle n|i\hbar\frac{\partial}{\partial T}|n_{1}\rangle
|_{n\neq n_{1}}]
\nonumber\\
&\times&[\exp\{-\frac{i\epsilon}{\hbar}[
{\cal E}_{n_{1}}(X(t_{1}))-\langle n_{1}|i\hbar
\frac{\partial}{\partial t_{1}}|n_{2}\rangle]\}\delta_{n_{1},n_{2}}
+\frac{i\epsilon}{\hbar}
\langle n_{1}|i\hbar\frac{\partial}{\partial t_{1}}|n_{2}\rangle
|_{n_{1}\neq n_{2}}]
\nonumber\\
&\times&
[\exp\{-\frac{i\epsilon}{\hbar}[
{\cal E}_{n_{2}}(X(t_{2}))-\langle n_{2}|i\hbar
\frac{\partial}{\partial t_{2}}|n_{3}\rangle]\}\delta_{n_{2},n_{3}}
+\frac{i\epsilon}{\hbar}
\langle n_{2}|i\hbar\frac{\partial}{\partial t_{2}}|n_{3}\rangle
|_{n_{2}\neq n_{3}}]
\nonumber\\
&\times& ..... \nonumber\\
&\times&
[\exp\{-\frac{i\epsilon}{\hbar}[
{\cal E}_{n}(X(t_{N}))-\langle n_{N}|i\hbar\frac{\partial}
{\partial
t_{N}}|n\rangle] \}\delta_{n_{N},n}+\frac{i\epsilon}{\hbar}
\langle n_{N}|i\hbar\frac{\partial}{\partial t_{N}}|n\rangle
|_{n_{N}\neq n}]
\end{eqnarray}
where the state vectors in this last expression stand for the
first quantized states defined by
\begin{eqnarray}
\hat{H}(\hat{\vec{p}}, \hat{\vec{x}},
X(t))|n(t)\rangle
={\cal E}_{n}(X(t))|n(t)\rangle,
\end{eqnarray}
and those state vectors also appear in the definition of
geometric terms.
If one retains only the diagonal elements in this formula
(2.23), one
recovers the conventional adiabatic formula~\cite{kuratsuji}
\begin{eqnarray}
\exp\{-\frac{i}{\hbar}\int_{0}^{T}dt
[{\cal E}_{n}(X(t))
-\langle n|i\hbar\frac{\partial}{\partial t}|n\rangle]\}.
\end{eqnarray}
On the other hand,
if one retains the off-diagonal elements also, one obtains the
exact evolution operator. We first observe, for example,
\begin{eqnarray}
&&\exp\{-\frac{i\epsilon}{\hbar}[
{\cal E}_{n_{1}}(X(t_{1}))-\langle n_{1}|i\hbar
\frac{\partial}{\partial t_{1}}|n_{2}\rangle]\}
\delta_{n_{1},n_{2}}
+\frac{i\epsilon}{\hbar}
\langle n_{1}|i\hbar\frac{\partial}{\partial t_{1}}|n_{2}\rangle
|_{n_{1}\neq n_{2}}
\nonumber\\
&=&\exp\{-\frac{i\epsilon}{\hbar}{\cal E}_{n_{1}}(X(t_{1}))\}
\langle n_{1}(t_{1})|n_{2}(t_{1}-\epsilon)\rangle
+ O(\epsilon^{2})\nonumber\\
&=&\langle n_{1}(t_{1})|\exp\{-\frac{i\epsilon}{\hbar}
\hat{H}(\hat{\vec{p}}, \hat{\vec{x}},
X(t_{1}))\}
|n_{2}(t_{1}-\epsilon)\rangle
+ O(\epsilon^{2}).
\end{eqnarray}
By letting $\epsilon\rightarrow 0$, we thus obtain
\begin{eqnarray}
&&\langle n|T^{\star}\exp\{-\frac{i}{\hbar}\int_{0}^{T}
\hat{{\cal H}}_{eff}(t)
dt\}|n\rangle\nonumber\\
&&=
\langle n(T)|T^{\star}\exp\{-\frac{i}{\hbar}\int_{0}^{T}
\hat{H}(\hat{\vec{p}}, \hat{\vec{x}},
X(t))dt \}|n(0)\rangle \,.
\end{eqnarray}
Both sides of this formula are exact, but the difference is
that the geometric terms, both diagonal and off-diagonal,
are explicit in the second quantized formulation on the
left-hand side.
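As a numerical illustration of the equivalence (2.27), one can compare the two sides for the two-level model analyzed in Section 3: a spin coupled to a field rotating at fixed $\theta$. The sketch below, with $\hbar=1$, purely illustrative parameters, and our own variable names, evolves with the effective Hamiltonian in the instantaneous basis, including the geometric terms of (3.7), and checks it against the directly computed time-ordered exponential.

```python
import numpy as np

hbar = 1.0
g, r, theta, T = 1.0, 1.0, np.pi / 3, 10.0    # illustrative parameters
N = 4000                                      # time slices for the direct product
dt = T / N
phidot = 2.0 * np.pi / T                      # phi winds once during 0 <= t <= T

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def expmh(M, tau):
    """exp(-1j*tau*M/hbar) for a Hermitian matrix M, via eigendecomposition."""
    w, Q = np.linalg.eigh(M)
    return Q @ np.diag(np.exp(-1j * tau * w / hbar)) @ Q.conj().T

def H(t):
    """g sigma.y(t) on a cone of fixed opening angle theta."""
    phi = phidot * t
    return g * r * (np.sin(theta) * np.cos(phi) * sx
                    + np.sin(theta) * np.sin(phi) * sy
                    + np.cos(theta) * sz)

def v_minus(t):
    """Instantaneous lower eigenvector, as in (3.4)."""
    phi = phidot * t
    return np.array([np.sin(theta / 2) * np.exp(-1j * phi),
                     -np.cos(theta / 2)])

# geometric-term matrix of (3.7) in the (+,-) basis; constant since theta is fixed
A = phidot * np.array([[(1 + np.cos(theta)) / 2, np.sin(theta) / 2],
                       [np.sin(theta) / 2, (1 - np.cos(theta)) / 2]], dtype=complex)
H_eff = np.diag([g * r, -g * r]).astype(complex) - hbar * A

U_eff = expmh(H_eff, T)                       # left-hand side of (2.27)

U_dir = np.eye(2, dtype=complex)              # right-hand side: time-ordered product
for k in range(N):
    U_dir = expmh(H((k + 0.5) * dt), dt) @ U_dir

lhs = U_eff[1, 1]                             # <-| ... |->
rhs = v_minus(T).conj() @ U_dir @ v_minus(0)
print(abs(lhs - rhs))                         # small: Trotter error only
```

The agreement is exact up to the discretization error of the time-ordered product, since for fixed $\theta$ and constant $\dot\varphi$ the effective Hamiltonian is time independent and can be exponentiated exactly.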
Here we would like to comment on the possible
advantages of using the second quantization technique. As we
have already mentioned, all the results of the second quantization are
in principle reproduced by the first quantization in the present
single-particle problem. This fact is exemplified by the relation
(2.27). The possible advantages are thus mainly technical and
conceptual ones. First of all, the general geometric terms are
explicitly and neatly formulated by the second quantization both
for the path integral (2.12) and the operator formalism (2.27).
Also, our emphasis is on the diagonalization of the Hamiltonian
rather than on the subtle notion of phases. This emphasis on the
Hamiltonian is also manifest in the second quantization on the
left-hand side of (2.27). Another technical advantage in the
present formulation is related to the phase freedom of the
basis set in (2.10). The path integral formula (2.12) is based
on the expansion
\begin{eqnarray}
\psi(t,\vec{x})=\sum_{n}b_{n}(t)v_{n}(\vec{x},X(t)),
\end{eqnarray}
and the starting path integral (2.2) depends only on the field
variable $\psi(t,\vec{x})$, not on $\{ b_{n}(t)\}$
and $\{v_{n}(\vec{x},X(t))\}$ separately. This fact shows that
our formulation contains a hidden local gauge symmetry
\begin{eqnarray}
v_{n}(\vec{x},X(t))\rightarrow v^{\prime}_{n}(\vec{x},X(t))=
e^{i\alpha_{n}(t)}v_{n}(\vec{x},X(t))
, \ \ \ \ b_{n}(t) \rightarrow b^{\prime}_{n}(t)=
e^{-i\alpha_{n}(t)}b_{n}(t)
\end{eqnarray}
where the gauge parameter $\alpha_{n}(t)$ is a general
function of $t$. One can confirm that both the path integral measure and the action in (2.12) are invariant under this gauge transformation. By using this gauge freedom, one can choose the
phase convention of the basis set $\{v_{n}(\vec{x},X(t))\}$ such
that the analysis of geometric phases becomes most transparent;
in (3.4) below, we choose the basis set $\{v_{n}(y(t))\}$ such that the artificial singularity introduced by the use of
polar coordinates is minimized. The meaning of this gauge
transformation is explained further in connection with
equations (3.12) and (3.21).
The expression on the right-hand side of (2.27) stands for the first
quantized formula which has an exact path integral
representation given by
\begin{eqnarray}
&&\langle n(T)|T^{\star}\exp\{-\frac{i}{\hbar}\int_{0}^{T}
\hat{H}(\hat{\vec{p}}, \hat{\vec{x}},
X(t))dt \}|n(0)\rangle
\nonumber\\
&=& \iint d^{3}x(T) d^{3}x(0) u_{n}^{\star}(\vec{x}(T))
u_{n}(\vec{x}(0))\nonumber\\
&&\times
\langle \vec{x}(T)|T^{\star}\exp\{-\frac{i}{\hbar}\int_{0}^{T}
\hat{H}(\hat{\vec{p}}, \hat{\vec{x}},
X(t))dt \}|\vec{x}(0)\rangle
\end{eqnarray}
and
\begin{eqnarray}
&&\langle \vec{x}(T)|T^{\star}\exp\{-\frac{i}{\hbar}\int_{0}^{T}
\hat{H}(\hat{\vec{p}}, \hat{\vec{x}},
X(t))dt \}|\vec{x}(0)\rangle
\nonumber\\
&=&\int_{\vec{x}(0)}^{\vec{x}(T)}{\cal D}\vec{x} {\cal D}\vec{p}
\exp\{\frac{i}{\hbar}\int_{0}^{T}dt [\vec{p}\cdot\dot{\vec{x}}-
H(\vec{p},\vec{x},X(t))] \}
\nonumber\\
&=&\int_{\vec{x}(0)}^{\vec{x}(T)}{\cal D}\vec{x} {\cal D}\vec{p}
\exp\{\frac{i}{\hbar}\int_{0}^{T}dt [\vec{p}\cdot\dot{\vec{x}}-
H(\vec{p},\vec{x},0)-\sum_{l}H_{l}(\vec{p},\vec{x})X_{l}(t)] \}
\end{eqnarray}
where the last expression is valid for sufficiently small
$X(t)=(X_{1}(t), X_{2}(t), ... )$. In the
analysis of level crossing, it is convenient to assume that the
specific level crossing we are interested in takes place at the
origin of $X(t)$ with
\begin{eqnarray}
H_{l}(\vec{p},\vec{x})=\frac{\partial H(\vec{p},\vec{x},X(t))}
{\partial X_{l}(t)}|_{X(t)=0}.
\end{eqnarray}
We note that the path integral (2.31) shows no obvious
singular behavior at the level crossing point $X(t)=0$.
\section{Level crossing and geometric phases}
We are mainly interested in the topological properties of
geometric phases. To simplify the analysis, we now assume that
the level crossing takes place only between
the lowest two levels, and we consider the familiar idealized
model with only the lowest two levels. This simplification is
expected to be valid to analyze the topological properties
in the infinitesimal neighborhood of level crossing.
The effective Hamiltonian to be analyzed
in the path integral (2.6) is then defined by the $2\times 2$
matrix $ h(X(t))=\left(E_{nm}(X(t))\right)$.
If one assumes that the level crossing takes place at the
origin of the parameter space $X(t)=0$, one needs to analyze
the matrix
\begin{eqnarray}
h(X(t)) = \left(E_{nm}(0)\right) +
\left(\frac{\partial}{\partial X_{k}}E_{nm}(X)|_{X=0}\right)
X_{k}(t)
\end{eqnarray}
for sufficiently small $(X_{1}(t),X_{2}(t), ... )$. By a
time-independent unitary transformation, which does not induce
an extra geometric term, the first term is diagonalized.
In the present approximation, essentially a four-dimensional
subspace of the parameter space is relevant, and after a
suitable redefinition of the parameters by taking linear
combinations of $X_{k}(t)$, we write the matrix as~\cite{berry}
\begin{eqnarray}
h(X(t))
&=&\left(\begin{array}{cc}
E(0)+y_{0}(t)&0\\
0&E(0)+y_{0}(t)
\end{array}\right)
+g \sigma^{l}y_{l}(t)\nonumber\\
\end{eqnarray}
where $\sigma^{l}$ stands for the Pauli matrices, and $g$ is a
suitable (positive) coupling constant. This parametrization in
terms of the variables $y_{l}$ is valid beyond the linear
approximation, but the two-level approximation is expected to
be valid only near the level crossing point.
The above matrix is diagonalized in the standard way as
\begin{eqnarray}
h(X(t))v_{\pm}(y)=(E(0)+y_{0}(t) \pm g r)v_{\pm}(y)
\end{eqnarray}
where $r=\sqrt{y^{2}_{1}+y^{2}_{2}+y^{2}_{3}}$ and
\begin{eqnarray}
v_{+}(y)=\left(\begin{array}{c}
\cos\frac{\theta}{2}e^{-i\varphi}\\
\sin\frac{\theta}{2}
\end{array}\right), \ \ \ \ \
v_{-}(y)=\left(\begin{array}{c}
\sin\frac{\theta}{2}e^{-i\varphi}\\
-\cos\frac{\theta}{2}
\end{array}\right)
\end{eqnarray}
by using the polar coordinates,
$y_{1}=r\sin\theta\cos\varphi,\ y_{2}=r\sin\theta\sin\varphi,
\ y_{3}=r\cos\theta$. Note that
\begin{eqnarray}
v_{\pm}(y(0))=v_{\pm}(y(T))
\end{eqnarray}
if $y(0)=y(T)$ except for $(y_{1}, y_{2}, y_{3}) = (0,0,0)$,
and $\theta=0\ {\rm or}\ \pi$; when one analyzes the behavior
near those singular points, due care needs to be exercised.
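As a quick numerical check of (3.3) and (3.4), one can verify that $v_{\pm}(y)$ are eigenvectors of $g\sigma^{l}y_{l}$ with eigenvalues $\pm gr$. The test point below is arbitrary; the variable names are ours.

```python
import numpy as np

g, r, theta, phi = 1.0, 1.0, 0.7, 1.3   # arbitrary test point with r > 0
y = r * np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
h = g * sum(yk * sk for yk, sk in zip(y, sigma))   # g sigma^l y_l of (3.2), y_0 = 0

v_plus = np.array([np.cos(theta / 2) * np.exp(-1j * phi), np.sin(theta / 2)])
v_minus = np.array([np.sin(theta / 2) * np.exp(-1j * phi), -np.cos(theta / 2)])

print(np.allclose(h @ v_plus, g * r * v_plus),
      np.allclose(h @ v_minus, -g * r * v_minus))   # True True
```

The overall term $E(0)+y_{0}(t)$ in (3.2) is omitted since it only shifts both eigenvalues equally.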
If one defines
\begin{eqnarray}
v^{\dagger}_{m}(y)i\frac{\partial}{\partial t}v_{n}(y)
=A_{mn}^{k}(y)\dot{y}_{k}
\end{eqnarray}
where $m$ and $n$ run over $\pm$,
we have
\begin{eqnarray}
A_{++}^{k}(y)\dot{y}_{k}
&=&\frac{(1+\cos\theta)}{2}\dot{\varphi}
\nonumber\\
A_{+-}^{k}(y)\dot{y}_{k}
&=&\frac{\sin\theta}{2}\dot{\varphi}+\frac{i}{2}\dot{\theta}
=(A_{-+}^{k}(y)\dot{y}_{k})^{\star}
,\nonumber\\
A_{--}^{k}(y)\dot{y}_{k}
&=&\frac{1-\cos\theta}{2}\dot{\varphi}.
\end{eqnarray}
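The connection matrix (3.7) can be checked against a direct finite-difference evaluation of $v^{\dagger}_{m}i\partial_{t}v_{n}$ along a path on which both $\theta$ and $\varphi$ vary. This is a minimal sketch with our own variable names and an arbitrary test point.

```python
import numpy as np

def v(theta, phi):
    """Matrix whose columns are the instantaneous eigenvectors v_+ and v_- of (3.4)."""
    return np.array([[np.cos(theta / 2) * np.exp(-1j * phi),
                      np.sin(theta / 2) * np.exp(-1j * phi)],
                     [np.sin(theta / 2), -np.cos(theta / 2)]])

# a path on which both angles vary, evaluated at an arbitrary time t
thetadot, phidot, t, eps = 0.4, 1.1, 0.9, 1e-6
th, ph = 0.6 + thetadot * t, 0.2 + phidot * t

# central-difference time derivative of the eigenvector matrix
V1 = v(th - thetadot * eps, ph - phidot * eps)
V2 = v(th + thetadot * eps, ph + phidot * eps)
A_num = v(th, ph).conj().T @ (1j * (V2 - V1) / (2 * eps))

# analytic expression (3.7)
A_ana = np.array([[(1 + np.cos(th)) / 2 * phidot,
                   np.sin(th) / 2 * phidot + 0.5j * thetadot],
                  [np.sin(th) / 2 * phidot - 0.5j * thetadot,
                   (1 - np.cos(th)) / 2 * phidot]])
print(np.allclose(A_num, A_ana, atol=1e-6))   # True
```

The element $(m,n)$ of `A_num` is $v^{\dagger}_{m}i\partial_{t}v_{n}=A^{k}_{mn}\dot{y}_{k}$, so the check covers the diagonal terms as well as the off-diagonal pair $A_{+-}=(A_{-+})^{\star}$.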
The effective Hamiltonian (2.14) is then given by
\begin{eqnarray}
\hat{H}_{eff}(t)&=&(E(0)+y_{0}(t) + g r(t))\hat{b}^{\dagger}_{+}
\hat{b}_{+}
\nonumber\\
&+&(E(0)+y_{0}(t) - g r(t))\hat{b}^{\dagger}_{-}\hat{b}_{-}
-\hbar \sum_{m,n}\hat{b}^{\dagger}_{m}A^{k}_{mn}(y)\dot{y}_{k}
\hat{b}_{n}.
\end{eqnarray}
In the conventional adiabatic approximation, one approximates
the effective Hamiltonian (3.8) by
\begin{eqnarray}
\hat{H}_{eff}(t)&\simeq& (E(0)+y_{0}(t) + g r(t))
\hat{b}^{\dagger}_{+}\hat{b}_{+}\nonumber\\
&&+(E(0)+y_{0}(t) - g r(t))\hat{b}^{\dagger}_{-}\hat{b}_{-}
\nonumber\\
&&-\hbar [\hat{b}^{\dagger}_{+}A^{k}_{++}(y)\dot{y}_{k}
\hat{b}_{+}
+\hat{b}^{\dagger}_{-}A^{k}_{--}(y)\dot{y}_{k}\hat{b}_{-}]
\end{eqnarray}
which is valid for
\begin{eqnarray}
Tg r(t)\gg \hbar\pi,\nonumber
\end{eqnarray}
\end{eqnarray}
where $\hbar\pi$ stands for the magnitude of the geometric term
times $T$.
The Hamiltonian for $b_{-}$, for example, is then eliminated by
a ``gauge transformation''
\begin{eqnarray}
b_{-}(t)=
\exp\{-(i/\hbar)\int_{0}^{t}dt[
E(0)+y_{0}(t) - g r(t)
-\hbar A^{k}_{--}(y)\dot{y}_{k}] \} \tilde{b}_{-}(t)
\end{eqnarray}
in the path integral (2.12) with the above approximation (3.9),
and the amplitude
$\langle 0|\hat{\psi}(T)\hat{b}^{\dagger}_{-}(0)|0\rangle$,
which corresponds to the probability amplitude in the first
quantization, is given by (up to an eigenfunction
$\phi_{E}(\vec{x})$ of
$\hat{H}(\frac{\hbar}{i}\frac{\partial}{\partial\vec{x}},
\vec{x}, 0)$ in (2.3))
\begin{eqnarray}
\psi_{-}(T)&\equiv&\langle 0|\hat{\psi}(T)\hat{b}^{\dagger}_{-}(0)|0\rangle\nonumber\\
&=&\exp\{-\frac{i}{\hbar}\int_{0}^{T}dt[
E(0)+y_{0}(t) - g r(t)
-\hbar A^{k}_{--}(y)\dot{y}_{k}] \}v_{-}(y(T))
\nonumber\\
&&\times
\langle 0|\hat{\tilde{b}}_{-}(T)\hat{\tilde{b}}{}^{\dagger}_{-}(0)|0\rangle\nonumber\\
&=&\exp\{-\frac{i}{\hbar}\int_{0}^{T}dt[
E(0)+y_{0}(t) - g r(t)
-\hbar A^{k}_{--}(y)\dot{y}_{k}] \}v_{-}(y(T))
\end{eqnarray}
with $\langle 0|\hat{\tilde{b}}_{-}(T)
\hat{\tilde{b}}{}^{\dagger}_{-}(0)
|0\rangle=\langle 0|\hat{\tilde{b}}_{-}(0)
\hat{\tilde{b}}{}^{\dagger}_{-}(0)
|0\rangle=1$.
For a $2\pi$ rotation in $\varphi$ with fixed $\theta$, for
example, the geometric term gives rise to the well-known
factor~\footnote{If one performs the gauge transformation (2.29)
for the bases (3.4) in the formula (3.11), one can confirm
\begin{eqnarray}
\psi_{-}(T)\rightarrow e^{i\alpha_{-}(0)}\psi_{-}(T) \nonumber
\end{eqnarray}
independently of the value of $T$, and thus the amplitude
$\psi_{-}(T)$ relative to $\psi_{-}(0)$,
which is the quantity of physical significance,
is independent of the gauge transformation. }
\begin{eqnarray}
\psi_{-}(T)=\exp\{i\pi(1-\cos\theta) \}
\exp\{-\frac{i}{\hbar}\int_{C(0\rightarrow T)}dt
[E(0)+y_{0}(t) - g r(t)] \}v_{-}(y(T))
\end{eqnarray}
by using (3.7)~\cite{berry}, and the path $C(0\rightarrow T)$
specifies the integration along the above specific closed path.
Note that $v_{-}(y(T))=v_{-}(y(0))$ in the present choice of
the basis set, and thus (3.12) can also be written as
\begin{eqnarray}
\psi_{-}(T)=\exp\{i\pi(1-\cos\theta) \}
\exp\{-\frac{i}{\hbar}\int_{C(0\rightarrow T)}dt
[E(0)+y_{0}(t) - g r(t)] \}\psi_{-}(0)\nonumber
\end{eqnarray}
The correction to the formula (3.12)
arising from the finite $1/T$ may be analyzed by an iterative
procedure~\cite{berry2}, for example. One can thus analyze the
geometric phase in the present formulation without using the
mathematical notions such as parallel transport and holonomy.
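For the $2\pi$ rotation at fixed $\theta$, the geometric factor $\exp\{i\pi(1-\cos\theta)\}$ in (3.12) can also be reproduced numerically with the standard gauge-invariant discrete formula for the holonomy, $\gamma=-{\rm Im}\sum_{k}\log\langle v_{-}(t_{k})|v_{-}(t_{k+1})\rangle$. The sketch below uses $\theta=\pi/2$, for which $\gamma=\pi$; the discretization is an illustrative choice.

```python
import numpy as np

theta, N = np.pi / 2, 2000
phis = np.linspace(0.0, 2.0 * np.pi, N + 1)       # one 2*pi rotation at fixed theta
vs = np.array([[np.sin(theta / 2) * np.exp(-1j * p), -np.cos(theta / 2)]
               for p in phis])                    # v_-(y(t)) of (3.4)

# gauge-invariant discrete holonomy from overlaps of neighboring eigenvectors
overlaps = np.einsum('ij,ij->i', vs[:-1].conj(), vs[1:])
gamma = -np.sum(np.angle(overlaps))
print(gamma, np.pi * (1.0 - np.cos(theta)))       # both close to pi
```

The overlap formula isolates the geometric part of the phase: the dynamical contribution $\int dt\,[E(0)+y_{0}-gr]/\hbar$ never enters, and the result is unchanged under the hidden local gauge transformation (2.29) for a closed loop.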
Another representation, which is useful to analyze the behavior
near the level crossing point, is obtained by a further unitary
transformation
\begin{eqnarray}
\hat{b}_{m}=\sum_{n}U(\theta(t))_{mn}\hat{c}_{n}
\end{eqnarray}
where $m,n$ run over $\pm$ with
\begin{eqnarray}
U(\theta(t))=\left(\begin{array}{cc}
\cos\frac{\theta}{2}&-\sin\frac{\theta}{2}\\
\sin\frac{\theta}{2}&\cos\frac{\theta}{2}
\end{array}\right),
\end{eqnarray}
and the above effective Hamiltonian (3.8) is written as
\begin{eqnarray}
\hat{H}_{eff}(t)&&= (E(0)+y_{0}(t)+gr\cos\theta)
\hat{c}^{\dagger}_{+}\hat{c}_{+}\nonumber\\
&&+(E(0)+y_{0}(t)-gr\cos\theta)\hat{c}^{\dagger}_{-}
\hat{c}_{-}\nonumber\\
&&-gr\sin\theta \hat{c}^{\dagger}_{+}\hat{c}_{-}
-gr\sin\theta \hat{c}^{\dagger}_{-}\hat{c}_{+}
-\hbar\dot{\varphi} \hat{c}^{\dagger}_{+}\hat{c}_{+}.
\end{eqnarray}
In the above unitary transformation, an extra geometric
term $-U(\theta)^{\dagger}i\hbar\partial_{t}U(\theta)$ is
induced by the kinetic term of the path integral
representation (2.12). One can
confirm that this extra term precisely cancels the term
containing $\dot{\theta}$ in $\hat{b}^{\dagger}_{m}
A^{k}_{mn}(y)\dot{y}_{k}\hat{b}_{n}$ as in (3.7).
We thus diagonalize the geometric terms in this representation.
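The cancellation just described can be checked numerically. Since the matrix $U(\theta)$ of (3.14) equals $e^{-i\theta\sigma_{y}/2}$, the induced term $-U(\theta)^{\dagger}i\partial_{t}U(\theta)$ reduces to $-(\dot{\theta}/2)\sigma_{y}$ (with $\hbar=1$), a purely off-diagonal matrix; the finite-difference sketch below verifies this structure for arbitrary values of $\theta$ and $\dot{\theta}$.

```python
import math

def U(theta):
    # U(theta) of (3.14); real orthogonal, equal to exp(-i theta sigma_y / 2).
    c, s = math.cos(theta/2), math.sin(theta/2)
    return ((c, -s), (s, c))

def induced_term(theta, theta_dot, h=1e-6):
    """-i U(theta)^dagger dU/dt along a path with instantaneous speed
    theta_dot, computed via a central difference in t (hbar = 1)."""
    Up = U(theta + theta_dot*h)
    Um = U(theta - theta_dot*h)
    dU = [[(Up[i][j] - Um[i][j]) / (2*h) for j in range(2)] for i in range(2)]
    Ut = U(theta)  # dagger = transpose for a real orthogonal matrix
    return [[-1j * sum(Ut[k][i] * dU[k][j] for k in range(2))
             for j in range(2)] for i in range(2)]

# Expected: -(theta_dot/2) * sigma_y, i.e., vanishing diagonal entries and
# off-diagonal entries +i theta_dot/2 and -i theta_dot/2.
```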
We also note that
$U(\theta(T))=U(\theta(0))$ if $X(T)=X(0)$ except for the
origin, and thus the initial and final states receive the same
transformation in scattering amplitudes. The above
diagonalization of the geometric terms corresponds to the use
of eigenfunctions
\begin{eqnarray}
w_{m}=\sum_{n}U(\theta(t))^{\dagger}_{mn}v_{n}
\end{eqnarray}
or explicitly
\begin{eqnarray}
w_{+}=\left(\begin{array}{c}
e^{-i\varphi}\\
0
\end{array}\right), \ \ \ \ \
w_{-}=\left(\begin{array}{c}
0\\
1
\end{array}\right)
\end{eqnarray}
in the definition of geometric terms.
In the infinitesimal neighborhood of the level crossing point,
namely, for points sufficiently close to the origin of the parameter
space $(y_{1}(t), y_{2}(t), y_{3}(t) )$ but
$(y_{1}(t), y_{2}(t), y_{3}(t))\neq (0,0,0)$, one may
approximate (3.15) by
\begin{eqnarray}
\hat{H}_{eff}(t)&\simeq& (E(0)+y_{0}(t)+gr\cos\theta)
\hat{c}^{\dagger}_{+}\hat{c}_{+}\nonumber\\
&+&(E(0)+y_{0}(t)-gr\cos\theta)\hat{c}^{\dagger}_{-}\hat{c}_{-}
-\hbar\dot{\varphi} \hat{c}^{\dagger}_{+}\hat{c}_{+}.
\end{eqnarray}
To be precise, for any given {\em fixed} time interval $T$,
\begin{eqnarray}
T\hbar\dot{\varphi}\sim 2\pi\hbar
\end{eqnarray}
which is invariant under the uniform scale transformation
$y_{k}(t)\rightarrow
\epsilon y_{k}(t)$. On the other hand,
one has $T gr\sin\theta \rightarrow T\epsilon gr\sin\theta$
by the above scaling, and thus one can choose
\begin{eqnarray}
T\epsilon gr\ll \hbar.
\nonumber
\end{eqnarray}
The terms $\pm gr\cos\theta$ in (3.18)
may also be ignored in the present approximation.
In this new basis (3.18), the geometric phase appears only for
the mode $\hat{c}_{+}$ which gives rise to a phase factor
\begin{eqnarray}
\exp\{i\int_{C} \dot{\varphi}dt \}=\exp\{2i\pi \}=1,
\end{eqnarray}
and thus has no physical effect. In the infinitesimal neighborhood
of level crossing, the states spanned by
$(\hat{b}_{+},\hat{b}_{-})$ are transformed to a linear
combination of the
states spanned by $(\hat{c}_{+},\hat{c}_{-})$, which give no
non-trivial
geometric phases. The geometric terms are topological
in the sense that they are invariant under the uniform scaling
of $y_{k}(t)$, but their physical implications in conjunction
with
other terms in the effective Hamiltonian are not. For example,
starting with the state
$\hat{b}^{\dagger}_{-}(0)|0\rangle$, one may first make $r$ small
with fixed $\theta$ and $\varphi$,
then make a $2\pi$ rotation in $\varphi$ in the bases
$\hat{c}^{\dagger}_{\pm}|0\rangle$, and then come back to
the original $r$ with fixed $\theta$ and $\varphi$ for a given
fixed $T$, as in Fig.~1; in this cycle, one does not pick up any
non-trivial geometric phase even though one covers the solid
angle $2\pi(1-\cos\theta)$.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=10.9cm]{1}
\end{center}
\vspace{-9mm}
\caption{\small (Color online) The path 1 gives the conventional
geometric phase as in (3.12) for a fixed finite $T$,
whereas the path 2 gives a trivial phase for a fixed finite $T$.
Note that both of the paths cover the same solid angle $2\pi(1-\cos\theta)$.}
\end{figure}
\vspace{1mm}
To be precise, the physical quantity in
(3.12) is replaced by
\begin{eqnarray}
\psi_{-}(T)&=&\exp\{-\frac{i}{\hbar}\int_{C_{2}(0\rightarrow T)}
dt[E(0)+y_{0}(t) - g r(t)
-\hbar A^{k}_{--}(y)\dot{y}_{k}] \}v_{-}(y(T))\nonumber\\
&=&\exp\{-\frac{i}{\hbar}\int_{C_{2}(0\rightarrow T)}
dt[E(0)+y_{0}(t) - g r(t)] \}v_{-}(y(T))
\end{eqnarray}
by deforming the path 1 to the path 2 in the parameter space in
Fig. 1. The path $C_{2}(0\rightarrow T)$ specifies the path 2 in
Fig.1, and $v_{-}(y(T))=v_{-}(y(0))=\psi_{-}(0)$ in the present specific
choice of the basis set. The first expression in the above
equation explicitly shows the invariance of $\psi_{-}(T)$ under
the gauge transformation (2.29) up to a trivial
overall constant phase~\footnote{The gauge transformation (2.29)
for the present case (3.4) is written as
\begin{eqnarray}
U(\alpha(t))=\left(\begin{array}{cc}
e^{i\alpha_{+}(t)}&0\\
0&e^{i\alpha_{-}(t)}
\end{array}\right).\nonumber
\end{eqnarray}
It is convenient to keep the auxiliary variables $\{c_{m} \}$
and $\{w_{m} \}$ in the standard form as in (3.15) and (3.17)
even after the gauge transformation.
This is achieved by replacing $U(\theta(t))$ in (3.14) by
$U(\alpha(t))U(\theta(t))$. The effect of the gauge
transformation survives only in the external states
$\hat{b}_{\pm}(0)|0\rangle$ in (3.11) resulting in the
appearance of a trivial overall constant phase.
}. The transformation from $\hat{b}_{\pm}$ to
$\hat{c}_{\pm}$ is highly non-perturbative, since a complete
re-arrangement of two levels is involved.
It should be noted that one cannot simultaneously diagonalize
the conventional energy eigenvalues and the induced geometric
terms in (3.8) which is exact in the present two-level model
(3.2). The topological considerations~\cite{stone, berry} are
thus inevitably approximate. In this respect, it may be
instructive to consider a model without level crossing which is
defined by setting
\begin{eqnarray}
y_{3}=\Delta E/2g
\end{eqnarray}
in (3.8), where $\Delta E$ stands for the minimum of the level
spacing. The geometric terms then lose invariance under the
uniform scaling of $y_{1}$ and $y_{2}$.
In the limit
\begin{eqnarray}
\sqrt{y^{2}_{1}+y^{2}_{2}}\gg\Delta E/2g,
\end{eqnarray}
$\theta\rightarrow \pi/2$ and the geometric terms in (3.8)
exhibit approximately topological behavior for the reduced
variables $(y_{1},y_{2})$: One can thus perform an approximate
topological analysis of the phase change rule.
Near the point where the level
spacing becomes minimum, which is specified by
\begin{eqnarray}
(y_{1},y_{2})\rightarrow (0,0)
\end{eqnarray}
(and thus $\theta\rightarrow0$), the
geometric terms in (3.8) assume the form of the geometric
term in (3.18) and thus the geometric phases become trivial.
Our analysis shows that the model
{\em with} level crossing
(3.2) exhibits precisely the same topological properties for any
finite $T$.
It is instructive to analyze an explicit
example in Refs.~\cite{geller,bhandari} where the following
parametrization has been introduced
\begin{eqnarray}
(y_{1},y_{2},y_{3})=(B_{0}(b_{1}+\cos\omega t),
B_{0}\sin\omega t, B_{z})
\end{eqnarray}
and $g=\mu$ in the notation of (3.2). The case $b_{1}=0$ and
$B_{z}\neq 0$ corresponds to
the model without level crossing discussed above in (3.22), and
the geometric phase becomes trivial for $B_{0}\rightarrow 0$.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=10.6cm]{2}
\end{center}
\vspace{-9mm}
\caption{\small (Color online) The path 1 for $(y_{1},y_{2},y_{3})
=(B_{0}\cos\omega t, B_{0}\sin\omega t, 0)$
gives rise to the phase change rule for a fixed finite
$T=2\pi/\omega$ and $\mu B_{0}/\hbar\omega\gg 1$, whereas the
path 2 gives a trivial phase for a fixed finite $T$ and
$\mu B_{0}/\hbar\omega\ll 1$, thus resulting in the failure of
the topological argument for the phase change rule for any fixed
finite $T$.}
\end{figure}
The case $b_{1}=B_{z} = 0$ describes the model with level
crossing: the same case with
\begin{eqnarray}
T=2\pi/\omega
\end{eqnarray}
kept fixed describes the situation in (3.18) with
$\theta=\pi/2$, namely, a closed
cycle in the infinitesimal neighborhood of level crossing for
$B_{0}\rightarrow 0$, and the geometric phase becomes trivial.
See Fig.2. To be explicit,
\begin{eqnarray}
\psi_{-}(T)&=&\exp\{-\frac{i}{\hbar}\int_{C_{1}}
dt[E(0)+y_{0}(t) - \mu B_{0}
-\hbar A^{k}_{--}(y)\dot{y}_{k}] \}v_{-}(y(T))\nonumber\\
&=&(-1)\exp\{-\frac{i}{\hbar}\int_{C_{1}}
dt[E(0)+y_{0}(t) - \mu B_{0}] \}\frac{1}{\sqrt{2}}
\left(\begin{array}{c}
e^{-i\varphi(T)}\\
-1
\end{array}\right)
\end{eqnarray}
for the path 1 with $\mu B_{0}/\hbar\omega\gg 1$ where the
factor $(-1)$ stands for the Longuet-Higgins' phase
change~\cite{higgins}, and
\begin{eqnarray}
\psi_{-}(T)
&=&\exp\{-\frac{i}{\hbar}\int_{C_{2}}
dt[E(0)+y_{0}(t)] \}\frac{1}{\sqrt{2}}
\left(\begin{array}{c}
e^{-i\varphi(T)}\\
-1
\end{array}\right)
\end{eqnarray}
for the path 2 with $\mu B_{0}/\hbar\omega\ll 1$. Here we
defined $v_{-}$ as a linear combination of $w_{\pm}$ in (3.17)
to compare the result with (3.27). Note that
$\varphi(T)=\varphi(0)$ both in (3.27) and (3.28), and thus
\begin{eqnarray}
\psi_{-}(0)=\frac{1}{\sqrt{2}}
\left(\begin{array}{c}
e^{-i\varphi(T)}\\
-1
\end{array}\right)=\frac{1}{\sqrt{2}}
\left(\begin{array}{c}
e^{-i\varphi(0)}\\
-1
\end{array}\right).\nonumber
\end{eqnarray}
The triviality of the geometric phase persists for
$\omega\rightarrow 0$ and $B_{0}\rightarrow 0$ if one
keeps
\begin{eqnarray}
\mu B_{0}/\hbar\omega\ll 1
\end{eqnarray}
fixed for $b_{1}=B_{z} = 0$.
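A minimal numerical sketch of this dichotomy (an illustration with assumed conventions, not part of the original analysis): take $\hbar=1$ and $H(t)=\mu B_{0}(\sigma_{x}\cos\omega t+\sigma_{y}\sin\omega t)$, so that the initial state $v_{-}=(1,-1)/\sqrt{2}$ has instantaneous energy $-\mu B_{0}$, integrate the Schr\"{o}dinger equation over one period $T=2\pi/\omega$, and strip the instantaneous dynamical phase. For $\mu B_{0}/\hbar\omega\gg 1$ the remaining factor is close to the Longuet-Higgins $(-1)$ of (3.27), while for $\mu B_{0}/\hbar\omega\ll 1$ it is close to $+1$ as in (3.28).

```python
import math
import cmath

def geometric_factor(mu_B0, omega, steps=20000):
    """RK4 integration of i d/dt psi = H(t) psi over one period T = 2*pi/omega
    with H(t) = mu_B0 (sigma_x cos(omega t) + sigma_y sin(omega t)), hbar = 1.
    Start in v_-(0) = (1, -1)/sqrt(2) (instantaneous energy -mu_B0); return
    <v_-(T)|psi(T)> with the dynamical phase exp(+i mu_B0 T) removed."""
    T = 2*math.pi/omega
    dt = T/steps

    def deriv(t, a, b):
        e = cmath.exp(-1j*omega*t)   # off-diagonal element of H is mu_B0 * e
        return (-1j*mu_B0*e*b, -1j*mu_B0*e.conjugate()*a)

    a, b = 1/math.sqrt(2), -1/math.sqrt(2)
    t = 0.0
    for _ in range(steps):
        k1a, k1b = deriv(t, a, b)
        k2a, k2b = deriv(t + dt/2, a + dt/2*k1a, b + dt/2*k1b)
        k3a, k3b = deriv(t + dt/2, a + dt/2*k2a, b + dt/2*k2b)
        k4a, k4b = deriv(t + dt, a + dt*k3a, b + dt*k3b)
        a += dt/6*(k1a + 2*k2a + 2*k3a + k4a)
        b += dt/6*(k1b + 2*k2b + 2*k3b + k4b)
        t += dt
    overlap = (a - b)/math.sqrt(2)      # <v_-(0)|psi(T)>, and v_-(T) = v_-(0)
    return overlap*cmath.exp(-1j*mu_B0*T)

# mu_B0/omega >> 1: factor close to -1 (Longuet-Higgins phase change);
# mu_B0/omega << 1: factor close to +1 (trivial geometric phase).
```

The same fixed $T=2\pi/\omega$ is used in both regimes; only the ratio $\mu B_{0}/\hbar\omega$ is changed, which is exactly the scaling discussed above.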
On the other hand, the usual adiabatic approximation (3.9) (with
$\theta=\pi/2$ in the present model) in the neighborhood of
level crossing is
described by $b_{1}=B_{z} = 0$ and $B_{0}\rightarrow 0$ with
\begin{eqnarray}
\mu B_{0}/\hbar\omega\gg 1
\end{eqnarray}
kept fixed (and thus $\omega=2\pi/T \rightarrow 0$), namely, the
effective magnetic field is always strong; the topological
proof of phase-change rule~\cite{stone} is based on the
consideration of this case. (If one starts with
$b_{1}=B_{z} = 0$ and $\omega=0$, then, of course, no geometric
terms arise.) These cases in the
approach to the level crossing $B_{0}\rightarrow 0$ are
summarized in Fig.3. One recognizes that the geometric phase
is non-trivial only for a very narrow window of the parameter
space $(\mu B_{0}, \hbar\omega)$ for small $B_{0}$ and for an essentially
measure zero window in the approach to the level crossing
$B_{0}\rightarrow 0$.
In this analysis, it is
important to distinguish the level crossing problem from the
motion of a spin $1/2$ particle; the wave functions (3.4) are
single valued for a $2\pi$ rotation in $\varphi$ with fixed
$\theta$.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=10.9cm]{3}
\end{center}
\vspace{-9mm}
\caption{\small (Color online) Summary of the behavior of the geometric
phases in the approach to the level crossing point $B_{0}\rightarrow 0$ in
the parameter space $(\mu B_{0},\hbar\omega)$. The path 1 with
fixed $\omega=2\pi/T\neq 0$ and also the path 2 with fixed
$\mu B_{0}/\hbar\omega\ll 1$ give a trivial phase for
$B_{0}\rightarrow 0$. The path 3 with fixed
$\mu B_{0}/\hbar\omega\gg 1$ gives a non-trivial phase for
$B_{0}\rightarrow 0$. The path 4 with $\omega=0$ gives no
geometric phase. The non-trivial phase arises for an essentially
measure zero set in the parameter space
$(\mu B_{0},\hbar\omega)$ for the approach to the level crossing
$B_{0}\rightarrow 0$. }
\end{figure}
The conventional treatment
of geometric phases in adiabatic approximation is based on
the premise that one
can choose $T$ sufficiently large for any {\em given}
$\epsilon\sim r$ such that
\begin{eqnarray}
Tg\epsilon \gg \hbar,
\end{eqnarray}
and thus $T\rightarrow \infty$ for $\epsilon\rightarrow 0$,
namely, it takes an infinite amount of time to approach the
level crossing point~\cite{berry, simon}.
Finite $T$ may however be appropriate in
practical applications, as is noted in~\cite{berry}.
Because of the uncertainty principle
$T\Delta E \geq \frac{1}{2}\hbar$,
the (physically measured) energy uncertainty for any given fixed
$T$ is not much different from the magnitude of the geometric
term $2\pi\hbar$, and the level spacing becomes much smaller
than these values in the infinitesimal neighborhood of level
crossing for the given $T$. An intuitive picture behind (3.18) is
that the motion in $\dot{\varphi}$ smears the ``monopole''
singularity for arbitrarily large but finite $T$.
In the topological analysis of the geometric phase for any fixed
finite $T$, one needs to cover the parameter regions starting
with the region where the adiabatic approximation
is reasonably good to the parameter region near the level
crossing point where the adiabatic approximation totally fails.
\vspace{-2mm}
\section{Integrability of Schr\"{o}dinger equation and geometric
phase}
We here briefly comment on the integrability of Schr\"{o}dinger
equation and the appearance of seemingly non-integrable phase
factors. The Hamiltonian (2.1), which is parametrized by a set
of external parameters, gives rise to a unique time development
for a given $X(t)$ even in the presence of non-integrable phase
factors. If one understands that the Hamiltonians with
different $X(t)$ define completely different theories, one need
not compare theories with different $X(t)$ and thus the issue
of the integrability of the Schr\"{o}dinger equation does not
directly arise. However, in the practical applications of
geometric phases, one usually uses the Born-Oppenheimer
approximation. The external parameters $X(t)$ then become
dynamical variables of a more fundamental Hamiltonian, and the
appearance of non-integrable phases suggests that one cannot
deform some of the paths $X(t)$ smoothly to other sets of paths
$X(t)$. The integrability of the Schr\"{o}dinger equation
defined by the {\it regular} fundamental Hamiltonian could then
be spoiled,
since the different paths $X(t)$ are supposed to be able to be
deformed smoothly to each other for the regular Hamiltonian in
the Schr\"{o}dinger equation. Our analysis however shows that
geometric phases are topologically trivial for any finite time
interval $T$ and thus the integrability of the basic
Schr\"{o}dinger equation is always ensured.
From the view point of path integral, the formula (2.6) where
the Hamiltonian is diagonalized both at $t=0$ and $t=T$ if
$X(T)=X(0)$ shows no obvious singular behavior at the level
crossing point. On the other hand, the path integral (2.12)
becomes somewhat subtle at the level crossing point; the bases
$\{v_{n}(\vec{x},X(t))\}$ are singular on top of level crossing
as in (3.4), and thus the unitary transformation $U$ to (2.9)
and the induced geometric terms become singular there. The
present analysis however shows that
the path integral is not singular for any finite $T$.
This suggests that one can promote the variables $X(t)$ to
fully dynamical variables by adding the kinetic and potential
terms for $X(t)$,
and the path integral is still well-defined.
We consider that this result is satisfactory since the starting
Hamiltonian (2.1) does not contain any obvious singularity even
when one promotes the variables $X(t)$ to fully dynamical
variables.
\vspace{-2mm}
\section{Aharonov-Bohm phase}
It is important to clarify the similarity and difference between
the geometric phases associated with level crossing and the
Aharonov-Bohm phase~\cite{berry, aharonov}.
We thus start with the hermitian Hamiltonian
\begin{equation}
\hat{H}=\hat{H}(\frac{\hbar}{i}\frac{\partial}{\partial\vec{x}},
\vec{x},A_{k}(\vec{x}))
=\frac{1}{2m}(\frac{\hbar}{i}\frac{\partial}{\partial\vec{x}}
- e\vec{A}(\vec{x}))^{2}
\end{equation}
for a single particle theory in the {\em time independent}
background gauge potential $A_{k}(\vec{x})$
\begin{eqnarray}
&&A_{k}(\vec{x})=(-\frac{B}{2}y, \frac{B}{2}x, 0)\ \ {\rm for}
\ \ r=\sqrt{x^{2}+y^{2}} \leq a, \nonumber\\
&&A_{k}(\vec{x})=(-\frac{a^{2}B}{2r^{2}}y, \frac{a^{2}B}{2r^{2}}
x, 0)\ \ {\rm for}
\ \ r=\sqrt{x^{2}+y^{2}} \geq a
\end{eqnarray}
and thus no electric field. The uniform constant magnetic field
$\vec{B}=\vec{\nabla}\times \vec{A}$ is confined in a cylinder
along the $z$-axis with a radius
$a$. The first quantized formulation of the Aharonov-Bohm effect
is given by
\begin{eqnarray}
&&\langle \vec{x}(T)|\exp\{-\frac{i}{\hbar}\int_{0}^{T}
\hat{H}dt\}|\vec{x}(0)\rangle
\nonumber\\
&=&\int {\cal D}\vec{p}{\cal D}\vec{x} \exp\{\frac{i}{\hbar}
\int_{0}^{T} dt[\vec{p}\cdot \dot{\vec{x}}
-\frac{1}{2m}(\vec{p}-e\vec{A}(\vec{x}))^{2}] \}
\nonumber\\
&=&\int {\cal D}\vec{p}{\cal D}\vec{x} \exp\{\frac{i}{\hbar}
\int_{0}^{T} dt[(\vec{p}+e\vec{A}(\vec{x}))\cdot\dot{\vec{x}}
-\frac{1}{2m}\vec{p}\,{}^{2}] \}
\nonumber\\
&=&\int {\cal D}\vec{p}{\cal D}\vec{x} \exp\{\frac{i}{\hbar}
\int_{0}^{T} dt
[\vec{p}\cdot\dot{\vec{x}}-\frac{1}{2m}\vec{p}\,{}^{2}]
+\frac{ie}{\hbar}
\int_{C}\vec{A}(\vec{x})\cdot d\vec{x} \}
\nonumber\\
&=&\sum_n \int [{\cal D}\vec{p}{\cal D}\vec{x}]_{(n)}
\exp\{\frac{i}{\hbar}
\int_{0}^{T} dt
[\vec{p}\cdot\dot{\vec{x}}-\frac{1}{2m}\vec{p}\,{}^{2}]
+\frac{ie}{\hbar}
n\Phi \}
\nonumber\\
&=&\sum_n \langle \vec{x}(T)|\exp\{-\frac{i}{\hbar}\int_{0}^{T}
\hat{H}_{0}dt\}|\vec{x}(0)\rangle_{(n)}
\exp\{\frac{ie}{\hbar}n\Phi \}
\end{eqnarray}
for any closed spatial path $C$, $\vec{x}(T)=\vec{x}(0)$, which
winds the cylinder by $n$ times, and $\Phi=\int \vec{B}\cdot
d\vec{S}$
stands for the magnetic flux inside the cylinder. We used
the translational invariance of the path integral measure
\begin{eqnarray}
{\cal D}\vec{p}={\cal D}(\vec{p}-e\vec{A}(\vec{x}))
\end{eqnarray}
for the transformation from the second line to the third line in
(5.3). Note that the formula (5.3) is exact, and the phase
factor gives a truly topological quantity even for any fixed
finite $T$; for the general case with only $\vec{x}(0)
=\vec{x}(T)$ specified, one needs to sum over $n$ in (5.3). In
practice, the Aharonov-Bohm phase is analyzed in
connection with interference effects, but the basic mathematical
treatment is the same as in (5.3).
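The exactness of the winding-number phase in (5.3) can be illustrated numerically: for the potential (5.2), the line integral $\oint_{C}\vec{A}\cdot d\vec{x}$ along any closed path in the field-free region equals $n\Phi$ with $\Phi=\pi a^{2}B$, independently of the shape of the path. The sketch below uses an arbitrary wobbly path (the path shape and the values $a=1$, $B=2$ are illustrative choices):

```python
import math

def A(x, y, a=1.0, B=2.0):
    """Vector potential (5.2): uniform field B inside radius a,
    pure gauge (field-free) outside."""
    r2 = x*x + y*y
    if r2 <= a*a:
        return (-B*y/2.0, B*x/2.0)
    c = a*a*B/(2.0*r2)
    return (-c*y, c*x)

def loop_integral(n, a=1.0, B=2.0, steps=50000):
    """Midpoint-rule approximation of the closed line integral of A . dx
    along a wobbly path r(t) = 3 + sin(5t) that winds n times around the
    cylinder, staying entirely in the region r >= a."""
    def pt(t):
        r = 3.0 + math.sin(5.0*t)
        return (r*math.cos(t), r*math.sin(t))
    total = 0.0
    for k in range(steps):
        t0 = 2.0*math.pi*n*k/steps
        t1 = 2.0*math.pi*n*(k+1)/steps
        x0, y0 = pt(t0)
        x1, y1 = pt(t1)
        ax, ay = A(*pt((t0+t1)/2.0), a, B)
        total += ax*(x1-x0) + ay*(y1-y0)
    return total

# For a = 1, B = 2 the flux is Phi = pi * a**2 * B = 2*pi, and the
# integral returns n * Phi regardless of the wobble in the path.
```

Outside the cylinder one has $\vec{A}\cdot d\vec{x}=(a^{2}B/2)\,d\theta$, so only the total change of the polar angle, i.e., the winding number, survives; this is the topological exactness referred to in the text.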
The path integral for the Aharonov-Bohm effect for the time
interval $0\leq t\leq T$ in the second quantized
formulation is given by
\begin{eqnarray}
Z&=&\int{\cal D}\psi^{\star}{\cal D}\psi\nonumber\\
&\times&\exp\{\frac{i}{\hbar}\int_{0}^{T}dtd^{3}x[
\psi^{\star}(t,\vec{x})i\hbar\frac{\partial}{\partial t}
\psi(t,\vec{x})-\psi^{\star}(t,\vec{x})
\hat{H}(\frac{\hbar}{i}\frac{\partial}{\partial\vec{x}},
\vec{x},A_{k}(\vec{x}))\psi(t,\vec{x})] \}
\nonumber\\
\end{eqnarray}
We then define a complete set of eigenfunctions (in a domain
of 3-dimensional space with a cylinder along the $z$-axis of
radius $a$ removed)
\begin{eqnarray}
&&\hat{H}(\frac{\hbar}{i}\frac{\partial}{\partial\vec{x}},
\vec{x},A_{k}(\vec{x}))u_{n}(\vec{x})
=E_{n}u_{n}(\vec{x}),\nonumber\\
&&\int d^{3}xu_{n}^{\star}(\vec{x})u_{m}(\vec{x})=
\delta_{nm}
\end{eqnarray}
with a suitable boundary condition on the surface of the
cylinder and expand
\begin{equation}
\psi(t,\vec{x})=\sum_{n}a_{n}(t)u_{n}(\vec{x}).
\end{equation}
Then
\begin{equation}
{\cal D}\psi^{\star}{\cal D}\psi=\prod_{n}{\cal D}a_{n}^{\star}
{\cal D}a_{n}
\end{equation}
and the path integral is written as
\begin{eqnarray}
Z&=&\int \prod_{n}{\cal D}a_{n}^{\star}
{\cal D}a_{n}\nonumber\\
&\times&\exp\{\frac{i}{\hbar}\int_{0}^{T}dt[
\sum_{n}a_{n}^{\star}(t)i\hbar\frac{\partial}{\partial t}
a_{n}(t)-\sum_{n}E_{n}a_{n}^{\star}(t)a_{n}(t)] \}.
\end{eqnarray}
We next define
\begin{eqnarray}
u_{n}(\vec{x})=e^{(ie/\hbar)\int^{x}_{x(0)}A_{k}(\vec{y})dy^{k}}
v_{\vec{p}}(\vec{x})
\end{eqnarray}
and then
\begin{eqnarray}
\frac{1}{2m}(\frac{\hbar}{i}\frac{\partial}{\partial\vec{x}}
- e\vec{A}(\vec{x}))^{2}e^{(ie/\hbar)\int^{x}_{x(0)}A_{k}dy^{k}}
v_{\vec{p}}(\vec{x})
&=&e^{(ie/\hbar)\int^{x}_{x(0)}A_{k}dy^{k}}
\frac{\hat{\vec{p}}\,{}^{2}}{2m}
v_{\vec{p}}(\vec{x})\nonumber\\
&=&E_{n}(p)u_{n}(\vec{x}),
\end{eqnarray}
namely
\begin{eqnarray}
\frac{\hat{\vec{p}}\,{}^{2}}{2m}v_{\vec{p}}(\vec{x})&=&E_{n}(p)
v_{\vec{p}}(\vec{x}),
\nonumber\\
\int d^{3}x v_{\vec{p}}^{\star}(\vec{x})
v_{\vec{p}^{\prime}}(\vec{x})&=&\delta_{\vec{p},\vec{p}^{\prime}}
\end{eqnarray}
where $v_{\vec{p}}(\vec{x})$ is defined in terms of cylindrical
coordinates (with the inside of the cylinder removed), and its
phase convention is defined to be single valued in the sense that
\begin{eqnarray}
v_{\vec{p}}(\vec{x}(0))=v_{\vec{p}}(\vec{x}(T))
\end{eqnarray}
if $\vec{x}(0)=\vec{x}(T)$. In practical applications, one may
choose $v_{\vec{p}}(\vec{x})$ such that it approaches a plane
wave specified by the momentum $\vec{p}$ far away from the
cylinder.
Since the Hamiltonian in (5.9) is eliminated by a
``gauge transformation''
\begin{eqnarray}
a_{n}(t)=\exp\{-\frac{i}{\hbar}E_{n}t \}\tilde{a}_{n}(t),
\end{eqnarray}
we have the probability amplitude in the first quantization
\begin{eqnarray}
\langle 0|\hat{\psi}(T)\hat{a}_{n}^{\dagger}(0)|0\rangle
&=&\exp\{-\frac{i}{\hbar}E_{n}(p)T
+ (ie/\hbar)\int^{x(T)}_{x(0)}A_{k}(\vec{y})dy^{k}\}
v_{\vec{p}}(\vec{x}(T))\nonumber\\
&&\times \langle 0|\hat{\tilde{a}}_{n}(T)
\hat{\tilde{a}}{}^{\dagger}_{n}(0)
|0\rangle
\nonumber\\
&=&\exp\{-\frac{i}{\hbar}E_{n}(p)T
+ (ie/\hbar)\int^{x(T)}_{x(0)}A_{k}(\vec{y})dy^{k}\}
v_{\vec{p}}(\vec{x}(T))\nonumber\\
\end{eqnarray}
For a closed path $x^{k}(T)=x^{k}(0)$, we pick up the
familiar phase factor as in (5.3).
Formulated in the manner (5.15), the Aharonov-Bohm phase is
analogous to the geometric phase (3.11) associated with level
crossing, but there are several
critical differences. First of all, the Aharonov-Bohm effect
is defined for a space which is not simply connected, and the
Aharonov-Bohm phase is exact for any finite time interval $T$
(one may consider a narrow cylinder, $a\rightarrow 0$,
with the magnetic flux $\Phi=\pi a^{2}B$ kept fixed), whereas
the geometric phase is topologically trivial for any finite
time interval $T$ as we have shown. The summation over the
winding number $n$ in (5.3) is generally required in the case of
the Aharonov-Bohm phase, but no such summation appears in the
case of the geometric phase, since the notion of a winding
number is not well-defined for any fixed finite $T$. Secondly, a closed
path in the parameter space, which may have no direct connection
with the real spatial coordinates, is important in the geometric
phase, whereas a closed path in the real 3-dimensional space
is important for the Aharonov-Bohm phase. Related to this last
property, the Aharonov-Bohm phase is defined for the time
{\em independent} gauge potential, whereas the geometric phase
is defined for the explicitly time {\em dependent} external
parameter $X(t)$.
\section{Discussion}
The notion of Berry's phase is known to be useful in various
physical contexts~\cite{shapere}-\cite{review}, and the
topological considerations are often crucial to obtain a
qualitative understanding of what is going on. Our analysis
however shows that the topological interpretation of
Berry's phase associated with level crossing generally fails in
practical physical settings with any finite $T$. The notion of
``approximate topology'' has no rigorous meaning, and it is
important to keep this approximate topological property of
geometric phases associated with level crossing in mind when one
applies the notion of geometric phases to concrete physical
processes. This approximate topological property is in sharp
contrast to the Aharonov-Bohm phase~\cite{aharonov} which is
induced by the time-independent gauge potential and
topologically exact for any finite time interval $T$. The
similarity and difference between the geometric phase and the
Aharonov-Bohm phase have been recognized in the early
literature~\cite{berry, aharonov}, but our second quantized
formulation, in which the analysis of the geometric phase is
reduced to a diagonalization of the effective Hamiltonian,
allowed us to analyze the topological properties precisely in
the infinitesimal neighborhood of level crossing.
The correction to the geometric phase in terms of the small
slowness parameter $\epsilon$ has been analyzed in~\cite{berry2},
where it was noted that the closer a system passes to a
degeneracy, the slower the passage must be for adiabaticity to
hold. But,
to our knowledge, the fact that the geometric phase becomes
topologically trivial for practical physical settings with any
fixed finite $T$, such as in the practical Born-Oppenheimer
approximation where $T$ is identified with the period of the
slower system, has been clearly stated only in the recent
paper~\cite{fujikawa}. We emphasize that this fact is
proved independently of the adiabatic approximation. The notion
of the geometric phase is very useful, but great care needs to
be exercised as to its topological properties~\footnote{On page
47 of ref.~\cite{shapere}, it is stated ``In a beautiful 1976
paper, which the editors feel has not been sufficiently
appreciated,... He [A.J. Stone] showed, quite generally, that
the non-integrable phases imply the existence of degeneracies,
by means of the following topological argument." This
enthusiasm about topology needs to be taken with due care.}.
Our analysis shows that there are no mysteries about the phase
factors of the Schr\"{o}dinger amplitude. All the information
about the geometric phases is contained in the evolution
operator (2.27) and thus in the path integral. The geometric
phases are induced by the time-dependent (gauge) transformation
(2.8). One can analyze the geometric phases without referring
to the mathematical notions such as parallel transport and
holonomy which are useful in the framework of a precise
adiabatic picture. Instead, the consideration of invariance
under the gauge symmetry (2.29) plays an important role in our
formulation.
Also, the present path integral formulation shows a critical
difference between the geometric phase associated with level
crossing and the quantum anomaly;
the quantum anomaly is associated with the symmetry breaking
by the path integral measure~\cite{fujikawa2}, whereas the
geometric phase arises from the non-anomalous terms associated
with a change of variables as in (2.12). The similarity between
the quantum anomaly and the geometric phase is nicely elaborated
in~\cite{jackiw}. But the quantum anomaly is basically a local
object in the 4-dimensional space-time whereas the
geometric phase crucially depends on the infinite time
interval as our analysis shows. Besides, the basic symmetry
involved and its breaking mechanism in the case of geometric
phase are not obvious. A detailed analysis of this issue will be
given elsewhere.
\\
We thank Professor L. Stodolsky for asking if our conclusion
is modified when the phase choice of the basis set is changed,
which prompted us to include an analysis of the hidden
local gauge symmetry in the present paper.
\subsubsection{Introduction}
\paragraph{ The wreath product and $\mathfrak{S}_\infty$-central states.}
Let $\mathbb{N}$ be the set of the natural numbers.
By definition,
a bijection $s: \mathbb{N}\to \mathbb{N}$ is called
{\it finite} if the
set $\left\{ i\in\mathbb{N}|s(i) \neq i\right\}$ is finite.
Define a group
$\mathfrak{S}_\infty$ as the group of all finite bijections
$\mathbb{N}\to\mathbb{N}$ and set
$\mathfrak{S}_n=\left\{s\in\mathfrak{S}_\infty|\; s(i)=i\;
\text{ for each }
i>n\right\}$. Given a group $\Gamma$, identify the
element $\left(\gamma_1,\gamma_2,\ldots,\gamma_n\right)\in\Gamma^n$ with
$\left(\gamma_1,\gamma_2,\ldots,\gamma_n,e\right)\in\Gamma^{n+1}$, where
$e$ is the identity element of $\Gamma$. The group $\Gamma^\infty_e$ is
defined as the inductive limit of the sets
\begin{eqnarray}
\Gamma\mapsto\Gamma^2\mapsto\Gamma^3\mapsto\cdots\mapsto\Gamma^n\mapsto\cdots.
\end{eqnarray}
The wreath product $\Gamma\wr \mathfrak{S}_\infty$ is the
semidirect product $\Gamma^\infty_e \rtimes \mathfrak{S}_\infty$
for the usual
permutation action of $\mathfrak{S}_\infty$ on $\Gamma^\infty_e$.
Using the
imbeddings
$\gamma\in \Gamma^\infty_e\to \left( \gamma,{\rm id}\right)
\in \Gamma\wr \mathfrak{S}_\infty$,
$s\in\mathfrak{S}_\infty\to\left( e^{(\infty)},s \right)\in
\Gamma\wr \mathfrak{S}_\infty$, where $e^{(\infty)}=
\left( e,e,\ldots,e,\ldots\right)$ and ${\rm id}$ is the
identical bijection,
we identify $\Gamma^\infty_e$ and $\mathfrak{S}_\infty$
with the corresponding
subgroups of $\Gamma\wr \mathfrak{S}_\infty$.
Therefore, each element $g$ of $\Gamma\wr \mathfrak{S}_\infty$ is of the
form $g=s\gamma$, with $\gamma =\left(\gamma_1,
\gamma_2,\ldots\right)\in\Gamma^\infty_e$ and $s\in\mathfrak{S}_\infty$.
Furthermore, it is assumed that $s\left(\gamma_1, \gamma_2,\ldots
\right)s^{-1}=\left(\gamma_{s^{-1}(1)}, \gamma_{s^{-1}(2)},\ldots \right)$.
If $\Gamma$ is a
topological group, then we
will equip $\Gamma^n$ with the natural
product-topology. Furthermore, we will always consider
$ \Gamma^\infty_e $ as a topological group with
the inductive limit topology.
The group $\Gamma\wr \mathfrak{S}_\infty$
is isomorphic to $ \Gamma^\infty_e \times
\mathfrak{S}_\infty $, as a set. Therefore, we will
equip the group
$\Gamma\wr \mathfrak{S}_\infty$ with the product-topology,
considering
$\mathfrak{S}_\infty$ as a discrete topological space. From now on we
assume that $\Gamma$ is a separable topological group.
\paragraph{ The basic definitions. }
Let $\mathcal{H}$ be a Hilbert space, let
$\mathcal{B}\left(\mathcal{H}\right)$ be the set of all
bounded operators in $\mathcal{H}$ and let
$\mathcal{I}_{\mathcal{H}}$ be the identity operator
in $\mathcal{H}$. We denote by $\mathcal{U}\left(
\mathcal{H} \right)$ the unitary subgroup in
$\mathcal{B}\left(\mathcal{H}\right)$.
By a unitary representation of the topological group
$G$ we will always
mean a {\it continuous} homomorphism of $G$ into
$\mathcal{U}\left(\mathcal{H} \right)$, where
$\mathcal{U}\left(\mathcal{H} \right)$
is equipped with the strong operator topology. For a unitary
representation $\pi$ of the group $G$ we denote by
$\mathcal{M}_\pi$ the $W^*$-algebra $\pi(G)^{\prime\prime}$,
which is generated by the operators $\pi(g)$ $\left( g\in G\right)$.
\begin{Def}\label{indecomposable}
A unitary representation
$\pi:G\to\mathcal{U}\left(\mathcal{H} \right)$ of
the group $G$ is called a factor-representation if
$\mathcal{M}_\pi$
is a factor. A positive definite function $\varphi$ on a group $G$
is called indecomposable if the corresponding GNS-representation
is a factor-representation.
\end{Def}
Further, an element of $\Gamma\wr\mathfrak{S}_\infty$
can always be written
as the product of an element from $\mathfrak{S}_\infty$
and an element
from $\Gamma^\infty_e $. The commutation rule between
these two kinds of elements is
\begin{equation}\label{product}
s \gamma=s \left(\gamma_1,\gamma_2,\ldots \right)=
\left(
\gamma_{s^{-1}(1)},\gamma_{s^{-1}(2)},\ldots \right) s,
\end{equation}
where $s\in\mathfrak{S}_\infty$ and $\gamma=\left(\gamma_1, \gamma_2,\ldots
\right)\in\Gamma^\infty_e $. Let $\mathbb{N}\diagup s$
be the set of orbits
of $s$ on the set $\mathbb{N}$. Note that for $p\in \mathbb{N}\diagup s$
the permutation $s_p$, which is
defined by the formula
\begin{equation}\label{sp}
s_p(k)=
\left\{
\begin{array}{rl}
s(k)&\text{ if } k\in p\\
k&\textit{otherwise}
\end{array}\right.,
\end{equation}
is a cycle of order $|p|$, where $|p|$ denotes the
cardinality of $p$.
For $\gamma=\left(\gamma_1,\gamma_2,\ldots \right)
\in\Gamma^\infty_e$
we define the element $\gamma(p)=\left(\gamma_1(p), \gamma_2(p),\ldots
\right) \in\Gamma^\infty_e $ as follows
\begin{equation}\label{color1}
\gamma_k(p)= \left\{
\begin{array}{rl}
\gamma_k&\text{ if } k\in p\\
e&\textit{otherwise}.
\end{array}\right.
\end{equation}
Thus, using ({\ref{product}}), we have
\begin{equation}\label{decompositiontocycles}
s \gamma = \prod\limits_{p\in \mathbb{N}\diagup s}
s_p \gamma(p).
\end{equation}
The element $ s_p \gamma(p)$ is called a {\it generalized cycle} of
$s\gamma$.
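As an illustration of the decomposition $s\gamma=\prod_{p}s_{p}\gamma(p)$ above, the wreath-product arithmetic can be sketched in code. The representation below (permutations and colourings stored as dictionaries on their finite supports, with $\Gamma=\mathbb{Z}$ written additively) is an illustrative choice, not part of the paper; the check multiplies the generalized cycles back together using the commutation rule $s\gamma s^{-1}=\left(\gamma_{s^{-1}(1)},\gamma_{s^{-1}(2)},\ldots\right)$ and recovers $s\gamma$.

```python
def act(s, i):
    """Finite bijection s given as a dict {i: s(i)}; fixed points omitted."""
    return s.get(i, i)

def orbits(s, support):
    """Orbits of s through the listed points (the finite support suffices)."""
    seen, out = set(), []
    for i in support:
        if i in seen:
            continue
        orb, j = [], i
        while j not in seen:
            seen.add(j)
            orb.append(j)
            j = act(s, j)
        out.append(orb)
    return out

def generalized_cycles(s, gamma):
    """Split g = s*gamma into its generalized cycles s_p * gamma(p):
    s_p restricts s to the orbit p, gamma(p) keeps the colours on p only."""
    support = sorted(set(s) | set(gamma))
    return [({i: act(s, i) for i in p if act(s, i) != i},
             {i: gamma[i] for i in p if i in gamma})
            for p in orbits(s, support)]

def mult(g1, g2):
    """(s1*gamma1)(s2*gamma2) = (s1 s2) * (s2^{-1} gamma1 s2) gamma2,
    where (s2^{-1} gamma1 s2)_i = gamma1_{s2(i)}; colours in Z, additive."""
    s1, c1 = g1
    s2, c2 = g2
    s2inv = {v: k for k, v in s2.items()}
    s = {i: act(s1, act(s2, i)) for i in set(s1) | set(s2)}
    s = {i: j for i, j in s.items() if i != j}
    c = {}
    for i, v in c1.items():
        j = act(s2inv, i)
        c[j] = c.get(j, 0) + v
    for i, v in c2.items():
        c[i] = c.get(i, 0) + v
    c = {i: v for i, v in c.items() if v != 0}
    return (s, c)
```

Since the generalized cycles have pairwise disjoint supports, they commute, and their product in any order reconstructs the original element.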
Denote by $(n\,\;k)\in\mathfrak{S}_\infty$ the transposition of
numbers $k$ and $n.$ Following Olshanski (see \cite{O2}) we
introduce permutations
$\omega_n=\omega^{(0)}_n\in\mathfrak{S}_\infty$ by the next formula:
\begin{eqnarray}\label{omega}
\omega_n(i)=\left\{\begin{array}{ll}i,&\textit{ if }\;2n<i,\\
i+n,&\textit{ if }\;i\leqslant n,\\
i-n,&\textit{ if }\;n<i\leqslant 2n.\end{array}\right.
\end{eqnarray}
For an element $g=s \gamma$ we call the $\textit{support}$ of $g$ the
set ${\rm supp}(g)=\left\{i:s(i)\neq i\text{ or }\gamma_i\neq
e\right\}$. Note that ${\rm supp}(g)$ is always a finite subset of
$\mathbb{N}$. If ${\rm supp}(g_1)\cap {\rm supp}(g_2)=\emptyset$,
then the elements $g_1$ and $g_2$ commute.
\begin{Def}\label{def central}
Let $G$ be a group and let $H$ be a subgroup of $G$. A positive definite
function $\varphi$ on
$G$ is called $H$-central if $\varphi(gh)=\varphi(hg)$ for all
$h\in H$ and $g\in G$. We say that $\varphi$ is a {\it state} on $G$, if
$\varphi(e)=1$, where $e$ is the identity element of $G$. A state $\varphi
$ is called {\it indecomposable}, if the corresponding GNS-representation
$\pi_\varphi$ is a factor representation.
\end{Def}
Let $\mathcal{M}_*$ denote the space of all $\sigma$-weakly continuous
functionals on a $w^*$-algebra $\mathcal{M}$.
Now we fix an $\mathfrak{S}_\infty$-central state $\varphi$ on $\Gamma\wr
\mathfrak{S}_\infty$ and denote by $\pi_\varphi$ the corresponding
GNS-representation.
\begin{Th}\label{Th3}
Let $\pi_\varphi\left( \Gamma\wr \mathfrak{S}_\infty\right)^{\prime\prime}$
be the $w^*$-algebra generated by the operators $\pi_\varphi\left(\Gamma\wr
\mathfrak{S}_\infty \right)$ and let $\mathcal{C}\left(
\pi_\varphi\left(\Gamma\wr \mathfrak{S}_\infty \right)\right)$ be the center of $\pi_\varphi\left( \Gamma\wr \mathfrak{S}_\infty\right)^{\prime\prime}$.
Suppose that the positive functionals $\varphi_1$ and $\varphi_2$ from
$\pi_\varphi\left( \Gamma\wr \mathfrak{S}_\infty\right)^{\prime\prime}_*$
satisfy the following conditions:
\begin{itemize}
\item {\rm i}) $\varphi_k\left(\pi_\varphi(s) a \right) =
\varphi_k\left( a \pi_\varphi(s)\right)$ for all $s\in\mathfrak{S}_\infty$
and $a\in\pi_\varphi\left( \Gamma\wr
\mathfrak{S}_\infty\right)^{\prime\prime}$ $\left(k=1,2 \right)$;
\item
{\rm ii}) $\varphi_1\left(\mathfrak{c} \right) = \varphi_2\left(\mathfrak{c}
\right) $ for all $\mathfrak{c}\in \mathcal{C}\left(
\pi_\varphi\left(\Gamma\wr \mathfrak{S}_\infty \right)\right)$.
\end{itemize}
Then $\varphi_1\left(\mathfrak{a} \right) = \varphi_2\left(\mathfrak{a}
\right) $ for all $\mathfrak{a}\in \pi_\varphi\left(\Gamma\wr
\mathfrak{S}_\infty \right)$.
\end{Th}
Recall that representations $\pi_1$ and $\pi_2$ of a group $G$ are called
quasiequivalent if there exists an isomorphism $\theta:\pi_1\left(G
\right)^{\prime\prime}\mapsto\pi_2\left(G \right)^{\prime\prime}$ with the
property
\begin{eqnarray}
\theta\left(\pi_1\left(g \right)\right)=\pi_2\left(g \right) \text{ for all
} g\in G.
\end{eqnarray}
The following corollary is an immediate consequence of the above theorem.
\begin{Co}\label{Co4}
If $\varphi_1$ and $\varphi_2$ are indecomposable
$\mathfrak{S}_\infty$-central states on $\Gamma\wr \mathfrak{S}_\infty$
such that the corresponding GNS-representations $\pi_{\varphi_1}$ and
$\pi_{\varphi_2}$ are quasiequivalent, then $\varphi_1=\varphi_2$.
\end{Co}
\paragraph{Natural examples.}\label{parnatexmmpl}
For any state $\varphi $ on $\Gamma$ we define two
$\mathfrak{S}_\infty$-central states $\varphi_{sp} $ and
$\varphi_{reg}$ on $\Gamma\wr \mathfrak{S}_\infty$ as follows:
\begin{eqnarray}
\varphi_{sp} \left(s\gamma \right)=\prod\varphi \left(\gamma_k \right)
\text{ for all } \gamma=\left(\gamma_1,\gamma_2,\ldots
\right)\in\Gamma^\infty_e \text{ and } s\in\mathfrak{S}_\infty;\label{sp0}\\
\varphi_{reg}\left(s\gamma \right)=\left\{
\begin{array}{rl}
\prod\varphi \left(\gamma_k \right)&\text{ if } s=e\\
0&\text{ if } s\neq e.
\end{array}\right.
\end{eqnarray}
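For example, for the transposition $g=(1\,\;2)$ (so that $\gamma=e$) one has
$\varphi_{sp}\left(g \right)=1$, since every factor in (\ref{sp0}) equals
$\varphi(e)=1$, while $\varphi_{reg}\left(g \right)=0$. In particular, the
states $\varphi_{sp}$ and $\varphi_{reg}$ are always distinct.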
We have the following result:
\begin{Prop}
For GNS-representations $\pi_{\varphi_{sp} }$ and $\pi_{\varphi_{reg} }$
the following properties hold:
\begin{itemize}
\item {\rm(i)} If $\pi_{\varphi_{sp} }$ acts in Hilbert space
$\mathcal{H}_{\varphi_{sp} }$, and
$\mathcal{H}_{\varphi_{sp}}^\mathfrak{S}=\left\{\eta\in\mathcal{H}_{\varphi_{sp}
}: \pi_{sp}(s)\eta=\eta \text{ for all } s\in \mathfrak{S}_\infty
\right\}$, then ${\rm dim}
\mathcal{H}_{\varphi_{sp}}^\mathfrak{S}=1 $. In particular,
$\pi_{\varphi_{sp} }$ is irreducible.
\item {\rm(ii)} $\pi_{\varphi_{reg} }$ is a factor representation.
\item {\rm(iii)} the $w^*$-algebra $\pi_{\varphi_{reg} }
\left(\Gamma\wr \mathfrak{S}_\infty\right)^{\prime\prime}$ is a
factor of type ${\rm II}$ or ${\rm III}$.
\end{itemize}
\end{Prop}
\begin{proof}
Let $\xi_{\varphi_{sp}}$ $\left(\xi_{\varphi_{reg}} \right)$ be the cyclic
vector for the representation $\pi_{sp}$ $\left(\pi_{reg}\right)$ with the
property
\begin{eqnarray*}
&\varphi_{sp}\left(g \right)=\left(\pi_{sp}(g)
\xi_{\varphi_{sp}},\xi_{\varphi_{sp}} \right)&\;\;
\bigg(\varphi_{reg}\left(g \right)=\left(\pi_{reg}(g)
\xi_{\varphi_{reg}},\xi_{\varphi_{reg}} \right)\bigg)\\
&\text{ for all }
g\in \Gamma\wr
\mathfrak{S}_\infty.&
\end{eqnarray*}
Set $\Gamma^{n \infty}_e= \left\{\gamma=\left(\gamma_1,\gamma_2,\ldots
\right)\in\Gamma^\infty_e\big| \gamma_k=e\text{ for all } k\leq
n\right\}$,\\
$\mathfrak{S}_{n \infty}= \left\{s\in\mathfrak{S}_{\infty}\big|
s(k)=k \text{ for all } k\leq n\right\}$. Denote by $\Gamma\wr
\mathfrak{S}_{n\infty}$ the subgroup of
$\Gamma\wr\mathfrak{S}_\infty$ generated by $\Gamma^{n \infty}_e$ and
$\mathfrak{S}_{n \infty}$.
To prove point {\rm(i)}, first note that, by the definition of the
GNS-construction,
$\xi_{\varphi_{sp}}$ lies in $\mathcal{H}_{\varphi_{sp}}^\mathfrak{S}$.
Further we will use the following mixing property of the permutations
$\omega_n$ defined in (\ref{omega}): for any $\eta \in
\mathcal{H}_{\varphi_{sp}}^\mathfrak{S}$, using (\ref{sp0}), we obtain
\begin{eqnarray}
\lim\limits_{n\to\infty}\left(\pi_{sp}\left(\omega_n \right)\eta,\eta
\right) =\left(\xi_{\varphi_{sp}},\eta \right)\left(\eta,\xi_{\varphi_{sp}}
\right).
\end{eqnarray}
This implies {\rm (i)}.
Property {\rm (ii)} follows from Proposition
\ref{multiplicativity} (below). Nevertheless, using an explicit
realization of $\pi_{\varphi_{reg} }$, we give another proof. We begin
with the GNS-representation $T$ of $\Gamma$, which acts in a Hilbert space
$\mathcal{H}_T$ with cyclic vector $\xi_\varphi$: $\varphi \left(\gamma
\right)=\left(T(\gamma )\xi_\varphi,\xi_\varphi\right)$ for all
$\gamma\in\Gamma$. Further, using the embedding $\mathcal{H}_T^{\otimes n}\ni
\eta\mapsto \eta\otimes\xi_\varphi\in\mathcal{H}_T^{\otimes n+1}$, we define the
Hilbert space $\mathcal{H}_T^{\otimes \infty}$ and the corresponding
representation $T^{\otimes\infty}$ of $\Gamma^\infty_e$:
\begin{eqnarray*}
T^{\otimes\infty}(\gamma) \left(\xi_1\otimes\xi_2\otimes\ldots \right)=
T\left(\gamma_1 \right)\xi_1\otimes T\left(\gamma_2
\right)\xi_2\otimes\ldots, \text{ where } \gamma
=\left(\gamma_1,\gamma_2,\ldots \right).
\end{eqnarray*}
The action $U$ of $\mathfrak{S}_\infty$ on $\mathcal{H}_T^{\otimes \infty}$
is given by the formula
\begin{eqnarray*}
U(s)\left(\xi_1\otimes\xi_2\otimes\ldots\otimes\xi_k \otimes\ldots\right)=
\xi_{s^{-1}(1)}\otimes\xi_{s^{-1}(2)}\otimes\ldots\otimes\xi_{s^{-1}(k)}
\otimes\ldots
\end{eqnarray*}
Now we define the operator $\Pi(g)$
$\left(g\in\Gamma\wr\mathfrak{S}_\infty\right)$ in $l^2\left(
\mathfrak{S}_\infty,\mathcal{H}_T^{\otimes \infty}\right)$ as follows:
\begin{eqnarray*}
\left(\Pi(\gamma)\eta\right)(s)=U(s)T^{\otimes\infty}(\gamma)U^*(s)\eta(s)\;\;
\left(\gamma \in \Gamma^\infty_e, \eta\in l^2\left(
\mathfrak{S}_\infty,\mathcal{H}_T^{\otimes \infty}\right) \right);\\
\left(\Pi(t)\eta\right)(s)=\eta(st) \;\;\left(t\in\mathfrak{S}_\infty \right).
\end{eqnarray*}
Since $s\left(\gamma_1,
\gamma_2,\ldots \right)s^{-1}=\left(\gamma_{s^{-1}(1)},
\gamma_{s^{-1}(2)},\ldots \right)$ for any $s\in\mathfrak{S}_\infty$ and
$\gamma=\left(\gamma_1,
\gamma_2,\ldots \right)\in\Gamma_e^\infty$, $\Pi$ extends by multiplicativity to
a representation of $\Gamma\wr\mathfrak{S}_\infty$.
If ${\xi}_\varphi^{\otimes\infty}
=\xi_\varphi\otimes\xi_\varphi\otimes\ldots\in\mathcal{H}_T^{\otimes
\infty}$ and $\widehat{\xi}_\varphi (g)=\left\{\begin{array}{ll}
{\xi}_\varphi^{\otimes\infty},
&\textit{ if }\;g=e,\\
0,&\textit{ if }\;g\neq e\end{array}\right. $ then we have
\begin{eqnarray}
\varphi_{reg} \left(s\gamma\right)=\left(\Pi(s\gamma
)\widehat{\xi}_\varphi,
\widehat{\xi}_\varphi\right) \;\; \left(s\in\mathfrak{S}_\infty,\gamma
\in\Gamma^\infty_e\right).
\end{eqnarray}
Therefore, without loss of generality, we can assume that $\pi_{reg}=\Pi$.
Let $\Pi^\prime$ denote the representation of $\mathfrak{S}_\infty$ which
acts on $l^2\left( \mathfrak{S}_\infty,\mathcal{H}_T^{\otimes
\infty}\right)$ by
\begin{eqnarray}
\left(\Pi^\prime(t)\eta\right)(s)=U(t)\eta(t^{-1}s).
\end{eqnarray}
Obviously, $\Pi^\prime\left(\mathfrak{S}_\infty\right)$ is contained in the
commutant $\Pi\left(\Gamma\wr \mathfrak{S}_{\infty}\right)^\prime$ of
$\Pi\left(\Gamma\wr \mathfrak{S}_{\infty}\right)$.
Let us prove that the center $\mathcal{C}=\Pi\left(\Gamma\wr
\mathfrak{S}_{\infty}\right)^{\prime\prime}\cap\Pi\left(\Gamma\wr
\mathfrak{S}_{\infty}\right)^\prime$ of $\Pi\left(\Gamma\wr
\mathfrak{S}_{\infty}\right)^{\prime\prime}$ is trivial.
Our proof starts with the observation that
\begin{eqnarray}
\Pi(g)\Pi^\prime(g) \widehat{\xi}_\varphi=\widehat{\xi}_\varphi\text{ for
all } g\in\mathfrak{S}_\infty.
\end{eqnarray}
Hence for $\mathfrak{c}\in\mathcal{C}$ we have
\begin{eqnarray}\label{fixed}
\Pi(g)\Pi^\prime(g)\mathfrak{c}
\widehat{\xi}_\varphi=\mathfrak{c}\widehat{\xi}_\varphi\text{ for all }
g\in\mathfrak{S}_\infty.
\end{eqnarray}
In particular, this gives
\begin{eqnarray}
\left\| \mathfrak{c}\widehat{\xi}_\varphi(s)\right\|=
\left\| \mathfrak{c}\widehat{\xi}_\varphi\left(gsg^{-1} \right)\right\|
\text{ for all } g,s\in\mathfrak{S}_\infty.
\end{eqnarray}
Since every conjugacy class $C(s)= \left\{gsg^{-1}:g\in
\mathfrak{S}_\infty\right\}$ is infinite except $s=e$, we have
\begin{eqnarray}
\mathfrak{c}\widehat{\xi}_\varphi(s)=0 \text{ for all } s\neq e.
\end{eqnarray}
It follows from (\ref{fixed}) that
\begin{eqnarray}
U(s)\left(\mathfrak{c}\widehat{\xi}_\varphi(e) \right)=
\mathfrak{c}\widehat{\xi}_\varphi(e) \text{ for all }
s\in\mathfrak{S}_\infty.
\end{eqnarray}
As in the proof of point {\rm(i)}, this shows that
$\mathfrak{c}\widehat{\xi}_\varphi(e)=\alpha {\xi}_\varphi^{\otimes\infty}$
$\left(\alpha \in\mathbb{C} \right)$. Since $\widehat{\xi}_\varphi$ is
cyclic, we have $\mathfrak{c}=\alpha I$. Therefore, the $w^*$-algebra
$\Pi\left(\Gamma\wr
\mathfrak{S}_{\infty}\right)^{\prime\prime}$ is a factor.
{\rm (iii)} We begin by recalling the notion of a {\it central sequence} in
a factor $\mathcal{M}$. A bounded sequence $ \left\{a_n
\right\}\subset\mathcal{M}$ is called {\it central} if
\begin{eqnarray*}
s-\lim\limits_{n\to\infty}\left(a_n m -ma_n\right)=0\text{ and }
s-\lim\limits_{n\to\infty}\left(a_n^* m -ma_n^*\right)=0 \text{ for all
}m\in\mathcal{M}.
\end{eqnarray*}
A central sequence is called {\it trivial} if there exists a sequence $
\left\{c_n \right\}\subset\mathbb{C}$ such that
\begin{eqnarray*}
s-\lim\limits_{n\to\infty}\left(a_n -c_nI\right)=0\text{ and }
s-\lim\limits_{n\to\infty}\left(a_n^* -\overline{c}_n I\right)=0.
\end{eqnarray*}
Let $s_k$ be the transposition interchanging $k$ and $k+1$. We claim that $
\left\{\pi_{reg}\left(s_n \right)\right\}$ is a nontrivial central sequence.
Indeed, since $\varphi_{reg}$ is a $\mathfrak{S}_\infty$-central state, we
have
\begin{eqnarray*}
\lim\limits_{n\to\infty}\left(m\pi_{reg}\left(s_n
\right)-\pi_{reg}\left(s_n \right)m\right){\xi_{\varphi_{reg} }}=0 \text{
for all } m\in
\Pi\left(\Gamma\wr
\mathfrak{S}_{\infty}\right)^{\prime\prime}.
\end{eqnarray*}
It follows that
\begin{eqnarray*}
\lim\limits_{n\to\infty}\left(m\pi_{reg}\left(s_n
\right)-\pi_{reg}\left(s_n \right)m\right)x{\xi_{\varphi_{reg} }}=0 \text{
for all } m,x\in
\Pi\left(\Gamma\wr
\mathfrak{S}_{\infty}\right)^{\prime\prime}.
\end{eqnarray*}
Since $\xi_{\varphi_{reg} } $ is cyclic and $\varphi_{reg}\left(s_n
\right)=0$, it follows that $ \left\{\pi_{reg}\left(s_n \right)\right\}$ is a
nontrivial central sequence.
It remains to prove that each central sequence in a factor $\mathcal{M}$ of
type ${\rm I}$ is trivial.
Suppose that $\mathcal{M}$ is a factor of type ${\rm I}$.
Let $\left\{\mathfrak{e}_{kl}: k,l\in\mathbb{N} \right\}$
be a matrix unit in $\mathcal{M}$. This means that the following relations hold:
\begin{eqnarray}\label{munit}
\mathfrak{e}_{kl}^*=\mathfrak{e}_{lk},\;
\mathfrak{e}_{kl}\mathfrak{e}_{pq}=\delta_{lp}\mathfrak{e}_{kq},\;
\sum\limits_{k\in\mathbb{N}} \mathfrak{e}_{kk}=I.
\end{eqnarray}
Let $ \left\{a_n=\sum\limits_{k,l}c_{kl}(n)
\mathfrak{e}_{kl}:c_{kl}(n)\in\mathbb{C}\right\}$ be a central sequence in
$\mathcal{M}$. Set
$\mathfrak{C}_{pq}(n)=a_n\mathfrak{e}_{pq}-\mathfrak{e}_{pq}a_n$. An easy
computation shows that
\begin{eqnarray*}
\mathfrak{e}_{qq}\left(\mathfrak{C}_{pq}(n)\right)^*\mathfrak{C}_{pq}(n)
\mathfrak{e}_{qq}=
\left[\left|c_{pp}(n)-c_{qq}(n) \right|^2-\left|c_{pp}(n)
\right|^2+\sum\limits_k\left|c_{kp}(n) \right|^2\right]\mathfrak{e}_{qq},\\
\mathfrak{e}_{pp}\mathfrak{C}_{pq}(n)
\left(\mathfrak{C}_{pq}(n)\right)^*\mathfrak{e}_{pp}=
\left[\left|c_{pp}(n)-c_{qq}(n) \right|^2-\left|c_{qq}(n)
\right|^2+\sum\limits_k\left|c_{qk}(n) \right|^2\right]\mathfrak{e}_{pp}.
\end{eqnarray*}
Using the fact that $\left\{a_n\right\}$ is a central sequence, we deduce
from this that
\begin{eqnarray*}
\lim\limits_{n\to\infty}\sum\limits_{k:k\neq q}\left|c_{qk}(n)
\right|^2=0,\; \lim\limits_{n\to\infty}\sum\limits_{k:k\neq
q}\left|c_{kq}(n) \right|^2=0,\\
\lim\limits_{n\to\infty}\left|c_{11}(n)
-c_{qq}(n)\right|^2=0 \text{ for all } q.
\end{eqnarray*}
This means that $s-\lim\limits_{n\to\infty}\left( a_n-c_{11}(n)I\right)=0$
and $s-\lim\limits_{n\to\infty}\left(
a_n^*-\overline{c_{11}(n)}I\right)=0$. Thus $\left\{a_n\right\}$ is
trivial.
\end{proof}
The goal of this paper is to give a full description of the indecomposable
$\mathfrak{S}_\infty$-central states on $\Gamma\wr\mathfrak{S}_\infty$ (see Definition \ref{def central}).
The character theory of the infinite wreath product in the case of a finite group $\Gamma$ was
developed by R. Boyer \cite{Boy}. In this case $\Gamma\wr\mathfrak{S}_\infty$
is an inductive limit of finite groups, its finite characters can be obtained
as limits of normalized characters of the prelimit finite groups,
and Boyer's method is a direct generalization of the Vershik--Kerov asymptotic
approach \cite{VK0}. The characters of $\Gamma\wr\mathfrak{S}_\infty$ for a
general separable group $\Gamma$ were found by the authors in \cite{DN},
\cite{DN1}. Our method is based on the ideas of Okounkov, which
he developed for his proof of Thoma's theorem \cite{Thoma},
\cite{Ok1}, \cite{Ok2}.
A finite character is a $\Gamma\wr\mathfrak{S}_\infty$-central
positive definite function on
$\Gamma\wr\mathfrak{S}_\infty$. In this paper we study the more general
class of $\mathfrak{S}_\infty$-central states on
$\Gamma\wr\mathfrak{S}_\infty$. Our results provide a complete classification
of such indecomposable states. The set of all indecomposable $\mathfrak{S}_\infty$-central states
has the following important property: if for two indecomposable
$\mathfrak{S}_\infty$-central states $\varphi_1$ and $\varphi_2$ the corresponding
GNS-representations $\pi_{\varphi_1}$ and
$\pi_{\varphi_2}$ are quasiequivalent, then $\varphi_1=\varphi_2$
(Theorem \ref{Th3}, Corollary \ref{Co4}).
The paper is organized as follows. Below we give a brief description of
the general properties of $\mathfrak{S}_\infty$-central states.
The key results are
Lemma \ref{weak-lim} and Proposition \ref{multiplicativity}. Here we also
recall the classification of the traces (central states) on
$\Gamma\wr\mathfrak{S}_\infty$ (Theorem \ref{mainth}). In section
\ref{ex of repr} we present the full collection of factor-representations
which define the $\mathfrak{S}_\infty$-central states
(Proposition \ref{Prop11a}).
Each such state is parametrized by a pair $\left(A,\rho\right)$, where
$A$ is a self-adjoint operator and $\rho$ is a unitary representation
of $\Gamma $ (paragraph \ref{paragraph2.1}). In Proposition \ref{Prop12} we prove that the unitary
equivalence of the pairs
$\left(A_1,\rho_1 \right)$ and $\left(A_2,\rho_2 \right)$ is
equivalent to the equality of the corresponding $\mathfrak{S}_\infty$-central
states. In section \ref{KMSsec} we discuss the physical KMS-condition (see \cite{Tak})
for these states (Theorem \ref{theorem15}). In section \ref{Themainresult} we prove
the classification Theorem \ref{main}.
\paragraph{Multiplicativity.}
Let $\varphi$ be an indecomposable $\mathfrak{S}_\infty$-central
state on the group $\Gamma\wr \mathfrak{S}_\infty$. Then, according
to the GNS-construction, it defines a factor-representation $\pi_\varphi$
of the group $\Gamma\wr \mathfrak{S}_\infty$ with cyclic vector
$\xi_\varphi$ such that
$\varphi(g)=\left(\pi_\varphi(g)\xi_\varphi,\xi_\varphi\right)$
for each $g\in \Gamma\wr \mathfrak{S}_\infty$. The next lemma shows
that different indecomposable $\mathfrak{S}_\infty$-central states
define representations which are not quasiequivalent. Let $w-\lim$
stand for the limit in the weak operator topology.
\begin{Lm}\label{weak-lim}
Let $\varphi$ be an indecomposable $\mathfrak{S}_\infty$-central
state on the group $\Gamma\wr \mathfrak{S}_\infty$. Then for each
$g\in \Gamma\wr \mathfrak{S}_\infty$ the limit
$w-\lim\limits_{n\rightarrow\infty}\pi_\varphi\left(\omega_n
g\omega_n\right)$ exists and the following equality holds:
\begin{eqnarray}\label{weak-lim equality}
w-\lim\limits_{n\rightarrow\infty}\pi_\varphi\left(\omega_n
g\omega_n\right)=\varphi(g) I.
\end{eqnarray}
\end{Lm}
\begin{proof}
Let $h_1,h_2\in \Gamma\wr \mathfrak{S}_\infty$. Fix $k$ such that
\begin{eqnarray}\label{supp g}
{\rm supp}(h_1),{\rm supp}(h_2),{\rm supp}(g)\subset\{1,2,\ldots,k\}.\end{eqnarray} For each
$n\in \mathbb{N}$ there exist elements
$g_{(n,k)},h_{(n,k)}\in \mathfrak{S}_\infty$ such that
\begin{eqnarray}\label{supp gnk}
{\rm supp}(g_{(n,k)}),{\rm supp}(h_{(n,k)})\subset \{k+1,k+2,\ldots\}\end{eqnarray}
and
$\omega_{n+k}=g_{(n,k)}\omega_kh_{(n,k)}$ (see (\ref{omega})). The permutations $g_{(n,k)},h_{(n,k)}$ can
be defined as follows:
\begin{eqnarray*}
g_{(n,k)}(i)=\left\{\begin{array}{ll}
i,&\textit{ if }\;i\leqslant k\textit{ or }2k+2n<i,\\
i+n,&\textit{ if }\;k<i\leqslant 2k+n,\\
i-k-n,&\textit{ if }\;2k+n<i\leqslant 2k+2n.\end{array}\right.\\
h_{(n,k)}(i)=\left\{\begin{array}{ll}
i,&\textit{ if }\;i\leqslant k\textit{ or }2k+n<i,\\
i+k,&\textit{ if }\;k<i\leqslant k+n,\\
i-n,&\textit{ if }\;k+n<i\leqslant 2k+n.\end{array}\right.
\end{eqnarray*}
By (\ref{supp g}) and (\ref{supp gnk}), the elements $g_{(n,k)}$ and
$h_{(n,k)}$ commute with the elements $h_1,h_2$ and $g$. Therefore
\begin{eqnarray}\label{h(n,k)}\begin{split}
&h_2^{-1}\omega_{n+k}
g\omega_{n+k}h_1=h_2^{-1}
\left(g_{(n,k)}\omega_kh_{(n,k)}\right)^{-1}
g g_{(n,k)}\omega_kh_{(n,k)}
h_1\\&=h_{(n,k)}^{-1}h_2^{-1}\omega_k
g\omega_kh_1h_{(n,k)}.
\end{split}
\end{eqnarray}As $\varphi$ is $\mathfrak{S}_\infty$-central, one has:
\begin{eqnarray}\label{stabilisation}
\begin{split}
&\left(\pi_\varphi\left(\omega_{n+k}
g\omega_{n+k}\right)\pi_\varphi(h_1)\xi_\varphi,\pi_\varphi(h_2)\xi_\varphi\right)
=\varphi\left(h_2^{-1}\omega_{n+k}
g\omega_{n+k}h_1\right)=\\
&\varphi\left(h_2^{-1}\omega_k
g\omega_kh_1\right)=
\left(\pi_\varphi\left(\omega_k
g\omega_k\right)\pi_\varphi(h_1)\xi_\varphi,\pi_\varphi(h_2)\xi_\varphi\right).
\end{split}
\end{eqnarray}
As $\xi_\varphi$ is cyclic, by
$(\ref{stabilisation})$ the limit
\begin{eqnarray*}w-\lim\limits_{n\rightarrow\infty}\pi_\varphi(\omega_n
g\omega_n)\end{eqnarray*} exists. For each $h\in \Gamma\wr
\mathfrak{S}_\infty$ and large enough $n$ one has ${\rm
supp}(\omega_ng\omega_n)\cap {\rm supp}(h)=\emptyset$. Therefore
$\pi_\varphi(\omega_ng\omega_n)\pi_\varphi(h)=\pi_\varphi(h)\pi_\varphi(\omega_ng\omega_n)$.
This implies that the weak limit
$w-\lim\limits_{n\rightarrow\infty}\pi_\varphi(\omega_n g\omega_n)$
lies in the center of the algebra $M_{\pi_\varphi}$ generated by the
operators of
the representation $\pi_\varphi$. Thus $w-\lim\limits_{n\rightarrow\infty}\pi_\varphi(\omega_n g\omega_n)$ is a scalar.
By
$\mathfrak{S}_\infty$-centrality of $\varphi,$
\begin{eqnarray*}\left(w-\lim\limits_{n\rightarrow\infty}\pi_\varphi(\omega_n
g\omega_n)\xi_\varphi,\xi_\varphi\right)=\lim\limits_{n\rightarrow\infty}\varphi(\omega_n
g\omega_n)=\varphi(g),\end{eqnarray*} which finishes the proof.
\end{proof}
The following claim gives a useful characterization
of the class of indecomposable $\mathfrak{S}_\infty$-central states:
\begin{Prop}\label{multiplicativity}
The following conditions for an $\mathfrak{S}_\infty$-central state
$\varphi$ on the group
$ \Gamma\wr\mathfrak{S}_\infty$ are equivalent:
\begin{itemize}
\item [\it (a)] $\varphi$ is indecomposable;
\item [\it (b)] $\varphi(gg')=\varphi(g)\varphi(g')$ for each
$g,g'\in\Gamma\wr\mathfrak{S}_\infty$ with
${\rm supp}(g)\cap {\rm supp}(g')=\emptyset$;
\item [\it (c)]
$\varphi(g) = \prod\limits_{p\in \mathbb{N}\diagup s}
\varphi\left(s_p \gamma(p)\right)$ for each $g=s \gamma
=\prod\limits_{p\in \mathbb{N}\diagup s}
s_p \gamma(p)$ (see {\ref{decompositiontocycles}}).
\end{itemize}
\end{Prop}
\begin{proof} The equivalence of $(b)$ and $(c)$ is obvious. We prove the equivalence of $(a)$ and $(b)$.
Using the GNS-construction, we
build the representation $\pi_{\varphi}$ of the group
$\Gamma\wr\mathfrak{S}_\infty$ which acts in the Hilbert space
$\mathcal{H}_\varphi$ with cyclic vector $\xi_\varphi$ such that
\begin{eqnarray*}
\varphi(g)=\left( \pi_\varphi\left( g\right)\xi_\varphi,\xi_\varphi
\right)\textit{ for each }g\in\Gamma\wr\mathfrak{S}_\infty.
\end{eqnarray*}
Suppose that property ({\it a}) holds.
Consider two elements
$g=s \gamma$ and
$g^{\prime}=s^{\prime} \gamma^{\prime}$
from $ \Gamma\wr\mathfrak{S}_\infty$ satisfying ${\rm supp}(g)\cap
{\rm supp}(g')=\emptyset$.
Then there exists a sequence
$\left\{ s_n \right\}_{n\in\mathbb{N}}\subset
\mathfrak{S}_\infty$ such that for each $n$
\begin{eqnarray}\label{assympt}
{\rm supp}(s_n)\cap {\rm supp}(g)=\emptyset\text{ and }
{\rm supp}(s_n g^{\prime}s_n^{-1})\subset\{n+1,n+2,\ldots\}.
\end{eqnarray} For example, we can put $s_n=\prod\limits_{i\in
{\rm supp}(g')}(i\,\;i+k+n)$, where $k$ is a fixed number such that
${\rm supp}(g)\cup {\rm supp}(g')\subset\{1,2,\ldots,k\}$.
Using the ideas of the proof of Lemma \ref{weak-lim}, we obtain
that the limit $\lim\limits_{n\rightarrow
\infty}\pi_\varphi(s_ng's_n)$ exists in the weak operator topology
(note that $s_n^{-1}=s_n$) and the following equality holds:
\begin{eqnarray}\label{fi s_n}
w-\lim\limits_{n\rightarrow \infty}\pi_\varphi(s_ng's_n)=\varphi(g')
I.
\end{eqnarray}
Using ({\ref{assympt}}), (\ref{fi s_n}) and
$\mathfrak{S}_\infty$-centrality of $\varphi$,
we obtain
\begin{eqnarray*}
&\varphi\left( g g^\prime \right)=
\lim\limits_{n\to\infty}\varphi
\left( g s_ng^\prime s_n^{-1}\right)=\\&
\lim\limits_{n\to\infty}\left(\pi_\varphi
(g)\pi_\varphi\left(s_n g^\prime s_n^{-1}\right)\xi_\varphi,\xi_\varphi\right)= \varphi (g) \varphi
\left(g^\prime\right).
\end{eqnarray*}
Thus {\it (b)} follows from {\it (a)}.
Now suppose that condition {\it (b)} holds.
If $\pi_{\varphi}
\left(\Gamma\wr\mathfrak{S}_\infty\right)^\prime
\bigcap\pi_{\varphi}
\left(\Gamma\wr\mathfrak{S}_\infty\right)^{\prime\prime}=
\mathcal{Z}$
is larger than the scalars, then it contains a pair of nonzero
orthogonal projections $E$ and $F$ satisfying the condition:
\begin{eqnarray}\label{0inequalities}
E F=0.
\end{eqnarray}
Fix an arbitrary $\varepsilon>0$. By the von Neumann double commutant theorem there
exist $g_k, h_k\in\Gamma\wr\mathfrak{S}_\infty$ and complex numbers
$c_k,d_k$ $\left( k=1,2,\ldots, N<\infty \right)$
such that
\begin{eqnarray}\label{inequalities}
\begin{split}
\left|\left|\sum\limits_{k=1}^Nc_k\pi_\varphi\left(g_k\right)\xi_\varphi
-E\xi_\varphi\right|\right|<\varepsilon,
\\ \left|\left|\sum\limits_{k=1}^N d_k\pi_\varphi\left(h_k\right)
\xi_\varphi-F\xi_\varphi\right|\right|<\varepsilon.
\end{split}
\end{eqnarray}
Fix $n$ such that ${\rm supp}(g_k)\subset\{1,2,\ldots,n\}$ and
${\rm supp}(h_k)\subset\{1,2,\ldots,n\}$ for each $k$.
As $\varphi$ is $\mathfrak{S}_\infty$-central, using
({\ref{inequalities}}), we obtain
\begin{eqnarray}\label{1inequalities}
\left|\left|\sum\limits_{k=1}^N c_k\pi_\varphi\left(\omega_n
g_k\omega_n\right)\xi_\varphi-E\xi_\varphi\right|\right|<\varepsilon
\quad\text{(see (\ref{omega}))}.
\end{eqnarray}
Now, using ({\ref{0inequalities}}), ({\ref{inequalities}}) and
({\ref{1inequalities}}), we have
\begin{eqnarray}\label{2inequalities}\left|\left(\sum\limits_{k=1}^Nc_k
\pi_\varphi\left(\omega_n g_k\omega_n \right) \sum\limits_{k=1}^N
d_k\pi_\varphi \left( h_k
\right)\xi_\varphi,\xi_\varphi\right)\right|<2\varepsilon+\varepsilon^2.
\end{eqnarray} Note that ${\rm supp}(\omega_ng_k\omega_n)\subset\{n+1,n+2,\ldots\}$
for each $k$. Therefore, by property $(b)$,
({\ref{inequalities}}) and
({\ref{1inequalities}}), one has:
\begin{eqnarray}\label{3inequalities}
\begin{split}\left|\left(\sum\limits_{k=1}^Nc_k\pi_\varphi\left(\omega_n g_k\omega_n
\right) \sum\limits_{k=1}^N d_k\pi_\varphi \left( h_k
\right)\xi_\varphi,\xi_\varphi\right)\right|=\\\left|\left(\sum\limits_{k=1}^Nc_k\pi_\varphi\left(\omega_n
g_k\omega_n\right)\xi_\varphi,\xi_\varphi\right) \left(
\sum\limits_{k=1}^N d_k\pi_\varphi \left( h_k
\right)\xi_\varphi,\xi_\varphi\right)\right|>\\
\left(E\xi_\varphi,\xi_\varphi\right)
\left(F\xi_\varphi,\xi_\varphi\right)-\varepsilon\left(\left(E\xi_\varphi,\xi_\varphi\right)
+\left(F\xi_\varphi,\xi_\varphi\right)\right)-\varepsilon^2.\end{split}\end{eqnarray}
Note that, as $\xi_\varphi$ is cyclic, $E\xi_\varphi\neq 0$ and
$F\xi_\varphi\neq 0$. Therefore, comparing
(\ref{2inequalities}) and (\ref{3inequalities}) for small enough
$\varepsilon$, we arrive at a
contradiction.
\end{proof}
Define the element $ \sigma_n \in \mathfrak{S}_\infty$ by the formula:
\begin{eqnarray}\label{12...n}
\sigma_n (i)= \left\{
\begin{array}{rl}
i+1&\textit{ if }\; i<n,\\
1&\textit{ if }\;i=n,\\
i&\textit{ if }\;i>n.
\end{array}\right.
\end{eqnarray}
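Thus $\sigma_n$ is an $n$-cycle; for instance, $\sigma_3$ maps $1\mapsto
2\mapsto 3\mapsto 1$ and fixes every $i>3$.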
\begin{Co}\label{Co of mult}
Each indecomposable $\mathfrak{S}_\infty$-central state
$\varphi$ on the group
$ \Gamma\wr\mathfrak{S}_\infty$ is defined by its values on the
elements of the form
$ \sigma_n \gamma,$ where $\gamma=(\gamma_1,\gamma_2,\ldots,\gamma_n,e,e,\ldots)$ and $n\in
\mathbb{N}$.
\end{Co}
\begin{proof}
By Proposition \ref{multiplicativity}, $\varphi$ is defined by
its values on the elements of the form $s_p\gamma(p)$ (see
(\ref{decompositiontocycles})). Fix an element $s_p\gamma(p)$. Let
$n=|p|$.
Then there exists a permutation $h\in
\mathfrak{S}_\infty$ such that $hs_ph^{-1}= \sigma_n $.
Therefore
$\varphi(s_p\gamma(p))=\varphi(hs_p\gamma(p)h^{-1})=\varphi( \sigma_n h\gamma(p)h^{-1})$,
which proves the corollary.
\end{proof}
\paragraph{The characters of the groups $\mathfrak{S}_\infty$ and $\Gamma\wr
\mathfrak{S}_\infty$.}
In the paper {\cite{Thoma}}, E. Thoma obtained the following
remarkable description of all {\it indecomposable}
characters ($\mathfrak{S}_\infty$-central states) of the group $\mathfrak{S}_\infty$. The characters of the group $\mathfrak{S}_\infty$ are labeled by a
pair of non-increasing sequences of non-negative
numbers
$\left\{ \alpha_k \right\}$,
$\left\{ \beta_k \right\}$ $\left(k\in\mathbb{N} \right)$,
such that
\begin{eqnarray}\label{cond}
\sum\limits_{k=1}^{\infty}\alpha_k +
\sum\limits_{k=1}^{\infty}\beta_k\leq 1.
\end{eqnarray}
The value of the corresponding character on
a cycle of length $l$ is
\begin{eqnarray*}\label{color2}
\sum\limits_{k=1}^{\infty}\alpha_k^l +(-1)^{l-1}
\sum\limits_{k=1}^{\infty}\beta_k^l.
\end{eqnarray*}
Its value on a product of several
disjoint cycles equals the product
of its values on the individual cycles.
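For example, for the parameters $\alpha_1=\alpha_2=\frac12$, $\beta_k=0$
$\left(k\in\mathbb{N}\right)$, condition (\ref{cond}) holds with equality,
and the corresponding character takes the value
$2\left(\frac12\right)^l=2^{1-l}$ on every cycle of length $l$;
consequently its value on a product of $m$ disjoint cycles of lengths
$l_1,\ldots,l_m$ equals $2^{\,m-l_1-\cdots-l_m}$.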
In $\cite{DN}$ the authors described all indecomposable characters of
the group $\Gamma\wr
\mathfrak{S}_\infty.$ Before formulating the main result of
$\cite{DN}$ we introduce some more notation. We call an element
$g=s \gamma$ a generalized cycle if either $s$ is a cycle and
${\rm supp}(\gamma)\subset {\rm supp}(s)$, or $s=e$ and ${\rm supp}(\gamma)=\{n\}$ for some
$n$.
For an element $g=s \gamma$ and an orbit $p\in
\mathbb{N}/s$ choose the minimal number $k\in p$ and denote
\begin{eqnarray}\label{tilde gamma}\tilde{\gamma}(p)= \gamma_k \gamma_{s^{-1}(k)} \cdots
\gamma_{s^{-l}(k)}\cdots
\gamma_{s^{-|p|+1}(k)}.
\end{eqnarray}
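For example, if $s$ is the cycle with $s(1)=2$, $s(2)=3$, $s(3)=1$ and
$p=\{1,2,3\}$, then $k=1$, $s^{-1}(1)=3$, $s^{-2}(1)=2$, and
(\ref{tilde gamma}) gives $\tilde{\gamma}(p)=\gamma_1\gamma_3\gamma_2$.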
For a factor-representation $\tau$ of finite type let $\chi_{\tau}$
be its normalized character; that is, $\chi_\tau(g) =tr_{\mathcal{M}_\tau}\left( \tau(g)
\right)$, where $ tr_\mathcal{M}$ stands for the
unique normal, normalized $\left( tr_\mathcal{M}(I)=1 \right)$
trace on a factor $\mathcal{M}$ of finite type. Note that
$\chi_\tau(e)=1$. Let $tr$ be the ordinary normalized matrix trace.
\begin{Th}[\cite{DN}, \cite{DN1}]\label{mainth}
Let $\varphi$ be a function on the group $\Gamma\wr
\mathfrak{S}_\infty$. Then the following conditions are equivalent.
$a)$ $\varphi$ is an indecomposable character.
$b)$ There exist a representation
$\tau$ of the {\it finite} type of the group $\Gamma$,
two non-increasing sequences of non-negative numbers
$\left\{ \alpha_k \right\}$,
$\left\{ \beta_k \right\}$ $\left(k\in\mathbb{N} \right)$ and two
sequences $\left\{\rho_k\right\},\left\{\varrho_k\right\}$ of
finite-dimensional irreducible representations of
$\Gamma$ with the following properties:
\begin{itemize}
\item {\rm(i)}
$\delta=1-\sum\limits_k\alpha_k
dim\,\rho_k-\sum\limits_k\beta_k dim \varrho_k\geqslant 0$;
\item {\rm(ii)} if $s$ is a cycle, $g=s\gamma$ $\left(
\gamma \in\Gamma^\infty_e\right)$ and $p={\rm supp}\, s ={\rm supp}
\left(s\gamma \right)$, then
\begin{eqnarray*}\varphi(g)=\left\{
\begin{array}{ll}
\sum\limits_k\alpha_k\,tr(\rho_k(\gamma_n))+
\sum\limits_k\beta_k\,tr(\varrho_k(\gamma_n))+\delta\chi_\tau(\gamma_n),
\textit{ if }\;p=\{n\},\\
\sum\limits_k\alpha_k^{|p|} tr(\rho_k(\tilde{\gamma}(p)))+(-1)^{|p\,|-1}
\sum\limits_k\beta_k^{|p\,|} tr(\varrho_k(\tilde{\gamma}(p))),
\textit{
if }\;|p\,|>1;\end{array}\right.
\end{eqnarray*}
\item {\rm(iii)} if $g=s \gamma
=\prod\limits_{p\in \mathbb{N}\diagup s}
s_p \gamma(p)$ (see {\ref{decompositiontocycles}}), then
$\varphi(g) = \prod\limits_{p\in \mathbb{N}\diagup s}
\varphi\left(s_p \gamma(p)\right)$.
\end{itemize}
\end{Th}
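In particular, if $\Gamma=\{e\}$ is the trivial group, then each $\rho_k$
and $\varrho_k$ is the one-dimensional trivial representation and
$\chi_\tau\equiv 1$; all the traces in {\rm(ii)} are then equal to $1$, and
the theorem reduces to Thoma's description of the characters of
$\mathfrak{S}_\infty$ recalled above.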
\subsubsection{Examples of representations.}\label{ex of repr}
\paragraph{Parameters of states.}\label{paragraph2.1}
Let $A$ be a self-adjoint operator of {\it trace class} (see
{\cite{RS}}) from $\mathcal{B}(\mathcal{H})$ with the property
${\rm Tr} (|A|)\leq1$, where ${\rm Tr}$ is the ordinary trace\footnote{If
$\mathfrak{p}$ is a minimal projection in
$\mathcal{B}(\mathcal{H})$, then ${\rm Tr} (\mathfrak{p})=1$.} on
$\mathcal{B}(\mathcal{H})$.
Further, we fix a vector $\hat{\xi}\in{\rm Ker}
\, A$ and a unitary representation $\rho$ of $\Gamma$ in $\mathcal{H}$
satisfying the following conditions:
\begin{itemize}
\item {\rm(1)} if ${\rm Tr} (|A|)=1$, then the subspace $({\rm Ker}
\, A)^\perp=\mathcal{H}\ominus{\rm Ker}\, A$ is cyclic for the
$w^*$-algebra $\mathfrak{A}$ generated by $A$ and
$\rho(\Gamma)$;
\item {\rm(2)} if ${\rm Tr} (|A|)<1$,
$\widetilde{\mathcal{H}}$ is the subspace
generated by $\left\{ \mathfrak{A}v, v\in ({\rm
Ker}\, A)^\perp \right\}$ and $\mathcal{H}_{reg}=
\mathcal{H}\ominus\widetilde{\mathcal{H}}$, then
$\dim \mathcal{H}_{reg}=\infty$;
\item {\rm(3)} \label{page11} if $P_{]0,1]}$ and $P_{[-1,0[}$ are the spectral
projections of $A$, then the subspaces $\mathcal{H}_+$ and
$\mathcal{H}_-$ generated by the vectors
$ \left\{\mathfrak{A}v,\;v\in P_{]0,1]}\mathcal{H} \right\}$ and
$ \left\{\mathfrak{A}v,\;v\in P_{[-1,0[}\mathcal{H}
\right\}$, respectively, are orthogonal;
\item {\rm(4)} \label{property4}there exists a ${\rm I}_\infty$-factor
$N^\prime_{reg}\subset \left(\rho\left(\Gamma
\right)\Big|_{\mathcal{H}_{reg}}\right)^\prime$ with matrix unit
$ \left\{\mathfrak{e}_{kl}^\prime,\; k,l\in\mathbb{N} \right\}$
such that $\hat{\xi}\in
\mathfrak{e}_{11}^\prime\mathcal{H}_{reg}$ and
$\mathfrak{e}_{11}^\prime\mathcal{H}_{reg}$ is generated by $
\left\{\rho \left(\Gamma \right)\hat{\xi} \right\}$. In
particular, if ${\rm Tr} (|A|)=1$ then $\hat{\xi}=0$; when ${\rm Tr} (|A|)<1$
we assume for convenience that
$\left\|\hat{\xi}\right\|=1$.
\end{itemize}
\paragraph{Hilbert space $\mathcal{H}^\rho_A$.}\label{paragraph2.2}
Define
a state $\psi_k$ on $\mathcal{B}\left(\mathcal{H} \right)$ as
follows:
\begin{eqnarray}\label{psik}
\psi_k\left(v \right)={\rm Tr}\left(v|A| \right)+\left(1-{\rm
Tr}\left(|A| \right)
\right)\left(v\mathfrak{e}_{k1}^\prime\hat{\xi},\mathfrak{e}_{k1}^\prime\hat{\xi}
\right),\;\;v\in\mathcal{B}\left(\mathcal{H} \right).
\end{eqnarray}
Let $ _1\psi_k$ denote the product state on $\mathcal{B} \left(
\mathcal{H}\right)^{\otimes k}$:
\begin{eqnarray}\label{psik1}
_1\psi_k\left(v_1\otimes v_2\otimes \ldots\otimes v_k
\right)=\prod\limits_{j=1}^k\psi_j\left(v_j \right).
\end{eqnarray}
Now define an inner product on $\mathcal{B} \left( \mathcal{H}\right)^{\otimes
k}$ by
\begin{eqnarray}\label{psik2}
\left( v,u\right)_k=\,_1\psi_k\left( u^*v \right).
\end{eqnarray}
Let $\mathcal{H}_k$ denote the Hilbert space obtained by completing
$\mathcal{B} \left( \mathcal{H}\right)^{\otimes k}$ in the norm induced by
this inner product. Next consider the natural isometric embedding
\begin{eqnarray}
\mathcal{H}_k\ni v\mapsto v\otimes {\rm I}\in\mathcal{H}_{k+1}
\end{eqnarray}
and define the Hilbert space $\mathcal{H}^\rho_A$ as the completion of
$\bigcup\limits_{k=1}^\infty \mathcal{H}_k$.
\paragraph{The action of $\Gamma\wr\mathfrak{S}_{\infty}$ on
$\mathcal{H}^\rho_A$.}\label{paragraph2.3}
First, using the
embedding $\mathcal{B}\left(\mathcal{H} \right)^{\otimes
k}\ni a\mapsto a\otimes{\rm I}\in\mathcal{B}\left(\mathcal{H}
\right)^{\otimes k+1}$, we identify $\mathcal{B}\left(\mathcal{H}
\right)^{\otimes k}$ with the subalgebra $\mathcal{B}\left(\mathcal{H}
\right)^{\otimes
k}\otimes\mathbb{C}\subset\mathcal{B}\left(\mathcal{H}
\right)^{\otimes k+1}$. Therefore, the algebra
$\mathcal{B}\left(\mathcal{H}
\right)^{\otimes\infty}=\bigcup\limits_{n=1}^\infty
\mathcal{B}\left(\mathcal{H} \right)^{\otimes n}$ is well defined.
Next we give an explicit embedding of $\mathfrak{S}_\infty$ into the
unitary group of $\mathcal{B}\left(\mathcal{H}
\right)^{\otimes\infty}$. First fix a matrix unit $ \left\{e_{pq}:
p,q=1,2,\ldots,n={\rm
dim}\,\mathcal{H}\right\}\subset\mathcal{B}\left(\mathcal{H}
\right)$ with the properties:
\begin{itemize}
\item {\rm (i)} projection $e_{kk}$ is minimal and
$e_{kk}A=c_{kk}e_{kk}$ $\left(c_{kk}\in\mathbb{C} \right)$ for all
$k=1,2,\ldots,n$;
\item {\rm (ii)} $e_{kk}\mathcal{H}_+\subset\mathcal{H}_+$ and
$e_{kk}\mathcal{H}_-\subset\mathcal{H}_-$ for all
$k=1,2,\ldots,n$.
\end{itemize}
Put $X= \left\{1,2,\ldots,n \right\}^{\times\infty}$. For
$x=\left(x_1,x_2,\ldots,x_l,\ldots \right)\in X$ we set
$\mathfrak{l}_A(x)=\left|
\left\{i:e_{x_i\,x_i}\mathcal{H}\subset\mathcal{H}_- \right\}
\right|$. Define the subsequence $x_A=\left(x_{i_1},x_{i_2},\ldots,
x_{i_l},\ldots \right)\in
\left\{ 1,2,\ldots,n\right\}^{\mathfrak{l}_A(x)}$ inductively by
\begin{eqnarray}
i_1= {\rm min}\left\{i:e_{x_i\,x_i}\mathcal{H}\subset\mathcal{H}_-
\right\} \text{ and }\;i_k={\rm min}
\left\{i>i_{k-1}:e_{x_i\,x_i}\mathcal{H}\subset\mathcal{H}_-
\right\}.
\end{eqnarray}
For $s\in\mathfrak{S}_\infty$ denote by $c(x,s)$ the unique
permutation from
$\mathfrak{S}_{{\mathfrak{l}_A(x)}}\subset\mathfrak{S}_\infty$ such that
\begin{eqnarray}
s^{-1}\left(i_{c(x,s)(1)} \right)<s^{-1}\left(i_{c(x,s)(2)}
\right)<\ldots<s^{-1}\left(i_{c(x,s)(l)} \right)<\ldots.
\end{eqnarray}
Let $\mathfrak{S}_\infty$ act on $X$ as follows:
\begin{eqnarray}
X\times\mathfrak{S}_\infty\ni(x,s)\mapsto
sx=\left(x_{s(1)},x_{s(2)},\ldots,x_{s(l)},\ldots \right)\in X.
\end{eqnarray}
By definition, $\left(
sx\right)_A=\left(x_{i_{c(x,s)(1)}},x_{i_{c(x,s)(2)}},\ldots,x_{i_{c(x,s)(l)}},\ldots
\right)$. Therefore,
\begin{eqnarray}\label{cocycle}
c(x,ts)=c(sx,t)c(x,s)\;\; \text{ for all
}\;t,s\in\mathfrak{S}_\infty;x\in X.
\end{eqnarray}
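The combinatorics behind $c(x,s)$ can be checked on finite truncations. The sketch below is our own illustration (all names are ours, indices are $0$-based, and we read a product $ts$ from left to right, i.e. $t$ acts first, composing the cocycles the same way); under these conventions it verifies the identity (\ref{cocycle}) exhaustively for a small alphabet and all pairs of permutations of a four-point support.

```python
from itertools import permutations

def mul(p, q):
    # product "pq" read left to right: apply p first, then q
    return [q[p[i]] for i in range(len(p))]

def act(s, x):
    # the action (sx)_i = x_{s(i)} of the text, 0-based
    return [x[s[i]] for i in range(len(s))]

def cocycle(x, s, neg):
    # positions i_1 < i_2 < ... carrying "negative" letters of x
    idx = [i for i, xi in enumerate(x) if xi in neg]
    sinv = [0] * len(s)
    for i, si in enumerate(s):
        sinv[si] = i
    # c(x,s) is the permutation ordering the preimages s^{-1}(i_k);
    # returned in one-line notation
    return sorted(range(len(idx)), key=lambda k: sinv[idx[k]])

# exhaustive check of c(x,ts) = c(sx,t) c(x,s) on a small truncation
x, neg = [3, 1, 3, 2], {3}        # letters from {1,2,3}; letter 3 spans H_-
for s in permutations(range(4)):
    for t in permutations(range(4)):
        s, t = list(s), list(t)
        lhs = cocycle(x, mul(t, s), neg)
        rhs = mul(cocycle(act(s, x), t, neg), cocycle(x, s, neg))
        assert lhs == rhs
```

For the identity permutation, `cocycle` returns the identity of $\mathfrak{S}_{\mathfrak{l}_A(x)}$, and the number of "negative" positions is invariant under the action, so both sides live in the same finite symmetric group.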
Given any $s\in\mathfrak{S}_\infty$ put
\begin{eqnarray*}
U_N(s)=\sum\limits_{ x_1,x_2, \ldots,x_N=1}^n {\rm sign}\,
\left(c(x,s)\right)\;e_{x_{s(1)}\,x_1}\otimes
e_{x_{s(2)}\,x_2}\otimes\ldots\otimes e_{x_{s(N)}\,x_N},
\end{eqnarray*}
where $N<\infty$ is such that $s(i)=i$ for all $i\geq
N$, and $x=\left(x_1,x_2,\ldots,x_N,\ldots \right)$.
We see at once that for $L>N$
\begin{eqnarray*}
U_N(s)\otimes\underbrace{{\rm I}\otimes{\rm
I}\otimes\ldots\otimes{\rm I}}_{L-N}=U_L(s).
\end{eqnarray*}
Thus the operator $U(s)=U_N(s)\otimes{\rm I}\otimes{\rm
I}\otimes\ldots\in \mathcal{B}\left(\mathcal{H}
\right)^{\otimes\infty}$ is well defined.
It follows from
(\ref{cocycle}) that
\begin{eqnarray}
U(t)U(s)=U(ts) \text{ for all } t,s\in\mathfrak{S}_\infty.
\end{eqnarray}
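On the natural basis vectors $v_x$, $x\in X$, the operator $U(s)$ acts by $v_x\mapsto{\rm sign}\left(c(x,s)\right)v_{sx}$, so the multiplicativity above reduces to the cocycle identity (\ref{cocycle}). The finite-dimensional model below is our own sketch (the truncation to $\mathfrak{S}_3$ acting on $\left\{0,1\right\}^3$, the left-to-right reading of the product $ts$, and all names are ours); it builds these signed permutation matrices and checks $U(t)U(s)=U(ts)$ together with unitarity.

```python
import numpy as np
from itertools import permutations, product

NEG = {1}           # letter 1 indexes the basis of H_-, letter 0 that of H_+
N, LETTERS = 3, 2   # truncation: S_3 acting on {0,1}^3

def mul(p, q):
    # product "pq" read left to right: apply p first, then q
    return [q[p[i]] for i in range(len(p))]

def sign_cocycle(x, s):
    # sign of the permutation c(x,s), via its inversion count
    idx = [i for i, xi in enumerate(x) if xi in NEG]
    sinv = [0] * len(s)
    for i, si in enumerate(s):
        sinv[si] = i
    order = sorted(range(len(idx)), key=lambda k: sinv[idx[k]])
    inv = sum(order[a] > order[b]
              for a in range(len(order)) for b in range(a + 1, len(order)))
    return -1.0 if inv % 2 else 1.0

def U(s):
    # signed permutation matrix: v_x -> sign(c(x,s)) v_{sx}
    dim = LETTERS ** N
    m = np.zeros((dim, dim))
    for col, x in enumerate(product(range(LETTERS), repeat=N)):
        sx = tuple(x[s[i]] for i in range(N))        # (sx)_i = x_{s(i)}
        row = int(''.join(map(str, sx)), LETTERS)
        m[row, col] = sign_cocycle(list(x), s)
    return m

for s in permutations(range(N)):
    for t in permutations(range(N)):
        s, t = list(s), list(t)
        assert np.allclose(U(t) @ U(s), U(mul(t, s)))            # U(t)U(s)=U(ts)
        assert np.allclose(U(s) @ U(s).T, np.eye(LETTERS ** N))  # unitarity
```

Since each column of $U(s)$ is a signed basis vector and $x\mapsto sx$ is a bijection of the index set, $U(s)$ is automatically orthogonal; multiplicativity is exactly the statement that the signs compose via (\ref{cocycle}).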
It is clear that
\begin{eqnarray*}
{\rm sign}\,\left(c(x,s)c(y,s) \right)U(s)\left(e_{x_1\,y_1}\otimes
e_{x_2\,y_2}\otimes\ldots\otimes e_{x_N\,y_N}\otimes {\rm
I}\otimes{\rm I}\ldots\otimes{\rm
I}\otimes\ldots \right)U(s)^*\\
=e_{x_{s^{-1}(1)}\,y_{s^{-1}(1)}}\otimes e_{x_{s^{-1}(2)}\,y_{s^{-1}(2)}}\otimes\ldots\otimes
e_{x_{s^{-1}(N)}\,y_{s^{-1}(N)}}\otimes {\rm I}\otimes{\rm
I}\ldots\otimes{\rm I}\otimes\ldots.
\end{eqnarray*}
If $x$ and $y$ satisfy the condition that
$e_{x_i\,x_i}\mathcal{H}\subset\mathcal{H}_-$ if and only if
$e_{y_i\,y_i}\mathcal{H}\subset\mathcal{H}_-$,
then, by the definition of the cocycle $c$, we have $c(x,s)=c(y,s)$.
Therefore,
\begin{eqnarray}\label{UmatrixUnit}
\begin{split}
U(s)\left(e_{x_1\,y_1}\otimes e_{x_2\,y_2}\otimes\ldots\otimes
e_{x_N\,y_N}\otimes {\rm I}\otimes{\rm I}\ldots \right)U(s)^*\\
=e_{x_{s^{-1}(1)}\,y_{s^{-1}(1)}}\otimes e_{x_{s^{-1}(2)}\,y_{s^{-1}(2)}}\otimes\ldots\otimes
e_{x_{s^{-1}(N)}\,y_{s^{-1}(N)}}\otimes {\rm I}\otimes{\rm I}\ldots.
\end{split}
\end{eqnarray}
Hence, using properties {\rm(2)}-{\rm(3)} on page
\pageref{page11}, we obtain
\begin{eqnarray}\label{relurho}
\begin{split}
U(s)\left(\rho\left(\gamma_1\right)\otimes\rho\left(\gamma_2\right)\otimes\ldots
\otimes\rho\left(\gamma_N\right)\otimes \ldots
\right)U(s)^*\\
=\rho\left(\gamma_{s^{-1}(1)}\right)\otimes
\rho\left(\gamma_{s^{-1}(2)}\right)\otimes
\ldots\otimes\rho\left(\gamma_{s^{-1}(N)}\right)\otimes\ldots
\end{split}
\end{eqnarray}
for all $s\in\mathfrak{S}_\infty$, $\gamma_l\in\Gamma $.
Now we define the operators $\Pi_A^\rho(s)$,
$\left(s\in\mathfrak{S}_\infty \right)$ and $\Pi_A^\rho(\gamma )$,
$\left(\gamma =\left(\gamma_1,\gamma_2,\ldots
\right)\in\Gamma^\infty_e \right)$ on $\mathcal{H}_A^\rho$ as
follows
\begin{eqnarray}\label{Piarho}
\begin{split}
\Pi_A^\rho(s)v=U(s)v,\;\;\;v\in \mathcal{H}_A^\rho;\\
\Pi_A^\rho(\gamma)v=\left(\rho\left(\gamma_1\right)\otimes\rho\left(\gamma_2\right)\otimes\ldots
\right)v.
\end{split}
\end{eqnarray}
By (\ref{relurho}), $\Pi_A^\rho$ extends to a unitary
representation of $\Gamma\wr\mathfrak{S}_\infty$.
The next proposition follows from the definition of Hilbert space
$\mathcal{H}_A^\rho$ (see paragraph \ref{paragraph2.2}) and
proposition \ref{multiplicativity}.
\begin{Prop}\label{Prop11a}
Let $I$ be the unit in $\mathcal{B}\left(\mathcal{H}
\right)^{\otimes\infty}$. Identify the elements of
$\mathcal{B}\left(\mathcal{H} \right)^{\otimes\infty}$ with the
corresponding vectors in $\mathcal{H}_A^\rho$. Put $\psi_A^\rho
\left(s\gamma \right)=\left(\Pi_A^\rho(s)\Pi_A^\rho(\gamma )I,I
\right)$. Then $\psi_A^\rho$ is an indecomposable
$\mathfrak{S}_\infty$-central state on
$\Gamma\wr\mathfrak{S}_\infty$ (see definitions \ref{indecomposable}
and \ref{def central}).
\end{Prop}
Let $A_1$, $A_2$ be self-adjoint operators of {\it trace
class} (see {\cite{RS}}) in $\mathcal{B}(\mathcal{H})$ with the
property ${\rm Tr} (|A_j|)\leq1$ $\left(j=1,2 \right)$, and let
$\rho_1$, $\rho_2$ be unitary representations of $\Gamma$:
$\rho_i:\Gamma\ni\gamma\mapsto\rho_i(\gamma)\in
\mathcal{B}(\mathcal{H})$.
\begin{Prop}\label{Prop12}
Let $\left(\mathcal{H}_i,A_i,\rho_i,\hat{\xi}_i\right)$, $i=1,2$
satisfy assumptions {\rm(1)}-{\rm(4)} (paragraph
\ref{paragraph2.1}). The equality
$\psi_{A_1}^{\rho_1}=\psi_{A_2}^{\rho_2}$ holds if and only if there
exists an isometry $\mathcal{U}:\mathcal{H}_1\to\mathcal{H}_2$ such that
\begin{eqnarray}\label{unitaryeq}
\hat{\xi}_2=\mathcal{U}\hat{\xi}_1,\;A_2=\mathcal{U}A_1\mathcal{U}^{-1}
\text{ and }\;\; \rho_2(\gamma )=\mathcal{U}\rho_1(\gamma
)\mathcal{U}^{-1} \text{ for all } \gamma \in\Gamma.
\end{eqnarray}
\end{Prop}
\begin{proof}
Assume that (\ref{unitaryeq}) holds. It follows from (\ref{psik}) and
proposition \ref{Prop11a} that
$\psi_{A_1}^{\rho_1}=\psi_{A_2}^{\rho_2}$.
Conversely, suppose that $\psi_{A_1}^{\rho_1}=\psi_{A_2}^{\rho_2}$.
Denote by $\Pi_A^{\rho\,0}$ the restriction of $\Pi_A^\rho$ to the subspace $
\left[ \Pi_A^\rho\left(\Gamma\wr\mathfrak{S}_\infty \right)I\right]$
generated by the vectors $
\left\{\Pi_A^\rho\left(\Gamma\wr\mathfrak{S}_\infty \right)I
\right\}$. Let $\left(l\,k \right)$ be the transposition
interchanging $l$ and $k$. By the construction of the
representation $\Pi_A^\rho$ and properties {\rm (i)}-{\rm (ii)} from
paragraph \ref{paragraph2.3}, there exists an operator
\begin{eqnarray}\label{astrans}
\mathcal{O}_l=w-\lim\limits_{k\to\infty}\Pi_A^\rho\left( \left(l\,k
\right)\right)
\end{eqnarray}
and
\begin{eqnarray}\label{Oaction}
\mathcal{O}_l\left(a_1\otimes a_2\otimes\ldots\right)=b_1\otimes
b_2\otimes\ldots,\text{ where }
b_k=\left\{\begin{array}{ll}a_k,&\textit{ if }\;k\neq l,\\
Aa_k,&\textit{ if }\;k=l. \end{array}\right.
\end{eqnarray}
Let $\mathfrak{A}_l^{A\,\rho}$ be the $w^*$-algebra in
$\Pi_A^\rho\left(\Gamma\wr\mathfrak{S}_\infty
\right)^{\prime\prime}$ generated by $\mathcal{O}_l$ and
$\underbrace{{\rm I}\otimes\ldots\otimes{\rm
I}}_{l-1}\otimes\rho(\gamma )\otimes{\rm I}\otimes{\rm
I}\otimes\ldots$, $\gamma \in\Gamma$. Denote by $\mathcal{P}_0$ the
orthogonal projection of $\mathcal{H}_A^\rho$ onto $ \left[
\mathfrak{A}_l^{A\,\rho} I\right]$.
First we prove that the $w^*$-algebra $
\left\{A,\rho(\Gamma)
\right\}^{\prime\prime}\subset\mathcal{B}(\mathcal{H})$ generated by
$A$ and $\rho(\Gamma)$ is isomorphic to the $w^*$-algebra
$\mathfrak{A}_l^{A\,\rho}\mathcal{P}_0$. Namely, the map
\begin{eqnarray}\begin{split}
\mathfrak{m}_l:A\mapsto \mathcal{O}_l\mathcal{P}_0,\\
\mathfrak{m}_l:\rho(\gamma)\mapsto \left(\underbrace{{\rm
I}\otimes\ldots\otimes{\rm I}}_{l-1}\otimes\rho(\gamma )\otimes{\rm
I}\otimes{\rm I}\otimes\ldots\right)\mathcal{P}_0
\end{split}\end{eqnarray} extends to an isomorphism of $\left\{A,\rho(\Gamma)
\right\}^{\prime\prime}$ onto
$\mathfrak{A}_l^{A\,\rho}\mathcal{P}_0$.
Using (\ref{Oaction}) and the definition of $\Pi_A^\rho$, we can
regard $\mathfrak{m}_l$ as the GNS-representation of $
\left\{A,\rho(\Gamma)
\right\}^{\prime\prime}\subset\mathcal{B}(\mathcal{H})$
corresponding to $\psi_l$ (see (\ref{psik})). Thus ${\rm Ker}\,
\mathfrak{m}_l= \left\{a\in \left\{A,\rho(\Gamma)
\right\}^{\prime\prime}:\mathfrak{m}_l\,(a)=0\right\}$ is a weakly
closed two-sided ideal. Therefore, there exists a unique orthogonal
projection $e$ in the center of $\left\{A,\rho(\Gamma)
\right\}^{\prime\prime}$ such that
\begin{eqnarray}\label{Kerml}
{\rm Ker}\,\mathfrak{m}_l
=e\left\{A,\rho(\Gamma)\right\}^{\prime\prime} \text{ (see
\cite{Tak1})}.
\end{eqnarray}
Let us prove that $e=0$.
Denote by $c\left(\widetilde{P}\right)$ the central support of the
orthogonal projection $\widetilde{P}\in \left\{A,\rho(\Gamma)
\right\}^\prime$ with $\widetilde{P}\mathcal{H}=\widetilde{\mathcal{H}}$
(see property {\rm (2)} from paragraph \ref{paragraph2.1}).
Let us first show that
\begin{eqnarray}\label{ecentrsupp}
e\,c\left(\widetilde{P}\right)=0.
\end{eqnarray}
Suppose, contrary to our claim, that $e\,c\left(\widetilde{P}
\right)\neq 0$. Then, since the map $\left\{A,\rho(\Gamma)
\right\}^{\prime\prime}\,c\left(\widetilde{P}\right)\ni a\mapsto
a\widetilde{P}\in\left\{A,\rho(\Gamma)
\right\}^{\prime\prime}\widetilde{P}$ is an isomorphism, we obtain
$e\,\widetilde{P}\neq0$. It follows from properties {\rm (1)}-{\rm
(3)} (paragraph \ref{paragraph2.1}) that
$e\,\left(P_{]0,1]}+P_{[-1,0[}\right)\neq 0$. Thus, by
(\ref{psik}), $\psi_l(e)\neq 0$. Therefore,
$e\notin {\rm Ker}\,\mathfrak{m}_l$, which contradicts
(\ref{Kerml}).
Now, using (\ref{ecentrsupp}) and property {\rm (2)} (paragraph
\ref{paragraph2.1}), we have
\begin{eqnarray}
e\,\left(I-c\left(\widetilde{P} \right) \right)\mathcal{H}\subseteq
\mathcal{H}_{reg}.
\end{eqnarray}
Therefore, if $e\,\left(I-c\left(\widetilde{P} \right)
\right)\neq0$, then, using property {\rm (4)} (paragraph
\ref{paragraph2.1}), we obtain
\begin{eqnarray}
e\,\left(I-c\left(\widetilde{P} \right)
\right)\mathfrak{e}_{l1}^\prime\hat{\xi }\neq0.
\end{eqnarray}
Again, by
(\ref{psik}), $\psi_l(e)\neq 0$ and $e\notin {\rm
Ker}\,\mathfrak{m}_l$. It follows from (\ref{Kerml}) that
\begin{eqnarray}
e\,\left(I-c\left(\widetilde{P} \right) \right)=0.
\end{eqnarray}
Hence, using (\ref{ecentrsupp}), we obtain
\begin{eqnarray}\label{Kerzero}
{\rm Ker}\,\mathfrak{m}_l =0.
\end{eqnarray}
Now suppose that $\psi_{A_1}^{\rho_1}=\psi_{A_2}^{\rho_2}$. Let
$\mathcal{O}_l^{(1)}$ and $\mathcal{O}_l^{(2)}$ be the operators,
which are defined by formula (\ref{astrans}) for representations
$\Pi_{A_1}^{\rho_1}$ and $\Pi_{A_2}^{\rho_2}$ respectively. If
$\mathfrak{I}_l$ is the multiplicative extension of the map
\begin{eqnarray*}
\mathcal{O}_l^{(1)}\mathcal{P}_0\mapsto\mathcal{O}_l^{(2)}\mathcal{P}_0,\\
\underbrace{{\rm I}\otimes\ldots\otimes{\rm
I}}_{l-1}\otimes\rho_1(\gamma )\otimes{\rm I}\otimes{\rm
I}\otimes\ldots\mapsto \underbrace{{\rm I}\otimes\ldots\otimes{\rm
I}}_{l-1}\otimes\rho_2(\gamma )\otimes{\rm I}\otimes{\rm
I}\otimes\ldots.
\end{eqnarray*}
then
\begin{eqnarray}\label{invform}
\left(\mathfrak{I}_l(a)I,\mathfrak{I}_l(b)I\right)=\left(aI,bI\right)
\text{ for all } \;a,b\in \mathfrak{A}_l^{A_1\,\rho_1}\mathcal{P}_0.
\end{eqnarray}
It follows from (\ref{Kerzero}) that the map
\begin{eqnarray}\label{thetaiso}
\left\{A_1,\rho_1(\Gamma) \right\}^{\prime\prime}\ni
a\stackrel{\theta}{\mapsto} \mathfrak{m}_l^{-1}
\circ\mathfrak{I}_l\circ\mathfrak{m}_l(a)\in
\left\{A_2,\rho_2(\Gamma) \right\}^{\prime\prime}
\end{eqnarray}
is an isomorphism. Since $\psi_{A_1}^{\rho_1}=\psi_{A_2}^{\rho_2}$,
using the definition of $\psi_{A}^{\rho}$, in particular (\ref{psik}), we obtain for all
$ v\in\left\{A_1,\rho_1(\Gamma) \right\}^{\prime\prime}$:
\begin{eqnarray}\label{psieq}
\begin{split}
{\rm Tr}\left(v|A_1| \right)+\left(1-{\rm Tr}\left(|A_1| \right)
\right)\left(v\hat{\xi}_1,\hat{\xi}_1 \right)\\= {\rm
Tr}\left(\theta(v)|A_2| \right)+\left(1-{\rm Tr}\left(|A_2| \right)
\right)\left(\theta(v)\hat{\xi}_2,\hat{\xi}_2 \right).
\end{split}
\end{eqnarray}
Without loss of generality we can assume that
$\left\{A_1,\rho_1(\Gamma) \right\}^{\prime\prime},
\left\{A_2,\rho_2(\Gamma) \right\}^{\prime\prime}\subset\mathcal{B}\left(\mathcal{H}
\right)$. Let $P_{[-1,0[}^{(i)}$, $P_{]0,1]}^{(i)}$ be the spectral
projections of $A_i$ $(i=1,2)$. Put
$P_\pm^{(i)}=P_{[-1,0[}^{(i)}+P_{]0,1]}^{(i)}$. It is clear that $\left({\rm Ker}\, A_i\right)^\perp=
P_\pm^{(i)}\mathcal{H}_i$. Denote by $\widetilde{\mathcal{H}}_i$ the
subspace $ \left[\left\{A_i,\rho_i(\Gamma) \right\}^{\prime\prime}
P_\pm^{(i)}\mathcal{H}_i\right]$. Let $\widetilde{P}_i$ be the
orthogonal projection of $\mathcal{H}_i$ onto
$\widetilde{\mathcal{H}}_i$, and put $P^{(i)}_{reg}=I-\widetilde{P}_i$. For $\alpha \in {\rm Spectrum }\, A_i$
denote by $P_\alpha^{(i)}$ the corresponding spectral projection.
Now, using properties of $\left(A_i,\rho_i \right)$ (see paragraph
{\ref{paragraph2.1}}), we have
\begin{eqnarray}
{\rm
dim}\,P_\alpha^{(i)}\mathcal{H}_i<\infty\;\text{ and
}\;P_\pm^{(i)}=\sum\limits_{\alpha \in{\rm Spectrum }\, A_i:\alpha
\neq0}P_\alpha^{(i)}.
\end{eqnarray}
Therefore, there exists a collection $ \left\{c^{(i)}_j
\right\}_{j=1}^{N}$ of pairwise orthogonal projections in the
center of the $w^*$-algebra
$P_\pm^{(i)}\left\{A_i,\rho_i(\Gamma)
\right\}^{\prime\prime}P_\pm^{(i)}$ with the properties
\begin{eqnarray}\label{properties of central proj}
\begin{split}
\theta\left( c^{(1)}_j\right)=c^{(2)}_j \text{ (see (\ref{thetaiso})) };\;\;\;\sum\limits_{j=1}^{N}c^{(i)}_j=P_\pm^{(i)};\\
c^{(i)}_jP_\pm^{(i)}\left\{A_i,\rho_i(\Gamma)
\right\}^{\prime\prime}P_\pm^{(i)}c^{(i)}_j \text{ is a factor of type
} I_{n_j}.
\end{split}
\end{eqnarray}
Fix a matrix unit $ \left\{f^{(j)}_{k\,l} \right\}_{k,l=1}^{n_j}\subset
c^{(1)}_jP_\pm^{(1)}\left\{A_1,\rho_1(\Gamma)
\right\}^{\prime\prime}P_\pm^{(1)}c^{(1)}_j$ which is a linear basis of
this algebra and whose minimal projections
$\left\{f^{(j)}_{k\,k} \right\}_{k=1}^{n_j}$ satisfy the condition
\begin{eqnarray}\label{spectrprmatrixunit}
P_\alpha^{(1)}f^{(j)}_{k\,k} =f^{(j)}_{k\,k}P_\alpha^{(1)} \;\text{
for all } \alpha \in {\rm Spectrum}\, A_1;\; k,j\in\mathbb{N}.
\end{eqnarray}
Now, using (\ref{invform}), (\ref{thetaiso}), (\ref{psieq}) and
definition of $\Pi_A^\rho$ (see paragraphs
\ref{paragraph2.1},\ref{paragraph2.2}, \ref{paragraph2.3}), we have
\begin{eqnarray}
{\rm Tr}\, \left( f^{(j)}_{k\,k}\right)= {\rm Tr}\, \left(\theta\left(
f^{(j)}_{k\,k}\right)\right) \text{ for all } \; k,j\in\mathbb{N}.
\end{eqnarray}
Therefore, there exists an isometry $U:P_\pm^{(1)}\mathcal{H}_1\to
P_\pm^{(2)}\mathcal{H}_2 $ such that
$UP_\pm^{(1)}\mathcal{H}_1=P_\pm^{(2)}\mathcal{H}_2$ and
\begin{eqnarray}\label{isomorphismSp}
Uf^{(j)}_{k\,k}U^{-1}=\theta\left( f^{(j)}_{k\,k}\right)
\text{ for } \; k=1,2,\ldots n_j;\;\;j=1,2,\ldots,N.
\end{eqnarray}
Let $\mathcal{C}_i$ be the center of $w^*$-algebra
$\left\{A_i,\rho_i(\Gamma)
\right\}^{\prime\prime}$ and let
$c\left(P_\pm^{(i)}\right)\in\mathcal{C}_i$ be the central support
of $P_\pm^{(i)}$. It follows from this and (\ref{properties of central proj})
that there exist pairwise orthogonal projections
$ \left\{C^{(i)}_j \right\}_{j=1}^{N}\subset c\left(P_\pm^{(i)}\right)\cdot\mathcal{C}_i $
with the following properties
\begin{eqnarray}
\begin{split}
c^{(i)}_j= C^{(i)}_j\cdot
P_\pm^{(i)},\;\;\;\;\sum\limits_{j=1}^NC^{(i)}_j=c\left(
P_\pm^{(i)}\right),\\
C^{(i)}_j\left\{A_i,\rho_i(\Gamma)
\right\}^{\prime\prime}C^{(i)}_j \text{ is a factor of type
} I_{N_j}.
\end{split}
\end{eqnarray}
The algebra $C^{(1)}_j\left\{A_1,\rho_1(\Gamma)
\right\}^{\prime\prime}C^{(1)}_j$ contains a matrix unit
$ \left\{f^{(j)}_{k\,l} \right\}_{k,l=1}^{N_j}$ $\left(n_j\geq N_j \right)$.
Now, applying (\ref{isomorphismSp}), we obtain that
\begin{eqnarray}
\widetilde{U}=\sum_{j=1}^N\sum_{k=1}^{N_j}\theta\left(f_{k1}^{(j)}\right)Uf_{1k}^{(j)}
\end{eqnarray}
is an isometry of $c\left(P_\pm^{(1)}\right)\mathcal{H}_1 $ onto
$c\left(P_\pm^{(2)}\right)\mathcal{H}_2 $. An easy computation shows
that $\widetilde{U}f_{kl}^{(j)}
\widetilde{U}^{-1}=\theta\left(f_{kl}^{(j)} \right)$ for
$k,l=1,2,\ldots,N_j;$ $j=1,2,\ldots,N$. Thus
\begin{eqnarray}\label{Utilde}
\theta(a)=\widetilde{U}a \widetilde{U}^{-1}\; \text{ for all }\;
a\in c\left(P_\pm^{(1)}\right)\left\{A_1,\rho_1(\Gamma)
\right\}^{\prime\prime}.
\end{eqnarray}
Hence, using (\ref{psieq}) and relations
$\theta\left(\left|A_1\right| \right)=\left|A_2\right|$,
$\theta\left(c\left(P_\pm^{(1)}\right)
\right)=c\left(P_\pm^{(2)}\right)$, which follow from the
definition of $\theta$ (see (\ref{thetaiso})), we have
\begin{eqnarray}\label{0.68}
\begin{split}
\left(\left(I-c\left(P_\pm^{(2)}\right)
\right)\theta(v)\hat{\xi}_2,\hat{\xi}_2
\right)=\left(\left(I-c\left(P_\pm^{(1)}\right)
\right)v\hat{\xi}_1,\hat{\xi}_1\right).
\end{split}
\end{eqnarray}
Since $\widetilde{P}_i\leq c\left(P_\pm^{(i)}\right)$, we have
\begin{eqnarray}\label{0.69}
I-c\left(P_\pm^{(i)}\right)\leq P_{reg}^{(i)}, \;\;\; i=1,2.
\end{eqnarray}
Denote by
$\left\{\mathfrak{e}_{kl}^{(i)\prime},\;k,l\in\mathbb{N}\right\}$
$\left(i=1,2 \right)$ the matrix unit from property {\rm (4)} of paragraph
\ref{paragraph2.1}. Now define the map $V$ as follows:
\begin{eqnarray*}
a\left(I-c\left(P_\pm^{(1)}\right)\right)\hat{\xi}_1
\stackrel{V}{\mapsto}\theta(a) \left(I-c\left(P_\pm^{(2)}\right)\right)\hat{\xi}_2, \;\text{ where }\;
a\in\left\{A_1,\rho_1(\Gamma)
\right\}^{\prime\prime}.
\end{eqnarray*}
By (\ref{0.68}) and (\ref{0.69}), $V$ extends to an isometry
of $\left(I-c\left(P_\pm^{(1)}\right)\right)\mathfrak{e}_{11}^{(1)\prime}
\mathcal{H}_1\subset P_{reg}^{(1)}\mathcal{H}_1$
onto $\left(I-c\left(P_\pm^{(2)}\right)\right)\mathfrak{e}_{11}^{(2)\prime}
\mathcal{H}_2\subset P_{reg}^{(2)}\mathcal{H}_2$, and for all
$a\in\left\{A_1,\rho_1(\Gamma)
\right\}^{\prime\prime}$
\begin{eqnarray*}
V\left(I-c\left(P_\pm^{(1)}\right)\right)a\mathfrak{e}_{11}^{(1)\prime}V^{-1}=
\left(I-c\left(P_\pm^{(2)}\right)\right)\theta(a)\mathfrak{e}_{11}^{(2)\prime}.
\end{eqnarray*}
It follows that $\widetilde{V}=\sum\limits_{k=1}^\infty
\mathfrak{e}_{k1}^{(2)\prime}V \left(I-c\left(P_\pm^{(1)}\right)\right)\mathfrak{e}_{1k}^{(1)\prime}$
is an isometry of $\left(I-c\left(P_\pm^{(1)}\right)\right)
\mathcal{H}_1$ onto $\left(I-c\left(P_\pm^{(2)}\right)\right)
\mathcal{H}_2$ satisfying the relation
\begin{eqnarray*}
\widetilde{V}\left(I-c\left(P_\pm^{(1)}\right)\right)a\widetilde{V}^{-1}=
\left(I-c\left(P_\pm^{(2)}\right)\right)\theta(a) \;\;\;\;\; \left(a\in\left\{A_1,\rho_1(\Gamma)
\right\}^{\prime\prime}\right).
\end{eqnarray*}
Hence, using (\ref{Utilde}), we obtain that
$W=\widetilde{U}c\left(P_\pm^{(1)}\right)+
\widetilde{V}\left(I-c\left(P_\pm^{(1)}\right)\right)$ is an isometry of
$\mathcal{H}_1$ onto $\mathcal{H}_2$
and
\begin{eqnarray}
WaW^{-1}=\theta(a) \text{ for all }\;\; a\in\left\{A_1,\rho_1(\Gamma)
\right\}^{\prime\prime}.
\end{eqnarray}
Now, on account of the definition of $\theta$ and (\ref{psieq}), one can easily check that
\begin{eqnarray}\label{0.71}
\begin{split}
W\hat{\xi}_1\perp \left[\left\{A_2,\rho_2(\Gamma) \right\}^{\prime\prime}
P_\pm^{(2)}\mathcal{H}_2\right]=\widetilde{\mathcal{H}}_2\;\;\;\text{ and }\\
\left(aW\hat{\xi}_1, W\hat{\xi}_1 \right)=\left(a\hat{\xi}_2,
\hat{\xi}_2 \right) \text{ for all }\;\;
a\in\left\{A_2,\rho_2(\Gamma)
\right\}^{\prime\prime}.
\end{split}
\end{eqnarray}
Define a linear map $K$ by $K\left(v \right)=
\left\{\begin{array}{ll}
a\hat{\xi}_2 ,&\text{ if }\;v=aW\hat{\xi}_1, \;\;\; a\in\left\{A_2,\rho_2(\Gamma)
\right\}^{\prime\prime},\\
0 ,&\text{ if }\;v\in \mathcal{H}_2\ominus\left[\left\{A_2,\rho_2(\Gamma)
\right\}^{\prime\prime}
W\hat{\xi}_1\right] .\end{array}\right.$
It follows from (\ref{0.71}) that $K$ extends to a partial isometry in
$\left\{A_2,\rho_2(\Gamma)
\right\}^{\prime}$. Therefore, there exists a unitary $\widetilde{K}\in\left\{A_2,\rho_2(\Gamma)
\right\}^{\prime}$ with the property that $\widetilde{K}v=Kv$ for all
$v\in\left[\left\{A_2,\rho_2(\Gamma)
\right\}^{\prime\prime}
W\hat{\xi}_1\right]$. Thus $\mathcal{U}=\widetilde{K}W$ satisfies
the conditions of proposition \ref{Prop12}.
\end{proof}
\paragraph{The parameters of the states from paragraph \ref{parnatexmmpl}.}
Here we follow the notation of paragraphs \ref{parnatexmmpl} and
\ref{paragraph2.1}.
\subparagraph{State $\varphi_{sp}$.} Below we find parameters
$\left(\mathcal{H}, A,\widetilde{\mathcal{H}},\rho \right)$ from
paragraph \ref{paragraph2.1} such that
$\varphi_{sp}=\psi_A^\rho$, where $\psi_A^\rho$ is defined in
proposition \ref{Prop11a}.
Let $\left(\rho,\mathcal{H}_\varphi, \xi_\varphi \right)$ be the
GNS-representation of the group $\Gamma$ corresponding to $\varphi$,
where $\varphi(\gamma)=\left(\rho(\gamma)\xi_\varphi,\xi_\varphi
\right)$ for all $\gamma \in\Gamma $ and $\mathcal{H}_\varphi=
\left[\rho\left(\Gamma \right)\xi_\varphi\right]$.
An easy computation shows that $\mathcal{H}=\mathcal{H}_\varphi$,
$A$ acts by
\begin{eqnarray}
A\xi= \left(\xi,\xi_\varphi \right)\xi_\varphi \;\;\;
(\xi\in\mathcal{H}),
\end{eqnarray}
and $\widetilde{\mathcal{H}}=\mathcal{H}$. It is clear that
$\mathcal{H}_{reg}=0$.
\subparagraph{State $\varphi_{reg}$.} As above, $\left(\rho_\varphi,\mathcal{H}_\varphi, \xi_\varphi
\right)$ is the GNS-representation of $\Gamma$. If $\left(\rho_\varphi^{(k)},
\mathcal{H}_\varphi^{(k)}, \xi_\varphi^{(k)}\right)$ is the $k$-th
copy of $\left(\rho_\varphi,\mathcal{H}_\varphi, \xi_\varphi
\right)$, then $$\mathcal{H}=\mathcal{H}_{reg}=
\bigoplus\limits_{k=1}^\infty\mathcal{H}_\varphi^{(k)}.$$
It is obvious that $A\equiv 0$. Now define $\mathfrak{e}_{kl}^\prime$
by
$$\mathfrak{e}_{kl}^\prime\left(\xi_1, \xi_2,\ldots \right)=
\left(\underbrace{0,\ldots,0}_{k-1},\xi_l,0,0,\ldots\right).$$
Put $\rho=\bigoplus\limits_{k=1}^\infty\rho_\varphi^{(k)}$,
$\hat{\xi}=\left(\xi_\varphi,0,0,\ldots \right)$. It is easy to
check that $\varphi_{reg}=\psi_0^\rho$.
\paragraph{$\mathfrak{S}_\infty$-invariance of $\psi_A^\rho$.} The
next assertion follows from the definition of $\psi_A^\rho$.
\begin{Prop}\label{Prop13a}
Let $s\in\mathfrak{S}_\infty$,
$\gamma=\left(\gamma_1,\gamma_2,\ldots\right)\in\Gamma^\infty_0$. If
$s \gamma = \prod\limits_{p\in \mathbb{N}\diagup s}
s_p \gamma(p)$, where $s_p \gamma(p)$ is a generalized cycle of
$s\gamma$ (see (\ref{product})), then
$\psi_A^\rho\left(s\gamma \right)=\prod\limits_{p\in \mathbb{N}\diagup s}
\psi_A^\rho\left(s_p \gamma(p) \right)$. In particular, it follows
from Proposition \ref{multiplicativity} that $\psi_A^\rho$ is an
indecomposable state on $\Gamma\wr\mathfrak{S}_\infty$.
\end{Prop}
Denote by $\left(n_1\;\,n_2\;\,\ldots \;\, n_k \right)$ the cycle $
\left\{n_1\mapsto n_2\mapsto\ldots\mapsto n_k\mapsto n_1
\right\}\in\mathfrak{S}_\infty$. Suppose that
$\gamma=\left(\gamma_1,\gamma_2,\ldots\right)\in \Gamma^\infty_e$
satisfies the condition: $\gamma_i=e$ for all $i\notin
\left\{n_1,n_2,\ldots,n_k \right\}$. If ${\rm Tr}\left(|A|
\right)=1$ and $c_k=\left(n_1\;\,n_2\;\,\ldots \;\, n_k \right)$, then,
using (\ref{psik}), we have
\begin{eqnarray}\label{formula1}
\psi_A^\rho\left(c_k\gamma\right)={\rm Tr}^{\otimes N}\big(
U\left(c_k\right)\left(\rho\left(\gamma_1\right)\otimes\rho\left(\gamma_2\right)\otimes\ldots
\otimes\rho\left(\gamma_N\right) \right)
A^{\otimes N}\big)
\end{eqnarray}
for all $N\geq {\rm max}\left\{n_1,n_2,\ldots,n_k \right\}$, where
${\rm Tr}^{\otimes N}$ is the ordinary trace on
$\mathcal{B}\left(\mathcal{H} \right)^{\otimes N}$, $ A^{\otimes
N}=\underbrace{A\otimes\ldots\otimes A}_N$. The next lemma extends
formula (\ref{formula1}) to the general case.
\begin{Lm}\label{Lm14}
If $k>1$ then
\begin{eqnarray*}
\psi_A^\rho\left(c_k\gamma\right)=
{\rm Tr}^{\otimes N}\big(
U\left(\left(n_1\;\,n_2\;\,\ldots \;\, n_k \right)\right)\left(\rho\left(\gamma_{n_1}\right)
\otimes\rho\left(\gamma_{n_2}\right)\otimes\ldots
\otimes\rho\left(\gamma_{n_k}\right) \right)
A^{\otimes k}\big).
\end{eqnarray*}
\end{Lm}
\begin{proof}
Let $\widetilde{P}$ be the orthogonal projection onto the subspace
$\widetilde{\mathcal{H}}=\mathcal{H}_+\oplus\mathcal{H}_-$ (see
paragraph \ref{paragraph2.1}). Put $E=E_1\otimes
E_2\otimes\ldots\otimes E_N\otimes\ldots$, where $E_i=
\left\{\begin{array}{ll}\widetilde{P}+\mathfrak{e}^\prime_{ii},
&\textit{ if }\;i=n_j,\\
I_\mathcal{H},&\textit{ if } i\neq n_j \text{ for all } j\in \left\{1,2,\ldots,k \right\}.
\end{array}\right.$
Considering the identity operator $I\in\mathcal{B}\left(\mathcal{H}
\right)^{\otimes\infty}$ as an element of $\mathcal{H}_A^\rho$, we obtain from
(\ref{psik}), (\ref{psik1}), (\ref{psik2})
\begin{eqnarray}\label{EII}
E I=I.
\end{eqnarray}
It follows from (\ref{UmatrixUnit}) that
\begin{eqnarray}\label{UEU}
\widetilde{E}=U\left(c_k \right)EU\left(c_k \right)^*E
=\widetilde{E}_1\otimes
\widetilde{E}_2\otimes\ldots\otimes\widetilde{E}_N\otimes\ldots,
\end{eqnarray}
where $\widetilde{E}_i=\left\{\begin{array}{ll}\widetilde{P},
&\textit{ if }\;i=n_j,\\
I_\mathcal{H},&\textit{ if } i\neq n_j \text{ for all } j\in \left\{1,2,\ldots,k \right\}.
\end{array}\right.$
By properties (1)-(4) from paragraph \ref{paragraph2.1}, using
(\ref{Piarho}) and (\ref{UmatrixUnit}), we obtain
\begin{eqnarray}\label{relee}
\Pi_A^\rho\left(\gamma\right)E=E\Pi_A^\rho\left(\gamma\right),\;
\Pi_A^\rho\left(\gamma\right)\widetilde{E}
=\widetilde{E}\Pi_A^\rho\left(\gamma\right).
\end{eqnarray}
Thus
\begin{eqnarray}
\begin{split}
\psi_A^\rho\left(c_k\gamma\right)=
\left(\Pi_A^\rho\left(c_k \right)\Pi_A^\rho(\gamma )I,I
\right)\stackrel{(\ref{EII})}{=}
\left(\Pi_A^\rho\left(c_k \right)\Pi_A^\rho(\gamma )EI,EI \right)\\
=\left(\Pi_A^\rho\left(c_k \right)\Pi_A^\rho(\gamma )
\Pi_A^\rho\left(c_k \right)^*\left[\Pi_A^\rho\left(c_k \right)E
\Pi_A^\rho\left(c_k \right)^*\right]\Pi_A^\rho\left(c_k \right)I,EI
\right)\\
\stackrel{(\ref{relee})}{=}
\left(\Pi_A^\rho\left(c_k \right)\Pi_A^\rho(\gamma )
\Pi_A^\rho\left(c_k \right)^*\Pi_A^\rho\left(c_k \right)I,\left[\Pi_A^\rho\left(c_k \right)E
\Pi_A^\rho\left(c_k \right)^*\right]EI \right)\\
\stackrel{(\ref{UEU})}{=}
\left(\Pi_A^\rho\left(c_k \right)\Pi_A^\rho(\gamma )I,
\widetilde{E}I \right)\stackrel{(\ref{UEU}),(\ref{UmatrixUnit})}{=}
\left(\Pi_A^\rho\left(c_k \right)\Pi_A^\rho(\gamma )\widetilde{E}I,
\widetilde{E}I \right).
\end{split}
\end{eqnarray}
Hence, applying (\ref{psik}), (\ref{psik1}), (\ref{psik2}), we obtain
for $N\geq {\rm max}\left\{n_1,n_2,\ldots,n_k \right\}$
$\psi_A^\rho\left(c_k\gamma\right)=\,_1\psi_N
\left(\widetilde{E}U\left(c_k\right)
\left(\rho\left(\gamma_1\right)\otimes\rho
\left(\gamma_2\right)\otimes\ldots
\otimes\rho\left(\gamma_N\right) \right)\widetilde{E}\right)$. Since
$\widetilde{P}\perp\mathfrak{e}_{kk}^\prime$ for all $k$, we have
$_1\psi_N\left(\widetilde{E}U\left(c_k\right)
\left(\rho\left(\gamma_1\right)\otimes\rho\left(\gamma_2\right)\otimes\ldots
\otimes\rho\left(\gamma_N\right) \right)\widetilde{E}\right)\\=
{\rm Tr}^{\otimes N}\big(
U\left(\left(n_1\;\,n_2\;\,\ldots \;\, n_k \right)\right)\left(\rho\left(\gamma_{n_1}\right)
\otimes\rho\left(\gamma_{n_2}\right)\otimes\ldots
\otimes\rho\left(\gamma_{n_k}\right) \right)
A^{\otimes k}\big)$.
\end{proof}
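As a numerical cross-check of the lemma just proved (our own sketch, for the simplest possible data: trivial $\Gamma$, $\dim\mathcal{H}=2$, and $|A|$ diagonal with eigenvalue $a=0.6$ on $\mathcal{H}_+$ and $b=0.4$ on $\mathcal{H}_-$, so ${\rm Tr}(|A|)=1$), one can evaluate $\psi_A^\rho(c_k)=\,_1\psi_N\left(U(c_k)\right)$ directly from (\ref{psik}) as a signed sum over letter configurations that are constant along the cycle, weighted by the eigenvalues of $|A|$; the result is independent of the truncation level $N\geq k$ and equals the Thoma-type value $a^k+(-1)^{k-1}b^k$. All names and conventions below are ours.

```python
from itertools import product

A_POS, A_NEG = 0.6, 0.4    # eigenvalues of |A| on H_+ and H_-; Tr|A| = 1
W = [A_POS, A_NEG]         # letter 0 spans H_+, letter 1 spans H_-
NEG = {1}

def sign_cocycle(x, s):
    # sign of the permutation c(x,s), via its inversion count
    idx = [i for i, xi in enumerate(x) if xi in NEG]
    sinv = [0] * len(s)
    for i, si in enumerate(s):
        sinv[si] = i
    order = sorted(range(len(idx)), key=lambda k: sinv[idx[k]])
    inv = sum(order[a] > order[b]
              for a in range(len(order)) for b in range(a + 1, len(order)))
    return -1 if inv % 2 else 1

def psi_cycle(k, N):
    # psi_A(c_k) evaluated as a signed |A|-weighted sum; the diagonal
    # weight kills every configuration not constant along the cycle
    s = [(i + 1) % k for i in range(k)] + list(range(k, N))
    total = 0.0
    for x in product(range(2), repeat=N):
        if all(x[s[i]] == x[i] for i in range(N)):
            w = 1.0
            for xi in x:
                w *= W[xi]
            total += sign_cocycle(list(x), s) * w
    return total

for k in (2, 3, 4):
    thoma = A_POS**k + (-1)**(k - 1) * A_NEG**k
    for N in (k, k + 1, k + 2):
        assert abs(psi_cycle(k, N) - thoma) < 1e-12
```

The $N$-independence seen here reflects the fact that the cocycle sign does not depend on the letters at positions fixed by the cycle, so the tail factors sum to ${\rm Tr}(|A|)=1$.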
\begin{Rem}
One should notice that in the case in which $c_k=1$,
\begin{eqnarray}
\psi_A^\rho\left(\gamma \right)=\prod\limits_{n=1}^\infty
\left[{\rm Tr}\left(\rho\left(\gamma_n \right)|A| \right)+\left(1-{\rm
Tr}\left(|A| \right) \right)\left(\rho\left(\gamma_n
\right)\hat{\xi},\hat{\xi} \right) \right].
\end{eqnarray}
\end{Rem}
Hence, taking into account Proposition \ref{Prop13a}, Lemma
\ref{Lm14} and (\ref{formula1}), we obtain the following important
property:
\begin{eqnarray}\label{sinfinv}
\psi_A^\rho\left(sgs^{-1} \right)=\psi_A^\rho\left(g \right) \text{
for all } s\in\mathfrak{S}_\infty, g\in \Gamma\wr
\mathfrak{S}_\infty.
\end{eqnarray}
\subsubsection{KMS-condition for the $\mathfrak{S}_\infty$-central states.}
\label{KMSsec}
\paragraph{KMS-condition for $\psi_A^\rho$.}
For the general definition of the KMS-condition we refer the reader
to the book \cite{Tak}. Here we introduce the definition of the
KMS-condition for indecomposable states only.
\begin{Def}\label{KMS} Let $\varphi$ be an indecomposable state on the group
$G$. Let $\left(\pi_\varphi,\mathcal{H}_\varphi,\xi_\varphi\right)$ be the
corresponding GNS-construction, where $\xi_\varphi$ is such that
$\varphi(g)=\left(\pi_\varphi(g)\xi_\varphi,\xi_\varphi\right)$ for each
$g\in G$. We say that $\varphi$ satisfies the KMS-condition, or that $\varphi$
is a KMS-state, if $\xi_\varphi$ is separating\footnote{This means that for
every $a\in\pi_\varphi(G)^{\prime\prime}$ the conditions $a\xi_\varphi =0$
and $a=0$ are equivalent.} for the $w^*$-algebra
$\pi_\varphi(G)^{\prime\prime}$, generated by operators $\pi_\varphi(G)$.
\end{Def}
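A toy illustration of this definition (our own example, not taken from the text): for the state $\varphi(a)={\rm Tr}\left(a\rho_0\right)$ on $M_2(\mathbb{C})$ given by a density matrix $\rho_0$, the GNS space is $M_2(\mathbb{C})$ with inner product $\langle a,b\rangle={\rm Tr}\left(b^*a\rho_0\right)$ and cyclic vector $I$, and $I$ is separating for the left-multiplication algebra exactly when $\rho_0$ is invertible. The sketch below exhibits both cases.

```python
import numpy as np

def gns_norm_sq(a, rho0):
    # squared GNS norm of the vector a·I:  Tr(a* a rho0)
    return np.trace(a.conj().T @ a @ rho0).real

e12 = np.array([[0.0, 1.0], [0.0, 0.0]])
e22 = np.diag([0.0, 1.0])

# invertible density matrix: a·I = 0 in GNS norm forces a = 0,
# so the cyclic vector is separating
rho_faithful = np.diag([0.5, 0.5])
assert gns_norm_sq(e12, rho_faithful) > 0
assert gns_norm_sq(e22, rho_faithful) > 0

# singular density matrix: e22 acts non-trivially on the GNS space
# (it fixes the class of e21, which has positive norm), yet e22·I
# has GNS norm 0 -- the cyclic vector is NOT separating
rho_singular = np.diag([1.0, 0.0])
assert abs(gns_norm_sq(e22, rho_singular)) < 1e-12
assert gns_norm_sq(e12.conj().T, rho_singular) > 0   # the class of e21 survives
```

In the singular case $\pi_\varphi(e_{22})\neq0$ while $\pi_\varphi(e_{22})\xi_\varphi=0$, so the condition of definition \ref{KMS} fails; this is the finite-dimensional shadow of the dichotomy in theorem \ref{theorem15} below.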
The main result of this paragraph is the following:
\begin{Th}\label{theorem15}
Let $\left(A, \hat{\xi}, \mathcal{H}_{reg}, \mathfrak{e}_{kl}^\prime
\right)$ satisfy the conditions {\rm (1)}-{\rm (4)} from paragraph
\ref{paragraph2.1}. The state $\psi_A^\rho$ satisfies the KMS-condition
if and only if ${\rm Ker}\, A=\mathcal{H}_{reg}$ and $\hat{\xi}$ is
cyclic and separating for the restriction
$\rho_{11}=\rho\Big|_{\mathfrak{e}_{11}^\prime\mathcal{H}_{reg}}$ of
the representation $\rho$ to the subspace
$\mathfrak{e}_{11}^\prime\mathcal{H}_{reg}$.
\end{Th}
As a preliminary to the proof of the theorem, we will discuss two
auxiliary lemmas.
\begin{Lm}\label{lemma16}
Let $\left(\pi_{\psi_k},H_{\psi_k},\xi_{\psi_k} \right)$ be the
GNS-representation of $\mathcal{B}\left(\mathcal{H} \right)$ corresponding
to the state $\psi_k$ (see (\ref{psik})). Fix any $\epsilon >0$ and denote by
$P_{[\epsilon,1 ]}$ the spectral projection of $\left|A \right|$
corresponding to the interval $[\epsilon,1]$. Then for
each $a\in\mathcal{B}\left(\mathcal{H} \right)$ the map
\begin{eqnarray*}
\mathfrak{R}_ {P_{[\epsilon,1 ]}aP_{[\epsilon,1
]}}:x\mapsto x\cdot P_{[\epsilon,1
]}\,a\,P_{[\epsilon,1 ]}
\end{eqnarray*}
extends by continuity to a bounded operator on $H_{\psi_k}$ with
$\left\| \mathfrak{R}_ {P_{[\epsilon,1 ]}aP_{[\epsilon,1
]}} \right\|_{H_{\psi_k}}\leq \frac{\left\|a
\right\|}{\sqrt{\epsilon} }$.
\end{Lm}
\begin{proof}
Put $b=P_{[\epsilon,1 ]}aP_{[\epsilon,1 ]}$. Then
\begin{eqnarray*}
&\left(\mathfrak{R}_bx,\mathfrak{R}_bx \right)_{H_{\psi_k}}= {\rm
Tr}\left(b|A|b^*x^*x \right)\leq \left\| b|A|b^*\right\| {\rm
Tr}\left( P_{[\epsilon,1 ]}x^*x\right)\\
&=\left\| b|A|b^*\right\|\cdot{\rm Tr}\left(|A|
\cdot\left[\sum\limits_{\lambda\in[\epsilon,1 ]\cap\,{\rm
Spectrum}\,|A|} \lambda^{-1}P_\lambda\right]x^*x\right)\\
&\leq\epsilon^{-1}\cdot\left\| b|A|b^*\right\|\cdot{\rm
Tr}\left(|A|P_{[\epsilon,1 ]}x^*x
\right)\leq\epsilon^{-1}\cdot\left\| b|A|b^*\right\|\cdot{\rm
Tr}\left(|A|x^*x\right)\\
&\stackrel{(\ref{psik})}{=}\epsilon^{-1}\cdot\left\|
b|A|b^*\right\|\,\psi_k\left( x^*x\right)\leq\epsilon^{-1}\cdot\left\|
b\right\|^2\left(x,x \right)_{H_{\psi_k}}.
\end{eqnarray*}
\end{proof}
\begin{Lm}\label{lemma17}
Suppose that for $\left(A, \hat{\xi}, \mathcal{H}_{reg},
\mathfrak{e}_{kl}^\prime \right)$ the conditions {\rm (1)}-{\rm (4)} from
paragraph \ref{paragraph2.1} hold. Denote by $P_0$ and $P_{reg}$ the
orthogonal projections onto ${\rm Ker}\,A$ and $\mathcal{H}_{reg}$
respectively. Let $ \left[\Pi_A^\rho\left(\Gamma\wr\mathfrak{S}_\infty
\right) I\right]$ be the subspace in $\mathcal{H}_A^\rho$ (see paragraphs
\ref{paragraph2.2}, \ref{paragraph2.3}), generated by
$\Pi_A^\rho\left(\Gamma\wr\mathfrak{S}_\infty \right) I$. For $m\in
\left\{\rho\left(\Gamma \right) \right\}^\prime\subset\mathcal{B}(\mathcal{H})$
define the linear map
$\mathfrak{R}_m^{(k)}:\mathcal{B}(\mathcal{H})^{\otimes\infty}\mapsto\mathcal{B}(\mathcal{H})^{\otimes\infty}$
as follows
\begin{eqnarray}
\begin{split}
\mathfrak{R}_m^{(k)}\left(a_1\otimes\ldots\otimes a_k\otimes
a_{k+1}\otimes\ldots \right)\\
=a_1\otimes\ldots\otimes a_k\cdot
\mathfrak{e}_{kk}^\prime\cdot m\cdot \mathfrak{e}_{kk}^\prime\otimes a_{k+1}\otimes\ldots.
\end{split}
\end{eqnarray}
If $P_0=P_{reg}$ then
\begin{itemize}
\item {\rm (i)} $\mathfrak{R}_m^{(k)}\left(
\Pi_A^\rho\left(\Gamma\wr\mathfrak{S}_\infty \right)
I\right)\subset\left[\Pi_A^\rho\left(\Gamma\wr\mathfrak{S}_\infty
\right) I\right]$;
\item {\rm (ii)} the restriction $\mathfrak{R}_m^{(k)}
\Big|_{\Pi_A^\rho\left(\Gamma\wr\mathfrak{S}_\infty \right)I}$
extends by continuity to a bounded operator on
$\left[\Pi_A^\rho\left(\Gamma\wr\mathfrak{S}_\infty \right)
I\right] \subset\mathcal{H}_A^\rho$.
\end{itemize}
\end{Lm}
\begin{proof}
To prove {\rm (i)}, it suffices to show that
$\mathfrak{R}_m^{(k)}(I)\in
\left[\Pi_A^\rho\left(\Gamma\wr\mathfrak{S}_\infty
\right) I\right]$. Indeed, by property {\rm (4)},
for any $\epsilon
>0$ there exists $a_\epsilon =
\sum\limits_{\gamma\in \Gamma_\epsilon } c_\gamma \rho(\gamma)$, where
$\Gamma_\epsilon$ is a finite subset of $\Gamma$, satisfying
\begin{eqnarray*}
\left\| \mathfrak{e}_{1k}^\prime m \mathfrak{e}_{k1}^\prime\hat{\xi}-
a_\epsilon \hat{\xi}\right\|_\mathcal{H}<\epsilon.
\end{eqnarray*}
Hence, considering $\mathfrak{R}_m^{(k)}(I)$ and $a_\epsilon^{(k)}=
\underbrace{I\otimes\ldots\otimes I}_{k-1}\otimes P_{reg}a_\epsilon P_{reg}\otimes I\otimes\ldots $
as elements of $\mathcal{H}_A^\rho$, we have
\begin{eqnarray}\label{rightm}
\left\| \mathfrak{R}_m^{(k)}(I)- a_\epsilon^{(k)} \right\|_{\mathcal{H}_A^\rho}<\epsilon.
\end{eqnarray}
It follows from (\ref{astrans}) and (\ref{Oaction}) that the operator of
left multiplication by $\underbrace{I\otimes\ldots\otimes I}_{k-1}\otimes
P_0\otimes I\otimes\ldots $ lies in
$\Pi_A^\rho\left(\Gamma\wr\mathfrak{S}_\infty \right)^{\prime\prime}$.
Hence, since $P_0=P_{reg}$, we get $a_\epsilon^{(k)}\in
\Pi_A^\rho\left(\Gamma\wr\mathfrak{S}_\infty \right)^{\prime\prime}$.
Therefore, using (\ref{rightm}), we obtain $\mathfrak{R}_m^{(k)}(I)\in
\left[\Pi_A^\rho\left(\Gamma\wr\mathfrak{S}_\infty
\right) I\right]$.
Let us prove statement {\rm (ii)}.
Put $\mathfrak{S}_\infty^{(k)}=
\left\{s\in\mathfrak{S}_\infty:s(k)=k \right\}$.
First, using (\ref{sinfinv}),
we observe that
\begin{eqnarray}\label{centraliser}\begin{split}
\left(a_1b_1I,a_2b_2I\right)_{\mathcal{H}_A^\rho}=
\left(a_1b_1b_2^*I,a_2I\right)_{\mathcal{H}_A^\rho}\\
\text{ for all
}\; a_1, a_2\in\Pi_A^\rho\left(\Gamma\wr\mathfrak{S}_\infty
\right)^{\prime\prime} \text{ and }\;
b_1, b_2\in\Pi_A^\rho\left(\mathfrak{S}_\infty
\right)^{\prime\prime}.
\end{split} \end{eqnarray}
Denote by $\mathcal{L}_{P_0}^{(k)}$ the operator of
left multiplication by $\underbrace{I\otimes\ldots\otimes I}_{k-1}\otimes
P_0\otimes I\otimes\ldots $. By
(\ref{astrans}) and (\ref{Oaction}), $\mathcal{L}_{P_0}^{(k)}\in
\Pi_A^\rho\left(\mathfrak{S}_\infty\right)^{\prime\prime}$.
Therefore, $ \left[\Pi_A^\rho\left(\Gamma\wr\mathfrak{S}_\infty
\right)\left(I- \mathcal{L}_{P_0}^{(k)}\right)I
\right]$ and $\mathbf{H}_l= \left[\Pi_A^\rho
\left(\left(k\;\,l \right)\cdot\mathfrak{S}_\infty^{(k)} \right)
\Pi_A^\rho\left(\Gamma^\infty_e\right)
\mathcal{L}_{P_0}^{(k)}I \right]$
$\left(l\in\mathbb{N} \right)$ are subspaces of
$ \left[\Pi_A^\rho\left(\Gamma\wr\mathfrak{S}_\infty
\right)I
\right]$ and, according to (\ref{centraliser}), we have
\begin{eqnarray}\label{perp1}
\left[\Pi_A^\rho\left(\Gamma\wr\mathfrak{S}_\infty
\right)\left(I- \mathcal{L}_{P_0}^{(k)}\right)I
\right] \perp \mathbf{H}_l \text{ for all }
l\in\mathbb{N}.
\end{eqnarray}
Now we prove that the subspaces $ \left\{\mathbf{H}_l
\right\}_{l\in\mathbb{N}}$ are pairwise orthogonal. For convenience
we assume that $k=1$. Denote by $E_m$ the orthogonal projection onto
the subspace
$\mathbb{C}\mathfrak{e}_{m1}^\prime\hat{\xi}\subset\mathcal{H}$
$\left(m\in\mathbb{N} \right)$. Put $A_m=A+\left(I-{\rm Tr}\,|A|
\right)E_m$,
$\mathfrak{E}_m^{(i)\prime}=\underbrace{I\otimes\ldots\otimes
I}_{i-1}\otimes \mathfrak{e}_{mm}^\prime\otimes I\otimes\ldots$ and
$E_m^{(i)}=\underbrace{I\otimes\ldots\otimes I}_{i-1}\otimes
E_m\otimes I\otimes\ldots$. By definition,
\begin{eqnarray}\label{ee}
E_m^{(i)}\mathfrak{E}_l^{(i)\prime}=\delta_{ml}E_m^{(i)}, \text{
where }\delta_{ml} \text{ is Kronecker's delta}.
\end{eqnarray}
It follows from the definition of $A_m$ that for $s^{-1}(1)\neq
1$ and $n>s^{-1}(1)$
\begin{eqnarray}\label{zero}
\mathfrak{E}_1^{\left(s^{-1}(1)\right)\prime}\cdot\bigotimes_{m=1}^n
A_m=0.
\end{eqnarray}
Fix any
$\widetilde{\gamma }, \widehat{\gamma }\in\Gamma^\infty_e$, $ s_1\in
\left(1\,\;l_1 \right)\mathfrak{S}_\infty^{(1)}$ and $s_2\in
\left(1\,\;l_2 \right)\mathfrak{S}_\infty^{(1)}$. Let us show that
for $l_1\neq l_2$
\begin{eqnarray}
\kappa=\left(\Pi_A^\rho\left(s_1\widetilde{\gamma }
\right)\mathcal{L}_{P_0}^{(1)}I,
\Pi_A^\rho\left(s_2\widehat{\gamma }
\right)\mathcal{L}_{P_0}^{(1)}I
\right)_{\mathcal{H}_A^\rho}=0.
\end{eqnarray}
Let ${\rm Tr}^{\otimes n}$ be the ordinary trace on the $w^*$-factor
$\mathcal{B}\left(\mathcal{H} \right)^{\otimes n}$.
If $s=s_2^{-1}s_1$,
$\gamma_m=\widehat{\gamma}_{s(m)}^{\,-1}\cdot\widetilde{\gamma}_m\in\Gamma
$, $\gamma=\left(\gamma_1, \gamma_2,\ldots\right)$ and $n> {\rm max}
\left\{{\rm max}\left\{i:\gamma_i\neq e \right\},{\rm max}
\left\{i:s(i)\neq i \right\}\right\}$ then, using the definition of
$\Pi_A^\rho$ (see (\ref{Piarho})), we have
\begin{eqnarray}
\kappa={\rm Tr}^{\otimes n}\left(E_1^{(1)}\cdot
U_n(s)\cdot\bigotimes_{m=1}^n\rho \left(\gamma_m \right) \cdot
E_1^{(1)}\cdot\bigotimes_{m=1}^n A_m\right),
\end{eqnarray}
where $U_n(s)$ is defined in paragraph \ref{paragraph2.3}. Hence,
applying property {\rm (4)} from paragraph {\ref{paragraph2.1}},
(\ref{ee}) and (\ref{UmatrixUnit}), we obtain
\begin{eqnarray*}
\kappa={\rm Tr}^{\otimes n}\left(E_1^{(1)}\cdot
U_n(s)\,\left(U_n(s)\right)^*\mathfrak{E}_1^{(1)\prime}\,
U_n(s)\cdot\bigotimes_{m=1}^n\rho \left(\gamma_m \right) \cdot
E_1^{(1)}\cdot\bigotimes_{m=1}^n A_m\right)\\
\stackrel{(\ref{UmatrixUnit})}{=}{\rm Tr}^{\otimes
n}\left(E_1^{(1)}\cdot
U_n(s)\,\mathfrak{E}_1^{\left(s^{-1}(1)\right)\prime}\,
\cdot\bigotimes_{m=1}^n\rho \left(\gamma_m \right) \cdot
E_1^{(1)}\cdot\bigotimes_{m=1}^n A_m\right)\\
\stackrel{{\rm property (4)}}{=}{\rm Tr}^{\otimes
n}\left(E_1^{(1)}\cdot U_n(s) \cdot\bigotimes_{m=1}^n\rho
\left(\gamma_m \right) \cdot
E_1^{(1)}\cdot\mathfrak{E}_1^{\left(s^{-1}(1)\right)\prime}\cdot\bigotimes_{m=1}^n
A_m\right)\stackrel{(\ref{zero})}{=} 0.
\end{eqnarray*}
Therefore,
\begin{eqnarray}\label{perp2}
\mathbf{H}_l\perp \mathbf{H}_m \text{ for all } l\neq m.
\end{eqnarray}
As in the proof of {\rm (i)},
$\mathfrak{R}_m^{(1)}(I)=\mathfrak{e}_{11}^\prime m\mathfrak{e}_{11}^\prime
\otimes I\otimes I\otimes\ldots$ lies in the subspace $
\left[\Pi_A^\rho\left(\Gamma_e^\infty \right)
\mathcal{L}_{P_0}^{(1)}I\right]\subset \mathbf{H}_1$. Therefore,
\begin{eqnarray}
\Pi_A^\rho
\left(\left(1\;\,l \right)\cdot\mathfrak{S}_\infty^{(1)} \right)
\Pi_A^\rho\left(\Gamma^\infty_e\right)
\mathcal{L}_{P_0}^{(1)}\mathfrak{R}_m^{(1)}(I)\subset\mathbf{H}_l.
\end{eqnarray}
Further, using (\ref{UmatrixUnit}) and the relation
\begin{eqnarray*}
\mathfrak{R}_m^{(1)}\Pi_A^\rho
\left(\left(1\;\,l \right)\cdot s \right)
\Pi_A^\rho(\gamma )
\mathcal{L}_{P_0}^{(1)}
(I)\stackrel{(\ref{UmatrixUnit})}{=}
\mathcal{L}_{\mathfrak{e}_{11}^\prime m\mathfrak{e}_{11}^\prime}^{(l)}
\Pi_A^\rho
\left(\left(1\;\,l \right)\cdot s \right)
\Pi_A^\rho(\gamma )
\mathcal{L}_{P_0}^{(1)}(I),
\end{eqnarray*}
where $s\in\mathfrak{S}_\infty^{(1)}$, $\gamma \in\Gamma^\infty_e$,
we obtain that $\mathfrak{R}_m^{(1)}$ is a bounded operator on
$\mathbf{H}_l$ and $\left\|\mathfrak{R}_m^{(1)}
\right\|_{\mathbf{H}_l}\leq \left\|\mathfrak{e}_{11}^\prime
m\mathfrak{e}_{11}^\prime \right\|_{\mathcal{H}}$. Since, by
(\ref{perp1}) and (\ref{perp2}),
\begin{eqnarray}
\left[\Pi_A^\rho\left(\Gamma\wr\mathfrak{S}_\infty
\right)I
\right]=
\left[\Pi_A^\rho\left(\Gamma\wr\mathfrak{S}_\infty
\right)\left(I- \mathcal{L}_{P_0}^{(1)}\right)I
\right]\oplus\bigoplus\limits_{m=1}^\infty \mathbf{H}_m,
\end{eqnarray}
\end{eqnarray}
and $\left[\Pi_A^\rho\left(\Gamma\wr\mathfrak{S}_\infty
\right)\left(I- \mathcal{L}_{P_0}^{(1)}\right)I
\right]\subset {\rm Ker}\,\mathfrak{R}_m^{(1)}$, the operator
$\mathfrak{R}_m^{(1)}$ is bounded on the subspace $\left[\Pi_A^\rho\left(\Gamma\wr\mathfrak{S}_\infty
\right)I
\right]$.
\end{proof}
\begin{proof}[{\bf The proof of Theorem \ref{theorem15}.}]
Let $\Pi_A^{\rho\,0}$ be the restriction of $\Pi_A^\rho$ to the subspace $
\left[ \Pi_A^\rho\left(\Gamma\wr\mathfrak{S}_\infty
\right)I\right]$. Obviously, $\Pi_A^{\rho\,0}$ and the GNS-representation
of $\Gamma\wr\mathfrak{S}_\infty$ corresponding to $\psi_A^\rho$
are naturally unitarily equivalent. Let us prove that $I$ is a
cyclic vector for $\Pi_A^{\rho\,0}\left(\Gamma\wr\mathfrak{S}_\infty
\right)^\prime$.
For any $n\in\mathbb{N}$ fix $\gamma
=\left(\gamma_1,\gamma_2,\ldots,\gamma_n,e,e,\ldots\right)\in\Gamma^\infty_e$
and $s\in\mathfrak{S}_n$. Put $\eta=\Pi_A^\rho(\gamma)I=
\left(\bigotimes\limits_{m=1}^n \rho\left(\gamma_m\right)\right)\otimes I\otimes
I\otimes\ldots\in \left[\Pi_A^{\rho\,0}\left(\Gamma^\infty_e
\right)I \right]\subset
\left[\Pi_A^{\rho\,0}\left(\Gamma\wr\mathfrak{S}_\infty \right)I \right]$.
If $P_{[\epsilon,1 ]}$ is the spectral projection of $|A|$ corresponding to
$[\epsilon,1]$ then, by
(\ref{astrans}), (\ref{Oaction}) and Lemma \ref{lemma17} (i), for every
$m_j^\prime\in \rho(\Gamma)^\prime$
\begin{eqnarray*}
a_\epsilon
=\left(\bigotimes\limits_{j=1}^n\left(P_{[\epsilon,1
]}\rho\left(\gamma_j\right)P_{[\epsilon,1
]}+\mathfrak{e}_{jj}^\prime m_j^\prime\mathfrak{e}_{jj}^\prime
\right) \right)\otimes I\otimes I\otimes
\ldots\in\left[\Pi_A^{\rho\,0}\left(\Gamma\wr\mathfrak{S}_\infty
\right)I \right].
\end{eqnarray*}
Since $\hat{\xi}$ is cyclic and separating for the restriction
$\rho_{11}=\rho\Big|_{\mathfrak{e}_{11}^\prime\mathcal{H}_{reg}}$
and ${\rm Ker}\, A=\mathcal{H}_{reg}$, for any $\delta >0$
there exist $\epsilon >0$ and
$ \left\{m_j^\prime \right\}_{j=1}^n\subset\rho(\Gamma)^\prime$
such that
\begin{eqnarray*}
\left\| \Pi_A^\rho(\gamma) I - a_\epsilon
\right\|_{\mathcal{H}_A^\rho}<\delta.
\end{eqnarray*}
But, by Lemmas \ref{lemma16}-\ref{lemma17}, the operator
$\mathfrak{R}_{a_\epsilon }$ of right multiplication by
$a_\epsilon$ lies in $\Pi_A^{\rho\,0}\left(\Gamma\wr\mathfrak{S}_\infty
\right)^\prime$. Therefore,
\begin{eqnarray}\label{gammaprim}
\Pi_A^\rho(\gamma) I\in
\left[\Pi_A^{\rho\,0}\left(\Gamma\wr\mathfrak{S}_\infty
\right)^\prime I \right].
\end{eqnarray}
Now we note that, by (\ref{sinfinv}), right multiplication by
$U(s)$ defines a unitary operator
$\mathfrak{R}_{U(s)}\in\Pi_A^{\rho\,0}\left(\Gamma\wr\mathfrak{S}_\infty
\right)^\prime$. It follows from (\ref{gammaprim}) that
$\Pi_A^\rho\left(\gamma\,s \right)I=
\mathfrak{R}_{U(s)}\left(\Pi_A^\rho(\gamma) I
\right)\in\left[\Pi_A^{\rho\,0}\left(\Gamma\wr\mathfrak{S}_\infty
\right)^\prime I \right]$. Therefore $I$ is a cyclic vector for
$\Pi_A^{\rho\,0}\left(\Gamma\wr\mathfrak{S}_\infty \right)^\prime$.
Conversely, suppose that $\psi_A^\rho$ is a KMS-state on
$\Gamma\wr\mathfrak{S}_\infty$. Define the state $\widehat{\psi}_A^\rho\in
\Pi_A^{\rho\,0}\left(\Gamma\wr\mathfrak{S}_\infty \right)^{\prime\prime}_*$
as follows
\begin{eqnarray}
\widehat{\psi}_A^\rho(a)=\left(a I,I \right)_{\mathcal{H}_A^\rho}.
\end{eqnarray}
Then, by Propositions \ref{multiplicativity} and \ref{Prop13a},
$\widehat{\psi}_A^\rho$ is a faithful state. This means that for every $a\in
\Pi_A^{\rho\,0}\left(\Gamma\wr\mathfrak{S}_\infty \right)^{\prime\prime}$
the conditions $\widehat{\psi}_A^\rho(a^*a) =0$ and $a=0$ are equivalent.
Let us prove that ${\rm Ker}\,A=\mathcal{H}_{reg}$. If
$\mathcal{H}_{reg}\varsubsetneqq {\rm Ker}\,A$ then, by properties {\rm
(1)}-{\rm (4)} from paragraph \ref{paragraph2.1}, there exists $\gamma
\in\Gamma$ such that
\begin{eqnarray}
\rho(\gamma)\left(P_{]0,1]}+P_{[-1,0[}\right)\neq
\left(P_{]0,1]}+P_{[-1,0[}\right)\rho(\gamma).
\end{eqnarray}
It follows that
\begin{eqnarray*}
Q=\left(\left(P_{]0,1]}+P_{[-1,0[}\right)\vee
\rho(\gamma)\left(P_{]0,1]}+P_{[-1,0[}\right)\rho(\gamma )^*\right)-
\left(P_{]0,1]}+P_{[-1,0[}\right)\neq 0.
\end{eqnarray*}
Since $Q\in \mathfrak{A}$, where $\mathfrak{A}$ is defined in property {\rm
(1)} from paragraph \ref{paragraph2.1}, by
(\ref{astrans})-(\ref{Oaction}) the operator $\mathfrak{L}_Q^{(k)}$ of
left multiplication by $\left(\otimes_{m=1}^{k-1}I\right)\otimes Q\otimes
I\otimes\ldots$ lies in $\Pi_A^{\rho\,0}\left(\Gamma\wr\mathfrak{S}_\infty
\right)^{\prime\prime}$. Thus $\widehat{\psi}_A^\rho \left(
\mathfrak{L}_Q^{(k)} \right)= {\rm Tr}\,(Q\cdot|A|)=0$. But this
contradicts the faithfulness of $\widehat{\psi}_A^\rho$.
Now we prove that $\hat{\xi}$ is cyclic and separating for the
representation
$\rho_{11}=\rho\Big|_{\mathfrak{e}_{11}^\prime\mathcal{H}_{reg}}$.
Denote by $E_{11}$ the projection onto $\left[
\rho_{11}(\Gamma)^\prime\hat{\xi}\right]$ and suppose that $\left[
\rho_{11}(\Gamma)^\prime\hat{\xi}\right]\subsetneqq\left[
\rho_{11}(\Gamma)\hat{\xi}\right]$. Then
\begin{eqnarray}\label{nokms}
E_{11}\in\rho_{11}(\Gamma)^{\prime\prime},\;
F_{11}=\mathfrak{e}_{11}^\prime - E_{11}\neq 0 \;\text{ and } \;
F_{11}\hat{\xi}=0.
\end{eqnarray}
Denote by $P_{reg}$ the orthogonal projection onto
$\mathcal{H}_{reg}$. Since ${\rm Ker}\, A=\mathcal{H}_{reg}$, we have
\begin{eqnarray*}
P_{reg}\in\mathfrak{A}\; \text{ and } \;P_{reg}\cdot\rho\left(\Gamma
\right)^{\prime\prime}\cdot P_{reg}\subset\mathfrak{A}.
\end{eqnarray*}
Hence, by properties {\rm (2)} and {\rm (4)} from paragraph
\ref{paragraph2.1}, we obtain
\begin{eqnarray*}
F=\sum\limits_{m=1}^\infty \mathfrak{e}_{m1}^\prime\cdot F_{11}
\cdot \mathfrak{e}_{1m}^\prime\in P_{reg}\cdot\rho\left(\Gamma
\right)^{\prime\prime}\cdot P_{reg}.
\end{eqnarray*}
Hence, using (\ref{astrans})-(\ref{Oaction}), we obtain that the
operator $\mathfrak{L}_F^{(k)}$ of left multiplication by
$\left(\otimes_{m=1}^{k-1}I\right)\otimes F\otimes I\otimes\ldots$
lies in $\Pi_A^{\rho\,0}\left(\Gamma\wr\mathfrak{S}_\infty
\right)^{\prime\prime}$. It follows from this and (\ref{nokms}) that
$\widehat{\psi}_A^\rho \left( \mathfrak{L}_F^{(k)} \right)=0$, while
$\mathfrak{L}_F^{(k)}\neq 0$. This contradicts the faithfulness of
$\widehat{\psi}_A^\rho$.
\end{proof}
\subsubsection{The main result.}\label{Themainresult}
In this section we prove the main
result of this paper:
\begin{Th}\label{main} Let $\varphi$ be any indecomposable $\mathfrak{S}_\infty$-central
state on the group $\Gamma\wr \mathfrak{S}_\infty$. Then there exist a
self-adjoint operator $A\in\mathcal{B}(\mathcal{H})$ of the {\it trace
class} (see {\cite{RS}}) and a unitary
representation $\rho$ with the properties {\rm (1)}-{\rm (4)}
(paragraph \ref{paragraph2.1}) such that $\varphi =\psi_A^\rho$ (see
Proposition {\ref{Prop11a}}).
\end{Th}
We have divided the proof into a sequence of lemmas and
propositions. First we introduce some new objects and notations.
\paragraph{Asymptotic transposition.} Let
$\left(\pi_\varphi,H_\varphi,\xi_\varphi\right)$ be the
GNS-representation of $\Gamma\wr\mathfrak{S}_\infty$ associated with
$\varphi$, where
$\varphi(g)=\left(\pi_\varphi (g)\xi_\varphi,\xi_\varphi \right)$
for all $g\in \Gamma\wr\mathfrak{S}_\infty$. In the sequel, for
convenience, we denote the group $\Gamma\wr\mathfrak{S}_\infty$ by $G$.
Put
\begin{eqnarray*}
G_n(\infty)=&\Big\{ s\gamma\in G\big|\;s\in\mathfrak{S}_\infty,
\gamma=\left(\gamma_1,\gamma_2,\ldots\right)\in\Gamma^\infty_e,&\\
&s(l)=l \;\text{ and }\;
\gamma_l=e \;\text{ for } \;l=1,2,\cdots, n \Big\},&\\
G_n = &\left\{s\gamma\in G\big|\;s(l)=l \;\text{ and }\;
\gamma_l=e \;\text{ for all } \;l>n \right\},&\\
G^{(k)}= &\left\{s\gamma\in G\big| \;s(k)=k
\;\text{ and }\;\gamma_k=e\right\}.&
\end{eqnarray*}
It is clear that $G_0(\infty)=G$.
\begin{Prop}\label{Prop19}
Let $\left(i\;j \right)$ denote the transposition exchanging $i$
and $j$. In the weak operator topology there exists
$\lim\limits_{j\to\infty}\pi_\varphi \left(\left(i\;j \right)
\right)$.
\end{Prop}
\begin{proof}
It suffices to show that for any $g,h\in G$ there exists
$\lim\limits_{j\to\infty}\left(\pi_\varphi \left(\left(i\;j
\right)\right)\pi_\varphi (g)\xi_\varphi , \pi_\varphi
(h)\xi_\varphi \right)$.
Find $N>i$ such that $g,h\in G_N$. Then for all $n,m>N$ the
transposition $\left(n\;m \right)$ commutes with $g$ and $h$. Since
$\varphi$ is $\mathfrak{S}_\infty$-central, we have
\begin{eqnarray*}
\left(\pi_\varphi \left(\left(i\;n \right)\right)\pi_\varphi
(g)\xi_\varphi , \pi_\varphi (h)\xi_\varphi \right)=
\varphi\left(h^{-1}\left(i\;n \right)g \right)=
\varphi\left(\left(n\;m \right) h^{-1}\left(i\;n \right)g
\left(n\;m \right)\right)\\=
\varphi\left(h^{-1}\left(i\;m \right)g \right)=
\left(\pi_\varphi \left(\left(i\;m \right)\right)\pi_\varphi
(g)\xi_\varphi , \pi_\varphi (h)\xi_\varphi \right).
\end{eqnarray*}
Hence the sequence under the limit is constant for $j>N$, and the
limit exists.
\end{proof}
We will call $\mathcal{O}_i=\lim\limits_{j\to\infty}\pi_\varphi
\left(\left(i\;j \right) \right)$ the {\it asymptotic}
transposition.
\paragraph{The properties of the asymptotic transposition.}
\begin{Lm}\label{Fn}
Let $g,h\in G^{(n)}$. Then for each $k\neq n$ the following relation
holds:
\begin{eqnarray}
\left(\pi_\varphi\left(g\cdot(n\;k)\cdot h\right)\xi_\varphi,
\xi_\varphi\right)=\left(\pi_\varphi(g)
\mathcal{O}_k\pi_\varphi(h)\xi_\varphi,\xi_\varphi\right).
\end{eqnarray}
\end{Lm}
\begin{proof}
Fix $N\in\mathbb{N}$ such that $g,h\in G_N\cap G^{(n)}$. Then for each $m>N$ we have:
$(n\;m)\cdot g=g\cdot(n\;m)$, $(n\;m)\cdot h=h\cdot(n\;m)$. Hence, by the
$\mathfrak{S}_\infty$-centrality of $\varphi $, we obtain
\begin{eqnarray*}
\left(\pi_\varphi\left(g \cdot(n\;k)\cdot
h\right)\xi_\varphi,\xi_\varphi\right)=\varphi\left(g\cdot (n\;k)
\cdot h\right)=\varphi\left((n\;m) \cdot g \cdot(n\;k)\cdot
h\cdot (n\;m)\right)=\\
\left(\pi_\varphi\left((n\;m) \cdot g\cdot (n\;k)\cdot
h \cdot(n\;m)\right)\xi_\varphi,\xi_\varphi\right)=
\left(\pi_\varphi\left(g\cdot (m\;k)\cdot
h\right)\xi_\varphi,\xi_\varphi\right).
\end{eqnarray*}
Passing to the limit as $m\to\infty$, we obtain the required
assertion.
\end{proof}
\begin{Lm}\label{abelian} The following relations hold:
\begin{itemize}
\item[{\rm (1)}]\label{O1}$\mathcal{O}_k \mathcal{O}_n=\mathcal{O}_n \mathcal{O}_k$
for all $
k,n\in\mathbb{N}$;
\item[{\rm (2)}]\label{O2}$\mathcal{O}_k
\pi_\varphi\left(\gamma\right)=\pi_\varphi
\left(\gamma\right) \mathcal{O}_k$
for all $\gamma=\left(\gamma_1,\gamma_2,\ldots \right)
\in\Gamma_e^\infty$ such that $\gamma_k=e $;
\item[{\rm (3)}]\label{O3}$
\pi_\varphi(s) \mathcal{O}_k=\mathcal{O}_{s(k)} \pi_\varphi(s)$
for all $s\in\mathfrak{S}_{\infty}$.
\end{itemize}
\end{Lm}
The proof follows immediately from the definition of $\mathcal{O}_k$
(Proposition \ref{Prop19}). The details are left to the reader.\qed
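For instance, property {\rm (3)} can be checked as follows: if $s(j)=j$
then $s\cdot\left(k\;j \right)\cdot s^{-1}=\left(s(k)\;j \right)$, whence
\begin{eqnarray*}
\pi_\varphi(s)\,\mathcal{O}_k=\lim\limits_{j\to\infty}\pi_\varphi
\left(s\cdot\left(k\;j \right) \right)=\lim\limits_{j\to\infty}\pi_\varphi
\left(\left(s(k)\;j \right)\cdot s \right)=\mathcal{O}_{s(k)}\,\pi_\varphi(s),
\end{eqnarray*}
where the limits are taken in the weak operator topology.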
\medskip
We will use the notation $\mathfrak{A}_j$ \label{definition mathfrakAj} for the
$W^*$-algebra generated by the operator $\mathcal{O}_j$ and the operators
$\pi_\varphi\left( \gamma\right)$, where
$\gamma=\left(e,\ldots,e,\gamma_j,e,\ldots \right)$.
There is a natural isomorphism $\phi_{j,k}$ between
$\mathfrak{A}_j$ and
$\mathfrak{A}_k$ for any $k$ and $j$:
\begin{eqnarray}\label{algebras}
\phi_{j,k}:\mathfrak{A}_k\rightarrow
\mathfrak{A}_j,\;\phi_{j,k}(a)=\pi_\varphi\left((k\;j)\right)
a\pi_\varphi\left((k\;j)\right).
\end{eqnarray}
Observe that $\left(\phi_{j,k}(a)\xi_\varphi,\xi_\varphi\right)=
\left(a\xi_\varphi,\xi_\varphi\right)$ for all $k$, $j$ and $a\in \mathfrak{A}_k$.
The next statement is a simple technical generalization of Proposition
\ref{multiplicativity}.
\begin{Lm}\label{lemma22}
Let $s = \prod\limits_{p\in \mathbb{N}/s}
s_p $ be the decomposition of $s\in\mathfrak{S}_\infty$ into the
product of cycles $s_p$, where $p\subset\mathbb{N}$ is the corresponding
orbit. Fix any finite collection $\left\{ U_j \right\}_{j=1}^N$ of
elements of $\pi_\varphi \left( G \right)^{\prime\prime}$. If
$U_j\in\mathfrak{A}_j$ then
\begin{eqnarray}\label{gen mult formula}
\left(\pi_\varphi(s)\prod \limits_j U_j\xi_\varphi,\xi_\varphi\right)=
\prod\limits_{p\in \mathbb{N}/s}
\left(\pi_\varphi(s_p)\prod \limits_{j\in p}
U_j\xi_\varphi,\xi_\varphi\right).
\end{eqnarray}
\end{Lm}
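For example, if $s=\left(1\;2 \right)$ and $U_1\in\mathfrak{A}_1$,
$U_2\in\mathfrak{A}_2$, $U_3\in\mathfrak{A}_3$, then (\ref{gen mult formula})
reads
\begin{eqnarray*}
\left(\pi_\varphi\left(\left(1\;2 \right) \right)
U_1U_2U_3\xi_\varphi,\xi_\varphi\right)=
\left(\pi_\varphi\left(\left(1\;2 \right)\right)U_1U_2\xi_\varphi,
\xi_\varphi\right)
\left(U_3\xi_\varphi,\xi_\varphi\right).
\end{eqnarray*}
In particular, for $U_j=\pi_\varphi\left(\gamma_j \right)$ this recovers the
multiplicativity property of Proposition \ref{multiplicativity}.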
\begin{Prop}\label{fioncycle}
Let $s_p\in\mathfrak{S}_\infty$ be the cyclic permutation on the set
$p=\left\{k_1,k_2,\ldots,k_{|p\,|}\right\}\subset \mathbb{N},$
where $k_l=s_p^{\,1-l}(k_1).$ If $U_{k_i}\in \mathfrak{A}_{k_i}$ for all
$k_i\in p$ then
\begin{eqnarray}\label{mixture of U}
\begin{split}
\left(\pi_\varphi(s_p)U_{k_1}U_{k_2}\cdots
U_{k_{|p|}}\xi_\varphi,\xi_\varphi\right)\\=
\left(\phi_{k_{|p|}k_1}\left(U_{k_1}\right)\mathcal{O}_{k_{|p|}}
\phi_{k_{|p|}k_2}\left(U_{k_2}\right)\mathcal{O}_{k_{|p|}}\cdots\mathcal{O}_{k_{|p|}}
U_{k_{|p|}}\xi_\varphi,\xi_\varphi\right).
\end{split}
\end{eqnarray}
\end{Prop}
\begin{proof}
For convenience we suppose that $p=\{1,2,\ldots, n\}$ and
\begin{eqnarray*}
s_p(k)=\left\{\begin{array}{ll}
k-1,&\text{if }k>1,\\
n,&\text{if }k=1.
\end{array}\right.
\end{eqnarray*}
Since $s_p= (1\;n)(2\;n)\cdots (n-1\;\,n)$, we obtain
\begin{eqnarray*}
\begin{split}
&\left(\pi_\varphi \left( s_p \right) U_1U_2\cdots U_n\xi_\varphi,
\xi_\varphi\right)\\
&=
\left(\pi_\varphi \left( (1\;n)(2\;n)\cdots(n-2\;n) \right)
U_1U_2\cdots\pi_\varphi \left((n-1\;\,n) \right)U_{n-1} U_n\xi_\varphi,
\xi_\varphi\right)\\
&=\left(\pi_\varphi \left( (1\;n)(2\;n)\cdots(n-2\;n) \right)
U_1U_2\cdots \phi_{n,n-1}\left(U_{n-1} \right)
\pi_\varphi \left((n-1\;\,n)\right) U_n\xi_\varphi,
\xi_\varphi\right).
\end{split}
\end{eqnarray*}
Hence, using the $\mathfrak{S}_\infty$-invariance of $\varphi$ and Lemma
\ref{abelian}, for any $N>n$ we have
\begin{eqnarray*}
\left(\pi_\varphi \left( s_p \right) U_1U_2\cdots U_n\xi_\varphi,
\xi_\varphi\right)=\left(\pi_\varphi \left((n-1\;\,N) s_p (n-1\;\,N)\right) U_1U_2\cdots U_n\xi_\varphi,
\xi_\varphi\right)\\
=\left(\pi_\varphi \left( (1\;n)(2\;n)\cdots(n-2\;n) \right)
U_1U_2\cdots \phi_{n,n-1}\left(U_{n-1} \right)
\pi_\varphi \left((N\;\,n)\right) U_n\xi_\varphi,
\xi_\varphi\right).
\end{eqnarray*}
Passing to the limit as $N\to\infty$, we obtain
\begin{eqnarray*}
&\left(\pi_\varphi \left( s_p \right) U_1U_2\cdots U_n\xi_\varphi,
\xi_\varphi\right)\\
&=
\left(\pi_\varphi \left( (1\;n)(2\;n)\cdots(n-2\;n) \right)
U_1U_2\cdots U_{n-2} \phi_{n,n-1}\left(U_{n-1} \right)
\mathcal{O}_n U_n\xi_\varphi,
\xi_\varphi\right).
\end{eqnarray*}
Since $ \phi_{n,n-1}\left(U_{n-1} \right) \mathcal{O}_n
U_n\in\mathfrak{A}_n$, by the obvious induction we have
\begin{eqnarray*}
&\left(\pi_\varphi \left( s_p \right) U_1U_2\cdots U_n\xi_\varphi,
\xi_\varphi\right)\\
&=
\left(\phi_{n,1}\left(U_1 \right)
\mathcal{O}_n \phi_{n,2}\left(U_{2} \right)
\mathcal{O}_n \cdots \phi_{n,n-2}\left(U_{n-2} \right)
\mathcal{O}_n \phi_{n,n-1}\left(U_{n-1} \right)
\mathcal{O}_n U_n\xi_\varphi,
\xi_\varphi\right).
\end{eqnarray*}
\end{proof}
The next statement is an analogue of Theorem 1 from
{\cite{Ok2}}.
\begin{Lm}\label{discrete}
Let the segment $[a,b]$ be contained in $[-1,0]$ or in
$[0,1]$. Denote by $E_{[a,b]}^{(i)}$ the spectral projection of the
self-adjoint operator $\mathcal{O}_i$ corresponding to $[a,b]$.
If $\;\min\left\{ |a|,|b| \right\}>\varepsilon>0$ then
$\left( E_{[a,b]}^{(i)}
\xi_\varphi,\xi_\varphi\right)^2\geq\varepsilon
\left(E_{[a,b]}^{(i)}
\xi_\varphi,\xi_\varphi\right)$.
\end{Lm}
This result may be proved in much the same way as
Theorem 1 from {\cite{Ok2}}. For convenience we give below the
full proof of Lemma \ref{discrete}.
\begin{proof}
Using Lemma {\ref{Fn}}, we
have
\begin{eqnarray}\label{est}
\begin{split}
&\left|\left( \pi_\varphi\left( (i,i+1)\right)
E_{[a,b]}^{(i)}
\xi_\varphi,
E_{[a,b]}^{(i)}
\xi_\varphi \right)\right|=\\
&\left| \left( \mathcal{O}_i E_{[a,b]}^{(i)}\xi_\varphi,
E_{[a,b]}^{(i)} \xi_\varphi \right)\right|\geqslant\varepsilon
\left| \left( E_{[a,b]}^{(i)}\xi_\varphi, \xi_\varphi \right)\right|.&
\end{split}
\end{eqnarray}
Hence, applying
(\ref{algebras}) and lemma \ref{abelian}, we obtain
\begin{eqnarray*}
& E_{[a,b]}^{(i)}\pi_\varphi\left( (i,i+1)\right) E_{[a,b]}^{(i)}=
E_{[a,b]}^{(i)} E_{[a,b]}^{(i+1)}
\pi_\varphi\left( (i,i+1)\right)
= \\&E_{[a,b]}^{(i)} E_{[a,b]}^{(i+1)}\pi_\varphi\left((i,i+1)\right)
E_{[a,b]}^{(i)} E_{[a,b]}^{(i+1)}.
\end{eqnarray*}
Therefore,
\begin{eqnarray*}
&\left|\left( \pi_\varphi\left( (i,i+1)\right)
E_{[a,b]}^{(i)}
\xi_\varphi,
E_{[a,b]}^{(i)}
\xi_\varphi \right)\right|\\
&=\left|\left( \pi_\varphi\left( (i,i+1)\right)
E_{[a,b]}^{(i)} E_{[a,b]}^{(i+1)}
\xi_\varphi, E_{[a,b]}^{(i)} E_{[a,b]}^{(i+1)}\xi_\varphi \right)\right|\\
&\leq \left|\left(
E_{[a,b]}^{(i)} E_{[a,b]}^{(i+1)}
\xi_\varphi, \xi_\varphi \right)\right|\stackrel{(Lemma
\ \ref{lemma22})}{=}
\left(E_{[a,b]}^{(i)}\xi_\varphi, \xi_\varphi \right)^2.
\end{eqnarray*}
Hence, using ({\ref{est}}), we obtain the statement of lemma
\ref{discrete}.
\end{proof}
Let
$P_0^{(i)}$ be the orthogonal projection onto
${\rm Ker}\,\mathcal{O}_i$.
Put $P_{\pm}^{(i)}=I-P_0^{(i)}$.
\begin{Lm}\label{separating}
The vector $\xi_\varphi$ is separating for the $w^*$-algebra
$P^{(j)}_{\pm}\mathfrak{A}_j
P_{\pm}^{(j)}$.
\end{Lm}
\begin{proof}
Let $V\in P^{(j)}_{\pm}\mathfrak{A}_j
P_{\pm}^{(j)}$ and let $V\xi_\varphi=0$.
It suffices to show that
\begin{eqnarray}\label{3.10aa}
\left(\pi_\varphi\left(g
\right)\xi_\varphi,\mathcal{O}_jV^*\pi_\varphi\left(h
\right)\xi_\varphi\right)=0 \;\text{ for all } \;g,h\in G.
\end{eqnarray}
First we note that, by the $\mathfrak{S}_\infty$-invariance of $\varphi$,
\begin{eqnarray}\label{inva}
\pi_\varphi\left(s \right) V\pi_\varphi\left(s^{-1}
\right)\xi_\varphi=0 \text{ for all } s\in\mathfrak{S}_\infty.
\end{eqnarray}
Further, if $g\in G_N$ then for all $n>N$
\begin{eqnarray*}
\pi_\varphi\left(\left(j\;n
\right)\right)V^*\pi_\varphi\left(\left(j\;n
\right)\right)\pi_\varphi\left(g\right)= \pi_\varphi\left(g\right)
\pi_\varphi\left(\left(j\;n \right)\right)
V^*\pi_\varphi\left(\left(j\;n \right)\right).
\end{eqnarray*}
Hence, using the definition of $\mathcal{O}_j$ (see Proposition
\ref{Prop19}), we have
\begin{eqnarray*}
\begin{split}
\left(\pi_\varphi\left(g
\right)\xi_\varphi,\mathcal{O}_jV^*\pi_\varphi\left(h
\right)\xi_\varphi\right)=\lim\limits_{n\to\infty}
\left(\pi_\varphi\left(g
\right)\xi_\varphi,\pi_\varphi\left(\left(j\;n
\right)\right)V^*\pi_\varphi\left(h \right)\xi_\varphi\right)\\
=\lim\limits_{n\to\infty}\left(\pi_\varphi\left(\left(j\;n
\right)\right)V\pi_\varphi\left(\left(j\;n
\right)\right)\xi_\varphi,\pi_\varphi\left(g^{-1}
\right)\pi_\varphi\left(\left(j\;n \right)\right)\pi_\varphi\left(h
\right)\xi_\varphi\right)\stackrel{(\ref{inva})}{=}0.
\end{split}
\end{eqnarray*}
Thus (\ref{3.10aa}) is proved.
\end{proof}
The following statement is well known in the case of a separating vector $\xi_\varphi$ (see
{\cite{Ok2}}).
In our case it follows from Lemmas {\ref{discrete}} and {\ref{separating}}.
\begin{Co}\label{co26}
There exist an at most countable set of numbers
$\alpha_i$ in $[-1,0)\cup (0,1]$
and a set of pairwise orthogonal projections $
\left\{P_{\alpha_i}^{(j)}\right\}
\subset \mathfrak{A}_j$
such that
\begin{eqnarray}
\mathcal{O}_j=P_0^{(j)}+\sum\limits_i \alpha_i P_{\alpha_i}^{(j)}.
\end{eqnarray}
\end{Co}
\begin{Lm}\label{lemma27aa}
Let $\alpha,\beta\in {\rm Spectrum} \;\mathcal{O}_j$. If $\alpha \beta <0$ then
$P_{\alpha}^{(j)}\mathfrak{A}_jP_{\beta}^{(j)}=0$.
\end{Lm}
\begin{proof}
By lemma \ref{separating}, it suffices to show that
\begin{eqnarray}\label{103aa}
P_{\alpha}^{(j)}UP_{\beta}^{(j)}\xi_\varphi =0\;\text{ for all }
U\in\mathfrak{A}_j.
\end{eqnarray}
First we note that
\begin{eqnarray*}
\left\|P_{\alpha}^{(j)}UP_{\beta}^{(j)}\xi_\varphi\right\|^2=
\left( P_{\beta}^{(j)}U^*P_{\alpha}^{(j)}UP_{\beta}^{(j)}\xi_\varphi,
\xi_\varphi\right)=\frac{1}{\alpha}\left( P_{\beta}^{(j)}U^*P_{\alpha}^{(j)}
\mathcal{O}_jUP_{\beta}^{(j)}\xi_\varphi,\xi_\varphi\right).
\end{eqnarray*}
Hence, using Proposition \ref{fioncycle}, we obtain
\begin{eqnarray}
\left\|P_{\alpha}^{(j)}UP_{\beta}^{(j)}\xi_\varphi\right\|^2=
\frac{1}{\alpha}\left( P_{\beta}^{(j)}U^*P_{\alpha}^{(j)}
\pi_\varphi \left(\left(j\;\,j+1 \right) \right)
P_{\alpha}^{(j)}UP_{\beta}^{(j)}\xi_\varphi,\xi_\varphi\right).
\end{eqnarray}
It follows from lemma \ref{abelian} that
\begin{eqnarray*}
\left\|P_{\alpha}^{(j)}UP_{\beta}^{(j)}\xi_\varphi\right\|^2=
\frac{1}{\alpha}\left( P_{\beta}^{(j)}U^*P_{\alpha}^{(j)}
\phi_{j+1,j}\left(P_{\alpha}^{(j)}UP_{\beta}^{(j)} \right)
\pi_\varphi \left(\left(j\;\,j+1 \right) \right)\xi_\varphi,
\xi_\varphi\right)\\
=\frac{1}{\alpha}\left(
\phi_{j+1,j}\left(P_{\alpha}^{(j)}UP_{\beta}^{(j)} \right)
P_{\beta}^{(j)}U^*P_{\alpha}^{(j)}
\pi_\varphi \left(\left(j\;\,j+1 \right) \right)\xi_\varphi,
\xi_\varphi\right)\\
=\frac{1}{\alpha}\left(
\phi_{j+1,j}\left(P_{\alpha}^{(j)}UP_{\beta}^{(j)} \right)
\pi_\varphi \left(\left(j\;\,j+1 \right) \right)
\phi_{j+1,j}\left(P_{\beta}^{(j)}U^*P_{\alpha}^{(j)}\right)
\xi_\varphi,\xi_\varphi\right)\\
=\frac{1}{\alpha}\left(
P_{\alpha}^{(j)}UP_{\beta}^{(j)}
\pi_\varphi \left(\left(j\;\,j+1 \right) \right)
P_{\beta}^{(j)}U^*P_{\alpha}^{(j)}
\xi_\varphi,\xi_\varphi\right)\\
\stackrel{\text{proposition \ref{fioncycle}}}{=}
\frac{1}{\alpha}\left(
P_{\alpha}^{(j)}UP_{\beta}^{(j)}
\mathcal{O}_j
P_{\beta}^{(j)}U^*P_{\alpha}^{(j)}
\xi_\varphi,\xi_\varphi\right)=
\frac{\beta}{\alpha}\left(
P_{\alpha}^{(j)}UP_{\beta}^{(j)}
U^*P_{\alpha}^{(j)}
\xi_\varphi,\xi_\varphi\right)\leq0.
\end{eqnarray*}
Therefore, (\ref{103aa}) holds true.
\end{proof}
The next assertion is an analogue
of Theorem 2 from {\cite{Ok2}}.
\begin{Lm}\label{P_alpha}
Let $\alpha\neq 0$ be an eigenvalue of the operator $\mathcal{O}_j$
and let $P_\alpha^{(j)}$ be the corresponding spectral projection.
Take any orthogonal projection
$P\in P_\alpha^{(j)}\mathfrak{A}_jP_\alpha^{(j)}$ and put $\nu(P)=
\left( P\xi_\varphi,\xi_\varphi\right)/|\alpha|$.
Then
$\nu(P)\in\mathbb{N}\cup\{0\}$.
\end{Lm}
\begin{proof}
We use the arguments of
Kerov, Olshanski, Vershik {\cite{KOV}} and Okounkov
{\cite{Ok2}}. Let $j=1$.
First consider the case $\alpha>0$. For $n\in\mathbb{N}$
put $\eta_n=\prod_{m=0}^{n-1} \phi_{1+m,1}(P)\xi_\varphi$.
Let $s\in\mathfrak{S}_n$.
In each orbit $p\in\mathbb{N}/s$ fix a number
$\mathfrak{s}(p)$.
Since $\prod_{m=0}^{n-1} \phi_{1+m,1}(P)$
is an orthogonal projection and
\begin{eqnarray*}
\pi_\varphi(s)\cdot\prod_{m=0}^{n-1} \phi_{1+m,1}(P)
=\prod_{m=0}^{n-1} \phi_{1+m,1}(P)\cdot\pi_\varphi(s),
\end{eqnarray*}
we have
\begin{eqnarray}\label{*}\begin{split}
&\left(\pi_\varphi(s)\eta_n,\eta_n\right)=
\left(\pi_\varphi(s)\prod_{m=0}^{n-1} \phi_{1+m,1}(P)\xi_\varphi,
\xi_\varphi\right)\\
& \stackrel{\text{lemma \ref{lemma22}}}{=}
\prod\limits_{p\in\left\{\mathbb{N}/s:p\,\subset [1,n]\right\}}
\left(\pi_\varphi(s_p)\prod\limits_{k\in p}
\phi_{k,1}(P)\xi_\varphi,
\xi_\varphi\right)\\
&\stackrel{\text{prop \ref{fioncycle}}}{=}
\prod\limits_{p\in\left\{\mathbb{N}/s:p\,\subset [1,n]\right\}}
\left(\phi_{{\mathfrak{s}(p)},1}\left(P\right)\cdot
\mathcal{O}_{\mathfrak{s}(p)}\cdot
\phi_{{\mathfrak{s}(p)},1}\left(P\right)\cdot\mathcal{O}_{\mathfrak{s}(p)}\cdots
\mathcal{O}_{\mathfrak{s}(p)}\cdot\phi_{{\mathfrak{s}(p)},1}\left(P\right)\xi_\varphi,\xi_\varphi\right)\\
& =\prod\limits_{p\in\left\{\mathbb{N}/s:p\,\subset [1,n]\right\}}\alpha^{|p|-1}
\left(\phi_{{\mathfrak{s}(p)},1}\left(P\right)\xi_\varphi,\xi_\varphi\right)
=\alpha^n\nu^{\,l(s)},
\end{split}
\end{eqnarray} where $l(s)$ is the number of cycles in the decomposition of the permutation $s$.
Now define orthogonal projection
$Alt(n)\in\pi_\varphi\left(\mathfrak{S}_\infty
\right)^{\prime\prime} \subset\pi_\varphi(G)^{\prime\prime} $ by
\begin{eqnarray}
\label{Alt}
Alt(n)=\frac{1}{n!}\sum\limits_{s\in\mathfrak{S}_n}{\rm
sign}\,(s) \;\pi_\varphi(s).
\end{eqnarray}
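For completeness, let us check that $Alt(n)$ is indeed an orthogonal projection. Since ${\rm sign}\,(s^{-1})={\rm sign}\,(s)$ and $\pi_\varphi(s)^*=\pi_\varphi\left(s^{-1}\right)$, we have $Alt(n)^*=Alt(n)$; moreover,
\begin{eqnarray*}
Alt(n)^2=\frac{1}{(n!)^2}\sum\limits_{s,t\in\mathfrak{S}_n}{\rm
sign}\,(st)\;\pi_\varphi(st)=\frac{1}{n!}\sum\limits_{u\in\mathfrak{S}_n}{\rm
sign}\,(u)\;\pi_\varphi(u)=Alt(n).
\end{eqnarray*}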
Using (\ref{*}), we obtain:
\begin{eqnarray}\label{Alt eta}
\left(Alt(n)\eta_n,\eta_n\right)=\alpha^n\sum\limits_{s\in\mathfrak{S}_n}
{\rm sign}\,(s)\;\nu^{\,l(s)}.
\end{eqnarray}
In the same way as in \cite{Ok2}, applying the
equality
\begin{eqnarray*}\label{Alt equality}
\sum\limits_{s\in\mathfrak{S}_n}{\rm
sign}\,(s)\;\nu^{\,l(s)}=\nu(\nu-1)\cdots(\nu-n+1),
\end{eqnarray*}
we have
\begin{eqnarray}
0\leq
\left(Alt(n)\eta_n,\eta_n\right)=\alpha^n\,\nu(\nu-1)\cdots(\nu-n+1).
\end{eqnarray}
Therefore, $\nu\in \mathbb{N}\cup\{0\}$.
The same proof works for $\alpha<0$; in the above reasoning one only
needs to replace the operator $Alt(n)$ by
$Sym(n)=\frac{1}{n!}\sum\limits_{s\in\mathfrak{S}_n}\pi_\varphi(s)$.
\end{proof}
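To illustrate the combinatorial identity used in the above proof, consider the case $n=2$: the identity permutation contributes ${\rm sign}\,(e)\,\nu^{\,l(e)}=\nu^{2}$ (two cycles), while the transposition contributes $-\nu$ (one cycle), so that
\begin{eqnarray*}
\sum\limits_{s\in\mathfrak{S}_2}{\rm
sign}\,(s)\;\nu^{\,l(s)}=\nu^{2}-\nu=\nu(\nu-1),
\end{eqnarray*}
in agreement with the general formula $\nu(\nu-1)\cdots(\nu-n+1)$.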
For $\alpha\in {\rm Spectrum}\, \mathcal{O}_j$ denote by $P_\alpha^{(j)}$
the corresponding spectral projection (see corollary \ref{co26}). It
follows from lemmas \ref{separating} and \ref{P_alpha} that for $\alpha
\neq 0$ the $w^*$-algebra $P_\alpha^{(j)}\mathfrak{A}_jP_\alpha^{(j)}$ is
finite-dimensional. Therefore, there exists a finite collection $
\left\{P_{\alpha,i}^{(j)}\right\}_{i=1}^{n_\alpha }\subset P_\alpha^{(j)}
\mathfrak{A}_j P_\alpha^{(j)} $ of {\it pairwise orthogonal}
projections with the following properties:
\begin{eqnarray}\label{full system}
\begin{split}
&P_{\alpha,i}^{(j)}\xi_\varphi \neq 0\;\text{ and
}\;P_{\alpha,i}^{(j)} \;\text{ is minimal for all }
i=1,2,\ldots,\;n_\alpha;\\
&\sum\limits_{i=1}^{n_\alpha }P_{\alpha,i}^{(j)}=P_\alpha^{(j)}.
\end{split}
\end{eqnarray}
\begin{Prop}\label{P_zero}
Let $\mathcal{O}_j=P_0^{(j)}+\sum\limits_i \alpha_i
P_{\alpha_i}^{(j)}$. Put $P_+^{(j)}=\sum\limits_{i:\alpha_i>0}
P_{\alpha_i}^{(j)}$, $P_-^{(j)}=\sum\limits_{i:\alpha_i<0}
P_{\alpha_i}^{(j)}$ and $P_\pm^{(j)}=P_+^{(j)}+P_-^{(j)}$.
Then for each $U\in \mathfrak{A}_j$
\begin{eqnarray*}P_\pm^{(j)}UP_0^{(j)}\xi_\varphi=0.
\end{eqnarray*}
\end{Prop}
\begin{proof}
It suffices to prove that $P_\alpha^{(j)}U P_0^{(j)}\xi_\varphi
=0$ for all nonzero $\alpha \in {\rm Spectrum}\,\mathcal{O}_j$. This
follows from the relations:
\begin{eqnarray*}
\left(P_\alpha^{(j)}U P_0^{(j)}\xi_\varphi,P_\alpha^{(j)}U
P_0^{(j)}\xi_\varphi \right)=\left(P_0^{(j)}U^* P_\alpha^{(j)}U
P_0^{(j)}\xi_\varphi,\xi_\varphi \right)\\
=\frac{1}{\alpha }\left(P_0^{(j)}U^*\mathcal{O}_j P_\alpha^{(j)}U
P_0^{(j)}\xi_\varphi,\xi_\varphi \right)\\
\stackrel{lemma\,\ref{Fn}}{=} \frac{1}{\alpha
}\left(P_0^{(j)}U^*\pi_\varphi \left(\left(j\;\,j+1 \right) \right)
P_\alpha^{(j)}U P_0^{(j)}\xi_\varphi,\xi_\varphi \right)\\
=\frac{1}{\alpha }\left(P_0^{(j)}U^*
P_\alpha^{(j+1)}\cdot\phi_{j+1,j}(U)\cdot P_0^{(j+1)}\pi_\varphi
\left(\left(j\;\,j+1 \right)\right) \xi_\varphi,\xi_\varphi
\right)\\
=\frac{1}{\alpha }\left(P_\alpha^{(j+1)}\cdot\phi_{j+1,j}(U)\cdot
P_0^{(j+1)}\cdot P_0^{(j)}U^* \pi_\varphi \left(\left(j\;\,j+1
\right)\right) \xi_\varphi,\xi_\varphi \right)\\
=\frac{1}{\alpha }\left(P_\alpha^{(j+1)}\cdot\phi_{j+1,j}(U)\cdot
P_0^{(j+1)}\cdot \pi_\varphi \left(\left(j\;\,j+1 \right)\right)
P_0^{(j+1)}\cdot \phi_{j+1,j}\left(U^*\right)\xi_\varphi,\xi_\varphi
\right)\\
\stackrel{lemma\,\ref{Fn}}{=}
\frac{1}{\alpha }\left(P_\alpha^{(j+1)}\cdot\phi_{j+1,j}(U)\cdot
P_0^{(j+1)}\cdot\mathcal{O}_{j+1} \cdot P_0^{(j+1)}\cdot
\phi_{j+1,j}\left(U^*\right)\xi_\varphi,\xi_\varphi \right)=0.
\end{eqnarray*}
\end{proof}
Put $\mathbb{H}_{reg}^{(j)}= \left[ \mathfrak{A}_j
P_0^{(j)}\xi_\varphi \right]$ and $\mathbb{H}_{\pm}^{(j)}=
\left[\mathfrak{A}_j P_\pm^{(j)}\xi_\varphi \right]$. The next
assertion follows from the previous proposition.
\begin{Co}\label{Co29}
\begin{itemize}
\item [{\rm (a)}] Subspaces $\mathbb{H}_{reg}^{(j)}$ and $\mathbb{H}_{\pm}^{(j)}$ are
orthogonal for each $j\in \mathbb{N}$;
\item [{\rm (b)}] if
$\sum\limits_{\alpha\in {\rm Spectrum}\,\mathcal{O}_j:\alpha \neq 0}
|\alpha| \cdot\nu \left(P_\alpha^{(j)} \right)=1$ (see lemma \ref{P_alpha})
then $P_0^{(j)}\xi_\varphi=0$.
\end{itemize}
\end{Co}
\begin{proof}
Property {\rm (a)} follows at once from proposition \ref{P_zero}. To
prove {\rm (b)} we note that $1=\left\|P_0^{(j)}\xi_\varphi
\right\|^2+\sum\limits_{\alpha\in {\rm
Spectrum}\,\mathcal{O}_j:\alpha \neq 0}
\left\|P_\alpha^{(j)}\xi_\varphi\right\|^2 $ $\stackrel{\text{lemma \ref{P_alpha}}}{=}$
$\left\|P_0^{(j)}\xi_\varphi \right\|^2\\+\sum\limits_{\alpha\in {\rm
Spectrum}\,\mathcal{O}_j,\alpha \neq 0}\alpha \cdot\nu \left(P_\alpha^{(j)}
\right)$. Therefore, $\left\|P_0^{(j)}\xi_\varphi \right\|^2=0$.
\end{proof}
\begin{Lm}
$\left(U\mathcal{O}_jVP_0^{(j)}\xi_\varphi,P_0^{(j)}\xi_\varphi
\right)=0$ for all $U,V\in\mathfrak{A}_j$.
\end{Lm}
The proof follows from the relations:
\begin{eqnarray*}
\left(U\mathcal{O}_jVP_0^{(j)}\xi_\varphi,P_0^{(j)}\xi_\varphi
\right)\stackrel{\text{lemma\,\ref{Fn}}}{=}
\left(U\cdot\pi_\varphi \left(\left(j\;\,j+1 \right) \right)\cdot VP_0^{(j)}\xi_\varphi,P_0^{(j)}\xi_\varphi
\right)\\
=\left(P_0^{(j)}\cdot U\cdot \phi_{j+1,j}(V)\cdot
P_0^{(j+1)}\cdot\pi_\varphi \left(\left(j\;\,j+1 \right)
\right)\xi_\varphi,\xi_\varphi \right)\\
=\left( \phi_{j+1,j}(V)\cdot
P_0^{(j+1)}\cdot P_0^{(j)}\cdot U\cdot\pi_\varphi
\left(\left(j\;\,j+1 \right) \right)\xi_\varphi,\xi_\varphi
\right)\\
=\left( \phi_{j+1,j}(V)\cdot
P_0^{(j+1)}\cdot\pi_\varphi \left(\left(j\;\,j+1 \right)
\right)\cdot P_0^{(j+1)}\cdot \phi_{j+1,j}(U)\xi_\varphi,\xi_\varphi
\right)\\
\stackrel{\text{lemma\,\ref{Fn}}}{=} \left( \phi_{j+1,j}(V)\cdot
P_0^{(j+1)}\cdot\mathcal{O}_{j+1}\cdot P_0^{(j+1)}\cdot
\phi_{j+1,j}(U)\xi_\varphi,\xi_\varphi \right)=0.
\end{eqnarray*}\qed
\begin{Prop}\label{diff_Pa}
Let $ \left\{P_{\alpha,i}^{(j)} \right\}_{i=1}^{n_\alpha}$ $
\left(\alpha \in \left\{{\rm Spectrum}\,\mathcal{O}_j
\right\}\setminus 0\right)$ be the same as in (\ref{full system}).
If $P_{\alpha,i}^{(j)}\cdot P_{\beta,k}^{(j)}=0$ then $\left(
P_{\alpha,i}^{(j)}\cdot U\cdot
P_{\beta,k}^{(j)}\xi_\varphi,\xi_\varphi \right)=0$ for all
$U\in\mathfrak{A}_j$.
\end{Prop}
\begin{proof}
The statement follows from the relations:
\begin{eqnarray*}
&\left(P_{\alpha,i}^{(j)}\cdot U\cdot
P_{\beta,k}^{(j)}\xi_\varphi,\xi_\varphi \right)= \frac{1}{\alpha}
\left( P_{\alpha,i}^{(j)}\cdot\mathcal{O}_j\cdot U\cdot
P_{\beta,k}^{(j)}\xi_\varphi,\xi_\varphi \right)\\
& \stackrel{\text{lemma\,\ref{Fn}}}{=}\frac{1}{\alpha}
\left( P_{\alpha,i}^{(j)}\cdot\pi_\varphi \left(\left(j\;\,j+1
\right) \right)\cdot U\cdot P_{\beta,k}^{(j)}\xi_\varphi,\xi_\varphi
\right)=\\
&\frac{1}{\alpha}\left(P_{\alpha,i}^{(j)}\cdot \phi_{j+1,j}(U)\cdot
P_{\beta,k}^{(j+1)}\cdot\pi_\varphi \left(\left(j\;\,j+1 \right)
\right)\xi_\varphi,\xi_\varphi \right)\\
&=\frac{1}{\alpha}
\left(
\phi_{j+1,j}(U)\cdot
P_{\beta,k}^{(j+1)}\cdot P_{\alpha,i}^{(j)}\cdot\pi_\varphi
\left(\left(j\;\,j+1 \right) \right)\xi_\varphi,\xi_\varphi
\right)\\
&=\frac{1}{\alpha} \left( \phi_{j+1,j}(U)\cdot
P_{\beta,k}^{(j+1)}\cdot \pi_\varphi \left(\left( j\;\,j+1
\right)\right)
\cdot P_{\alpha,i}^{(j+1)}\xi_\varphi,\xi_\varphi
\right)\\
&\stackrel{lemma\,\ref{Fn}}{=}
\frac{1}{\alpha}\left( \phi_{j+1,j}(U)\cdot
P_{\beta,k}^{(j+1)}\cdot \mathcal{O}_{j+1}
\cdot P_{\alpha,i}^{(j+1)}\xi_\varphi,\xi_\varphi
\right) \\
&=\left( \phi_{j+1,j}(U)\cdot P_{\beta,k}^{(j+1)}\cdot
P_{\alpha,i}^{(j+1)}\xi_\varphi,\xi_\varphi
\right)=0.
\end{eqnarray*}
\end{proof}
We now state an important corollary.
\begin{Co} \label{Co32}
Let $P_+^{(j)}$ and $P_-^{(j)}$ be the same as in proposition
\ref{P_zero}. Then the subspaces $ \left[
\mathfrak{A}_jP_+^{(j)}\xi_\varphi \right]$ and $\left[
\mathfrak{A}_j P_-^{(j)}\xi_\varphi \right]$ are orthogonal.
\end{Co}
\begin{Prop}\label{Prop33}
Let $ \left\{P_{\alpha,i}^{(j)} \right\}_{i=1}^{n_\alpha}$ $
\left(\alpha \in \left\{{\rm Spectrum}\,\mathcal{O}_j
\right\}\setminus 0\right)$ be the same as in proposition
\ref{diff_Pa}. If there exists a unitary $U\in\mathfrak{A}_j$ such
that $U\cdot P_{\alpha,i}^{(j)} \cdot U^*= P_{\beta,k}^{(j)} $ then
$\frac{\left(P_{\alpha,i}^{(j)}\xi_\varphi , \xi_\varphi
\right)}{|\alpha| }=\frac{\left(P_{\beta,k}^{(j)}\xi_\varphi ,
\xi_\varphi \right)}{|\beta| }$.
\end{Prop}
\begin{proof}
Let $\kappa_\alpha =\left(P_{\alpha,i}^{(j)}\xi_\varphi ,
\xi_\varphi \right)/|\alpha|$ and $\kappa_\beta
=\left(P_{\beta,k}^{(j)}\xi_\varphi , \xi_\varphi \right)/|\beta|$.
By lemma \ref{P_alpha}, $\kappa_\alpha, \kappa_\beta \in\mathbb{N}$.
For convenience, suppose that $j=1$. For any $n\in\mathbb{N}$,
using (\ref{Alt}) and (\ref{Alt eta}), we obtain
\begin{eqnarray}\label{rel109}
\begin{split}
\left(Alt(n)\prod\limits_{m=1}^n\phi_{m,1}\left(P_{\alpha,i}^{(1)}
\right)\xi_\varphi,
\prod\limits_{m=1}^n\phi_{m,1}\left(P_{\alpha,i}^{(1)}
\right)\xi_\varphi
\right)=|\alpha|^n\prod\limits_{m=0}^{n-1}\left(\kappa_\alpha -m
\right);\\
\left(Alt(n)\prod\limits_{m=1}^n\phi_{m,1}\left(P_{\beta,k}^{(1)}
\right)\xi_\varphi,
\prod\limits_{m=1}^n\phi_{m,1}\left(P_{\beta,k}^{(1)}
\right)\xi_\varphi
\right)=|\beta|^n\prod\limits_{m=0}^{n-1}\left(\kappa_\beta -m
\right).
\end{split}
\end{eqnarray}
This implies for $n=\kappa_\alpha+1$ that
\begin{eqnarray}\label{rel110}
\left(Alt(\kappa_\alpha+1)
\prod\limits_{m=1}^{\kappa_\alpha+1}\phi_{m,1}\left(P_{\alpha,i}^{(1)}
\right)\xi_\varphi,
\prod\limits_{m=1}^{\kappa_\alpha+1}\phi_{m,1}\left(P_{\alpha,i}^{(1)}
\right)\xi_\varphi \right)=0.
\end{eqnarray}
Further, applying relation
\begin{eqnarray*}
Alt(n)\cdot \prod\limits_{m=1}^n \phi_{m,1}(a)=\prod\limits_{m=1}^n
\phi_{m,1}(a)\cdot Alt(n) \;\;(\text{for all }a\in\mathfrak{A}_1),
\end{eqnarray*}
we get
\begin{eqnarray*}
&0\leq\left(Alt(\kappa_\alpha+1)\cdot
\prod\limits_{m=1}^{\kappa_\alpha+1}P_{\beta,k}^{(m)} \xi_\varphi,
\prod\limits_{m=1}^{\kappa_\alpha+1}P_{\beta,k}^{(m)}
\xi_\varphi \right)\\
& =\left(Alt(\kappa_\alpha+1)
\prod\limits_{m=1}^{\kappa_\alpha+1}P_{\alpha,i}^{(m)}
\phi_{m,1}\left(U^* \right)\xi_\varphi,
\prod\limits_{m=1}^{\kappa_\alpha+1}P_{\alpha,i}^{(m)}
\phi_{m,1}\left(U^* \right)\xi_\varphi \right)\\
&=\left(Alt(\kappa_\alpha+1)
\prod\limits_{m=1}^{\kappa_\alpha+1}P_{\alpha,i}^{(m)}
\phi_{m,1}\left(U^* \right)\xi_\varphi,
\prod\limits_{m=1}^{\kappa_\alpha+1}\phi_{m,1}\left(U^*
\right)\xi_\varphi
\right)\\
&=\frac{1}{\alpha^{\kappa_\alpha+1} }
\left(Alt(\kappa_\alpha+1)
\prod\limits_{m=1}^{\kappa_\alpha+1}P_{\alpha,i}^{(m)}\mathcal{O}_m
\phi_{m,1}\left(U^* \right)\xi_\varphi,
\prod\limits_{m=1}^{\kappa_\alpha+1}\phi_{m,1}\left(U^*
\right)\xi_\varphi
\right)\\
&\stackrel{\text{lemma \ref{Fn}}}{=}
\frac{1}{\alpha^{\kappa_\alpha+1} }
\left(Alt(\kappa_\alpha+1)
\prod\limits_{m=1}^{\kappa_\alpha+1}P_{\alpha,i}^{(m)}\pi_\varphi
\left(\left(m\;\, \kappa_\alpha+1\right) \right)\phi_{m,1}\left(U^*
\right)\xi_\varphi,
\prod\limits_{m=1}^{\kappa_\alpha+1}\phi_{m,1}\left(U^*
\right)\xi_\varphi \right)\\
&=\frac{1}{\alpha^{\kappa_\alpha+1} }
\left(Alt(\kappa_\alpha+1)
\prod\limits_{m=1}^{\kappa_\alpha+1}P_{\alpha,i}^{(m)}\phi_{m+\kappa_\alpha
+1,1}\left(U^* \right)\pi_\varphi \left(\left(m\;\,
\kappa_\alpha+1\right) \right)\xi_\varphi,
\prod\limits_{m=1}^{\kappa_\alpha+1}\phi_{m,1}\left(U^*
\right)\xi_\varphi \right)\\
&=\frac{1}{\alpha^{\kappa_\alpha+1} }
\left(Alt(\kappa_\alpha+1)
\prod\limits_{m=1}^{\kappa_\alpha+1} P_{\alpha,i}^{(m)}\pi_\varphi
\left(\left(m\;\, \kappa_\alpha+1\right) \right)\xi_\varphi,
\prod\limits_{m=1}^{\kappa_\alpha+1}\phi_{m+\kappa_\alpha
+1,1}\left(U^* \right)\phi_{m,1}\left(U^* \right)\xi_\varphi
\right)\\
&\leq\frac{1}{|\alpha|^{\kappa_\alpha+1} } \left\|
Alt(\kappa_\alpha+1) \prod\limits_{m=1}^{\kappa_\alpha+1}
P_{\alpha,i}^{(m)}\pi_\varphi \left(\left(m\;\,
\kappa_\alpha+1\right) \right)\xi_\varphi\right\|\\
&=\frac{1}{|\alpha|^{\kappa_\alpha+1} } \left(Alt(\kappa_\alpha+1)
\prod\limits_{m=1}^{\kappa_\alpha+1} P_{\alpha,i}^{(m)}\pi_\varphi
\left(\left(m\;\, \kappa_\alpha+1\right)
\right)\xi_\varphi,\pi_\varphi \left(\left(m\;\,
\kappa_\alpha+1\right) \right)\xi_\varphi \right)^{1/2}\\
&\stackrel{\mathfrak{S}_\infty\text{-centrality of } \varphi }{=}
\frac{1}{|\alpha|^{\kappa_\alpha+1} } \left(Alt(\kappa_\alpha+1)
\prod\limits_{m=1}^{\kappa_\alpha+1}
P_{\alpha,i}^{(m)}\xi_\varphi,\xi_\varphi
\right)^{1/2}\stackrel{(\ref{rel110})}{=}0.
\end{eqnarray*}
Hence, applying (\ref{rel109}), we have
\begin{eqnarray*}
\left(Alt(\kappa_\alpha+1)\cdot
\prod\limits_{m=1}^{\kappa_\alpha+1}P_{\beta,k}^{(m)} \xi_\varphi,
\prod\limits_{m=1}^{\kappa_\alpha+1}P_{\beta,k}^{(m)} \xi_\varphi
\right)\\
=|\beta|^{\kappa_\alpha +1 }\kappa_\beta \left(\kappa_\beta-1 \right)
\left(\kappa_\beta-2\right)\cdots\left(\kappa_\beta-\kappa_\alpha\right)=0.
\end{eqnarray*}
Therefore, $\kappa_\alpha \geq\kappa_\beta$. Similarly,
$\kappa_\alpha \leq\kappa_\beta$.
\end{proof}
\paragraph{The proof of theorem \ref{main}.}
We now describe the parameters
$\left(A,\rho\right)$ from paragraph
\ref{paragraph2.1} corresponding to $\varphi$.
First we describe the structure of $w^*$-algebra $\widetilde{P}_\pm^{(j)}
\mathfrak{A}_j$, \footnote{ see page \pageref{definition mathfrakAj} for
the definition of $\mathfrak{A}_j$} where $\widetilde{P}_\pm^{(j)}$ is the
orthogonal projection of $\left[\mathfrak{A}_j\xi_\varphi \right]$ onto $
\left[ \mathfrak{A}_jP_\pm^{(j)}\xi_\varphi\right]$ \footnote{$P_\pm^{(j)}$
is defined in proposition \ref{P_zero}}.
Let $\mathcal{C}_\pm^{(j)}$ be the center of
$\widetilde{P}_\pm^{(j)} \mathfrak{A}_j$. Denote by
$c(P)\in\mathcal{C}_\pm^{(j)}$ the central support of projection
$P\in\widetilde{P}_\pm^{(j)} \mathfrak{A}_j$. Let us prove that
\begin{eqnarray}\label{central111}
c \left( P_\pm^{(j)}\right)=\widetilde{P}_\pm^{(j)}.
\end{eqnarray}
Indeed, if $F=\widetilde{P}_\pm^{(j)}-c\left(P_\pm^{(j)}\right)$,
then for all $B\in\mathfrak{A}_j$ we have
$F BP_\pm^{(j)}\xi_\varphi$ $ =
BFP_\pm^{(j)}\xi_\varphi=0$. Therefore, $F=0$.
Since for any $\alpha \in \left\{{\rm Spectrum}\,\mathcal{O}_j\right\}\setminus 0$
there exists in $P_\alpha ^{(j)} \mathfrak{A}_jP_\alpha ^{(j)}$ a
finite collection $
\left\{P_{\alpha,i}^{(j)}\right\}_{i=1}^{n_\alpha }$ of {\it
minimal} projections with properties (\ref{full system}), the
$w^*$-algebra $P_\pm^{(j)}\mathfrak{A}_jP_\pm^{(j)}$ is
$*$-isomorphic to
a direct sum of full matrix algebras.
Thus, using (\ref{central111}), we find a collection $ \left\{F_m \right\}_{m=1}^N$ of pairwise
orthogonal projections from $\mathcal{C}_\pm^{(j)}$ such that
$F_m\cdot\widetilde{P}_\pm^{(j)} \mathfrak{A}_j\cdot F_m$ is a factor
of type ${\rm I}_{k_m}$. Denote
$F_m\cdot\widetilde{P}_\pm^{(j)} \mathfrak{A}_j\cdot
F_m$ by $\mathcal{M}_{k_m}$. That is, $P_\pm^{(j)}\mathfrak{A}_jP_\pm^{(j)}$ is
isomorphic to $\mathcal{M}_{k_1}\oplus \mathcal{M}_{k_2}\oplus\ldots\oplus\mathcal{M}_{k_N}$.
Let
$ \left\{e^{(m)}_{pq} \right\}_{p,q=1}^{k_m}$ be a system of matrix units of
$\mathcal{M}_{k_m}$.
Without loss of generality, we may assume that for certain $l_m\leq k_m$
\begin{eqnarray}\label{111}
\begin{split}
\bigcup\limits_m\left\{e^{(m)}_{pp}
\right\}_{p=1}^{l_m}\subset\bigcup\limits_{\alpha \in{\rm
Spectrum}\,\mathcal{O}_j,\,\alpha \neq
0}\left\{P_{\alpha,i}^{(j)}\right\}_{i=1}^{n_\alpha }\;\;\text{ and }\\
\left\{\bigcup\limits_m\left\{e^{(m)}_{pp}
\right\}_{p=l_m+1}^{k_m}\right\}\, \bigcap \, \left\{\bigcup\limits_{\alpha \in{\rm
Spectrum}\,\mathcal{O}_j,\,\alpha \neq
0}\left\{P_{\alpha,i}^{(j)}\right\}_{i=1}^{n_\alpha }\right\}=\emptyset.
\end{split}
\end{eqnarray}
By lemmas \ref{separating}, \ref{P_alpha} and propositions \ref{diff_Pa},
\ref{Prop33}, the minimal projections $\bigcup\limits_m\left\{e^{(m)}_{pp}
\right\}_{p=1}^{l_m}$ satisfy the following conditions:
\begin{itemize}
\item {\rm (a)} if $e^{(m)}_{pp}\cdot\mathcal{O}_j=\alpha_p\cdot
e^{(m)}_{pp}$, where $\alpha_p\in\left\{{\rm Spectrum}\,\mathcal{O}_j\right\}\setminus 0$, then
there exists a natural number $q_m$ such that
$\frac{\left(e^{(m)}_{pp}\xi_\varphi,\xi_\varphi\right)}{\left|\alpha_p\right|
}=q_m$ for all $p=1,2,\ldots, l_m$;
\item {\rm (b)} if $p\neq q $ then $\left(
e^{(m)}_{pq}\xi_\varphi,\xi_\varphi\right)=0$ for all
$p,q=1,2,\ldots, l_m$; $m=1,2,\ldots, N$.
\end{itemize}
Further, using (\ref{111}), for $p>l_m$ we have
\begin{eqnarray*}
e_{pp}^{(m)}\cdot P_0^{(j)}=e_{pp}^{(m)}.
\end{eqnarray*}
It follows from this and proposition \ref{P_zero} that
\begin{eqnarray}\label{0.112}
\left(e_{pq}^{(m)} \xi_\varphi,\xi_\varphi\right)=0\;\text{ for }\;\;
p=1,2,\ldots,l_m; \;\; q=l_m+1, l_m+2, \ldots,k_m.
\end{eqnarray}
Let us prove that
\begin{eqnarray}\label{0.113}
\left(e_{pq}^{(m)} \xi_\varphi,\xi_\varphi\right)=0\;\text{ for }\;\;
p,q=l_m+1,l_m+2,\ldots,k_m.
\end{eqnarray}
For this it suffices to prove the following equality:
\begin{eqnarray}\label{0.114}
\left(e_{pp}^{(m)} \xi_\varphi,\xi_\varphi\right)=0\;\text{ for }\;\;
p=l_m+1,l_m+2,\ldots,k_m.
\end{eqnarray}
Fix $p>l_m$. Applying proposition \ref{fioncycle}, we have
\begin{eqnarray*}
&\left(e_{pp}^{(m)} \xi_\varphi,\xi_\varphi\right)=\frac{1}{\alpha_1 }
\left(e_{p1}^{(m)} \cdot\mathcal{O}_j\cdot e_{1p}^{(m)}\xi_\varphi,
\xi_\varphi \right)\\
&\stackrel{\text{proposition \ref{fioncycle}}}{=}
\frac{1}{\alpha_1 }\left(\pi_\varphi \left(\left(j\,\;j+1 \right) \right)\cdot
\phi_{j+1,j}\left(e_{p1}^{(m)}\right)\cdot e_{1p}^{(m)}\xi_\varphi,
\xi_\varphi \right)\\
& =\frac{1}{\alpha_1 }
\left(\pi_\varphi \left(\left(j\,\;j+1 \right) \right)\cdot e_{1p}^{(m)}
\cdot\phi_{j+1,j}\left(e_{p1}^{(m)}\right)\xi_\varphi,
\xi_\varphi \right)\\
&=\frac{1}{\alpha_1 }
\left(
\phi_{j+1,j}\left(e_{1p}^{(m)}\right)\cdot\pi_\varphi \left(\left(j\,\;j+1 \right) \right)
\cdot\phi_{j+1,j}\left(e_{p1}^{(m)}\right)\xi_\varphi,
\xi_\varphi \right)\\
&=\frac{1}{\alpha_1 }
\left(\pi_\varphi \left(\left(j\;\,n \right) \right)\cdot
\phi_{j+1,j}\left(e_{1p}^{(m)}\right)\cdot\pi_\varphi \left(\left(j\,\;j+1 \right) \right)
\cdot\phi_{j+1,j}\left(e_{p1}^{(m)}\right)\cdot
\pi_\varphi \left(\left(j\;\,n \right) \right)\xi_\varphi,
\xi_\varphi \right)\quad\left(\text{for any }n>j+1\right)\\
&=\frac{1}{\alpha_1 }
\left(
\phi_{j+1,j}\left(e_{1p}^{(m)}\right)\cdot\pi_\varphi \left(\left(j+1\,\;n \right) \right)
\cdot\phi_{j+1,j}\left(e_{p1}^{(m)}\right)\xi_\varphi,
\xi_\varphi \right)\\
&=\lim\limits_{n\to\infty}\frac{1}{\alpha_1 }
\left(
\phi_{j+1,j}\left(e_{1p}^{(m)}\right)\cdot\pi_\varphi \left(\left(j+1\,\;n \right) \right)
\cdot\phi_{j+1,j}\left(e_{p1}^{(m)}\right)\xi_\varphi,
\xi_\varphi \right)\\
&=\frac{1}{\alpha_1 }
\left(
\phi_{j+1,j}\left(e_{1p}^{(m)}\right)\cdot\mathcal{O}_{j+1}
\cdot\phi_{j+1,j}\left(e_{p1}^{(m)}\right)\xi_\varphi,
\xi_\varphi \right)\\
&=\frac{1}{\alpha_1 }\left(e_{1p}^{(m)}\cdot
\mathcal{O}_j\cdot e_{p1}^{(m)}\xi_\varphi,\xi_\varphi\right)\stackrel{(\ref{111})}{=}0.
\end{eqnarray*}
Thus (\ref{0.114}) and (\ref{0.113}) are proved.
Define $\widehat{\varphi }\in \pi_\varphi \left(G\right)^{\prime\prime}_*$
by $\widehat{\varphi}(a)=\left(a\xi_\varphi,\xi_\varphi\right)$. Denote by
$\mathbb{M}_{q_m}$ the algebra of all complex matrices and put $\mathcal{N}_m=
\mathcal{M}_{k_m}\otimes \mathbb{M}_{q_m}$,
$A^{(m)}=\sum\limits_{p=1}^{l_m}\alpha_p\cdot e_{pp}^{(m)}\in\mathcal{M}_{k_m}$
(see property {\rm (a)} and (\ref{111})). Consider $w^*$-algebra
$\widetilde{\mathfrak{A}}_j=\left(\bigoplus\limits_{m=1}^N
F_m\mathfrak{A}_jF_m\otimes\mathbb{M}_{q_m}\right)\bigoplus
\left(I-\widetilde{P}_\pm^{(j)}\right)\mathfrak{A}_j$.
Observe that there exists a natural embedding
\begin{eqnarray}
\mathfrak{A}_j\ni a\stackrel{\mathfrak{i}}{\mapsto}
\sum\limits_{m=1}^N \left(F_maF_m\otimes I \right)+
\left(I-\widetilde{P}_\pm^{(j)}\right)a\in \widetilde{\mathfrak{A}}_j.
\end{eqnarray}
Now, using properties
{\rm (a)}-{\rm (b)}, (\ref{0.112}) and
(\ref{0.113}), we have for all
$a\in \mathfrak{A}_j$
\begin{eqnarray}\label{0.116}
\widehat{\varphi }
\left(a \right)=\sum_{m=1}^N {\rm Tr}_m\left(a\left|A^{(m)} \right|\otimes I \right)+
\left(a\left(I-\widetilde{P}_\pm^{(j)} \right)\xi_\varphi,\xi_\varphi\right),
\end{eqnarray}
where ${\rm Tr}_m$ is the ordinary trace\footnote{If $e$ is a minimal
projection in $\mathcal{N}_m$, then ${\rm Tr}_m(e)=1$.}
on $\mathcal{N}_m$.
We now define the parameters $\left\{\mathcal{H},A,\rho,\hat{\xi}\right\}$ from
paragraph \ref{paragraph2.1} such that
\begin{eqnarray}\label{0.117}
\varphi =\psi_A^\rho\;\;(\text{ see proposition \ref{Prop11a}}).
\end{eqnarray}
For this purpose we fix in each $\mathcal{N}_m=
\mathcal{M}_{k_m}\otimes \mathbb{M}_{q_m}$ a minimal projection $e_m$. Define
the state $f$ on $\widetilde{\mathfrak{A}}_j$ by
\begin{eqnarray}
f\left(\widetilde{a}\right)=\sum\limits_{m=1}^N{\rm Tr}_m\left(
e_m\widetilde{a} e_m\right) \;\;\;
\left(\widetilde{a}\in\widetilde{\mathfrak{A}}_j \right).
\end{eqnarray}
Let $\left(R_f,\mathcal{H}_f,\xi_f\right)$ be the corresponding GNS-representation of
$\widetilde{\mathfrak{A}}_j$. Now we define $\mathcal{H}$ by
\begin{eqnarray}
\mathcal{H}=\mathcal{H}_f\oplus
\left[\left(I-\widetilde{P}_\pm^{(1)}
\right)\mathfrak{A}_1\xi_\varphi \right]
\oplus
\left[\left(I-\widetilde{P}_\pm^{(2)}\right)\mathfrak{A}_2\xi_\varphi
\right]\oplus\ldots.
\end{eqnarray}
The representation $\rho$ acts on $\eta_p\in
\left[\left(I-\widetilde{P}_\pm^{(p)}\right)\mathfrak{A}_p\xi_\varphi
\right]$ as follows:
\begin{eqnarray}
\rho\left(\gamma\right)\eta_p= \pi_\varphi
\left(\left(e,\ldots,\stackrel{p-th}{\gamma},e,\ldots \right) \right)\eta_p.
\end{eqnarray}
If $\eta\in\mathcal{H}_f$ then
\begin{eqnarray}
\rho\left(\gamma\right)\eta=R_f\circ\mathfrak{i}\left(
\pi_\varphi
\left(\left(e,\ldots,\stackrel{j-th}{\gamma},e,\ldots \right)
\right)\right)\eta.
\end{eqnarray}
The operator $A$ is defined by
\begin{eqnarray}
A\eta=\left\{\begin{array}{ll}
R_f\circ\mathfrak{i}\left(\sum\limits_{m=1}^NA^{(m)} \right)\eta,
&\text{ if }\;\eta\in\mathcal{H}_f,\\
0,&\text{ if }\;\eta\in\left[\left(I-\widetilde{P}_\pm^{(p)}\right)\mathfrak{A}_p\xi_\varphi
\right].\end{array}\right.
\end{eqnarray}
In the case $\sum\limits_{\alpha \in{\rm
Spectrum}\,\mathcal{O}_j,\,\alpha \neq
0}|\alpha|\nu\left(P_\alpha^{(j)}\right)=\sum\limits_{m=1}^{N}
\sum\limits_{p=1}^{k_m}\left|\alpha_p\right|<1$
(see corollary \ref{Co29} and property {\rm(a)}) the vector $\hat{\xi}$ is defined
by
\begin{eqnarray}
\hat{\xi}=\frac{\left(I-\widetilde{P}_\pm^{(1)}\right)\xi_\varphi }{
\left\| \left(I-\widetilde{P}_\pm^{(1)}\right)\xi_\varphi \right\|}.
\end{eqnarray}
Now it follows from (\ref{0.116}) that for $a\in\mathfrak{A}_j$
\begin{eqnarray}
\begin{split}
&\widehat{\varphi}(a) ={\rm Tr}\left(R_f\left(\mathfrak{i}(a)\right)\cdot
\left|A\right| \right)\\
&+\left\| \left(I-\widetilde{P}_\pm^{(1)}\right)\xi_\varphi \right\|
\left(\pi_\varphi\left(\left(1\;\,j \right) \right)\cdot a\cdot\pi_\varphi\left(\left(1\;\,j \right) \right)\hat{\xi},\hat{\xi}\right).
\end{split}
\end{eqnarray}
Hence, applying lemma \ref{lemma22}, proposition \ref{fioncycle} and the
definition of $\psi_A^\rho$, we obtain equality (\ref{0.117}).
In particular, lemma \ref{lemma27aa} implies property {\rm(3)}
from paragraph \ref{paragraph2.1}. \qed
\section{Introduction and results}
\subsection{Introduction.}
The correlation length plays a fundamental role in our understanding of the properties of a statistical mechanical system.
It measures the typical distance over which the microscopic degrees of freedom are strongly correlated.
The usual way of defining it precisely is as the inverse of the rate of exponential decay of the 2-point function.
In systems in which the interactions have an infinite range, the correlation length can only be finite if these interactions decay at least exponentially fast with the distance.
Such a system is then said to have \emph{short-range} interactions.\footnote{While the terminology ``short-range'' \textit{vs.} ``long-range'' is rather imprecise, different authors meaning quite different things by these terms, there is agreement on the fact that interactions decreasing exponentially fast with the distance are short-range.}
It is often expected that systems with short-range interactions all give rise to qualitatively similar behavior.
This then serves as a justification for considering mainly systems with nearest-neighbor interactions as a (hopefully generic) representative of this class.
As a specific example, let us briefly discuss one-dimensional systems with short-range interactions.
For those systems, the pressure as well as all correlation functions are always analytic functions of the interaction parameters.
A proof for interactions decaying at least exponentially fast was given by Ruelle~\cite{Ruelle-1975}, while the general case of interactions with a finite first moment was settled by Dobrushin~\cite{Dobrushin-1974} (see also~\cite{Cassandro+Olivieri-1981}).
This is known \emph{not} to be the case, at least for some systems, for interactions decaying
even more slowly with the distance~\cite{Dyson-1969, Frohlich+Spencer-1982}.
\medskip
In the present work, we consider a variety of lattice systems with exponentially decaying interactions.
We show that, in contrast to the expectation above, such systems can display qualitatively different behavior \emph{depending on the properties of the sub-exponential corrections}.
Under weak assumptions, the correlation length associated with systems whose interactions decay faster than any exponential tends to zero as the temperature tends to infinity.
In systems with exponentially decaying interactions, however, this cannot happen: indeed, the rate of exponential decay of the 2-point function can never be larger than the rate of decay of the interaction.
This suggests that, as the temperature becomes very large, one of the two following scenarios should occur: either there is a temperature \(T_{\mathrm{sat}}\) above which the correlation length becomes constant, or the correlation length asymptotically converges, as \(T\to\infty\), to the inverse of the rate of exponential decay of the interaction.
Notice that when the first alternative happens, the correlation length cannot be an analytic function of the temperature.
It turns out that both scenarios described above are possible.
In fact, both can be realized in the same system by considering the 2-point function in different directions.
What determines whether saturation (and thus non-analyticity) occurs is the correction to the exponential decay of the interactions.
We characterize explicitly the prefactors that give rise to saturation of the correlation length as a function of the relevant parameter (inverse temperature \(\beta\), magnetic field \(h\), etc).
Our analysis also applies to one-dimensional systems, thereby showing that the correlation length of one-dimensional systems with short-range interactions can exhibit a non-analytic behavior, in sharp contrast with the standard analyticity results mentioned above.
We also relate the change of behavior of the correlation length to a violation of the mass gap condition in the theory of correlations developed in the early 20th Century by Ornstein and Zernike, and explain how this affects the behavior of the prefactor to the exponential decay of the 2-point function.
\subsection{Convention and notation}
In this paper, \(|\cdot|\) denotes some arbitrary norm on \(\mathbb{R}^d\), while we reserve \(\|\cdot\|\) for the Euclidean norm.
The unit sphere in the Euclidean norm is denoted \(\mathbb{S}^{d-1}\). Given \(x\in\mathbb{R}^d\), \([x]\) denotes the (unique) point in \(\mathbb{Z}^d\) such that \(x\in [x]+[-\frac12,\frac12)^d\).
To lighten notation, when an element \(x\in\mathbb{R}^d\) is treated as an element of \(\mathbb{Z}^d\), it means that \([x]\) is considered instead.
\subsection{Framework and models}
For simplicity, we shall always work on \(\mathbb{Z}^d\), but the methods developed in this paper should extend in a straightforward manner to more general settings.
We consider the case where the interaction strength between two lattice sites \(i,j\) is given by \(J_{ij}=J_{i-j}=\psi(i-j)e^{-|i-j|}\), where \(|\cdot|\) is some norm on \(\mathbb{R}^d\); we shall always assume that both \(\psi\) and \(|\cdot|\) are invariant under lattice symmetries.
We will suppose \(\psi(y) >0\) for all \(y\neq 0\) to avoid technical issues.
We moreover require that \(\psi\) is a sub-exponential correction, that is,
\begin{equation}
\lim_{|y|\to\infty} \frac{1}{|y|} \log(\psi(y)) =0.
\label{eq:psi_subexp}
\end{equation}
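To fix ideas, note that condition~\eqref{eq:psi_subexp} allows, for instance, polynomial as well as stretched-exponential corrections,
\begin{equation*}
\psi(y) = C|y|^{-\alpha} \quad (C>0,\ \alpha\geq 0),
\qquad
\psi(y) = e^{-c|y|^{\gamma}} \quad (c>0,\ 0<\gamma<1),
\end{equation*}
since in both cases \(\frac{1}{|y|}\log\psi(y)\to 0\) as \(|y|\to\infty\).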
The approach developed in this work is rather general and will be illustrated on various lattice spin systems and percolation models.
We will focus on suitably defined \emph{2-point functions} \(G_\lambda(x,y)\) (sometimes truncated), where \(\lambda\) is some external parameter.
We define now the various models that will be considered and give, in each case, the corresponding definition of \(G_\lambda\) and of the parameter \(\lambda\).
The following notation will occur regularly:
\begin{gather*}
\bar{J} = \sum_{x\in\mathbb{Z}^d} J_{0x}, \quad P(x) = J_{0x}/\bar{J}.
\end{gather*}
By convention, we set \(\bar{J} = 1\) (and thus \(P(x) = J_{0x}\)), since the normalization can usually be absorbed into the inverse temperature or in a global scaling of the field, and assume that \(J_{00} =0\) (so \(\bar{J} = \sum_{x\in\mathbb{Z}^d\setminus\{0\}} J_{0x} = 1\)).
All models will come with a parameter (generically denoted \(\lambda\)). They also all have a natural transition point \(\lambda_{\mathrm{c}}\) (possibly at infinity) where the model ceases to be defined or undergoes a drastic change of behavior.
We will always work in a regime \(\lambda\in[0, \lambda_{\mathrm{exp}})\), where \(\lambda_{\mathrm{exp}}\leq \lambda_{\mathrm{c}}\) is the point at which (quasi-)long-range order occurs for the model (see~\eqref{eq:lambdaqlr_def}). For all models under consideration, it is conjectured that \(\lambda_{\mathrm{exp}} = \lambda_{\mathrm{c}}\).
\subsubsection{KRW model}
A walk is a finite sequence of vertices \(\gamma=(\gamma_0, \dots, \gamma_n)\) in \(\mathbb{Z}^d\). The length of \(\gamma\) is \(\abs{\gamma} =n\). Let \(\mathsf{W}(x,y)\) be the set of (variable length) walks with \(\gamma_0=x,\gamma_{\abs{\gamma}} = y\).
The 2-point function of the killed random walk (KRW) is defined by
\begin{equation}
G^{\mathrm{KRW}}_{\lambda}(x,y) = \sum_{\gamma\in\mathsf{W}(x,y)} \prod_{i=1}^{\abs{\gamma}} \lambda J_{\gamma_{i-1} \gamma_i}.
\end{equation}
\(\lambda_{\mathrm{c}}\) is defined by
\begin{equation*}
\lambda_{\mathrm{c}}= \sup\Bsetof{\lambda\geq 0}{\sum_{x\in\mathbb{Z}^d} G^{\mathrm{KRW}}_{\lambda}(0,x) <\infty}.
\end{equation*}
Our choice of normalization for \(J\) implies that \(\lambda_{\mathrm{c}}=1\).
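Since \(\bar{J}=1\), each convolution power \(J^{*n}\) has total mass \(\bar{J}^n=1\), so the susceptibility \(\sum_x G^{\mathrm{KRW}}_{\lambda}(0,x)=\sum_{n\geq 0}\lambda^n=(1-\lambda)^{-1}\) diverges exactly at \(\lambda=1\). The following numerical sketch illustrates this (the choices \(d=1\), \(\psi\equiv 1\) and the truncation radius \(R\) are ours, for illustration only):

```python
import numpy as np

# Illustration only: d = 1, psi == 1, and J truncated to |y| <= R, then
# renormalized so that bar{J} = 1 holds exactly on the truncated support.
R, n_max = 40, 60
ys = np.arange(-R, R + 1)
J = np.exp(-np.abs(ys)).astype(float)
J[R] = 0.0          # J_{00} = 0
J /= J.sum()        # enforce the convention bar{J} = 1

def krw_susceptibility(lam):
    """sum_x G^KRW_lam(0,x) = sum_n lam^n * sum_x J^{*n}(0,x)."""
    total, power = 1.0, J.copy()         # n = 0 contributes delta_{0x}
    for n in range(1, n_max):
        total += lam ** n * power.sum()  # sum_x J^{*n}(0,x) = bar{J}^n = 1
        power = np.convolve(power, J)    # next convolution power
    return total

print(krw_susceptibility(0.5))  # geometric series: ~ 1/(1 - 0.5) = 2
```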
\subsubsection{SAW model}
Self-Avoiding Walks are finite sequences of vertices \((\gamma_0, \dots, \gamma_n)\) in \(\mathbb{Z}^d\) with at most one instance of each vertex (that is, \(i\neq j\implies\gamma_i\neq\gamma_j\)).
Denote by \(\abs{\gamma} = n\) the length of the walk.
Let \(\mathsf{SAW}(x,y)\) be the set of (variable length) SAW with \(\gamma_0=x,\gamma_{\abs{\gamma}}=y\).
We then let
\begin{equation}
G^{\mathrm{SAW}}_{\lambda}(x,y) = \sum_{\gamma\in\mathsf{SAW}(x,y)} \prod_{i=1}^{\abs{\gamma}} \lambda J_{\gamma_{i-1} \gamma_i}.
\end{equation}
\(\lambda_{\mathrm{c}}\) is defined by
\begin{equation*}
\lambda_{\mathrm{c}}= \sup\Bsetof{\lambda\geq 0}{\sum_{x\in\mathbb{Z}^d} G^{\mathrm{SAW}}_{\lambda}(0,x) <\infty}.
\end{equation*}
Since \(G^{\mathrm{SAW}}_{\lambda}(x,y) \leq G^{\mathrm{KRW}}_{\lambda}(x,y)\), it follows that \(\lambda_{\mathrm{c}}^{\mathrm{SAW}}\geq \lambda_{\mathrm{c}}^{\mathrm{KRW}} = 1\).
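The domination \(G^{\mathrm{SAW}}_{\lambda}\leq G^{\mathrm{KRW}}_{\lambda}\) holds termwise, since every SAW is a walk. This can be checked by brute-force enumeration; the sketch below is a simplified illustration of ours (nearest-neighbor steps on \(\mathbb{Z}^2\), that is \(J_{xy}=\IF{\normI{x-y}=1}\), with walks truncated at a maximal length):

```python
from itertools import product

# Simplified illustration: nearest-neighbor steps on Z^2 (J_{xy} = 1 iff
# |x - y|_1 = 1), and both sums truncated at walk length n_max.
STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def two_point(lam, target, n_max, self_avoiding):
    """Truncated 2-point function: sum over walks 0 -> target of lam^length."""
    total = 0.0
    for n in range(n_max + 1):
        for steps in product(STEPS, repeat=n):
            pos, visited, ok = (0, 0), {(0, 0)}, True
            for dx, dy in steps:
                pos = (pos[0] + dx, pos[1] + dy)
                if self_avoiding and pos in visited:
                    ok = False       # vertex repeated: not a SAW
                    break
                visited.add(pos)
            if ok and pos == target:
                total += lam ** n
    return total

g_saw = two_point(0.2, (1, 1), 6, True)
g_krw = two_point(0.2, (1, 1), 6, False)
print(g_saw, g_krw)  # every SAW is a walk, so g_saw <= g_krw
```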
\subsubsection{Ising model}
The Ising model at inverse temperature \(\beta\geq 0\) and magnetic field \(h\in\mathbb{R}\) on \(\mathbb{Z}^d\) is the probability measure on \(\{-1,+1\}^{\mathbb{Z}^d}\) given by the weak limit of the finite-volume measures (for \(\sigma\in\{-1,+1\}^{\Lambda_N}\) and \(\Lambda_N=[-N,N]^{d}\cap\mathbb{Z}^{d}\))
\[
\mu^{\mathrm{Ising}}_{\Lambda_N;\beta,h}(\sigma) = \frac{1}{Z_{\Lambda_N;\beta,h}^{\mathrm{Ising}}} e^{-\beta\mathscr{H}_N(\sigma)},
\]
with Hamiltonian
\[
\mathscr{H}_N(\sigma) = -\sum_{\{i,j\}\subset\Lambda_N } J_{ij} \sigma_i\sigma_j - h\sum_{i\in\Lambda_N}\sigma_i
\]
and partition function \(Z_{\Lambda_N;\beta,h}^{\mathrm{Ising}}\).
The limit \(\mu^{\mathrm{Ising}}_{\beta,h}=\lim_{N\to\infty}\mu^{\mathrm{Ising}}_{\Lambda_N;\beta,h}\) is always well defined and agrees with the unique infinite-volume measure whenever \(h\neq 0\) or \(\beta<\beta_{\mathrm{c}}\), the critical point of the model.
For this model, we will consider two different situations, depending on which parameter we choose to vary:
\begin{itemize}
\item When \(h=0\), we consider
\begin{equation}
G^{\mathrm{Ising}}_{\beta}(x,y) = \mu^{\mathrm{Ising}}_{\beta,0}(\sigma_x\sigma_y)
\quad\text{ and }\quad
\lambda = \beta.
\end{equation}
In this case, \(\lambda_{\mathrm{c}} = \beta_{\mathrm{c}}(d)\) marks the boundary of the high-temperature regime (\(\lim_{\norm{x}\to\infty}\mu^{\mathrm{Ising}}_{\beta,0}(\sigma_0\sigma_x) =0\) for \(\beta< \beta_{\mathrm{c}}\) and is \(>0\) for \(\beta>\beta_{\mathrm{c}}\)).
\item When \(h>0\), we allow arbitrary values of \(\beta\geq 0\) and consider
\begin{equation}
G^{\mathrm{IPF}}_{\beta,h}(x,y) = \mu^{\mathrm{Ising}}_{\beta,h}(\sigma_x\sigma_y) - \mu^{\mathrm{Ising}}_{\beta,h}(\sigma_x)\mu^{\mathrm{Ising}}_{\beta,h}(\sigma_y)
\quad\text{ and }\quad
\lambda = e^{-h}.
\end{equation}
Of course, here \(\lambda_{\mathrm{c}}=1\).
The superscript \(\mathrm{IPF}\) stands for ``Ising with a Positive Field''.
\end{itemize}
\subsubsection{Lattice GFF}
The lattice Gaussian Free Field with mass \(m\geq 0\) on \(\mathbb{Z}^d\) is the probability measure on \(\mathbb{R}^{\mathbb{Z}^d}\) given by the weak limit of the finite-volume measures (for \(\sigma\in\mathbb{R}^{\Lambda_N}\))
\[
\dd\mu^{\mathrm{GFF}}_{m,\Lambda_N}(\sigma) = \frac{1}{Z_{m,\Lambda_N}^{\mathrm{GFF}}} e^{-\mathscr{H}_N(\sigma)-m^2\sum_{i\in\Lambda_N}\sigma_i^2 } \,\dd\sigma,
\]
with Hamiltonian
\[
\mathscr{H}_N(\sigma) = \sum_{\{i,j\}\subset\Lambda_N } J_{ij} (\sigma_i-\sigma_j)^2
\]
and partition function \(Z_{m,\Lambda_N}^{\mathrm{GFF}}\). Above, \(\dd\sigma\) denotes the Lebesgue measure on \(\mathbb{R}^{\Lambda_N}\).
The limit \(\mu^{\mathrm{GFF}}_{m}=\lim_{N\to\infty}\mu^{\mathrm{GFF}}_{m,\Lambda_N}\) exists and is unique for any \(m>0\).
When considering the measure at \(m=0\), we mean the measure \(\mu^{\mathrm{GFF}}=\lim_{m\downarrow 0} \mu^{\mathrm{GFF}}_{m}\).
The latter limit exists when \(d\geq 3\), but not in dimensions \(1\) and \(2\).
For this model, we define
\begin{equation}
G^{\mathrm{GFF}}_{(1+m^2)^{-1}}(x,y) = \mu^{\mathrm{GFF}}_{m}(\sigma_x\sigma_y),\quad \lambda = \frac{1}{1+ m^2}.
\end{equation}
The 2-point function of the GFF has a nice probabilistic interpretation: let \(P\) be the probability measure on \(\mathbb{Z}^d\) given by \(P(x)=J_{0x}\).
Let \(P_x^{m}=P_{J,x}^m\) denote the law of the random walk started at \(x\) with killing \(\frac{m^2}{1+m^2}\) and \textit{a priori} i.i.d.\xspace steps of law \(P\) and let \(E_x^m\) be the corresponding expectation.
Let \(X_i\) be the \(i\)th step and \(S_0=x,\ S_k= S_{k-1}+ X_k\) be the position of the walk at time \(k\).
Denote by \(T\) the time of death of the walk. One has \(P^m(T=k)=(1+m^2)^{-k} m^2\).
The 2-point function can then be expressed as
\begin{equation}\label{eq:GFF_RW_Rep_Cov}
G_{\lambda}^{\mathrm{GFF}}(x,z) = \frac{1}{1+m^2}E_x^m\Big[\sum_{k=0}^{T-1} \IF{S_k=z}\Big].
\end{equation}
Thanks to the normalization \(\bar{J}=1\), it is thus directly related to the \(\mathrm{KRW}\) via the identity
\begin{equation}\label{eq:GFF_to_KRW}
G_{\lambda}^{\mathrm{GFF}}(x,z) = \lambda G_{\lambda}^{\mathrm{KRW}}(x,z).
\end{equation}
In particular, \(\lambda_{\mathrm{c}} = 1\) (which corresponds to \(m=0\)) and
\(\sup_{x\in\mathbb{Z}^d} G_{\lambda}^{\mathrm{GFF}}(0,x) < \infty\) for all \(\lambda \in [0,\lambda_{\mathrm{c}})\) in any dimension.
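At the level of finite matrices, identity~\eqref{eq:GFF_to_KRW} reduces to \(\lambda(I-\lambda J)^{-1} = ((1+m^2)I - J)^{-1}\) with \(\lambda=(1+m^2)^{-1}\), since \(G^{\mathrm{KRW}}_{\lambda}=\sum_{n\geq 0}(\lambda J)^n=(I-\lambda J)^{-1}\). The following sketch checks this numerically (the finite cycle \(\mathbb{Z}_N\) with periodic couplings is our choice, so that \(\bar{J}=1\) holds exactly):

```python
import numpy as np

# Finite-volume sketch on the cycle Z_N (periodic), so that bar{J} = 1
# holds exactly. We check the matrix identity behind eq. (GFF_to_KRW):
#   lambda * (I - lambda J)^{-1} = ((1 + m^2) I - J)^{-1},  lambda = 1/(1+m^2).
N, m = 40, 0.3
dist = np.minimum(np.arange(N), N - np.arange(N))  # graph distance on the cycle
row = np.exp(-dist.astype(float))
row[0] = 0.0                                       # J_{00} = 0
row /= row.sum()                                   # bar{J} = 1
J = np.array([np.roll(row, k) for k in range(N)])  # circulant coupling matrix

lam = 1.0 / (1.0 + m ** 2)
G_krw = np.linalg.inv(np.eye(N) - lam * J)         # Neumann series sum_n (lam J)^n
G_gff = np.linalg.inv((1.0 + m ** 2) * np.eye(N) - J)
print(np.max(np.abs(lam * G_krw - G_gff)))         # ~ 0
```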
\subsubsection{Potts model and FK percolation}
The \(q\)-state Potts model at inverse temperature \(\beta\geq 0\) on \(\mathbb{Z}^d\) with free boundary condition is the probability measure on \(\{1, 2, \dots, q\}^{\mathbb{Z}^d}\) (\(q\geq 2\)) given by the weak limit of the finite-volume measures (for \(\sigma\in\{1, \dots, q\}^{\Lambda_N}\))
\[
\mu^{\mathrm{Potts}}_{\Lambda_N;\beta,q}(\sigma) = \frac{1}{Z_{\Lambda_N;\beta,q}^{\mathrm{Potts}}} e^{-\beta\mathscr{H}_N(\sigma)}
\]
with Hamiltonian
\[
\mathscr{H}_N(\sigma) = -\sum_{\{i,j\}\subset\Lambda_N } J_{ij} \IF{\sigma_i=\sigma_j}
\]
and partition function \(Z_{\Lambda_N;\beta,q}^{\mathrm{Potts}}\).
We write \(\mu^{\mathrm{Potts}}_{\beta,q} =\lim_{N\to\infty} \mu^{\mathrm{Potts}}_{\Lambda_N;\beta,q}\); this limit can be shown to exist.
From now on, we omit \(q\) from the notation, as in our study \(q\) remains fixed, while \(\beta\) varies.
For this model, we consider
\begin{equation}
G^{\mathrm{Potts}}_{\beta}(x,y) = \mu^{\mathrm{Potts}}_{\beta}(\IF{\sigma_x=\sigma_y})- 1/q
\quad\text{ and }\quad
\lambda = \beta.
\end{equation}
As in the Ising model, we are interested in the regime \(\beta < \beta_{\mathrm{c}}\), where \(\beta_{\mathrm{c}}\) is the inverse temperature above which long-range order occurs (that is, \(\inf_{x}G^{\mathrm{Potts}}_{\beta}(0,x)>0\) for all \(\beta>\beta_{\mathrm{c}}\), see below). We thus again have \(\lambda_{\mathrm{c}}=\beta_{\mathrm{c}}(q,d)\).
One easily checks that the \(2\)-state Potts model at inverse temperature \(\beta\) corresponds to the Ising model (with \(h=0\)) at inverse temperature \(\beta/2\).
Intimately related to the Potts model is the FK percolation model.
The latter is a measure on edge sub-graphs of \((\mathbb{Z}^d, E_d)\), where \(E_d=\bigl\{\{i,j\}\subset\mathbb{Z}^d\bigr\}\), depending on two parameters \(\beta\in\mathbb{R}_{\geq 0}\) and \(q\in\mathbb{R}_{>0}\), obtained as the weak limit of the finite-volume measures
\begin{equation}
\Phi^{\mathrm{FK}}_{\Lambda_N;\beta,q}(\omega) = \frac{1}{Z^{\mathrm{FK}}_{\Lambda_N;\beta,q}} \prod_{\{i,j\}\in\omega}(e^{\beta J_{ij}}-1) q^{\kappa(\omega)},
\end{equation}
where \(\kappa(\omega)\) is the number of connected components in the graph with vertex set \(\Lambda_N\) and edge set \(\omega\) and \(Z^{\mathrm{FK}}_{\Lambda_N;\beta,q}\) is the partition function.
In this paper, we always assume that \(q\geq 1\). We use the superscript \(\mathrm{Bern}\) for the case \(q=1\) (Bernoulli percolation).
When \(q\in\mathbb{N}\) with \(q\geq 2\), one has the correspondence
\begin{equation}
\label{eq:Potts_FK_Corresp}
\mu^{\mathrm{Potts}}_{\beta,q}(\IF{\sigma_x=\sigma_y})- \frac{1}{q} = \frac{q-1}{q} \, \Phi^{\mathrm{FK}}_{\beta,q}(x\leftrightarrow y).
\end{equation}
For the FK percolation model, we consider
\begin{equation}
G^{\mathrm{FK}}_{\beta}(x,y) = \Phi^{\mathrm{FK}}_{\beta,q}(x\leftrightarrow y)
\quad\text{ and }\quad
\lambda = \beta,
\end{equation}
where \(\{x\leftrightarrow y\}\) is the event that \(x\) and \(y\) belong to the same connected component.
As for the Potts model, \(\lambda_{\mathrm{c}}=\beta_{\mathrm{c}}(q,d)\); here, this corresponds to the value at which the percolation transition occurs.
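The correspondence~\eqref{eq:Potts_FK_Corresp} can be verified exactly in finite volume by enumerating both sides. A sketch (the toy instance, a triangle with arbitrarily chosen couplings, is ours):

```python
import math
from itertools import product

# Toy instance: triangle graph on vertices {0, 1, 2}, arbitrary couplings.
q, beta = 3, 0.7
edges = [(0, 1), (1, 2), (0, 2)]
J = {(0, 1): 1.0, (1, 2): 0.5, (0, 2): 0.25}
x, y = 0, 2

# Potts side: enumerate the q^3 spin configurations.
Z_potts = corr = 0.0
for sigma in product(range(q), repeat=3):
    w = math.exp(beta * sum(J[e] for e in edges if sigma[e[0]] == sigma[e[1]]))
    Z_potts += w
    if sigma[x] == sigma[y]:
        corr += w
potts_side = corr / Z_potts - 1.0 / q

def cluster_roots(omega, n=3):
    """Union-find roots of the graph (vertices 0..n-1, edge set omega)."""
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for a, b in omega:
        parent[find(a)] = find(b)
    return [find(v) for v in range(n)]

# FK side: enumerate the 2^|edges| edge configurations; kappa(omega) is the
# number of distinct cluster roots.
Z_fk = conn = 0.0
for mask in product([0, 1], repeat=len(edges)):
    omega = [e for e, keep in zip(edges, mask) if keep]
    roots = cluster_roots(omega)
    w = q ** len(set(roots))
    for e in omega:
        w *= math.exp(beta * J[e]) - 1.0
    Z_fk += w
    if roots[x] == roots[y]:
        conn += w
fk_side = (q - 1) / q * conn / Z_fk

print(potts_side, fk_side)  # the two sides agree
```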
\subsubsection{XY model}
The XY model at inverse temperature \(\beta\geq 0\) on \(\mathbb{Z}^d\) is the probability measure on \((\mathbb{S}^{1})^{\mathbb{Z}^d}\) given by the weak limit of the finite-volume measures (for \(\theta\in[0,2\pi)^{\Lambda_N}\))
\[
\dd\mu^{\mathrm{XY}}_{\Lambda_N;\beta}(\theta) = \frac{1}{Z_{\Lambda_N;\beta}^{\mathrm{XY}}} e^{-\beta\mathscr{H}_N(\theta)}\,\dd\theta
\]
with Hamiltonian
\[
\mathscr{H}_N(\theta) = -\sum_{\{i,j\}\subset\Lambda_N } J_{ij}\cos(\theta_i-\theta_j)
\]
and partition function \(Z_{\Lambda_N;\beta}^{\mathrm{XY}}\).
In this case, we consider
\begin{equation}
G_{\beta}^{\mathrm{XY}}(x,y) = \mu^{\mathrm{XY}}_{\beta}\big(\cos(\theta_x-\theta_y)\big)
\quad\text{ and }\quad
\lambda = \beta.
\end{equation}
In dimensions \(1\) and \(2\), \(\lambda_{\mathrm{c}}\) is the point at which quasi-long-range order occurs (failure of exponential decay; in particular, \(\lambda_{\mathrm{c}} =\infty\) when \(d=1\)).
In dimension \(d\geq 3\), we set \(\lambda_{\mathrm{c}}=\beta_{\mathrm{c}}^{\mathrm{XY}}(d)\) the inverse temperature above which long-range order occurs (spontaneous symmetry breaking).
\subsection{Inverse correlation length}
To each model introduced in the previous subsection, we have associated a suitable 2-point function \(G_\lambda\) depending on a parameter \(\lambda\) (for instance, \(\lambda=(1+m^2)^{-1}\) for the GFF and \(\lambda=\beta\) for the Potts model).
Each of these 2-point functions gives rise to an \emph{inverse correlation length} associated to a direction \(s\in\mathbb{S}^{d-1}\) via
\begin{equation*}
\nu_s(\lambda) = -\lim_{n\to\infty} \frac{1}{n} \log G_{\lambda}(0,ns) .
\end{equation*}
This limit can be shown to exist in all the models considered above in the regime \(\lambda\in[0,\lambda_{\mathrm{c}})\).
When highlighting the model under consideration, we shall write, for example, \(\nu_s^{\mathrm{Ising}}(\lambda)\).
We also define \(\lambda_{\mathrm{\mathrm{exp}}}\) as
\begin{equation}
\label{eq:lambdaqlr_def}
\lambda_{\mathrm{\mathrm{exp}}} = \min\bigl(\lambda_{\mathrm{c}},\inf\setof{\lambda\geq 0}{\inf_s \nu_s(\lambda)=0}\bigr).
\end{equation}
(Let us note that the infimum over \(s\) is actually not required in this definition, as follows from Lemma~\ref{lem:rate_equiv_directions} below.) It marks the boundary of the regime in which \(\nu\) is non-trivial. It is often convenient to extend the function \(s\mapsto\nu_s(\lambda)\) to a function on \(\mathbb{R}^d\) by positive homogeneity. In all the models we consider, the resulting function is convex and defines a norm on \(\mathbb{R}^d\) whenever \(\lambda<\lambda_{\exp}\).
These and further basic properties of the inverse correlation length are discussed in Section~\ref{sec:BasicPropICL}.
\smallskip
The dependence of \(\nu_s(\lambda)\) on the parameter \(\lambda\) is the central topic of this paper.
\subsection{Mass gap, a comment on the Ornstein--Zernike theory}\label{sec:RemOZ}
For off-critical models, the Ornstein--Zernike (OZ) equation is an identity satisfied by \(G_{\lambda}\), first postulated by Ornstein and Zernike (initially, for high-temperature gases):
\begin{equation}
\label{eq:OZ}
G_{\lambda}(0,x) = D_{\lambda}(0,x) + \sum_{y} G_{\lambda}(y,x)D_{\lambda}(0,y) ,
\end{equation}
where \(D_{\lambda}\) is the direct correlation function (this equation can be seen as \emph{defining} \(D_{\lambda}\)), which is supposed to behave like the interaction: \(D_{\lambda}(x,y) \simeq J_{xy}\).
On the basis of~\eqref{eq:OZ}, Ornstein and Zernike were able to predict the sharp asymptotic behavior of \(G_{\lambda}\), provided that the following \emph{mass gap hypothesis} holds:
there exists \(c=c(\lambda)>0\) such that
\begin{equation*}
D_{\lambda}(0,x) \leq e^{-c|x|} G_{\lambda}(0,x).
\end{equation*}
This hypothesis is supposed to hold in a vast class of high-temperature systems with finite correlation length.
One of the goals of the present work is to show that this hypothesis is doomed to \emph{fail} in certain simple models of this type at very high temperature and to provide some necessary conditions for the presence of the mass gap.
To be more explicit, in all models considered, we have an inequality of the form \(G_{\lambda}(0,x)\geq C J_{0x} = C\psi(x)e^{-|x|}\).
In particular, this implies that \(\nu_s \leq |s|\) for all \(s\in\mathbb{S}^{d-1}\).
We will study conditions on \(\psi\) and \(\lambda\) under which the inequality is either strict (``mass gap'') or an equality (saturation).
We will also be concerned with the asymptotic behavior of \(G_{\lambda}\) in the latter case, while the ``mass gap'' pendant of the question will only be discussed for the simplest case of \(\mathrm{KRW}\), the treatment of more general systems being postponed to a forthcoming paper.
A useful consequence of the OZ-equation~\eqref{eq:OZ}, which is at the heart of the derivation of the OZ prefactor, is the following (formal) identity
\begin{equation*}
\label{eq:OZ_paths}
G_{\lambda}(0,x) = \sum_{\gamma\in\mathsf{W}(0,x)}\prod_{i=1}^{|\gamma|} D_{\lambda}(\gamma_{i-1},\gamma_i).
\end{equation*}
One can view Simon--Lieb-type inequalities
\begin{equation*}
G_{\lambda}(0,x) \leq D_{\lambda}(0,x) + \sum_{y} G_{\lambda}(y,x)D_{\lambda}(0,y),
\end{equation*}
as approximate versions of the OZ equation. In particular, such an inequality with \(D_{\lambda}(0,y)\simeq J_{0y}\) is directly related to our assumption~\ref{hyp:weak_SL} below.
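In a finite toy version (our illustration, with matrices in place of convolution operators), the renewal structure is transparent: if \(G = D + DG\) and the spectral radius of \(D\) is below \(1\), then \(G=\sum_{n\geq 1}D^n\), which is the matrix form of the path expansion above:

```python
import numpy as np

# Toy matrix version of the renewal structure: if G = D + D G and the
# spectral radius of D is < 1, then G = sum_{n >= 1} D^n.
rng = np.random.default_rng(0)
D = rng.uniform(0.0, 1.0, size=(6, 6))
D *= 0.1 / np.abs(np.linalg.eigvals(D)).max()  # rescale: spectral radius 0.1

G = np.linalg.solve(np.eye(6) - D, D)          # (I - D)^{-1} D solves G = D + D G

series, power = np.zeros_like(D), np.eye(6)
for _ in range(30):                            # truncated path expansion
    power = power @ D
    series += power

print(np.max(np.abs(G - series)))              # ~ 0 (truncation error 0.1^31)
```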
\subsection{A link with condensation phenomena}
Recall that condensation phenomena (in their probabilistic version) can be summarized as follows:
take a family of real random variables \(X_1, \ldots, X_N\) (with \(N\) possibly random) and constrain their sum to take a value much larger than \(E[\sum_{k=1}^N X_k]\).
Condensation occurs if most of the deviation is realized by a single one of the \(X_k\)s.
In the case of condensation, large deviation properties of the sum are ``equivalent'' to those of the maximum (see, for instance, \cite{Godreche-2019} and references therein for additional information).
In our case, one can see the failure of the mass gap condition as a condensation transition: suppose the OZ equation holds.
\(G(0,x)\) is then represented as a sum over paths of some path weights.
The exponential cost of a path going from \(0\) to \(x\) is always at least of the order \(|x|\).
Once restricted to paths with exponential contribution of this order, the geometry of typical paths will be governed by a competition between entropy (combinatorics) and the sub-exponential part \(\psi\) of the steps weight.
In the mass gap regime, typical paths consist of a number of microscopic steps growing linearly with \(\norm{x}\): in this situation, entropy wins over energy and the global exponential cost per unit length decreases from \(|s|\) to some \(\nu_s<|s|\).
One recovers then the behavior of \(G\) predicted by Ornstein and Zernike.
In contrast, in the saturated regime, typical paths contain one giant step (a condensation phenomenon) and the behavior of \(G\) is governed by paths of this kind, which leads to \(G(0,x)\simeq D(0,x)\simeq J_{0x}\).
\subsection{Assumptions}
To avoid repeating the same argument multiple times, we shall make some assumptions on \(G_\lambda\) and prove the desired results based on those assumptions only (basically, we will prove the relevant claims for either \(\mathrm{KRW}\) or \(\mathrm{SAW}\) and the assumptions allow a comparison with those models).
Proofs (or reference to proofs) that the required properties hold for the different models we consider are collected in Appendix~\ref{app:Properties}.
\medskip
\begin{enumerate}[label={\ensuremath{\mathrm{[A_\arabic*]}}}, start=0]
\item \label{hyp:G_Bounded_Pos_nu_pos}
For any \(\lambda\in [0, \lambda_{\mathrm{c}})\), \(G_{\lambda}(x,y)\geq 0\) for any \(x, y\in \mathbb{Z}^d\) and \(\sup_{x\in\mathbb{Z}^d} G_{\lambda}(0,x) < \infty\).
\item \label{hyp:sub_mult}
For any \(\lambda \in [0, \lambda_{\mathrm{c}})\), there exists \(a_{\lambda} > 0\) such that, for any \(x, y, z\in\mathbb{Z}^d\),
\begin{equation}
\label{eq:G_sub_mult}
G_{\lambda}(x,y) \geq a_{\lambda} G_{\lambda}(x,z) G_{\lambda}(z,y).
\end{equation}
This property holds at \(\lambda_{\mathrm{c}}\) if \(\sup_{x}G_{\lambda_{\mathrm{c}}}(0,x)<\infty\).
\item \label{hyp:left_cont}
For any \(x, y\in \mathbb{Z}^d\), \(\lambda\mapsto G_{\lambda}(x,y)\) is non-decreasing and left-continuous on \([0, \lambda_{\mathrm{c}})\). This continuity extends to \([0,\lambda_{\mathrm{c}}]\) if \(G_{\lambda_{\mathrm{c}}}(x,y)\) is well defined.
\item \label{hyp:weak_SL}
There exists \(\alpha\geq 0\) such that, for any \(0\leq \lambda < \lambda_{\mathrm{c}}\), there exists \(C\geq 0\) such that for any \(x, y\in\mathbb{Z}^d\),
\begin{equation}
\label{eq:weak_SL}
G_{\lambda}(x,y) \leq C G_{\alpha\lambda}^{\mathrm{KRW}}(x,y).
\end{equation}
\item \label{hyp:J_path_lower_bnd}
For any \(\lambda \in [0, \lambda_{\mathrm{c}})\), there exist \(c_\lambda>0\) and \(C_{\lambda}>0\) such that, for any collection \(\Gamma\subset\mathsf{SAW}(x,y)\), one has
\begin{equation}\label{eq:J_path_lower_bnd}
G_{\lambda}(x,y) \geq c_\lambda \sum_{\gamma\in\Gamma}(C_{\lambda})^{\abs{\gamma}}\prod_{k=1}^{\abs{\gamma}} J_{\gamma_{k-1} \gamma_k}.
\end{equation}
\end{enumerate}
\medskip
Our choice of \(\lambda_{\mathrm{c}}\) and of \(G_{\lambda}\) ensures that~\ref{hyp:G_Bounded_Pos_nu_pos} is always satisfied.
Assumption~\ref{hyp:sub_mult} holds as soon as the model enjoys some GKS or FKG type inequalities. Assumption~\ref{hyp:left_cont} is often a consequence of the monotonicity of the Gibbs state with respect to \(\lambda\).
The existence of a well-defined high-temperature regime (or rather the \emph{proof} of its existence) depends on this monotonicity.
Assumption~\ref{hyp:weak_SL} is directly related to the Ornstein--Zernike equation~\eqref{eq:OZ} in the form given in~\eqref{eq:OZ_paths}. It is easily deduced from a weak form of Simon--Lieb type inequality, see Section~\ref{sec:RemOZ}.
Assumption~\ref{hyp:J_path_lower_bnd} may seem to be a strong requirement but is usually a consequence of a path representation of correlation functions, some form of which is available for vast classes of systems.
\bigskip
Part of our results will also require the following additional regularity assumption on the prefactor \(\psi\):
\begin{enumerate}[label={\ensuremath{\mathrm{[H_\arabic*]}}}, start=0]
\item \label{hyp:PsiQuasiIsotropic}
There exist \(C_\psi^+, C_\psi^- > 0\) and \(\psi_0:\mathbb{N}_{>0}\to\mathbb{R}\) such that, for all \(y\in\mathbb{Z}^d\setminus\{0\}\),
\[
C^-_\psi \psi_0(\normI{y}) \leq \psi(y) \leq C^+_\psi \psi_0(\normI{y}).
\]
\end{enumerate}
\subsection{Surcharge function}
Our study has two ``parameters'': the prefactor \(\psi\), and the norm \(|\cdot|\).
It will be convenient to introduce a few quantities associated to the latter.
First, two convex sets are important: the unit ball \(\mathscr{U}\subset \mathbb{R}^d\) associated to the norm \(|\cdot|\) and the corresponding \emph{Wulff shape}
\[
\mathscr{W} = \setof{t\in\mathbb{R}^d}{\forall x\in\mathbb{R}^d,\, t\cdot x \leq |x|}.
\]
Given a direction \(s\in \mathbb{S}^{d-1}\), we say that the vector \(t\in\mathbb{R}^d\) is dual to \(s\) if
\(t\in\partial\mathscr{W}\) and \(t\cdot s = |s|\). A direction \(s\) possesses a unique dual vector \(t\) if and only if \(\mathscr{W}\) does not possess a facet with normal \(s\). Equivalently, there is a unique dual vector when the unit ball \(\mathscr{U}\) has a unique supporting hyperplane at \(s/|s|\). (See Fig.~\ref{fig:duality} for an illustration.)
\begin{figure}[ht]
\includegraphics{UnitBallL1.pdf}
\hspace*{1cm}
\includegraphics{WulffL1.pdf}
\hspace*{1cm}
\includegraphics{WulffL1bis.pdf}
\caption{Left: The unit ball for the norm \(|\cdot|=\normI{\cdot}\). Middle: the corresponding Wulff shape \(\mathscr{W}\) with two vectors \(t_1\) and \(t_2\) dual to \(s=(1,0)\). Right: the set \(\mathscr{W}\) with the unique vector \(t\) dual to \(s=\frac{1}{\sqrt{5}}(2,1)\).}
\label{fig:duality}
\end{figure}
The \emph{surcharge function} associated to a dual vector \(t\in\partial\mathscr{W}\) is then defined by
\begin{equation*}
\mathfrak{s}_t(x) = |x|- x\cdot t.
\end{equation*}
It immediately follows from the definition that \(\mathfrak{s}_t(x)\geq 0\) for all \(x\in\mathbb{Z}^d\) and \(\mathfrak{s}_t(s)=0\) if \(t\) is a vector dual to \(s\).
The surcharge function plays a major role in the Ornstein--Zernike theory as developed in~\cite{Campanino+Ioffe-2002,Campanino+Ioffe+Velenik-2003,Campanino+Ioffe+Velenik-2008}.
Informally, \(\mathfrak{s}_t(s')\) measures the additional cost (per unit length) incurred by a step in direction \(s'\) when the aim is to move in direction \(s\).
As far as we know, it first appeared, albeit in a somewhat different form, in~\cite{Alexander-1990}.
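As a concrete illustration (our own, matching the \(\ell^1\) example of Fig.~\ref{fig:duality}): for \(|\cdot|=\normI{\cdot}\) on \(\mathbb{R}^2\), the Wulff shape is the \(\ell^\infty\) unit ball, the vertex \(t=(1,1)\) is dual both to \(s=(1,0)\) and to \(s=\frac{1}{\sqrt{5}}(2,1)\), and the surcharge \(\mathfrak{s}_t\) is nonnegative and vanishes in dual directions:

```python
import numpy as np

# l1 setting of the figure: the Wulff shape is the l-infinity unit ball,
# and the vertex t = (1, 1) is dual to every direction in the closed
# first quadrant.
t = np.array([1.0, 1.0])

def surcharge(x):
    """s_t(x) = |x|_1 - x . t for the l1 norm."""
    return np.abs(x).sum() - x @ t

duals = [np.array([1.0, 0.0]), np.array([2.0, 1.0]) / np.sqrt(5.0)]
gaps = [surcharge(s) for s in duals]           # both vanish: t is dual to both
grid = [np.array([i, j], float) for i in range(-5, 6) for j in range(-5, 6)]
worst = min(surcharge(x) for x in grid)        # nonnegativity on lattice points
print(gaps, worst)
```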
\subsection{Quasi-isotropy}\label{sec:QuasiIsotropy}
Some of our results hinge on a further regularity property of the norm \(|\cdot|\).
Let \(s\in\mathbb{S}^{d-1}\) and \(t\) be a dual vector.
Write \(s_0 = s/|s|\in\partial\mathscr{U}\) and \(\hat{t} = t/\norm{t} \in \mathbb{S}^{d-1}\). Let \(T_{s_0}\mathscr{U}\) be the tangent hyperplane to \(\mathscr{U}\) at \(s_0\) with normal \(\hat{t}\) (seen, as usual, as a vector space). It is always possible to choose the dual vector \(t\) such that the following holds (we shall call such a \(t\) \emph{admissible}\footnote{When there are multiple tangent hyperplanes to \(\partial\mathscr{U}\) at \(s_0\), convexity and symmetry imply that all non-extremal elements of the normal cone are admissible.}).
There exist \(\epsilon > 0\) and a neighborhood \(\mathscr{N}\) of \(s_0\) such that \(\partial\mathscr{U}\cap\mathscr{N}\) can be parametrized as (see Fig.~\ref{fig:paramU})
\[
\partial\mathscr{U} \cap \mathscr{N} = \setof{s_0 + \tau v - f(\tau v)\hat{t}}{v\in T_{s_0}\mathscr{U}\cap\mathbb{S}^{d-1},\, |\tau|<\epsilon},
\]
for some convex nonnegative function \(f:T_{s_0}\mathscr{U} \to \mathbb{R}\) satisfying \(f(0)=0\).
\begin{figure}
\centering
\includegraphics{parametrization.pdf}
\caption{The local parametrization of \(\partial\mathscr{U}\) in a neighborhood of \(s_0\).}
\label{fig:paramU}
\end{figure}
\medskip
We will say that \(\partial\mathscr{U}\) is \emph{quasi-isotropic} in direction \(s\) if the qualitative behavior of \(f\) is the same in all directions \(v\): there exist \(c_+\geq c_- > 0\) and a non-decreasing non-negative convex function \(g\) such that, for all \(v\in T_{s_0}\mathscr{U}\cap\mathbb{S}^{d-1}\) and all \(\tau\in (0, \epsilon)\),
\begin{equation}\label{eq:QuasiIsotropy}
c_+ g(\tau)\geq f(\tau v) \geq c_- g(\tau) .
\end{equation}
Taking \(\mathscr{N}\) and \(\epsilon\) smaller if necessary, we can further assume that either \(g(\tau)>0\) for all \(\tau\in (0, \epsilon)\), or \(g(\tau)\equiv 0\) on \((0, \epsilon)\) (the latter occurs when \(s_0\) is in the ``interior'' of a facet of \(\partial\mathscr{U}\)).
\medskip
A sufficient, but by no means necessary, condition ensuring that quasi-isotropy is satisfied in all directions \(s\) is that the unit ball \(\mathscr{U}\) has a \(C^2\) boundary with everywhere positive curvature. Other examples include, for instance, all \(\ell^p\)-norms, \(1\leq p\leq\infty\).
\subsection{Main results: discussion}
We first informally discuss our results. Precise statements can be found in Theorem~\ref{thm:main} below.
It immediately follows from~\ref{hyp:J_path_lower_bnd} that
\begin{equation}\label{eq:TrivialUpperBoundOnICL}
\nu_s(\lambda) \leq |s| .
\end{equation}
We say that there is \emph{saturation} at \(\lambda\) in the direction \(s\) if \(\nu_s(\lambda) = |s|\).
The function \(\lambda\mapsto \nu_s(\lambda)\) is non-increasing (see~\eqref{eq:nu_monotonicity}) and \(\lim_{\lambda\searrow 0} \nu_s(\lambda) = |s|\) (see Lemma~\ref{lem:lambda_equal_zero}).
We can thus define
\[
\lambda_{\mathrm{sat}}(s) = \sup\setof{\lambda}{\nu_s(\lambda) = |s|}.
\]
In several cases, we will be able to prove that \(\lambda_{\mathrm{sat}}(s)<\lambda_{\mathrm{\mathrm{exp}}}\).
The main question we address in the present work is whether \(\lambda_{\mathrm{sat}}(s) > 0\).
Note that, when \(\lambda_{\mathrm{sat}} \in (0,\lambda_{\mathrm{\mathrm{exp}}})\), the function \(\lambda\mapsto\nu_s(\lambda)\) is not analytic in \(\lambda\).
Our main result can then be stated as follows: provided that suitable subsets of \ref{hyp:G_Bounded_Pos_nu_pos}--\ref{hyp:J_path_lower_bnd} and~\ref{hyp:PsiQuasiIsotropic} hold and \(\partial\mathscr{U}\) is quasi-isotropic in direction \(s\in\mathbb{S}^{d-1}\),
\[
\lambda_{\mathrm{sat}}(s) > 0 \quad\Leftrightarrow\quad \sum_{y\in\mathbb{Z}^d} \psi(y)e^{-\mathfrak{s}_t(y)} < \infty ,
\]
where \(t\) is an arbitrary vector dual to \(s\).
\smallskip
What happens when quasi-isotropy fails in direction \(s\) is still mostly open; a discussion can be found in Section~\ref{sec:FailureQuasiIsotropy}.
\begin{remark}
In a sense, exponentially decaying interactions are ``critical'' regarding the presence of a mass gap regime/condensation phenomenon.
Indeed, on the one hand, any interaction decaying slower than exponential will lead to absence of exponential decay (e.g., \(G_\lambda(0,x)\geq C_{\lambda} J_{0x}\) by~\ref{hyp:J_path_lower_bnd} in all the models considered here).
This is a ``trivial'' failure of mass gap, as the model is not massive.
Moreover, the behavior \(G_\lambda(0,x)\asymp J_{0x}\) at any values of \(\lambda\) was proven in some cases: see \cite{Newman+Spohn-1998} for results on the Ising model and \cite{Aoun-2020} for the Potts model.
On the other hand, interactions decaying faster (that is, such that \(\sup_{x\in\mathbb{Z}^d} J_{0x}e^{C\norm{x}} < \infty\) for all \(C>0\)) always lead to the presence of a mass gap (finite-range type behavior).
Changing the prefactor to exponential decay is thus akin to exploring the ``near-critical'' regime.
\end{remark}
\subsection{Main Theorems}\label{sec:MainTheorems}
We gather here the results that are proved in the remainder of the paper.
Given a norm \(|\cdot|\) and \(s\in\mathbb{S}^{d-1}\), fix a vector \(t\) dual to \(s\) and define
\begin{equation}
\tilde{\Xi}(|\cdot|, \psi, t) = \sum_{x\in\mathbb{Z}^d\setminus \{0\}} \psi(x) e^{-\mathfrak{s}_t(x)}.
\end{equation}
Our first result provides criteria to determine whether \(\lambda_{\mathrm{sat}}>0\).
\begin{theorem}
\label{thm:main}
Suppose~\ref{hyp:G_Bounded_Pos_nu_pos},~\ref{hyp:sub_mult},~\ref{hyp:left_cont},~\ref{hyp:weak_SL},~\ref{hyp:J_path_lower_bnd} are satisfied. Let \(s\in\mathbb{S}^{d-1}\). Then,
\begin{itemize}
\item If there exists \(t\) dual to \(s\) with \(\tilde{\Xi}(|\cdot|, \psi, t)<\infty\), there exists \(0<\lambda_0\leq\lambda_{\mathrm{\mathrm{exp}}}\) such that \(\nu_{s}(\lambda)=|s|\) for any \(\lambda<\lambda_0\).
\item Assume~\ref{hyp:PsiQuasiIsotropic}. If there exists an admissible \(t\) dual to \(s\) such that \(\partial\mathscr{U}\) is quasi-isotropic in direction \(s\) and \(\tilde{\Xi}(|\cdot|, \psi, t)=\infty\), then \(\nu_{s}(\lambda)<|s|\) for any \(\lambda\in(0, \lambda_{\mathrm{\mathrm{exp}}})\).
\end{itemize}
In particular, when \(\tilde{\Xi}(|\cdot|, \psi, t)<\infty\) for some \(t\) dual to \(s\), there exists \(\lambda_{\mathrm{sat}}\in (0,\lambda_{\mathrm{\mathrm{exp}}}]\) such that \(\nu_{s}(\lambda) =|s|\) when \(\lambda<\lambda_{\mathrm{sat}}\) and \(\nu_{s}(\lambda) <|s|\) when \(\lambda>\lambda_{\mathrm{sat}}\).
\end{theorem}
\begin{corollary}
\label{cor:main}
The claim in Theorem~\ref{thm:main} applies to all the models considered in this paper (that is, \(\mathrm{KRW},\mathrm{SAW}, \mathrm{Ising}, \mathrm{IPF}, \mathrm{FK}, \mathrm{Potts}, \mathrm{GFF}, \mathrm{XY}\)).
\end{corollary}
\begin{remark}\label{rem:DirectionDepSaturation}
Whether \(\lambda_{\mathrm{sat}}(s)>0\) depends in general on the direction \(s\).
To see this, consider the case \(|\cdot|= \normIV{\cdot}\) on \(\mathbb{Z}^2\) and \(\psi(x) = \norm{x}^{-\alpha}\) with \(7/4 \geq \alpha > 3/2\).
In order to determine whether \(\lambda_{\mathrm{sat}}(s)>0\), it will be convenient to use the more explicit criterion derived in Lemma~\ref{lem:ExplicitCond}.
The latter relies on the local parametrization of \(\partial\mathscr{U}\), as described in Section~\ref{sec:QuasiIsotropy}.
Below, we use the notation introduced in the latter section.
In particular, \(\lambda_{\mathrm{sat}}(s)>0\) if and only if
\[
\sum_{\ell\geq 1} \psi_0(\ell) (\ell g^{-1}(1/\ell))^{d-1} < \infty ,
\]
where we can take \(\psi_0(\ell) = \ell^{-\alpha}\) (remember condition~\ref{hyp:PsiQuasiIsotropic}).
On the one hand, let us first consider the direction \(s=(0,1)\).
The corresponding dual vector is \(t=s\).
In this case, one finds that \(f(\tau) = \frac14\tau^4 + \mathsf{O}(\tau^8)\).
We can thus take \(g(\tau) = \tau^4\).
In particular,
\[
\sum_{\ell\geq 1} \psi_0(\ell) (\ell g^{-1}(1/\ell))^{d-1}
= \sum_{\ell\geq 1} \ell^{3/4-\alpha}
= \infty ,
\]
so that \(\lambda_{\mathrm{sat}}(s)=0\).
On the other hand, let us consider the direction \(s'=2^{-1/2}(1,1)\).
The dual vector is \(t'=2^{-3/4}(1,1)\).
In this case, one finds that \(f(\tau) = 3\cdot2^{-5/4}\cdot\tau^2 + \mathsf{O}(\tau^4)\).
We can thus take \(g(\tau) = \tau^2\).
In particular,
\[
\sum_{\ell\geq 1} \psi_0(\ell) (\ell g^{-1}(1/\ell))^{d-1}
= \sum_{\ell\geq 1} \ell^{1/2-\alpha}
< \infty ,
\]
so that \(\lambda_{\mathrm{sat}}(s)>0\).
\end{remark}
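The two local expansions of \(f\) used in the remark can be double-checked numerically by recovering \(f(\tau)\) from the constraint \(\normIV{p}=1\) on the boundary point \(p\) (the test point \(\tau=10^{-2}\) and the bisection scheme are our choices):

```python
import numpy as np

# Unit ball of the l4 norm on R^2; recover f(tau) from the condition
# that the parametrized boundary point has l4 norm 1.
def f_axis(tau):
    """Direction s = (0, 1): boundary point (tau, 1 - f)."""
    return 1.0 - (1.0 - tau ** 4) ** 0.25

def f_diag(tau):
    """Diagonal direction: p = s0 + tau*v - f*t_hat with s0 = 2^{-1/4}(1,1),
    v = (1,-1)/sqrt(2), t_hat = (1,1)/sqrt(2); solve |p|_4 = 1 by bisection
    over w = f/sqrt(2)."""
    a, u = 2.0 ** -0.25, tau / np.sqrt(2.0)
    lo, hi = 0.0, 1.0
    for _ in range(80):
        w = 0.5 * (lo + hi)
        if (a + u - w) ** 4 + (a - u - w) ** 4 > 1.0:
            lo = w                  # point still outside the ball: go inward
        else:
            hi = w
    return np.sqrt(2.0) * 0.5 * (lo + hi)

tau = 1e-2
print(f_axis(tau) / tau ** 4)   # ~ 1/4
print(f_diag(tau) / tau ** 2)   # ~ 3 * 2^{-5/4} ~ 1.261
```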
The next theorem lists some cases in which we were able to establish the inequality \(\lambda_{\mathrm{sat}}< \lambda_{\mathrm{\mathrm{exp}}}\).
\begin{theorem}\label{thm:NotSatCloseToLambdaC}
The inequality \(\lambda_{\mathrm{sat}}^*<\lambda_{\exp}^*\) holds whenever one of the following is true:
\begin{itemize}
\item \(d=1\) and \(*\in\{\mathrm{Ising},\ \mathrm{FK},\ \mathrm{Potts},\ \mathrm{GFF},\ \mathrm{XY},\ \mathrm{KRW}\} \);
\item \(d\geq 2\), \(*\in \{\mathrm{Ising},\mathrm{Bern}\}\) and \(\lambda_{\mathrm{c}}^{*}=\lambda_{\mathrm{\mathrm{exp}}}^{*}\);
\item \(d\geq 3\), \(*\in\{\mathrm{GFF},\mathrm{KRW}\}\) and \(\lambda_{\mathrm{c}}^{*}=\lambda_{\mathrm{\mathrm{exp}}}^{*}\).
\end{itemize}
\end{theorem}
Finally, the next theorem establishes a form of condensation in part of the saturation regime.
\begin{theorem}
\label{thm:prefactor}
Suppose \(*\in\{\mathrm{SAW},\ \mathrm{Ising},\ \mathrm{IPF},\ \mathrm{FK},\ \mathrm{Potts},\ \mathrm{GFF},\ \mathrm{XY}\} \). Suppose moreover that \(\psi\) is one of the following:
\begin{itemize}
\item \(\psi(x) \propto \vert x\vert^{-\alpha}\), \(\alpha> 0\),
\item \(\psi(x) \propto e^{-a\vert x\vert^{\alpha}}\), \(a> 0, 0<\alpha <1\).
\end{itemize}
Then, if \(s\in\mathbb{S}^{d-1}\) is such that \(\tilde{\Xi}(|\cdot|, \psi, t) <\infty\) for some \(t\) dual to \(s\), there exists \(\lambda_1>0\) such that, for any \(\lambda <\lambda_{1}\), there exist \(c_{\pm}=c_{\pm}(\lambda)>0\) such that
\begin{equation}
c_-(\lambda) J_{0,ns}\leq G_{\lambda}^*(0,ns) \leq c_+(\lambda) J_{0,ns}.
\end{equation}
\end{theorem}
\subsection{``Proof'' of Theorem~\ref{thm:main}: organization of the paper}
We collect here all pieces leading to the proof of Theorem~\ref{thm:main} and its corollary.
First, we have that any model \(*\in\{\mathrm{SAW},\ \mathrm{Ising},\ \mathrm{IPF},\ \mathrm{FK},\ \mathrm{Potts},\ \mathrm{GFF},\ \mathrm{XY}\}\) satisfies~\ref{hyp:G_Bounded_Pos_nu_pos},~\ref{hyp:sub_mult},~\ref{hyp:left_cont},~\ref{hyp:weak_SL}, and~\ref{hyp:J_path_lower_bnd} (see Appendix~\ref{app:Properties}).
We omit the explicit model dependence from the notation.
We therefore obtain from Claims~\ref{claim:nu_exists},~\ref{claim:nu_monotone}, and~\ref{claim:nu_trivialUB} and Lemma~\ref{lem:lambda_equal_zero} that, for any \(s\in\mathbb{S}^{d-1}\),
\begin{itemize}
\item \(\nu_s(\lambda)\) is well defined for \(\lambda\in[0, \lambda_{\mathrm{c}})\),
\item \(\lambda\mapsto \nu_s(\lambda)\) is non-increasing,
\item \(\lim_{\lambda\searrow 0} \nu_s(\lambda) = |s|\).
\end{itemize}
In particular, setting
\begin{equation}
\lambda_{\mathrm{sat}} = \lambda_{\mathrm{sat}}(s) = \sup\setof{\lambda\geq 0}{\nu_s(\lambda) =|s|},
\end{equation}
it follows from monotonicity that
\begin{itemize}
\item for any \(\lambda \in (0, \lambda_{\mathrm{sat}})\), \(\nu_s(\lambda) = |s|\),
\item for any \(\lambda \in (\lambda_{\mathrm{sat}}, \lambda_{\mathrm{\mathrm{exp}}})\), \(0<\nu_s(\lambda) < |s|\).
\end{itemize}
Via a comparison with the KRW given by~\ref{hyp:weak_SL}, Lemmas~\ref{lem:SaturationKRW} and~\ref{lem:SaturationAtSmallLambda} establish that
\begin{equation}
\tilde{\Xi}(|\cdot|, \psi, t)<\infty \implies \lambda_{\mathrm{sat}}(s)>0,
\end{equation}
while Lemma~\ref{lem:mass_gap_non_summable_surcharge} implies that, when $\psi$ satisfies~\ref{hyp:PsiQuasiIsotropic} and \(\partial\mathscr{U}\) is quasi-isotropic in direction \(s\) (with an admissible \(t\)),
\begin{equation}
\tilde{\Xi}(|\cdot|, \psi, t)=\infty \implies \lambda_{\mathrm{sat}}(s)=0,
\end{equation}
via a comparison with a suitable SAW model, allowed by~\ref{hyp:J_path_lower_bnd}.
These results are complemented in Section~\ref{sec:lambda_sat_less_lambda_c} by the inequality \(\lambda_{\mathrm{sat}} < \lambda_{\mathrm{\mathrm{exp}}}\) for some particular cases (as stated in Theorem~\ref{thm:NotSatCloseToLambdaC}), using ``continuity'' properties of the models \emph{at} \(\lambda_{\mathrm{c}}\) and the conjectured equality \(\lambda_{\mathrm{c}}=\lambda_{\mathrm{\mathrm{exp}}}\).
Whether \(\lambda_{\mathrm{sat}} < \lambda_{\mathrm{\mathrm{exp}}}\) always holds or not is an open problem (see Section~\ref{sec:open_problems}).
A proof that a condensation phenomenon (Theorem~\ref{thm:prefactor}) indeed occurs is presented in Section~\ref{sec:pre_factor}. It is carried out for a more restricted family of \(\psi\) than our main saturation result covers, and it only establishes condensation in part of the saturation regime (see Section~\ref{sec:open_problems} for more details).
\subsection{Open problems and conjectures}\label{sec:open_problems}
The issues raised in the present work leave a number of interesting avenues open. We list some of them here, but defer the discussion of the issues related to quasi-isotropy to the next section.
\subsubsection{Is \(\lambda_{\mathrm{sat}}\) always smaller than \(\lambda_{\mathrm{\mathrm{exp}}}\)?}
While this work provides precise criteria to decide whether \(\lambda_{\mathrm{sat}}(s)>0\), we were only able to obtain an upper bound on \(\lambda_{\mathrm{sat}}\) in a limited number of cases. It would in particular be very interesting to determine whether it is possible that \(\lambda_{\mathrm{sat}}\) coincides with \(\lambda_{\mathrm{\mathrm{exp}}}\), that is, that the correlation length \emph{remains constant in the whole high-temperature regime}. Let us summarize this in the following
\begin{open}
Is it always the case that \(\lambda_{\mathrm{sat}}(s) < \lambda_{\mathrm{\mathrm{exp}}}\)?
\end{open}
One model from which insight might be gained is the \(q\)-state Potts model with large \(q\). In particular, one might try to analyze the behavior of \(\nu_s(\lambda)\) for very large values of \(q\), using the perturbative tools available in this regime.
\subsubsection{What can be said about the regularity of \(\lambda\mapsto\nu_s(\lambda)\)?}
In several cases, we have established that, under suitable conditions, \(\lambda_{\mathrm{\mathrm{exp}}} > \lambda_{\mathrm{sat}}(s) > 0\). In particular, this implies that \(\nu_s\) is not analytic in \(\lambda\) \emph{at} \(\lambda_{\mathrm{sat}}(s)\). We believe however that this is the only point at which \(\nu_s\) fails to be analytic in \(\lambda\).
\begin{conjecture}
The inverse correlation length \(\nu_s\) is always an analytic function of \(\lambda\) on \((\lambda_{\mathrm{sat}}(s), \lambda_{\mathrm{\mathrm{exp}}})\).
\end{conjecture}
(Of course, the inverse correlation length is trivially analytic in \(\lambda\) on \([0,\lambda_{\mathrm{sat}}(s))\) when \(\lambda_{\mathrm{sat}}(s)>0\).)
\begin{conjecture}
Assume that \(\lambda_{\mathrm{sat}}(s)>0\). Then, the inverse correlation length \(\nu_s\) is a continuous function of \(\lambda\) at \(\lambda_{\mathrm{sat}}(s)\).
\end{conjecture}
Once this is settled, one should ask more refined questions, including a description of the qualitative behavior of \(\nu_s(\lambda)\) close to \(\lambda_{\mathrm{sat}}(s)\), similarly to what was done in~\cite{Ott+Velenik-2018} in a case where a similar saturation phenomenon was analyzed in the context of a Potts model/FK percolation with a defect line.
\subsubsection{Sharp asymptotics for \(G_\lambda(0,x)\)}
As we explain in Section~\ref{sec:pre_factor}, the transition from the saturation regime \([0, \lambda_{\mathrm{sat}}(s))\) to the regime \((\lambda_{\mathrm{sat}}(s), \lambda_{\mathrm{\mathrm{exp}}})\) manifests itself in a change of behavior of the prefactor to the exponential decay of the 2-point function \(G_\lambda(0,ns)\). Namely, in the former regime, the prefactor is expected to always behave like \(\psi(ns)\), while in the latter regime, it should follow the usual OZ decay, that is, be of order \(n^{-(d-1)/2}\). This change is due to the failure of the mass gap condition of the Ornstein--Zernike theory when \(\lambda<\lambda_{\mathrm{sat}}(s)\). It would be interesting to obtain more detailed information.
\begin{conjecture}
For all \(\lambda\in(\lambda_{\mathrm{sat}}(s), \lambda_{\mathrm{\mathrm{exp}}})\), \(G_\lambda(0,ns)\) exhibits OZ behavior: there exists \(C=C(s,\lambda) > 0\) such that
\[
G_\lambda(0,ns) = C n^{-(d-1)/2}\, e^{-\nu_s(\lambda) n} (1+{\mathsf o}(1)).
\]
\end{conjecture}
This type of asymptotic behavior has only been established for finite-range interactions: see~\cite{Campanino+Ioffe+Velenik-2003} for the Ising model at \(\beta<\beta_{\mathrm{c}}\), \cite{Campanino+Ioffe+Velenik-2008} for the Potts model (and, more generally FK percolation) at \(\beta<\beta_{\mathrm{c}}\) and~\cite{Ott-2019} for the Ising model in a nonzero magnetic field (see also~\cite{Ott+Velenik-2019} for a review).
We shall come back to this problem in a future work. In the present paper, we only provide a proof in the simplest setting, the killed random walk (see Section~\ref{sec:pre_factorOZ}).
\smallskip
One should also be able to obtain sharp asymptotics in the saturation regime, refining the results in Section~\ref{sec:pre_factor}. Let \(t\) be a dual vector to \(s\). We conjecture the following to hold true.
\begin{conjecture}
For all \(\lambda\in [0, \lambda_{\mathrm{sat}}(s))\), there exists $C(\lambda,s)>0$ such that \(G_\lambda(0,ns)\) exhibits the following behavior:
\[
G_\lambda(0,ns) = C(\lambda,s) \, \psi(ns)\, e^{-|s| n} (1+{\mathsf o}(1)).
\]
\end{conjecture}
In this statement, $C(\lambda,s)$ also depends on the model considered. Similar asymptotics have been obtained for models with interactions decaying more slowly than exponentially: see~\cite{Newman+Spohn-1998} for the Ising model and~\cite{Aoun-2020} for the \(q\)-state Potts model. In those cases, the constant $C(\lambda,s)$ is replaced by the susceptibility divided by $q$.
\medskip
Finally, the following problem remains completely open.
\begin{open}
Determine the asymptotic behavior of \(G_\lambda(0,ns)\) at \(\lambda_{\mathrm{sat}}(s)\).
\end{open}
\subsubsection{Sharpness}
In its current formulation, Theorem~\ref{thm:NotSatCloseToLambdaC} partially relies on the equality between \(\lambda_{\mathrm{c}}\) and \(\lambda_{\mathrm{\mathrm{exp}}}\). As already mentioned, we expect this to be true for all models considered in the present work.
\begin{conjecture}
For all models considered in this work, \(\lambda_{\mathrm{c}}=\lambda_{\mathrm{\mathrm{exp}}}\).
\end{conjecture}
We plan to come back to this issue in a future work.
\subsection{Behavior when quasi-isotropy fails}\label{sec:FailureQuasiIsotropy}
In this section, we briefly discuss what we know about the case of a direction \(s\in\mathbb{S}^{d-1}\) in which the quasi-isotropy condition fails.
As this remains mostly an open problem, our discussion will essentially be limited to one particular example.
What remains valid more generally is discussed afterwards.
\medskip
We restrict our attention to \(d=2\).
Let us consider the norm \(|\cdot|\) whose unit ball consists of four quarter-circles of (Euclidean) radius \(\frac12\) and centers at \((\pm\frac12,\pm\frac12)\), joined by 4 straight line segments; see Fig.~\ref{fig:surcharge}, left.
(The associated Wulff shape is depicted in the same figure, middle.)
We are interested in the direction \(s=\frac1{\sqrt{5}}(2,1)\), in which \(\partial\mathscr{U}\) is \emph{not} quasi-isotropic.
The corresponding dual vector is \(t=(1,0)\).
The associated surcharge function \(\mathfrak{s}_t\) is plotted on Fig.~\ref{fig:surcharge}, right.
Observe how the presence of a facet with normal \(t\) in \(\partial\mathscr{U}\) makes the surcharge function degenerate: the surcharge associated to any increment in the cone \(\setof{(x,y)\in\mathbb{Z}^2}{0\leq \abs{y}\leq x/2}\) vanishes.
The direction \(s\) falls right at the boundary of this cone of zero-surcharge increments.
\begin{figure}
\centering
\includegraphics[width=4cm]{UnitBallPatch.pdf}
\hspace{1cm}
\includegraphics[width=4cm]{Wulff-BdPt.pdf}
\hspace{1cm}
\includegraphics[width=4cm]{surcharge-BdPt.pdf}
\caption{Left: the unit ball associated to the norm \(|\cdot|\) in the example of Section~\ref{sec:FailureQuasiIsotropy}. Middle: the corresponding Wulff shape. Right: polar plot of the surcharge function associated to the direction \(s=\frac{1}{\sqrt{5}}(2,1)\).}
\label{fig:surcharge}
\end{figure}
A priori, our criteria do not allow us to decide whether \(\lambda_{\mathrm{sat}}(s)>0\), since \(\partial\mathscr{U}\) (and thus the surcharge function) displays qualitatively different behaviors on each side of \(s\).
However, it turns out that, in this particular example, one can determine what is happening, using a few observations.
First, the argument in Lemma~\ref{lem:mass_gap_non_summable_surcharge} still applies provided that the sums corresponding to both halves of the cone located on each side of \(s\) diverge.
The corresponding conditions ensuring that \(\lambda_{\mathrm{sat}}(s)=0\), as given in~\eqref{eq:ExplicitCondition}, reduce to
\[
\sum_{\ell\geq 1} \ell \psi_0(\ell) = \infty
\]
for the cone on the side of the facet, and
\[
\sum_{\ell\geq 1} \ell^{1/2} \psi_0(\ell) = \infty
\]
on the side where the curvature is positive.
Obviously, both sums diverge as soon as the second one does, while both are finite whenever the first one is.
We conclude from this that \(\lambda_{\mathrm{sat}}(s) > 0\) when
\[
\sum_{\ell\geq 1} \ell \psi_0(\ell) < \infty,
\]
while \(\lambda_{\mathrm{sat}}(s) = 0\) when
\[
\sum_{\ell\geq 1} \ell^{1/2} \psi_0(\ell) = \infty.
\]
Of course, this leaves undetermined the behavior when both
\begin{equation}\label{eq:ImpossibleGap}
\sum_{\ell\geq 1} \ell \psi_0(\ell) = \infty
\quad\text{ and }\quad
\sum_{\ell\geq 1} \ell^{1/2} \psi_0(\ell) < \infty.
\end{equation}
However, the following simple argument allows one to determine what actually occurs in such a case.
First, observe that, since \(\nu_{s'}\leq |s'|\) for all \(s'\in\mathbb{R}^d\), the unit ball \(\mathscr{U}_\nu\) associated to the norm \(x\mapsto\nu_x(\lambda)\) always satisfies \(\mathscr{U}_\nu \supset \mathscr{U}\).
We now claim that this implies \(\lambda_{\mathrm{sat}}(s) > 0\) if and only if \(\sum_{\ell\geq 1} \ell \psi_0(\ell) < \infty\).
Indeed, suppose \(\lambda_{\mathrm{sat}}(s) > 0\).
Then, for small enough values of \(\lambda\), the boundaries of \(\mathscr{U}_\nu\) and \(\mathscr{U}\) coincide along the 4 circular arcs (including the points between the arcs and the facets).
But convexity of \(\mathscr{U}_\nu\) then implies that they must coincide everywhere, so that \(\lambda_{\mathrm{sat}}(s')>0\) in every direction \(s'\) pointing inside the facets.
But the latter can only occur if \(\sum_{\ell\geq 1} \ell \psi_0(\ell) < \infty\).
In particular, the case~\eqref{eq:ImpossibleGap} implies \(\lambda_{\mathrm{sat}}(s) = 0\).
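To make the dichotomy concrete (this is a numerical sanity check, not part of the argument), take \(\psi_0(\ell)=\ell^{-\alpha}\): the first sum converges if and only if \(\alpha>2\), while the second converges if and only if \(\alpha>3/2\), so the regime~\eqref{eq:ImpossibleGap} occurs precisely for \(\alpha\in(3/2,2]\). The following sketch probes the two series at \(\alpha=1.8\) via partial sums:

```python
import math

alpha = 1.8          # inside the window (3/2, 2] realizing the dichotomy
def psi0(l):
    return l ** -alpha

def partial(weight_exp, N):
    # partial sum of sum_l l^{weight_exp} * psi0(l), l = 1..N
    return sum(l ** weight_exp * psi0(l) for l in range(1, N + 1))

# first series: sum_l l * psi0(l) = sum_l l^{-0.8}, divergent
grow = partial(1, 10**5) - partial(1, 10**4)
# second series: sum_l l^{1/2} * psi0(l) = sum_l l^{-1.3}, convergent
tail = partial(0.5, 10**5) - partial(0.5, 10**4)
print(grow, tail)
```

Between \(N=10^4\) and \(N=10^5\) the first partial sum is still growing substantially (consistent with divergence), while the second has an essentially negligible tail, in line with the integral test.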
\medskip
As long as we consider a two-dimensional setting, the first part of the above argument applies generally, that is, whenever quasi-isotropy fails.
The second part, however, makes crucial use of the fact that \(s\) is in the boundary of a facet of \(\partial\mathscr{U}\).
We do not know how to conclude the analysis when this is not the case.
\medskip
In higher dimensions, the situation is even less clear.
\begin{open}
Provide a necessary and sufficient condition ensuring that \(\lambda_{\mathrm{sat}}(s)>0\) in a direction \(s\in\mathbb{S}^{d-1}\) in which \(\partial\mathscr{U}\) fails to be quasi-isotropic.
\end{open}
\section{Some basic properties}\label{sec:BasicProperties}
\subsection{Basic properties of the inverse correlation length} \label{sec:BasicPropICL}
A first observation is
\begin{claim}
\label{claim:nu_exists}
Suppose~\ref{hyp:sub_mult} holds.
Then, \(\nu_s(\lambda)\) exists for any \(\lambda\in[0, \lambda_{\mathrm{c}})\) and \(s\in\mathbb{S}^{d-1}\).
Moreover,
\begin{equation}\label{eq:nu_infimum}
G_{\lambda}(0,ns) \leq a_{\lambda}^{-1} e^{-\nu_s(\lambda) n}.
\end{equation}
\end{claim}
The proof is omitted, as it is a simple variation of the classical subadditive argument.
\begin{claim}
\label{claim:nu_norm}
Suppose~\ref{hyp:sub_mult} holds. For \(\lambda<\lambda_{\exp}\), the function on \(\mathbb{R}^d\) defined by \(\nu_x(\lambda) = \norm{x}\cdot \nu_{x/\norm{x}}(\lambda)\) when \(x\neq 0\) and \(\nu_0(\lambda)=0\) is convex and defines a norm on \(\mathbb{R}^d\).
\end{claim}
Again, the proof is omitted, as it is a standard consequence of Assumption~\ref{hyp:sub_mult}.
Our third and fourth (trivial) observations are
\begin{claim}
\label{claim:nu_monotone}
Suppose~\ref{hyp:left_cont} holds.
Then, for any \(s\in \mathbb{S}^{d-1}\), any \(x, y\in\mathbb{Z}^d\) and any \(0 \leq \lambda \leq \lambda' < \lambda_{\mathrm{c}}\),
\begin{equation}\label{eq:nu_monotonicity}
G_{\lambda}(x,y) \leq G_{\lambda'}(x,y)
\quad\text{ and }\quad
\nu_s(\lambda) \geq \nu_s(\lambda').
\end{equation}
\end{claim}
\begin{claim}
\label{claim:nu_trivialUB}
Let \(s\in\mathbb{S}^{d-1}\). Suppose \(\nu_s(\lambda)\) is well defined and that~\ref{hyp:J_path_lower_bnd} holds.
Then, \(\nu_s\leq |s|\).
\end{claim}
Finally, we look at the behavior of \(\nu\) when \(\lambda\searrow 0\).
\begin{lemma}
\label{lem:lambda_equal_zero}
Suppose~\ref{hyp:weak_SL} and~\ref{hyp:J_path_lower_bnd} hold. Then, for any \(s\in\mathbb{S}^{d-1}\), \(\lim_{\lambda\searrow 0} \nu_{s}(\lambda) = |s|\).
\end{lemma}
\begin{proof}
Fix \(s\in\mathbb{S}^{d-1}\).
By~\ref{hyp:J_path_lower_bnd}, \(\nu_s\leq |s|\).
Let \(\alpha\) be given by~\ref{hyp:weak_SL}.
Fix any \(\epsilon>0\).
Then, let \(\lambda<\bigl( \alpha\sum_{y\neq 0} \psi(y)e^{-\epsilon|y|} \bigr)^{-1}\).
We claim that \(G_{\lambda}(0,ns)\leq c(\lambda,\epsilon) e^{-(1-\epsilon)n|s|}\), which yields the desired conclusion.
Indeed,
\begin{align*}
G_{\lambda}(0,ns)
&\leq
CG_{\alpha\lambda}^{\mathrm{KRW}}(0,ns) \\
&=
C\sum_{k\geq 1} \sum_{\substack{y_1,\dots,y_k\neq 0\\ \sum y_i=ns}} \prod_{i=1}^{k} \alpha\lambda \psi(y_i)e^{-|y_i|}\\
&\leq
Ce^{-(1-\epsilon)n|s|}\sum_{k\geq 1} \sum_{\substack{y_1,\dots,y_k\neq 0\\ \sum y_i=ns}} \prod_{i=1}^{k} \alpha\lambda \psi(y_i)e^{-\epsilon|y_i|}\\
&\leq
Ce^{-(1-\epsilon)n|s|}\sum_{k\geq 1} \Big(\lambda \sum_{y\neq 0} \alpha \psi(y)e^{-\epsilon|y|}\Big)^{\!k}.
\qedhere
\end{align*}
\end{proof}
\subsection{Weak equivalence of directions}
Let us introduce
\begin{equation}
\nu_+(\lambda) = \max_{s\in\mathbb{S}^{d-1}} \nu_{s}(\lambda)
\quad\text{ and }\quad
\nu_-(\lambda) = \min_{s\in\mathbb{S}^{d-1}} \nu_{s}(\lambda) .
\end{equation}
The existence of these quantities follows from the fact that \(s\mapsto \nu_s(\lambda)\) is continuous (indeed, it is the restriction of a norm on \(\mathbb{R}^d\) to the set \(\mathbb{S}^{d-1}\)).
\begin{lemma}
\label{lem:rate_equiv_directions}
Suppose~\ref{hyp:sub_mult} holds.
Then, \(d\cdot\nu_-(\lambda)\geq \nu_+(\lambda) \geq \nu_-(\lambda)\).
\end{lemma}
\begin{proof}
The second inequality holds by definition.
To obtain the first one, set \(s^*\) to be a direction realizing the minimum.
By lattice symmetries, all its \(\pi/2\) rotations around a coordinate axis also achieve the minimum.
For a fixed direction \(s\), denote by \(s^*_1, \dots, s^*_d\) a basis of \(\mathbb{R}^d\) consisting of rotated copies of \(s^*\) such that \(s = \sum_{i=1}^d \alpha_i s^*_i\) with \(1 \geq \alpha_i \geq 0\).
Then, for any \(n\), \(n s = \sum_{i=1}^d n\alpha_i s_i^*\).
So (integer parts are implicitly taken), by~\ref{hyp:sub_mult},
\begin{equation}
-\log G_{\lambda}(0,ns)
\leq
- \sum_{i=1}^d \log G_{\lambda}(0, n\alpha_i s_i^*) - d\log(a_{\lambda})
= \sum_{i=1}^d n\alpha_i\nu_-(1+{\mathsf o}_n(1)).
\end{equation}
In particular, \(\lim_{n\to\infty} -\log G_{\lambda}(0, ns)/n\leq d\cdot\nu_-\).
\end{proof}
\subsection{Left-continuity of \(\lambda\mapsto\nu_{s}(\lambda)\)}
\begin{lemma}
\label{lem:nu_left_cont}
Suppose~\ref{hyp:sub_mult} and~\ref{hyp:left_cont} hold. Let \(s\in \mathbb{S}^{d-1}\). Let \(\lambda'\in (0,\lambda_{\mathrm{c}}]\) be such that
\begin{itemize}
\item \(G_{\lambda'}\) is well defined.
\item There exists \(\delta>0\) such that \(\inf_{\lambda\in(\lambda'-\delta,\lambda']}a_{\lambda}>0\) (where \(a_{\lambda}\) is given by~\ref{hyp:sub_mult}).
\end{itemize}
Then, the function \(\lambda\mapsto \nu_s(\lambda)\) is left-continuous at \(\lambda'\).
\end{lemma}
\begin{proof}
Fix \(\lambda'\in (0,\lambda_{\mathrm{c}}]\) such that \(G_{\lambda'}\) is well defined and \(s\in\mathbb{S}^{d-1}\). Let \(\delta\) be given by our hypotheses and let \(I=(\lambda'-\delta,\lambda']\), and \(C=-\log(\inf_{\lambda\in I}a_{\lambda})\). Set
\begin{equation*}
f_n(\lambda) = -\log G_{\lambda}(0,ns).
\end{equation*}
Then, for any \(\lambda\in I\) and \(n,m\in\mathbb{Z}_{>0}\), \(f_{n+m}(\lambda)\leq f_{n}(\lambda)+f_{m}(\lambda)+C\). In particular, for any \(n\geq 1\) and any \(\lambda\in I\),
\begin{equation*}
\nu_s(\lambda)=\lim_{q\to\infty} \frac{f_{qn}(\lambda)}{qn} \leq \frac{f_n(\lambda)}{n} + \frac{C}{n}.
\end{equation*}
Fix \(\epsilon>0\). Choose \(n_0\) such that \(C/n_0<\epsilon/3\) and \(\abs{\frac{f_{n_0}(\lambda')}{n_0}-\nu_s(\lambda')}\leq \epsilon/3\). By left-continuity of \(G_{\lambda}(0,n_0s)\) at \(\lambda'\), one can choose \(\epsilon'_0>0\) such that
\begin{equation*}
\abs{\frac{f_{n_0}(\lambda'-\epsilon')}{n_0} - \frac{f_{n_0}(\lambda')}{n_0}}\leq \epsilon/3
\end{equation*}for any \(\epsilon'<\epsilon'_0\). In particular, for any \(\epsilon'<\epsilon'_0\),
\begin{align*}
0\leq \nu_{s}(\lambda'-\epsilon')-\nu_s(\lambda')
&
\leq \frac{f_{n_0}(\lambda'-\epsilon')}{n_0} + \frac{C}{n_0} - \nu_s(\lambda')\\
&
\leq \abs{\frac{f_{n_0}(\lambda'-\epsilon')}{n_0} - \frac{f_{n_0}(\lambda')}{n_0}} + \epsilon/3 + \abs{\frac{f_{n_0}(\lambda')}{n_0} - \nu_s(\lambda')}\\
&
\leq \epsilon,
\end{align*}
where we used~\eqref{eq:nu_monotonicity} in the first line.
\end{proof}
\section{``Summable'' case}
In this section, we consider directions \(s\in\mathbb{S}^{d-1}\) for which
\begin{equation}\label{eq:SummabilityCondition}
\sum_{y\neq 0} \psi(y) e^{-\mathfrak{s}_t(y)} < \infty,
\end{equation}
where \(t\) is any vector dual to \(s\).
In this case, we first prove that saturation occurs in direction \(s\) at small enough values of \(\lambda\), whenever the model at hand satisfies~\ref{hyp:weak_SL}.
Then, we complement this result by showing, in some models, that saturation does not occur for values of \(\lambda\) close enough to \(\lambda_{\mathrm{\mathrm{exp}}}\).
\subsection{Saturation at small \(\lambda\)}
\begin{lemma}\label{lem:SaturationKRW}
Let \(s\in\mathbb{S}^{d-1}\) and fix some vector \(t\) dual to \(s\). Assume that~\eqref{eq:SummabilityCondition} holds.
Then, one can define \(0<\tilde{\lambda}\equiv \tilde{\lambda}^{\mathrm{KRW}}\leq \lambda_{\mathrm{c}}\) (given by~\eqref{eq:lambda_tilde_KRW}) such that, for any \(\lambda \in (0, \tilde{\lambda})\), \(\nu_s^{\mathrm{KRW}}(\lambda) = |s|\).
Moreover, when \(d=1\), \(\tilde{\lambda}^{\mathrm{KRW}} = \lambda_{\mathrm{sat}}^{\mathrm{KRW}}\).
\end{lemma}
\begin{proof}
Fix \(s\in\mathbb{S}^{d-1}\) and a dual vector \(t\). Assume that~\eqref{eq:SummabilityCondition} holds.
Let \(G_{\lambda} \equiv G^{\mathrm{KRW}}_{\lambda}\).
Set
\begin{equation}\label{eq:lambda_tilde_KRW}
\tilde{\lambda} = \min\Bigl\{\Bigl( \sum_{y\neq 0} \psi(y)e^{-\mathfrak{s}_t(y)}\Bigr)^{-1}, 1 \Bigr\} > 0.
\end{equation}
(Recall that \(\lambda_{\mathrm{c}}=1\) for the KRW.) Suppose \(\lambda< \tilde{\lambda}\). Let us introduce
\begin{align*}
A_k(n)
&=
\sum_{\substack{y_1, \dots, y_k\in\mathbb{Z}^d \setminus \{0\} \\ \sum_{i=1}^k y_i= ns }}\prod_{i=1}^k \lambda J_{y_i} \\
&=
e^{-n|s|} \sum_{\substack{y_1, \dots, y_k\in\mathbb{Z}^d \setminus \{0\} \\ \sum_{i=1}^k y_i= ns }} \prod_{i=1}^k \lambda\psi(y_i)e^{-\mathfrak{s}_t(y_i)}
\leq e^{-n|s|} \Bigl(\lambda \sum_{y\neq 0} \psi(y) e^{-\mathfrak{s}_t(y)} \Bigr)^{\!k}.
\end{align*}
Since \(\lambda \sum_{y\neq 0} \psi(y)e^{-\mathfrak{s}_t(y)} < 1\) for all \(\lambda\in [0, \tilde{\lambda})\), the first part of the result follows from
\begin{equation}\label{eq:UB_A_n}
G_{\lambda}(0,ns)= \sum_{k=1}^{\infty} A_k(n),
\end{equation}
which is a decomposition according to the length of the walk.
To establish the second part of the claim (the \(d=1\) case), one can assume \(\tilde{\lambda}<1=\lambda_{\mathrm{c}}\) (the claim being empty otherwise).
Without loss of generality, we consider \(s=1\).
The unique dual vector is \(t = |1|\).
Let \(\lambda \in (\tilde{\lambda}, \lambda_{\mathrm{c}})\).
As \(\lambda < \lambda_{\mathrm{c}}\), \(\nu_{1}(\lambda)\) is the abscissa of convergence of \(\mathbb{G}_{\lambda}(z) = \sum_{n\geq 1} e^{zn} G_{\lambda}(0,n)\).
It is therefore sufficient to find \(\epsilon>0\) such that \({\mathbb{G}_{\lambda}((1-\epsilon)|1|)} =\infty\).
The summability of \(\mathbb{G}_{\lambda}((1-\epsilon)|1|)\) is equivalent to the summability of
\begin{align*}
\sum_{n\geq 1} e^{(1-\epsilon)|1|n} G_{\lambda}(0,n)
&=
\sum_{n\geq 1} e^{(1-\epsilon)tn} \sum_{k\geq 1} \sum_{\substack{y_1, \dots, y_k\in\mathbb{Z} \setminus \{0\} \\ \sum_{i=1}^k y_i= n }} \prod_{i=1}^k \lambda \psi(y_i) e^{-|y_i|}\\
&=
\sum_{k\geq 1} \sum_{y_1, \dots, y_k\in\mathbb{Z} \setminus \{0\}}\prod_{i=1}^k \lambda \psi(y_i) e^{-|y_i| +(1-\epsilon) t y_i}\\
&=
\sum_{k\geq 1} \Bigl( \lambda\sum_{y\in\mathbb{Z} \setminus \{0\}} \psi(y) e^{-\mathfrak{s}_t(y)} e^{-\epsilon |1| y} \Bigr)^{\!k}.
\end{align*}
Now, \(f(\epsilon) = \lambda\sum_{y\in\mathbb{Z} \setminus \{0\}} \psi(y) e^{-\mathfrak{s}_t(y)} e^{-\epsilon |1|y}\) is continuous in \(\epsilon\) on \([0,\infty)\), and \(f(0)>1\) by choice of \(\lambda\).
So, it is still \(>1\) for some \(\epsilon>0\), implying the claim.
\end{proof}
\begin{remark}
The statement of Lemma~\ref{lem:SaturationKRW} obviously extends to the Gaussian Free Field via~\eqref{eq:GFF_to_KRW}.
\end{remark}
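The mechanism behind Lemma~\ref{lem:SaturationKRW} is easy to probe numerically. The sketch below (an illustration under ad hoc choices, not part of the argument) takes \(d=1\), \(s=1\), \(|\cdot|\) the Euclidean norm, and \(\psi(y)=|y|^{-2}\); for this choice a direct computation of~\eqref{eq:lambda_tilde_KRW} gives \(\tilde{\lambda}\approx 0.56\). It evaluates \(G^{\mathrm{KRW}}_{\lambda}(0,n)=\sum_k \lambda^k (J^{*k})(0,n)\) by iterated convolution at \(\lambda=0.2<\tilde{\lambda}\), and checks that \(e^{n}G^{\mathrm{KRW}}_{\lambda}(0,n)\) remains of order \(\psi(n)\), i.e.\ saturation (and, in fact, the condensation behavior of Section~\ref{sec:pre_factor}):

```python
import math

alpha, lam = 2.0, 0.2   # psi(y) = |y|^{-alpha}; lam below lambda_tilde ~ 0.56
M = 150                 # lattice truncation: sites -M..M (larger jumps negligible)
K = 30                  # walks of up to K steps; the remainder is geometrically small

size = 2 * M + 1
def idx(x):
    return x + M

# one-step weights lam * J_y, with J_y = psi(y) * exp(-|y|)
step_w = [0.0] * size
for y in range(-M, M + 1):
    if y != 0:
        step_w[idx(y)] = lam * abs(y) ** -alpha * math.exp(-abs(y))

G = [0.0] * size        # accumulates sum_{k=1}^{K} lam^k J^{*k}(0, .)
cur = [0.0] * size
cur[idx(0)] = 1.0       # k = 0: unit mass at the origin
for _ in range(K):
    new = [0.0] * size
    for x in range(-M, M + 1):
        w = cur[idx(x)]
        if w == 0.0:
            continue
        lo, hi = max(-M, -M - x), min(M, M - x)
        for y in range(lo, hi + 1):
            if y != 0:
                new[idx(x + y)] += w * step_w[idx(y)]
    cur = new
    for i in range(size):
        G[i] += cur[i]

# saturation/condensation: e^{n} G(0,n) should stay comparable to psi(n)
ratios = [math.exp(n) * G[idx(n)] / n ** -alpha for n in range(10, 41)]
print(min(ratios), max(ratios))
```

The ratios stay within a bounded window (the single long jump, of weight \(\lambda\psi(n)e^{-n}\), dominates, decorated by geometrically damped short jumps), which is exactly the condensation picture.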
We can now push the result to other models.
\begin{lemma}\label{lem:SaturationAtSmallLambda}
Suppose~\ref{hyp:weak_SL} holds.
Let \(s\in\mathbb{S}^{d-1}\) and \(t\) dual to \(s\). Assume that~\eqref{eq:SummabilityCondition} holds.
Then, there exists \(\tilde\lambda>0\) such that, for any \(\lambda \in [0, \tilde\lambda)\), \(\nu_s(\lambda) = |s|\).
\end{lemma}
\begin{proof}
Let \(\alpha\) be given by~\ref{hyp:weak_SL}.
Set
\begin{equation*}
\tilde{\lambda} = \frac{1}{\alpha} \tilde{\lambda}^{\mathrm{KRW}} > 0.
\end{equation*}
By~\ref{hyp:weak_SL} and Lemma~\ref{lem:SaturationKRW}, for \(\lambda<\tilde{\lambda}\),
\begin{equation*}
G_{\lambda}(0,ns) \leq C G_{\alpha\lambda}^{\mathrm{KRW}}(0,ns) \leq ce^{-n|s|}
\end{equation*}
for some \(\lambda\)-dependent constant \(c\), as \(\alpha\lambda < \tilde{\lambda}^{\mathrm{KRW}}\).
\end{proof}
\subsection{Prefactor for \(\mathrm{KRW}\) when \(\lambda<\lambda_{\mathrm{sat}}\)}\label{sec:pre_factor}
We first show the condensation phenomenon mentioned in the introduction for polynomial prefactors.
Namely, we prove
\begin{lemma}\label{lem:pre_fact_polynomial}
Let \(s\in\mathbb{S}^{d-1}\) and \(t\) dual to \(s\).
Suppose that \(\psi(x)=C_{\alpha}\vert x\vert^{-\alpha}\) and that~\eqref{eq:SummabilityCondition} holds. Then, there exists \(\tilde{\lambda}>0\) (the same as in Lemma~\ref{lem:SaturationKRW}) such that, for any \(\lambda<\tilde{\lambda}\), there exists \(c_+=c_{+}(\lambda)>0\) such that
\begin{equation}
G^{\mathrm{KRW}}_{\lambda}(0,ns) \leq c_+ J_{0,ns}.
\end{equation}
\end{lemma}
\begin{remark}
As \(\mathfrak{s}_t \geq 0\), \(\alpha>d\) always implies~\eqref{eq:SummabilityCondition}.
\end{remark}
\begin{proof}
Fix \(s\in\mathbb{S}^{d-1}\) and a dual vector \(t\).
Denote \(G_{\lambda}\equiv G^{\mathrm{KRW}}_{\lambda}\).
Let \(\tilde{\lambda}\) be given by~\eqref{eq:lambda_tilde_KRW} and fix \(\lambda<\tilde{\lambda}\). Start as in the proof of Lemma~\ref{lem:SaturationKRW}.
Define
\begin{equation*}
A_k(n)
=
\sum_{\substack{\gamma\in\mathsf{W}(0,ns)\\ \abs{\gamma} = k}} \prod_{i=1}^k \lambda J_{\gamma_{i-1}\gamma_i}
=
e^{-n|s|} \sum_{\substack{y_1,\dots,y_k\neq 0\\ \sum y_i = ns}} \prod_{i=1}^k \lambda \psi(y_i) e^{-\mathfrak{s}_t(y_i)}
\leq
e^{-n|s|} (\lambda\tilde{\lambda}^{-1})^k.
\end{equation*}
Since \(\lambda < \tilde{\lambda}\), the inequality above implies that there exist \(C_{1},C_{2}>0\) such that
\begin{equation*}
\sum\limits_{k=C_{1}\log(n)}^{\infty}\sum_{\substack{\gamma\in\mathsf{W}(0,ns) \\ \abs{\gamma}=k}}\lambda^{k}\prod_{i=1}^{k}J_{\gamma_{i-1} \gamma_i}\leq C_{2}J_{0,ns}.
\end{equation*}
Therefore, we can assume that \(k\leq C_{1}\log(n)\).
Let \(\gamma\in\mathsf{W}(0,ns)\) with \(\vert\gamma\vert=k\).
Since \(k<n\), there exists \(j\) such that \(\vert\gamma_{j}-\gamma_{j-1}\vert\geq \vert ns\vert /k\).
Then, we can write
\begin{align*}
A_k(n)
&\leq
k\sum_{y: \vert y\vert\geq \vert ns\vert /k} \psi(y)e^{-\vert y\vert }\sum_{\substack{\gamma\in\mathsf{W}(0,ns-y) \\ \abs{\gamma}=k-1}} \lambda^{k}\prod_{i=1}^{k-1}J_{\gamma_{i-1} \gamma_i}\\
&\leq
k e^{-n|s|}\psi(ns/k) \lambda \sum_{\substack{y_1,\dots y_{k-1}\\ \vert\sum y_i -ns\vert\geq \vert ns\vert /k } } \prod_{i=1}^{k-1} \lambda\psi(y_i)e^{-\mathfrak{s}_t(y_i)}\\
&\leq
C_3k^{1+\alpha} e^{-n|s|}\psi(ns) \lambda \Big(\sum_{y_1\neq 0} \lambda\psi(y_1)e^{-\mathfrak{s}_t(y_1)}\Big)^{k-1}\\
&= C_3J_{0,ns} k^{1+\alpha} \tilde{\lambda} (\lambda \tilde{\lambda}^{-1})^{k},
\end{align*}
where we used \(\vert y\vert\geq \vert ns\vert /k\) and \(\mathfrak{s}_t\geq 0\) in the second line, the polynomial form of \(\psi\) in the third one, and the definition of \(\tilde{\lambda}\) in the last one. Here, \(C_3\) is a constant depending only on \(|\cdot|\) and \(\alpha\).
This yields
\begin{align*}
\sum_{k=1}^{C_{1}\log(n)} A_k(n)
\leq
C_3J_{0,ns} \tilde{\lambda} \sum_{k=1}^{\infty}k^{\alpha+1} (\lambda \tilde{\lambda}^{-1})^k.
\end{align*}
Since \(\lambda<\tilde{\lambda}\), the last sum converges, which concludes the proof.
\end{proof}
We now show the same condensation phenomenon for a class of fast-decaying prefactors in a perturbative regime of \(\lambda\). Namely, we assume that the function \(\psi\) satisfies
\begin{enumerate}[label={\ensuremath{\mathrm{[H_\arabic*]}}}, start=1]
\item \label{hyp:psi_hyp1}
\(\psi(y)\) depends only on \(\abs{y}\) and is decreasing in \(\abs{y}\).
\item \label{hyp:psi_hyp2} There exist \(c>0\) and \(0<a\leq 1\) such that
\begin{equation}\label{eq:prefactor_super_summability}
\sum_{y\neq 0} \psi(y)^{a} e^{-\mathfrak{s}_t(y)} < \infty,
\end{equation}
and, for every \(n, m\in\mathbb{R}_+\) with \(m\leq n\),
\begin{equation}\label{eq:prefactor_factor_bnd}
\psi(n)\psi(m)\leq c\psi(n+m)\psi(m)^{a}.
\end{equation}
\end{enumerate}
These assumptions are in particular true for prefactors exhibiting stretched exponential decay, \(\psi (x)= C\exp (-b \abs{x}^{\gamma} )\) with \(b>0\) and \(0<\gamma <1\), as well as for power-law decaying prefactors \(\psi(x)= C\abs{x}^{-\alpha}\) with \(\alpha>d\).
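Both examples can be verified by hand; as a quick numerical sanity check (an illustration only: the choices \(b=C=1\), \(\gamma=1/2\), \(\alpha=5/2\) are arbitrary, and \(a=1-\gamma\), \(c=1\) for the stretched-exponential case follow from concavity of \(x\mapsto x^{\gamma}\)), the sketch below tests the factorization bound~\eqref{eq:prefactor_factor_bnd} over a range of pairs \(m\leq n\), with \(a=1\), \(c=2^{\alpha}\) in the power-law case:

```python
import math

gamma, alpha = 0.5, 2.5
slack = 1 + 1e-12            # tolerance for floating-point rounding at equality cases

def psi_stretch(x):          # stretched-exponential prefactor, b = C = 1
    return math.exp(-x ** gamma)

def psi_poly(x):             # power-law prefactor, C = 1
    return x ** -alpha

pairs = [(n, m) for n in range(1, 201) for m in range(1, n + 1)]

# factorization bound with a = 1 - gamma, c = 1
ok_stretch = all(
    psi_stretch(n) * psi_stretch(m)
    <= psi_stretch(n + m) * psi_stretch(m) ** (1 - gamma) * slack
    for n, m in pairs
)
# factorization bound with a = 1, c = 2**alpha
ok_poly = all(
    psi_poly(n) * psi_poly(m)
    <= 2 ** alpha * psi_poly(n + m) * psi_poly(m) * slack
    for n, m in pairs
)
print(ok_stretch, ok_poly)
```

Only~\eqref{eq:prefactor_factor_bnd} is tested here; the summability condition~\eqref{eq:prefactor_super_summability} is immediate for both families since \(\mathfrak{s}_t\geq 0\).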
\begin{lemma}
\label{lem:pre_fact_fast_dec}
Fix \(s\in\mathbb{S}^{d-1}\) and a dual vector \(t\). Assume that \(\psi\) is such that~\ref{hyp:psi_hyp1} and~\ref{hyp:psi_hyp2} hold (in particular, \eqref{eq:SummabilityCondition} holds for \(t\)).
Then, there exists \(\lambda_{0}>0\) such that, for any \(\lambda < \lambda_{0}\), one can find \(c_+>0\) such that
\begin{equation}
G^{\mathrm{KRW}}_{\lambda}(0,ns) \leq c_+ J_{0,ns} .
\end{equation}
\end{lemma}
\begin{remark}
One can notice that, in the case \(\psi(x) = C_{\alpha}|x|^{-\alpha}\),~\eqref{eq:prefactor_factor_bnd} is satisfied with \(a=1\) and \(c= 2^{\alpha}\), and~\eqref{eq:prefactor_super_summability} is then simply~\eqref{eq:SummabilityCondition}. The condition is therefore the same as in Lemma~\ref{lem:pre_fact_polynomial}, but the \(\lambda_0\) of Lemma~\ref{lem:pre_fact_fast_dec} is smaller than the \(\tilde{\lambda}\) of Lemma~\ref{lem:pre_fact_polynomial} (\(\tilde{\lambda} = 2^{\alpha} \lambda_0\)).
\end{remark}
\begin{proof}
Fix \(s\in\mathbb{S}^{d-1}\) and a dual vector \(t\) and let \(\psi\) be as in the statement. Write \(G_{\lambda}\equiv G^{\mathrm{KRW}}_{\lambda}\).
Let \(c, a\) be given by~\ref{hyp:psi_hyp2}.
Let \(\lambda_0\) be given by
\begin{equation*}
\lambda_0 = \Bigl(c\sum_{y\neq 0} \psi(y)^{a}e^{-\mathfrak{s}_t(y)} \Bigr)^{\!-1}>0.
\end{equation*}
We can rewrite \(G_{\lambda}\) as
\begin{align*}
e^{n|s|} G_{\lambda}(0,ns)
&=
\sum\limits_{k=1}^{\infty} \lambda^k \sum_{\substack{y_1, \dots, y_k \\ \sum_{i=1}^k y_i=ns }} \prod_{i=1}^k \psi(y_i) e^{-\mathfrak{s}_t(y_i)}\\
&\leq
\sum_{k=1}^\infty \lambda^k k\sum_{\substack{y_1, \dots, y_{k-1}\\ \abs{ns -\sum_{i=1}^{k-1} y_i} \geq \max_i\abs{y_i}}} \psi\Bigl(ns - \sum_{i=1}^{k-1} y_i\Bigr) \prod_{i=1}^{k-1} \psi(y_i) e^{-\mathfrak{s}_t(y_i)},
\end{align*}
where we used \(\mathfrak{s}_t \geq 0\).
Now, iterating~\eqref{eq:prefactor_factor_bnd} \(k\) times yields that, for any \(k\geq 1\) and any \(y_1, \dots, y_{k-1}\neq 0\) such that \(\abs{ns - \sum_{i=1}^{k-1}y_i} \geq \max_i\abs{y_i}\),
\begin{equation*}
\psi\Bigl(ns-\sum_{i=1}^{k-1} y_i\Bigr) \prod_{i=1}^{k-1} \psi(y_i)
\leq
c^k \psi(ns) \prod_{i=1}^{k-1} \psi(y_i)^{a}.
\end{equation*}
This gives
\begin{equation*}
G_{\lambda}(0,ns)
\leq
e^{-n|s|} \psi(ns) \lambda c \sum_{k=1}^\infty k \Bigl(\lambda c\sum_{y\neq 0} \psi(y)^{a} e^{-\mathfrak{s}_t(y)} \Bigr)^{\!k-1}.
\end{equation*}
The result follows since \(\lambda<\lambda_0\).
\end{proof}
As for the saturation result, one can use~\ref{hyp:weak_SL} to push the result to other models.
\begin{corollary}
\label{cor:condensation}
Assume that~\ref{hyp:weak_SL} and~\ref{hyp:J_path_lower_bnd} hold.
Let \(s\in\mathbb{S}^{d-1}\) and let \(t\) be a dual vector. Suppose that \(\psi\) fulfills the hypotheses of either Lemma~\ref{lem:pre_fact_polynomial} or Lemma~\ref{lem:pre_fact_fast_dec}.
Then, there exists \(\lambda_0>0\) such that, for any \(\lambda<\lambda_0\),
\begin{equation*}
c_-(\lambda)J_{0,ns}\leq G_{\lambda}(0,ns) \leq c_+(\lambda) J_{0,ns},
\end{equation*}
for some \(c_+(\lambda),c_-(\lambda)>0\).
\end{corollary}
The use of~\ref{hyp:J_path_lower_bnd} to obtain the lower bound is obviously overkill; the inequality follows from less restrictive versions of the arguments we use in Appendix~\ref{app:Properties}.
\subsection{Prefactor for \(\mathrm{KRW}\) when \(\lambda>\lambda_{\mathrm{sat}}\)}\label{sec:pre_factorOZ}
In this section, we establish Ornstein--Zernike asymptotics for \(\mathrm{KRW}\) whenever there is a mass gap (that is, when saturation does not occur). We expect similar results for general models, but the proofs would be much more intricate. We will come back to this issue in another paper.
\begin{lemma}\label{lem:OZ}
Let \(s\in\mathbb{S}^{d-1}\) and \(\lambda\in (\lambda_{\mathrm{sat}}(s),\lambda_{\mathrm{exp}})\). There exists \(C_{\lambda}=C(\lambda)>0\) such that
\begin{equation}
G_{\lambda}^{\mathrm{KRW}}(0,ns)=\dfrac{C_{\lambda}}{\vert ns\vert^{(d-1)/2}}e^{-\nu_{s}(\lambda)n}(1+o_{n}(1)).
\end{equation}
\end{lemma}
\begin{proof}
We follow the ideas developed in \cite{Campanino+Ioffe-2002}.
We first express \(e^{\nu_s(\lambda) n} G^{\mathrm{KRW}}_{\lambda}(0,ns)\) as a sum of probabilities for a certain random walk.
We then use the usual local limit theorem on this random walk to deduce the sharp prefactor.
Let \(G_{\lambda}=G_{\lambda}^{\mathrm{KRW}}, \nu_s = \nu_s(\lambda)\).
Since \(\lambda<\lambda_{\mathrm{exp}}\), \(\nu\) defines a norm on \(\mathbb{R}^d\) (see Claim 2).
Let \(\tilde{t}_s\) be a dual vector to \(s\) with respect to the norm \(\nu\).
We can rewrite \(e^{\nu_s n} G_{\lambda}(0,ns)\) in the following way:
\begin{equation}
e^{\nu_s n} G_{\lambda}(0,ns)
=
\sum_{N=1}^{\infty} \sum_{\substack{y_1, \dots, y_N \\ \sum y_i=ns}} \prod_{i=1}^{N} w(y_i) ,
\end{equation}
with \(w(y_i) = \lambda e^{\tilde{t}_s \cdot y_i - |y_i|} \psi(y_i)\).
Remark that \(w(y_i)\) has an exponential tail, since \(\nu_s < |s|\).
Moreover, \(w(y)\) defines a probability measure on \(\mathbb{Z}^d \setminus \{0\}\).
Indeed, let \(t_s\) be a dual vector to \(s\) with respect to the norm \(|\cdot|\).
Notice that, for \(x>0\),
\begin{align*}
\sum_{k\geq 1} x^{|s| k}e^{\nu_s k} G_{\lambda}(0,ks)
&=
\sum_{N\geq 1} \sum_{k\geq 1} \sum_{\substack{y_1,\dots,y_N \\ \sum y_i=ks }} \prod_{i=1}^{N} x^{t_s \cdot y_i} w(y_i) \\
&\leq
\sum_{N\geq 1}\biggl( \sum_{y\neq 0} x^{t_s \cdot y} w(y) \biggr)^{\!\!N} \\
&=
\dfrac{\sum_{y\neq 0} x^{t_s\cdot y} w(y)}{1-\sum_{y\neq 0} x^{t_s\cdot y} w(y)}.
\end{align*}
The radius of convergence of the series in the left-hand side is equal to 1, whereas the radius of convergence of the series in the right-hand side is strictly larger than 1, since \(w(y)\) has an exponential tail.
It follows that, for \(x=1\), we must have
\begin{equation}
\sum\limits_{y\neq 0}w(y)=1.
\end{equation}
We denote by \(P_0\) the law of the random walk \((S_n)_{n\geq 1}\) on \(\mathbb{Z}^d\), starting at \(0\in\mathbb{Z}^{d}\) and with increments of law \(w\), and by \(E_0\) the corresponding expectation.
We can rewrite
\begin{equation}\label{eq:RW}
e^{\nu_{s}n} G_{\lambda}(0,ns) = \sum_{N\geq 1} P_0(S_N = ns).
\end{equation}
Remark that \(E_0(S_1) = \mu s\) for some \(\mu\in\mathbb{R}\).
Indeed, were it not the case, rough large deviation bounds would imply the existence of \(c>0\) such that \(P_0(S_N = ns) \leq e^{-c\max\{n,N\}}\) for all \(N\). Using~\eqref{eq:RW}, this would imply \(e^{\nu_s n} G(0,ns) \leq e^{-c' n}\), for some \(c'>0\), contradicting the fact that \(e^{\nu_s n} G(0,ns) = e^{{\mathsf o}(n)}\).
Fix \(\delta>0\) small.
On the one hand, uniformly in \(y\) such that \(\abs{y - n\mu s} \leq n^{1/2-\delta}\), we have, by the local limit theorem,
\begin{equation}
\sum_{N:\, \abs{N-n} \leq n^{1/2+\delta}} P_{0}(S_N=y)
=
\dfrac{\tilde{C}_\lambda}{\vert ns\vert^{(d-1)/2}} \bigl(1+{\mathsf o}_n(1)\bigr),
\end{equation}
where \(\tilde{C}_\lambda>0\) can be computed explicitly.
On the other hand, since \(w\) has exponential tail, a standard large deviation upper bound shows that
\begin{equation}
\sum_{N:\,\abs{N-n} > n^{1/2 +\delta}} P_0(S_N=y) \leq e^{-c_2 n^{2\delta'}},
\end{equation}
for some small \(\delta'>0\).
Therefore, it follows from~\eqref{eq:RW} that
\begin{equation}
e^{\nu_s n} G_{\lambda}(0,ns) = \dfrac{C_\lambda}{|ns|^{(d-1)/2}} \bigl(1+{\mathsf o}_n(1)\bigr),
\end{equation}
with \(C_\lambda = \tilde{C}_\lambda\mu^{(d-1)/2}\).
\end{proof}
\subsection{Absence of saturation at large \(\lambda\)}
\label{sec:lambda_sat_less_lambda_c}
\begin{lemma}\label{lem:nontrivial_mass_gap_regim_d1}
Suppose \(d=1\) and \(*\in\{\mathrm{Ising}, \mathrm{Potts}, \mathrm{FK}, \mathrm{XY}\}\).
Then, there exists \(\lambda_0 \in (0, \infty)\) such that \(0 < \nu^*(\lambda) < \abs{1}\) when \(\lambda > \lambda_0\).
\end{lemma}
\begin{proof}
In all the models \(\{\mathrm{Ising}, \mathrm{Potts}, \mathrm{FK}, \mathrm{XY}\}\), \(\nu(\lambda) > 0\) for any \(\lambda > 0\) when \(d=1\), so it remains to prove the upper bound \(\nu(\lambda)<\abs{1}\) for \(\lambda\) large.
For FK percolation, this is an easy consequence of the finite-energy property: bound \(\Phi^{\mathrm{FK}}(0\leftrightarrow x)\) from below by the probability that a given minimal-length nearest-neighbor path \(\gamma\) is open, which is at least \(p_\beta^{\norm{x}_1}\) with \(\lim_{\beta\to\infty} p_\beta = 1\); the Ising and Potts models are then covered via their FK representation.
A similar argument is available for the \(\mathrm{XY}\) model: set all coupling constants not belonging to \(\gamma\) to \(0\) by Ginibre inequalities and explicitly integrate the remaining one-dimensional nearest-neighbor model to obtain a similar bound.
\end{proof}
\begin{lemma}\label{lem:nontrivial_mass_gap_regim_GFF_KRW}
Suppose \(*\in\{\mathrm{GFF}, \mathrm{KRW}\}\). Suppose either \(d=1\) or \(d\geq 3\) and \(\lambda^{*}_{c}=\lambda^{*}_{\exp}\). Then, \(\lambda^{*}_{\mathrm{sat}} < \lambda^{*}_{\exp}\).
\end{lemma}
\begin{proof}
We treat only the KRW, as the extension to the GFF is immediate.
Suppose first that \(d\geq 3\).
Then, \(G_{\lambda_{\mathrm{c}}}(x,y)\) is finite for any \(x,y\in\mathbb{Z}^d\) and does not decay exponentially fast.
So, \(\nu(\lambda_{\mathrm{c}})\) is well defined and equals \(0\).
Left-continuity of \(\nu\) and the assumption \(\lambda_{\mathrm{c}}=\lambda_{\mathrm{exp}}\) conclude the proof.
For \(d=1\) we use the characterization of Lemma~\ref{lem:SaturationKRW}.
By our choice of normalization for \(J\) and the definition of \(\lambda_{\mathrm{sat}}^{\mathrm{KRW}}\) and \(\mathfrak{s}_t\),
\begin{gather*}
2\sum_{n\geq 1} \psi(n) e^{-n|1|} = 1 = \lambda_{\mathrm{c}}
\quad\text{ and }\quad
\lambda_{\mathrm{sat}}^{\mathrm{KRW}} = \Bigl( \sum_{n\geq 1} \psi(n) (1 + e^{-2n|1|}) \Bigr)^{\!-1}.
\end{gather*}
In particular, defining a probability measure \(p\) on \(\mathbb{N}\) by \(p(n) = 2\psi(n) e^{-n|1|}\), one obtains
\begin{equation*}
\lambda_{\mathrm{sat}}^{\mathrm{KRW}} = \Bigl( \sum_{n\geq 1} p(n)\cosh(n|1|) \Bigr)^{\!-1} < 1 = \lambda_{\mathrm{c}}^{\mathrm{KRW}}.
\end{equation*}
The conclusion will follow once we prove that $\lambda_{\exp}^{\mathrm{KRW}}=1$. Fix $\lambda<1$ and $\delta >0$. Then
\[
\sum_{n\in\mathbb{Z}} e^{\delta n} G^{\mathrm{KRW}}_{\lambda}(0,n)
=
\sum_{n\in\mathbb{Z}} e^{\delta n} \sum_{k\geq 1} \sum_{\substack{y_1, \dots, y_k\in\mathbb{Z} \setminus \{0\} \\ \sum_{i=1}^k y_i= n }} \prod_{i=1}^k \lambda J_{0,y_{i}}
=\sum_{k=1}^{\infty}\Bigl(\lambda\sum_{y\neq 0}J_{0,y}e^{\delta y}\Bigr)^{\!\!k}.
\]
By our choice of normalization for $J$ and the fact that $J_{0,y}$ has exponential tails, one can choose $\delta$ small enough that $\lambda\sum_{y\neq 0}J_{0,y}e^{\delta y}<1$, so the sum over $k$ is finite. Since this holds for every $\lambda<1$, we get $\lambda_{\exp}^{\mathrm{KRW}}\geq 1$; as $\lambda_{\exp}^{\mathrm{KRW}}\leq\lambda_{\mathrm{c}}^{\mathrm{KRW}}=1$, this proves that $\lambda_{\exp}^{\mathrm{KRW}}=1$.
\end{proof}
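As a purely illustrative numerical check of the strict inequality \(\lambda_{\mathrm{sat}}^{\mathrm{KRW}}<\lambda_{\mathrm{c}}^{\mathrm{KRW}}=1\) in \(d=1\), one can evaluate the formulas above for a concrete (hypothetical) choice of prefactor, here \(\psi(n)=c\,n^{-2}\) with \(\abs{\,\cdot\,}\) the absolute value, the constant \(c\) being fixed by the normalization \(2\sum_{n\geq 1}\psi(n)e^{-n}=1\):

```python
import math

# Hypothetical prefactor psi(n) = c / n^2 in d = 1, with |.| the absolute value.
# The truncation N is harmless here: the neglected tails are exponentially small.
N = 2000

# Normalization 2 * sum_{n>=1} psi(n) * exp(-n) = 1 fixes the constant c.
c = 1.0 / (2.0 * sum(math.exp(-n) / n**2 for n in range(1, N + 1)))

# lambda_sat = ( sum_{n>=1} psi(n) * (1 + exp(-2n)) )^{-1}
#            = ( sum_{n>=1} p(n) * cosh(n) )^{-1},  with  p(n) = 2 psi(n) e^{-n}.
lam_sat = 1.0 / sum(c / n**2 * (1.0 + math.exp(-2 * n)) for n in range(1, N + 1))

print(lam_sat)  # ~ 0.458, strictly below lambda_c = 1
```

Since \(\cosh(n\abs{1})>1\) for every \(n\geq 1\), the strict inequality holds for any admissible \(\psi\); the script merely exhibits one explicit value.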
\begin{lemma}\label{lem:nontrivial_mass_gap_regim_AnyD}
Suppose \(d>1\) and consider Bernoulli percolation or the Ising model. Suppose \(\lambda_{\mathrm{exp}}=\lambda_{\mathrm{c}}\).
Then, there exists \(\lambda_0 \in [0, \lambda_{\mathrm{exp}})\) such that, for any \(s\in\mathbb{S}^{d-1}\) and \(\lambda \in (\lambda_0, \lambda_{\mathrm{exp}})\),
\begin{equation*}
\nu_s(\lambda) < |s|.
\end{equation*}
\end{lemma}
\begin{proof}
The existence of \(\lambda_0\) follows from Lemma~\ref{lem:nu_left_cont} and the fact that \(\nu_s(\lambda_{\mathrm{c}}) = 0\), which is obtained by equivalence of directions for \(\nu\) (Lemma~\ref{lem:rate_equiv_directions}) and divergence of the susceptibility at \(\lambda_{\mathrm{c}}\).
The latter is proved for the Ising model and Bernoulli percolation in~\cite{Duminil-Copin+Tassion-2016}. The conclusion follows by the assumption \(\lambda_{\mathrm{exp}}=\lambda_{\mathrm{c}}\).
\end{proof}
\section{``Non-summable'' case}
In this section we consider directions \(s\in\mathbb{S}^{d-1}\) for which
\begin{equation}\label{eq:NonSummabilityCondition}
\sum_{y\neq 0} \psi(y) e^{-\mathfrak{s}_t(y)} = +\infty,
\end{equation}
where \(t\) is any vector dual to \(s\).
We prove that saturation does not occur in direction \(s\) at any value of \(\lambda\), provided that the model at hand satisfies~\ref{hyp:J_path_lower_bnd}.
\medskip
Before proving the general claim, let us just mention that the claim is almost immediate when \(\psi(ns)\) is not uniformly bounded in \(n\).
Indeed, suppose \(\nu_s(\lambda)=|s|\).
Then, by~\ref{hyp:sub_mult}, \(G_{\lambda}(0,ns)\leq a_{\lambda}^{-1}e^{-\nu_s n}\) (using~\eqref{eq:nu_infimum}), while by~\ref{hyp:J_path_lower_bnd}, \(G_{\lambda}(0,ns)\geq C_{\lambda} \psi(ns)e^{-n|s|}\).
Combining these two bounds with the assumption \(\nu_s(\lambda)=|s|\), we deduce that
\begin{equation*}
C_{\lambda} \psi(ns)e^{-n|s|}\leq G_{\lambda}(0,ns) \leq a_{\lambda}^{-1}e^{-n|s|},
\end{equation*}
which implies that \(\psi(ns)\) is bounded uniformly over \(n\).
\medskip
Let us now turn to a proof of the general case.
\subsection{Absence of saturation at any \(\lambda\)}
\begin{lemma}\label{lem:mass_gap_non_summable_surcharge}
Suppose~\ref{hyp:J_path_lower_bnd} and~\ref{hyp:PsiQuasiIsotropic}. Let \(s\in\mathbb{S}^{d-1}\) and let \(t\) be a vector dual to \(s\). Assume that \(\partial\mathscr{U}\) is quasi-isotropic in direction \(s\) and that~\eqref{eq:NonSummabilityCondition} holds.
Then, for any \(\lambda>0\), \(\nu_s(\lambda)<|s|\).
\end{lemma}
\begin{proof}
We use the notation of Section~\ref{sec:QuasiIsotropy}. In particular, we assume that \(\mathscr{N}\) and \(\epsilon\) have been chosen small enough to ensure that either \(g\equiv 0\), or \(g\) vanishes only at \(0\).
Let \(\delta>0\) and consider the cone \(\mathscr{Y}_{t,\delta} = \setof{y\in\mathbb{Z}^d}{\mathfrak{s}_t(y) \leq \delta |y|}\).
When \(g\) vanishes only at \(0\), we further assume that \(\delta\) is small enough to ensure that \(\mathscr{Y}_{t,\delta} \cap \partial\mathscr{U} \subset \mathscr{N}\) (this will be useful in the proof of Lemma~\ref{lem:ExplicitCond} below).
It follows from~\eqref{eq:psi_subexp} that
\[
\sum_{y\notin\mathscr{Y}_{t,\delta}} \psi(y) e^{-\mathfrak{s}_t(y)}
\leq
\sum_{y\notin\mathscr{Y}_{t,\delta}} \psi(y) e^{-\delta |y|} < \infty .
\]
Since we assume that~\eqref{eq:NonSummabilityCondition} holds, this implies that
\[
\sum_{y\in\mathscr{Y}_{t,\delta}} \psi(y) e^{-\mathfrak{s}_t(y)} = +\infty.
\]
Let \(\mathcal{T}_R(s) = \setof{y\in\mathbb{R}^d}{\normsup{y-(y\cdot s)s} \leq R}\). We will need the following lemma.
\begin{lemma}\label{lem:intermediaire}
For any \(R>0\) large enough, we have
\begin{equation}\label{eq:DivergenceSubCone}
\inf_{x\in\mathcal{T}_R(s)} \sum_{y\in (x+\mathscr{Y}_{t,\delta}) \cap \mathcal{T}_R(s)} \psi(y-x) e^{-\mathfrak{s}_t(y-x)} = \infty .
\end{equation}
\end{lemma}
This lemma is established below.
In the meantime, assume that the lemma is true. Then, one can find \(R>0\) such that
\begin{equation}\label{eq:BigEnough}
\inf_{x\in\mathcal{T}_R(s)} \sum_{y\in (x+\mathscr{Y}^R_{t,\delta}) \cap \mathcal{T}_R(s)} \psi(y-x) e^{-\mathfrak{s}_t(y-x)} \geq e^2 C_\lambda^{-1},
\end{equation}
where we have introduced the truncated cone \(\mathscr{Y}^R_{t,\delta} = \setof{y\in\mathscr{Y}_{t,\delta}}{\normsup{y}\leq R}\).
We are now going to construct a family of self-avoiding paths connecting \(0\) to \(ns\) in the following way: we first set \(M=\frac{n}{2R}\) and choose \(y_1, y_2, \dots, y_{M+1}\) in such a way that
\begin{itemize}
\item \(y_k \in \mathscr{Y}^R_{t,\delta}\) for all \(1\leq k\leq M\);
\item for all \(1\leq m\leq M\), \(\sum_{k=1}^m y_k \in \mathcal{T}_R(s)\);
\item \(y_{M+1} = ns- \sum_{k=1}^M y_k\).
\end{itemize}
Note that, necessarily, \(s\cdot y_{M+1} \geq n/2\) and \(y_{M+1}\in \mathcal{T}_R(s)\).
We then consider the set \(\Gamma\subset\mathsf{SAW}(0,ns)\) of all self-avoiding paths \((0, y_1, y_1+y_2, \dots, y_1+\dots+y_{M}, ns)\) meeting the above requirements.
We thus obtain that, by~\ref{hyp:J_path_lower_bnd},
\begin{align*}
e^{n|s|} G_{\lambda}(0,ns)
&\geq
C_{\lambda}\sum_{y_1}\dots\sum_{y_M} \prod_{k=1}^{M+1} C_{\lambda} \psi(y_k) e^{-|y_k| + y_k\cdot t} \\
&=
(C_{\lambda})^{M +2} e^{{\mathsf o}(n)} \sum_{y_1}\dots\sum_{y_M} \prod_{k=1}^{M} \psi(y_k) e^{-\mathfrak{s}_t(y_k)}\\
&\geq
(C_{\lambda})^{M +2} e^{{\mathsf o}(n)} \sum_{y_1}\dots\sum_{y_{M-1}} \prod_{k=1}^{M-1} \psi(y_k) e^{-\mathfrak{s}_t(y_k)} (e^2 C_{\lambda}^{-1})\\
&\geq\cdots\geq
(C_{\lambda})^{M +2} e^{{\mathsf o}(n)} (e^2 C_{\lambda}^{-1})^M = C_{\lambda}^2 e^{n/R +{\mathsf o}(n)},
\end{align*}
where the sums are over \( y_1, \ldots, y_M\) meeting the requirements for the path to be in \(\Gamma\).
The term \(e^{{\mathsf o}(n)}\) in the second line is the contribution of \(y_{M+1}\) (\(y_{M+1}\in\mathcal{T}_R(s)\) and its length is at least \(n/2\), so \(\mathfrak{s}_t(y_{M+1}) = {\mathsf o}(n)\) and \(\psi(y_{M+1})=e^{{\mathsf o}(n)}\)).
For the third and fourth lines, we apply~\eqref{eq:BigEnough} \(M\) times.
Since \(M = n/(2R)\), this shows that \(e^{n|s|} G_{\lambda}(0,ns) \geq e^{n/R + {\mathsf o}(n)}\), whence \(\nu_s(\lambda) \leq |s| - 1/R < |s|\).
\end{proof}
There only remains to prove Lemma~\ref{lem:intermediaire}.
The latter is a direct consequence of the following quantitative version of~\eqref{eq:NonSummabilityCondition}, which can be useful to explicitly determine whether saturation occurs in a given direction; see Remark~\ref{rem:DirectionDepSaturation} in Section~\ref{sec:MainTheorems} for an example. Below, it will be convenient to set \(g^{-1}\equiv 1\) when \(g\equiv 0\).
\begin{lemma}\label{lem:ExplicitCond}
Under the assumptions of Lemma~\ref{lem:mass_gap_non_summable_surcharge}, Condition~\eqref{eq:NonSummabilityCondition} is equivalent to the condition
\begin{equation}\label{eq:ExplicitCondition}
\sum_{\ell\geq 1} \psi_0(\ell) (\ell g^{-1}(1/\ell))^{d-1} = \infty.
\end{equation}
\end{lemma}
\begin{proof}
We shall do this separately for the case \(g\equiv 0\) (\(s_0\) belongs to the ``interior'' of a facet of \(\partial\mathscr{U}\)) and when \(g\) vanishes only at \(0\).
\medskip
\textbf{Case 1: \(\boldsymbol{g\equiv 0}\).}
In this case, we can find \(\eta>0\) such that \(\mathfrak{s}_t(y) = 0\) for all \(y\) in the subcone \(\mathscr{C}_\eta(s) = \setof{\lambda s'}{\lambda>0,\, s'\in\mathbb{S}^{d-1},\, \norm{s'-s} < \eta}\).
In particular, for all \(y\in\mathscr{C}_\eta(s)\),
\[
\psi(y) e^{-\mathfrak{s}_t(y)} = \psi(y),
\]
from which the claim follows immediately using~\ref{hyp:PsiQuasiIsotropic}.
\medskip
\textbf{Case 2: \(\boldsymbol{g > 0}\).}
We now assume that \(g(\tau) > 0\) for all \(\tau\neq 0\) (remember the setting of Section~\ref{sec:QuasiIsotropy}).
For simplicity, let \(u\in\mathbb{Z}^d\) be such that \(\normsup{u}=R\) and write \(\mathscr{C}_u = \mathscr{Y}_{t,\delta} \cap \bigl(u + \mathcal{T}_R(s)\bigr)\) for the corresponding sub-cone.
Given \(y\in\mathscr{Y}_{t,\delta}\), we write \(y^\parallel = y\cdot \hat{t}\) and \(y^\perp = y - y^\parallel \hat{t}\). In particular, we have
\[
y^\parallel = \frac{|y|}{\norm{t}} - |y| f\biggl(\frac{y^\perp}{|y|}\biggr).
\]
This implies that
\[
\mathfrak{s}_t(y)
= |y| - t\cdot y
= |y| - \norm{t} y^\parallel
= \norm{t} |y|\, f(y^\perp/|y|).
\]
We conclude that
\begin{equation}\label{eq:boundsOnSurcharge}
C_+ |y|\, g(\norm{y^\perp}/|y|)
\geq
\mathfrak{s}_t(y)
\geq
C_- |y|\, g(\norm{y^\perp}/|y|)
\end{equation}
where we have set \(C_\pm = c_\pm \norm{t}\).
Using~\ref{hyp:PsiQuasiIsotropic}, we can write
\begin{align*}
\sum_{y\in\mathscr{C}_u} \psi(y) e^{-\mathfrak{s}_t(y)}
\leq
C_\psi^+\sum_{\ell\geq 1} \psi_0(\ell) \sum_{r\geq 0} \sum_{\substack{y\in\mathscr{C}_u\\\normI{y}=\ell\\\norm{y^\perp}\in [r,r+1)}}
e^{-\mathfrak{s}_t(y)}
\leq
c_1 \sum_{\ell\geq 1} \psi_0(\ell) \sum_{r\geq 0}
r^{d-2}
e^{-c_2\ell g(c_3r/\ell)} .
\end{align*}
Let \(x=\frac{1}{c_3} \ell g^{-1}(1/\ell)\). The sum over \(r\) is easily bounded:
\begin{align*}
\sum_{r\geq 0}
r^{d-2}
e^{-c_2 \ell g(c_3 r/\ell)}
&\leq
\sum_{k\geq 0} \sum_{r=kx}^{(k+1)x} r^{d-2} e^{-c_2\ell g(c_3 r/\ell)} \\
&\leq
\sum_{k\geq 0} e^{-c_2 \ell g(k g^{-1}(1/\ell))} \sum_{r=kx}^{(k+1)x} r^{d-2} \\
&\leq
x^{d-1} \sum_{k\geq 0} (k+1)^{d-1} e^{-c_2 \ell g(k g^{-1}(1/\ell))} .
\end{align*}
Let us prove that the last sum is finite. Let \(h(k) = g(k g^{-1}(1/\ell))\). Notice that \(h(0)=g(0)=0\) and \(h(1)=1/\ell\). Since \(g\) is convex and increasing, so is \(h\). Convexity then implies that
\begin{equation*}
h(1)
=
h\bigl( \tfrac1k\cdot k + (1-\tfrac1k)\cdot 0 \bigr)
\leq
\tfrac1k h(k) + (1-\tfrac1k) h(0)
=
\tfrac1k h(k) .
\end{equation*}
Therefore, we get
\begin{equation*}
\sum_{k\geq 0} (k+1)^{d-1} e^{-c_2 \ell g(k g^{-1}(1/\ell))}
\leq
\sum_{k\geq 0} (k+1)^{d-1} e^{-c_2 k} ,
\end{equation*}
which implies the following upper bound
\begin{equation*}
\sum_{y\in\mathscr{C}_u} \psi(y) e^{-\mathfrak{s}_t(y)}
\leq
c_4\sum_{\ell\geq 1} \psi_0(\ell) (\ell g^{-1}(1/\ell))^{d-1} .
\end{equation*}
Similarly, using the upper bound in~\eqref{eq:boundsOnSurcharge} (and once more~\ref{hyp:PsiQuasiIsotropic}), we get the following lower bound:
\begin{align*}
\sum_{y\in\mathscr{C}_u} \psi(y) e^{-\mathfrak{s}_t(y)}
&\geq
C_\psi^-\sum_{\ell\geq 1} \psi_0(\ell) \sum_{r\geq 0} \sum_{\substack{y\in\mathscr{Y}_{t,\delta}\\\normI{y}=\ell\\\norm{y^\perp}\in [r,r+1)}} e^{-\mathfrak{s}_t(y)} \\
&\geq
C_\psi^-\sum_{\ell\geq 1} \psi_0(\ell) \sum_{r=0}^{\frac{1}{c_6}\ell g^{-1}(1/\ell)} r^{d-2} e^{-c_5\ell g(c_6 r/\ell)} \\
&\geq
c_7 \sum_{\ell\geq 1} \psi_0(\ell) \sum_{r=0}^{\frac{1}{c_6}\ell g^{-1}(1/\ell)} r^{d-2} \\
&\geq
c_8 \sum_{\ell\geq 1} \psi_0(\ell)(\ell g^{-1}(1/\ell))^{d-1} . \qedhere
\end{align*}
\end{proof}
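To illustrate Condition~\eqref{eq:ExplicitCondition}, consider the (hypothetical) situation where \(g\) is exactly quadratic, \(g(\tau)=\tau^2\), and \(\psi_0(\ell)=\ell^{-\alpha}\) for some \(\alpha>0\). Then \(g^{-1}(1/\ell)=\ell^{-1/2}\) and

```latex
\begin{equation*}
\sum_{\ell\geq 1} \psi_0(\ell) \bigl(\ell g^{-1}(1/\ell)\bigr)^{d-1}
=
\sum_{\ell\geq 1} \ell^{-\alpha+(d-1)/2},
\end{equation*}
```

which diverges if and only if \(\alpha\leq (d+1)/2\); in that case, by Lemma~\ref{lem:mass_gap_non_summable_surcharge}, saturation does not occur in direction \(s\) at any \(\lambda\).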
\section{Acknowledgments}
Dima Ioffe passed away before this paper was completed.
The first stages of this work were accomplished while he and the fourth author were stranded at Geneva airport for 31 hours. In retrospect, the fourth author is really grateful to easyJet for having given him that much additional time to spend with such a wonderful friend and collaborator.
YA thanks Hugo Duminil-Copin for financial support. SO is supported by the Swiss NSF through an early PostDoc.Mobility Grant. SO also thanks Roma Tre University for its hospitality, supported by the ERC (ERC CoG UniCoSM, grant agreement n.~724939). YV is partially supported by the Swiss NSF.
\section{Introduction and results}
\subsection{Introduction.}
The correlation length plays a fundamental role in our understanding of the properties of a statistical mechanical system.
It measures the typical distance over which the microscopic degrees of freedom are strongly correlated.
The usual way of defining it precisely is as the inverse of the rate of exponential decay of the 2-point function.
In systems in which the interactions have an infinite range, the correlation length can only be finite if these interactions decay at least exponentially fast with the distance.
Such a system is then said to have \emph{short-range} interactions.\footnote{While the terminology ``short-range'' \textit{vs.} ``long-range'' appears to be rather imprecise, different authors meaning quite different things by these terms, there is agreement on the fact that interactions decreasing exponentially fast with the distance are short-range.}
It is often expected that systems with short-range interactions all give rise to qualitatively similar behavior.
This then serves as a justification for considering mainly systems with nearest-neighbor interactions as a (hopefully generic) representative of this class.
As a specific example, let us briefly discuss one-dimensional systems with short-range interactions.
For those systems, the pressure as well as all correlation functions are always analytic functions of the interaction parameters.
A proof for interactions decaying at least exponentially fast was given by Ruelle~\cite{Ruelle-1975}, while the general case of interactions with a finite first moment was settled by Dobrushin~\cite{Dobrushin-1974} (see also~\cite{Cassandro+Olivieri-1981}).
This is known \emph{not} to be the case, at least for some systems, for interactions decaying even more slowly with the distance~\cite{Dyson-1969, Frohlich+Spencer-1982}.
\medskip
In the present work, we consider a variety of lattice systems with exponentially decaying interactions.
We show that, in contrast to the expectation above, such systems can display qualitatively different behavior \emph{depending on the properties of the sub-exponential corrections}.
Under weak assumptions, the correlation length associated with systems whose interactions decay faster than any exponential tends to zero as the temperature tends to infinity.
In systems with exponentially decaying interactions, however, this cannot happen: indeed, the rate of exponential decay of the 2-point function can never be larger than the rate of decay of the interaction.
This suggests that, as the temperature becomes very large, one of the two following scenarios should occur: either there is a temperature \(T_{\mathrm{sat}}\) above which the correlation length becomes constant, or the correlation length asymptotically converges, as \(T\to\infty\), to the inverse of the rate of exponential decay of the interaction.
Notice that when the first alternative happens, the correlation length cannot be an analytic function of the temperature.
It turns out that both scenarios described above are possible.
In fact, both can be realized in the same system by considering the 2-point function in different directions.
What determines whether saturation (and thus non-analyticity) occurs is the correction to the exponential decay of the interactions.
We characterize explicitly the prefactors that give rise to saturation of the correlation length as a function of the relevant parameter (inverse temperature \(\beta\), magnetic field \(h\), etc).
Our analysis also applies to one-dimensional systems, thereby showing that the correlation length of one-dimensional systems with short-range interactions can exhibit a non-analytic behavior, in sharp contrast with the standard analyticity results mentioned above.
We also relate the change of behavior of the correlation length to a violation of the mass gap condition in the theory of correlations developed in the early 20th Century by Ornstein and Zernike, and explain how this affects the behavior of the prefactor to the exponential decay of the 2-point function.
\subsection{Convention and notation}
In this paper, \(|\cdot|\) denotes some arbitrary norm on \(\mathbb{R}^d\), while we reserve \(\|\cdot\|\) for the Euclidean norm.
The unit sphere in the Euclidean norm is denoted \(\mathbb{S}^{d-1}\). Given \(x\in\mathbb{R}^d\), \([x]\) denotes the (unique) point in \(\mathbb{Z}^d\) such that \(x\in [x]+[-\frac12,\frac12)^d\).
To lighten notation, when an element \(x\in\mathbb{R}^d\) is treated as an element of \(\mathbb{Z}^d\), it means that \([x]\) is considered instead.
\subsection{Framework and models}
For simplicity, we shall always work on \(\mathbb{Z}^d\), but the methods developed in this paper should extend in a straightforward manner to more general settings.
We consider the case where the interaction strength between two lattice sites \(i,j\) is given by \(J_{ij}=J_{i-j}=\psi(i-j)e^{-|i-j|}\), where \(|\cdot|\) is some norm on \(\mathbb{R}^d\); we shall always assume that both \(\psi\) and \(|\cdot|\) are invariant under lattice symmetries.
We will suppose \(\psi(y) >0\) for all \(y\neq 0\) to avoid technical issues.
We moreover require that \(\psi\) is a sub-exponential correction, that is,
\begin{equation}
\lim_{|y|\to\infty} \frac{1}{|y|} \log(\psi(y)) =0.
\label{eq:psi_subexp}
\end{equation}
The approach developed in this work is rather general and will be illustrated on various lattice spin systems and percolation models.
We will focus on suitably defined \emph{2-point functions} \(G_\lambda(x,y)\) (sometimes truncated), where \(\lambda\) is some external parameter.
We define now the various models that will be considered and give, in each case, the corresponding definition of \(G_\lambda\) and of the parameter \(\lambda\).
The following notation will occur regularly:
\begin{gather*}
\bar{J} = \sum_{x\in\mathbb{Z}^d} J_{0x}, \quad P(x) = J_{0x}/\bar{J}.
\end{gather*}
By convention, we set \(\bar{J} = 1\) (and thus \(P(x) = J_{0x}\)), since the normalization can usually be absorbed into the inverse temperature or in a global scaling of the field, and assume that \(J_{00} =0\) (so \(\bar{J} = \sum_{x\in\mathbb{Z}^d\setminus\{0\}} J_{0x} = 1\)).
All models will come with a parameter (generically denoted \(\lambda\)). They also all have a natural transition point \(\lambda_{\mathrm{c}}\) (possibly at infinity) where the model ceases to be defined or undergoes a drastic change of behavior.
We will always work in a regime \(\lambda\in[0, \lambda_{\mathrm{exp}})\), where \(\lambda_{\mathrm{exp}}\leq \lambda_{\mathrm{c}}\) is the point at which (quasi-)long-range order occurs for the model (see~\eqref{eq:lambdaqlr_def}). For all models under consideration, it is conjectured that \(\lambda_{\mathrm{exp}} = \lambda_{\mathrm{c}}\).
\subsubsection{KRW model}
A walk is a finite sequence of vertices \(\gamma = (\gamma_0, \dots, \gamma_n)\) in \(\mathbb{Z}^d\). The length of \(\gamma\) is \(\abs{\gamma} =n\). Let \(\mathsf{W}(x,y)\) be the set of (variable-length) walks with \(\gamma_0=x,\gamma_{\abs{\gamma}} = y\).
The 2-point function of the killed random walk (KRW) is defined by
\begin{equation}
G^{\mathrm{KRW}}_{\lambda}(x,y) = \sum_{\gamma\in\mathsf{W}(x,y)} \prod_{i=1}^{\abs{\gamma}} \lambda J_{\gamma_{i-1} \gamma_i}.
\end{equation}
\(\lambda_{\mathrm{c}}\) is defined by
\begin{equation*}
\lambda_{\mathrm{c}}= \sup\Bsetof{\lambda\geq 0}{\sum_{x\in\mathbb{Z}^d} G^{\mathrm{KRW}}_{\lambda}(0,x) <\infty}.
\end{equation*}
Our choice of normalization for \(J\) implies that \(\lambda_{\mathrm{c}}=1\).
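Indeed, the normalization \(\bar J = 1\) makes the susceptibility of the \(\mathrm{KRW}\) a geometric series: summing over the endpoint decouples the increments,

```latex
\begin{equation*}
\sum_{x\in\mathbb{Z}^d} G^{\mathrm{KRW}}_{\lambda}(0,x)
=
\sum_{k\geq 0} \Bigl(\lambda \sum_{y\neq 0} J_{0,y}\Bigr)^{\!k}
=
\sum_{k\geq 0} \lambda^k
=
\frac{1}{1-\lambda},
\end{equation*}
```

which is finite precisely when \(\lambda<1\).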
\subsubsection{SAW model}
Self-avoiding walks are finite sequences of vertices \(\gamma = (\gamma_0, \dots, \gamma_n)\) in \(\mathbb{Z}^d\) with at most one instance of each vertex (that is, \(i\neq j\implies\gamma_i\neq\gamma_j\)).
Denote by \(\abs{\gamma} = n\) the length of the walk.
Let \(\mathsf{SAW}(x,y)\) be the set of (variable-length) SAWs with \(\gamma_0=x,\gamma_{\abs{\gamma}}=y\).
We then let
\begin{equation}
G^{\mathrm{SAW}}_{\lambda}(x,y) = \sum_{\gamma\in\mathsf{SAW}(x,y)} \prod_{i=1}^{\abs{\gamma}} \lambda J_{\gamma_{i-1} \gamma_i}.
\end{equation}
\(\lambda_{\mathrm{c}}\) is defined by
\begin{equation*}
\lambda_{\mathrm{c}}= \sup\Bsetof{\lambda\geq 0}{\sum_{x\in\mathbb{Z}^d} G^{\mathrm{SAW}}_{\lambda}(0,x) <\infty}.
\end{equation*}
Since \(G^{\mathrm{SAW}}_{\lambda}(x,y) \leq G^{\mathrm{KRW}}_{\lambda}(x,y)\), it follows that \(\lambda_{\mathrm{c}}^{\mathrm{SAW}}\geq \lambda_{\mathrm{c}}^{\mathrm{KRW}} = 1\).
\subsubsection{Ising model}
The Ising model at inverse temperature \(\beta\geq 0\) and magnetic field \(h\in\mathbb{R}\) on \(\mathbb{Z}^d\) is the probability measure on \(\{-1,+1\}^{\mathbb{Z}^d}\) given by the weak limit of the finite-volume measures (for \(\sigma\in\{-1,+1\}^{\Lambda_N}\) and \(\Lambda_N=[-N,N]^{d}\cap\mathbb{Z}^{d}\)).
\[
\mu^{\mathrm{Ising}}_{\Lambda_N;\beta,h}(\sigma) = \frac{1}{Z_{\Lambda_N;\beta,h}^{\mathrm{Ising}}} e^{-\beta\mathscr{H}_N(\sigma)},
\]
with Hamiltonian
\[
\mathscr{H}_N(\sigma) = -\sum_{\{i,j\}\subset\Lambda_N } J_{ij} \sigma_i\sigma_j - h\sum_{i\in\Lambda_N}\sigma_i
\]
and partition function \(Z_{\Lambda_N;\beta,h}^{\mathrm{Ising}}\).
The limit \(\mu^{\mathrm{Ising}}_{\beta,h}=\lim_{N\to\infty}\mu^{\mathrm{Ising}}_{\Lambda_N;\beta,h}\) is always well defined and agrees with the unique infinite-volume measure whenever \(h\neq 0\) or \(\beta<\beta_{\mathrm{c}}\), the critical point of the model.
For this model, we will consider two different situations, depending on which parameter we choose to vary:
\begin{itemize}
\item When \(h=0\), we consider
\begin{equation}
G^{\mathrm{Ising}}_{\beta}(x,y) = \mu^{\mathrm{Ising}}_{\beta,0}(\sigma_x\sigma_y)
\quad\text{ and }\quad
\lambda = \beta.
\end{equation}
In this case, \(\lambda_{\mathrm{c}} = \beta_{\mathrm{c}}(d)\) marks the boundary of the high-temperature regime (\(\lim_{\norm{x}\to\infty}\mu^{\mathrm{Ising}}_{\beta,0}(\sigma_0\sigma_x) =0\) for \(\beta< \beta_{\mathrm{c}}\) and is \(>0\) for \(\beta>\beta_{\mathrm{c}}\)).
\item When \(h>0\), we allow arbitrary values of \(\beta\geq 0\) and consider
\begin{equation}
G^{\mathrm{IPF}}_{\beta,h}(x,y) = \mu^{\mathrm{Ising}}_{\beta,h}(\sigma_x\sigma_y) - \mu^{\mathrm{Ising}}_{\beta,h}(\sigma_x)\mu^{\mathrm{Ising}}_{\beta,h}(\sigma_y)
\quad\text{ and }\quad
\lambda = e^{-h}.
\end{equation}
Of course, here \(\lambda_{\mathrm{c}}=1\).
The superscript \(\mathrm{IPF}\) stands for ``Ising with a Positive Field''.
\end{itemize}
\subsubsection{Lattice GFF}
The lattice Gaussian Free Field with mass \(m\geq 0\) on \(\mathbb{Z}^d\) is the probability measure on \(\mathbb{R}^{\mathbb{Z}^d}\) given by the weak limit of the finite-volume measures (for \(\sigma\in\mathbb{R}^{\Lambda_N}\))
\[
\dd\mu^{\mathrm{GFF}}_{m,\Lambda_N}(\sigma) = \frac{1}{Z_{m,\Lambda_N}^{\mathrm{GFF}}} e^{-\mathscr{H}_N(\sigma)-m^2\sum_{i\in\Lambda_N}\sigma_i^2 } \,\dd\sigma,
\]
with Hamiltonian
\[
\mathscr{H}_N(\sigma) = \sum_{\{i,j\}\subset\Lambda_N } J_{ij} (\sigma_i-\sigma_j)^2
\]
and partition function \(Z_{m,\Lambda_N}^{\mathrm{GFF}}\). Above, \(\dd\sigma\) denotes the Lebesgue measure on \(\mathbb{R}^{\Lambda_N}\).
The limit \(\mu^{\mathrm{GFF}}_{m}=\lim_{N\to\infty}\mu^{\mathrm{GFF}}_{m,\Lambda_N}\) exists and is unique for any \(m>0\).
When considering the measure at \(m=0\), we mean the measure \(\mu^{\mathrm{GFF}}=\lim_{m\downarrow 0} \mu^{\mathrm{GFF}}_{m}\).
The latter limit exists when \(d\geq 3\), but not in dimensions \(1\) and \(2\).
For this model, we define
\begin{equation}
G^{\mathrm{GFF}}_{(1+m^2)^{-1}}(x,y) = \mu^{\mathrm{GFF}}_{m}(\sigma_x\sigma_y),\quad \lambda = \frac{1}{1+ m^2}.
\end{equation}
The 2-point function of the GFF has a nice probabilistic interpretation: let \(P\) be the probability measure on \(\mathbb{Z}^d\) given by \(P(x)=J_{0x}\).
Let \(P_x^{m}=P_{J,x}^m\) denote the law of the random walk started at \(x\) with killing \(\frac{m^2}{1+m^2}\) and \textit{a priori} i.i.d.\xspace steps of law \(P\) and let \(E_x^m\) be the corresponding expectation.
Let \(X_i\) be the \(i\)th step and \(S_0=x,\ S_k= S_{k-1}+ X_k\) be the position of the walk at time \(k\).
Denote by \(T\) the time of death of the walk. One has \(P^m(T=k)=(1+m^2)^{-k} m^2\).
The 2-point function can then be expressed as
\begin{equation}\label{eq:GFF_RW_Rep_Cov}
G_{\lambda}^{\mathrm{GFF}}(x,z) = \frac{1}{1+m^2}E_x^m\Big[\sum_{k=0}^{T-1} \IF{S_k=z}\Big].
\end{equation}
Thanks to the normalization \(\bar{J}=1\), it is thus directly related to the \(\mathrm{KRW}\) via the identity
\begin{equation}\label{eq:GFF_to_KRW}
G_{\lambda}^{\mathrm{GFF}}(x,z) = \lambda G_{\lambda}^{\mathrm{KRW}}(x,z).
\end{equation}
In particular, \(\lambda_{\mathrm{c}} = 1\) (which corresponds to \(m=0\)) and
\(\sup_{x\in\mathbb{Z}^d} G_{\lambda}^{\mathrm{GFF}}(0,x) < \infty\) for all \(\lambda \in [0,\lambda_{\mathrm{c}})\) in any dimension.
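For the reader's convenience, here is the short computation behind~\eqref{eq:GFF_to_KRW}: the killing is independent of the trajectory, with \(P^m(T>k) = (1+m^2)^{-k}\), so, writing \(P_x\) for the law of the walk without killing,

```latex
\begin{equation*}
E_x^m\Big[\sum_{k=0}^{T-1} \IF{S_k=z}\Big]
=
\sum_{k\geq 0} P^m(T>k)\, P_x(S_k = z)
=
\sum_{k\geq 0} \lambda^k \sum_{\substack{\gamma\in\mathsf{W}(x,z)\\ \abs{\gamma}=k}} \prod_{i=1}^{k} J_{\gamma_{i-1}\gamma_i}
=
G^{\mathrm{KRW}}_{\lambda}(x,z),
\end{equation*}
```

and multiplying by the prefactor \((1+m^2)^{-1}=\lambda\) in~\eqref{eq:GFF_RW_Rep_Cov} yields~\eqref{eq:GFF_to_KRW}.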
\subsubsection{Potts model and FK percolation}
The \(q\)-state Potts model at inverse temperature \(\beta\geq 0\) on \(\mathbb{Z}^d\) with free boundary condition is the probability measure on \(\{1, 2, \dots, q\}^{\mathbb{Z}^d}\) (\(q\geq 2\)) given by the weak limit of the finite-volume measures (for \(\sigma\in\{1, \dots, q\}^{\Lambda_N}\))
\[
\mu^{\mathrm{Potts}}_{\Lambda_N;\beta,q}(\sigma) = \frac{1}{Z_{\Lambda_N;\beta,q}^{\mathrm{Potts}}} e^{-\beta\mathscr{H}_N(\sigma)}
\]
with Hamiltonian
\[
\mathscr{H}_N(\sigma) = -\sum_{\{i,j\}\subset\Lambda_N } J_{ij} \IF{\sigma_i=\sigma_j}
\]
and partition function \(Z_{\Lambda_N;\beta,q}^{\mathrm{Potts}}\).
We write \(\mu^{\mathrm{Potts}}_{\beta,q} =\lim_{N\to\infty} \mu^{\mathrm{Potts}}_{\Lambda_N;\beta,q}\); this limit can be shown to exist.
From now on, we omit \(q\) from the notation, as in our study \(q\) remains fixed, while \(\beta\) varies.
For this model, we consider
\begin{equation}
G^{\mathrm{Potts}}_{\beta}(x,y) = \mu^{\mathrm{Potts}}_{\beta}(\IF{\sigma_x=\sigma_y})- 1/q
\quad\text{ and }\quad
\lambda = \beta.
\end{equation}
As in the Ising model, we are interested in the regime \(\beta < \beta_{\mathrm{c}}\), where \(\beta_{\mathrm{c}}\) is the inverse temperature above which long-range order occurs (that is, \(\inf_{x}G^{\mathrm{Potts}}_{\beta}(0,x)>0\) for all \(\beta>\beta_{\mathrm{c}}\), see below). We thus again have \(\lambda_{\mathrm{c}}=\beta_{\mathrm{c}}(q,d)\).
One easily checks that the Ising model (with \(h=0\)) at inverse temperature \(\beta/2\) corresponds to the \(2\)-state Potts model at inverse temperature \(\beta\).
Intimately related to the Potts model is the FK percolation model.
The latter is a measure on edge sub-graphs of \((\mathbb{Z}^d, E_d)\), where \(E_d=\bigl\{\{i,j\}\subset\mathbb{Z}^d\bigr\}\), depending on two parameters \(\beta\in\mathbb{R}_{\geq 0}\) and \(q\in\mathbb{R}_{>0}\), obtained as the weak limit of the finite-volume measures
\begin{equation}
\Phi^{\mathrm{FK}}_{\Lambda_N;\beta,q}(\omega) = \frac{1}{Z^{\mathrm{FK}}_{\Lambda_N;\beta,q}} \prod_{\{i,j\}\in\omega}(e^{\beta J_{ij}}-1) q^{\kappa(\omega)},
\end{equation}
where \(\kappa(\omega)\) is the number of connected components in the graph with vertex set \(\Lambda_N\) and edge set \(\omega\) and \(Z^{\mathrm{FK}}_{\Lambda_N;\beta,q}\) is the partition function.
In this paper, we always assume that \(q\geq 1\). We use the superscript \(\mathrm{Bern}\) for the case \(q=1\) (Bernoulli percolation).
When \(q\in\mathbb{N}\) with \(q\geq 2\), one has the correspondence
\begin{equation}
\label{eq:Potts_FK_Corresp}
\mu^{\mathrm{Potts}}_{\beta,q}(\IF{\sigma_x=\sigma_y})- \frac{1}{q} = \frac{q-1}{q} \, \Phi^{\mathrm{FK}}_{\beta,q}(x\leftrightarrow y).
\end{equation}
For the FK percolation model, we consider
\begin{equation}
G^{\mathrm{FK}}_{\beta}(x,y) = \Phi^{\mathrm{FK}}_{\beta,q}(x\leftrightarrow y)
\quad\text{ and }\quad
\lambda = \beta,
\end{equation}
where \(\{x\leftrightarrow y\}\) is the event that \(x\) and \(y\) belong to the same connected component.
As for the Potts model, \(\lambda_{\mathrm{c}}=\beta_{\mathrm{c}}(q,d)\); here, this corresponds to the value at which the percolation transition occurs.
\subsubsection{XY model}
The XY model at inverse temperature \(\beta\geq 0\) on \(\mathbb{Z}^d\) is the probability measure on \((\mathbb{S}^{1})^{\mathbb{Z}^d}\) given by the weak limit of the finite-volume measures (for \(\theta\in[0,2\pi)^{\Lambda_N}\))
\[
\dd\mu^{\mathrm{XY}}_{\Lambda_N;\beta}(\theta) = \frac{1}{Z_{\Lambda_N;\beta}^{\mathrm{XY}}} e^{-\beta\mathscr{H}_N(\theta)}\,\dd\theta
\]
with Hamiltonian
\[
\mathscr{H}_N(\theta) = -\sum_{\{i,j\}\subset\Lambda_N } J_{ij}\cos(\theta_i-\theta_j)
\]
and partition function \(Z_{\Lambda_N;\beta}^{\mathrm{XY}}\).
In this case, we consider
\begin{equation}
G_{\beta}^{\mathrm{XY}}(x,y) = \mu^{\mathrm{XY}}_{\beta}\big(\cos(\theta_x-\theta_y)\big)
\quad\text{ and }\quad
\lambda = \beta.
\end{equation}
In dimensions \(1\) and \(2\), \(\lambda_{\mathrm{c}}\) is the point at which quasi-long-range order occurs (failure of exponential decay; in particular, \(\lambda_{\mathrm{c}} =\infty\) when \(d=1\)).
In dimension \(d\geq 3\), we set \(\lambda_{\mathrm{c}}=\beta_{\mathrm{c}}^{\mathrm{XY}}(d)\), the inverse temperature above which long-range order occurs (spontaneous symmetry breaking).
\subsection{Inverse correlation length}
To each model introduced in the previous subsection, we have associated a suitable 2-point function \(G_\lambda\) depending on a parameter \(\lambda\) (for instance, \(\lambda=(1+m^2)^{-1}\) for the GFF and \(\lambda=\beta\) for the Potts model).
Each of these 2-point functions gives rise to an \emph{inverse correlation length} associated to a direction \(s\in\mathbb{S}^{d-1}\) via
\begin{equation*}
\nu_s(\lambda) = -\lim_{n\to\infty} \frac{1}{n} \log G_{\lambda}(0,ns) .
\end{equation*}
This limit can be shown to exist in all the models considered above in the regime \(\lambda\in[0,\lambda_{\mathrm{c}})\).
When highlighting the model under consideration, we shall write, for example, \(\nu_s^{\mathrm{Ising}}(\lambda)\).
We also define \(\lambda_{\mathrm{\mathrm{exp}}}\) as
\begin{equation}
\label{eq:lambdaqlr_def}
\lambda_{\mathrm{\mathrm{exp}}} = \min\bigl(\lambda_{\mathrm{c}},\inf\setof{\lambda\geq 0}{\inf_s \nu_s(\lambda)=0}\bigr).
\end{equation}
(Let us note that the infimum over \(s\) is actually not required in this definition, as follows from Lemma~\ref{lem:rate_equiv_directions} below.) It marks the boundary of the regime in which \(\nu\) is non-trivial. It is often convenient to extend the function \(s\mapsto\nu_s(\lambda)\) to a function on \(\mathbb{R}^d\) by positive homogeneity. In all the models we consider, the resulting function is convex and defines a norm on \(\mathbb{R}^d\) whenever \(\lambda<\lambda_{\exp}\).
These and further basic properties of the inverse correlation length are discussed in Section~\ref{sec:BasicPropICL}.
\smallskip
The dependence of \(\nu_s(\lambda)\) on the parameter \(\lambda\) is the central topic of this paper.
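To make these definitions concrete, here is a small numerical sketch (Python with NumPy; an illustration only, not part of any proof) estimating \(\nu_s(\lambda)\) for the one-dimensional KRW with \(J_{xy}=c\,|y-x|^{-3}e^{-|y-x|}\): the 2-point function \(G^{\mathrm{KRW}}_\lambda=\sum_{k\geq0}\lambda^k J^k\) is computed on a truncated lattice and the local exponential decay rate is read off. Since \(\psi(y)=|y|^{-3}\) is summable, the measured rate stays close to \(|s|=1\) at small \(\lambda\), in line with the saturation phenomenon studied in this paper.

```python
import numpy as np

# 1d killed random walk on {-N, ..., N} with step weights
# J(x, y) = c * |y - x|^{-3} * exp(-|y - x|), with c chosen so that the
# infinite-volume row sum is 1 (the normalization \bar{J} = 1).
N = 300
sites = np.arange(-N, N + 1)
alpha = 3.0                                   # psi(y) = |y|^{-alpha}, summable

r = np.arange(1, 5000)
c = 1.0 / (2.0 * np.sum(r**-alpha * np.exp(-r)))

D = np.abs(sites[:, None] - sites[None, :]).astype(float)
Dsafe = np.where(D > 0, D, 1.0)               # avoid 0**(-alpha) on the diagonal
J = np.where(D > 0, c * Dsafe**-alpha * np.exp(-Dsafe), 0.0)

# G_lambda(0, .) = sum_k lambda^k J^k e_0, via the positive Neumann iteration
# g <- e_0 + lambda * J g (no cancellation, so entries of order e^{-n} are
# computed with good relative accuracy).
lam = 0.1
e0 = np.zeros(len(sites)); e0[N] = 1.0
g = np.zeros(len(sites))
for _ in range(400):
    g = e0 + lam * (J @ g)

# Local decay rate: nu ~ log(G(0, n1) / G(0, n2)) / (n2 - n1).
n1, n2 = 80, 120
nu = np.log(g[N + n1] / g[N + n2]) / (n2 - n1)

# Saturation regime: G(0, n) behaves like J(0, n) ~ n^{-3} e^{-n},
# so the measured rate is 1 + O((log n)/n), slightly above |s| = 1.
assert 0.9 < nu < 1.1
```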
\subsection{Mass gap, a comment on the Ornstein--Zernike theory}\label{sec:RemOZ}
For off-critical models, the Ornstein--Zernike (OZ) equation is an identity satisfied by \(G_{\lambda}\), first postulated by Ornstein and Zernike (initially, for high-temperature gases):
\begin{equation}
\label{eq:OZ}
G_{\lambda}(0,x) = D_{\lambda}(0,x) + \sum_{y} G_{\lambda}(y,x)D_{\lambda}(0,y) ,
\end{equation}
where \(D_{\lambda}\) is the direct correlation function (this equation can be seen as \emph{defining} \(D_{\lambda}\)), which is supposed to behave like the interaction: \(D_{\lambda}(x,y) \simeq J_{xy}\).
On the basis of~\eqref{eq:OZ}, Ornstein and Zernike were able to predict the sharp asymptotic behavior of \(G_{\lambda}\), provided that the following \emph{mass gap hypothesis} holds:
there exists \(c=c(\lambda)>0\) such that
\begin{equation*}
D_{\lambda}(0,x) \leq e^{-c|x|} G_{\lambda}(0,x).
\end{equation*}
This hypothesis is supposed to hold in a vast class of high-temperature systems with finite correlation length.
One of the goals of the present work is to show that this hypothesis is doomed to \emph{fail} in certain simple models of this type at very high temperature and to provide some necessary conditions for the presence of the mass gap.
To be more explicit, in all models considered, we have an inequality of the form \(G_{\lambda}(0,x)\geq C J_{0x} = C\psi(x)e^{-|x|}\).
In particular, this implies that \(\nu_s \leq |s|\) for all \(s\in\mathbb{S}^{d-1}\).
We will study conditions on \(\psi\) and \(\lambda\) under which the inequality is either strict (``mass gap'') or an equality (saturation).
We will also be concerned with the asymptotic behavior of \(G_{\lambda}\) in the latter case, while the ``mass gap'' pendant of the question will only be discussed for the simplest case of \(\mathrm{KRW}\), the treatment of more general systems being postponed to a forthcoming paper.
A useful consequence of the OZ-equation~\eqref{eq:OZ}, which is at the heart of the derivation of the OZ prefactor, is the following (formal) identity
\begin{equation}
\label{eq:OZ_paths}
G_{\lambda}(0,x) = \sum_{\gamma\in\mathsf{W}(0,x)}\prod_{i=1}^{|\gamma|} D_{\lambda}(\gamma_{i-1},\gamma_i).
\end{equation}
One can see Simon--Lieb-type inequalities
\begin{equation*}
G_{\lambda}(0,x) \leq D_{\lambda}(0,x) + \sum_{y} G_{\lambda}(y,x)D_{\lambda}(0,y),
\end{equation*}
as approaching the OZ equation. In particular, this inequality with \(D_{\lambda}(0,y)\simeq J_{0y}\) is directly related to our assumption~\ref{hyp:weak_SL} below.
\subsection{A link with condensation phenomena}
Recall that the (probabilistic version of) condensation phenomena can be summarized as follows:
take a family of real random variables \(X_1, \ldots, X_N\) (with \(N\) possibly random) and constrain their sum to take a value much larger than \(E[\sum_{k=1}^N X_k]\).
Condensation occurs if most of the deviation is realized by a single one of the \(X_k\)s.
In the case of condensation, large deviation properties of the sum are ``equivalent'' to those of the maximum (see, for instance, \cite{Godreche-2019} and references therein for additional information).
In our case, one can see the failure of the mass gap condition as a condensation transition: suppose the OZ equation holds.
\(G(0,x)\) is then represented as a sum over paths of some path weights.
The exponential cost of a path going from \(0\) to \(x\) is always at least of the order \(|x|\).
Once restricted to paths with exponential contribution of this order, the geometry of typical paths will be governed by a competition between entropy (combinatorics) and the sub-exponential part \(\psi\) of the step weights.
In the mass gap regime, typical paths consist of a number of microscopic steps growing linearly with \(|x|\): in this situation, entropy wins over energy and the global exponential cost per unit length is decreased from \(|s|\) to some \(\nu_s<|s|\).
One recovers then the behavior of \(G\) predicted by Ornstein and Zernike.
In contrast, in the saturated regime, typical paths have one giant step (a condensation phenomenon) and the behavior of \(G\) is governed by such paths, which leads to \(G(0,x)\simeq D(0,x)\simeq J_{0x}\).
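This single-big-jump picture is easy to visualize on a toy example, independent of the models of this paper: condition a sum of i.i.d.\ heavy-tailed random variables on being atypically large, and observe that the maximum carries most of the excess. A small simulation (Python with NumPy; illustrative only):

```python
import numpy as np

# Toy condensation experiment: heavy-tailed steps, conditioned on a large sum.
rng = np.random.default_rng(0)

n, samples = 50, 100_000
a = 1.5                                        # Pareto tail exponent
X = 1.0 + rng.pareto(a, size=(samples, n))     # P(X > x) = x^{-a}, x >= 1

S = X.sum(axis=1)
M = X.max(axis=1)

threshold = 3.0 * S.mean()                     # an atypically large value for the sum
cond = S > threshold

share_cond = (M[cond] / S[cond]).mean()        # share of the sum carried by the max
share_typ = (M / S).mean()                     # same share, without conditioning

# Conditioned on the rare event, a single step realizes most of the deviation.
assert cond.sum() > 50
assert share_cond > 0.45
assert share_typ < 0.35
```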
\subsection{Assumptions}
To avoid repeating the same argument multiple times, we shall make some assumptions on \(G_\lambda\) and prove the desired results based on those assumptions only (basically, we will prove the relevant claims for either \(\mathrm{KRW}\) or \(\mathrm{SAW}\) and the assumptions allow a comparison with those models).
Proofs (or reference to proofs) that the required properties hold for the different models we consider are collected in Appendix~\ref{app:Properties}.
\medskip
\begin{enumerate}[label={\ensuremath{\mathrm{[A_\arabic*]}}}, start=0]
\item \label{hyp:G_Bounded_Pos_nu_pos}
For any \(\lambda\in [0, \lambda_{\mathrm{c}})\), \(G_{\lambda}(x,y)\geq 0\) for any \(x, y\in \mathbb{Z}^d\), and \(\sup_{x\in\mathbb{Z}^d} G_{\lambda}(0,x) < \infty\).
\item \label{hyp:sub_mult}
For any \(\lambda \in [0, \lambda_{\mathrm{c}})\), there exists \(a_{\lambda} > 0\) such that, for any \(x, y, z\in\mathbb{Z}^d\),
\begin{equation}
\label{eq:G_sub_mult}
G_{\lambda}(x,y) \geq a_{\lambda} G_{\lambda}(x,z) G_{\lambda}(z,y).
\end{equation}
This property holds at \(\lambda_{\mathrm{c}}\) if \(\sup_{x}G_{\lambda_{\mathrm{c}}}(0,x)<\infty\).
\item \label{hyp:left_cont}
For any \(x, y\in \mathbb{Z}^d\), \(\lambda\mapsto G_{\lambda}(x,y)\) is non-decreasing and left-continuous on \([0, \lambda_{\mathrm{c}})\). This continuity extends to \([0,\lambda_{\mathrm{c}}]\) if \(G_{\lambda_{\mathrm{c}}}(x,y)\) is well defined.
\item \label{hyp:weak_SL}
There exists \(\alpha\geq 0\) such that, for any \(0\leq \lambda < \lambda_{\mathrm{c}}\), there exists \(C\geq 0\) such that for any \(x, y\in\mathbb{Z}^d\),
\begin{equation}
\label{eq:weak_SL}
G_{\lambda}(x,y) \leq C G_{\alpha\lambda}^{\mathrm{KRW}}(x,y).
\end{equation}
\item \label{hyp:J_path_lower_bnd}
For any \(\lambda \in [0, \lambda_{\mathrm{c}})\), there exist \(c_\lambda>0\) and \(C_{\lambda}>0\) such that, for any collection \(\Gamma\subset\mathsf{SAW}(x,y)\), one has
\begin{equation}\label{eq:J_path_lower_bnd}
G_{\lambda}(x,y) \geq c_\lambda \sum_{\gamma\in\Gamma}(C_{\lambda})^{\abs{\gamma}}\prod_{k=1}^{\abs{\gamma}} J_{\gamma_{k-1} \gamma_k}.
\end{equation}
\end{enumerate}
\medskip
Our choice of \(\lambda_{\mathrm{c}}\) and of \(G_{\lambda}\) ensures that~\ref{hyp:G_Bounded_Pos_nu_pos} is always satisfied.
Assumption~\ref{hyp:sub_mult} holds as soon as the model enjoys some GKS or FKG type inequalities. Assumption~\ref{hyp:left_cont} is often a consequence of the monotonicity of the Gibbs state with respect to \(\lambda\).
The existence of a well-defined high-temperature regime (or rather the \emph{proof} of its existence) depends on this monotonicity.
Assumption~\ref{hyp:weak_SL} is directly related to the Ornstein--Zernike equation~\eqref{eq:OZ} in the form given in~\eqref{eq:OZ_paths}. It is easily deduced from a weak form of Simon--Lieb type inequality, see Section~\ref{sec:RemOZ}.
Assumption~\ref{hyp:J_path_lower_bnd} may seem to be a strong requirement but is usually a consequence of a path representation of correlation functions, some form of which is available for vast classes of systems.
\bigskip
Part of our results will also require the following additional regularity assumption on the prefactor \(\psi\):
\begin{enumerate}[label={\ensuremath{\mathrm{[H_\arabic*]}}}, start=0]
\item \label{hyp:PsiQuasiIsotropic}
There exist \(C_\psi^+, C_\psi^- > 0\) and \(\psi_0:\mathbb{N}_{>0}\to\mathbb{R}\) such that, for all \(y\in\mathbb{Z}^d\setminus\{0\}\),
\[
C^-_\psi \psi_0(\normI{y}) \leq \psi(y) \leq C^+_\psi \psi_0(\normI{y}).
\]
\end{enumerate}
\subsection{Surcharge function}
Our study has two ``parameters'': the prefactor \(\psi\), and the norm \(|\cdot|\).
It will be convenient to introduce a few quantities associated to the latter.
First, two convex sets are important: the unit ball \(\mathscr{U}\subset \mathbb{R}^d\) associated to the norm \(|\cdot|\) and the corresponding \emph{Wulff shape}
\[
\mathscr{W} = \setof{t\in\mathbb{R}^d}{\forall x\in\mathbb{R}^d,\, t\cdot x \leq |x|}.
\]
Given a direction \(s\in \mathbb{S}^{d-1}\), we say that the vector \(t\in\mathbb{R}^d\) is dual to \(s\) if
\(t\in\partial\mathscr{W}\) and \(t\cdot s = |s|\). A direction \(s\) possesses a unique dual vector \(t\) if and only if \(\mathscr{W}\) does not possess a facet with normal \(s\). Equivalently, there is a unique dual vector when the unit ball \(\mathscr{U}\) has a unique supporting hyperplane at \(s/|s|\). (See Fig.~\ref{fig:duality} for an illustration.)
\begin{figure}[ht]
\includegraphics{UnitBallL1.pdf}
\hspace*{1cm}
\includegraphics{WulffL1.pdf}
\hspace*{1cm}
\includegraphics{WulffL1bis.pdf}
\caption{Left: The unit ball for the norm \(|\cdot|=\normI{\cdot}\). Middle: the corresponding Wulff shape \(\mathscr{W}\) with two vectors \(t_1\) and \(t_2\) dual to \(s=(1,0)\). Right: the set \(\mathscr{W}\) with the unique vector \(t\) dual to \(s=\frac{1}{\sqrt{5}}(2,1)\).}
\label{fig:duality}
\end{figure}
The \emph{surcharge function} associated to a dual vector \(t\in\partial\mathscr{W}\) is then defined by
\begin{equation*}
\mathfrak{s}_t(x) = |x|- x\cdot t.
\end{equation*}
It immediately follows from the definition that \(\mathfrak{s}_t(x)\geq 0\) for all \(x\in\mathbb{Z}^d\) and \(\mathfrak{s}_t(s)=0\) if \(t\) is a vector dual to \(s\).
The surcharge function plays a major role in the Ornstein--Zernike theory as developed in~\cite{Campanino+Ioffe-2002,Campanino+Ioffe+Velenik-2003,Campanino+Ioffe+Velenik-2008}.
Informally, \(\mathfrak{s}_t(s')\) measures the additional cost (per unit length) that a step in direction \(s'\) incurs when your goal is to move in direction \(s\).
As far as we know, it first appeared, albeit in a somewhat different form, in~\cite{Alexander-1990}.
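To illustrate these definitions, the following sketch (plain Python; illustrative only) takes \(|\cdot|=\normI{\cdot}\) on \(\mathbb{R}^2\), as in Fig.~\ref{fig:duality}. The Wulff shape is then the \(\ell^\infty\) unit ball, the direction \(s=\frac{1}{\sqrt5}(2,1)\) has the unique dual vector \(t=(1,1)\), and the surcharge \(\mathfrak{s}_t(x)=\normI{x}-x\cdot t\) vanishes exactly on the closed positive quadrant.

```python
import itertools
import math

# Surcharge function s_t(x) = |x|_1 - x . t for the l^1 norm on Z^2.
def norm1(x):
    return abs(x[0]) + abs(x[1])

def surcharge(t, x):
    return norm1(x) - (x[0] * t[0] + x[1] * t[1])

s = (2 / math.sqrt(5), 1 / math.sqrt(5))
t = (1.0, 1.0)   # the unique vector dual to s (a corner of the l^infty ball)

# t lies on the boundary of the Wulff shape (|t|_inf = 1) and t . s = |s|_1.
assert max(abs(t[0]), abs(t[1])) == 1.0
assert abs((t[0] * s[0] + t[1] * s[1]) - norm1(s)) < 1e-12

# The surcharge is non-negative, and vanishes exactly on the positive quadrant.
box = range(-5, 6)
for x in itertools.product(box, box):
    assert surcharge(t, x) >= 0
    assert (surcharge(t, x) == 0) == (x[0] >= 0 and x[1] >= 0)
```

In particular, for this direction the dominant contribution to sums of the form \(\sum_y \psi(y)e^{-\mathfrak{s}_t(y)}\) appearing below comes from the positive quadrant, where the surcharge vanishes.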
\subsection{Quasi-isotropy}\label{sec:QuasiIsotropy}
Some of our results hinge on a further regularity property of the norm \(|\cdot|\).
Let \(s\in\mathbb{S}^{d-1}\) and \(t\) be a dual vector.
Write \(s_0 = s/|s|\in\partial\mathscr{U}\) and \(\hat{t} = t/\norm{t} \in \mathbb{S}^{d-1}\). Let \(T_{s_0}\mathscr{U}\) be the tangent hyperplane to \(\mathscr{U}\) at \(s_0\) with normal \(\hat{t}\) (seen, as usual, as a vector space). It is always possible to choose the dual vector \(t\) such that the following holds (we shall call such a \(t\) \emph{admissible}\footnote{When there are multiple tangent hyperplanes to \(\partial\mathscr{U}\) at \(s_0\), convexity and symmetry imply that all non-extremal elements of the normal cone are admissible.}).
There exist \(\epsilon > 0\) and a neighborhood \(\mathscr{N}\) of \(s_0\) such that \(\partial\mathscr{U}\cap\mathscr{N}\) can be parametrized as (see Fig.~\ref{fig:paramU})
\[
\partial\mathscr{U} \cap \mathscr{N} = \setof{s_0 + \tau v - f(\tau v)\hat{t}}{v\in T_{s_0}\mathscr{U}\cap\mathbb{S}^{d-1},\, |\tau|<\epsilon},
\]
for some convex nonnegative function \(f:T_{s_0}\mathscr{U} \to \mathbb{R}\) satisfying \(f(0)=0\).
\begin{figure}
\centering
\includegraphics{parametrization.pdf}
\caption{The local parametrization of \(\partial\mathscr{U}\) in a neighborhood of \(s_0\).}
\label{fig:paramU}
\end{figure}
\medskip
We will say that \(\partial\mathscr{U}\) is \emph{quasi-isotropic} in direction \(s\) if the qualitative behavior of \(f\) is the same in all directions \(v\): there exist \(c_+\geq c_- > 0\) and a non-decreasing non-negative convex function \(g\) such that, for all \(v\in T_{s_0}\mathscr{U}\cap\mathbb{S}^{d-1}\) and all \(\tau\in (0, \epsilon)\),
\begin{equation}\label{eq:QuasiIsotropy}
c_+ g(\tau)\geq f(\tau v) \geq c_- g(\tau) .
\end{equation}
Taking \(\mathscr{N}\) and \(\epsilon\) smaller if necessary, we can further assume that either \(g(\tau)>0\) for all \(\tau\in (0, \epsilon)\), or \(g(\tau)\equiv 0\) on \((0, \epsilon)\) (the latter occurs when \(s_0\) is in the ``interior'' of a facet of \(\partial\mathscr{U}\)).
\medskip
A sufficient, but by no means necessary, condition ensuring that quasi-isotropy is satisfied in all directions \(s\) is that the unit ball \(\mathscr{U}\) has a \(C^2\) boundary with everywhere positive curvature. Other examples include, for instance, all \(\ell^p\)-norms, \(1\leq p\leq\infty\).
\subsection{Main results: discussion}
We first informally discuss our results. Precise statements can be found in Theorem~\ref{thm:main} below.
It immediately follows from~\ref{hyp:J_path_lower_bnd} that
\begin{equation}\label{eq:TrivialUpperBoundOnICL}
\nu_s(\lambda) \leq |s| .
\end{equation}
We say that there is \emph{saturation} at \(\lambda\) in the direction \(s\) if \(\nu_s(\lambda) = |s|\).
The function \(\lambda\mapsto \nu_s(\lambda)\) is non-increasing (see~\eqref{eq:nu_monotonicity}) and \(\lim_{\lambda\searrow 0} \nu_s(\lambda) = |s|\) (see Lemma~\ref{lem:lambda_equal_zero}).
We can thus define
\[
\lambda_{\mathrm{sat}}(s) = \sup\setof{\lambda}{\nu_s(\lambda) = |s|}.
\]
In several cases, we will be able to prove that \(\lambda_{\mathrm{sat}}(s)<\lambda_{\mathrm{\mathrm{exp}}}\).
The main question we address in the present work is whether \(\lambda_{\mathrm{sat}}(s) > 0\).
Note that, when \(\lambda_{\mathrm{sat}} \in (0,\lambda_{\mathrm{\mathrm{exp}}})\), the function \(\lambda\mapsto\nu_s(\lambda)\) is not analytic in \(\lambda\).
Our main result can then be stated as follows: provided that suitable subsets of \ref{hyp:G_Bounded_Pos_nu_pos}--\ref{hyp:J_path_lower_bnd} and~\ref{hyp:PsiQuasiIsotropic} hold and \(\partial\mathscr{U}\) is quasi-isotropic in direction \(s\in\mathbb{S}^{d-1}\),
\[
\lambda_{\mathrm{sat}}(s) > 0 \quad\Leftrightarrow\quad \sum_{y\in\mathbb{Z}^d} \psi(y)e^{-\mathfrak{s}_t(y)} < \infty ,
\]
where \(t\) is an arbitrary vector dual to \(s\).
\smallskip
What happens when quasi-isotropy fails in direction \(s\) is still mostly open; a discussion can be found in Section~\ref{sec:FailureQuasiIsotropy}.
\begin{remark}
In a sense, exponentially decaying interactions are ``critical'' regarding the presence of a mass gap regime/condensation phenomenon.
Indeed, on the one hand, any interaction decaying slower than exponential will lead to absence of exponential decay (e.g., \(G_\lambda(0,x)\geq C_{\lambda} J_{0x}\) by~\ref{hyp:J_path_lower_bnd} in all the models considered here).
This is a ``trivial'' failure of mass gap, as the model is not massive.
Moreover, the behavior \(G_\lambda(0,x)\asymp J_{0x}\) at any value of \(\lambda\) was proven in some cases: see \cite{Newman+Spohn-1998} for results on the Ising model and \cite{Aoun-2020} for the Potts model.
On the other hand, interactions decaying faster (that is, such that \(\sup_{x\in\mathbb{Z}^d} J_{0x}e^{C\norm{x}} < \infty\) for all \(C>0\)) always lead to the presence of a mass gap (finite-range type behavior).
Changing the prefactor to exponential decay is thus akin to exploring the ``near-critical'' regime.
\end{remark}
\subsection{Main Theorems}\label{sec:MainTheorems}
We gather here the results that are proved in the remainder of the paper.
Given a norm \(|\cdot|\) and \(s\in\mathbb{S}^{d-1}\), fix a vector \(t\) dual to \(s\) and define
\begin{equation}
\tilde{\Xi}(|\cdot|, \psi, t) = \sum_{x\in\mathbb{Z}^d\setminus \{0\}} \psi(x) e^{-\mathfrak{s}_t(x)}.
\end{equation}
Our first result provides criteria to determine whether \(\lambda_{\mathrm{sat}}>0\).
\begin{theorem}
\label{thm:main}
Suppose~\ref{hyp:G_Bounded_Pos_nu_pos},~\ref{hyp:sub_mult},~\ref{hyp:left_cont},~\ref{hyp:weak_SL},~\ref{hyp:J_path_lower_bnd} are satisfied. Let \(s\in\mathbb{S}^{d-1}\). Then,
\begin{itemize}
\item If there exists \(t\) dual to \(s\) with \(\tilde{\Xi}(|\cdot|, \psi, t)<\infty\), there exists \(0<\lambda_0\leq\lambda_{\mathrm{\mathrm{exp}}}\) such that \(\nu_{s}(\lambda)=|s|\) for any \(\lambda<\lambda_0\).
\item Assume~\ref{hyp:PsiQuasiIsotropic}. If there exists an admissible \(t\) dual to \(s\) such that \(\partial\mathscr{U}\) is quasi-isotropic in direction \(s\) and \(\tilde{\Xi}(|\cdot|, \psi, t)=\infty\), then \(\nu_{s}(\lambda)<|s|\) for any \(\lambda\in(0, \lambda_{\mathrm{\mathrm{exp}}})\).
\end{itemize}
In particular, when \(\tilde{\Xi}(|\cdot|, \psi, t)<\infty\) for some \(t\) dual to \(s\), there exists \(\lambda_{\mathrm{sat}}\in (0,\lambda_{\mathrm{\mathrm{exp}}}]\) such that \(\nu_{s}(\lambda) =|s|\) when \(\lambda<\lambda_{\mathrm{sat}}\) and \(\nu_{s}(\lambda) <|s|\) when \(\lambda>\lambda_{\mathrm{sat}}\).
\end{theorem}
\begin{corollary}
\label{cor:main}
The claim in Theorem~\ref{thm:main} applies to all the models considered in this paper (that is, \(\mathrm{KRW},\mathrm{SAW}, \mathrm{Ising}, \mathrm{IPF}, \mathrm{FK}, \mathrm{Potts}, \mathrm{GFF}, \mathrm{XY}\)).
\end{corollary}
\begin{remark}\label{rem:DirectionDepSaturation}
Whether \(\lambda_{\mathrm{sat}}(s)>0\) depends in general on the direction \(s\).
To see this, consider the case \(|\cdot|= \normIV{\cdot}\) on \(\mathbb{Z}^2\) with \(\psi(x) = \norm{x}^{-\alpha}\), where \(7/4 \geq \alpha > 3/2\).
In order to determine whether \(\lambda_{\mathrm{sat}}(s)>0\), it will be convenient to use the more explicit criterion derived in Lemma~\ref{lem:ExplicitCond}.
The latter relies on the local parametrization of \(\partial\mathscr{U}\), as described in Section~\ref{sec:QuasiIsotropy}.
Below, we use the notation introduced in the latter section.
In particular, \(\lambda_{\mathrm{sat}}(s)>0\) if and only if
\[
\sum_{\ell\geq 1} \psi_0(\ell) (\ell g^{-1}(1/\ell))^{d-1} < \infty ,
\]
where we can take \(\psi_0(\ell) = \ell^{-\alpha}\) (remember condition~\ref{hyp:PsiQuasiIsotropic}).
On the one hand, let us first consider the direction \(s=(0,1)\).
The corresponding dual vector is \(t=s\).
In this case, one finds that \(f(\tau) = \frac14\tau^4 + \mathsf{O}(\tau^8)\).
We can thus take \(g(\tau) = \tau^4\).
In particular,
\[
\sum_{\ell\geq 1} \psi_0(\ell) (\ell g^{-1}(1/\ell))^{d-1}
= \sum_{\ell\geq 1} \ell^{3/4-\alpha}
= \infty ,
\]
so that \(\lambda_{\mathrm{sat}}(s)=0\).
On the other hand, let us consider the direction \(s'=2^{-1/2}(1,1)\).
The dual vector is \(t'=2^{-3/4}(1,1)\).
In this case, one finds that \(f(\tau) = 3\cdot2^{-5/4}\cdot\tau^2 + \mathsf{O}(\tau^4)\).
We can thus take \(g(\tau) = \tau^2\).
In particular,
\[
\sum_{\ell\geq 1} \psi_0(\ell) (\ell g^{-1}(1/\ell))^{d-1}
= \sum_{\ell\geq 1} \ell^{1/2-\alpha}
< \infty ,
\]
so that \(\lambda_{\mathrm{sat}}(s)>0\).
\end{remark}
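The two local expansions of \(f\) used in the remark above are easily confirmed numerically. The sketch below (plain Python; illustrative only) recovers \(f(\tau)\) from the parametrization of \(\partial\mathscr{U}\) of Section~\ref{sec:QuasiIsotropy} by bisection, and checks \(f(\tau)\approx\frac14\tau^4\) at \(s=(0,1)\) and \(f(\tau)\approx3\cdot2^{-5/4}\tau^2\) at \(s'=2^{-1/2}(1,1)\).

```python
# Numerical check of the local expansions of f for the l^4 norm on R^2.
def norm4(x, y):
    return (x**4 + y**4) ** 0.25

# f(tau) in the local parametrization of the unit sphere of |.|_4 at s0:
# find f >= 0 such that |s0 + tau*v - f*that|_4 = 1, by bisection
# (the norm is decreasing in f on the bracket [0, 1] used here).
def f_local(s0, v, that, tau):
    lo, hi = 0.0, 1.0
    for _ in range(200):
        mid = (lo + hi) / 2
        p = (s0[0] + tau * v[0] - mid * that[0],
             s0[1] + tau * v[1] - mid * that[1])
        if norm4(*p) > 1.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

tau = 1e-2

# Direction s = (0,1): s0 = (0,1), tangent v = (1,0), normal that = (0,1).
f_axis = f_local((0.0, 1.0), (1.0, 0.0), (0.0, 1.0), tau)
assert abs(f_axis / (tau**4 / 4) - 1) < 1e-3       # f(tau) = tau^4/4 + O(tau^8)

# Diagonal direction: s0 = 2^{-1/4}(1,1), with tangent and normal as below.
a = 2 ** -0.25
f_diag = f_local((a, a), (2 ** -0.5, -(2 ** -0.5)), (2 ** -0.5, 2 ** -0.5), tau)
assert abs(f_diag / (3 * 2 ** -1.25 * tau**2) - 1) < 1e-2  # f(tau) = 3*2^{-5/4} tau^2 + O(tau^4)
```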
The next theorem lists some cases in which we were able to establish the inequality \(\lambda_{\mathrm{sat}}< \lambda_{\mathrm{\mathrm{exp}}}\).
\begin{theorem}\label{thm:NotSatCloseToLambdaC}
The inequality \(\lambda_{\mathrm{sat}}^*<\lambda_{\exp}^*\) holds whenever one of the following is true:
\begin{itemize}
\item \(d=1\) and \(*\in\{\mathrm{Ising},\ \mathrm{FK},\ \mathrm{Potts},\ \mathrm{GFF},\ \mathrm{XY},\ \mathrm{KRW}\} \);
\item \(d\geq 2\), \(*\in \{\mathrm{Ising},\mathrm{Bern}\}\) and \(\lambda_{\mathrm{c}}^{*}=\lambda_{\mathrm{\mathrm{exp}}}^{*}\);
\item \(d\geq 3\), \(*\in\{\mathrm{GFF},\mathrm{KRW}\}\) and \(\lambda_{\mathrm{c}}^{*}=\lambda_{\mathrm{\mathrm{exp}}}^{*}\).
\end{itemize}
\end{theorem}
Finally, the next theorem establishes a form of condensation in part of the saturation regime.
\begin{theorem}
\label{thm:prefactor}
Suppose \(*\in\{\mathrm{SAW},\ \mathrm{Ising},\ \mathrm{IPF},\ \mathrm{FK},\ \mathrm{Potts},\ \mathrm{GFF},\ \mathrm{XY}\} \). Suppose moreover that \(\psi\) is one of the following:
\begin{itemize}
\item \(\psi(x) \propto \vert x\vert^{-\alpha}\), \(\alpha> 0\),
\item \(\psi(x) \propto e^{-a\vert x\vert^{\alpha}}\), \(a> 0, 0<\alpha <1\).
\end{itemize}
Then, if \(s\in\mathbb{S}^{d-1}\) is such that \(\tilde{\Xi}(|\cdot|, \psi, t) <\infty\) for some \(t\) dual to \(s\), there exists \(\lambda_1>0\) such that, for any \(\lambda <\lambda_{1}\), there exist \(c_{\pm}=c_{\pm}(\lambda)>0\) such that, for all \(n\geq 1\),
\begin{equation}
c_-(\lambda) J_{0,ns}\leq G_{\lambda}^*(0,ns) \leq c_+(\lambda) J_{0,ns}.
\end{equation}
\end{theorem}
\subsection{``Proof'' of Theorem~\ref{thm:main}: organization of the paper}
We collect here all pieces leading to the proof of Theorem~\ref{thm:main} and its corollary.
First, we have that any model \(*\in\{\mathrm{SAW},\ \mathrm{Ising},\ \mathrm{IPF},\ \mathrm{FK},\ \mathrm{Potts},\ \mathrm{GFF},\ \mathrm{XY}\}\) satisfies~\ref{hyp:G_Bounded_Pos_nu_pos},~\ref{hyp:sub_mult},~\ref{hyp:left_cont},~\ref{hyp:weak_SL}, and~\ref{hyp:J_path_lower_bnd} (see Appendix~\ref{app:Properties}).
We omit the explicit model dependence from the notation.
We therefore obtain from Claims~\ref{claim:nu_exists},~\ref{claim:nu_monotone}, and~\ref{claim:nu_trivialUB} and Lemma~\ref{lem:lambda_equal_zero} that, for any \(s\in\mathbb{S}^{d-1}\),
\begin{itemize}
\item \(\nu_s(\lambda)\) is well defined for \(\lambda\in[0, \lambda_{\mathrm{c}})\),
\item \(\lambda\mapsto \nu_s(\lambda)\) is non-increasing,
\item \(\lim_{\lambda\searrow 0} \nu_s(\lambda) = |s|\).
\end{itemize}
In particular, setting
\begin{equation}
\lambda_{\mathrm{sat}} = \lambda_{\mathrm{sat}}(s) = \sup\setof{\lambda\geq 0}{\nu_s(\lambda) =|s|},
\end{equation}
it follows from monotonicity that
\begin{itemize}
\item for any \(\lambda \in (0, \lambda_{\mathrm{sat}})\), \(\nu_s(\lambda) = |s|\),
\item for any \(\lambda \in (\lambda_{\mathrm{sat}}, \lambda_{\mathrm{\mathrm{exp}}})\), \(0<\nu_s(\lambda) < |s|\).
\end{itemize}
Via a comparison with the KRW given by~\ref{hyp:weak_SL}, Lemmas~\ref{lem:SaturationKRW} and~\ref{lem:SaturationAtSmallLambda} establish that
\begin{equation}
\tilde{\Xi}(|\cdot|, \psi, t)<\infty \implies \lambda_{\mathrm{sat}}(s)>0,
\end{equation}
while Lemma~\ref{lem:mass_gap_non_summable_surcharge} implies that, when $\psi$ satisfies~\ref{hyp:PsiQuasiIsotropic} and \(\partial\mathscr{U}\) is quasi-isotropic in direction \(s\) (with an admissible \(t\)),
\begin{equation}
\tilde{\Xi}(|\cdot|, \psi, t)=\infty \implies \lambda_{\mathrm{sat}}(s)=0,
\end{equation}
via a comparison with a suitable SAW model, allowed by~\ref{hyp:J_path_lower_bnd}.
These results are complemented in Section~\ref{sec:lambda_sat_less_lambda_c} by the inequality \(\lambda_{\mathrm{sat}} < \lambda_{\mathrm{\mathrm{exp}}}\) for some particular cases (as stated in Theorem~\ref{thm:NotSatCloseToLambdaC}), using ``continuity'' properties of the models \emph{at} \(\lambda_{\mathrm{c}}\) and the conjectured equality \(\lambda_{\mathrm{c}}=\lambda_{\mathrm{\mathrm{exp}}}\).
Whether \(\lambda_{\mathrm{sat}} < \lambda_{\mathrm{\mathrm{exp}}}\) always holds or not is an open problem (see Section~\ref{sec:open_problems}).
A proof that a condensation phenomenon (Theorem~\ref{thm:prefactor}) indeed occurs is presented in Section~\ref{sec:pre_factor}. It is carried out for a more restricted family of \(\psi\) than our main saturation result and only proves condensation in a restricted regime (see Section~\ref{sec:open_problems} for more details).
\subsection{Open problems and conjectures}\label{sec:open_problems}
The issues raised in the present work leave a number of interesting avenues open. We list some of them here, but defer the discussion of the issues related to quasi-isotropy to the next section.
\subsubsection{Is \(\lambda_{\mathrm{sat}}\) always smaller than \(\lambda_{\mathrm{\mathrm{exp}}}\)?}
While this work provides precise criteria to decide whether \(\lambda_{\mathrm{sat}}(s)>0\), we were only able to obtain an upper bound in a limited number of cases. It would in particular be very interesting to determine whether it is possible that \(\lambda_{\mathrm{sat}}\) coincides with \(\lambda_{\mathrm{\mathrm{exp}}}\), that is, that the correlation length \emph{remains constant in the whole high-temperature regime}. Let us summarize that in the following
\begin{open}
Is it always the case that \(\lambda_{\mathrm{sat}}(s) < \lambda_{\mathrm{\mathrm{exp}}}\)?
\end{open}
One model from which insight might be gained is the \(q\)-state Potts model with large \(q\). In particular, one might try to analyze the behavior of \(\nu_s(\lambda)\) for very large values of \(q\), using the perturbative tools available in this regime.
\subsubsection{What can be said about the regularity of \(\lambda\mapsto\nu_s(\lambda)\)?}
In several cases, we have established that, under suitable conditions, \(\lambda_{\mathrm{\mathrm{exp}}} > \lambda_{\mathrm{sat}}(s) > 0\). In particular, this implies that \(\nu_s\) is not analytic in \(\lambda\) \emph{at} \(\lambda_{\mathrm{sat}}(s)\). We believe however that this is the only point at which \(\nu_s\) fails to be analytic in \(\lambda\).
\begin{conjecture}
The inverse correlation length \(\nu_s\) is always an analytic function of \(\lambda\) on \((\lambda_{\mathrm{sat}}(s), \lambda_{\mathrm{\mathrm{exp}}})\).
\end{conjecture}
(Of course, the inverse correlation length is trivially analytic in \(\lambda\) on \([0,\lambda_{\mathrm{sat}}(s))\) when \(\lambda_{\mathrm{sat}}(s)>0\).)
\begin{conjecture}
Assume that \(\lambda_{\mathrm{sat}}(s)>0\). Then, the inverse correlation length \(\nu_s\) is a continuous function of \(\lambda\) at \(\lambda_{\mathrm{sat}}(s)\).
\end{conjecture}
Once this is settled, one should ask more refined questions, including a description of the qualitative behavior of \(\nu_s(\lambda)\) close to \(\lambda_{\mathrm{sat}}(s)\), similarly to what was done in~\cite{Ott+Velenik-2018} in a case where a similar saturation phenomenon was analyzed in the context of a Potts model/FK percolation with a defect line.
\subsubsection{Sharp asymptotics for \(G_\lambda(0,x)\)}
As we explain in Section~\ref{sec:pre_factor}, the transition from the saturation regime \([0, \lambda_{\mathrm{sat}}(s))\) to the regime \((\lambda_{\mathrm{sat}}(s), \lambda_{\mathrm{\mathrm{exp}}})\) manifests itself in a change of behavior of the prefactor to the exponential decay of the 2-point function \(G_\lambda(0,ns)\). Namely, in the former regime, the prefactor is expected to always behave like \(\psi(ns)\), while in the latter regime, it should follow the usual OZ decay, that is, be of order \(n^{-(d-1)/2}\). This change is due to the failure of the mass gap condition of the Ornstein--Zernike theory when \(\lambda<\lambda_{\mathrm{sat}}(s)\). It would be interesting to obtain more detailed information.
\begin{conjecture}
For all \(\lambda\in(\lambda_{\mathrm{sat}}(s), \lambda_{\mathrm{exp}})\), \(G_\lambda(0,ns)\) exhibits OZ behavior: there exists \(C=C(s,\lambda) > 0\) such that
\[
G_\lambda(0,ns) = C n^{-(d-1)/2}\, e^{-\nu_s(\lambda) n} (1+{\mathsf o}(1)).
\]
\end{conjecture}
This type of asymptotic behavior has only been established for finite-range interactions: see~\cite{Campanino+Ioffe+Velenik-2003} for the Ising model at \(\beta<\beta_{\mathrm{c}}\), \cite{Campanino+Ioffe+Velenik-2008} for the Potts model (and, more generally, FK percolation) at \(\beta<\beta_{\mathrm{c}}\), and~\cite{Ott-2019} for the Ising model in a nonzero magnetic field (see also~\cite{Ott+Velenik-2019} for a review).
We shall come back to this problem in a future work. In the present paper, we only provide a proof in the simplest setting, the killed random walk (see Section~\ref{sec:pre_factorOZ}).
\smallskip
One should also be able to obtain sharp asymptotics in the saturation regime, refining the results in Section~\ref{sec:pre_factor}. Let \(t\) be a dual vector to \(s\). We conjecture the following to hold true.
\begin{conjecture}
For all \(\lambda\in [0, \lambda_{\mathrm{sat}}(s))\), there exists $C(\lambda,s)>0$ such that \(G_\lambda(0,ns)\) exhibits the following behavior:
\[
G_\lambda(0,ns) = C(\lambda,s) \, \psi(ns)\, e^{-|s| n} (1+{\mathsf o}(1)).
\]
\end{conjecture}
In this statement, $C(\lambda,s)$ also depends on the model considered. Similar asymptotics have been obtained for models with interactions decaying slower than exponentially: see~\cite{Newman+Spohn-1998} for the Ising model and~\cite{Aoun-2020} for the \(q\)-state Potts model. In those cases, the constant $C(\lambda,s)$ is replaced by the susceptibility divided by $q$.
\medskip
Finally, the following problem remains completely open.
\begin{open}
Determine the asymptotic behavior of \(G_\lambda(0,ns)\) at \(\lambda_{\mathrm{sat}}(s)\).
\end{open}
\subsubsection{Sharpness}
In its current formulation, Theorem~\ref{thm:NotSatCloseToLambdaC} partially relies on the equality between \(\lambda_{\mathrm{c}}\) and \(\lambda_{\mathrm{exp}}\). As already mentioned, we expect this to be true for all models considered in the present work.
\begin{conjecture}
For all models considered in this work, \(\lambda_{\mathrm{c}}=\lambda_{\mathrm{exp}}\).
\end{conjecture}
We plan to come back to this issue in a future work.
\subsection{Behavior when quasi-isotropy fails}\label{sec:FailureQuasiIsotropy}
In this section, we briefly discuss what we know about the case of a direction \(s\in\mathbb{S}^{d-1}\) in which the quasi-isotropy condition fails.
As this remains mostly an open problem, our discussion will essentially be limited to one particular example.
What remains valid more generally is discussed afterwards.
\medskip
We restrict our attention to \(d=2\).
Let us consider the norm \(|\cdot|\) whose unit ball consists of four quarter-circles of (Euclidean) radius \(\frac12\) and centers at \((\pm\frac12,\pm\frac12)\), joined by 4 straight line segments; see Fig.~\ref{fig:surcharge}, left.
(The associated Wulff shape is depicted in the same figure, middle.)
We are interested in the direction \(s=\frac1{\sqrt{5}}(2,1)\), in which \(\partial\mathscr{U}\) is \emph{not} quasi-isotropic.
The corresponding dual vector is \(t=(1,0)\).
The associated surcharge function \(\mathfrak{s}_t\) is plotted on Fig.~\ref{fig:surcharge}, right.
Observe how the presence of a facet with normal \(t\) in \(\partial\mathscr{U}\) makes the surcharge function degenerate: the surcharge associated to any increment in the cone \(\setof{(x,y)\in\mathbb{Z}^2}{0\leq x\leq \abs{y}/2}\) vanishes.
The direction \(s\) falls right at the boundary of this cone of zero-surcharge increments.
\begin{figure}
\centering
\includegraphics[width=4cm]{UnitBallPatch.pdf}
\hspace{1cm}
\includegraphics[width=4cm]{Wulff-BdPt.pdf}
\hspace{1cm}
\includegraphics[width=4cm]{surcharge-BdPt.pdf}
\caption{Left: the unit ball associated to the norm \(|\cdot|\) in the example of Section~\ref{sec:FailureQuasiIsotropy}. Middle: the corresponding Wulff shape. Right: polar plot of the surcharge function associated to the direction \(s=\frac{1}{\sqrt{5}}(2,1)\).}
\label{fig:surcharge}
\end{figure}
A priori, our criteria do not allow us to decide whether \(\lambda_{\mathrm{sat}}(s)>0\), since \(\partial\mathscr{U}\) (and thus the surcharge function) displays qualitatively different behaviors on each side of \(s\).
However, it turns out that, in this particular example, one can determine what is happening, using a few observations.
First, the argument in Lemma~\ref{lem:mass_gap_non_summable_surcharge} still applies provided that the sums corresponding to both halves of the cone located on each side of \(s\) diverge.
The corresponding conditions ensuring that \(\lambda_{\mathrm{sat}}(s)=0\) as given in~\eqref{eq:ExplicitCondition}, reduce to
\[
\sum_{\ell\geq 1} \ell \psi_0(\ell) = \infty
\]
for the cone on the side of the facet, and
\[
\sum_{\ell\geq 1} \ell^{1/2} \psi_0(\ell) = \infty
\]
on the side where the curvature is positive.
Obviously, both sums diverge as soon as the second one does, while both are finite whenever the first one is.
We conclude from this that \(\lambda_{\mathrm{sat}}(s) > 0\) when
\[
\sum_{\ell\geq 1} \ell \psi_0(\ell) < \infty,
\]
while \(\lambda_{\mathrm{sat}}(s) = 0\) when
\[
\sum_{\ell\geq 1} \ell^{1/2} \psi_0(\ell) = \infty.
\]
Of course, this leaves undetermined the behavior when both
\begin{equation}\label{eq:ImpossibleGap}
\sum_{\ell\geq 1} \ell \psi_0(\ell) = \infty
\quad\text{ and }\quad
\sum_{\ell\geq 1} \ell^{1/2} \psi_0(\ell) < \infty.
\end{equation}
However, the following simple argument allows one to determine what actually occurs in such a case.
First, observe that, since \(\nu_{s'}\leq |s'|\) for all \(s'\in\mathbb{R}^d\), the unit ball \(\mathscr{U}_\nu\) associated to the norm \(x\mapsto\nu_x(\lambda)\) always satisfies \(\mathscr{U}_\nu \supset \mathscr{U}\).
We now claim that this implies \(\lambda_{\mathrm{sat}}(s) > 0\) if and only if \(\sum_{\ell\geq 1} \ell \psi_0(\ell) < \infty\).
Indeed, suppose \(\lambda_{\mathrm{sat}}(s) > 0\).
Then, for small enough values of \(\lambda\), the boundaries of \(\mathscr{U}_\nu\) and \(\mathscr{U}\) coincide along the 4 circular arcs (including the points between the arcs and the facets).
But convexity of \(\mathscr{U}_\nu\) then implies that they must coincide everywhere, so that \(\lambda_{\mathrm{sat}}(s')>0\) in every direction \(s'\) pointing inside the facets.
But the latter can only occur if \(\sum_{\ell\geq 1} \ell \psi_0(\ell) < \infty\).
In particular, the case~\eqref{eq:ImpossibleGap} implies \(\lambda_{\mathrm{sat}}(s) = 0\).
\medskip
As long as we consider a two-dimensional setting, the first part of the above argument applies generally, that is, whenever quasi-isotropy fails.
The second part, however, makes crucial use of the fact that \(s\) is in the boundary of a facet of \(\partial\mathscr{U}\).
We don't know how to conclude the analysis when this is not the case.
\medskip
In higher dimensions, the situation is even less clear.
\begin{open}
Provide a necessary and sufficient condition ensuring that \(\lambda_{\mathrm{sat}}(s)>0\) in a direction \(s\in\mathbb{S}^{d-1}\) in which \(\partial\mathscr{U}\) fails to be quasi-isotropic.
\end{open}
\section{Some basic properties}\label{sec:BasicProperties}
\subsection{Basic properties of the inverse correlation length} \label{sec:BasicPropICL}
A first observation is
\begin{claim}
\label{claim:nu_exists}
Suppose~\ref{hyp:sub_mult} holds.
Then, \(\nu_s(\lambda)\) exists for any \(\lambda\in[0, \lambda_{\mathrm{c}})\) and \(s\in\mathbb{S}^{d-1}\).
Moreover
\begin{equation}\label{eq:nu_infimum}
G_{\lambda}(0,ns) \leq a_{\lambda}^{-1} e^{-\nu_s(\lambda) n}.
\end{equation}
\end{claim}
The proof is omitted, as it is a simple variation of the classical subadditive argument.
\begin{claim}
\label{claim:nu_norm}
Suppose~\ref{hyp:sub_mult} holds. For \(\lambda<\lambda_{\exp}\), the function on \(\mathbb{R}^d\) defined by \(\nu_x(\lambda) = \norm{x}\cdot \nu_{x/\norm{x}}(\lambda)\) when \(x\neq 0\) and \(\nu_0(\lambda)=0\) is convex and defines a norm on \(\mathbb{R}^d\).
\end{claim}
Again, the proof is omitted, as it is a standard consequence of Assumption~\ref{hyp:sub_mult}.
Our third and fourth (trivial) observations are
\begin{claim}
\label{claim:nu_monotone}
Suppose~\ref{hyp:left_cont} holds.
Then, for any \(s\in \mathbb{S}^{d-1}\), any \(x, y\in\mathbb{Z}^d\) and any \(0 \leq \lambda \leq \lambda' < \lambda_{\mathrm{c}}\),
\begin{equation}\label{eq:nu_monotonicity}
G_{\lambda}(x,y) \leq G_{\lambda'}(x,y)
\quad\text{ and }\quad
\nu_s(\lambda) \geq \nu_s(\lambda').
\end{equation}
\end{claim}
\begin{claim}
\label{claim:nu_trivialUB}
Let \(s\in\mathbb{S}^{d-1}\). Suppose \(\nu_s(\lambda)\) is well defined and that~\ref{hyp:J_path_lower_bnd} holds.
Then, \(\nu_s\leq |s|\).
\end{claim}
Finally, we look at the behavior of \(\nu\) when \(\lambda\searrow 0\).
\begin{lemma}
\label{lem:lambda_equal_zero}
Suppose~\ref{hyp:weak_SL} and~\ref{hyp:J_path_lower_bnd} hold. Then, for any \(s\in\mathbb{S}^{d-1}\), \(\lim_{\lambda\searrow 0} \nu_{s}(\lambda) = |s|\).
\end{lemma}
\begin{proof}
Fix \(s\in\mathbb{S}^{d-1}\).
By~\ref{hyp:J_path_lower_bnd}, \(\nu_s\leq |s|\).
Let \(\alpha\) be given by~\ref{hyp:weak_SL}.
Fix any \(\epsilon>0\).
Then, let \(\lambda<\bigl( \alpha\sum_{y\neq 0} \psi(y)e^{-\epsilon|y|} \bigr)^{-1}\).
We claim that \(G_{\lambda}(0,ns)\leq c(\lambda,\epsilon) e^{-(1-\epsilon)n|s|}\), which yields the desired conclusion.
Indeed,
\begin{align*}
G_{\lambda}(0,ns)
&\leq
CG_{\alpha\lambda}^{\mathrm{KRW}}(0,ns) \\
&=
C\sum_{k\geq 1} \sum_{\substack{y_1,\dots,y_k\neq 0\\ \sum y_i=ns}} \prod_{i=1}^{k} \alpha\lambda \psi(y_i)e^{-|y_i|}\\
&\leq
Ce^{-(1-\epsilon)n|s|}\sum_{k\geq 1} \sum_{\substack{y_1,\dots,y_k\neq 0\\ \sum y_i=ns}} \prod_{i=1}^{k} \alpha\lambda \psi(y_i)e^{-\epsilon|y_i|}\\
&\leq
Ce^{-(1-\epsilon)n|s|}\sum_{k\geq 1} \Big(\lambda \sum_{y\neq 0} \alpha \psi(y)e^{-\epsilon|y|}\Big)^{\!k}.
\qedhere
\end{align*}
\end{proof}
\subsection{Weak equivalence of directions}
Let us introduce
\begin{equation}
\nu_+(\lambda) = \max_{s\in\mathbb{S}^{d-1}} \nu_{s}(\lambda)
\quad\text{ and }\quad
\nu_-(\lambda) = \min_{s\in\mathbb{S}^{d-1}} \nu_{s}(\lambda) .
\end{equation}
The existence of these quantities follows from the fact that \(s\mapsto \nu_s(\lambda)\) is continuous (indeed, it is the restriction of a norm on \(\mathbb{R}^d\) to the set \(\mathbb{S}^{d-1}\)).
\begin{lemma}
\label{lem:rate_equiv_directions}
Suppose~\ref{hyp:sub_mult} holds.
Then, \(d\cdot\nu_-(\lambda)\geq \nu_+(\lambda) \geq \nu_-(\lambda)\).
\end{lemma}
\begin{proof}
The second inequality holds by definition.
To obtain the first one, set \(s^*\) to be a direction realizing the minimum.
By lattice symmetries, all its \(\pi/2\) rotations around a coordinate axis also achieve the minimum.
For a fixed direction \(s\), denote by \(s^*_1, \dots, s^*_d\) a basis of \(\mathbb{R}^d\) consisting of rotated versions of \(s^*\) such that \(s = \sum_{i=1}^d \alpha_i s^*_i\) with \(1 \geq \alpha_i \geq 0\).
Then, for any \(n\), \(n s = \sum_{i=1}^d n\alpha_i s_i^*\).
So (integer parts are implicitly taken), by~\ref{hyp:sub_mult},
\begin{equation}
-\log G_{\lambda}(0,ns)
\leq
- \sum_{i=1}^d \log G_{\lambda}(0, n\alpha_i s_i^*) - d\log(a_{\lambda})
= \sum_{i=1}^d n\alpha_i\nu_-(1+{\mathsf o}_n(1)).
\end{equation}
In particular, \(\lim_{n\to\infty} -\log G_{\lambda}(0, ns)/n\leq d\cdot\nu_-\).
\end{proof}
\subsection{Left-continuity of \(\lambda\mapsto\nu_{s}(\lambda)\)}
\begin{lemma}
\label{lem:nu_left_cont}
Suppose~\ref{hyp:sub_mult} and~\ref{hyp:left_cont} hold. Let \(s\in \mathbb{S}^{d-1}\). Let \(\lambda'\in (0,\lambda_{\mathrm{c}}]\) be such that
\begin{itemize}
\item \(G_{\lambda'}\) is well defined.
\item There exists \(\delta>0\) such that \(\inf_{\lambda\in(\lambda'-\delta,\lambda']}a_{\lambda}>0\) (where \(a_{\lambda}\) is given by~\ref{hyp:sub_mult}).
\end{itemize}
Then, the function \(\lambda\mapsto \nu_s(\lambda)\) is left-continuous at \(\lambda'\).
\end{lemma}
\begin{proof}
Fix \(\lambda'\in (0,\lambda_{\mathrm{c}}]\) such that \(G_{\lambda'}\) is well defined and \(s\in\mathbb{S}^{d-1}\). Let \(\delta\) be given by our hypotheses and let \(I=(\lambda'-\delta,\lambda']\), and \(C=-\log(\inf_{\lambda\in I}a_{\lambda})\). Set
\begin{equation*}
f_n(\lambda) = -\log G_{\lambda}(0,ns).
\end{equation*}
Then, for any \(\lambda\in I\) and \(n,m\in\mathbb{Z}_{>0}\), \(f_{n+m}(\lambda)\leq f_{n}(\lambda)+f_{m}(\lambda)+C\). In particular, for any \(n\geq 1\) and any \(\lambda\in I\),
\begin{equation*}
\nu_s(\lambda)=\lim_{q\to\infty} \frac{f_{qn}(\lambda)}{qn} \leq \frac{f_n(\lambda)}{n} + \frac{C}{n}.
\end{equation*}
Fix \(\epsilon>0\). Choose \(n_0\) such that \(C/n_0<\epsilon/3\) and \(\abs{\frac{f_{n_0}(\lambda')}{n_0}-\nu_s(\lambda')}\leq \epsilon/3\). By left-continuity of \(G_{\lambda}(0,n_0s)\) at \(\lambda'\), one can choose \(\epsilon'_0>0\) such that
\begin{equation*}
\abs{\frac{f_{n_0}(\lambda'-\epsilon')}{n_0} - \frac{f_{n_0}(\lambda')}{n_0}}\leq \epsilon/3
\end{equation*}
for any \(\epsilon'<\epsilon'_0\). In particular, for any \(\epsilon'<\epsilon'_0\),
\begin{align*}
0\leq \nu_{s}(\lambda'-\epsilon')-\nu_s(\lambda')
&
\leq \frac{f_{n_0}(\lambda'-\epsilon')}{n_0} + \frac{C}{n_0} - \nu_s(\lambda')\\
&
\leq \abs{\frac{f_{n_0}(\lambda'-\epsilon')}{n_0} - \frac{f_{n_0}(\lambda')}{n_0}} + \epsilon/3 + \abs{\frac{f_{n_0}(\lambda')}{n_0} - \nu_s(\lambda')}\\
&
\leq \epsilon,
\end{align*}
where we used~\eqref{eq:nu_monotonicity} in the first line.
\end{proof}
\section{``Summable'' case}
In this section, we consider directions \(s\in\mathbb{S}^{d-1}\) for which
\begin{equation}\label{eq:SummabilityCondition}
\sum_{y\neq 0} \psi(y) e^{-\mathfrak{s}_t(y)} < \infty,
\end{equation}
where \(t\) is any vector dual to \(s\).
In this case, we first prove that saturation occurs in direction \(s\) at small enough values of \(\lambda\), whenever the model at hand satisfies~\ref{hyp:weak_SL}.
Then, we complement this result by showing, in some models, that saturation does not occur for values of \(\lambda\) close enough to \(\lambda_{\mathrm{exp}}\).
\subsection{Saturation at small \(\lambda\)}
\begin{lemma}\label{lem:SaturationKRW}
Let \(s\in\mathbb{S}^{d-1}\) and fix some vector \(t\) dual to \(s\). Assume that~\eqref{eq:SummabilityCondition} holds.
Then, one can define \(0<\tilde{\lambda}\equiv \tilde{\lambda}^{\mathrm{KRW}}\leq \lambda_{\mathrm{c}}\) (given by~\eqref{eq:lambda_tilde_KRW}) such that, for any \(\lambda \in (0, \tilde{\lambda})\), \(\nu_s^{\mathrm{KRW}}(\lambda) = |s|\).
Moreover, when \(d=1\), \(\tilde{\lambda}^{\mathrm{KRW}} = \lambda_{\mathrm{sat}}^{\mathrm{KRW}}\).
\end{lemma}
\begin{proof}
Fix \(s\in\mathbb{S}^{d-1}\) and a dual vector \(t\). Assume that~\eqref{eq:SummabilityCondition} holds.
Let \(G_{\lambda} \equiv G^{\mathrm{KRW}}_{\lambda}\).
Set
\begin{equation}\label{eq:lambda_tilde_KRW}
\tilde{\lambda} = \min\Bigl\{\Bigl( \sum_{y\neq 0} \psi(y)e^{-\mathfrak{s}_t(y)}\Bigr)^{-1}, 1 \Bigr\} > 0.
\end{equation}
(Recall that \(\lambda_{\mathrm{c}}=1\) for the KRW.) Suppose \(\lambda< \tilde{\lambda}\). Let us introduce
\begin{align*}
A_k(n)
&=
\sum_{\substack{y_1, \dots, y_k\in\mathbb{Z}^d \setminus \{0\} \\ \sum_{i=1}^k y_i= ns }}\prod_{i=1}^k \lambda J_{y_i} \\
&=
e^{-n|s|} \sum_{\substack{y_1, \dots, y_k\in\mathbb{Z}^d \setminus \{0\} \\ \sum_{i=1}^k y_i= ns }} \prod_{i=1}^k \psi(y_i)e^{-\mathfrak{s}_t(y_i)}
\leq e^{-n|s|} \Bigl(\lambda \sum_{y\neq 0} \psi(y) e^{-\mathfrak{s}_t(y)} \Bigr)^{\!k}.
\end{align*}
Since \(\lambda \sum_{y\neq 0} \psi(y)e^{-\mathfrak{s}_t(y)} < 1\) for all \(\lambda\in [0, \tilde{\lambda})\), the first part of the result follows from
\begin{equation}\label{eq:UB_A_n}
G_{\lambda}(0,ns)= \sum_{k=1}^{\infty} A_k(n),
\end{equation}
which is a decomposition according to the length of the walk.
To get the second part of the \(d=1\) case, one can assume \(\tilde{\lambda}<1=\lambda_{\mathrm{c}}\) (the claim being empty otherwise).
Without loss of generality, we consider \(s=1\).
The unique dual vector is \(t = |1|\).
Let \(\lambda \in (\tilde{\lambda}, \lambda_{\mathrm{c}})\).
As \(\lambda < \lambda_{\mathrm{c}}\), \(\nu_{1}(\lambda)\) is the radius of convergence of \(\mathbb{G}_{\lambda}(z) = \sum_{n\geq 1} e^{zn} G_{\lambda}(0,n)\).
It is therefore sufficient to find \(\epsilon>0\) such that \({\mathbb{G}_{\lambda}((1-\epsilon)|1|)} =\infty\).
The finiteness of \(\mathbb{G}_{\lambda}((1-\epsilon)|1|)\) is equivalent to the finiteness of
\begin{align*}
\sum_{n\geq 1} e^{(1-\epsilon)|1|n} G_{\lambda}(0,n)
&=
\sum_{n\geq 1} e^{(1-\epsilon)tn} \sum_{k\geq 1} \sum_{\substack{y_1, \dots, y_k\in\mathbb{Z} \setminus \{0\} \\ \sum_{i=1}^k y_i= n }} \prod_{i=1}^k \lambda \psi(y_i) e^{-|y_i|}\\
&=
\sum_{k\geq 1} \sum_{y_1, \dots, y_k\in\mathbb{Z} \setminus \{0\}}\prod_{i=1}^k \lambda \psi(y_i) e^{-|y_i| +(1-\epsilon) t y_i}\\
&=
\sum_{k\geq 1} \Bigl( \lambda\sum_{y\in\mathbb{Z} \setminus \{0\}} \psi(y) e^{-\mathfrak{s}_t(y)} e^{-\epsilon |1| y} \Bigr)^{\!k}.
\end{align*}
Now, \(f(\epsilon) = \lambda\sum_{y\in\mathbb{Z} \setminus \{0\}} \psi(y) e^{-\mathfrak{s}_t(y)} e^{-\epsilon |1|y}\) is continuous in \(\epsilon\) on \([0,\infty)\), and \(f(0)>1\) by choice of \(\lambda\).
So, it is still \(>1\) for some \(\epsilon>0\), implying the claim.
\end{proof}
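To make the threshold~\eqref{eq:lambda_tilde_KRW} concrete, here is a minimal numerical sketch (illustrative only, not part of the proof) for a hypothetical one-dimensional example: Euclidean norm, prefactor \(\psi(n)=C|n|^{-3}\) with \(C\) normalized so that \(\lambda_{\mathrm{c}}=1\), direction \(s=1\) with dual vector \(t=|1|=1\). It computes \(\tilde{\lambda}\) and the geometric bound on \(e^{n|s|}G_\lambda(0,n)\) used above:

```python
import math

# Hypothetical 1d example: |x| = Euclidean absolute value, psi(n) = C|n|^{-3},
# with C normalized so that lambda_c = 1 for the KRW:
# 2 * sum_{n >= 1} psi(n) e^{-n} = 1.
S = sum(n ** -3 * math.exp(-n) for n in range(1, 2000))
C = 1.0 / (2 * S)
psi = lambda y: C * abs(y) ** -3

# Surcharge in direction s = 1, dual vector t = |1| = 1:
# s_t(y) = |y| - t*y, i.e. 0 for y > 0 and 2|y| for y < 0.
surcharge = lambda y: abs(y) - y

# lambda_tilde from eq. (lambda_tilde_KRW)
Z = sum(psi(y) * math.exp(-surcharge(y)) for y in range(-1999, 2000) if y != 0)
lam_tilde = min(1.0 / Z, 1.0)

# For lambda < lambda_tilde, A_k(n) <= e^{-n|s|} (lambda * Z)^k, so
# e^{n} G_lambda(0, n) <= geom / (1 - geom) uniformly in n: saturation.
lam = 0.9 * lam_tilde
geom = lam * Z           # ratio of the geometric series; < 1 by construction
bound = geom / (1 - geom)
```

For this choice one finds \(\tilde{\lambda}\approx 0.58\); any \(\lambda<\tilde{\lambda}\) makes the series~\eqref{eq:UB_A_n} summable uniformly in \(n\), which is exactly the saturation statement.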
\begin{remark}
The statement of Lemma~\ref{lem:SaturationKRW} obviously extends to the Gaussian Free Field via~\eqref{eq:GFF_to_KRW}.
\end{remark}
We can now push the result to other models.
\begin{lemma}\label{lem:SaturationAtSmallLambda}
Suppose~\ref{hyp:weak_SL} holds.
Let \(s\in\mathbb{S}^{d-1}\) and \(t\) dual to \(s\). Assume that~\eqref{eq:SummabilityCondition} holds.
Then, there exists \(\tilde\lambda>0\) such that, for any \(\lambda \in [0, \tilde\lambda)\), \(\nu_s(\lambda) = |s|\).
\end{lemma}
\begin{proof}
Let \(\alpha\) be given by~\ref{hyp:weak_SL}.
Set
\begin{equation*}
\tilde{\lambda} = \frac{1}{\alpha} \tilde{\lambda}^{\mathrm{KRW}} > 0.
\end{equation*}
By~\ref{hyp:weak_SL} and Lemma~\ref{lem:SaturationKRW}, for \(\lambda<\tilde{\lambda}\),
\begin{equation*}
G_{\lambda}(0,ns) \leq C G_{\alpha\lambda}^{\mathrm{KRW}}(0,ns) \leq ce^{-n|s|}
\end{equation*}
for some \(\lambda\)-dependent constant \(c\), as \(\alpha\lambda < \tilde{\lambda}^{\mathrm{KRW}}\).
\end{proof}
\subsection{Prefactor for \(\mathrm{KRW}\) when \(\lambda<\lambda_{\mathrm{sat}}\)}\label{sec:pre_factor}
We first show the condensation phenomenon mentioned in the introduction for polynomial prefactors.
Namely, we prove
\begin{lemma}\label{lem:pre_fact_polynomial}
Let \(s\in\mathbb{S}^{d-1}\) and \(t\) dual to \(s\).
Suppose that \(\psi(x)=C_{\alpha}\vert x\vert^{-\alpha}\) and that~\eqref{eq:SummabilityCondition} holds. Then, there exists \(\tilde{\lambda}>0\) (the one given by~\eqref{eq:lambda_tilde_KRW}, as in Lemma~\ref{lem:SaturationKRW}) such that, for any \(\lambda<\tilde{\lambda}\), there exists \(c_+=c_{+}(\lambda)>0\) such that
\begin{equation}
G^{\mathrm{KRW}}_{\lambda}(0,ns) \leq c_+ J_{0,ns}.
\end{equation}
\end{lemma}
\begin{remark}
As \(\mathfrak{s}_t \geq 0\), \(\alpha>d\) always implies~\eqref{eq:SummabilityCondition}.
\end{remark}
\begin{proof}
Fix \(s\in\mathbb{S}^{d-1}\) and a dual vector \(t\).
Denote \(G_{\lambda}\equiv G^{\mathrm{KRW}}_{\lambda}\).
Let \(\tilde{\lambda}\) be given by~\eqref{eq:lambda_tilde_KRW} and fix \(\lambda<\tilde{\lambda}\). Start as in the proof of Lemma~\ref{lem:SaturationKRW}.
Define
\begin{equation*}
A_k(n)
=
\sum_{\substack{\gamma\in\mathsf{W}(0,ns)\\ \abs{\gamma} = k}} \prod_{i=1}^k \lambda J_{\gamma_{i-1}\gamma_i}
=
e^{-n|s|} \sum_{\substack{y_1,\dots,y_k\neq 0\\ \sum y_i = ns}} \prod_{i=1}^k \lambda \psi(y_i) e^{-\mathfrak{s}_t(y_i)}
\leq
e^{-n|s|} (\lambda\tilde{\lambda}^{-1})^k.
\end{equation*}
Since \(\lambda < \tilde{\lambda}\), the inequality above implies that there exist \(C_{1},C_{2}>0\) such that
\begin{equation*}
\sum\limits_{k=C_{1}\log(n)}^{\infty}\sum_{\substack{\gamma\in\mathsf{W}(0,ns) \\ \abs{\gamma}=k}}\lambda^{k}\prod_{i=1}^{k}J_{\gamma_{i-1} \gamma_i}\leq C_{2}J_{0,ns}.
\end{equation*}
Therefore, we can assume that \(k\leq C_{1}\log(n)\).
Let \(\gamma\in\mathsf{W}(0,ns)\) with \(\vert\gamma\vert=k\).
Since \(k<n\), there exists \(j\) such that \(\vert\gamma_{j}-\gamma_{j-1}\vert\geq \vert ns\vert /k\).
Then, we can write
\begin{align*}
A_k(n)
&\leq
k\sum_{y: \vert y\vert\geq \vert ns\vert /k} \psi(y)e^{-\vert y\vert }\sum_{\substack{\gamma\in\mathsf{W}(0,ns-y) \\ \abs{\gamma}=k-1}} \lambda^{k}\prod_{i=1}^{k-1}J_{\gamma_{i-1} \gamma_i}\\
&\leq
k e^{-n|s|}\psi(ns/k) \lambda \sum_{\substack{y_1,\dots, y_{k-1}\\ \vert\sum y_i -ns\vert\geq \vert ns\vert /k } } \prod_{i=1}^{k-1} \lambda\psi(y_i)e^{-\mathfrak{s}_t(y_i)}\\
&\leq
C_3k^{1+\alpha} e^{-n|s|}\psi(ns) \lambda \Big(\sum_{y_1\neq 0} \lambda\psi(y_1)e^{-\mathfrak{s}_t(y_1)}\Big)^{k-1}\\
&= C_3J_{0,ns} k^{1+\alpha} \tilde{\lambda} (\lambda \tilde{\lambda}^{-1})^{k},
\end{align*}
where we used \(\vert y\vert\geq \vert ns\vert /k\) and \(\mathfrak{s}_t\geq 0\) in the second line, the polynomial form of \(\psi\) in the third one, and the definition of \(\tilde{\lambda}\) in the last one. Here, \(C_3\) is a constant depending only on \(|\cdot|\) and \(\alpha\).
This yields
\begin{align*}
\sum_{k=1}^{C_{1}\log(n)} A_k(n)
\leq
C_3J_{0,ns} \tilde{\lambda} \sum_{k=1}^{\infty}k^{\alpha+1} (\lambda \tilde{\lambda}^{-1})^k.
\end{align*}
Since \(\lambda<\tilde{\lambda}\), the last sum converges, which concludes the proof.
\end{proof}
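The condensation mechanism of Lemma~\ref{lem:pre_fact_polynomial} can also be observed numerically. The following sketch (illustrative only; a hypothetical one-dimensional setting with Euclidean norm, \(\psi(y)=|y|^{-4}\), so \(\alpha=4>d\), and \(\lambda=0.2\)) computes a truncated version of \(G^{\mathrm{KRW}}_\lambda=\sum_{k\geq 1}(\lambda J)^{*k}\) by iterated convolution and checks that \(G_\lambda(0,n)/J_{0,n}\) remains bounded in \(n\):

```python
import math

# Illustrative check (d = 1) of the condensation bound G(0,n) <= c_+ J_{0,n}
# for the KRW with the hypothetical polynomial prefactor psi(y) = |y|^{-4}.
M, K = 150, 25  # lattice truncation [-M, M] and maximal number of steps
J = {y: abs(y) ** -4 * math.exp(-abs(y)) for y in range(-M, M + 1) if y != 0}

lam = 0.2  # well inside the saturation regime for this psi
step = {y: lam * w for y, w in J.items()}

# Truncated G = sum_{k=1}^{K} (lam J)^{*k}, by iterated convolution.
# Intermediate positions may revisit 0; only the steps are nonzero.
G = dict(step)
conv = dict(step)
for _ in range(K - 1):
    new = {}
    for y1, w1 in conv.items():
        for y2, w2 in step.items():
            z = y1 + y2
            if -M <= z <= M:
                new[z] = new.get(z, 0.0) + w1 * w2
    conv = new
    for z, w in conv.items():
        G[z] = G.get(z, 0.0) + w

# Condensation: the ratio G(0,n) / J_{0,n} stays bounded in n.
ratios = [G[n] / J[n] for n in range(10, 150, 10)]
```

The ratio stabilizes near a constant \(c_+(\lambda)\), in agreement with the two-sided bound of Corollary~\ref{cor:condensation} (the lower bound \(G\geq \lambda J\) holds termwise).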
We now show the same condensation phenomenon for a class of fast decaying prefactors in a perturbative regime of \(\lambda\). Namely, we assume that the function \(\psi\) satisfies
\begin{enumerate}[label={\ensuremath{\mathrm{[H_\arabic*]}}}, start=1]
\item \label{hyp:psi_hyp1}
\(\psi(y)\) depends only on \(\abs{y}\) and is decreasing in \(\abs{y}\).
\item \label{hyp:psi_hyp2} there exist \(c>0\) and \(0<a\leq 1\) such that
\begin{equation}\label{eq:prefactor_super_summability}
\sum_{y\neq 0} \psi(y)^{a} e^{-\mathfrak{s}_t(y)} < \infty,
\end{equation}
and, for every \(n, m\in\mathbb{R}_+\) with \(m\leq n\),
\begin{equation}\label{eq:prefactor_factor_bnd}
\psi(n)\psi(m)\leq c\psi(n+m)\psi(m)^{a}.
\end{equation}
\end{enumerate}
These assumptions are in particular true for prefactors exhibiting stretched exponential decay, \(\psi (x)= C\exp (-b \abs{x}^{\gamma} )\) with \(b>0\) and \(0<\gamma <1\), as well as for power-law decaying prefactors \(\psi(x)= C\abs{x}^{-\alpha}\) with \(\alpha>d\).
\begin{lemma}
\label{lem:pre_fact_fast_dec}
Fix \(s\in\mathbb{S}^{d-1}\) and a dual vector \(t\). Assume that \(\psi\) is such that~\ref{hyp:psi_hyp1} and~\ref{hyp:psi_hyp2} hold (in particular, \eqref{eq:SummabilityCondition} holds for \(t\)).
Then, there exists \(\lambda_{0}>0\) such that, for any \(\lambda < \lambda_{0}\), one can find \(c_+>0\) such that
\begin{equation}
G^{\mathrm{KRW}}_{\lambda}(0,ns) \leq c_+ J_{0,ns} .
\end{equation}
\end{lemma}
\begin{remark}
One can notice that, in the case \(\psi(x) = C_{\alpha}|x|^{-\alpha}\),~\eqref{eq:prefactor_factor_bnd} is satisfied with \(a=1\), in which case \(c= 2^{\alpha}\) and~\eqref{eq:prefactor_super_summability} is simply~\eqref{eq:SummabilityCondition}. The condition is therefore the same as in Lemma~\ref{lem:pre_fact_polynomial}, but the \(\lambda_0\) of Lemma~\ref{lem:pre_fact_fast_dec} is smaller than the \(\tilde{\lambda}\) of Lemma~\ref{lem:pre_fact_polynomial} (\(\tilde{\lambda} = 2^{\alpha} \lambda_0\)).
\end{remark}
\begin{proof}
Fix \(s\in\mathbb{S}^{d-1}\) and a dual vector \(t\) and let \(\psi\) be as in the statement. Write \(G_{\lambda}\equiv G^{\mathrm{KRW}}_{\lambda}\).
Let \(c, a\) be given by~\ref{hyp:psi_hyp2}.
Let \(\lambda_0\) be given by
\begin{equation*}
\lambda_0 = \Bigl(c\sum_{y\neq 0} \psi(y)^{a}e^{-\mathfrak{s}_t(y)} \Bigr)^{\!-1}>0.
\end{equation*}
We can rewrite \(G_{\lambda}\) as
\begin{align*}
e^{n|s|} G_{\lambda}(0,ns)
&=
\sum\limits_{k=1}^{\infty} \lambda^k \sum_{\substack{y_1, \dots, y_k \\ \sum_{i=1}^k y_i=ns }} \prod_{i=1}^k \psi(y_i) e^{-\mathfrak{s}_t(y_i)}\\
&\leq
\sum_{k=1}^\infty \lambda^k k\sum_{\substack{y_1, \dots, y_{k-1}\\ \abs{ns -\sum_{i=1}^{k-1} y_i} \geq \max_i\abs{y_i}}} \psi\Bigl(ns - \sum_{i=1}^{k-1} y_i\Bigr) \prod_{i=1}^{k-1} \psi(y_i) e^{-\mathfrak{s}_t(y_i)},
\end{align*}
where we used \(\mathfrak{s}_t \geq 0\).
Now, iterating~\eqref{eq:prefactor_factor_bnd} \(k\) times yields that, for any \(k\geq 1\) and any \(y_1, \dots, y_{k-1}\neq 0\) such that \(\abs{ns - \sum_{i=1}^{k-1}y_i} \geq \max_i\abs{y_i}\),
\begin{equation*}
\psi\Bigl(ns-\sum_{i=1}^{k-1} y_i\Bigr) \prod_{i=1}^{k-1} \psi(y_i)
\leq
c^k \psi(ns) \prod_{i=1}^{k-1} \psi(y_i)^{a}.
\end{equation*}
This gives
\begin{equation*}
G_{\lambda}(0,ns)
\leq
e^{-n|s|} \psi(ns) \lambda c \sum_{k=1}^\infty k \Bigl(\lambda c\sum_{y\neq 0} \psi(y)^{a} e^{-\mathfrak{s}_t(y)} \Bigr)^{\!k-1}.
\end{equation*}
The result follows since \(\lambda<\lambda_0\).
\end{proof}
As for the saturation result, one can use~\ref{hyp:weak_SL} to push the result to other models.
\begin{corollary}
\label{cor:condensation}
Assume that~\ref{hyp:weak_SL} and~\ref{hyp:J_path_lower_bnd} hold.
Let \(s\in\mathbb{S}^{d-1}\) and \(t\) be a dual vector. Suppose that \(\psi\) fulfills the hypotheses of either Lemma~\ref{lem:pre_fact_polynomial} or Lemma~\ref{lem:pre_fact_fast_dec}.
Then, there exists \(\lambda_0>0\) such that, for any \(\lambda<\lambda_0\),
\begin{equation*}
c_-(\lambda)J_{0,ns}\leq G_{\lambda}(0,ns) \leq c_+(\lambda) J_{0,ns},
\end{equation*}
for some \(c_+(\lambda),c_-(\lambda)>0\).
\end{corollary}
The use of~\ref{hyp:J_path_lower_bnd} to obtain the lower bound is obviously overkill; the inequality follows from the less restrictive versions of the arguments we use in Appendix~\ref{app:Properties}.
\subsection{Prefactor for \(\mathrm{KRW}\) when \(\lambda>\lambda_{\mathrm{sat}}\)}\label{sec:pre_factorOZ}
In this section, we establish Ornstein--Zernike asymptotics for \(\mathrm{KRW}\) whenever there is a mass gap (that is, when saturation does not occur). We expect similar results for general models, but the proofs would be much more intricate. We will come back to this issue in another paper.
\begin{lemma}\label{lem:OZ}
Let \(s\in\mathbb{S}^{d-1}\) and \(\lambda\in (\lambda_{\mathrm{sat}}(s),\lambda_{\mathrm{exp}})\). There exists \(C_{\lambda}=C(\lambda)>0\) such that
\begin{equation}
G_{\lambda}^{\mathrm{KRW}}(0,ns)=\dfrac{C_{\lambda}}{\vert ns\vert^{(d-1)/2}}e^{-\nu_{s}(\lambda)n}(1+{\mathsf o}_{n}(1)).
\end{equation}
\end{lemma}
\begin{proof}
We follow the ideas developed in \cite{Campanino+Ioffe-2002}.
We first express \(e^{\nu_s(\lambda) n} G^{\mathrm{KRW}}_{\lambda}(0,ns)\) as a sum of probabilities for a certain random walk.
We then use the usual local limit theorem on this random walk to deduce the sharp prefactor.
Let \(G_{\lambda}=G_{\lambda}^{\mathrm{KRW}}, \nu_s = \nu_s(\lambda)\).
Since \(\lambda<\lambda_{\mathrm{exp}}\), \(\nu\) defines a norm on \(\mathbb{R}^d\) (see Claim~\ref{claim:nu_norm}).
Let \(\tilde{t}_s\) be a dual vector to \(s\) with respect to the norm \(\nu\).
We can rewrite \(e^{\nu_s n} G_{\lambda}(0,ns)\) in the following way:
\begin{equation}
e^{\nu_s n} G_{\lambda}(0,ns)
=
\sum_{N=1}^{\infty} \sum_{\substack{y_1, \dots, y_N \\ \sum y_i=ns}} \prod_{i=1}^{N} w(y_i) ,
\end{equation}
with \(w(y) = \lambda e^{\tilde{t}_s \cdot y - |y|} \psi(y)\).
Remark that \(w\) has an exponential tail, since \(\nu_s < |s|\).
Moreover, \(w(y)\) defines a probability measure on \(\mathbb{Z}^d \setminus \{0\}\).
Indeed, let \(t_s\) be a dual vector to \(s\) with respect to the norm \(|\cdot|\).
Notice that, for \(x\in\mathbb{R}\),
\begin{align*}
\sum_{k\geq 1} x^{|s| k}e^{\nu_s k} G_{\lambda}(0,ks)
&=
\sum_{N\geq 1} \sum_{k\geq 1} \sum_{\substack{y_1,\dots,y_N \\ \sum y_i=ks }} \prod_{i=1}^{N} x^{t_s \cdot y_i} w(y_i) \\
&\leq
\sum_{N\geq 1}\biggl( \sum_{y\neq 0} x^{t_s \cdot y} w(y) \biggr)^{\!\!N} \\
&=
\dfrac{\sum_{y\neq 0} x^{t_s\cdot y} w(y)}{1-\sum_{y\neq 0} x^{t_s\cdot y} w(y)}.
\end{align*}
The radius of convergence of the series on the left-hand side is equal to 1, whereas that of the series on the right-hand side is strictly larger than 1, since \(w\) has an exponential tail.
It follows that, for \(x=1\), we must have
\begin{equation}
\sum\limits_{y\neq 0}w(y)=1.
\end{equation}
We denote by \(P_0\) the law of the random walk \((S_n)_{n\geq 1}\) on \(\mathbb{Z}^d\), starting at \(0\in\mathbb{Z}^{d}\) and with increments of law \(w\), and by \(E_0\) the corresponding expectation.
We can rewrite
\begin{equation}\label{eq:RW}
e^{\nu_{s}n} G_{\lambda}(0,ns) = \sum_{N\geq 1} P_0(S_N = ns).
\end{equation}
Remark that \(E_0(S_1) = \mu s\) for some \(\mu>0\).
Indeed, were it not the case, rough large deviation bounds would imply the existence of \(c>0\) such that \(P_0(S_N = ns) \leq e^{-c\max\{n,N\}}\) for all \(N\). Using~\eqref{eq:RW}, this would imply \(e^{\nu_s n} G(0,ns) \leq e^{-c' n}\), for some \(c'>0\), contradicting the fact that \(e^{\nu_s n} G(0,ns) = e^{{\mathsf o}(n)}\).
Fix \(\delta>0\) small.
On the one hand, uniformly in \(y\) such that \(\abs{y - n\mu s} \leq n^{1/2-\delta}\), we have, by the local limit theorem,
\begin{equation}
\sum_{N:\, \abs{N-n} \leq n^{1/2+\delta}} P_{0}(S_N=y)
=
\dfrac{\tilde{C}_\lambda}{\vert ns\vert^{(d-1)/2}} \bigl(1+{\mathsf o}_n(1)\bigr),
\end{equation}
where \(\tilde{C}_\lambda>0\) can be computed explicitly.
On the other hand, since \(w\) has exponential tail, a standard large deviation upper bound shows that
\begin{equation}
\sum_{N:\,\abs{N-n} > n^{1/2 +\delta}} P_0(S_N=y) \leq e^{-c_2 n^{2\delta'}},
\end{equation}
for some \(c_2>0\) and some small \(\delta'>0\).
Therefore, it follows from~\eqref{eq:RW} that
\begin{equation}
e^{\nu_s n} G_{\lambda}(0,ns) = \dfrac{C_\lambda}{|ns|^{(d-1)/2}} \bigl(1+{\mathsf o}_n(1)\bigr),
\end{equation}
with \(C_\lambda = \tilde{C}_\lambda\mu^{(d-1)/2}\).
\end{proof}
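In the special case \(d=1\), the prefactor \(n^{-(d-1)/2}\) is constant and~\eqref{eq:RW} identifies \(e^{\nu_s n}G_\lambda(0,ns)\) with a renewal function, whose limit is \(1/E_0(S_1)\) by the renewal theorem. A minimal sketch (with an arbitrary, hypothetical increment law \(w\) of positive mean, not derived from any specific \(\psi\)):

```python
# d = 1 sketch: the right-hand side of (eq:RW) is a renewal function.
# Increment law w (hypothetical) on {1, 2, 3}; the renewal theorem gives
# sum_N P_0(S_N = n) -> 1 / E_0(S_1), i.e. a constant OZ prefactor.
w = {1: 0.5, 2: 0.3, 3: 0.2}
mu = sum(y * p for y, p in w.items())  # mean increment E_0(S_1) = 1.7

N = 200
u = [0.0] * (N + 1)
u[0] = 1.0  # a walk of length 0 sits at the origin
for n in range(1, N + 1):
    u[n] = sum(w[y] * u[n - y] for y in w if y <= n)

# u[n] converges (exponentially fast here) to 1/mu
```

This is exactly the quantity \(\sum_{N\geq 1}P_0(S_N=n)\) appearing in the proof, up to the boundary term \(u[0]\).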
\subsection{Absence of saturation at large \(\lambda\)}
\label{sec:lambda_sat_less_lambda_c}
\begin{lemma}\label{lem:nontrivial_mass_gap_regim_d1}
Suppose \(d=1\) and \(*\in\{\mathrm{Ising}, \mathrm{Potts}, \mathrm{FK}, \mathrm{XY}\}\).
Then, there exists \(\lambda_0 \in (0, \infty)\) such that \(0 < \nu^*(\lambda) < \abs{1}\) when \(\lambda > \lambda_0\).
\end{lemma}
\begin{proof}
In all the models \(\{\mathrm{Ising}, \mathrm{Potts}, \mathrm{FK}, \mathrm{XY}\}\), \(\nu(\lambda) > 0\) for any \(\lambda > 0\) when \(d=1\).
The claim is thus an easy consequence of the finite-energy property for FK percolation: bound \(\Phi^{\mathrm{FK}}(0\leftrightarrow x)\) from below by the probability that a given minimal-length nearest-neighbor path \(\gamma\) is open, which is at least \(p_\beta^{\norm{x}_1}\) with \(\lim_{\beta\to\infty} p_\beta = 1\).
A similar argument is available for the \(\mathrm{XY}\) model: set all coupling constants not belonging to \(\gamma\) to \(0\) by Ginibre inequalities and explicitly integrate the remaining one-dimensional nearest-neighbor model to obtain a similar bound.
\end{proof}
\begin{lemma}\label{lem:nontrivial_mass_gap_regim_GFF_KRW}
Suppose \(*\in\{\mathrm{GFF}, \mathrm{KRW}\}\). Suppose either \(d=1\), or \(d\geq 3\) and \(\lambda^{*}_{\mathrm{c}}=\lambda^{*}_{\mathrm{exp}}\). Then, \(\lambda^{*}_{\mathrm{sat}} < \lambda^{*}_{\mathrm{exp}}\).
\end{lemma}
\begin{proof}
We treat only the KRW, as the extension to the GFF is immediate.
Suppose first that \(d\geq 3\).
Then, \(G_{\lambda_{\mathrm{c}}}(x,y)\) is finite for any \(x,y\in\mathbb{Z}^d\) and does not decay exponentially fast.
So, \(\nu(\lambda_{\mathrm{c}})\) is well defined and equals \(0\).
Left-continuity of \(\nu\) and the assumption \(\lambda_{\mathrm{c}}=\lambda_{\mathrm{exp}}\) conclude the proof.
For \(d=1\) we use the characterization of Lemma~\ref{lem:SaturationKRW}.
By our choice of normalization for \(J\) and the definition of \(\lambda_{\mathrm{sat}}^{\mathrm{KRW}}\) and \(\mathfrak{s}_t\),
\begin{gather*}
2\sum_{n\geq 1} \psi(n) e^{-n|1|} = 1 = \lambda_{\mathrm{c}}
\quad\text{ and }\quad
\lambda_{\mathrm{sat}}^{\mathrm{KRW}} = \Bigl( \sum_{n\geq 1} \psi(n) (1 + e^{-2n|1|}) \Bigr)^{\!-1}.
\end{gather*}
In particular, defining a probability measure \(p\) on \(\mathbb{N}\) by \(p(n) = 2\psi(n) e^{-n|1|}\), one obtains
\begin{equation*}
\lambda_{\mathrm{sat}}^{\mathrm{KRW}} = \Bigl( \sum_{n\geq 1} p(n)\cosh(n|1|) \Bigr)^{\!-1} < 1 = \lambda_{\mathrm{c}}^{\mathrm{KRW}}.
\end{equation*}
The conclusion will follow once we prove that $\lambda_{\exp}^{\mathrm{KRW}}=1$. Fix $\lambda<1$ and $\delta >0$. Then
\[
\sum_{n\in\mathbb{Z}} e^{\delta n} G^{\mathrm{KRW}}_{\lambda}(0,n)
=
\sum_{n\in\mathbb{Z}} e^{\delta n} \sum_{k\geq 1} \sum_{\substack{y_1, \dots, y_k\in\mathbb{Z} \setminus \{0\} \\ \sum_{i=1}^k y_i= n }} \prod_{i=1}^k \lambda J_{0,y_{i}}
=\sum_{k=1}^{\infty}\Bigl(\lambda\sum_{y\neq 0}J_{0,y}e^{\delta y}\Bigr)^{\!\!k}.
\]
By our choice of normalization for $J$ and the fact that $J_{0,y}$ has exponential tails, it is possible to find $\delta$ small enough such that the sum over $k$ is finite, which proves that $\lambda_{\exp}^{\mathrm{KRW}}=1$.
\end{proof}
\begin{lemma}\label{lem:nontrivial_mass_gap_regim_AnyD}
Suppose \(d>1\) and consider Bernoulli percolation or the Ising model. Suppose \(\lambda_{\mathrm{exp}}=\lambda_{\mathrm{c}}\).
Then, there exists \(\lambda_0 \in [0, \lambda_{\mathrm{exp}})\) such that, for any \(s\in\mathbb{S}^{d-1}\) and \(\lambda \in (\lambda_0, \lambda_{\mathrm{exp}})\),
\begin{equation*}
\nu_s(\lambda) < |s|.
\end{equation*}
\end{lemma}
\begin{proof}
The existence of \(\lambda_0\) follows from Lemma~\ref{lem:nu_left_cont} and the fact that \(\nu_s(\lambda_{\mathrm{c}}) = 0\) which is obtained by equivalence of directions for \(\nu\) (Lemma~\ref{lem:rate_equiv_directions}) and divergence of the susceptibility at \(\lambda_{\mathrm{c}}\).
The latter is proved for the Ising model and Bernoulli percolation in~\cite{Duminil-Copin+Tassion-2016}. The conclusion follows by the assumption \(\lambda_{\mathrm{exp}}=\lambda_{\mathrm{c}}\).
\end{proof}
\section{``Non-summable'' case}
In this section we consider directions \(s\in\mathbb{S}^{d-1}\) for which
\begin{equation}\label{eq:NonSummabilityCondition}
\sum_{y\neq 0} \psi(y) e^{-\mathfrak{s}_t(y)} = +\infty,
\end{equation}
where \(t\) is any vector dual to \(s\).
We prove that saturation does not occur in direction \(s\) at any value of \(\lambda\), provided that the model at hand satisfies~\ref{hyp:J_path_lower_bnd}.
\medskip
Before proving the general claim, let us just mention that the claim is almost immediate when \(\psi(ns)\) is not uniformly bounded in \(n\).
Indeed, suppose \(\nu_s(\lambda)=|s|\).
Then, by~\ref{hyp:sub_mult}, \(G_{\lambda}(0,ns)\leq a_{\lambda}^{-1}e^{-\nu_s n}\) (using~\eqref{eq:nu_infimum}), while by~\ref{hyp:J_path_lower_bnd}, \(G_{\lambda}(0,ns)\geq C_{\lambda} \psi(ns)e^{-n|s|}\).
Combining these two bounds, we deduce that
\begin{equation*}
C_{\lambda} \psi(ns)e^{-n|s|}\leq G_{\lambda}(0,ns) \leq a_{\lambda}^{-1}e^{-n|s|},
\end{equation*}
which implies that \(\psi(ns)\) is bounded uniformly over \(n\).
\medskip
Let us now turn to a proof of the general case.
\subsection{Absence of saturation at any \(\lambda\)}
\begin{lemma}\label{lem:mass_gap_non_summable_surcharge}
Suppose~\ref{hyp:J_path_lower_bnd} and~\ref{hyp:PsiQuasiIsotropic}. Let \(s\in\mathbb{S}^{d-1}\) and let \(t\) be a vector dual to \(s\). Assume that \(\partial\mathscr{U}\) is quasi-isotropic in direction \(s\) and that~\eqref{eq:NonSummabilityCondition} holds.
Then, for any \(\lambda>0\), \(\nu_s(\lambda)<|s|\).
\end{lemma}
\begin{proof}
We use the notation of Section~\ref{sec:QuasiIsotropy}. In particular, we assume that \(\mathscr{N}\) and \(\epsilon\) have been chosen small enough to ensure that either \(g\equiv 0\), or \(g\) vanishes only at \(0\).
Let \(\delta>0\) and consider the cone \(\mathscr{Y}_{t,\delta} = \setof{y\in\mathbb{Z}^d}{\mathfrak{s}_t(y) \leq \delta |y|}\).
When \(g\) vanishes only at \(0\), we further assume that \(\delta\) is small enough to ensure that \(\mathscr{Y}_{t,\delta} \cap \partial\mathscr{U} \subset \mathscr{N}\) (this will be useful in the proof of Lemma~\ref{lem:ExplicitCond} below).
It follows from~\eqref{eq:psi_subexp} that
\[
\sum_{y\notin\mathscr{Y}_{t,\delta}} \psi(y) e^{-\mathfrak{s}_t(y)}
\leq
\sum_{y\notin\mathscr{Y}_{t,\delta}} \psi(y) e^{-\delta |y|} < \infty .
\]
Since we assume that~\eqref{eq:NonSummabilityCondition} holds, this implies that
\[
\sum_{y\in\mathscr{Y}_{t,\delta}} \psi(y) e^{-\mathfrak{s}_t(y)} = +\infty.
\]
Let \(\mathcal{T}_R(s) = \setof{y\in\mathbb{R}^d}{\normsup{y-(y\cdot s)s} \leq R}\). We will need the following lemma.
\begin{lemma}\label{lem:intermediaire}
For any \(R>0\) large enough, we have
\begin{equation}\label{eq:DivergenceSubCone}
\inf_{x\in\mathcal{T}_R(s)} \sum_{y\in (x+\mathscr{Y}_{t,\delta}) \cap \mathcal{T}_R(s)} \psi(y-x) e^{-\mathfrak{s}_t(y-x)} = \infty .
\end{equation}
\end{lemma}
This lemma is established below.
In the meantime, assume that the lemma is true. Then, one can find \(R>0\) such that
\begin{equation}\label{eq:BigEnough}
\inf_{x\in\mathcal{T}_R(s)} \sum_{y\in (x+\mathscr{Y}^R_{t,\delta}) \cap \mathcal{T}_R(s)} \psi(y-x) e^{-\mathfrak{s}_t(y-x)} \geq e^2 C_\lambda^{-1} .
\end{equation}
where we have introduced the truncated cone \(\mathscr{Y}^R_{t,\delta} = \setof{y\in\mathscr{Y}_{t,\delta}}{\normsup{y}\leq R}\).
We are now going to construct a family of self-avoiding paths connecting \(0\) to \(ns\) in the following way: we first set \(M=\frac{n}{2R}\) and choose \(y_1, y_2, \dots, y_{M+1}\) in such a way that
\begin{itemize}
\item \(y_k \in \mathscr{Y}^R_{t,\delta}\) for all \(1\leq k\leq M\);
\item for all \(1\leq m\leq M\), \(\sum_{k=1}^m y_k \in \mathcal{T}_R(s)\);
\item \(y_{M+1} = ns- \sum_{k=1}^M y_k\).
\end{itemize}
Note that, necessarily, \(s\cdot y_{M+1} \geq n/2\) and \(y_{M+1}\in \mathcal{T}_R(s)\).
We then consider the set \(\Gamma\subset\mathsf{SAW}(0,ns)\) of all self-avoiding paths \((0, y_1, y_1+y_2, \dots, y_1+\dots+y_{M}, ns)\) meeting the above requirements.
We thus obtain that, by~\ref{hyp:J_path_lower_bnd},
\begin{align*}
e^{n|s|} G_{\lambda}(0,ns)
&\geq
C_{\lambda}\sum_{y_1}\dots\sum_{y_M} \prod_{k=1}^{M+1} C_{\lambda} \psi(y_k) e^{-|y_k| + y_k\cdot t} \\
&=
(C_{\lambda})^{M +2} e^{{\mathsf o}(n)} \sum_{y_1}\dots\sum_{y_M} \prod_{k=1}^{M} \psi(y_k) e^{-\mathfrak{s}_t(y_k)}\\
&\geq
(C_{\lambda})^{M +2} e^{{\mathsf o}(n)} \sum_{y_1}\dots\sum_{y_{M-1}} \prod_{k=1}^{M-1} \psi(y_k) e^{-\mathfrak{s}_t(y_k)} (e^2 C_{\lambda}^{-1})\\
&\geq\cdots\geq
(C_{\lambda})^{M +2} e^{{\mathsf o}(n)} (e^2 C_{\lambda}^{-1})^M = C_{\lambda}^2 e^{n/R +{\mathsf o}(n)},
\end{align*}
where the sums are over \( y_1, \ldots, y_M\) meeting the requirements for the path to be in \(\Gamma\).
The term \(e^{{\mathsf o}(n)}\) in the second line is the contribution of \(y_{M+1}\) (\(y_{M+1}\in\mathcal{T}_R(s)\) and its length is at least \(n/2\), so \(\mathfrak{s}_t(y_{M+1}) = {\mathsf o}(n)\) and \(\psi(y_{M+1})=e^{{\mathsf o}(n)}\)).
For the third and fourth lines, we apply~\eqref{eq:BigEnough} \(M\) times.
\end{proof}
There only remains to prove Lemma~\ref{lem:intermediaire}.
The latter is a direct consequence of the following quantitative version of~\eqref{eq:NonSummabilityCondition}, which can be useful to explicitly determine whether saturation occurs in a given direction; see Remark~\ref{rem:DirectionDepSaturation} in Section~\ref{sec:MainTheorems} for an example. Below, it will be convenient to set \(g^{-1}\equiv 1\) when \(g\equiv 0\).
\begin{lemma}\label{lem:ExplicitCond}
Under the assumptions of Lemma~\ref{lem:mass_gap_non_summable_surcharge}, Condition~\eqref{eq:NonSummabilityCondition} is equivalent to the condition
\begin{equation}\label{eq:ExplicitCondition}
\sum_{\ell\geq 1} \psi_0(\ell) (\ell g^{-1}(1/\ell))^{d-1} = \infty.
\end{equation}
\end{lemma}
\begin{proof}
We shall do this separately for the case \(g\equiv 0\) (\(s_0\) belongs to the ``interior'' of a facet of \(\partial\mathscr{U}\)) and when \(g\) vanishes only at \(0\).
\medskip
\textbf{Case 1: \(\boldsymbol{g\equiv 0}\).}
In this case, we can find \(\eta>0\) such that \(\mathfrak{s}_t(y) = 0\) for all \(y\) in the subcone \(\mathscr{C}_\eta(s) = \setof{\lambda s'}{\lambda>0,\, s'\in\mathbb{S}^{d-1},\, \norm{s'-s} < \eta}\).
In particular, for all \(y\in\mathscr{C}_\eta(s)\),
\[
\psi(y) e^{-\mathfrak{s}_t(y)} = \psi(y),
\]
from which the claim follows immediately using~\ref{hyp:PsiQuasiIsotropic}.
\medskip
\textbf{Case 2: \(\boldsymbol{g > 0}\).}
We now assume that \(g(\tau) > 0\) for all \(\tau\neq 0\) (remember the setting of Section~\ref{sec:QuasiIsotropy}).
For simplicity, let \(u\in\mathbb{Z}^d\) be such that \(\normsup{u}=R\) and write \(\mathscr{C}_u = \mathscr{Y}_{t,\delta} \cap \bigl(u + \mathcal{T}_R(s)\bigr)\) for the corresponding sub-cone.
Given \(y\in\mathscr{Y}_{t,\delta}\), we write \(y^\parallel = y\cdot \hat{t}\) and \(y^\perp = y - y^\parallel \hat{t}\). In particular, we have
\[
y^\parallel = \frac{|y|}{\norm{t}} - |y| f\biggl(\frac{y^\perp}{|y|}\biggr).
\]
This implies that
\[
\mathfrak{s}_t(y)
= |y| - t\cdot y
= |y| - \norm{t} y^\parallel
= \norm{t} |y|\, f(y^\perp/|y|).
\]
We conclude that
\begin{equation}\label{eq:boundsOnSurcharge}
C_+ |y|\, g(\norm{y^\perp}/|y|)
\geq
\mathfrak{s}_t(y)
\geq
C_- |y|\, g(\norm{y^\perp}/|y|)
\end{equation}
where we have set \(C_\pm = c_\pm \norm{t}\).
Using~\ref{hyp:PsiQuasiIsotropic}, we can write
\begin{align*}
\sum_{y\in\mathscr{C}_u} \psi(y) e^{-\mathfrak{s}_t(y)}
\leq
C_\psi^+\sum_{\ell\geq 1} \psi_0(\ell) \sum_{r\geq 0} \sum_{\substack{y\in\mathscr{C}_u\\\normI{y}=\ell\\\norm{y^\perp}\in [r,r+1)}}
e^{-\mathfrak{s}_t(y)}
\leq
c_1 \sum_{\ell\geq 1} \psi_0(\ell) \sum_{r\geq 0}
r^{d-2}
e^{-c_2\ell g(c_3r/\ell)} .
\end{align*}
Let \(x=\frac{1}{c_3} \ell g^{-1}(1/\ell)\). The sum over \(r\) is easily bounded:
\begin{align*}
\sum_{r\geq 0}
r^{d-2}
e^{-c_2 \ell g(c_3 r/\ell)}
&\leq
\sum_{k\geq 0} \sum_{r=kx}^{(k+1)x} r^{d-2} e^{-c_2\ell g(c_3 r/\ell)} \\
&\leq
\sum_{k\geq 0} e^{-c_2 \ell g(k g^{-1}(1/\ell))} \sum_{r=kx}^{(k+1)x} r^{d-2} \\
&\leq
x^{d-1} \sum_{k\geq 0} (k+1)^{d-1} e^{-c_2 \ell g(k g^{-1}(1/\ell))} .
\end{align*}
Let us prove that the last sum is finite. Let \(h(k) = g(k g^{-1}(1/\ell))\). Notice that \(h(0)=g(0)=0\) and \(h(1)=1/\ell\). Since \(g\) is convex and increasing, \(h\) is convex and increasing as well. Therefore, convexity implies that
\begin{equation*}
h(1)
=
h\bigl( \tfrac1k\cdot k + (1-\tfrac1k)\cdot 0 \bigr)
\leq
\tfrac1k h(k) + (1-\tfrac1k) h(0)
=
\tfrac1k h(k) .
\end{equation*}
Therefore, we get
\begin{equation*}
\sum_{k\geq 0} (k+1)^{d-1} e^{-c_2 \ell g(k g^{-1}(1/\ell))}
\leq
\sum_{k\geq 0} (k+1)^{d-1} e^{-c_2 k} ,
\end{equation*}
which implies the following upper bound
\begin{equation*}
\sum_{y\in\mathscr{C}_u} \psi(y) e^{-\mathfrak{s}_t(y)}
\leq
c_4\sum_{\ell\geq 1} \psi_0(\ell) (\ell g^{-1}(1/\ell))^{d-1} .
\end{equation*}
Similarly, using the upper bound in~\eqref{eq:boundsOnSurcharge} (and once more~\ref{hyp:PsiQuasiIsotropic}), we get the following lower bound:
\begin{align*}
\sum_{y\in\mathscr{C}_u} \psi(y) e^{-\mathfrak{s}_t(y)}
&\geq
C_\psi^-\sum_{\ell\geq 1} \psi_0(\ell) \sum_{r\geq 0} \sum_{\substack{y\in\mathscr{Y}_{t,\delta}\\\normI{y}=\ell\\\norm{y^\perp}\in [r,r+1)}} e^{-\mathfrak{s}_t(y)} \\
&\geq
C_\psi^-\sum_{\ell\geq 1} \psi_0(\ell) \sum_{r=0}^{\frac{1}{c_6}\ell g^{-1}(1/\ell)} r^{d-2} e^{-c_5\ell g(c_6 r/\ell)} \\
&\geq
c_7 \sum_{\ell\geq 1} \psi_0(\ell) \sum_{r=0}^{\frac{1}{c_6}\ell g^{-1}(1/\ell)} r^{d-2} \\
&\geq
c_8 \sum_{\ell\geq 1} \psi_0(\ell)(\ell g^{-1}(1/\ell))^{d-1} . \qedhere
\end{align*}
\end{proof}
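To illustrate Condition~\eqref{eq:ExplicitCondition}, suppose for concreteness (this quadratic form is an assumption made only for this example) that \(g(\tau) = c\tau^2\) for some \(c>0\). Then \(\ell g^{-1}(1/\ell)\) is of order \(\sqrt{\ell}\), and Condition~\eqref{eq:ExplicitCondition} becomes
\begin{equation*}
\sum_{\ell\geq 1} \psi_0(\ell)\, \ell^{(d-1)/2} = \infty .
\end{equation*}
In particular, for \(\psi_0(\ell) = \ell^{-\alpha}\), Condition~\eqref{eq:NonSummabilityCondition} holds in direction \(s\) if and only if \(\alpha \leq (d+1)/2\).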
\section{Acknowledgments}
Dima Ioffe passed away before this paper was completed.
The first stages of this work were accomplished while he and the fourth author were stranded at Geneva airport during 31 hours. In retrospect, the fourth author is really grateful to easyJet for having given him that much additional time to spend with such a wonderful friend and collaborator.
YA thanks Hugo Duminil-Copin for financial support. SO is supported by the Swiss NSF through an early PostDoc.Mobility Grant. SO also thanks the university Roma Tre for its hospitality, hospitality supported by the ERC (ERC CoG UniCoSM, grant agreement n.724939). YV is partially supported by the Swiss NSF.
\section{INTRODUCTION}
\subsection{Motivation}
Nickel-based Superalloys are found in a wide range of applications. The most
prominent use is in the manufacture of gas turbines for use in commercial and
military aircraft, power generation, and marine propulsion. A jet engine experiences temperatures ranging from 300\,K to 1500\,K along with very high pressures. Superalloys exhibit high creep resistance
at high temperatures, good surface stability, and corrosion and oxidation
resistance. Plastic deformation of any material can be understood by scaling
down to the molecular level. At this scale, the motion of dislocations plays a vital role in controlling plastic deformation, and the distribution and size of the Ni$_{3}$Al precipitates act as obstructions to dislocation motion.
\subsection{Theory}
The microstructure of nickel-based superalloys consists of
ordered $\gamma'$ Ni$_{3}$Al precipitates with an L1$_{2}$ structure, coherently
set in the $\gamma$-matrix, a face-centred cubic (fcc) nickel-based solid
solution. Nickel-based superalloys derive much of their excellent mechanical properties at high temperature from the $\gamma'$ precipitates, which have a roughly cuboidal shape. The dimensions of the precipitates and channels are in the sub-micrometer range. The essential role of the $\gamma'$ precipitates in the matrix is to obstruct the motion of dislocations by acting as bulk obstacles. The $\gamma$-$\gamma'$ interface plays a vital role, as there exists a misfit between
the lattice parameters of the precipitate and the matrix. The lattice misfit is a measure of the incoherency between the $\gamma'$ precipitate and the
$\gamma$ matrix. Mathematically, the misfit can be calculated as
\[\delta = 2\frac{a_{\gamma'} - a_\gamma}{a_{\gamma'} + a_\gamma}\]
where $a_\gamma$ is the lattice parameter of the $\gamma$ matrix and $a_{\gamma'}$ is that of the $\gamma'$ precipitate. The lattice misfit specified in a given simulation affects the overall dislocation dynamics observed and, subsequently, the predicted properties. From the scientific literature, the values of $a_\gamma$ and $a_{\gamma'}$ are taken as 3.52\,\AA{} and 3.529\,\AA{}, respectively. The misfit is hence very small, and the interface is assumed to be coherent in this study.
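As a quick numerical check of the formula above, the following stand-alone script (illustrative only, not part of the simulation workflow) evaluates the misfit for the quoted lattice parameters; the result is roughly 0.26\,\%.

```python
# Lattice misfit delta = 2 (a_gp - a_g) / (a_gp + a_g),
# using the lattice parameters quoted above (in Angstrom).
a_gamma = 3.520        # gamma matrix (Ni)
a_gamma_prime = 3.529  # gamma' precipitate (Ni3Al)

delta = 2 * (a_gamma_prime - a_gamma) / (a_gamma_prime + a_gamma)
print(f"misfit delta = {delta:.5f} ({100 * delta:.3f} %)")
```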
\subsection{Objective}
The objective of this project is to perform a molecular dynamics simulation to study the interaction between the dislocation and the precipitate. One of the important goals is to investigate the dislocation-precipitate interaction for different precipitate sizes and distributions. To achieve this, a model has to be set up to perform the simulation. The system consists of a nickel ($\gamma$) matrix, a single edge dislocation, and two cuboidal Ni$_{3}$Al ($\gamma'$) precipitates.
This paper is organized as follows. Section 2 provides a brief literature survey. Section 3 describes the procedure to set up the simulation cell and the parameters associated with the model. Section 4 shows the results of the simulation, observations, and conclusions. Section 5 contains the acknowledgements.
\section{Literature Review}
\subsection{Ni-based Superalloys}
Ni-based superalloys are very useful for high-temperature applications because of their strength retention capabilities and high creep resistance. The shape of the precipitates (Ni$_{3}$Al) is usually a thin plate-like structure, or cuboidal when the volume fraction occupied by the precipitate is moderately high. The interaction of dislocations with precipitates can result in climb, bow-out (the so-called Orowan bowing), or the cutting of the precipitate by the dislocation.
While precipitation hardening has been studied extensively using continuum mechanics, its underlying mechanism at the atomistic scale has not been investigated satisfactorily. The continuum assumption makes use of both long-range and short-range forces, i.e., those that arise from the intrinsic stress field associated with a screw/edge dislocation, or those that arise from the interaction of the dislocation and the particle, which are in turn thought to originate from the mismatch of the elastic properties between the matrix and precipitate phases. Eshelby's equivalence principle has been used in various forms to achieve homogenization of precipitate-hardened alloys and simulate their macroscale response. There are several major assumptions made by these analytical constitutive methods that attempt to study dislocation-particle interaction. The assumptions include, but are not limited to, line tension in the dislocation line, coherency between the precipitate and the matrix phase, and relatively smooth dislocation lines. They completely neglect the possibility of stacking-fault formation, which would introduce anisotropy in an otherwise isotropic polycrystal. The smoothness of the dislocation line is another gross simplification that negates the possibility of kink formation around the precipitate, especially if the precipitate is cuboidal. The strengthening effect is particularly pronounced if the volume fraction of the precipitates is moderately high, as is generally the case with stable cuboidal precipitates.
\subsection{A brief look at continuum based approaches}
The first publication on the continuum-mechanics-based interaction between inclusions and dislocations was by Nabarro (1940)\cite{Mott1940}. He calculated the stresses in a matrix with a harder inclusion based on the difference in lattice parameters. The internal force attributed to the system is non-zero even in the absence of external loading, due to the intrinsic stress field around a dislocation. Ardell\cite{ardell1985precipitation} and Ashby provide greater detail in their discussions of the interactions between precipitates and the matrix. These discussions are again based on the line-tension concept, and are limited in their application to dispersed particulates in an infinite medium. The effects of matrix non-linearity and aspect ratio on strength are studied in Lee and Mear (1991)\cite{lee1991effect}. An energy-based formulation was developed by Zhu and Zbib (1995)\cite{zhu1995macroscopic} to study the elasto-plastic response of metal-matrix composites (MMCs) and understand the strengthening effect. They modeled the MMC as rigid inclusions in an infinite plastically flowing matrix, and a constitutive model was derived from an energy-based framework.
As discussed before, the continuum models are restricted to specific geometries and are not generic formulations that can be used to address complicated loading scenarios.
\section{Usage of molecular dynamics}
The short-range interactions can be effectively studied with the LAMMPS molecular dynamics package. This provides us with a clear understanding of the unit processes that are
activated when a dislocation interacts with a given precipitate. The system studied here is the coherent Ni-Ni$_{3}$Al matrix precipitate super-alloy.
\subsection{Model definition and lattice generation}
The geometry of the system consists of the nickel matrix and the Ni$_{3}$Al precipitate: cuboidal precipitates in a single crystal of nickel. The orientation is along the Cartesian axes, because the stacking-fault energy of nickel is particularly low in the [100] direction. This precaution turns out to be unnecessary, as a dislocation loop always forms once the dislocation starts bowing around the precipitate. The alloy is created using a user-defined basis for a crystal structure that contains Al atoms at the corners and Ni atoms at the face centres, as is the case with the precipitate phase. The basis allocates a particular type of atom to each position and generates the lattice along the specified orthonormal directions.
\subsection{Creation of the dislocation}
Two methods were devised to create the dislocations, as explained in [6]. The first involved the creation of known sources of dislocations, like voids. The major disadvantage of this setup was that voids only created partials, whereas for our simulations we needed full edge dislocations. This was overcome by using the other method suggested in [6]: define the basis associated with the lattice (both precipitate and matrix) and dump the coordinates to a file. The coordinates were read from this file into a third-party application called Atomsk, which allowed us to delete a set of half-planes and move the remaining atoms according to the linear elastic displacement field surrounding the dislocation (in this case, an edge dislocation). One other way of creating a dislocation in LAMMPS itself was to remove two half-planes and let the system go to equilibrium, thereby evolving into a system of two partials that can be controlled using shear forces. However, this method was not adopted, after significant deliberation, due to the enormous times involved in equilibration, within which partials and misfits associated with the precipitate-matrix interface started evolving, hence undercutting the purpose of the study. As mentioned before, the coordinates of the lattice without the dislocation are dumped into a file which is read by Atomsk. The dislocation line direction and glide plane are taken as inputs, along with the magnitude of the Burgers vector and Poisson's ratio for the matrix. The output created by these arguments is used in the analysis.
\begin{center}
Fig (1) - An edge dislocation created in Atomsk
\includegraphics[width=5cm]{dislocation.PNG}
\end{center}
\begin{center}
\includegraphics[width=5cm]{misfits.PNG}
Fig (2) - A misfit at the centre of the precipitate
\end{center}
\subsection{Parameters associated with the simulation}
The compute style commands are used to calculate the thermodynamic outputs of the system. By default, temperature, pressure, and volume at the end of the current timestep are provided as outputs. However, other parameters required to study the effects of dislocation motion, such as stress, volumetric deformation, and strain, must be requested by the user.
The parameters calculated here include stress/atom and voronoi/atom, which are in turn used to obtain a volume-averaged stress measure over all the Voronoi cells. Following the specification of the compute commands, the energy minimization is done with the conjugate-gradient method. The potential used is Mishin's EAM potential for the Ni-Al alloy. The EAM energy is calculated as follows.
\[U_{i}=F_{\alpha}\Bigl(\sum_{j\neq i}\rho_{\beta}\left(r_{ij}\right)\Bigr)+\frac{1}{2}\sum_{j\neq i}\phi_{\alpha\beta}\left(r_{ij}\right)\]
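The functional form above can be illustrated with a toy per-atom energy evaluation. The embedding, density, and pair functions below are simple stand-in forms chosen only for illustration; the actual Mishin potential is tabulated and is read by LAMMPS from a potential file.

```python
import math

# Toy EAM ingredients (illustrative stand-ins, NOT Mishin's tabulated potential):
def F(rho_bar):
    return -math.sqrt(rho_bar)   # embedding energy (Finnis-Sinclair-like form)

def rho(r):
    return math.exp(-r)          # electron-density contribution of a neighbor at distance r

def phi(r):
    return math.exp(-2.0 * r)    # short-ranged pair repulsion

def eam_site_energy(neighbor_distances):
    """Per-atom energy U_i = F(sum_j rho(r_ij)) + 1/2 sum_j phi(r_ij)."""
    rho_bar = sum(rho(r) for r in neighbor_distances)
    pair = 0.5 * sum(phi(r) for r in neighbor_distances)
    return F(rho_bar) + pair

# Nearest-neighbor shell of fcc Ni: 12 neighbors at a/sqrt(2), with a = 3.52 Angstrom.
r_nn = 3.52 / math.sqrt(2.0)
print(eam_site_energy([r_nn] * 12))
```

The embedding term is what distinguishes EAM from a plain pair potential: the energy of an atom is not a sum of independent bond energies but depends nonlinearly on its total local electron density.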
An energy tolerance of 1\,pJ was adopted. Note that this is not particularly low (usually, the values used are on the order of fJ). This is acceptable because the linear elastic displacement field associated with the dislocation is already programmed into the system. Moreover, reducing the energy tolerance caused the required computation time to increase dramatically, and at tighter tolerances the misfits associated with the Ni-Ni$_{3}$Al interface were disappearing; since these are critical to studying the mechanisms at the atomistic scale, the tolerance was kept at 1\,pJ. It is unclear why this phenomenon is observed; further investigation is necessary here.
Following the energy minimization, an ensemble of velocities is created to achieve equilibration via temperature rescaling at a set number of NVE integration timesteps. Note that the window within which rescaling is done and the fraction to which it is done are set in this simulation at 10.0 and 1.0 (1.0 meaning that the temperature is reset fully to the desired equilibrium temperature), respectively. This acknowledges the fact that even slight deviations from the set temperature (in this case, 30\,K) strongly affect properties like intrinsic dislocation densities and especially stacking-fault energies. The thermal temperatures are calculated after explicitly subtracting the bulk advection components using the keyword \texttt{thermo\_modify}. The equilibration was performed for 5\,ps and the temperature results are shown in the corresponding section of the paper.
The shearing action is accomplished by setting a velocity boundary condition on the top surface. This is followed by velocity rescaling to ensure equilibration every 10 timesteps. The temperature is recalculated taking into consideration only the glide-direction and dislocation-direction velocity components; this option thus overrides the default method of computing temperature for the purposes of this analysis.
The values of stress/atom and voronoi/atom obtained from this analysis are dumped into a LAMMPS dump file, which is later used for visualization with OVITO.
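The volume-averaged stress measure mentioned earlier can be assembled from the per-atom dump columns roughly as follows. This is a sketch with hypothetical variable names; it relies on the LAMMPS convention that stress/atom is reported in pressure-times-volume units, so summing the per-atom values and dividing by the total Voronoi volume yields an average stress.

```python
def volume_averaged_stress(per_atom_stress_vol, voronoi_volumes):
    """Average one stress component over the cell.

    per_atom_stress_vol: per-atom stress*volume values (one component of stress/atom)
    voronoi_volumes:     per-atom Voronoi cell volumes (from voronoi/atom)
    """
    total_volume = sum(voronoi_volumes)
    return sum(per_atom_stress_vol) / total_volume

# Tiny two-atom example: (2.0 + 4.0) / (1.0 + 3.0) = 1.5
print(volume_averaged_stress([2.0, 4.0], [1.0, 3.0]))
```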
\begin{center}
\includegraphics[width=8cm]{Untitled_Diagram.png}
\newline
Fig (3)- Simulation flow chart
\newline
\newline
\end{center}
\subsection{Model parameters}
The dimensions and the model parameters are shown below. The distance and the particle size are subject to variation and are denoted by 'd' and 's' respectively.
The parameter d was varied from 20 lattice units to 50 lattice units, in steps of 10, whereas 's' was taken as 10,15 or 20 lattice units.
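For bookkeeping, the sweep over 'd' and 's' described above spans a small grid of candidate cases, which can be enumerated directly (a sketch; as the results below show, not every combination was simulated).

```python
from itertools import product

distances = [20, 30, 40, 50]  # inter-precipitate distance d, in lattice units
sizes = [10, 15, 20]          # precipitate size s, in lattice units

cases = list(product(distances, sizes))
print(len(cases))  # 12 candidate (d, s) combinations
```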
\begin{center}
\includegraphics[width=9cm]{model.PNG}
Fig (4) - Diagram showing the model dimensions
\newline
\end{center}
\section{SIMULATION RESULTS}
\subsection{Qualitative and physically meaningful results}
The energy minimization results in a lattice with the two partials that were explicitly created with Atomsk as well as misfits associated with matrix-precipitate mismatch.
\begin{center}
\includegraphics[width=8cm]{d40_s15initial.png}
\linebreak
Fig (5)-Showing initial dislocations
\end{center}
The figure contains several Shockley partial dislocations with Burgers vector $\frac{1}{6}[112]$. The general motion of the dislocations and their interactions with the precipitates consists of distinctly observable regimes. It first starts with the dislocation hitting the precipitate phase. The force acting on the dislocation due to the applied shear stress is always perpendicular to the dislocation line, so the segment pinned between the two precipitates will bow out and can generate dislocation loops. The normal force couple created by the shearing produces a torque on the dislocation line between the precipitates, along a direction perpendicular to the slip plane. As a result, the dislocation line first assumes a semi-circle; with the force per unit length acting normal to the line vector, it then assumes a ``bow'' shape and a dislocation loop is generated. One must note that, for a particular Burgers vector of the dislocation, the Burgers vector of the misfit dislocation generated by the mismatch between the matrix and the precipitate will have a particular orientation, as discussed above. The stacking fault is present within the dislocation loop and its crystal structure is HCP, as expected.
Note that this rendered picture (fig (7)) gives the impression of two different dislocations interacting. However, this is a periodic cell, implying that the dislocation in this cell is interacting with the bow-out from the previous cell, thereby giving the appearance of two partials from the same cell interacting with each other. The misfit dislocations at the particle act as effective pinning agents for dislocation motion, thereby hardening the alloy.
\begin{center}
\includegraphics[width=8cm]{d40_s15bow.png}
\linebreak
Fig (6) - Dislocation bow-out around the particle
\end{center}
The stacking fault generation is expected wherever there are dislocation loops. One of the loops is found in between the two particles, in the y-z direction (not shown). This is shown in the figure below.
\begin{center}
\includegraphics[width=8cm]{d40_s15stack.png}
\linebreak
Fig (7) - Stacking fault showing HCP Ni around the particle
\newline
\newline
\newline
\newline
\newline
\newline
\newline
\newline
\end{center}
This is swiftly followed by the annihilation of the partials with interacting dislocations, be it the misfit Shockleys or the edge dislocation from the next periodic cell. This leaves only the misfits in the lattice, as shown. Some strays are bound to be left over because the periodicity is lost in the [010] direction. A definite drop in the stress is expected due to this phenomenon.
Further shearing causes further generation and destruction of dislocations, causing the other bumps in the stress-strain plot.
\newline
\newline
\newline
\newline
\newline
\newline
\newline
\begin{center}
\includegraphics[width=8cm]{d40_s15annh.png}
\linebreak
Fig (8) - Lattice with only misfits
\end{center}
\subsection{Quantitative results}
\begin{center}
\includegraphics[width=8cm]{d20_s10.png}
\linebreak
Figure(9) - Stress vs Time step plot for precipitate distance of 20 lattice units and size of precipitate 10 lattice units.
\end{center}
\begin{center}
\includegraphics[width=8cm]{d30_s10.png}
\linebreak
Figure(10) - Stress vs Time step plot for precipitate distance of 30 lattice units and size of precipitate 10 lattice units.
\end{center}
\begin{center}
\includegraphics[width=8cm]{d30_s20.png}
\linebreak
Figure(11) - Stress vs Time step plot for precipitate distance of 30 lattice units and size of precipitate 20 lattice units.
\end{center}
\begin{center}
\includegraphics[width=8cm]{d40_s15.png}
\linebreak
Figure(12) - Stress vs Time step plot for precipitate distance of 40 lattice units and size of precipitate 15 lattice units.
\end{center}
\begin{center}
\includegraphics[width=8cm]{d40_s20.png}
\linebreak
Figure(13) - Stress vs Time step plot for precipitate distance of 40 lattice units and size of precipitate 20 lattice units.
\end{center}
\begin{center}
\includegraphics[width=8cm]{d50_s15.png}
\linebreak
Figure(14) - Stress vs Time step plot for precipitate distance of 50 lattice units and size of precipitate 15 lattice units.
\end{center}
As explained before, the strength is a strong function of the size and inter-particle distance. Here, it is tabulated for a few sample cases to illustrate the contrast.
\begin{table}[h]
\caption{VARIATION OF MAXIMUM STRESS}
\label{table_example}
\begin{center}
\begin{tabular}{|c||c||c||c|}
\hline
Distance (lattice units) & Size (lattice units) & Max stress (bar) & Timestep\\
\hline
10 & 10 & 106137.2402539 & 26000\\
\hline
20 & 10 & 177092.09471901 & 57000\\
\hline
30 & 10 & 217729.24225723 & 23000\\
\hline
40 & 15 & 174884.42767411 & 22000\\
\hline
40 & 20 & 238394.39515007 & 124000\\
\hline
50 & 15 & 242913.97642214 & 21000\\
\hline
50 & 20 & 249883.47839079 & 56000\\
\hline
\end{tabular}
\end{center}
\end{table}
As is evident from the table, the maximum stress increases with both the inter-particle distance and the particle size (with the outlier being the d40, s15 simulation).
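The maxima reported in the table can be extracted from the stress history with a few lines of post-processing. This is an illustrative sketch; the function name and the assumption that stress is written every 1000 timesteps are ours, not from the original scripts.

```python
def max_stress_and_timestep(stress_series, thermo_interval=1000):
    """Return (max stress, timestep at which it occurs) from a thermo stress history."""
    peak_index = max(range(len(stress_series)), key=lambda i: stress_series[i])
    return stress_series[peak_index], peak_index * thermo_interval

# Toy history: the peak of 5.0 occurs at the second output, i.e. timestep 1000.
print(max_stress_and_timestep([1.0, 5.0, 3.0]))
```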
The thermo output of temperature was read from the log file and plotted to ensure the system was at equilibrium at the start of the Verlet run. The plot for one of the simulation cases is shown; there was not much difference between the various scenarios.
\begin{center}
\includegraphics[width=8cm]{temp_conv.png}
\end{center}
\subsection{Conclusion}
In summary, several MD simulations have been performed to study the effect of the precipitates on the mechanical properties of the matrix. The generation of stacking faults and misfits has been investigated, and the resulting strengthening effect has been explained reasonably well, within the confines of the simulation box, by plotting the stress-strain response. The strength has been shown to increase with particle size and inter-particle distance. This, to an extent, explains precipitation hardening.
\subsection{Outlook}
There exists a specific relationship between the stacking fault width (SFW) and the inter-particle distance \cite{zhu1995macroscopic}; it remains to be seen whether the same can be verified by MD simulation. Furthermore, the critical inter-particle distance, below which the dislocation simply cuts through the particle, has not been investigated thoroughly; it is thought to cause softening in the material. A relationship could be established between particle size, volume fraction, shape, and the critical cut-through distance.
\section{Acknowledgments}
We would first like to thank Prof. Anand Kanjarla for his guidance and motivation. We would also like to thank A.R.G.Sreekar and Abhishek Shandilya for sharing their work with us, and we extend our warm regards to Sri Hari (the T.A. for the course) for helping us through odd hours.
The availability of high-performance computing facility GNR has also been greatly helpful.
\bibliographystyle{plain}
\section{INTRODUCTION}
With the empowered ability of artificial intelligence, autonomous driving has become one of the promising directions for both research and application.
Due to the safety concern and the high cost of real vehicle testing, autopilot simulation is widely used for fast iteration and verification of driving algorithms~\cite{dosovitskiy2017carla, rong2020lgsvl}.
It attracts massive attention from researchers and industries to build a high-quality simulator.
However, existing open-source autopilot simulators primarily focus on traffic modeling with hand-scripted rules to facilitate lane following, collision avoidance, etc.~\cite{elsayed2020ultra, krajzewicz2002sumo}. Such heuristic rules are insufficient to model the variability and noise of real-world human driving behaviors.
To improve the fidelity of driving behavior simulation, replaying the movement in the human driving dataset~\cite{osinski2020carla} has become popular.
However, the data replay lacks fidelity during real-time interactions, since the social vehicles cannot provide reasonable responses when the ego vehicle acts differently as in the dataset.
To further generate both realistic and reactive traffic flows, supervised learning approaches have been applied in social vehicle modeling~\cite{bansal2018chauffeurnet, bergamini2021simnet}.
Unfortunately, supervised decision models suffer from compounding errors~\cite{xu2020error} and causal confusion~\cite{de2019causal} problems, which make them perform poorly as the planning steps and traffic density increase. Adversarial imitation learning methods handle these issues through rich online interactions in driving simulators~\cite{bhattacharyya2018multi, bhattacharyya2019simulating}. However, few works have explored the potential of using them to enhance the traffic flow in existing simulators.
Another advantage of using simulations is that it allows users to design extensive and specific scenarios beyond those from the datasets to evaluate driving strategies.
Prior learning-based works in building traffic flow only use a single model or multiple models separately~\cite{zhou2020smarts, kothari2021drivergym}, which limits the configurability of the simulator.
A recent work, SimNet~\cite{bergamini2021simnet}, replaces the model-inferred steering angle with hard-coded values to force generating lane-changing behaviors of social vehicles. This approach sacrifices fidelity to provide specific interaction scenarios.
Instead, our focus is to enable specific scenario generations while maintaining fidelity.
To provide a principled and unified solution for building high-quality traffic flows in autonomous driving simulators, in this paper, we propose RITA, a traffic generation framework with \textbf{R}ealistic \textbf{I}nteractive \textbf{T}r\textbf{A}ffic flow.
RITA focuses on augmenting existing autonomous driving simulators with high-quality traffic flows, aiming at reaching three design goals: \textit{fidelity}, \textit{diversity}, and \textit{controllability}. To meet these requirements, RITA is developed with two core modules: RITABackend and RITAKit.
The first module, RITABackend, incorporates machine learning models built from real-world datasets, including vehicle control and guided traffic generation models.
In particular, the vehicle control models are trained using adversarial imitation learning methods, which generate specific behaviors and generalize better outside the data distribution than supervised methods. In addition, the guided traffic generation models are for customized background traffic generation, providing specific initialization states with a guided diffusion sampling algorithm.
The second module, RITAKit, is an easy-to-use toolkit that combines models from RITABackend to generate traffic flows in the simulator. RITAKit provides a friendly programming interface for setting up the various controllable traffic flows.
\begin{figure*}[t]
\centering
\includegraphics[width=0.95\linewidth]{figs/overview.pdf}
\vspace{-5pt}
\caption{Overview of the RITA traffic generation pipeline.}
\vspace{-10pt}
\label{fig:overview}
\end{figure*}
To demonstrate the quality of the generated traffic flows and the usage of RITA, we conduct experiments on two representative highway driving tasks, \textit{cut-in} and \textit{ramp}. For these tasks, we first assess the generated traffic flow against the three design goals. We then show two use cases of RITA that facilitate the autonomous driving workflow. On the one hand, RITA traffic flow can be used flexibly to benchmark off-the-shelf driving strategies from different aspects; the benchmarked strategies are either hand-crafted or trained with the history-replay traffic flow. On the other hand, RITA traffic flow can help further improve the driving strategies' performance by allowing online interactions with reactive and realistic social vehicles.
Currently, RITA is in the process of being incorporated into Huawei's autonomous driving platform.
In the future, we hope RITA can serve as a standard building block in developing high-fidelity autonomous driving simulators and contribute to the evaluation and improvement of driving strategies for the community.
\section{Key Features}
We highlight three key features of the traffic flow in RITA.
\paragraph{Fidelity}
Fidelity is an essential measure of simulators that describes the gap between the simulation and the reality \cite{zhang2021learning}.
In particular, driving simulators are essential for ensuring the sim-to-real performance of driving algorithms and are the most effective evaluation tool before real-world road testing. Thus, the smaller the gap between the interaction responses in the simulator and those encountered in reality, the more likely it is that a strategy achieving high performance in the simulator will also perform well in real road conditions.
The fidelity of the traffic flow lies in two aspects: 1) the vehicle state distribution in the traffic flow. For example, social vehicles always move fast on highways in light traffic conditions; 2) the fidelity of the responses from the social vehicles when interacting with the ego vehicle, like the typical lane-changing scenario. We observe that human drivers exhibit specific noise in multiple dimensions when performing lane changes in real datasets. Therefore, we would expect that the traffic flows generated by RITA also possess similar microscopic properties to human interaction behaviors.
\paragraph{Diversity}
Human driving behavior exhibits a high degree of diversity due to differences in destinations, road conditions, drivers' personalities, and other factors. This means that, in similar situations, three distinct drivers will behave differently: where driver A tries to change lanes to the left when there is a vehicle in front of him, driver B may tend to maintain his current route; and even if driver C also chooses to change lanes, the timing and speed of the lane change can be quite different.
To achieve this, we must ensure that RITA's basic vehicle control models capture as much of the behavioral diversity in the real data as possible. In addition, when these models are combined to generate diversified traffic flow, the resulting behavior must still maintain a high level of fidelity.
\paragraph{Controllability}
Building upon diversity, we can define controllability as an enhancement feature, i.e., the ability to generate customized traffic flows based on user specifications from many potential instances. For example, we can construct test tasks with different difficulty levels by adjusting the distribution and behavior of the traffic flow, or we can generate scenarios with a targeted assessment of the weaknesses of the driving strategy under test.
High controllability also corresponds to high usability, allowing various researchers to build their benchmark tasks by simply changing the parameters of the traffic definition interface in RITAKit. The modular design of RITA allows it to be easily extended to more maps and datasets or even used by other simulators.
\section{RITA Overview}
In this section, we present an overview of our proposed traffic generation framework.
To achieve the three design goals, the framework needs sufficient expressiveness to cover the diverse traffic flows in the real data and enough configurable modules to support generating specific traffic flows from user specifications. At a glance, we maintain a diverse zoo of vehicle control models together with multiple methods for generating static replay trajectories, and propose a set of traffic definition interfaces to meet these requirements.
We start with the general formulation of autonomous driving, and then give a compositional illustration of RITA.
\subsection{General Formulation}
We consider the autonomous driving task in the discrete-time setting within a fixed roadway area~\cite{interactiondataset, ettinger2021large, halkias2006next}, where each vehicle acts simultaneously at each time step and vehicles can enter or drive out of the given area, and therefore causing the number of vehicles to change.
Formally, denote the vehicles in the area at each time step $t$ as a set $V_t=\{v_1, v_2, \dots, v_{n_t}\}$, where $v_i$ denotes unique vehicle index, and $n_t$ is the number of vehicles. Then, the simulation proceeds by iterating between every vehicle $v\in V_t$ takes its individual action $a^v_t$ according to some policy $\pi(a^v_t|h^v_t)$ based on its historical observations $h^v_t=(o^v_0, \dots, o^v_t)$, and the simulator updates global state $s_t$ to $s_{t+1}$ under the transition function $\mathcal{T}(s_{t+1}|s_t, \bm{a}_t)$, where the joint action $\bm{a}_t=[a_i]_{i=1}^{n_t}$ is the concatenation of all vehicles' actions.
The goal of this paper is to build microscopic traffic flows that can interact with user-specified driving strategies, which are referred to as the \textit{ego strategies} in the following text.
We thereafter divide the vehicles into \textit{ego vehicles} and \textit{social vehicles}. Ego vehicles are those controlled by ego strategies, while social vehicles' movements are determined by the simulator, data replay, or the trained reactive agents described in \se{sec:backend}. Note that such a division does not have to be fixed during simulation: for example, a vehicle can be a social vehicle for the first half of an episode and then be taken over by the ego strategies for the second half.
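The formulation above can be sketched as a simple simulation loop (an illustrative toy, not the simulator's actual API; the observation, policy, and transition functions are stand-ins):

```python
def simulate(policy, transition, observe, state, vehicles, horizon):
    """Sketch of the formulation above: at each step t, every vehicle v in V_t
    acts via pi(a_t^v | h_t^v) using its observation history, then the
    simulator updates s_t -> s_{t+1} under the joint action; V_t may change
    as vehicles enter or leave the area."""
    histories = {v: [] for v in vehicles}
    for _ in range(horizon):
        for v in vehicles:
            histories[v].append(observe(state, v))
        joint_action = {v: policy(histories[v]) for v in vehicles}
        state, vehicles = transition(state, joint_action)
        # keep histories only for vehicles still in the area;
        # new entrants start with an empty history
        histories = {v: histories.get(v, []) for v in vehicles}
    return state

# Toy instantiation: scalar positions on a line, constant-speed policy.
observe_fn = lambda s, v: s[v]
policy_fn = lambda h: 1.0                      # every vehicle moves +1 per step
transition_fn = lambda s, a: ({v: s[v] + a[v] for v in s}, set(s))
final = simulate(policy_fn, transition_fn, observe_fn,
                 {"v1": 0.0, "v2": 5.0}, {"v1", "v2"}, 10)
print(final)   # {'v1': 10.0, 'v2': 15.0}
```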
\subsection{Compositional Illustration}
RITA's aspiration is to create high-quality traffic flow to improve the existing driving simulators. Developing a simulator and building basic components (such as kinematic simulation, collision detection, etc.) from scratch are outside the major scope of concern. To this end, we choose to build upon an existing simulator, in this paper particularly, SMARTS~\cite{zhou2020smarts}, yet it is feasible to integrate RITA into any other simulators such as SUMO~\cite{krajzewicz2002sumo} and BARK~\cite{bernhard2020bark}.
In total, RITA can be decomposed into two core modules, RITAkit and RITABackend. As a high-fidelity traffic generation framework, RITA can be initialized from real-world datasets and transform traffic specifications to specific scenarios.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figs/components.pdf}
\caption{Compositional illustration of RITA.}
\label{fig:components}
\end{figure}
We illustrate the architecture of RITA in \fig{fig:components}, organized by a bottom-to-top order. First, real-world datasets are used to train machine learning models in RITABackend. Once models are built, the scenarios can be configured via RITAKit with scenario specifications, usually expressed as parameters in a unified traffic generation function.
Notably, RITAKit directly operates on RITABackend and transforms scenario specifications into model calls in RITABackend. With this decoupled design, the creation and adjustment of benchmark tasks can be made with less effort, where users do not need to define each sub-model call manually.
During the evaluation procedure, both RITAKit and ego strategies are plugged into SMARTS to control social vehicles and ego vehicles, respectively.
RITAKit also takes ego strategies as input for rare-case generation and automatically generates customized traffic flows.
In the following, we further show the training and evaluation results for models in RITABackend in \se{sec:backend}, and describe RITAKit in detail in \se{sec:toolkit}.
\section{Building RITAbackend}
\label{sec:backend}
\subsection{Model Zoo}
To evaluate ego strategies under high-fidelity traffic flows, we require a set of reactive agents that can generate traffic flows during flexible interactions.
Instead of designing complex reward functions for learning autonomous driving agents, we choose to learn to behave like a human driver from real-world data. To this end, we utilize imitation learning methods~\cite{ho2016generative,liu2020energy}, which provide a way to learn from demonstrations and mimic the demonstrator's behaviors. In RITA, we maintain a model zoo that integrates a set of imitation learning algorithms for building reactive agents.
Some previous methods, such as~\cite{9561666,bansal2018chauffeurnet}, adopt behavior cloning (BC) to learn their reactive agents owing to its simplicity. However, BC can suffer from a serious compounding error problem when data is limited~\cite{ross2010efficient}.
In contrast, inverse reinforcement learning (IRL) methods~\cite{arora2021survey} theoretically outperform BC with less compounding error~\cite{xu2020error}.
Therefore, we mainly adopt the off-the-shelf Generative Adversarial Imitation Learning (GAIL) algorithm~\cite{ho2016generative} and its variants in our model zoo.
As a popular online imitation learning algorithm induced from the IRL framework, GAIL inherits the advantage of less compounding error and has been successfully applied to driving behavior imitation~\cite{kuefler2017imitating}.
In the sequel, we briefly introduce these methods and their features for strategy learning.
\subsubsection{Single-agent GAIL}
Although GAIL is designed to solve imitation learning problems for a single agent, we can apply it to multi-agent learning problems by independently learning different agents' strategies.
In our implementation, we train the policy model of each agent by replacing certain recorded vehicles in the replay data with the ego vehicle to be controlled. By using hand-scripted rules to select expert data exhibiting a specific interactive behavior (e.g., left cut-in), we can endow the trained models with the desired reactive and interactive capabilities.
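The adversarial objective behind GAIL can be illustrated with a minimal discriminator update on toy state-action features (a pure-Python pedagogical sketch, not our training code; in GAIL the policy is then rewarded for fooling this discriminator):

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def discriminator_step(w, expert_batch, policy_batch, lr=0.1):
    """One gradient-descent step on the GAIL discriminator loss
    -E_expert[log D(x)] - E_policy[log(1 - D(x))],
    where D(x) = sigmoid(w . x) scores (state, action) features x."""
    grad = [0.0] * len(w)
    for x in expert_batch:                 # push D(x) toward 1 on expert data
        d = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
        for i, xi in enumerate(x):
            grad[i] += (d - 1.0) * xi / len(expert_batch)
    for x in policy_batch:                 # push D(x) toward 0 on policy data
        d = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
        for i, xi in enumerate(x):
            grad[i] += d * xi / len(policy_batch)
    return [wi - lr * gi for wi, gi in zip(w, grad)]

random.seed(0)
expert = [(1.0, random.gauss(1.0, 0.3)) for _ in range(64)]   # (bias, feature)
policy = [(1.0, random.gauss(-1.0, 0.3)) for _ in range(64)]
w = [0.0, 0.0]
for _ in range(200):
    w = discriminator_step(w, expert, policy)
d_exp = sum(sigmoid(w[0] + w[1] * x[1]) for x in expert) / len(expert)
d_pol = sum(sigmoid(w[0] + w[1] * x[1]) for x in policy) / len(policy)
print(d_exp, d_pol)   # the discriminator separates the two sets
```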
\subsubsection{InfoGAIL}
To model the diversity of human driving and produce high-quality interactive scenarios from unlabeled recorded data, we include InfoGAIL~\cite{li2017infogail} in our model zoo as a multi-modal reactive agent, which models and controls distinct modalities of the recorded data through a latent variable.
Since InfoGAIL is also a single-agent algorithm, the implementation follows the independent training style as GAIL. We can produce diverse scenarios according to different preferences by controlling the discrete variable. The InfoGAIL agent is also trained on a filtered dataset to serve as a reactive agent for specific behaviors.
\subsubsection{MAGAIL}
Considering the multi-agent nature of autonomous driving and modeling the interaction between different agents, we also include the multi-agent extension of GAIL, i.e., MAGAIL~\cite{song2018multi}, serving as a multi-agent reactive agent.
We intend MAGAIL to model a general reactive policy that does not actively interact to influence the benchmarked strategies under evaluation, but instead makes only passive responses to ensure safe driving.
This is implemented using the entire traffic flow dataset. By carrying out the parameter sharing technique \cite{8593758}, MAGAIL controls all the vehicles in an area.
\subsection{Model Performance Analysis}
\label{sec:model-performance}
To show the high fidelity of the learned models, we evaluate their performance from four aspects:
\begin{itemize}
\item \emph{Safety}: Non-collision rate with social vehicles.
\item \emph{Completion}: Completion rate of finishing specific interaction behavior.
\item \emph{Stability}: A selected constant minus the mean value of acceleration and yaw rate for the trajectory.
\item \emph{Diversity}: Standard deviation of acceleration and yaw rate for the trajectory.
\end{itemize}
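Under the definitions above, per-trajectory Stability and Diversity can be computed as in the following sketch (the constant $C$ and the use of mean magnitudes are illustrative assumptions; the constant actually used in our evaluation may differ):

```python
import statistics

def stability(accels, yaw_rates, C=10.0):
    """Stability: a chosen constant C minus the mean magnitudes of
    acceleration and yaw rate over the trajectory (C is illustrative)."""
    return C - (statistics.fmean(abs(a) for a in accels)
                + statistics.fmean(abs(y) for y in yaw_rates))

def diversity(accels, yaw_rates):
    """Diversity: standard deviations of acceleration and yaw rate."""
    return statistics.stdev(accels) + statistics.stdev(yaw_rates)

# A smooth trajectory scores higher on stability; a jerky one on diversity.
smooth = ([0.10, 0.10, 0.12, 0.10], [0.00, 0.01, 0.00, 0.01])
jerky = ([1.5, -1.2, 2.0, -1.8], [0.3, -0.4, 0.5, -0.2])
print(stability(*smooth), stability(*jerky))
print(diversity(*smooth), diversity(*jerky))
```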
The results are shown in \fig{fig:Model Performance}. All GAIL-based models achieve high safety and completion rates and substantially outperform BC, demonstrating their suitability for traffic generation. Their high stability yields smooth, human-like driving behavior, along with some model-specific character. In particular, the multi-agent reactive agent (MAGAIL) has the highest safety rate, while the multi-modal reactive agent (InfoGAIL) stands out in diversity. These differences indicate the necessity of multiple training algorithms for diverse models. Notably, the models do not produce perfectly safe traffic, partly because we enforce specific interactive behaviors at some cost to reactivity. However, perfect safety is not the goal: autonomous driving simulation needs precisely such realistic but occasionally unsafe environments for a policy to improve against.
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/model-cutin.pdf}
\caption{Left Cut-in}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/model-merge.pdf}
\caption{On-ramp}
\end{subfigure}
\caption{Performance of vehicle control models in the model zoo. We evaluate the performance from four aspects, where Safety and Completion focus on the driving task itself; Stability and Diversity measure the actions in trajectory.}
\vspace{-12pt}
\label{fig:Model Performance}
\end{figure}
\subsection{Guided Traffic Generation}
When utilizing RITA to provide traffic flows for interacting with ego vehicles, a natural approach is to have all social vehicles controlled by reactive agents from the model zoo. However, this can be computationally expensive when there is a large number of vehicles. Moreover, as more vehicles react to each other, there is a growing chance of deviating from the models' training distributions.
To save simulation cost and alleviate the aforementioned distributional shifts, we pre-specify a special region, which is called \textit{bubble}, on the map. Typically, the bubble covers the area inside which reactive behaviors are mostly expected to happen, such as the ramp merge area. Only social vehicles inside the bubble are controlled by reactive models, while others are just replaying movements in static trajectories.
In such a context, these static trajectories profoundly affect the interactions within the bubble by determining the initial states of social vehicles entering it. Even if we fix the control model of social vehicles within the bubble, the performance of ego vehicles under different static trajectories can vary much due to factors such as traffic density and the distance between social vehicles and ego vehicles.
Normally, the static trajectories can be sampled using \textit{dataset sampling}, i.e., sampled directly from datasets. Although dataset sampling maintains a high degree of fidelity, it is unlikely to draw trajectories from the long-tail part of the distribution (e.g., a group of very dense traffic), however large the dataset may be. Moreover, the sampled trajectories are agnostic to the benchmarked strategies; dataset sampling therefore overlooks the fact that every strategy has its own Achilles' heel.
To generate static trajectories with more flexibility and controllability, RITA additionally provides \textit{guided generation} for acquiring static trajectories. First, a diffusion-based generative model $s_\theta$ is trained in RITABackend to cover the multi-vehicles trajectories distribution of the dataset. The diffusion model is trained with the same objective as in DDPM, which is a reweighed version of the evidence lower bound (ELBO):
\begin{equation}
\theta^*=\argmin_{\theta}\sum_{i=1}^N(1-\alpha_i)\mathbb{E}_{p(x)}\mathbb{E}_{p_{\alpha_i(\tilde{x}|x)}}[\|s_{\theta}(\tilde{x}, i)-\nabla_{\tilde{x}}\log p_{\alpha_i}(\tilde{x}|x)\|_2^2]~,
\end{equation}
where $p(x)$ is the multi-vehicle trajectories distribution in the dataset, $\alpha_i=\prod_{j=1}^i(1-\beta_j)$, $p_{\alpha_i}(\tilde{x}|x)=\mathcal{N}(\tilde{x};\sqrt{\alpha_i}x, (1-\alpha_i)\bm{I})$, and $0<\beta_1,\beta_2,\cdots\beta_N<1$ is a sequence of positive noise scales.
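For intuition, the regression target in the objective above has the closed form $\nabla_{\tilde{x}}\log p_{\alpha_i}(\tilde{x}|x)=-(\tilde{x}-\sqrt{\alpha_i}x)/(1-\alpha_i)$ for the Gaussian perturbation kernel, which the following one-dimensional sketch makes concrete (illustrative helper functions, not our training code):

```python
import math
import random

def noisy_sample(x, alpha_bar, rng):
    """Forward perturbation: x_tilde ~ N(sqrt(alpha_bar) * x, 1 - alpha_bar)."""
    return math.sqrt(alpha_bar) * x + math.sqrt(1 - alpha_bar) * rng.gauss(0, 1)

def score_target(x_tilde, x, alpha_bar):
    """Closed-form regression target of the denoising score matching loss:
    grad_{x_tilde} log p_alpha(x_tilde | x) for the Gaussian kernel above."""
    return -(x_tilde - math.sqrt(alpha_bar) * x) / (1 - alpha_bar)

rng = random.Random(0)
x = 2.0
x_tilde = noisy_sample(x, 0.5, rng)
print(score_target(x_tilde, x, 0.5))  # what s_theta(x_tilde, i) regresses onto
```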
After training, users can specify a (differentiable) scoring function on trajectories, whose gradient is used for guided sampling from the diffusion model~\cite{song2020score}. A prediction network $\mathcal{J}(x)$ trained on a small labeled dataset is well suited as the scoring function. The generated static trajectories are guided towards the high-score instances while still within the real data support. The reverse diffusion process transitions during guided sampling are sampled from:
\begin{equation}
p_{\theta}^{\mathcal{J}}(x^{i-1}|x^{i})=\mathcal{N}(x^{i-1};\mu+\Sigma\nabla\mathcal{J}(\mu), \Sigma)~,
\end{equation}
where $\mu, \Sigma$ are the parameters of the original reverse process transition $p_{\theta}(x^{i-1}|x^i)$.
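A single guided reverse step can then be sketched as follows (a numpy sketch assuming a diagonal covariance; the linear score function here is only a toy stand-in for the trained prediction network $\mathcal{J}$):

```python
import numpy as np

def guided_reverse_step(mu, sigma2, grad_score, rng):
    """One guided reverse transition: sample
    x^{i-1} ~ N(mu + Sigma * grad_J(mu), Sigma),
    with a diagonal covariance represented by its entries sigma2."""
    shifted_mean = mu + sigma2 * grad_score(mu)
    return shifted_mean + np.sqrt(sigma2) * rng.standard_normal(mu.shape)

# Toy check: a linear score J(x) = c . x has constant gradient c, so the
# guidance shifts the transition mean by Sigma c.
rng = np.random.default_rng(0)
mu = np.zeros(4)
sigma2 = np.full(4, 0.01)
c = np.array([1.0, -1.0, 2.0, 0.0])
samples = np.stack([guided_reverse_step(mu, sigma2, lambda x: c, rng)
                    for _ in range(20000)])
print(samples.mean(axis=0))   # close to sigma2 * c
```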
Since it is difficult to define the order of vehicles in a multi-vehicle trajectory, the diffusion model uses a shared U-Net structure to process each vehicle's trajectory and perform an order-independent self-attention operation on all vehicles' intermediate features of the last encoder layer. The score function we used shares the same encoder structure with the diffusion model but only performs attention operation between ego vehicle and other vehicles rather than a complete self-attention. This is because the metrics labels we used to train the scoring functions are primarily egocentric.
\section{RITAKit: A Configurable Traffic Generation Toolkit}
\label{sec:toolkit}
In this section, we present RITAKit, a traffic flow generation toolkit for interactive scenarios. RITAKit is a middleware that connects user specifications and RITABackend to generate traffic flow. From the user perspective, RITAKit is an easy-to-use programming interface; the following code block shows an example of specifying a user-defined scenario.
\begin{Verbatim}[numbers=left, xleftmargin=8mm]
ScenarioMaker(
scenario = "ngsim-i80",
bubble = Bubble(
type='moving',
zone=Rectangle(40,20)),
interaction = "CommonLeftCutin",
guided_generation = "Stability",
)
\end{Verbatim}
We elaborate on each input parameter with how it influences the traffic flow and currently supported options.
\begin{itemize}
\item \textit{scenario} specifies the map and the corresponding real-world dataset used to build models in RITABackend. Two scenarios, \texttt{ngsim-i80} and \texttt{ngsim-us101}, are already integrated and tested. We are continuously working to incorporate more open-source datasets.
\item \textit{bubble} defines a special region in the roadway area to provide high quality and the most reactive traffic flow. Only social vehicles inside the bubble are controlled by the models from RITABackend. Before entering the bubble, the traffic is simply replaying the dataset, and after exiting the bubble, vehicles are controlled by IDM~\cite{treiber2000congested}. Bubble can be classified into two types, \texttt{moving} and \texttt{fixed}. The moving bubble is attached to and always covers the ego vehicle, while the fixed bubble is kept still in the specified positions.
\item \textit{interaction} configures the distribution of models that control social vehicles inside the bubble. Based on four typical interaction schemas in NGSIM I80 and US101, there are well-designed built-in options to choose from, e.g., \texttt{CommonLeftCutin}, \texttt{CommonOnRamp}, \texttt{RareLeftCutin} and \texttt{RareOnRamp}. In \texttt{Common} interactions, models are selected uniformly from the model zoo, while \texttt{Rare} interactions use a larger proportion of InfoGAIL models. Users are also welcome to program their own configurations by overriding the interaction handler; we provide examples in Appendix~\ref{appendix: Scenario Example}.
\item \textit{guided\_generation} controls whether guided initial states generation is used and the score function for guiding the diffusion process. For example, \texttt{None} means not using guided generation but uniformly sampling from datasets instead. \texttt{Stability} refers to built-in score function stands for generating states that make the benchmarked strategy get worse stability, according to our designed metric. We support score function reconstruction in configurable interfaces as well.
\end{itemize}
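The mapping from an \textit{interaction} option to a distribution over behavior models can be sketched as follows. This is a hypothetical helper, not RITAKit's actual dispatch code: the model names come from our model zoo, but the concrete weights (uniform for `Common', InfoGAIL-heavy for `Rare') are illustrative assumptions.

```python
import random

# Hypothetical proportions: 'Common' samples models uniformly,
# while 'Rare' favors InfoGAIL models to produce rarer maneuvers.
MODEL_WEIGHTS = {
    "common": {"GAIL": 1.0, "MAGAIL": 1.0, "InfoGAIL": 1.0},
    "rare":   {"GAIL": 0.5, "MAGAIL": 0.5, "InfoGAIL": 2.0},
}

def assign_models(interaction: str, n_vehicles: int, seed: int = 0):
    """Assign a behavior model to each social vehicle inside the bubble."""
    # e.g. "RareLeftCutin" -> assign type "rare", "CommonOnRamp" -> "common"
    assign_type = "rare" if interaction.startswith("Rare") else "common"
    weights = MODEL_WEIGHTS[assign_type]
    rng = random.Random(seed)
    models = list(weights)
    return [rng.choices(models, weights=[weights[m] for m in models])[0]
            for _ in range(n_vehicles)]
```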
\subsection{Environment Design}
In this paper, we construct four typical interactive environments in a highway scenario to comprehensively assess the quality of the toolkit. We use the open-source NGSIM I80 and US101 datasets for training and evaluation. Each environment episode ends when the desired interaction is completed or an illegal termination event (e.g., a collision) is triggered.
\emph{Task 1: Cut-in scenario.} A moving bubble is created around the cut-in agent so that surrounding vehicles are taken over by our reactive model. Considering the two cut-in directions, we build left cut-in and right cut-in environments.
\emph{Task 2: Ramp scenario.} Focusing on the classic ramp road structure of highway scenarios, we use a fixed bubble to replace the vehicles in the ramp lane and the adjacent highway lane with the corresponding behavior models. We build on-ramp (entering the highway) and off-ramp (exiting the highway) environments, respectively.
\subsection{Generalized Traffic Flow Quality Measurement}
In this subsection, we evaluate the quality of the interaction behavior and the traffic flow, showing their high fidelity and diversity. To make a fair comparison, we extract human data exhibiting the same interaction and let the models replace the recorded vehicles starting from the same initial states.
\subsubsection{Interaction Behavior Analysis}
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/FFT_Analysis_Color_1.pdf}
\caption{Fourier Analysis}
\label{fig:Fourier Analysis}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/Scatter_Distribution_Color_1.pdf}
\caption{Scatter Distribution}
\label{fig:Scatter Distribution}
\end{subfigure}
\begin{subfigure}[b]{0.264\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/multiagent_traffic_3.pdf}
\caption{Traffic Flow}
\label{fig:Traffic Flow}
\end{subfigure}
\caption{Benchmark Quality Measurement. The first row corresponds to the left cut-in scenario, and the second to the on-ramp scenario. For interaction, Fourier Analysis discusses lateral speed features over time, while Scatter Distribution focuses on the cut-in moment when the most vital interaction happens. For traffic, Traffic Flow quantifies the error of neighbors' statistics between human data and policy-generalized data in an egocentric view.}
\label{fig:Benchmark Quality Measurement}
\end{figure*}
We first quantify how the reactive agent performs in the interaction scenario. In particular, we measure from three key dimensions: \textit{time}, \textit{time-to-collision (TTC)}, and \textit{lateral speed}. \textit{Time} can reveal the fluctuation during the interaction, and the other two dimensions show us more details about behavioral intensity and safety level.
To analyze the \textit{time} dimension sensibly, we borrow the idea of Fourier analysis, mapping the time domain to the frequency domain for an overall view of the life cycle of an interaction. In detail, we apply the fast Fourier transform (FFT) with 512 samples to the lateral speed of each trajectory and plot the resulting envelope for each model, as shown in \fig{fig:Fourier Analysis}.
Treating the area between the envelope and the frequency axis as a representation of the interaction distribution, we define two quantitative evaluation metrics:
\begin{itemize}
\item \emph{IoU}: Intersection area of model and human data / Union area.
\item \emph{Coverage}:
\begin{equation*}
\frac{\sum_{f=-N}^N v_{\text{model}}(f)}{\sum_{f=-N}^N \max(v_{\text{model}}(f),v_{\text{human}}(f))}~,
\end{equation*}
where $v(f)$ is the amplitude at frequency $f$.
\end{itemize}
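Given amplitude envelopes $v_{\text{model}}(f)$ and $v_{\text{human}}(f)$ on a common frequency grid, the two metrics can be computed as in the following NumPy sketch. The envelope extraction here is simplified to a pointwise maximum over trajectories, which is one reasonable choice, not necessarily the exact procedure used for the figures.

```python
import numpy as np

def fft_envelope(trajectories, n_fft=512):
    """Per-frequency amplitude envelope over a set of lateral-speed traces."""
    amps = np.abs(np.fft.rfft(np.asarray(trajectories, dtype=float),
                              n=n_fft, axis=1))
    return amps.max(axis=0)  # envelope = pointwise max over trajectories

def iou(v_model, v_human):
    """Intersection area of the two envelopes divided by their union area."""
    inter = np.minimum(v_model, v_human).sum()
    union = np.maximum(v_model, v_human).sum()
    return inter / union

def coverage(v_model, v_human):
    """Model area divided by the area of the pointwise maximum."""
    return v_model.sum() / np.maximum(v_model, v_human).sum()
```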
For analyzing \textit{TTC} and \textit{lateral speed}, we draw the scatter distribution at all cut-in moments in \fig{fig:Scatter Distribution}. Assuming the data follow a Gaussian distribution, we further show the 95\% confidence ellipses of the human data and of the combination of models (policy).
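The 95\% confidence ellipse of a 2-D Gaussian fit can be recovered from the sample covariance: its axes are the covariance eigenvectors, with semi-axis lengths $\sqrt{\chi^2_{2,0.95}\,\lambda_i}$, where $\chi^2_{2,0.95} \approx 5.991$. A minimal sketch (this is the standard construction, not code from our pipeline):

```python
import numpy as np

CHI2_95_DF2 = 5.991  # 95% quantile of chi-square with 2 degrees of freedom

def confidence_ellipse(points):
    """Return (center, semi-axis lengths, rotation angle in radians)
    of the 95% confidence ellipse for 2-D samples (rows = samples)."""
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)
    cov = np.cov(pts, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    axes = np.sqrt(CHI2_95_DF2 * eigvals)    # semi-axis lengths
    angle = np.arctan2(eigvecs[1, -1], eigvecs[0, -1])  # major-axis angle
    return center, axes, angle
```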
\fig{fig:Fourier Analysis} first shows that both interactive models cover almost the entire frequency domain with high \textit{IoU} (GAIL: 0.79, InfoGAIL: 0.73) and \textit{Coverage} (GAIL: 0.91, InfoGAIL: 0.95), indicating that the interaction trajectories are sufficiently realistic. For the InfoGAIL model, the shaded area indicates the differences between its modalities, and we find apparent diversity, especially in the low- and high-frequency regions.
Furthermore, in \fig{fig:Scatter Distribution}, the ellipse of the reactive agents matches most of the human data distribution, showing high fidelity. In addition, the reactive agents also occupy the region adjacent to the human distribution, which can be seen as a reasonable enhancement of the interaction and demonstrates the diversity needed to produce rare cases.
\subsubsection{Traffic Flow Analysis}
We measure traffic quality from a holistic view, i.e., we quantify the traffic flow generated by the multi-agent reactive agents.
In particular, we use affordance~\cite{Chen_2015_ICCV} as the metric, which computes statistics over neighbors in an egocentric view. Since interaction commonly happens between the ego-agent and its neighbors, this metric is well suited to analyzing interactive traffic. Here we take the \emph{mean distance, speed, and heading} of neighbor vehicles to describe the traffic dynamically. With the human data as a baseline (normalized to 1), we plot statistical bar charts in \fig{fig:Traffic Flow}.
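The normalized bars can be reproduced from per-step neighbor statistics: average each affordance over the rollout, then compare against the human-data average so that the baseline equals 1. A sketch with hypothetical log formats:

```python
import numpy as np

def affordance_stats(neighbor_logs):
    """Mean distance / speed / heading of neighbors in an egocentric view.
    `neighbor_logs` is an iterable of (distance, speed, heading) tuples."""
    arr = np.asarray(neighbor_logs, dtype=float)
    return arr.mean(axis=0)  # -> [mean_dist, mean_speed, mean_heading]

def normalized_error(policy_logs, human_logs):
    """Relative error of policy-generated traffic w.r.t. human data
    (human baseline normalized to 1.0)."""
    policy = affordance_stats(policy_logs)
    human = affordance_stats(human_logs)
    return np.abs(policy - human) / np.abs(human)
```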
These statistics show clear evidence that the traffic around the ego-agent closely resembles the human data. The maximum error rate is at most 7\% for distance and heading and at most 12\% for speed, indicating that we recover high-fidelity trajectories that go beyond the human data in reactivity and interaction.
\section{Benchmark and Optimization of Driving Strategies}
This section demonstrates two use cases of traffic flows generated by RITA. First, we show that the RITA traffic flow can be used to assess the performance of given driving strategies. Specifically, we demonstrate the results of using replay trajectories generated from guided traffic generation. Then, we demonstrate that the RITA environment successfully optimizes policy trained on history replay traffic flows.
\subsection{Benchmark under RITA Traffic Flow}
We benchmark four popular AD solutions: three machine learning algorithms, Behavior Cloning (BC), Generative Adversarial Imitation Learning (GAIL), and Soft Actor-Critic (SAC)~\cite{pmlr-v80-haarnoja18b}, and a rule-based agent using a keep-lane strategy based on the intelligent driver model (IDM).
We evaluate the safe-driving ability of the algorithms under the interactive traffic built above. Unlike in the quality measurement, here we design strong interaction patterns and let the test vehicle deal with constantly active behavior from neighbors that affects its normal driving. In the left cut-in scenario, the lane to the right of the ego-agent is controlled by left cut-in models and the other two lanes by the multi-agent reactive agent (and symmetrically for right cut-in). In the on-ramp scenario, the ego-agent is placed in the ramp lane with a continuous flow of merging vehicles controlled by the interactive model, while in the off-ramp scenario we assign reactive or off-ramp models to the surrounding vehicles to build the traffic.
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/benchmark-cutin.pdf}
\caption{Left Cut-in}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/benchmark-merge.pdf}
\caption{On-ramp}
\end{subfigure}
\caption{Benchmark Results. To keep consistency, we use the same metric as \fig{fig:Model Performance}. Notice that we evaluate the safe-driving ability here, so Safety is numerically equal to Completion.}
\label{fig:Benchmark Results}
\end{figure}
The results are depicted in Fig.~\ref{fig:Benchmark Results}, where GAIL shows the best performance on both tasks, perhaps because GAIL can efficiently use expert training data while interacting online with the simulator. In contrast, IDM performs poorly because of the frequent interaction in the traffic, which the model itself cannot handle properly. We also observe that all algorithms perform better on the fixed-location traffic task \emph{on-ramp}, which may imply that interactive traffic with a moving bubble poses more challenging dynamics.
\paragraph{Benchmark using Guided Generation}
For the left cut-in environment, we conduct guided sampling of replay trajectories according to driving models, and evaluation results on these trajectories are shown in \tb{table:guided}.
The definition of \textit{Stability} is the same as in \se{sec:model-performance}, and \textit{Distance Ratio} is the ratio of the actual distance traveled by the ego vehicle in the simulator to the distance traveled in the replay trajectory. The upper limit of the distance ratio is set to 1.
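As defined above, \textit{Distance Ratio} amounts to a clipped quotient of traveled distances, which a one-line helper makes explicit (a sketch, not our evaluation code):

```python
def distance_ratio(sim_distance: float, replay_distance: float) -> float:
    """Distance actually traveled by the ego vehicle in the simulator,
    divided by the distance traveled in the replay trajectory, capped at 1."""
    return min(sim_distance / replay_distance, 1.0)
```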
Specifically, we compare guided sampling (Guided) with two other sampling techniques: dataset sampling (Dataset) and normal sampling (Normal). Dataset sampling first samples a period of 128 time steps (12.8 seconds) from the dataset and randomly chooses one vehicle that appears in this interval as the ego vehicle; the 15 vehicles closest to the ego vehicle at the first step are then chosen as social vehicles. Trajectories obtained by dataset sampling are also used to train the diffusion-based generative model.
Normal sampling draws directly from the generative model. Guided sampling trains the scoring-function network on the models' performance over the dataset-sampled trajectories, and its gradient is used to guide the generative model toward samples on which the models perform poorly. For all three sampling methods, we sample 1000 trajectories for evaluation.
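The guided step can be illustrated with a toy stand-in: gradient descent on a differentiable score $\mathcal{J}(x)$ pushes samples toward states where the benchmarked strategy is predicted to perform poorly. In the real pipeline this gradient is added inside the diffusion denoising update; the quadratic score and every name below are purely illustrative assumptions.

```python
import numpy as np

def toy_score(x):
    """Hypothetical stand-in for the learned performance score J(x):
    lower J = worse predicted performance of the benchmarked strategy."""
    return float(np.sum(x ** 2))

def toy_score_grad(x):
    """Analytic gradient of the toy score."""
    return 2.0 * x

def guided_sample(x0, n_steps=100, step=0.05, guide_scale=1.0):
    """Toy guided refinement: descend J so that sampled initial states
    minimize the benchmarked strategy's predicted performance."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_steps):
        # In a diffusion model, this term would be added to each
        # denoising step; here we apply plain gradient descent on J.
        x -= step * guide_scale * toy_score_grad(x)
    return x
```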
From \tb{table:guided}, we observe that ego vehicles' performance is similar when being evaluated on replay trajectories obtained by dataset sampling and normal sampling. This shows that the trajectories sampled by the generative model are similar to the dataset trajectories in evaluating the performance of the ego strategies. On the other hand, both stability and distance ratio evaluated on the guided generated trajectories are significantly lower than the others. In particular, the distance ratio of GAIL model is lowered by 16.1\% with guided sampling, and the stability of SAC model is lowered by 10.9\%.
\begin{table}
\begin{center}
\caption{Comparisons of different types of sampling replay trajectories.}
\label{table:guided}
\begin{tabular}{ccccc}
\toprule
Metric & Sample Type & \textbf{GAIL} & \textbf{SAC} & \textbf{BC} \\
\midrule
\multirow{3}{*}{Stability} & Dataset & 0.911 & 0.811 & 0.822 \\
& Normal & 0.908 & 0.807 & 0.824 \\
& Guided & 0.861 & 0.719 & 0.775 \\
\midrule
\multirow{3}{*}{Distance Ratio} & Dataset & 0.745 & 0.663 & 0.594 \\
& Normal & 0.734 & 0.653 & 0.591 \\
& Guided & 0.616 & 0.559 & 0.549 \\
\bottomrule
\end{tabular}
\end{center}
\end{table}
\subsection{Optimization under RITA Traffic Flow}
\begin{table}
\begin{center}
\caption{Completion rates of two models before and after being finetuned under RITA traffic flow in the cut-in scenario.}
\label{table:2}
\begin{tabular}{ccccc}
\toprule
& \textbf{GAIL} & \textbf{GAIL-finetune} & \textbf{SAC} & \textbf{SAC-finetune} \\
\midrule
Replay & 0.795 & 0.806 & 0.733 & 0.676 \\
RITA & 0.766 & 0.878 & 0.712 & 0.882 \\
\bottomrule
\end{tabular}
\end{center}
\end{table}
To further demonstrate the advantage of RITA, we take the history-replay environment as our baseline and conduct optimization tasks. We first train two policies, using SAC and GAIL respectively, until convergence. We then fine-tune them with RITA interactive traffic flow and evaluate them under both the history-replay environment and the RITA environment.
The results are shown in Table \ref{table:2} and suggest two significant facts: (1) a policy trained on history replay performs relatively poorly in the RITA environment, indicating that it can hardly handle a dynamic environment with realistic human responses; (2) a policy fine-tuned by RITA achieves high performance in both evaluations, implying that RITA produces a more robust policy than static human replay data. Given that in the RITA environment surrounding vehicles respond reasonably even when the policy deviates from the dataset trajectory, collisions that still occur there are harder to excuse. The performance drop thus suggests a substantial loss of interaction ability, which would cause collisions in similar real-world situations.
\section{RELATED WORK}
\subsection{Microscopic Traffic Simulation}
Unlike macroscopic traffic simulation, which models average vehicle dynamics like traffic density, microscopic traffic simulation separately models each vehicle and its dynamics, playing a critical role in optimizing self-driving strategies in simulators. Most traffic simulators or benchmarks use heuristic-based models to simulate the background traffic~\cite{gipps1981behavioural, treiber2000congested, elsayed2020ultra}, e.g., following the lane and avoiding head-on collisions~\cite{dosovitskiy2017carla}.
Since human driving behaviors are hard to define completely with heuristic rules, these methods lack the ability to model complex multi-vehicle interactions. SimNet~\cite{bergamini2021simnet} is similar to our approach in using a data-driven approach to obtain models for social vehicle control. However, the control models in SimNet are trained in an offline manner, which is more sensitive to distribution shifts.
Microscopic traffic simulation solutions in autonomous driving simulators need to offer easy-to-use interfaces to allow users to specify specific characteristics of the traffic flow.
However, most traffic generation algorithms do not provide such interfaces~\cite{bhattacharyya2020modeling, bansal2018chauffeurnet}, so the additional design is required when incorporating into simulators. RITA, as a complete microscopic traffic generation framework independent of the simulator, integrates the interface for traffic definition in RITAKit.
Another group of studies on microscopic traffic simulation focuses on generating traffic flows that make self-driving vehicles perform poorly. Ding et al.~\cite{ding2020learning} propose generating safety-critical scenarios by sampling from a pre-designed probabilistic graphical model, with conditional probabilities optimized via the policy gradient. Such reinforcement-learning-based optimization does not utilize real datasets, and the generated scenarios may lack fidelity. AdvSim~\cite{wang2021advsim} conducts black-box adversarial attacks on real-world trajectories by perturbing vehicles' behaviors. However, random perturbations may drive the generated samples away from the true data distribution. Although these corner-case generation methods can be used to evaluate the worst-case performance of a strategy, such unrealistic scenarios violate the fidelity requirement of RITA. Instead, RITA adversarially samples scenarios given the current strategy while constraining the generated scenarios to stay within the real data distribution.
\begin{table}[h!]
\vspace{-2pt}
\caption{Comparison of microscopic traffic flows in driving simulations. Sim, bench, and comp are short for simulator, benchmark, and component, respectively.}
\vspace{-4pt}
\label{tb:traffic-flow-comparison}
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{ccccc}
\toprule
\multirow{2}{*}{Name} & Data-driven & Configurable & Adversarial & \multirow{2}{*}{Type} \\
& Models & Interface & Generation & \\
\midrule
CARLA \cite{dosovitskiy2017carla} & \xmark & \cmark & \xmark & Sim \\
SMARTS \cite{zhou2020smarts} & \xmark & \cmark & \xmark & Sim \\
BARK \cite{bernhard2020bark}& \cmark & \cmark & \xmark & Bench \\
NuPlan \cite{caesar2021nuplan} & \cmark & \cmark & \xmark & Bench \\
SimNet \cite{bergamini2021simnet}& \cmark & \xmark & \xmark & Comp \\
AdvSim \cite{wang2021advsim}& \xmark & \xmark & \cmark & Comp \\
\midrule
\textbf{RITA (Ours)} & \cmark & \cmark & \cmark & Comp \\
\bottomrule
\end{tabular}}
\vspace{-6pt}
\end{table}
We compare microscopic traffic flows in existing driving simulation literature from three aspects in \tb{tb:traffic-flow-comparison}. Specifically, we judge whether these traffic flows are generated by data-driven driving models, provide a configurable interface for controllable generation, and support generating specific rare-case traffic.
\subsection{Human Behavior Modeling}
To more accurately evaluate the performance of the autonomous driving model in real traffic, we choose to deploy social vehicle models that imitate human drivers in the simulator. This requires us to adopt a practical approach to human behavior modeling.
Human behavior modeling is becoming a trending research direction in the field of human-robot interaction (HRI) and has been used for various purposes. A direct motivation is to obtain control policies by imitating human data and deploying the imitated models on robots or autonomous vehicles~\cite{huang2015adaptive, codevilla2018end, bhattacharyya2020modeling}.
Other than directly using human behavior models to control the robots, these models can also help robots make decisions by predicting the actions of humans who interact with them~\cite{mainprice2016goal, schwarting2019social}. Moreover, if learned human models are conditioned on the robots' actions, they can be used to reason future human responses to robot behavior, which enables the robots to proactively shape or guide human behaviors~\cite{dragan2015effects, tellex2014asking}.
Another goal of building human behavior models is to build more realistic simulators that can better access or improve the model performances in human-robot interactions without real interaction with humans~\cite{shi2019virtual, caesar2021nuplan}.
RITA aims to accomplish this goal in autonomous driving tasks and adopts several adversarial imitation learning methods to build human behavior models. Since human behavior modeling is attracting more and more research attention, more advanced human modeling techniques can be continuously incorporated into the RITA framework.
\section{Conclusions}
This paper presents RITA, a framework for generating high-quality traffic flow in a driving simulator. RITA contains two main components: RITABackend, which learns vehicle-control and static-trajectory generation models from real-world datasets, and, built on top of it, RITAKit, which provides an easy-to-use interface for customizing traffic flow. Combining these two modules, RITA delivers traffic flow with high fidelity, diversity, and controllability. Using RITA, we design two benchmark tasks with highly interactive traffic flow and conduct experiments that show the high quality of the generated traffic from multiple perspectives. We also demonstrate two use cases of RITA: evaluating the performance of existing driving strategies and fine-tuning those strategies.
We believe that RITA will be integrated into existing simulators, and we look forward to making RITA a standard component of driving simulators.
\section*{APPENDIX}
\section{Simulation Space}
\subsection{Observation Space}
\label{sec:appendix-obs}
To allow models to generalize across different maps, we collect information in egocentric coordinates from three main aspects: [\emph{Ego dynamics, Lane observation, Neighbor observation}].
\subsubsection{Ego dynamics}
Ego dynamics use absolute values to represent ego vehicle attributes. Here we simply use \emph{linear velocity}, since heading and position are relative quantities that change with the map.
\subsubsection{Lane observation}
In SMARTS, the simulator produces a list of equally spaced points along the centerline of each lane on the map, called waypoints. Each waypoint has a position and a heading, so by computing quantities relative to the nearest waypoint we can locate the ego vehicle on the map. We calculate the relative position and heading between the agent and the nearest waypoint of the ego, left, and right lanes.
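The relative waypoint features amount to rotating the world-frame offset into the ego frame and wrapping the heading difference, as in the following sketch (a standard frame transform, not SMARTS API code):

```python
import math

def relative_waypoint_obs(ego_pos, ego_heading, wp_pos, wp_heading):
    """Egocentric relative position and heading of the nearest waypoint.
    Positions are (x, y) in the world frame; headings in radians."""
    dx = wp_pos[0] - ego_pos[0]
    dy = wp_pos[1] - ego_pos[1]
    # Rotate the world-frame offset into the ego frame.
    c, s = math.cos(-ego_heading), math.sin(-ego_heading)
    rel_x = c * dx - s * dy
    rel_y = s * dx + c * dy
    # Wrap the heading difference to (-pi, pi].
    dh = (wp_heading - ego_heading + math.pi) % (2 * math.pi) - math.pi
    return rel_x, rel_y, dh
```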
\subsubsection{Neighbor observation}
We divide neighbor vehicles into eight areas according to their relative position, as illustrated in Fig.~\ref{fig:Neighbor_Position}. The first letter indicates the longitudinal position: ['B'ottom, 'M'iddle, 'T'op], while the second indicates the lane: ['L'eft, 'M'iddle, 'R'ight]. We calculate the relative position, heading, and speed of each neighbor.
\begin{figure}[htb]
\centering
\includegraphics[width=0.5\linewidth]{figs/Neighbor_Definition.pdf}
\caption{Neighbor Position. The first letter indicates the neighbor's relative position, while the second indicates its lane.}
\label{fig:Neighbor_Position}
\end{figure}
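The eight-area classification above can be sketched as a lookup on signed longitudinal and lateral offsets; the threshold values below are hypothetical, chosen only for illustration:

```python
def neighbor_area(dx: float, dy: float, lane_width: float = 3.7,
                  long_margin: float = 5.0) -> str:
    """Map a neighbor's egocentric offset to one of the eight areas.
    dx: longitudinal offset (ahead > 0); dy: lateral offset (left > 0)."""
    row = "T" if dx > long_margin else ("B" if dx < -long_margin else "M")
    col = "L" if dy > lane_width / 2 else ("R" if dy < -lane_width / 2 else "M")
    return row + col  # e.g. "TL" = top-left; "MM" would be the ego itself
```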
\subsection{Action Space}
We take continuous 2-dimensional actions, \emph{linear acceleration} $a_l$ and \emph{angular velocity} $w_a$. Since SMARTS deploys a dynamic vehicle model with physical constraints and we want behavior comparable to the NGSIM dataset, we limit $a_l \in [-3.0, 3.0]~\mathrm{m/s^2}$ and $w_a \in [-2.0, 2.0]~\mathrm{rad/s}$.
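These limits can be enforced with a simple clip on the raw action (a minimal sketch, independent of the SMARTS API):

```python
import numpy as np

# Physical limits matching the NGSIM-comparable behavior described above.
A_LIN_RANGE = (-3.0, 3.0)   # linear acceleration, m/s^2
W_ANG_RANGE = (-2.0, 2.0)   # angular velocity, rad/s

def clip_action(action):
    """Clip a raw (a_l, w_a) action into the allowed box."""
    a_l, w_a = action
    return (float(np.clip(a_l, *A_LIN_RANGE)),
            float(np.clip(w_a, *W_ANG_RANGE)))
```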
\section{Model Zoo}
The tabular model zoo used for constructing interaction scenarios is shown in Table \ref{table:Model Zoo}. We have trained common reactive models as well as models dealing with specific interactions.
\begin{table*}[htb]
\begin{center}
\begin{tabular}{c|c|c}
\toprule
\textbf{Algorithm} & \textbf{Training Scenarios} & \textbf{Model} \\
\midrule
GAIL & left cut-in, right cut-in, on-ramp, off-ramp & left cut-in, right cut-in, on-ramp, off-ramp \\
MAGAIL & all scenarios & reactive \\
InfoGAIL & left cut-in, right cut-in, on-ramp, off-ramp & left cut-in, right cut-in, on-ramp, off-ramp [$c$ = 1,2,3] \\
\bottomrule
\end{tabular}
\vspace{5pt}
\caption{Benchmark Model Zoo. The entire model zoo is trained from the open-sourced dataset. For GAIL and InfoGAIL, each interaction scenario trains a corresponding model. For MAGAIL, we get a general reactive model using the parameter-sharing technique. We can change the modality of InfoGAIL model by assigning a different value of latent variable $c$.}
\label{table:Model Zoo}
\end{center}
\end{table*}
\section{Scenario Generation Examples}
\label{appendix: Scenario Example}
We give extra code examples for creating specific interactive traffic following the scheme in the main paper. While the main paper gives a brief introduction using built-in configurations, here we perform a simple reconstruction to help users understand the RITA structure. Users are welcome to override the interaction handler and other interfaces to formulate the desired scenario.
\begin{figure*}[h]
\centering
\begin{minipage}{0.45\linewidth}
\begin{Verbatim}[numbers=left, xleftmargin=8mm]
ScenarioMaker(
scenario = "ngsim-us101",
bubble = Bubble(
type='moving',
zone=Rectangle(40,20)),
interaction = CutinHandler(
assign_type='common',
cutin_direction='left',
checker='lane_change',
),
guided_generation = DiffusionGenerator(
guided_mode='Distance',
target=policy,
),
)
\end{Verbatim}
\caption*{Left Cut-in Scenario Example}
\end{minipage}
\begin{minipage}{0.45\linewidth}
\begin{Verbatim}[numbers=left, xleftmargin=8mm]
ScenarioMaker(
scenario = "ngsim-i80",
bubble = Bubble(
type='fixed',
position=(140,0),
zone=Rectangle(110,15)),
interaction = OnRampHandler(
assign_type='rare',
checker = RampChecker(
ramp_lane='E3',
main_lane='gneE01'
)
),
guided_generation = "None",
)
\end{Verbatim}
\caption*{On-ramp Scenario Example}
\end{minipage}
\caption{Code Example with Simple Reconstruction. A basic \textit{InteractionHandler} requires two arguments: \textit{assign\_type} for interactive model assignment and \textit{checker} for interaction behavior identification. A basic \textit{DiffusionGenerator} needs a \textit{guided\_mode} (i.e., score function) to train $\mathcal{J}(x)$ and the \textit{target} model for simulation.}
\end{figure*}
\section{Algorithm Parameters}
Here we describe basic information about the implementations of the benchmarked algorithms. Algorithms used in the benchmark and optimization experiments share the same settings.
\subsection{Rule-Based Model}
The model from the SMARTS agent zoo is controlled by IDM. We use a keep-lane agent, as we do not ask the model to perform active interaction behavior but to respond reasonably to it.
\subsection{Machine Learning Model}
\begin{table}[htb]
\begin{center}
\caption{Algorithm Parameters}
\label{table:Algorithm Parameters}
\begin{tabular}{|c|c|}
\hline BC & parameters \\
\hline Network & MLP \\
Hidden Size & 256 \\
Hidden Layers & 3 \\
Batch Size & 1024 \\
Learning Rate & 0.0003 \\
Loss Function & MSE \\
\hline
\end{tabular}
\vspace{5pt}
\begin{tabular}{|c|c|}
\hline GAIL & parameters \\
\hline Network & MLP \\
Policy Hidden Size & 256 \\
Policy Hidden Layers & 3 \\
Discriminator Hidden Size & 64 \\
Discriminator Hidden Layer & 2 \\
Batch Size & 256 \\
Policy Learning Rate & 0.0003 \\
Discriminator Learning Rate & 0.0003 \\
Policy Trainer & SAC \\
\hline
\end{tabular}
\vspace{5pt}
\begin{tabular}{|c|c|}
\hline SAC & parameters \\
\hline Network & MLP \\
Policy Hidden Size & 256 \\
Policy Hidden Layers & 3 \\
Batch Size & 256 \\
Alpha & 0.2 \\
Policy Learning Rate & 0.0003 \\
Q Learning Rate & 0.0003 \\
Reward & Travel Distance \\
\hline
\end{tabular}
\end{center}
\end{table}
To conduct as fair a comparison as possible, all algorithms share the same state-action space and policy network structure. We run five random seeds for each algorithm and report the average performance. The necessary algorithm parameters are listed in Table \ref{table:Algorithm Parameters}.
\section{Additional Results}
Here we present results for the right cut-in and off-ramp scenarios. To achieve high fidelity, the data-driven model zoo must have access to an adequate amount of qualified interaction data to learn a good policy; a lack of the desired data harms both model performance and the simulated scenario. The training data for the right cut-in scenario accounts for only 1/6 of the total cut-in data, and for the off-ramp scenario, about 95\% of the data simply follows the lane without producing meaningful interaction, making it hard to create high-quality traffic flow.
\subsection{Model Performance}
Because of the data limitations mentioned above, the performance in Fig.~\ref{fig:Additional Model Performance} does not reflect ideal situations in either scenario. The right cut-in performance drops compared to the left cut-in, while the off-ramp performance achieves abnormally high results due to the absence of interaction trajectories in the data.
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=0.6\textwidth]{figs/model-cutinright.pdf}
\caption{Right Cut-in}
\end{subfigure}
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=0.6\textwidth]{figs/model-mergeout.pdf}
\caption{Off-ramp}
\end{subfigure}
\caption{Model Performance.}
\label{fig:Additional Model Performance}
\end{figure}
\subsection{Interaction Traffic Analysis}
The interaction quality suffers a similar decline as the model performance, as can be seen in Fig.~\ref{fig:Additional Fourier Analysis} and Fig.~\ref{fig:Additional Scatter Distribution}, where the average \textit{IoU} and \textit{Coverage} decrease and the scatter distribution matches the human data poorly. However, the traffic flow quality shown in Fig.~\ref{fig:Additional Traffic Flow} remains stable, as the reactive ability is not hurt.
\begin{figure*}[ht]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/FFT_Analysis_Color_1_2.pdf}
\caption{Fourier Analysis}
\label{fig:Additional Fourier Analysis}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/Scatter_Distribution_Color_1_2.pdf}
\caption{Scatter Distribution}
\label{fig:Additional Scatter Distribution}
\end{subfigure}
\begin{subfigure}[b]{0.264\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/multiagent_traffic_4.pdf}
\caption{Traffic Flow}
\label{fig:Additional Traffic Flow}
\end{subfigure}
\caption{Benchmark Quality Measurement. The first row corresponds to the right cut-in scenario, and the second to the off-ramp scenario.}
\label{fig:Benchmark Quality Measurement: Appendix}
\end{figure*}
\subsection{Benchmark Results}
In Fig.~\ref{fig:Additional Benchmark Results}, we again benchmark the algorithms in the right cut-in and off-ramp scenarios. In the right cut-in scenario, GAIL shows an even larger performance advantage than in the left cut-in scenario, as it can make use of both expert data and simulation results. Due to the aforementioned model limitation, we design a simpler task for the off-ramp scenario. Here IDM achieves the best performance, as it can execute lane-following tasks flawlessly.
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=0.6\textwidth]{figs/benchmark-cutinright.pdf}
\caption{Right Cut-in}
\end{subfigure}
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=0.6\textwidth]{figs/benchmark-mergeout.pdf}
\caption{Off-ramp}
\end{subfigure}
\caption{Benchmark Results.}
\label{fig:Additional Benchmark Results}
\end{figure}
\section{Introduction}
This document explains the main features of the `\texttt{aamas}'
document class, which is essentially identical to the `\texttt{acmart}'
document class provided by the ACM. The only difference is a minor
modification to allow for the correct copyright attribution to IFAAMAS.
For detailed documentation of the original document class, please refer
to the relevant website maintained by the~ACM:
\begin{center}
\url{https://www.acm.org/publications/proceedings-template}
\end{center}
The first command in your source file should be either one of these:
\begin{verbatim}
\documentclass[sigconf]{aamas}
\documentclass[sigconf,anonymous]{aamas}
\end{verbatim}
The first variant should be used for final papers. The second should be
used when you submit your paper for blind review; it will replace the
names of the authors with the submission number.
Make sure your paper includes the correct copyright information and
the correct specification of the \emph{ACM Reference Format}. Both of
these will be generated automatically if you include the correct
\emph{copyright block} as shown in the source file of this document.
Modifying the template---e.g., by changing margins, typeface sizes,
line spacing, paragraph or list definitions---or making excessive use
of the `\verb|\vspace|' command to manually adjust the vertical spacing
between elements of your work is not allowed. You risk getting your
submission rejected (or your final paper excluded from the proceedings)
in case such modifications are discovered. The `\texttt{aamas}' document
class requires the use of the \textit{Libertine} typeface family, which
should be included with your \LaTeX\ installation. Please do not use
other typefaces instead.
Please consult the \emph{Call for Papers} for information on matters
such as the page limit or anonymity requirements. It is available from
the conference website:
\begin{center}
\url{https://aamas2023.soton.ac.uk/}
\end{center}
To balance the columns on the final page of your paper, use the
`\texttt{balance}' package and issue the `\verb|\balance|' command
somewhere in the text of what would be the first column of the last
page without balanced columns. This will be required for final papers.
\section{The Preamble}
You will be assigned a submission number when you register the abstract
of your paper on \textit{EasyChair}. Include this number in your
document using the `\verb|\acmSubmissionID|' command.
Then use the familiar commands to specify the title and authors of your
paper in the preamble of the document. The title should be appropriately
capitalised (meaning that every `important' word in the title should
start with a capital letter). For the final version of your paper, make
sure to specify the affiliation and email address of each author using
the appropriate commands. Specify an affiliation and email address
separately for each author, even if two authors share the same
affiliation. You can specify more than one affiliation for an author by
using a separate `\verb|\affiliation|' command for each affiliation.
Provide a short abstract using the `\texttt{abstract}' environment.
Finally, specify a small number of keywords characterising your work,
using the `\verb|\keywords|' command.
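For illustration, a minimal preamble along these lines might look as
follows (the submission number, name, affiliation and keywords are
placeholders):
\begin{verbatim}
\documentclass[sigconf,anonymous]{aamas}
\acmSubmissionID{123}
\title{An Appropriately Capitalised Title}
\author{Ada Lovelace}
\affiliation{%
  \institution{University of Example}
  \city{Exampletown}
  \country{United Kingdom}}
\email{[email protected]}
\keywords{agents, multiagent systems, templates}
\end{verbatim}
The `\texttt{abstract}' environment itself belongs in the body of the
document, just before the `\verb|\maketitle|' command.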
\section{The Body of the Paper}
For help with typesetting the body of your paper in \LaTeX\@, please
make use of the familiar resources~\cite{Lam94}. In this section we
merely highlight a few specific features.
\subsection{Mathematical Expressions}
You can typeset all sorts of in-line mathematical expressions
with the usual \verb|$...$| construct, as in
$\Diamond\Diamond\varphi \rightarrow \Diamond\varphi$ or
$\boldsymbol{R} = (R_1,\ldots,R_n)$.
For more complex expressions, it may often be preferable to use one of
the various equation-type environments available in \LaTeX\@, as shown
in the following example:
\begin{eqnarray}
Z_i & = & \frac{u_i(x_i) - u_i(x_{-i})}{u_i(x_i)}
\end{eqnarray}
Here is a second example for an equation:
\begin{eqnarray}\label{eq:vcg}
p_i(\boldsymbol{\hat{v}}) & = &
\sum_{j \neq i} \hat{v}_j(f(\boldsymbol{\hat{v}}_{-i})) -
\sum_{j \neq i} \hat{v}_j(f(\boldsymbol{\hat{v}}))
\end{eqnarray}
Use the usual combination of `\verb|\label|' and `\verb|\ref|' to refer
to numbered equations, such as Equation~(\ref{eq:vcg}) above. Of course,
introducing numbers in the first place is only helpful if you in fact
need to refer back to the equation in question elsewhere in the paper.
\subsection{Tables and Figures}
Use the `\texttt{table}' environment (or its variant `\texttt{table*}')
in combination with the `\texttt{tabular}' environment to typeset tables
as floating objects. The `\texttt{aamas}' document class includes the
`\texttt{booktabs}' package for preparing high-quality tables. Tables
are often placed at the top of a page near their initial cite, as done
here for Table~\ref{tab:locations}.
\begin{table}[t]
\caption{Locations of the first five editions of AAMAS}
\label{tab:locations}
\begin{tabular}{rll}\toprule
\textit{Year} & \textit{City} & \textit{Country} \\ \midrule
2002 & Bologna & Italy \\
2003 & Melbourne & Australia \\
2004 & New York City & USA \\
2005 & Utrecht & The Netherlands \\
2006 & Hakodate & Japan \\ \bottomrule
\end{tabular}
\end{table}
The caption of a table should be placed \emph{above} the table.
Always use the `\verb|\midrule|' command to separate header rows from
data rows, and use it only for this purpose. This enables assistive
technologies to recognise table headers and support their users in
navigating tables more easily.
\balance
Use the `\texttt{figure}' environment for figures. If your figure
contains third-party material, make sure to clearly identify it as such.
Every figure should include a caption, and this caption should be placed
\emph{below} the figure itself, as shown here for Figure~\ref{fig:logo}.
In addition, every figure should also have a figure description, unless
it is purely decorative. Use the `\verb|\Description|' command for this
purpose. These descriptions will not be printed but can be used to
convey what's in an image to someone who cannot see it. They are also
used by search engine crawlers for indexing images, and when images
cannot be loaded. A figure description must consist of unformatted plain
text of up to 2000~characters. For example, the definition of
Figure~\ref{fig:logo} in the source file of this document includes the
following description: ``Logo of AAMAS 2023 -- The 22nd International Conference on Autonomous Agents and Multiagent Systems.'' For more information on how best to write figure descriptions
and why doing so is important, consult the information available here:
\begin{center}
\url{https://www.acm.org/publications/taps/describing-figures/}
\end{center}
The use of colour in figures and graphs is permitted, provided they
remain readable when printed in greyscale and provided they are
intelligible also for people with a colour vision deficiency.
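For instance, a figure with both a caption and a description could be
set up along the following lines (the file name and texts are
placeholders):
\begin{verbatim}
\begin{figure}
  \centering
  \includegraphics[width=\linewidth]{example-image}
  \Description{A plain-text description of what the
    figure shows, for readers who cannot see it.}
  \caption{A concise caption, placed below the figure.}
  \label{fig:example}
\end{figure}
\end{verbatim}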
\section{Citations and References}
The use of {\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08em\TeX} to prepare your list of references is highly
recommended. To include the references at the end of your document, put
the following two commands just before the `\verb|\end{document}|'
command in your source file:
\begin{verbatim}
\bibliographystyle{ACM-Reference-Format}
\end{verbatim}
\section{Introduction}
Hadronic atoms provide valuable information on the in-medium
modification of hadron properties, on the hadron-nucleon interaction, and
also on properties of nuclei not easily accessible by other probes, such as
the neutron density distribution. This field has long been the subject
of thorough study, both theoretical and experimental,
for pions and anti-kaons \cite{Ericson:1966fm,Friedman:2007zza,
Nieves:1993ev,GarciaRecio:1991wk,Hirenzaki:2008zz,Gilg:1999qa}, and more
recently for anti-protons
\cite{Wycech:2007jb,Klos:2007is,Trzcinska:2001sy}. However, for
anti-charmed atoms not much theoretical work exists in the literature.
To our knowledge, only Ref.~\cite{Tsushima:1998ru} studies $D^-$
atoms. There, the $1s$, $2s$ and $1p$ states (neglecting widths) of
$D^-$ in $^{208}$Pb were evaluated using the quark-meson coupling
model of Ref.~\cite{Guichon:1987jp}. ${\bar D} NN$ bound states (rather
than atoms) were predicted in \cite{Yasui:2009bz}. On the experimental
side, the study of anti-$D$ mesic atoms poses a serious challenge. The
study of open charm systems seems timely in view of the forthcoming
experiments by the PANDA \cite{Wiedner:2011mf,PANDA} and CBM
\cite{Aichelin,Staszel:2010zz} Collaborations at the future FAIR
facility at Darmstadt \cite{fair}.
As compared to other mesic atoms, $D^-$ atoms have a number of
specific features that make them worth studying. First, the ${\bar D}$
meson is so heavy that the atomic meson wave function has a sizable
overlap with the nucleus, especially for the low-lying levels and for
heavy nuclei. Hence the strong interaction effects are expected to be
larger than for other mesic atoms, even if the optical potentials
themselves are of comparable strength. Second, ${\bar D} N$ has no lower
hadronic channels for strong interaction decay. This is unlike other
hadron-nucleus bound systems. For instance, in pionic atoms the
channel $\pi NN\to NN$ is available, in $K^-$ atoms $\bar{K} N\to \pi
\Lambda$ and $\pi\Sigma$, in $D^0$-nucleus $D N \to \pi \Lambda_c$ and
$\pi\Sigma_c$, in $\bar{p}$ atoms $\bar{p} N\to$pions, or in
$\eta$-nucleus, $\eta N\to \pi N$. So, if bound, the ${\bar D}$ remains in the
nucleus until it decays weakly. Third, heavy quark spin
symmetry (HQSS), a well established approximate QCD symmetry
\cite{Isgur:1989vq,Neubert:1993mb}, is expected to play an important
role in $D^-$ atoms. One of the consequences of HQSS is that the
$\bar{D}^*$ vector meson degrees of freedom should have some
(important) influence on these systems. Hence, such degrees of freedom
should be incorporated in any realistic treatment. This is
automatically achieved in the SU(8) extended Weinberg-Tomozawa model
followed in this work~\cite{GarciaRecio:2008dp,Gamermann:2010zz}. Fourth, all
$t$-channel vector meson exchange models without incorporating HQSS,
that is, not including vector mesons in the coupled-channel space, produce
a featureless real repulsive potential below threshold
\cite{Lutz:2005vx,Tolos:2007vh,JimenezTejero:2011fc}. This scenario is expected to
change when HQSS is enforced. Indeed, the calculation of
\cite{Yasui:2009bz} identifies an $I=0, J=1/2^-$ ${\bar D} N$ bound state
with $1.4\,{\rm MeV}$ of binding energy. The same state is also found in
the SU(8) model of Ref.~\cite{Gamermann:2010zz}. This exotic baryonic state plays
an important role in the $D^-$ atom dynamics. Due to the existence of
this exotic state, the ${\bar D}$ optical potential turns out to be
attractive, dissipative and strongly energy dependent. In addition,
because of this energy dependence, which is not so relevant in other mesic
atoms, a proper implementation of the electromagnetic interaction, through
minimal coupling, is required.
The paper is organized as follows. In Sect.~\ref{sec:2} we describe
the calculation of the ${\bar D}$ self-energy in
nuclear matter
and present our results for the ${\bar D}$ optical potential. We carry out a self-consistent calculation in
symmetric nuclear matter at zero temperature for energies around the
${\bar D}$ mass. In Sect.~\ref{sec:3} we present our results for the
energies and widths of the $D^-$ mesic atom levels in $^{12}$C,
$^{40}$Ca, $^{118}$Sn and $^{208}$Pb. For this purpose, we solve the
Schr\"odinger equation with a finite nuclei ${\bar D}$ optical potential
obtained from that derived for nuclear matter, in the previous
section, and making use of the local density approximation. In this
section, we also extend our study to the case of ${\bar D}^0$ bound states.
Finally in Sect.~\ref{sec:4}, we discuss possible decay mechanisms of
the bound states, while in Sect.~\ref{sec:5} we summarize the main
conclusions of the present work.
\section{The ${\bar D}$ self-energy and optical potential}
\label{sec:2}
The self-energy in symmetric nuclear matter for the ${\bar D}$ meson is obtained
following a self-consistent procedure in coupled channels, as was done
previously for the $D$ meson \cite{Tolos:2009nn}. The $s$-wave transition potential of the
Bethe-Salpeter equation is derived from an effective Lagrangian that
implements HQSS. This is an approximate QCD symmetry that treats on equal
footing heavy pseudo-scalar and vector mesons
\cite{Isgur:1989vq,Neubert:1993mb}. Therefore, we calculate simultaneously the
self-energy of the ${\bar D}^*$, the HQSS partner of the ${\bar D}$.
As shown in \cite{GarciaRecio:2006wb,GarciaRecio:2005hy}, the
Weinberg-Tomozawa (WT) meson-baryon Lagrangian admits a unique and
natural extension with spin-flavor symmetry for any number of flavors.
In addition to $0^+$ mesons and $1/2^+$ baryons, this requires the
inclusion of $1^-$ mesons and $3/2^+$ baryons. For four flavors this
interaction has SU(8) symmetry and automatically enjoys HQSS in the
$C=-1$ sector. Schematically \cite{GarciaRecio:2008dp,Gamermann:2010zz},
\begin{equation}
{\cal L_{\rm WT}^{\rm SU(8)}} = \frac{1}{f^2} \left((M^\dagger\otimes
M)_{{\bf 63}_a}\otimes (B^\dagger\otimes B)_{{\bf 63}}\right)_{{\bf 1}}
.
\label{eq:NOcoupl}
\end{equation}
The tree level amplitudes for different isospin ($I$), total angular momentum
($J$), charm ($C$) and strangeness ($S$) take the form
\begin{equation}
V^{IJSC}_{ab}(\sqrt{s})= D^{IJSC}_{ab}
\frac{2\sqrt{s}-M_a-M_b}{4\,f_a f_b} \sqrt{\frac{E_a+M_a}{2M_a}}
\sqrt{\frac{E_b+M_b}{2M_b}} \,, \label{eq:vsu8break}
\end{equation}
where $M_a$ ($M_b$) and $E_a$ ($E_b$) are, respectively, the mass and the
center of mass energy of the baryon in the $a$ ($b$) channel. The matrix
elements $D^{IJSC}_{ab}$ of the SU(8) WT interaction can be obtained from Wick
contractions using the hadronic wave functions~\cite{GarciaRecio:2008dp} or by means of
the SU(8)$\supset$SU(4)$\otimes$SU(2) Clebsch-Gordan
coefficients~\cite{GarciaRecio:2010vf}. The spin-flavor SU(8) symmetry is strongly broken
in nature and this is incorporated by adopting the physical hadron masses and
different weak decay constants, $f_a$, for non-charmed and charmed,
pseudo-scalar and vector mesons \cite{GarciaRecio:2008dp,Gamermann:2010zz}.
In what follows, we focus on the non-strange ($S=0$) and singly anti-charmed
($C=-1$) sector, where the ${\bar D} N$ and ${\bar D}^* N$ states are embedded. The
channels involved in the coupled-channel calculation are: ${\bar D} N$ and ${\bar D}^*
N$ for $I=0, J=1/2$; ${\bar D}^* N$ for $I=0, J=3/2$; ${\bar D} N$, ${\bar D}^* N$ and ${\bar D}^*
\Delta$ for $I=1, J=1/2$; and ${\bar D} \Delta$, ${\bar D}^* N$ and ${\bar D}^* \Delta$ for
$I=1, J=3/2$.
The amplitudes in nuclear matter [$T^{\rho,IJ}(P^0,{\bm P})$] are obtained by
solving the on-shell Bethe-Salpeter equation with the tree level amplitude
$V^{IJ}(\sqrt{s})$:
\begin{eqnarray}
T^{\rho,IJ}(P) &=& \frac{1}{1-
V^{IJ}(\sqrt{s})\, G^{\rho,IJ}(P)}\,V^{IJ}(\sqrt{s})
,
\label{eq:scat-rho}
\end{eqnarray}
where the diagonal $G^{\rho,IJ}(P)$ matrix accounts for the
meson-baryon loop in nuclear matter. The logarithmic divergence in
the vacuum part of the loop function, $G^0(\sqrt{s})$, is removed by
subtraction. Following ~\cite{GarciaRecio:2008dp,Gamermann:2010zz}, we set
$G^{0,IJ}(\sqrt{s}=\mu^{IJ})=0$ with
\begin{equation}
\left (\mu^{IJ}\right)^2 = \alpha
\left(m_{{\rm th}}^2+M^2_{{\rm th}}\right)
.
\label{eq:sp}
\end{equation}
Here $m_{{\rm th}}$ and $M_{{\rm th}}$ are, respectively, the meson and baryon
masses of the hadronic channel with lowest mass threshold for the given $I,J$.
The value of the parameter $\alpha$ is set to one \cite{Hofmann:2005sw}. However, in the
following, we will also vary $\alpha$ to estimate the sensitivity
of our results to changes in the regularization scale.
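Schematically, the coupled-channel structure of Eq.~(\ref{eq:scat-rho}) amounts, for each $IJSC$ sector, to a simple matrix inversion. The following sketch uses toy two-channel inputs; the couplings and loop values are illustrative and not those of the actual SU(8) model:

```python
import numpy as np

def t_matrix(v, g):
    """On-shell coupled-channel amplitude T = (1 - V G)^(-1) V,
    with V the tree-level potential matrix and G the diagonal
    matrix of meson-baryon loop functions."""
    n = v.shape[0]
    return np.linalg.solve(np.eye(n) - v @ np.diag(g), v)

# toy two-channel example standing in for Dbar N / Dbar* N mixing
v = np.array([[0.0, -1.5], [-1.5, -0.8]])    # tree-level couplings
g = np.array([-0.3 + 0.0j, -0.2 + 0.0j])     # loop functions
t = t_matrix(v, g)
```

Note that the resummed amplitude satisfies $T = V + V G T$ by construction, so even a vanishing diagonal coupling (as for ${\bar D} N$ in $I=0$) can be dressed by the off-diagonal terms.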
Nuclear matter effects enter in the meson-baryon loop function
$G^{\rho,IJ}(P)$. One of the sources of density dependence comes from Pauli
blocking. Another source is related to the change of the properties of mesons
and baryons in the intermediate states due to the interaction with nucleons of
the Fermi sea. We proceed as in Ref.~\cite{Tolos:2009nn}, where the most important
changes in matter came from the Pauli blocking of nucleons and from the
self-consistent treatment of the open charm self-energies.
Thus, for the ${\bar D} N$ and ${\bar D}^* N$ channels, the meson-baryon loop function
in matter is given by ~\cite{Tolos:2009nn}:
\begin{eqnarray}
{G^\rho}_{{\bar D}({\bar D}^*)N}(P)
&=&
G^0_{{\bar D}({\bar D}^*)N}(\sqrt{s})+
\int \frac{d^3 q}{(2 \pi)^3} \,
\frac{ M_N }{ E_N({\bm p})} \,
\Bigg[
\frac{-n({\bm p})}{(P^0 - E_N({\bm p}))^2-\omega({\bm q})^2+i\varepsilon}
\,
\label{eq:Glarga}
\\ &&
+(1-n({\bm p}))
\Bigg(
\frac{-1/(2 \omega({\bm q}))}
{P^0 -E_N({\bm p})-\omega({\bm q})+i \varepsilon}
+
\int_{0}^{\infty} \, d\omega \,
\frac{S_{{\bar D}({\bar D}^*)}(\omega,{\bm q})}{P^0 -E_N({\bm p})-\omega+i\varepsilon}
\, \Bigg) \Bigg] \Bigg|_{{\bm p}={\bm P}-{\bm q}}
,
\nonumber
\end{eqnarray}
where $E_N({\bm p})=\sqrt{{\bm p}^2+M_N^2}$ is the nucleon energy and
$\omega({\bm q})=\sqrt{{\bm q}^2+m_{{\bar D}({\bar D}^*)}^2}$ is the
${\bar D}({\bar D}^*)$ energy. The free loop function $G^0(\sqrt{s})$ is
corrected in matter by terms proportional to the nucleon Fermi
distribution $n({\bm p})=\Theta(p_F-|{\bm p}|)$ that take into
account Pauli blocking effects. The quantities ${\bm p}$ and $p_F$ are
the momentum of the nucleon and the Fermi momentum at nuclear density
$\rho$, respectively. The implementation of
the ${\bar D}$ and ${\bar D}^*$ properties in matter comes through the meson
spectral functions, $S_{{\bar D}({\bar D}^*)}(\omega,{\bm q})$, which are
defined from the in-medium ${\bar D}$ and ${\bar D}^*$ meson propagators:
\begin{eqnarray}
D^\rho_{{\bar D} ({\bar D}^*)}(q)
&=&
\left ((q^0)^2 -\omega({\bm q})^2-\Pi_{{\bar D}({\bar D}^*)}(q) \right )^{-1}
,
\nonumber \\
S_{{\bar D}({\bar D}^*)}(q) &=& -\frac{1}{\pi}\,{\rm Im} D^\rho_{{\bar D} ({\bar D}^*)}(q)
\quad \mbox{(for~$q^0>0$)}
.
\label{eq:Drho}
\end{eqnarray}
The self-energies, $\Pi_{{\bar D}({\bar D}^*)}(q^0,{\bm q}; \rho)$, are obtained
self-consistently from the in-medium ${\bar D} N$ and ${\bar D}^* N$ effective
interactions as we will show in the following.
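Given a self-energy, the spectral function of Eq.~(\ref{eq:Drho}) follows directly. A sketch with an arbitrary constant self-energy (the actual $\Pi_{{\bar D}}$ is energy, momentum and density dependent) reads:

```python
import numpy as np

def spectral(q0, q, mass, pi_self):
    """Dbar spectral function S = -(1/pi) Im D(q0, q), with the
    in-medium propagator D = 1/((q0)^2 - q^2 - m^2 - Pi)."""
    d = 1.0 / (q0**2 - q**2 - mass**2 - pi_self)
    return -d.imag / np.pi

# quasi-particle peak for a constant toy self-energy, at q = 0
q0 = np.linspace(1600.0, 2100.0, 1000)              # MeV
s = spectral(q0, 0.0, 1867.0, -30000.0 - 40000.0j)  # toy Pi in MeV^2
```

With ${\rm Im}\,\Pi<0$ the spectral function is positive and, for a weakly energy-dependent self-energy, approximately saturates the sum rule $\int 2q^0 S\, dq^0 \simeq 1$.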
As for ${\bar D} \Delta$ and ${\bar D}^* \Delta$ channels, we include the self-energy
of the ${\bar D}$ and ${\bar D}^*$ mesons. Then, the equivalent of
Eq.~(\ref{eq:Glarga}) for those channels reads \cite{Tolos:2009nn}
\begin{eqnarray}
{G^\rho}_{{\bar D}({\bar D}^*)\Delta}(P) &=&
G^0_{{\bar D}({\bar D}^*)\Delta}(\sqrt{s})
+\int \frac{d^3 q}{(2 \pi)^3} \,
\frac{ M_\Delta }{ E_\Delta({\bm p})} \,
\Bigg (
\frac{-1/(2 \omega({\bm q}))}
{P^0 -E_\Delta({\bm p})-\omega({\bm q})+i \varepsilon}
\label{eq:propDD}\\
&&
+
\int_{0}^{\infty} \,
d\omega \,
\frac{S_{{\bar D}({\bar D}^*)}(\omega,{\bm q})}{P^0 -E_\Delta({\bm p})
-\omega+i\varepsilon}
\, \Bigg ) \Bigg| _{{\bm p}={\bm P}-{\bm q}}
,
\nonumber
\end{eqnarray}
with $E_\Delta({\bm p})=\sqrt{{\bm p}^2+M_\Delta^2}$. The effect of
the vacuum width of the $\Delta$ has not been included. The strong
width of the ${\bar D}^*$ is very small, as a consequence of HQSS.
The ${\bar D}$ self-energy in symmetric nuclear matter is obtained by summing
the different isospin transition amplitudes for ${\bar D} N$ over the nucleon Fermi
distribution as
\begin{eqnarray}
\Pi_{{\bar D}}(q^0,{\bm q}; \rho) &=& \int_{p \leq p_F}
\frac{d^3p}{(2\pi)^3} \,
\Big[\, T^{\rho,0,1/2}_{{\bar D} N} (P^0,{\bm P}) +
3 \, T^{\rho,1,1/2}_{{\bar D} N}(P^0,{\bm P}) \Big]
.
\label{eq:selfd}
\end{eqnarray}
Simultaneously, the ${\bar D}^*$ meson self-energy is derived from the sum
over the ${\bar D}^*N$ amplitudes as\footnote{We neglect the enhancement in
the ${\bar D}^*$ width due to coupling to ${\bar D}\pi$ (and their medium
corrections). The analogous mechanism for $\bar{K}^*\to\bar K \pi$
was considered in \cite{Tolos:2010fq}.}
\begin{eqnarray}
\Pi_{{\bar D}^*}(q^0,{\bm q}; \rho\,) &=& \int _{p \leq p_F} \frac{d^3p}{(2\pi)^3} \,
\Bigg[ \frac{1}{3} \, T^{\rho,0,1/2}_{{\bar D}^* N}(P^0,{\bm P}) +
T^{\rho,1,1/2}_{{\bar D}^* N}(P^0,{\bm P})
\nonumber \\
&&
+ \frac{2}{3} \,
T^{\rho,0,3/2}_{{\bar D}^* N}(P^0,{\bm P}) +
2 \, T^{\rho,1,3/2}_{{\bar D}^* N}(P^0,{\bm P})\Bigg]
.
\label{eq:selfds}
\end{eqnarray}
\noindent
In the above equations, $P^0=q^0+E_N({\bm p})$ and ${\bm
P}={\bm q}+{\bm p}$ are the total energy and momentum of the
meson-nucleon pair in the nuclear matter rest frame, and $(q^0,{\bm
q})$ and $(E_N,{\bm p})$ stand for the energy and momentum of the
meson and nucleon, respectively, in that frame.
As mentioned
previously, those self-energies are determined self-consistently since they
are obtained from the in-medium amplitudes which contain the meson-baryon loop
functions, and those quantities themselves are functions of the self-energies.
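The structure of the Fermi-sea sum in Eq.~(\ref{eq:selfd}) can be illustrated as follows, with toy constant amplitudes in place of the in-medium $T$-matrices and the amplitudes taken to depend on the pair energy $P^0$ only:

```python
import numpy as np

def self_energy(q0, p_fermi, t_i0, t_i1, n=400):
    """Sketch of the Fermi-sea sum: integrate T^(I=0) + 3 T^(I=1)
    over the occupied nucleon states; after the angular integration
    d^3p/(2 pi)^3 -> p^2 dp/(2 pi^2).  The amplitudes are taken here
    as toy functions of the pair energy P0 = q0 + E_N(p) only."""
    m_n = 939.0                                   # nucleon mass, MeV
    p = np.linspace(0.0, p_fermi, n)
    p0 = q0 + np.sqrt(p**2 + m_n**2)
    f = p**2 * (t_i0(p0) + 3.0 * t_i1(p0)) / (2.0 * np.pi**2)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(p))  # trapezoid rule

# attractive toy amplitudes at normal nuclear density (p_F ~ 268 MeV)
pi_toy = self_energy(1870.0, 268.0,
                     lambda p0: -2.0 + 0.0 * p0,
                     lambda p0: -1.0 + 0.0 * p0)
```

For constant amplitudes the sum reduces to $(T^{I=0}+3T^{I=1})\,p_F^3/(6\pi^2)$, i.e., the low-density $T\rho$ limit; the full calculation departs from this because the in-medium amplitudes themselves depend on density.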
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\textwidth]{Pi.eps}
\caption{\small Real and imaginary parts of the ${\bar D}$ self-energy over
$2m_{\bar D}$, at ${\bm q}=0$, as functions of the meson energy $q^0$ for
different densities and two subtraction points with $\alpha=1$ (left
panels) and $\alpha=1.2$ (right panels). The oblique line is the
function $((q^0)^2-m_{\bar D}^2)/(2 m_{\bar D})$. The SU(4) ${\bar D}$ self-energy
obtained in Ref.~\cite{Tolos:2007vh} for normal nuclear matter
density is also displayed.}
\label{fig:self}
\end{center}
\end{figure}
We are interested in studying possible ${\bar D}$ bound states in nuclei.
Therefore, we concentrate on the self-energy for $q^0$ around the ${\bar D}$
mass. In Fig.~\ref{fig:self} we show the ${\bar D}$ self-energy over $2m_{\bar D}$, as a
function of the ${\bar D}$ energy, for various nuclear densities $\rho$, and with
the ${\bar D}$ meson momentum $\bm q=0$. We display results for two values for the
subtraction point (see Eq.~(\ref{eq:sp})): $\alpha=1$ (left panels) and
$\alpha=1.2$ (right panels). For comparison, we also show results for the
SU(4) WT model of Ref.~\cite{Tolos:2007vh} at normal nuclear density,
$\rho_0=0.17 \ {\rm fm^{-3}}$.
It is worth noticing a resonant structure (more pronounced for the
preferred value $\alpha=1$) close to the ${\bar D} N$ threshold, which will
be of utmost importance for the study of ${\bar D}$ bound states. This
structure results from a pole in the free space amplitude of the
sector $I=0,J=1/2$ at $2805\,{\rm MeV}$ (a weakly bound pentaquark
state) that strongly couples to ${\bar D} N$ and ${\bar D}^* N$ states
\cite{Gamermann:2010zz} (also found in \cite{Yasui:2009bz}). For reference
we will call this state $X(2805)$\footnote{This state is bound by only
about 1 MeV in the free space, and it is one of the most interesting
predictions of Ref.~\cite{Gamermann:2010zz}. Moreover, it appears as a
consequence of considering heavy vector meson degrees of freedom, as
required by HQSS. Indeed, the diagonal ${\bar D} N$ WT interaction is zero
in this sector and thus, the $X(2805)$ is generated thanks to the
coupled channel dynamics between the ${\bar D} N$ and ${\bar D}^* N$
pairs. Thus, this bound state is absent in the free space SU(4) WT
model of Ref.~\cite{Hofmann:2005sw}, on which the nuclear
medium approach of Ref.~\cite{Tolos:2007vh} is based.}. The situation
has some similarities with the $\bar{K}N$ interaction, which is
governed by the $\Lambda(1405)$ resonance. The $\Lambda(1405)$
dominates the behavior of the $\bar K N$ interaction close to
threshold similarly to the pole in $2805\,{\rm MeV}$ for the $\bar D
N$ amplitude. However, the $\Lambda(1405)$ can decay into $\pi
\Sigma$, whereas the $X(2805)$ is below all thresholds for strong
interaction decay. The exotic $X(2805)$ has a HQSS partner with
$I=0,J=3/2$, a ${\bar D}^* N$ bound state with mass $2922\,{\rm MeV}$, as seen
in Ref.~\cite{Gamermann:2010zz}.
In contrast to the SU(8) scheme and as mentioned above, a resonant
structure is not observed in the SU(4) WT model of
Ref.~\cite{Tolos:2007vh}. The SU(4) amplitude is repulsive
and shows a smooth behavior as a function of the energy. A similar
repulsive effect was observed in the $t$-channel vector meson exchange
models of Refs. \cite{Lutz:2005vx,JimenezTejero:2011fc}.
Due to the strong energy dependence of the in-medium effective interaction in
the SU(8) WT scheme close to threshold, any slight change in the parameters of
the model as well as in the self-consistent procedure may have strong
consequences on the formation of ${\bar D}$-nucleus bound states. In order to mimic
those changes, we have slightly varied the subtraction point, namely, to
$\alpha=1.2$. In this way we study two very distinct situations for the
formation of bound states and set our theoretical uncertainties.
The ${\bar D}$ self-energy is evaluated in infinite nuclear matter. In
finite nuclei we use the local density approximation (LDA),
substituting $\rho$ by $\rho (r)$, which is the local density at each
point in the nucleus. For the $s$-wave, as is the case here, it was
shown in Ref.~\cite{Nieves:1993ev} that the LDA gives the same results as a
direct finite nucleus calculation. The LDA ${\bar D}$ self-energy allows us to
define a local optical potential. In mesic atoms this optical
potential is often taken to be energy independent and fixed to its value at
threshold ($q^0=m_{\bar D},\, {\bm q}=0$). However, both the real and the
imaginary parts of the ${\bar D}$ self-energy, around the ${\bar D}$-meson mass,
show a pronounced energy dependence, as can be appreciated in
Fig.~\ref{fig:self}. Hence, a realistic determination of the ${\bar D}$
bound states should take this energy dependence into account, as done
previously for $\eta$- and $D^0$-nucleus systems
\cite{GarciaRecio:2002cu,GarciaRecio:2010vt}. Thus, we use an energy dependent
optical potential defined as:
\begin{equation}
V_{\rm opt}(r,q^0) = \frac{1}{2 q^0} \Pi_{{\bar D}}(q^0,{\bm q}=0,\,\rho(r))
.
\label{eq:UdepE}
\end{equation}
Most of the imaginary part for $q^0<m_{\bar D}$ displayed in
Fig.~\ref{fig:self} comes from particle-hole production and this is
allowed due to the attractive potential felt by the ${\bar D}$ in the
medium. The quantity $((q^0)^2-m_{\bar D}^2)/(2 m_{\bar D})$ is displayed in
Fig.~\ref{fig:self} as an oblique straight line. The
leftmost crossing point of this line with the real part of the
self-energy (divided by $2m_{\bar D}$) signals the opening of the
${\bar D}$-particle-hole threshold. For the energies displayed,
$((q^0)^2-m_{\bar D}^2)/(2 m_{\bar D})$ is essentially $q^0-m_{\bar D}=E$, the
non-relativistic energy of the ${\bar D}$ (and so almost a straight
line). Therefore, the difference $E-{\rm Re}\, V_{\rm opt}(E)$
corresponds to the kinetic energy of the non-relativistic problem. The
two lines $E$ and ${\rm Re}\,V_{\rm opt}(E)$ cross at the classical
turning point. Roughly speaking, there should be no imaginary part
in the classically forbidden region $E<{\rm Re}\, V_{\rm opt}(E)$, as
there is no available phase space for decay (i.e., no kinetic energy
to expend). Also, the bound states should appear predominantly for
energies fulfilling the condition $E>{\rm Re}\, V_{\rm opt}(E)$, since
the expectation value of the kinetic energy in the bound state cannot
be negative. Of course, these arguments are only qualitative because
the optical potential is complex and strongly energy dependent. This
allows the ${\bar D}$ in the medium to display some non-intuitive
behavior. For instance, a ${\bar D}$ with energy $E$ can eject a
particle-hole going to a lower energy $E^\prime$, and yet end up with
more kinetic energy to expend, provided $E-E^\prime < {\rm Re}\,
V_{\rm opt}(E) - {\rm Re}\, V_{\rm opt}(E^\prime)$.
Due to the ${\bar D} N$ bound state close to threshold, $X(2805)$, the low
density approximation $ T \rho$ breaks down very early. For a given
value of the energy the density dependence of the optical potential is
far from linear. For subsequent use, we have computed the optical
potential for several densities (those in Fig.~\ref{fig:self}) and a
fine lattice of energies, and have used an interpolation procedure for
other values of density and energy. The presence of the bound
state/resonance prevented the self-consistent procedure from being
continued for densities below $0.1\,\rho_0$.
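The two-variable interpolation just described can be sketched with a generic bilinear rule; the grids and table entries below are placeholders for the computed $V_{\rm opt}$ values:

```python
import numpy as np

def interp2(rho_grid, e_grid, table, rho, e):
    """Bilinear interpolation of a tabulated complex optical
    potential V_opt(rho, E) between computed grid points."""
    i = int(np.clip(np.searchsorted(rho_grid, rho) - 1, 0, len(rho_grid) - 2))
    j = int(np.clip(np.searchsorted(e_grid, e) - 1, 0, len(e_grid) - 2))
    t = (rho - rho_grid[i]) / (rho_grid[i + 1] - rho_grid[i])
    u = (e - e_grid[j]) / (e_grid[j + 1] - e_grid[j])
    return ((1 - t) * (1 - u) * table[i, j] + t * (1 - u) * table[i + 1, j]
            + (1 - t) * u * table[i, j + 1] + t * u * table[i + 1, j + 1])
```

In practice, a fine energy lattice is needed near the ${\bar D} N$ threshold, where the resonant structure makes the potential vary rapidly, whereas the density grid can remain coarse.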
\section{Results}
\label{sec:3}
\input{tables_coulomb.tex}
\input{tables_a18.tex}
\input{tables_d0.tex}
\input{tables_Cx.tex}
\input{tables_su4.tex}
We look first for $D^-$-nucleus bound states by solving the
Schr\"odinger equation:
\begin{equation}
\left[ -\frac{{\bm \nabla}^2}{2 m_{\rm red}} + V_{\rm{coul}}(r)
+ V_{\rm{opt}}(r) \right] \Psi \,
= (-B-i \Gamma /2) \Psi
.
\label{eq:SchE}
\end{equation}
In this equation, $B$ is the binding energy ($B>0$), $\Gamma$ the width
of the bound state and $m_{\rm red}$ is the ${\bar D}$-nucleus reduced
mass. $V_{\rm coul}(r)$ is the Coulomb potential of the $D^-$
including the nucleus finite size and the Uehling vacuum
polarization. $V_{\rm{opt}}(r)$ is the energy dependent optical
potential. Because the electromagnetic interaction is introduced by means of
the minimal coupling prescription (to be consistent with gauge invariance and
electric charge conservation), $V_{\rm coul}(r)$ must be introduced wherever
the energy is present. So the energy dependent optical potential of
Eq.~(\ref{eq:UdepE}) is applied with argument $q^0=m_{\bar D}-B-V_{\rm coul}(r)$.
The non-relativistic approximation is used since the ${\bar D}$-meson optical
potential is much smaller than its mass, and we expect the relativistic
corrections to be tiny and certainly smaller than the theoretical
uncertainties of the interaction. In the same approximation the denominator
$2q^0$ in Eq.~(\ref{eq:UdepE}) can also be set to $2m_{\bar D}$.
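The energy dependence makes Eq.~(\ref{eq:SchE}) a self-consistency problem, since the eigenvalue enters the argument of the potential. A minimal numerical sketch of such an iteration, with a toy complex energy-dependent well and $\hbar=1$ units (this is merely illustrative, not the production algorithm), is:

```python
import numpy as np

def bound_state(v_opt, m_red, r_max=20.0, n=300, e_start=-1.0, tol=1e-8):
    """Find a complex s-wave eigenvalue E = -B - i*Gamma/2 of
    -(1/(2 m_red)) u'' + V(r, E) u = E u on a radial grid, iterating
    in E because the optical potential is energy dependent (hbar = 1)."""
    h = r_max / n
    r = h * np.arange(1, n)            # interior points, u(0) = u(r_max) = 0
    off = -0.5 / (m_red * h * h)
    kin = (np.diag(np.full(n - 1, -2.0 * off))
           + np.diag(np.full(n - 2, off), 1)
           + np.diag(np.full(n - 2, off), -1))
    energy = complex(e_start)
    for _ in range(200):
        ham = kin + np.diag(v_opt(r, energy))
        new = min(np.linalg.eigvals(ham), key=lambda z: z.real)
        if abs(new - energy) < tol:
            break
        energy = new
    return -energy.real, -2.0 * energy.imag   # binding energy B, width Gamma

# toy complex well whose depth depends (weakly) on the energy argument
def toy_potential(r, energy):
    depth = 3.0 / (1.0 + 0.05 * abs(energy))
    return (-depth - 0.2j) * np.exp(-r)

B, Gamma = bound_state(toy_potential, m_red=1.0)
```

In the actual calculation the role of the toy well is played by the interpolated $V_{\rm opt}$ of Eq.~(\ref{eq:UdepE}), evaluated at $q^0=m_{\bar D}-B-V_{\rm coul}(r)$, and the radial equation is solved for each partial wave.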
We solve the Schr\"odinger equation in coordinate space by using a
numerical algorithm~\cite{Oset:1985tb,GarciaRecio:1989xa}, which has
been extensively tested in similar problems of
pionic~\cite{Nieves:1993ev,GarciaRecio:1991wk} and
anti-kaonic~\cite{Baca:2000ic,Yamagata:2005ic} atomic states, and in the search
for possible anti-kaon~\cite{Baca:2000ic}, $\eta$~\cite{GarciaRecio:2002cu},
$\phi$~\cite{YamagataSekihara:2010rb}, and $D^0$~\cite{GarciaRecio:2010vt}
nuclear bound states. Charge densities are taken from
Refs.~\cite{DeJager:1974dg,DeJager:1987qc}. For each nucleus, we take the neutron matter
density approximately equal to the charge one, though we consider
small changes, inspired by Hartree-Fock calculations with the
density-matrix expansion~\cite{Negele:1975zz} and corroborated by pionic atom
analysis~\cite{GarciaRecio:1991wk}. All the densities used throughout
this work can be found in Table~1 of Ref.~\cite{Baca:2000ic}. The correction
in the nuclear density to remove the finite size of the nucleon is
introduced following the scheme of
Refs.~\cite{Salcedo:1987md}\footnote{$\pi R^2$ in Eq.~(6.13) of
\cite{Salcedo:1987md} should be corrected to $\pi^2 R$.} and
\cite{GarciaRecio:1991wk}. We have also taken into account that in nuclei
a finite energy, of the order of a few MeV, is needed to extract a
nucleon. However, in nuclear matter this is not the case and
particle-hole excitations can be produced at zero energy transfer. To
improve on this deficiency, we have included in our calculation an
average energy-gap in the nucleon spectrum. It is used to
shift the imaginary part of the optical potential, thereby reducing the
available phase space for extracting a nucleon from the Fermi sea.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.\textwidth,angle=0]{todos1.eps}
\caption {\small $D^-$ atom levels for different nuclei and angular
momenta. ``$\odot$'' points stand for pure Coulomb potential binding
energies (Table \ref{tab:coul}), while ``$\times$'' symbols stand
for the binding energies and widths of atomic levels predicted by
the SU(8) model derived in this work (see Fig.~\ref{fig:self}), with
$\alpha=1$ and gap $8\,{\rm MeV}$ (Table \ref{tab:a18}). The results are
scaled down by a factor $Z^{5/4}$.}
\label{fig:levels1}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.\textwidth,angle=0]{todos2.eps}
\caption {\small Same as in Fig.~\ref{fig:levels1}, but including
states of nuclear type (pentagons) as well. In this case no scale
factor has been applied.}
\label{fig:levels2}
\end{center}
\end{figure}
In Tables~\ref{tab:coul}--\ref{tab:su4}, we present results for
$^{12}$C, $^{40}$Ca, $^{118}$Sn and $^{208}$Pb and for several
interactions. We have considered:
\begin{itemize}
\item[ $i)$] only Coulomb interaction, neglecting totally the nuclear
optical potential (Table \ref{tab:coul}).
\item[ $ii)$] Coulomb interaction plus the SU(8) optical potential of
Fig.~\ref{fig:self}, with $\alpha=1$ ($\alpha$ is defined in
Eq.~(\ref{eq:sp})) and a gap in the nucleon spectrum of $8\,{\rm MeV}$
(Table \ref{tab:a18}).
\item[ $iii)$] only the SU(8) optical potential with $\alpha=1$ and a
gap of 8 MeV, thus neglecting in this case the Coulomb interaction
(Table \ref{tab:nocoul}). This applies to ${\bar D}^0$-nucleus states.
\item[ $iv)$] Coulomb interaction plus the SU(8) optical
potential, but with $\alpha=1.2$ or without a nucleon gap (Table
\ref{tab:Cx} with results only for $^{12}$C).
\item[ $v)$] Coulomb interaction plus the SU(4) optical potential of
Ref.~\cite{Tolos:2007vh}, where the ${\bar D}^* N$
coupled-channel effects are ignored (Table
\ref{tab:su4}).
\end{itemize}
The calculation that we deem most realistic for $D^-$ states is that
obtained by using the SU(8) model with $\alpha=1$ and with a nucleon
extraction energy (gap) of $8\,{\rm MeV}$. The predicted spectrum of
low-lying states is given in Table \ref{tab:a18} and displayed in
Figs.~\ref{fig:levels1} and ~\ref{fig:levels2}. In these figures, the
pure Coulomb levels are also shown for comparison. A salient feature
of the spectrum is the presence of two types of states: atomic and
nuclear ones.
The states of atomic type follow from distortion of the pure
Coulomb levels; they have moderate widths and exist for all
angular momenta. For these states, the nuclear interaction is a
perturbation and their wave functions have support mainly outside of
the nucleus. As compared to the Coulombian levels, the states of
atomic type are shifted upwards, i.e., they are less bound. So
effectively, they feel a repulsive interaction. The atomic states are
only sensible to the region of small densities and small energies, and
in this region the potential can be repulsive. (To interpret correctly
the optical potential profile in Fig.~\ref{fig:self}, it should be
taken into account that, by minimal coupling, the energy argument of
the optical potential is not $q^0$ but $q^0$ increased by the local
Coulomb potential.) Part of the repulsion comes also from the
imaginary part of the optical potential, a well known effect in exotic
atoms \cite{Krell:1971mx}. In addition, the existence of states of
nuclear type should tend to push upwards the atomic states. Yet, for
heavier nuclei, some spurious repulsion could be introduced by our
simplifying approximation of using symmetric nuclear matter in the
calculation of the optical potential\footnote{Judging from the
in-vacuum ${\bar D} N$ $T$-matrix, since
$(T^{(I=1,J=1/2)}-T^{(I=0,J=1/2)})(\rho_n-\rho_p)$ is negative near
threshold, the asymmetry effect is expected to be attractive for
heavier nuclei, which are richer in neutrons than in protons.}. As
expected, strong interaction shifts and widths become much larger for
low angular momenta and heavier nuclei. Roughly, the nuclear
interaction turns out to be significant for $L\le 1,2,5$, and $6$ for
$^{12}$C, $^{40}$Ca, $^{118}$Sn and $^{208}$Pb,
respectively. Likewise, the (strong) widths and shifts are larger for
states with lower quantum numbers due to a greater overlap of the wave
function with the nucleus. (Of course, the electromagnetic width, not
included, increases with the quantum number instead.)
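The minimal-coupling prescription invoked in the parenthetical remark above can be written schematically as follows (the self-energy symbol $\Pi$ and the local Coulomb potential $V_C(r)$ are our notation, introduced here only for illustration):

```latex
% Schematic form of the minimal-coupling shift of the energy argument of
% the optical potential: for the attractive D^- Coulomb potential,
% V_C(r) < 0, the effective energy argument q^0 - V_C(r) is larger than
% q^0, so the potential is probed at somewhat higher energies.
\begin{equation}
\Pi\bigl(q^0,\vec{q}\,;\rho(r)\bigr)\;\longrightarrow\;
\Pi\bigl(q^0-V_C(r),\vec{q}\,;\rho(r)\bigr),
\qquad V_C(r)<0 \ \ \text{for the } D^- .
\end{equation}
```

This is why the atomic states, despite their small binding energies, do not simply probe the optical potential at threshold.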
On the other hand, the spectrum of the states of nuclear type lies below, and
well separated from, that of the atomic states and also from the Coulombian
levels. The gap between nuclear and atomic states ranges from 15 to $20\,{\rm MeV}$
for all nuclei, whereas the gap with the Coulomb states decreases with the
nuclear size. The nuclear states have widths ranging from a few keV to several
MeV and considerable binding energies of tens of MeV, pointing to a
sizable overlap of their wave function with the nucleus. The states of nuclear
type exist only for the lower angular momenta and there is only a finite
number of them, which increases with the nuclear mass. We should note that,
since the optical potential is complex and energy dependent, the usual theorems
for the classification of states by nodes do not apply, and so it is much harder to
guarantee that all levels in a given region of the complex energy plane have
been found.\footnote{We do not include some very wide states that overlap with
the continuum. The fact that these states are very rare makes our
approximation of using only the real part of the energy as argument of the
${\bar D}$ optical potential in the Schr\"odinger equation more appropriate.} An
interesting feature of the nuclear states is that their widths decrease as the
binding energies increase. (This is opposite to what happens to atomic states
regarding their strong width.) The profile of widths as a function of the
energy of the states just mimics the profile of the imaginary part of the
optical potential (see Fig.~\ref{fig:self}). The lowest states have small
widths as they fall in the tail of the imaginary part of the optical
potential. The low lying states are already inside the nucleus, so the overlap
does not increase by lowering the energy, and instead they have less
phase-space available for decay. This also explains why the widths of the
nuclear states decrease with the size of the nucleus: for larger nuclei the
ground state tends to be closer to the bottom of the potential, and hence the
available kinetic energy to knock out nucleons decreases.
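At the perturbative level, the pattern just described can be made quantitative with the standard estimate of the width of a quasi-bound state in a complex potential (a textbook relation, written here in our own notation):

```latex
% Perturbative width of a state with wave function psi in a complex
% optical potential: the width tracks the profile of Im V_opt at the
% energy E of the state, weighted by the overlap with the nucleus.
\begin{equation}
\Gamma \;\simeq\; -\,2\int d^3r\;\bigl|\psi(\vec{r}\,)\bigr|^{2}\,
\mathrm{Im}\,V_{\rm opt}\bigl(E,\rho(r)\bigr).
\end{equation}
```

Once a state lies well inside the nucleus the overlap factor saturates, and $\Gamma$ is controlled by $\mathrm{Im}\,V_{\rm opt}$ at the energy of the state, which explains why the deepest levels are the narrowest.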
In Table \ref{tab:a18} we also quote results from Ref.~\cite{Tsushima:1998ru}
obtained in $^{208}$Pb within a quark-meson coupling model
\cite{Guichon:1987jp}. Widths were disregarded in \cite{Tsushima:1998ru}. The
numbers quoted for their model $\tilde{V}^q_\omega$ turn out to be not
very different from ours for atomic states. Besides, one would be
tempted to say that the $1s$, $2s$ and $1p$ levels of the model
$V^q_\omega$ of Ref.~\cite{Tsushima:1998ru} match our $3s$, $4s$, and $3p$
levels of nuclear type.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.\textwidth,angle=0]{todosD0b.eps}
\caption {\small ${\bar D}^0$ nuclear levels, for different nuclei and
angular momenta predicted by the SU(8) model derived in this work
(see Fig.~\ref{fig:self}), with $\alpha=1$ and gap $8\,{\rm MeV}$ (Table
\ref{tab:nocoul}).}
\label{fig:levels3}
\end{center}
\end{figure}
In Table \ref{tab:nocoul} and Fig.~\ref{fig:levels3} we show the spectrum of
${\bar D}^0$-nucleus bound states. This spectrum approximately matches that of the
$D^-$-states of the nuclear type. The $1p$ levels of the
two heavier nuclei are missing in the ${\bar D}^0$ spectrum. The most likely
scenario is that those states exist but we have been unable to pin them down
due to numerical instabilities. The ${\bar D}^0$ and $D^-$ binding energies show a
systematic difference, which can be traced back to the missing Coulomb
attraction in the ${\bar D}^0$ case. The widths are also comparable but
systematically larger in the charged case. This can be easily understood
since, in the presence of the Coulomb attraction, the binding energies are larger,
which forces the $D^-$ states to explore higher nuclear
densities. In the same table we also compare with the $V^q_\omega$ model
predictions of Ref.~\cite{Tsushima:1998ru} for lead. (The model
$\tilde{V}^q_\omega$ does not produce bound states.) We find an excellent
agreement with these results for the $1s$ and $2s$ levels.
Next, we try to better understand some systematic effects that affect
our predictions. First, we have considered the dependence of our
results on the choice of the subtraction point used to renormalize the
ultraviolet divergent loop functions. Thus, we have re-calculated
binding energies and widths, of both atomic and nuclear levels,
with $\alpha=1.2$. These new results are collected in
Table~\ref{tab:Cx} for carbon and can be compared with those of
Table~\ref{tab:a18} obtained with $\alpha=1.0$. From the behaviour
exhibited in Fig.~\ref{fig:self}, one might expect moderate changes
that would lead, in general, to smaller widths and binding energies
when $\alpha$ is set to 1.2. However, the computed $\alpha=1.2$
levels (see Table~\ref{tab:Cx}) do not always follow this pattern. The
observed deviations are probably induced by the
strong energy dependence of the optical potential. Though changes are
very small for atomic states with $L>0$, they become much larger for
the $L=0$ levels and also for the spectrum of nuclear states. A similar
calculation for
heavier nuclei shows that the effect is smaller, but the number of nuclear states
changes occasionally. This might be a true qualitative change in the
spectrum induced by the differences in the potentials. However, it
could be just due to the fact that some nuclear states have been
missed in the difficult search throughout the complex
plane. For the ${\bar D}^0$ levels, the effect of changing $\alpha$ from
1 to 1.2 is entirely analogous to that already noted for the states of
nuclear type in the charged case, that is, less binding and smaller
widths, and occasionally, a change in the number of levels. In the
case of the $(C=+1, S=0)$ sector, the subtraction point $\alpha$ could
be fixed in the free space by tuning the pole positions of the
three-star $\Lambda_c (2595)$ and $\Lambda_c (2625)$ resonances (see for
instance \cite{GarciaRecio:2008dp}). Thus, in contrast with the situation here,
our previous study~\cite{GarciaRecio:2010vt} on the possible existence of
$D^0$-nuclear bound states was free, to a large extent, of this
source of theoretical uncertainties. The lack of experimental
information on the $C=-1$ sector, however, prevents us from fixing more
precisely the subtraction point used in the renormalization scheme
proposed in \cite{Hofmann:2005sw} and employed in the free-space
calculation of Ref.~\cite{Gamermann:2010zz}, on which the results of this
work are based\footnote{For instance, note that for $\alpha=1$, there
exists a prominent delta-like structure in the in-medium amplitudes,
at very low densities, due to the $X(2805)$ exotic bound
state. However, it is clearly smeared out when $\alpha$ is set to 1.2,
since the $X(2805)$ baryon pentaquark is no longer bound for this
value of $\alpha$, and it becomes then a more or less narrow
resonance.}. Thus, we should take the differences between the results
displayed in Tables~\ref{tab:a18} and \ref{tab:Cx} as a hint of the nature
of the theoretical uncertainties that affect our results. Other
sources of uncertainties will be discussed below, but among all of
them, those related to the choice of the subtraction point are
certainly the largest ones.
For instance, in the calculation we have also included in an
approximate way the effect of the existence of a non-zero gap in the
nucleon spectrum, separating nucleons in the Fermi sea from the free
ones. In Table~\ref{tab:Cx} we show the results without gap for
$^{12}$C. The gap reduces the widths of the states of nuclear type,
while their binding energies are not much affected. The changes in
atomic states are also small. With regard to the
SU(4) model, which ignores ${\bar D}^* N$ coupled channel effects, we
observe a repulsive interaction for the ${\bar D}$ in nuclear matter. As a
consequence the corresponding optical potential is repulsive and
purely real in the region of interest. This model predicts $D^-$ atoms
stable under strong interactions with levels of atomic type uniformly
moved upwards in energy as compared to the pure Coulombian prediction
(see Table~\ref{tab:su4}). The repulsion is smaller than for SU(8),
presumably due to the lack of imaginary part. However, we believe the
results of Table~\ref{tab:a18} are more realistic than any of those
commented on above, because neither neglecting the finite nucleon
extraction energy nor ignoring the HQSS constraints/requirements is a
physically acceptable approximation.
\section{$D^-$ atom decay modes}
\label{sec:4}
As noted in the Introduction, $D^-$ atoms\footnote{Throughout this discussion,
``$D^-$ atoms'' refer to all ${\bar D}$-nucleus bound states, whether they are of
atomic or of nuclear type.} stand out among other exotic atoms. This is also
true regarding their decay modes. Two mechanisms are available for decay,
namely, particle-hole production, ${\bar D} \to {\bar D} N N^{-1}$, and pentaquark
production, ${\bar D} \to X(2805) N^{-1}$.
Let us disregard pentaquark production momentarily. A particle-hole
production mechanism is of course present in other exotic
atoms. However, in other atoms this is not the dominant decay mode: in
pionic atoms the non-electromagnetic width comes from absorption of
the pion by two or more nucleons. In $\eta$-nucleus systems the $\eta$
carries no charge and can be easily absorbed into particle-hole
excitations, or else it can be traded for the much lighter pion. In
$\bar{K}$-atoms the $K^-$ carries strangeness and so it cannot just be
absorbed into particle-hole excitations, but the $s$ quark can be
passed to a baryon. There is energy available for processes with
mesons in the final state, $\bar{K} \to \pi\Lambda N^{-1}$ and
$\pi\Sigma N^{-1}$, or without them, $\bar{K} \to N \Lambda
N^{-1}N^{-1}$ and $N\Sigma N^{-1}N^{-1}$ \cite{Ramos:1999ku}. Likewise, in
$D^0$-nucleus systems mesonic mechanisms, $D^0\to \pi\Lambda_c N^{-1}$
and $D^0\to \pi\Sigma_c N^{-1}$, and non mesonic ones, $D^0\to
N\Lambda_c N^{-1}N^{-1}$ and $D^0\to N\Sigma_c N^{-1}N^{-1}$, are
energetically allowed. In those decay modes, the $c$ quark is
transferred to a baryon.
In this regard, the situation of the $D^-$ atoms is qualitatively different
from all the previous ones. Of course, the $D^-$ cannot just be absorbed into
particle-hole excitations, as was the case of pion or $\eta$. Also, it cannot
combine with a nucleon to decay to a lighter meson-baryon channel, as happens
for $K^-$ or $D^0$, because baryons cannot carry the negative charm of the
${\bar D}$ and there are no lighter charmed mesons. Put in another way, clusters
like ${\bar D} N$ or ${\bar D} NN$ are stable under strong decay as there are no lighter
hadronic states with same charm and baryonic quantum numbers. (Recall that the
possibility of pentaquark formation is not being considered yet.)
These remarks would also apply to a hypothetical $K$-nucleus bound
state. However, such a system does not exist since the $K N$ interaction
is repulsive, as is the Coulomb interaction for the $K^+$. On the contrary, the
$D^-$ will necessarily form a bound state with the nucleus, if by no
other mechanism, through Coulomb interaction. (Even if the strong
interaction were repulsive it would vanish outside the nucleus and the
atom would be formed anyway.) So $D^-$ atoms are truly special: other
exotic atoms decay through {\em hadronic} mechanisms (to lighter
hadronic states) while $D^-$ atoms can only decay through {\em
many-body} mechanisms, e.g., ${\bar D} N \to {\bar D} N$, where the ${\bar D}$ falls
to a lower level transferring energy to the nucleus.
The fact that particle-hole production is the dominant mechanism for decay has
important consequences for $D^-$ atoms, both phenomenological and theoretical.
Consider for instance the ground state of a $K^-$ atom. Although it is the
lowest atomic state, nothing prevents it from decaying to lighter hadronic
states (transferring the $s$ quark to a baryon as discussed above). On the
other hand, for the {\em ground state} of a $D^-$ atom no such lighter
hadronic state exists, so one should expect no width in this case. Put in
another way, the ${\bar D}$ cannot go to a lower atomic state to be able to eject a
nucleon. To be precise, for the final state we should actually consider, not
the spectrum of the original atom but that of the daughter nucleus (with one
less nucleon). The difference between the two spectra is not expected to be
large, at least for not very small nuclei, and moreover, we expect the ground
state of the daughter-nucleus atom to be less bound, reinforcing the
argument. In this view, the fate of the $D^-$ atom would be to form a stable
${\bar D}$-nucleus bound state, which would eventually decay through weak
interaction.
From the theoretical side, the ground state argument just presented shows that
the widths, as predicted by a naive application of the ${\bar D}$ optical
potential, tend to be overestimated for low lying states. The LDA replaces the
true discrete spectrum of the ${\bar D}$ (in the daughter nucleus) by a continuum
of states starting from the bottom of the optical potential upwards. As the
energy of the ground state will be above that bottom, the LDA incorrectly
assigns available phase space for particle-hole decay. Of course, the same
mechanism of {\em spectral blocking} is present in the application of the LDA
to the study of other exotic atoms but this is not so crucial there because
the decay is dominated by other mechanisms which give sizable width even to
the ground state.
Another effect has to be considered as well, namely, the existence of a gap in
the spectrum of nucleons, separating nucleons in the Fermi sea from the free
ones (or from excited states, beyond the LDA). The {\em gap blocking} tends to
quench the widths from particle-hole production, as the nucleons need a
minimum energy to escape the nucleus, and so it helps to reduce or even remove
the width of low lying $D^-$ atom states. We have included such an effect in
an approximate way, just by reducing the energy argument in the imaginary
part of the optical potential by a constant amount of $8\,{\rm MeV}$.\footnote{A
more correct procedure would be to shift only the energy of the hole line in
Eqs.~(\ref{eq:selfd}) and (\ref{eq:selfds}), i.e., $E_N(\bm p)\to E_N(\bm
p)-E_{\rm gap}$, but this turns out to be technically involved due to the
presence of the $X(2805)$ state. This is a pole in the $T$-matrix which
turns from a bound state to a resonance as the nuclear density increases.}
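In formulas, the approximate implementation just described amounts to the replacement (with $\Pi$ denoting the ${\bar D}$ self-energy in our notation):

```latex
% Approximate gap blocking: only the imaginary part of the self-energy
% is evaluated at an energy lowered by the gap, removing the phase space
% for ejecting nucleons with less than E_gap of available energy.
\begin{equation}
\mathrm{Im}\,\Pi\bigl(q^0,\vec{q}\,;\rho\bigr)\;\longrightarrow\;
\mathrm{Im}\,\Pi\bigl(q^0-E_{\rm gap},\vec{q}\,;\rho\bigr),
\qquad E_{\rm gap}=8\,{\rm MeV},
\end{equation}
```

to be contrasted with the exact prescription $E_N(\bm p)\to E_N(\bm p)-E_{\rm gap}$ in the hole line of Eqs.~(\ref{eq:selfd}) and (\ref{eq:selfds}).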
Up to now we have disregarded the pentaquark production mechanism.
The $X(2805)$ in the vacuum scheme of Ref.~\cite{Gamermann:2010zz} is a bound
state of $N$ and ${\bar D}$. The binding energy is quite small, about
$1.4\,{\rm MeV}$, so this pole is very close to the ${\bar D} N$ threshold. A
key issue is whether this state remains bound or moves to a resonance
at finite nuclear density. When Pauli blocking is enforced, the
threshold moves upwards, favoring the bound state over the resonance.
The situation changes completely when the ${\bar D}$ optical potential is
also taken into account by means of the self-consistent procedure. In
the region of interest, the effect of the ${\bar D}$ optical potential
turns out to be attractive. This brings the threshold downwards. In
addition, the pole in medium moves to higher energies. The net result
is that, even for a density as small as $0.1\,\rho_0$ (the lowest
density accessible by our calculation), the pole lies above the
threshold and turns into a resonance. For this density and higher, the
pentaquark would decay into $N$ and ${\bar D}$, and this brings us back to
the particle-hole decay mechanism. Nevertheless, note that, by
continuity, there should be a critical density below which the pole is
below threshold, thus allowing the pentaquark production
mechanism. Hence this mechanism always has some contribution, however
small, to the width (even for the ground state). The situation may
also change due to gap blocking and spectral blocking, which tend to
push the threshold upwards, thereby favoring the pentaquark production
mechanism.
The presence of a pole in the $T$-matrix in the region of interest makes the
technical problem rather difficult. This has forced us to extrapolate the
optical potential from $\rho=0.1\,\rho_0$ to lower densities when needed,
without spectral blocking and with approximate gap blocking. This results in
treating the pole always as a resonance in our calculation. So, even for the
ground state, our calculation nominally attributes all widths to particle-hole
production. A more detailed treatment would provide pentaquark production as
well, below a certain critical density. In fact, for the ground state this
would be the only decay mode. Although nominally the widths we find come from
particle-hole, for low lying states there should be a genuine contribution
coming from the pole albeit distorted by the coupling to particle-hole. This
suggests that the width computed for the ground state is a rough estimate of
the true width from pentaquark production to be obtained in a more complete
treatment without spurious particle-hole decay in the ground state.
Phenomenologically, it is important to note that the in-medium $X(2805)$ state
is produced by a bound $N$ and a bound ${\bar D}$. Because the pentaquark formation
energy is so small, the kinetic energy released is also small and the
pentaquark remains bound in the nucleus after formation. This suggests that
after the electromagnetic cascade in the atomic levels and the subsequent
particle-hole emission cascade in the nuclear levels, the fate of the $D^-$
atom could be a pentaquark-nucleus bound state. This would be stable until
weak decay of the ${\bar D}$ meson. Of course this is a fascinating possibility both
theoretically and experimentally.
The approximate implementation of the gap blocking and the lack of spectral
blocking suggest that the actual widths will be somewhat smaller than those
obtained here. A side effect of a smaller imaginary part would be an effective
increase in the binding of the states.
\section{Summary and conclusions}
\label{sec:5}
A self-consistent calculation of the ${\bar D}$ self-energy has been
carried out in symmetric nuclear matter using unitarized
coupled-channel theory. The model is based on SU(8) spin-flavor
symmetry and enjoys heavy quark spin symmetry. Two renormalization
prescriptions have been used in order to estimate the systematic error
involved. We find that the presence of a bound state near the ${\bar D} N$
threshold makes the optical potential strongly energy and
density dependent. In contrast with SU(4)-based models, the optical
potential is mostly attractive (except for a repulsive region at low
densities near threshold, relevant for levels of atomic type) and
develops an imaginary part due to particle-hole production and possibly to
pentaquark production inside the nucleus. Unlike other hadronic atoms,
no other relevant decay mechanisms exist for the ${\bar D}$ in nuclear matter
around threshold.
Using the local density approximation, we have computed the levels and
widths for low lying states of several nuclei, light and heavy. The
results are summarized in Figs.~\ref{fig:levels1} and \ref{fig:levels2} for
$D^-$-atoms and in Fig.~\ref{fig:levels3} for ${\bar D}^0$-nucleus bound states.
The spectrum presents two types of states, atomic and
nuclear, for all studied nuclei\footnote{Of course, atomic like states
do not appear in the ${\bar D}^0$ case.}. The nuclear states
exist for the lower angular momenta only. As compared to the pure Coulomb
levels, the atomic states are less bound and the nuclear ones are more
bound and may present a sizable width.
A number of approximations have been necessary in an already highly
sophisticated calculation to render it feasible. Nevertheless, this is
the first systematic study of $D^-$ atoms that accounts properly for
HQSS and for the many-body mechanisms responsible for the widths of
the states. We can draw two general conclusions from the present
work. First, that in the study of nuclear systems involving charm, it
is important to use a model fulfilling the QCD requirement of heavy
quark spin symmetry. The vector-meson partner of the ${\bar D}$, the
${\bar D}^*$, has a similar mass and hence its inclusion substantially
modifies the ${\bar D} N$ dynamics, producing a non-trivial structure in its
$T$-matrix near threshold. And second, the $c$ antiquark in the ${\bar D}$
cannot be transferred to the baryons, and in particular, a ${\bar D} N$
pair has no open channels into which to decay. For this reason it has often been
assumed that the ${\bar D}$ would not interact much with the nucleus and
could be treated as a spectator. We find that this is not the case,
and in fact a rich spectrum is obtained with sizable shifts and widths
in the states.
The observation of the states predicted here might be feasible in the
PANDA and CBM experiments at the future FAIR facility at Darmstadt,
and it would certainly shed light on the fascinating ${\bar D} N$
dynamics, both in the free space and when the pair is embedded
in a dense nuclear medium.
\bigskip
{\bf Acknowledgments}
We warmly thank E. Oset for useful discussions.
This research was supported by DGI and FEDER funds, under contracts
FIS2008-01143/FIS, FPA2010-16963 and the Spanish Consolider-Ingenio 2010
Programme CPAN (CSD2007-00042), by Junta de Andaluc{\'\i}a grant
FQM-225, by Generalitat Valenciana contract PROMETEO/2009/0090, and it
is part of the European Community-Research Infrastructure Integrating
Activity ``Study of Strongly Interacting Matter'' (acronym
HadronPhysics2, Grant Agreement n. 227431), under the Seventh EU
Framework Programme. L.T. acknowledges support from Ministerio de
Ciencia e Innovaci\'on under contract FPA2010-16963 and Ramon y Cajal
Research Programme, and from FP7-PEOPLE-2011-CIG under contract
PCIG09-GA-2011-291679, and the Helmholtz International Center for FAIR within the framework of the LOEWE program by the State of Hesse (Germany).
\section{Introduction}
At high energy and large impact parameter, gravitational scattering is dominated by the exchange of the highest-spin massless states in the theory~\cite{tHooft:1987vrq, Amati:1987wq,Amati:1987uf,Muzinich:1987in,Sundborg:1988tb}. The focus of this work is on standard gravitational field theories in the two derivative approximation, so the highest spin particle is the graviton. The observable we study is the elastic $2 \to 2$ amplitude in the ultra-relativistic regime, where the centre of mass energy $E_\textrm{cm}=\sqrt{s}$ is much larger than any other energy scale in the process (which goes under the name of Regge limit). Notice that this implies that the impact parameter $b$, which is related by Fourier transform to the total momentum exchanged in the scattering, is much larger than the gravitational length scale\footnote{As usual, working in a $D$-dimensional theory is also convenient for regularising the IR divergences of the $D=4$ case.} $R^{D-3} \sim G_N\sqrt{s}$, defined in analogy with the Schwarzschild radius. Since the process is dominated by the exchange of gravitons, it is natural to expect a universal result for the high energy gravitational scattering. It is well known that this is the case at the first order in $R/b\ll 1$ when the result is captured by the leading eikonal phase $\delta_0$. It is then natural to ask whether such universality persists at higher orders in $R/b$, at least in the class of theories mentioned above.
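For later reference, the eikonal organization of the elastic amplitude can be sketched as follows (a schematic form; precise normalizations and the Fourier transform to impact-parameter space are spelled out in the references):

```latex
% Schematic eikonal resummation: in impact-parameter space the 2->2
% amplitude exponentiates, and in D=4 each successive eikonal correction
% delta_n (the (n+1)PM term) is suppressed by a further power of R/b.
\begin{equation}
\tilde{\mathcal{A}}(s,b)\;\propto\;\mathrm{e}^{2i\delta(s,b)}-1,
\qquad
\delta(s,b)=\delta_0+\delta_1+\delta_2+\cdots,
\qquad
\delta_n\sim\Bigl(\frac{R}{b}\Bigr)^{\!n}\,\delta_0 .
\end{equation}
```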
The first non-trivial contribution to the ultra-relativistic result appears at sub-sub-leading order in the small $R/b$ limit which captures the 3PM (post-Minkowskian) correction. This contribution can be encoded in terms of a correction $\delta_2$ to the full eikonal. A first novelty with respect to the leading result is that $\delta_2$ has both a real and an imaginary part, so ${\rm e}^{2i \delta}$ is not just a phase. The imaginary part is related to non-conservative processes, such as the bremsstrahlung emission of massless particles, while the real part captures the conservative dynamics. A first result for $\delta_2$ was obtained in~\cite{Amati:1990xe} for the scattering in pure general relativity (GR) of two massless scalars representing, at high energy, classical Aichelburg-Sexl
shock-waves. The analysis of~\cite{Bellini:1992eb} suggested that the same result for $\mathrm{Re}(\delta_2)$ should hold also for supergravity theories and this was explicitly verified in~\cite{DiVecchia:2019kta} for the case of the maximally supersymmetric theory. Explicit comparison between the GR and the ${\cal N}=8$ results shows that $\mathrm{Im}(\delta_2)$ is not universal (although a crucial $\log(s)$-enhanced term is), which is expected since the massless spectrum of the two theories is different. The universality of the 3PM conservative dynamics in the massless case has been confirmed and extended~\cite{Bern:2020gjj} to theories with different amounts of supersymmetry.
A similar, but slightly different setup is to consider the scattering of two scalar particles of masses $m_1$ and $m_2$. There has been considerable interest, recently, in the use of new scattering amplitude techniques for extracting information about the conservative scattering process in the case $s\sim m_i^2$~\cite{Cheung:2018wkq,Bjerrum-Bohr:2018xdl,Bern:2019nnu,KoemansCollado:2019ggb,Bern:2019crd,Bjerrum-Bohr:2019kec,Kalin:2019rwq,Kalin:2019inp,Cristofoli:2020uzm}: the aim is to pin down the relevant ingredients needed for describing the inspiral phase of black-hole mergers. The importance of such a result for computing the templates for actual gravitational-wave experiments has been stressed, in particular by Damour.
The first result at 3PM level was derived a little while ago in \cite{Bern:2019nnu,Bern:2019crd} and was later confirmed in \cite{Cheung:2020gyp,Kalin:2020fhe}, while the case of maximally supersymmetric gravity was obtained recently in~\cite{Parra-Martinez:2020dzs}. The ultra-relativistic limit of these massive results is qualitatively different from the massless case mentioned above, as the leading term of $\mathrm{Re}(\delta_2)$ has a $\log (s/(m_1 m_2))$ enhancement with respect to the one obtained in~\cite{Amati:1990xe}.
From the amplitude perspective the origin of this $\log (s/(m_1 m_2))$-enhanced contribution lies in a particular scalar integral which appears in exactly the same way in both the ${\cal N}=0$ and the ${\cal N}=8$ cases, hence the proposal of~\cite{Parra-Martinez:2020dzs} that there is a separate universality class for the logarithmically-enhanced term of the massive scattering which is different from the massless case where there is no $\log(s)$-enhancement.
This unexpected result of \cite{Bern:2019nnu,Bern:2019crd} has provoked quite a lot of discussion since, taken at face value, it would lead to a divergent deflection angle in either the massless or the ultra-relativistic limit. Since gravity is known \cite{Weinberg:1965nx} to be free of mass/collinear divergences the authors of \cite{Bern:2019nnu,Bern:2019crd}
have immediately pointed out that their result only holds for sufficiently small values of the ratio $\frac{q}{m}$ where $q \sim \frac{\hbar}{b}$ is the momentum transfer in the perturbative amplitude.
They have also given \cite{Bern:2019crd} a one-loop example (involving a non-classical term) of how the $m \to 0$ and the $q \to 0$ limits can be quite different.
On the other hand, the above-mentioned divergence is also present at finite $m$ for $s \to \infty$ and would persist even if, for $m > q$, one replaced the $\log (s/(m_1 m_2))$ by a $\log (s/q^2)$. There is also some tension with expectations based on the ``self-force'' approach to PM dynamics. On these different grounds Damour suggested that the $\log (s/(m_1 m_2))$ enhancement of $\mathrm{Re}(\delta_2)$ cannot be present in the ultra-relativistic limit. He proposed in~\cite{Damour:2019lcq} a modification of the result of \cite{Bern:2019nnu,Bern:2019crd} with a smooth massless limit that however is different (even in sign!) from the one of~\cite{Amati:1990xe}. Finally, a check proposed in \cite{Damour:2019lcq} for distinguishing the two alternatives at 6PN order has contradicted his original proposal while it is consistent with \cite{Bern:2019nnu,Bern:2019crd}. Other alternatives, still at variance with \cite{Amati:1990xe}, have been proposed in a later version of \cite{Damour:2019lcq}.
As mentioned above, in this short note we will reassess the ultra-relativistic limit $s\gg m^2_i$ and ask whether one recovers the massless shock-wave result for the real part of the eikonal $\delta$ at 3PM level.
We will address this issue by using two complementary techniques. The first approach follows the argument of~\cite{Amati:1990xe} where the 3PM result was derived by exploiting the analyticity and crossing properties of the scattering amplitudes among scalars. These properties imply a dispersion relation connecting the leading energy behaviour of the imaginary and real parts of the $2\to 2$ amplitude. Then, in this limit, the conservative 3PM dynamics can be derived from a particular unitarity cut of the two-loop amplitude, thus replacing the full-fledged 2-loop calculation by a phase-space integral of a product of two tree-level amplitudes. As an aside, notice that this provides an explicit check that the 3PM conservative dynamics is entirely determined by classical on-shell data. The second approach follows the direct loop amplitude derivation of~\cite{Bern:2019nnu,Bern:2019crd,Parra-Martinez:2020dzs}, where we just reconsider the calculation of the integrals without relying on the ``potential'' approximation.
We use the eikonal approach to extract the classical contribution, instead of the effective field theory comparison of~\cite{Cheung:2018wkq}, and we work with the full result of the integrals in the ``soft region'' (i.e. in the limit of small momentum transfer). Both the approach based on analyticity/crossing and the one using the explicit loop amplitudes yield the same $\mathrm{Re}(\delta_2)$ which also agrees with the massless result of~\cite{Amati:1990xe,DiVecchia:2019kta,Bern:2020gjj}, thus suggesting that the origin of the different ultra-relativistic behaviour of~\cite{Bern:2019nnu,Bern:2019crd,Cheung:2020gyp,Parra-Martinez:2020dzs,Kalin:2020fhe} lies in the use of the ``potential'' approximation in evaluating the loop integrals. When including the full contribution of the soft region, the ultra-relativistic limit at 3PM order is universal regardless of the mass of the external states. We conjecture that this is actually true to all orders in the PM expansion, since the relevant contributions are described by diagrams where the external states emit only gravitons and the two highly boosted lines representing them are connected through a tree-level GR amplitude~\cite{Amati:2007ak}. Both ingredients are universal in the class of theories we focus on.
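The logic connecting the imaginary and real parts can be illustrated with a heuristic model (our own schematic notation, not the precise dispersion relation used above): for a crossing-symmetric amplitude the high-energy logarithms enter through the combination $L=\log s - i\pi/2$, so writing

```latex
% Toy model of the analyticity/crossing constraint: an expansion of the
% two-loop eikonal in L = log(s) - i*pi/2 with real coefficients a_i ties
% the log-enhancement of Re(delta_2) to that of Im(delta_2).
\begin{equation}
2\delta_2 \;\sim\; i\bigl(a_2 L^2 + a_1 L + a_0\bigr),
\qquad L=\log s-\frac{i\pi}{2}, \qquad a_i\in\mathbb{R},
\end{equation}
% one finds
\begin{equation}
\mathrm{Im}(2\delta_2)\sim a_2\log^2 s + a_1\log s+\cdots,
\qquad
\mathrm{Re}(2\delta_2)\sim \pi a_2\log s + \frac{\pi}{2}\,a_1 .
\end{equation}
```

An un-enhanced $\mathrm{Re}(\delta_2)$ thus requires $a_2=0$, i.e.\ an $\mathrm{Im}(\delta_2)$ growing with a single power of $\log s$.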
The paper is organized as follows. In Section~\ref{analyticity} we revisit the analyticity/crossing of~\cite{Amati:1990xe} deriving on general grounds a relation between the ultra-relativistic limit of $\mathrm{Re}(\delta_2)$ and the leading high energy behaviour of the imaginary part of the 2-loop amplitude. The analysis holds for arbitrary values of the masses as long as $s\gg m_i^2$ and also shows that the ultra-relativistic limit of $\mathrm{Re}(\delta_2)$ can be universal only if $\mathrm{Im}(\delta_2)$ is enhanced by a single power of $\log(s)$.
In Section~\ref{highenergy}, we again follow~\cite{Amati:1990xe} and evaluate the $2\to 3$ tree-level amplitude relevant for the 3-particle cut determining the imaginary part of the 2-loop amplitude. Since we are interested in the ultra-relativistic case, we focus on a double Regge limit of the tree-level amplitude and show that, in this regime, it is universal and equal to the massless result of~\cite{Amati:1990xe,Ademollo:1990sd}. This implies that the ultra-relativistic limit of $\mathrm{Im}(\delta_2)$ is enhanced by a single factor of $\log(s)$ and that the ultra-relativistic limit of $\mathrm{Re}(\delta_2)$ is also universal and has no enhancement at all. In Section~\ref{direct} we follow the approach of~\cite{Bern:2019nnu,Bern:2019crd} and in particular focus on the massive ${\cal N}=8$ case studied in~\cite{Parra-Martinez:2020dzs}. We provide the results for the integrals necessary to calculate the 3PM massive eikonal in ${\cal N}=8$ supergravity in the ultra-relativistic limit. As mentioned, we consider the full ``soft'' region result and show that the universal ultra-relativistic result for $\mathrm{Re}(\delta_2)$ of Section~\ref{highenergy} is recovered. We also provide the result for $\mathrm{Re}(\delta_2)$ and the 3PM deflection angle for generic values of $s/m_i^2$, so as to facilitate the comparison with~\cite{Parra-Martinez:2020dzs}, although we leave the discussion of the derivation to another work \cite{toap}. In Appendix~\ref{Appa} we provide some further details on the consequences of analyticity and crossing used in the main text. In Appendix~\ref{Appb} we collect all the results we need for the scalar integrals in the soft region.
\section{The analyticity/crossing argument revisited}
\label{analyticity}
In this Section we review, improve and extend the arguments given in \cite{Amati:1990xe} for connecting the leading high-energy expression of $\mathrm{Re}(\delta_2)$ to the inelastic contribution to the imaginary part of the two-loop scattering amplitude. The latter is the convolution of two on-shell tree amplitudes and is thus an easier object to deal with.
For definiteness we will consider the case of the elastic gravitational scattering of two non-identical scalar particles of mass $m_1, m_2$. Crossing symmetry under the exchange of the two Mandelstam variables $s$ and $u$ simplifies considerably our analysis although we believe that the final results generalize to elastic processes that lack exact $s \leftrightarrow u$ symmetry.
The essential ingredients of the argument presented in~\cite{Amati:1990xe} are:
\begin{enumerate}
\item[i)] real-analyticity of the scattering amplitude $A(s^*, t) = A^*(s, t)$ as a function of the complex variable $s$ at $t \le 0$, where $-t=q^2$ is the exchanged momentum (squared);
\item[ii)] its $s \leftrightarrow u $ crossing symmetry $A(s,t) = A(u, t)$ with $u = - s -t + 2 (m_1^2 + m_2^2)$;
\item[iii)] some information about its high-energy asymptotic behavior at each loop order, a property that fixes the number of subtractions needed in order to write a convergent dispersion relation for $A$, see Appendix A.
\end{enumerate}

We will also make use of the exponentiation in impact-parameter space needed for recovering a classical limit.
We will be helped by an amusing mathematical analogy with high-energy hadronic (QCD) scattering whereby the elastic hadronic amplitude $A^{\rm Had}(s,t)$ is believed to behave, at high energy, as $s \log^p s$ with some (not necessarily integer) $p$. An important quantity discussed in that context \cite{Bronzan:1974jh}, \cite{Kang:1974gt} (see \cite{Fagundes:2017iwb} for a recent review) is the ratio $\frac{\mathrm{Re} A^{\rm Had}(s,0)}{\mathrm{Im} A^{\rm Had}(s,0)}$. Such a ratio (extended to the case of non-vanishing $t$) plays an important role in \cite{Amati:1990xe} as we shall see hereafter. Note that constraints based on analyticity and crossing, being linear, apply at each loop order, at each order in $\epsilon$, and also separately to different terms in the high-energy expansion (like in the hadronic case where the Pomeron may be accompanied by subleading Regge pole contributions).
Since it is not easy to find a self-contained account of this methodology in the literature, and we are not constrained here by the Froissart bound, we will sketch for completeness the basic argument in Appendix A and refer to the above literature for further details.
We start by recalling the expression for the high energy tree-level amplitude $A_0(s,t)$ and the corresponding eikonal phase $2 \delta_0$ for generic $D = 4- 2 \epsilon$ keeping only the first relevant terms in an expansion in inverse powers of $s$. One finds \cite{Kabat:1992tb,KoemansCollado:2019ggb}:
\begin{equation}
A_0(s,t) = - \frac{8 \pi G_N s^2 (1- \frac{\Sigma}{s}+ {\cal O}(s^{-2}))}{t} +\text{analytic terms in }t \, ,
\label{A0}
\end{equation}
where $\Sigma \equiv 2(m_1^2 + m_2^2)$ and correspondingly\footnote{The kinematics is discussed in more detail before~\eqref{PHI10}. In our conventions the eikonal phase is $2 \delta$ and is given by the Fourier transform of $\frac{1}{4 E_\textrm{cm} P } A(s,t)$, where $2E_\textrm{cm} P = 2\sqrt{(p_1 p_2)^2-m_1^2 m_2^2}= s(1 - \frac{\Sigma}{2s} + {\cal O}(s^{-2}))$ is twice the product of the center-of-mass energy and momentum. Following \cite{KoemansCollado:2019ggb}, we denote by $\tilde{M}(s,b)$ the Fourier transform of $\frac{1}{4 E_\textrm{cm} P} M(s,t)$ for a generic function $M(s,t)$.}:
\begin{equation}
\tilde{A}_0 \equiv \int \frac{d^{D-2} \vec{q}}{(2\pi)^{D-2}} \frac{A_0(s,t=-\vec{q}^{\;2})}{4\sqrt{(p_1 p_2)^2-m_1^2 m_2^2}} = G_N s \left(1- \frac{\Sigma}{2s}+ {\cal O}(s^{-2})\right) \frac{\Gamma(- \epsilon)}{(\pi b^2)^{-\epsilon}}\, .
\label{delta0}
\end{equation}
The leading energy behaviour of $\tilde{A}_0$ determines the leading eikonal
\begin{equation}
\label{eq:d0def}
2 \delta_0 = G_N s \frac{\Gamma(- \epsilon)}{(\pi b^2)^{-\epsilon}}\, .
\end{equation}
In the Regge limit the full amplitude in impact-parameter space is encoded in the exponentiated expression\footnote{Since we will focus on a specific $s-u$-symmetric amplitude, here we follow slightly different conventions from Eq.~(2.15) of~\cite{DiVecchia:2019kta}, where the tree-level structure ${\hat{A}}^{(0)}$ was factorised.}
\begin{equation}
\label{eq:25}
i\tilde{A} = \left(1+2 i \Delta(s,b) \right) {\rm e}^{2 i \delta} - 1\;,
\end{equation}
where $\delta=\delta_0 +\delta_1+\delta_2+ \cdots$ is the classical eikonal and $\Delta$ encodes the quantum corrections. At one loop, then, we know that $A$ must include a leading imaginary contribution, growing like $s^3$ (without extra logs), responsible for the start of the exponentiation of $2 \delta_0$. This indeed comes out of the box plus crossed-box contribution in the form \cite{KoemansCollado:2019ggb}:
\begin{equation}
\mathrm{Im} A_1^{(1)}(s,t) = s^3 \left(1- \frac{3 \Sigma}{2s}+ {\cal O}\left(s^{-2}\right)\right)F_1(t, m_i^2)\,;~ {\rm with} ~~ \tilde{F}_1
= \frac12 (2 \delta_0)^2 \, ,
\label{delta02}
\end{equation}
which therefore fixes $F_1$ modulo analytic terms as $t\to 0$. As explained in Appendix A, analyticity and crossing symmetry imply the following form for the amplitude itself:
\begin{align}
& A_1^{(1)}(s,t) =
-\frac{1}{\pi} \left[ \left(s - \frac{\Sigma}{2}\right)^3 \log (-s) + \left(u - \frac{\Sigma}{2}\right)^3 \log (-u) \right] F_1 (t,m_i^2) \nonumber \\
& \sim s^3 \left(1 - \frac{3\Sigma}{2s}\right) F_1(t,m_i^2) \left(i + \frac{3 t}{\pi s} \left(\log s + {\cal O}(1)\right) \right) + {\cal O}(\Sigma^2 s,t^2 s)\, ,
\label{A11}
\end{align}
as confirmed by explicit calculations \cite{KoemansCollado:2019ggb}.
It is known \cite{Amati:1990xe} that, in order to compute the classical two loop contribution to the eikonal, one has to take into account the first quantum contribution at one-loop level up to ${\cal O}(\epsilon)$. It is also known from explicit calculations \cite{Henn:2019rgj, KoemansCollado:2019ggb,DiVecchia:2019myk, DiVecchia:2019kta} that such a contribution contains two powers of $s$ and no $\log s$ in its imaginary part\footnote{There are also terms with an $s^{-\epsilon}$ behaviour in the analytic part of the amplitude \cite{Henn:2019rgj, DiVecchia:2019kta}. Although they are irrelevant for our subsequent discussion, we have checked that they also fulfill the constraints of analyticity and crossing.}. Using \eqn{neven} with $p=0$ from Appendix A we get:
\begin{equation}
A_1^{(2)}(s,t)
\sim s^2 G_1(t, m_i^2) (-i \pi + 2 \log s) + {\cal O}(s^2) \, .
\label{A12}
\end{equation}
In \cite{Amati:1990xe} this second structure was omitted since, in the specific case discussed there, there was no correction to the leading imaginary part appearing in \eqn{A11} for $D=4$. This was also shown \cite{KoemansCollado:2019ggb} to be the case for non-vanishing\footnote{We have checked that the structure of \eqn{A11} and \eqn{A12} is exactly recovered, at all $D$, by taking the high-energy limit of the explicit results (2.21), (2.25) given in \cite{KoemansCollado:2019ggb}.} $m_{1,2}$. However in other processes, in $D \ne 4$, or in supergravity theories, the structure \eqn{A12} is also present. We shall see that, provided a certain exponent takes a particular value, $A_1^{(2)}$ actually drops out in the final expression for $\mathrm{Re}(\delta_2)$, in agreement with several explicit calculations \cite{DiVecchia:2019kta,Bern:2020gjj}.
We finally turn to the two-loop amplitude $A_2$. We know that it should have a leading ${\cal O}(s^4)$ real term in order to reproduce the correct third-order term in the exponentiation of $\delta_0$. Analyticity and crossing symmetry now fix the amplitude to be of the form (see again Appendix A):
\begin{equation}
\begin{gathered}
A_2^{(1)}(s,t) = \left(s^4 + 2 s^3 (t - \Sigma)+ {\cal O}(t^2s^2, \Sigma^2 s^2)\right) F_2(t, m_i^2) \\
{\rm with}~ s^4\left(1- 2 \frac{\Sigma}{s}\right) \tilde{F}_2(b, m_i^2) = - \frac16 (2 \delta_0)^3 \, .
\end{gathered}
\label{A21}
\end{equation}
As a consequence $F_2(t,m_i^2)$ is known (up to analytic terms at $t=0$). Note that the terms ${\cal O}(s^2)$ in \eqn{A21} do not contribute to the classical phase shift.
In order to have a classical contribution in the ultra relativistic (or massless) limit we need a term in the amplitude proportional to $s^3$ (up to logarithms)\footnote{We should mention that in the massive case other structures emerge both at the one and at the two-loop level. At one-loop the ${\cal O}(s^2)$ contribution actually contains a classical piece of the form (with $D=4$ for concreteness)
$
\mathrm{Re} A_1^{(3)}(s,t) \sim s^2 \frac{m}{q} \Rightarrow \delta_1 \sim \frac{G_N s}{\hbar} \frac{G_N m}{b}$. At two loops we can get classical contributions from an amplitude going like $s^2 m^2 \log q^2$ while an amplitude behaving like $s^3 (m/q) \log q^2$ should also be present in order to accommodate the $\delta_0 \delta_1$ interference. Fortunately, all these terms arrange among themselves and do not interfere with the rest of our argument.}
contributing, in general, to both the real and the imaginary parts of $A_2$.
Let us parametrize the latter in the form
\begin{equation}
\mathrm{Im} A_2^{(2)}(s,t) = G_2(t, m_i^2) s^3 \log^p(s) ~~{\rm with~some}~ p > 0 \, .
\label{ImA22}
\end{equation}
We may then use \eqn{nodd} of Appendix A to get:
\begin{equation}
\mathrm{Re} A_2^{(2)}(s,t) = \frac{\pi p}{2 \log s} \mathrm{Im} A_2^{(2)}(s,t) \left(1 + {\cal O}\left(\frac{1}{\log^2 s}\right) \right) \, .
\label{ReA22}
\end{equation}
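For the case $p=1$, which will turn out to be the relevant one, the relation above can be checked on a toy crossing-symmetric, real-analytic ansatz. The following sympy sketch is an illustrative check of ours (not part of the derivation): we take the leading kinematics $u\simeq -s$ and drop the overall $t$-dependent prefactor.

```python
import sympy as sp

s = sp.symbols('s', positive=True)
L = sp.log(s)

# Toy crossing-symmetric, real-analytic ansatz with u ~ -s at leading order,
# built so that Im A ~ s^3 log(s), i.e. p = 1 in the notation of the text.
A = -(s**3/(2*sp.pi))*((L - sp.I*sp.pi)**2 - L**2)

re_A, im_A = sp.expand(A).as_real_imag()

# Re A = (pi p / (2 log s)) Im A with p = 1; for this ansatz the relation
# holds exactly, with no subleading corrections at all.
check = sp.expand(re_A - sp.pi/(2*L)*im_A)
print(check)   # 0
```

For $p=1$ the binomial expansion of $(\log s - i\pi)^2$ terminates, which is why the check is exact rather than valid up to ${\cal O}(1/\log^2 s)$.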
A highly non-trivial test of \eqn{A21} and \eqn{ReA22} is provided by the explicit results of \cite{Henn:2019rgj}. These are reported, for instance, in Eq.~(B.1) of \cite{DiVecchia:2019kta}, where both the structure of \eqn{A21} and the one of \eqn{ImA22} are present. The process discussed there is not necessarily $s-u$ symmetric but becomes so (at the order we consider) by choosing the external states appropriately\footnote{This can be done by choosing an axion and a dilaton as incoming state, which implies that $\hat{A}^{(0)}=(1+\frac{t}{s})+\ldots$ in the notations of~\cite{DiVecchia:2019kta}.}. After doing so, all the real sub-leading terms (of ${\cal O}(s q^2)$) are matched either by the leading ones on the first line of (B.1) through \eqn{A21}, or by the logarithmically-enhanced imaginary terms through \eqn{ReA22}, or both. By contrast, nothing constrains the non-logarithmically-enhanced imaginary terms at this level, since their real counterparts would be sub-sub-leading in $s$.
We finally use exponentiation in impact parameter space to argue that:
\begin{align}
& \mathrm{Re} \tilde{A}_2(s,b) =\mathrm{Re} \tilde{A}_2^{(1)}(s,b) +\mathrm{Re} \tilde{A}_2^{(2)}(s,b) = - \frac43 \delta_0^3 - 4 \mathrm{Im} \delta_1 \delta_0 + 2 \mathrm{Re} \delta_2 \nonumber \\
& \Rightarrow \mathrm{Re} \tilde{A}_2^{(2)}(s,b) = 2 \mathrm{Re} \delta_2 - 2 s^3\widetilde{(t F_2)} - 4 \delta_0 \mathrm{Im} \delta_1 ~;~ 2 \mathrm{Im} \delta_1 = - \pi s^2~ \tilde{G}_1 \, ,
\label{RetA22}
\end{align}
where we used \eqn{A21} including the contribution $2 s^3 t F_2$ to $2 \mathrm{Re}( \delta_2)$. Analogously,
\begin{equation}
\mathrm{Im} \tilde{A}_2(s,b) = 2 \mathrm{Im} \delta_2 + 4 \mathrm{Re}(\delta_1) \delta_0~;~ 2 \mathrm{Re}(\delta_1) = 2 s^2 \log s ~\tilde{G}_1 + \frac{3}{\pi} s^2 \log s ~ \widetilde{(t F_1)}\, ,
\label{ImtA22}
\end{equation}
where the term $4 \delta_0 \mathrm{Re}(\delta_1)$ represents the full elastic contribution to the $s$-channel discontinuity of the amplitude while $2 \mathrm{Im} (\delta_2)$ represents the inelastic (3 particle) contribution.
We are interested in extracting $\mathrm{Re}( \delta_2)$ from Eqs.~\eqref{RetA22} and \eqref{ImtA22}. In particular we would like to connect $\mathrm{Re}( \delta_2)$ to $\mathrm{Im}( \delta_2)$, an easier quantity to evaluate. Since $F_1(t,m_i^2)$ and $F_2(t, m_i^2)$ are both known in terms of $\delta_0$ (see below), the only obstacle to obtaining the sought-for relation is to eliminate the non-universal $G_1(t,m_i^2)$ contribution appearing in $\mathrm{Re}(\delta_1)$ and $\mathrm{Im}(\delta_1)$. Using \eqn{ReA22} a straightforward calculation gives:
\begin{align}
& 2 \mathrm{Re} (\delta_2) = \frac{\pi p}{2 \log s} (2 \mathrm{Im} \delta_2) + \pi (p-1) 2 \delta_0 s^2 \tilde{G}_1 +\frac{3p}{2} s^2 (2 \delta_0) \widetilde{(t F_1)} + 2 s^3 \widetilde{(tF_2)} + {\cal O}\left(\frac{1}{\log s}\right) \nonumber \\
&= \frac{\pi p}{2 \log s} (2 \mathrm{Im} \delta_2) - \frac{4-3p}{s} \delta_0 (2 \nabla \delta_0)^2 - (p-1) (2 \delta_0) (2 \mathrm{Im} \delta_1) + {\cal O}\left(\frac{1}{\log s}\right) \, ,
\label{Redelta2}
\end{align}
where in the last equation we have used
\eqn{delta02} and \eqn{A21} to express $ \widetilde{(t F_1)}$ and $\widetilde{(t F_2)}$ in terms of $\delta_0$, and \eqn{A12} to express $ \tilde{G}_1$ in terms of $\mathrm{Im} \delta_1$.
We thus notice that only for $p=1$ is $\mathrm{Re}(\delta_2)$ given entirely in terms of $\mathrm{Im}( \delta_2)$ and of $\delta_0$. For $p \ne 1$, $\mathrm{Re}(\delta_2)$ will also depend on $\tilde{G}_1$, i.e. on $\mathrm{Im}( \delta_1)$, which is non-universal. We shall see in Section~\ref{highenergy} that $\mathrm{Im}(\delta_2)$ is indeed universal at high energy, and that \eqn{ImA22} holds with $p=1$. As a result, $\mathrm{Re}(\delta_2)$ is also universal in the high-energy limit and given by:
\begin{equation}
2 \mathrm{Re}(\delta_2) = \frac{\pi}{2 \log s} \mathrm{Im}(2\delta_2) -\frac{ \delta_0 }{s} (\nabla 2 \delta_0)^2 + {\cal O}\left(\frac{1}{\log s}\right) \, .
\label{Redelta2p1}
\end{equation}
Note that both $\mathrm{Im}(\delta_2)$ and $\delta_0 (\nabla \delta_0)^2$ are IR divergent, but these divergences cancel so that the physical observables, such as the deflection angle, derived from $\mathrm{Re}(\delta_2)$ are finite.
\section{High energy limit of the 3-particle cut}
\label{highenergy}
In this section we focus on the 3-particle cut contribution to $A_2$ as depicted in Fig.~\ref{fig:3pdisc}.
\begin{figure}
\begin{center}
\begin{tikzpicture}
\path [draw, ultra thick, blue] (-5,6)--(-3,5)--(-1,5);
\path [draw, ultra thick, blue] (-5,0)--(-3,1)--(-1,1);
\path [draw] (-3,3)--(-1,3);
\path [draw] (-3,1)--(-3,5);
\path [draw, ultra thick, blue] (5,6)--(3,5)--(1,5);
\path [draw, ultra thick, blue] (5,0)--(3,1)--(1,1);
\path [draw] (3,3)--(1,3);
\path [draw] (3,1)--(3,5);
\draw[dashed] (-3,3) ellipse (1.3 and 2.3);
\draw[dashed] (3,3) ellipse (1.3 and 2.3);
\node at (0,5){$k_2$};
\node at (0,3){$k$};
\node at (0,1){$k_1$};
\node at (-5,6)[left]{$p_2$};
\node at (5,6)[right]{$p_3$};
\node at (-5,0)[left]{$p_1$};
\node at (5,0)[right]{$p_4$};
\node at (-3,4)[left]{$q_2$};
\node at (3,4)[right]{$q_3$};
\node at (-3,2)[left]{$q_1$};
\node at (3,2)[right]{$q_4$};
\end{tikzpicture}
\end{center}
\caption{\label{fig:3pdisc} The lines in bold represent energetic massive states, while the others represent massless states. The process depicted inside the dashed bubbles should not be interpreted as a specific Feynman diagram contribution, but just as a visual aid to recall the definition of the kinematic variables $q_i$. We are interested in the {\em full} $2\to 3$ tree level process, see~\eqref{ampli} as an explicit example.}
\end{figure}
Since we are interested in the ultra-relativistic limit, the amplitude is dominated by the graviton exchange whose contribution is universal. Thus for the sake of simplicity we focus on a gravity theory that can be obtained by taking the double copy of gauge theory amplitudes or equivalently the field theory limit of string amplitudes. The presence of extra fields, and the dilaton in particular, becomes irrelevant in the limit $s\gg m_i^2$. In order to obtain an explicit result we consider a 5-point amplitude in bosonic string theory where the external states $p_1$ and $k_1$ have a Kaluza-Klein momentum along one compact direction and $p_2$ and $k_2$ along another so they describe massive scalars in the uncompact dimensions\footnote{We used the KLT procedure to obtain the closed string amplitude from the open string one, see for instance Eq.~(5.3) of~\cite{BjerrumBohr:2010zs}: in our case, in order to encode the dependence on the KK masses, one needs to use $2 k_i k_j$, instead of $s_{ij}$, in the second line of that equation where all the scalar products are restricted to the uncompact directions.}. By taking the field theory limit we obtain the following result for the $2\to 3$ process in the left part of Fig.~\ref{fig:3pdisc},
\begin{align}
\label{ampli}
M^{\mu \nu} = &\; 2(8\pi G_N)^{\frac{3}{2}} \Bigg\{ (k_1p_2)(k_2 p_1) \left( -\frac{k_{1\mu}}{k_1 k} + \frac{k_{2\mu}}{k_2 k} \right)\left( -\frac{p_{2\nu}}{p_2 k} + \frac{p_{1\nu}}{p_1 k} \right) + 4 q_1^2 q_2^2 \\
\times & \Bigg[ \frac{q_1^\mu (p_1p_2) + p_2^\mu (p_1k) - p_1^\mu (p_2k)}{q_1^2 q_2^2}
+ \frac{k_2^\mu}{2k_2 k} \left( \frac{p_1p_2}{q_1^2} + \frac{1}{2} \right) - \frac{k_1^\mu}{2k_1k} \left( \frac{p_1p_2}{q_2^2} +\frac{1}{2} \right) \Bigg] \nonumber \\
\times & \Bigg[ \frac{q_1^\nu (k_1k_2) + k_2^\nu (k_1k) - k_1^\nu (k_2k) }{q_1^2 q_2^2}
- \frac{p_1^\nu}{2p_1 k} \left( \frac{k_1k_2}{q_2^2} + \frac{1}{2} \right) + \frac{p_2^\nu}{2p_2 k} \left( \frac{k_1k_2}{q_1^2} + \frac{1}{2} \right) \Bigg] \Bigg\},
\nonumber
\end{align}
where we introduced the variables $q_1 =-p_1-k_1$, $q_2 = - p_2-k_2$, which satisfy $k=q_1+q_2$. The overall normalization was fixed by considering the leading high energy term for the Weinberg soft limit $k\to 0$, where~\eqref{ampli} has to reduce to the leading $2\to 2$ amplitude times a universal factor.
One can then calculate the contribution to the imaginary part of $A_2$ from the 3-particle cut by the usual phase space integral
\begin{align}
2 \mathrm{Im} A_2^{(3p)} & = \frac{1}{\left((2\pi)^{D-1}\right)^3} \int \frac{d^{D-1} k_1}{2E_{k_{1}}}\int \frac{d^{D-1} k_2}{2E_{k_{2}}} \int \frac{d^{D-1} k}{2E_{k}}
{\it M}^{\mu \nu}(p_1, p_2 ; k_1, k_2 ,k) \nonumber \\
\times &
{\it M}_{\mu \nu} (-k_1,-k, -k_2; p_3, p_4) (2\pi)^{D} \delta^{(D)} (p_1+p_2 + k_1 +k_2 +k)\;.
\label{UR1}
\end{align}
Here we are interested in the ultra-relativistic limit, so it is possible to approximate~\eqref{UR1} in the double Regge limit\footnote{This limit can be performed on the full $2\to 3$ amplitude~\eqref{ampli} by scaling $k_1^\mu$ and $k_2^\mu$ as $s_{1,2}$ and one obtains directly Eq.~(4.10) of~\cite{Ademollo:1990sd}; notice that the latter result is traceless showing explicitly that the dilaton decouples in the double Regge limit.}
\begin{equation}
\label{eq:DRegge}
s\gg s_1, s_2 \to \infty\;, \quad \mbox{with }~ \frac{s_1 s_2}{s}\;,~ q_i^2\;,~ m_i^2~\mbox{ fixed}\;,
\end{equation}
where $s=-(p_1+p_2)^2$ and $s_i= - (k+k_i)^2$. In this regime it is convenient to write the kinematic variables in terms of the $(D-2)$ space-like vectors orthogonal to the direction where the energetic states are boosted (which we take to be $x^{D-1}$). By working in the Breit frame and taking light-cone variables for the time and longitudinal direction $(p_0 +p_{D-1},\vec{p}, p_0-p_{D-1})$, we have
\begin{align}
p_1 & = ( {\overline{m}}_1 {\rm e}^{y_1}, -\frac{\vec{q}}{2}, {\overline{m}}_1 {\rm e}^{-y_1})~;~p_2= ( {\overline{m}}_2 {\rm e}^{y_2}, \frac{\vec{q}}{2}, {\overline{m}}_2 {\rm e}^{-y_2}) \nonumber \\
p_4 & = (- {\overline{m}}_1 {\rm e}^{y_1}, -\frac{\vec{q}}{2}, -{\overline{m}}_1 {\rm e}^{-y_1})~~;~~p_3 = (-{\overline{m}}_2 {\rm e}^{y_2}, \frac{\vec{q}}{2}, {-\overline{m}}_2 {\rm e}^{-y_2})\;,
\label{PHI10}
\end{align}
where $y_i$ are the rapidities of the external particles and
${\overline{m}}_{1,2}^2 =m_{1,2}^2 + \frac{\vec{q}^{\;2}}{4}$.
The intermediate states with momenta $k_1, k_2, k$ (all incoming) are given by
\begin{align}
k_1 & = ( -{\overline{m}}_1 ' {\rm e}^{y_1'}, \frac{\vec{q}}{2}-{\vec{q}}_1, -{\overline{m}}_1 ' {\rm e}^{-y_1'})~~;~~
k_2 = (-{\overline{m}}_2 ' {\rm e}^{y_2'}, -\frac{\vec{q}}{2}-{\vec{q}}_2, {-\overline{m}}_2 ' {\rm e}^{-y_2'})\;,
\nonumber \\
k & = (-|k| {\rm e}^{y}, \vec{k} , -|k| {\rm e}^{-y})\;,
\label{PHI12}
\end{align}
where
$({\overline{m}}_{1} ')^2 = m_1^2 + ( \frac{\vec{q}}{2} -{\vec{q}}_1)^2$ and $({\overline{m}}_{2} ')^2 = m_2^2 + ( \frac{\vec{q}}{2} +{\vec{q}}_2)^2$.
In the double Regge limit, one can use approximations such as $q_i^2 \sim \vec{q}_i^{\;2}$, and~\eqref{ampli} reduces to the result of~\cite{Amati:1990xe} and Eq.~(4.10) of~\cite{Ademollo:1990sd}, even though in those papers the external states are massless. This is not surprising, since we are keeping the masses $m_i^2$ fixed as the Mandelstam variables $s$, $s_i$ become large; however, there is a point that deserves a further comment. By always keeping the leading order when rewriting the $D$-dimensional kinematics in terms of the transverse variables, we obtain that the ultra-relativistic limit of Eq.~\eqref{UR1} agrees with the result of~\cite{Amati:1990xe}
\begin{align} \label{fqi}
\mathrm{Im} A_2^{(3p)} \simeq &~ \frac{(16 \pi G_N)^3s^3}{2\pi} \int dy \int \frac{d^{D-2} \vec{q}_1}{(2\pi)^{D-2}} \int \frac{d^{D-2} \vec{q}_2}{(2\pi)^{D-2}} \frac{1}{(\vec{k}^2)^2} \\
\times & \left[ \frac{\left[ (\vec{q}_1 \vec{q}_4)(\vec{q}_2 \vec{q}_3) + (\vec{q}_1 \vec{q}_2)(\vec{q}_3 \vec{q}_4) -
(\vec{q}_1 \vec{q}_3)(\vec{q}_2 \vec{q}_4) \right]^2}{\vec{q}_1^{\;2} \vec{q}_2^{\;2} \vec{q}_3^{\;2} \vec{q}_4^{\;2}} +1 - \frac{( \vec{q}_1 \vec{q}_2)^2}{\vec{q}_1^{\;2} \vec{q}_2^{\;2}}- \frac{(\vec{q}_3 \vec{q}_4)^2}{\vec{q}_3^{\;2} \vec{q}_4^{\;2}} \right], \nonumber
\end{align}
where we used the delta function in~\eqref{UR1} in order to perform the integrals over the rapidities\footnote{This is possible only if $m_{t,k}{\rm e}^{\pm y} \leq \sqrt{s}$, which provides the limits of integration for the rapidity variable $y$.} $y'_{1,2}$ and over the spatial components $\vec{k}$. This result has a milder IR behaviour than naively expected, since in~\eqref{fqi} there are no terms diverging as ${q}_1^{-2} {q}_2^{-2}$ (or as ${q}_3^{-2} {q}_4^{-2}$) when the relevant ${q}_i$ are small. The cancellation of such contributions ensures that there are no $1/\epsilon^2$ contributions in the massless case and, crucially for us, no $\log^2(s)$-enhanced terms, {\em i.e.} no contribution with $p=2$ in~\eqref{ImA22}. This can be seen as follows: terms where the integrations over the $\vec{q}_i$ are factorised can produce a $1/\epsilon^2$ contribution if each integration is independently IR divergent\footnote{The UV divergences in~\eqref{ampli} cancel, see~\cite{Amati:1990xe}.}; however, in this case one should keep subleading corrections to the approximation $q_i^2 \sim \vec{q}_i^{\;2}$ by including terms that break the factorisation, such as $q_1^2 = \vec{q}_1^{\;2} + A (\vec{q}_1+\vec{q}_2)^2 +\ldots$ and $q_2^2 = \vec{q}_2^{\;2} + B (\vec{q}_1+\vec{q}_2)^2 + \ldots$; then $A B\sim (m_1 m_2/s)^2$ acts as a regulator in the deep IR region, producing a contribution proportional to $\epsilon^{-1} \log(m_1 m_2/s)$ instead of $1/\epsilon^2$. Since terms diverging as ${q}_1^{-2} {q}_2^{-2}$ are absent in~\eqref{ampli}, this mechanism does not apply and the only possibility to generate a factor of $\log(s)$ is from the integration over $y$.
Thus starting from~\eqref{fqi} it is possible to follow the derivation of~\cite{Amati:1990xe} and obtain
\begin{equation}
\mathrm{Im} {\widetilde{A_2^{(3p)}}} (s, b) \simeq \frac{1}{2 s} \, \frac{ (8G_N s)^3 \log s \Gamma^3 (1-\epsilon)}{16 (\pi b^2)^{1-3\epsilon}}
\left[ - \frac{1}{4\epsilon}+ \frac{1}{2} + {\cal{O}} (\epsilon) \right]\;.
\label{expaMtilde}
\end{equation}
This result is nothing else than $2\mathrm{Im}(\delta_2)$ and we can use it in the general result~\eqref{Redelta2p1}. First one can check that, as in \cite{Amati:1990xe} for the massless case, IR divergences cancel in \eqn{Redelta2p1}; for this cancellation to happen it is crucial that~\eqref{expaMtilde} implies $p=1$ in~\eqref{ImA22}. Then the finite term provides a finite, smooth, and universal result for the high-energy limit of $\mathrm{Re}(\delta_2)$ in $D=4$:
\begin{equation}\label{eq:ur}
2 \mathrm{Re}(\delta_2) \simeq \frac{4 G_N^3 s^2}{\hbar b^2}
\end{equation}
in agreement with \cite{Amati:1990xe,DiVecchia:2019kta,Bern:2020gjj}.
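Both the IR-pole cancellation and the finite result~\eqref{eq:ur} can be verified symbolically by combining \eqref{eq:d0def}, \eqref{expaMtilde} and \eqref{Redelta2p1}. The sympy sketch below is an illustrative check of ours ($\hbar=1$, $\log s$ kept as a formal symbol, and $\Gamma(-\epsilon)$ rewritten as $-\Gamma(1-\epsilon)/\epsilon$):

```python
import sympy as sp

eps, G, s, b, L = sp.symbols('epsilon G s b L', positive=True)  # L stands for log(s)
Gam = sp.gamma(1 - eps)

# Leading eikonal, 2*delta_0, using Gamma(-eps) = -Gamma(1-eps)/eps
two_delta0 = -G*s*Gam*(sp.pi*b**2)**eps/eps

# Im(2*delta_2) from the 3-particle cut
im_2delta2 = (8*G*s)**3*L*Gam**3/(2*s*16*(sp.pi*b**2)**(1 - 3*eps)) \
             * (-sp.Rational(1, 4)/eps + sp.Rational(1, 2))

# 2 Re(delta_2) = pi/(2 log s) * Im(2 delta_2) - (delta_0/s) * (grad 2 delta_0)^2
grad = sp.diff(two_delta0, b)
two_re_delta2 = sp.pi/(2*L)*im_2delta2 - (two_delta0/(2*s))*grad**2

# The 1/eps poles cancel; in D = 4 one is left with the universal 4 G^3 s^2 / b^2
ratio = sp.simplify(two_re_delta2*b**2/(G**3*s**2))
print(sp.limit(ratio, eps, 0))   # 4
```

Note that the $\log s$ factor of the imaginary part cancels against the $\pi/(2\log s)$ prefactor, so the $p=1$ enhancement is exactly what is needed for a finite, energy-analytic real part.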
\section{Direct calculation of $\mathrm{Re} \delta_2$ at high energy}
\label{direct}
In order to corroborate the general results obtained in the previous sections with an explicit example, in this section we provide an explicit expression for the four-point two-loop amplitude in ${\cal{N}}=8$ supergravity for the scattering of two scalar particles with masses $m_1$ and $m_2$. This case has already been studied at one loop in~\cite{Caron-Huot:2018ape} and at two loops in~\cite{Parra-Martinez:2020dzs}. We follow closely the procedure of~\cite{Parra-Martinez:2020dzs}, where the two-loop amplitude in ${\cal{N}}=8$ massive supergravity is written in terms of a set of basic scalar integrals $I_\mathrm{T}$, where the subscript $\mathrm{T}\in\{\mathrm{III},\mathrm{IX},\mathrm{XI},\mathrm{H},\ldots\}$ (together with the corresponding crossed topologies $\overline{\mathrm{T}}$) indicates the diagram's topology,
\begin{align}
A_2 (s, q^2) = & ~ \frac{(8\pi G_N)^3}{2} \Bigg( (s-m_1^2-m_2^2)^4 + (u-m_1^2-m_2^2)^4 - t^4 \Bigg)
\nonumber \\
\times & \Bigg[(s-m_1^2-m_2^2)^2 \left( I_{\rm III} + I_{\rm IX} + I_{\rm XI}\right) \nonumber \\
+& (u-m_1^2-m_2^2)^2 \left( I_{\overline{\rm III}} +I_{\overline{\rm IX}} +I_{\overline{\rm XI}} \right) +t^2 \left( I_{\rm H} + I_{\overline{\rm H}} + \cdots \right)\Bigg]\,.
\label{CR1}
\end{align}
Here we follow the notation of Eq.~(3.16) of~\cite{Parra-Martinez:2020dzs} and, for the sake of simplicity, we have taken the angle appearing in that reference, which specifies the Kaluza-Klein reduction, to be $\phi = \frac{\pi}{2}$; in addition we have chosen the incoming particles to be an axion and a dilaton, so as to obtain an amplitude that is symmetric under the exchange of $s$ with $u$. Finally, we neglected all the integral structures that are subleading in the high-energy regime we are interested in or, equivalently, in the limit of small momentum transfer.
The integrals $I_{\rm H}$ and $I_{\overline{\rm H}}$ were computed in an $\epsilon$ expansion for arbitrary kinematics in~\cite{Bianchi:2016yiq}, while the remaining integrals in~\eqref{CR1} were studied in~\cite{Parra-Martinez:2020dzs}: there, the differential equation approach, adopted in particular in \cite{Henn:2013woa} for the double box integral\footnote{The double box and non-planar double box were also calculated in \cite{Smirnov:2001cm,Heinrich:2004iq,Henn:2013woa} via a Mellin--Barnes representation; whenever possible we checked that the results in our Appendix~\ref{Appb} are consistent with the papers mentioned above.}, was adapted by implementing the soft limit $|t| \ll s, m_1^2, m_2^2$ from the beginning; the problem was then further simplified by calculating the boundary conditions of the relevant differential equation in the ``potential'' approximation. Here instead we do not take this extra approximation and we provide the result valid in the full soft region, even though we focus on the ultra-relativistic $(s\gg m_i^2)$ case. We collect in Appendix~\ref{Appb} the results for all the scalar integrals needed and leave a detailed discussion of the generic kinematics to a follow-up paper~\cite{toap}.
When we add all the contributions we see that all $\log s$ in the real part of the amplitude cancel and one is left only with a $\log s$ in the imaginary part. In conclusion the complete ultra-relativistic amplitude is
\begin{eqnarray}
A_2 (s, q^2) & \simeq & \frac{(8\pi G_N)^3 s^3}{(4\pi)^4} \Bigg\{ - \frac{2\pi^2 s}{\epsilon^2 q^2} \left( \frac{4\pi {\rm e}^{-\gamma_E}}{q^2}\right)^{2\epsilon} - \frac{4\pi (i -\pi)}{\epsilon^2}\left( \frac{4\pi {\rm e}^{-\gamma_E}}{q^2}\right)^{2\epsilon} \nonumber \\
&+ & \frac{1}{\epsilon}\left( \frac{4\pi {\rm e}^{-\gamma_E}}{q^2}\right)^{2\epsilon} \left[ 4\pi^2 + 8\pi i \log \frac{s}{m_1m_2} -8\pi i - i \frac{\pi^3}{3} \right] \Bigg\} + {\cal{O}}(\epsilon^0)\,.~
\label{CR27}
\end{eqnarray}
Before proceeding further, let us notice that this amplitude perfectly satisfies some of the general properties discussed in Sect.~\ref{analyticity}. In fact, the ratio between the leading term and the real part of the subleading term at order $\frac{1}{\epsilon^2}$ is equal to $\frac{s}{2t}$, in agreement with the first line of Eq.~\eqref{A21}, while, at order $\frac{1}{\epsilon}$, the first two terms in the square bracket satisfy Eq.~\eqref{ReA22} for $p=1$.
This is the consequence of a non-trivial cancellation since the individual integrals contain higher $\log^p(s)$ contributions, see for instance the contribution coming from $I_{\rm H}$ in~\eqref{eq:H+bH}. There are also further non-trivial cancellations that ensure that the leading term at large distance, proportional to $(q^2)^{-1+2\epsilon}$, takes a particularly simple form in line with the expectation of the eikonal exponentiation~\eqref{eq:25}. This is more easily seen in impact parameter space where we get
\begin{align}
{\tilde{A}}_2 (s, b) \simeq &~ \Bigg\{ \frac{ G_N^3 s^3 (\pi b^2 )^{3\epsilon}\Gamma^3 (1-\epsilon)}{6 \epsilon^3} - \frac{8G_N^3 (i -\pi)s^2 (\pi b^2 )^{3\epsilon}\Gamma^3 (1-\epsilon) }{\epsilon \pi b^2 } \nonumber \\
+ & \frac{2 G_N^3s^2 \Gamma^3 (1-\epsilon)}{(\pi b^2)^{1-3\epsilon} } \left[ 4\pi + 8 i \log \frac{s}{m_1m_2} -8 i - i \frac{\pi^2}{3} \right] + {\cal O}(\epsilon) \Bigg\}\;.
\label{CR28a}
\end{align}
In order to compute the new contribution to the eikonal we must first subtract the contributions of the lower-order eikonal $\delta_0$ and of $\Delta_1$, which are equal to\footnote{The result for $\mathrm{Im}(2 \Delta_1)$ is apparently different from the one obtained in~\cite{DiVecchia:2019kta} because of the different conventions mentioned in the footnote around Eq.~\eqref{eq:25}.}
\begin{eqnarray}
2\delta_0 = \frac{G_N s \Gamma (1-\epsilon) (\pi b^2)^{\epsilon} }{-\epsilon}~~;~~2 \mathrm{Im} \Delta_1 \simeq
\frac{8 G_N^2 s (\pi b^2)^{2\epsilon} \Gamma^2 (1-\epsilon)}{b^2} \left(1+ \frac{\epsilon}{2} \right)\;.
\end{eqnarray}
Using them we can write the real part of the amplitude in Eq. \eqref{CR28a} as follows:
\begin{equation}
\mathrm{Re} {\tilde{A}}_2 (s, b) \simeq - \frac{i}{6}(2i\delta_0)^3 - \mathrm{Im} (2\Delta_1) 2 \delta_0 +\frac{4G_N^3 s^2 (\pi b^2)^{3\epsilon} \Gamma^3 (1-\epsilon)}{ b^2} + {\cal O}(\epsilon)\,.~~~~~~
\end{equation}
The last term in this equation is the ultra-relativistic limit of $\mathrm{Re}(\delta_2)$, and it is immediate to check that in $D=4$ one recovers the universal result~\eqref{eq:ur} obtained in the previous section by generalising the approach of~\cite{Amati:1990xe}.
For completeness, let us also present the real part of the eikonal to third post-Minkowskian order for generic $s=m_1^2+m_2^2+2m_1m_2\sigma$ (with $\sigma \geq1$)~\cite{toap},
\begin{equation}
\mathrm{Re}(\delta_2)= \frac{2 G_N^3 (2m_1 m_2 \sigma)^2 }{b^2}\left[\frac{\sigma^4}{
\left(\sigma
^2-1\right)^2}-\cosh ^{-1}(\sigma ) \left(\frac{\sigma^2}{\sigma ^2-1}-\frac{\sigma ^3
\left(\sigma ^2-2\right)
}{\left(\sigma ^2-1\right)^{5/2}}\right) \right]\,.
\end{equation}
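The ultra-relativistic behaviour of this expression can be checked directly. The following short symbolic evaluation is ours, not part of the original derivation: it confirms that the square bracket tends to $1$ as $\sigma\to\infty$, so that $\mathrm{Re}(\delta_2)\to 2G_N^3(2m_1m_2\sigma)^2/b^2$, i.e. it grows like $G_N^3 s^2/b^2$ in that regime, consistent with the universal result discussed above.

```python
import sympy as sp

sigma = sp.symbols('sigma', positive=True)
# Square bracket of Re(delta_2) as given in the text
bracket = (sigma**4 / (sigma**2 - 1)**2
           - sp.acosh(sigma) * (sigma**2 / (sigma**2 - 1)
                                - sigma**3 * (sigma**2 - 2)
                                / (sigma**2 - 1)**sp.Rational(5, 2)))
# High-precision evaluation at large sigma: the bracket approaches 1, so
# Re(delta_2) -> 2 G_N^3 (2 m1 m2 sigma)^2 / b^2 ~ G_N^3 s^2 / b^2.
val = sp.N(bracket.subs(sigma, 10**6), 30)
print(val)
```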
Furthermore, the corresponding 3PM contribution to the scattering angle as a function of the angular momentum $J$ reads
\begin{align}\label{eq:chi3}
\chi_{3\mathrm{PM}}=&-\frac{16 m_1^3 m_2^3 \sigma ^6 G_N^3}{3 J^3
\left(\sigma ^2-1\right)^{3/2}}+\frac{32 m_1^4 m_2^4
\sigma ^6 G_N^3}{J^3 \left(\sigma ^2-1\right)s}
\\\nonumber
&- \frac{4}{J^3 s} \left(16 m_1^4 m_2^4 \sigma ^4 G_N^3-\frac{16 m_1^4 m_2^4 \sigma^5 \left(\sigma ^2-2\right) G_N^3}{\left(\sigma ^2-1\right)^{3/2}}\right) {\rm arcsinh}\sqrt{\frac{\sigma-1}{2}}\;.
\end{align}
Let us conclude with a few comments on the 3PM result~\eqref{eq:chi3}. The first term in the first line is entirely determined by $\chi_{1\mathrm{PM}}$, while the first term in the second line is due to integrals $I_{\rm H}$ and $I_{\overline{\rm H}}$, see~\eqref{eq:H+bH}. Together they reproduce exactly Eq.~(6.41) of~\cite{Parra-Martinez:2020dzs}. The second term in the second line is a contribution coming from the full soft-region analysis of the crossed double-box integrals. In the ultra-relativistic limit $\sigma \gg 1$ the leading term ${\cal O}(\sigma^4)$ in each term in the round parentheses in the second line cancels and only the first line survives, reproducing the universal and finite ultra-relativistic result which was the main focus of this paper. Thanks to the analyticity/crossing argument in Section~\ref{analyticity}, this cancellation is a consequence of the cancellation in the imaginary part mentioned below Eq.~\eqref{CR27}. Notice that the functional form of~\eqref{eq:chi3} matches exactly Eq.~(3.65) of~\cite{Damour:2019lcq} (where of course the Schwarzschild contribution is substituted by the probe-limit relevant to this supersymmetric case) and so it is possible to define a function $\overline{C}(\sigma)$ also for the ${\cal N}=8$ case.
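The cancellation of the leading ${\cal O}(\sigma^4)$ pieces can be verified with a quick symbolic check (ours, not from the original analysis): stripping the common prefactor $16\,m_1^4 m_2^4 G_N^3$ from the round parentheses of~\eqref{eq:chi3}, the difference vanishes at order $\sigma^4$ and the leftover grows only like $\sigma^2/2$.

```python
import sympy as sp

sigma = sp.symbols('sigma', positive=True)
# Round parentheses in the second line of chi_3PM, without the common
# prefactor 16 m1^4 m2^4 G_N^3:
paren = sigma**4 - sigma**5*(sigma**2 - 2)/(sigma**2 - 1)**sp.Rational(3, 2)
lead4 = sp.limit(paren / sigma**4, sigma, sp.oo)  # O(sigma^4) pieces cancel
lead2 = sp.limit(paren / sigma**2, sigma, sp.oo)  # residual ~ sigma^2 / 2
print(lead4, lead2)
```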
It is interesting to look at the opposite limit $\sigma \to 1$, which is relevant to the PN (Post-Newtonian) expansion: the terms already present in Eq.~(6.41) of~\cite{Parra-Martinez:2020dzs} have a standard $n$PN expansion with integer $n$. The new terms yield contributions only at half-integer PN orders, starting at $1.5$PM, and so they do not modify the integer PN data. However, these half-integer PN terms, usually associated with dissipative phenomena, are somewhat unexpected at order $G_N^3$, so it would certainly be interesting to repeat the same analysis for pure GR instead of the ${\cal N}=8$ supergravity case. If a similar pattern appears also in GR, as seems reasonable since the same integrals $I_{\rm T}$ appear in all cases, that would of course be the best setup in which to investigate these issues in more detail.
\vspace{5mm}
\noindent {\large \textbf{Acknowledgements} }
We thank Zvi Bern, Arnau Koemans Collado, Thibault Damour, Stephen Naculich, Julio Parra Martinez, Augusto Sagnotti and Congkao Wen for useful discussions. We thank Claude Duhr and Vladimir Smirnov for helping us check with independent methods some of our results and Gudrun Heinrich for some numerical checks. The research of RR is partially supported by the UK Science and Technology Facilities Council (STFC) Consolidated Grant ST/P000754/1 ``String theory, gauge theory and duality''. The research of CH (PDV) is fully (partially) supported by the Knut and Alice Wallenberg Foundation under grant KAW 2018.0116. PDV, RR and GV would like to thank the Galileo Galilei Institute for hospitality during the workshop ``String theory from the worldsheet perspective'' where they started discussing this topic.
\section{Introduction}\label{sec:introduction}
Our knowledge of solar magnetism relies heavily on our ability to detect and interpret the polarization signatures of magnetic fields in solar spectral lines.
Consequently, new Stokes polarimeters are designed to have the capability to observe the solar atmosphere in a variety of spectral lines over a wide wavelength range.
One immediate instrument requirement stemming from this need for wavelength diversity is that the polarization modulation scheme must be \emph{efficient} at all wavelengths within the working range of the spectropolarimeter.
(For the definition of polarimetric efficiency see \citealp{2000ApOpt..39.1637D}.)
Typically, one attempts to achieve this goal by achromatizing the polarimetric response of a modulator.
This, for instance, is the rationale behind the design of super-achromatic wave plates \citep{1974MExP...12..361S,2004JQSRT..88..319S,2008ChJAA...8..349M}.
\cite{2010ApOpt..49.3580T} argued that for many instruments achromaticity is too strong a constraint, and instead proposed the concept of the \emph{polychromatic} modulator that is efficient at all wavelengths of interest, but has polarimetric properties that vary with wavelength.
In this paper, we present the development process of a modulator for the CRisp Imaging SpectroPolarimeter (CRISP) Fabry-Perot tunable narrow-band imaging instrument \citep{2006A&A...447.1111S,2008ApJ...689L..69S} at the Swedish 1-m Solar Telescope \citep[SST,][]{2003SPIE.4853..341S}.
First, we compare the performance of two possible modulator designs, and use a Monte-Carlo tolerance analysis to evaluate their robustness.
We analyze the sensitivity of the modulator to thermal conditions, and present the opto-mechanical packaging and electrical interfaces.
The modulator was constructed and tested at the High Altitude Observatory (HAO).
We compare as-built properties to those of the design.
Finally, we show some example polarimetric observations made using this modulator.
This modulator was designed and built to replace a modulator based on Liquid Crystal Variable Retarders (LCVRs).
LCVRs are electro-optical devices that have a fixed fast axis orientation, but, as the name implies, can be set to any retardance within some range by applying an AC voltage.
LCVRs generally have much slower switching speeds than Ferro-electric Liquid Crystals (\edit1{FLCs}).
In contrast to LCVRs, \edit1{FLCs} have a constant retardance but switch their fast axis orientation between two states separated by a switching angle, typically around $45\degr$.
The LCVRs in the old CRISP modulator had to be ``overdriven'' and the modulator state order had to be optimized in order to switch during the $10~\mathrm{ms}$ readout time of the CRISP cameras.
More importantly, however, thermal sensitivities of the setup forced polarimetric calibration more frequently than desired \citep{2008A&A...489..429V}.
We limit ourselves to designs that use \edit1{FLCs} because their fast switching speed allows the state of the modulator to be changed in less than the allotted $10~\mathrm{ms}$, thus allowing for the highest possible modulation rate.
Fast modulation is desirable because seeing-induced crosstalk between Stokes parameters that is a dominant source of error in ground-based polarimeters is less at higher modulation rates \citep{1987ApOpt..26.3838L,2004ApOpt..43.3817J,2012ApJ...757...45C}.
Also, it is of importance in maximizing the overall efficiency of the polarimeter.
Many present-day CCD and CMOS detectors allow simultaneous exposure and readout \edit1{so} that the switching speed of the modulator becomes the limitation in the overall duty cycle---and thus efficiency---of the modulator.
\edit1{For a recent review of instrumentation for solar spectropolarimetery we direct the reader to \cite{2019OptEn..58h2417I}.
\cite{2014SPIE.9099E..0LR} present a more general review of instrumentation for measurements of polarized light.}
\section{Design}\label{sec:design}
A computer program was developed at HAO to determine component parameters for a given modulator design \citep{2010ApOpt..49.3580T}.
This program was used successfully to design the modulators for the ProMag, CoMP-S, SCD, ChroMag, and UCOMP instruments built or under construction by HAO \citep{2008SPIE.7014E..16E, Kucera2010, 2015IAUGA..2246687K, 2012SPIE.8446E..78D}.
More recently, it, or similar programs derived from it or independently implemented by others, have been used to design modulators, e.g., for the DKIST \citep{2018JATIS...4d4006H}.
The code can use several different merit functions.
We choose to minimize, at a number of user-specified wavelengths, the maximum deviation of the modulation efficiencies \edit1{$\epsilon_Q$, $\epsilon_U$, and $\epsilon_V$} in Stokes $Q$, $U$, and $V$ from the optimal value of $1/\sqrt{3}$ for balanced modulation, normalized by the efficiency \edit1{$\epsilon_I$} in Stokes $I$.
\edit1{It is possible to bias the modulation efficiency to prefer linear or circular polarization.
However, such schemes are of limited use because the SST is not a polarization-free telescope \citep{Selbing2005}.}
We study two designs: one consisting of two \edit1{FLC} devices followed by one fixed retarder that we will refer to as the FFR design, and one consisting of an \edit1{FLC}, a fixed retarder, a second \edit1{FLC}, and a second fixed retarder that we will refer to as the FRFR design.
The HAO-designed instruments mentioned above all use the FFR design.
Others have implemented FRFR designs \citep[e.g.,][]{1999OptEn..38.1402G, 2001ASPC..236...16K,2016A&A...590A..89I}.
The FFR design has 5 free parameters, whereas the FRFR design has 7 (see Table~\ref{tab:designs}).
Both have significant freedom to optimize the design over wide wavelength ranges.
All modulators discussed here were optimized for balanced modulation, i.e., equal efficiency in $Q$, $U$, and $V$, at 16~equidistant wavelengths over the 500--900~nm operating wavelength range of the CRISP instrument.
We allow the program to choose the retardances of the \edit1{FLCs} and the retarders, as well as the orientations of the second \edit1{FLC} and the retarders.
Experience has shown that the best configurations have an orientation very close to $0$ or $90\degr$ with respect to the orientation of the analyzing polarizer for \edit1{the bisector of} the first \edit1{FLC}.
We therefore fix the orientation of the first \edit1{FLC} at 0~degrees to eliminate one free parameter.
The switching angle of an \edit1{FLC} is sensitive to both temperature and drive voltage \citep{Gisler2005,2003SPIE.4843...45G}.
Hence, we assume that we can \edit1{adjust the drive voltage at a given operating temperature so that} the \edit1{FLC} switching angles \edit1{are} 45~degrees.
We also account for dispersion of birefringence for all elements of the modulator \edit1{using measurements from similar optics}.
The program can use several different optimization techniques.
We first use a Latin Hypercube Sampling algorithm \citep{10.2307/1268522} to probe the parameter space.
We use a large population size of 25,000 but only 5 iterations in which we shrink the parameter space around the best solution.
We then apply a downhill simplex method \citep{10.1093/comjnl/7.4.308} to refine the solution.
To increase confidence that we did not find a local minimum, we repeat the search several times and check that we consistently find the same solution.
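The essential machinery—building the modulation matrix of an FFR stack, evaluating its efficiencies, and searching on a balance merit function—can be sketched in a few dozen lines. The following illustration is ours, not the HAO program: it assumes ideal linear retarders with a crude $1/\lambda$ retardance scaling in place of the measured dispersion of birefringence, exact $45\degr$ switching, a perfect horizontal analyzer, and a toy shrinking random search standing in for the Latin Hypercube and downhill-simplex stages.

```python
import numpy as np

def rot(th):
    """Mueller rotation matrix; th in radians (rotates Q-U by 2*th)."""
    c, s = np.cos(2*th), np.sin(2*th)
    return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1.0]])

def retarder(delta, th):
    """Mueller matrix of an ideal linear retarder (retardance/axis in radians)."""
    c, s = np.cos(delta), np.sin(delta)
    m = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, c, s], [0, 0, -s, c]], float)
    return rot(th) @ m @ rot(-th)

def efficiencies(d1, d2, dr, th2, thr, lam, lam0=665.0):
    """(eps_I, eps_Q, eps_U, eps_V) of the FFR stack at wavelength lam (nm).
    d1, d2, dr: retardances in waves at lam0; th2, thr: orientations (deg);
    FLC axes switch +/-22.5 deg about their bisector (first bisector at 0)."""
    k = 2*np.pi*lam0/lam                      # crude 1/lambda scaling
    O = []
    for s1 in (-1, 1):
        for s2 in (-1, 1):
            m = (retarder(dr*k, np.radians(thr))
                 @ retarder(d2*k, np.radians(th2 + 22.5*s2))
                 @ retarder(d1*k, np.radians(22.5*s1)))
            O.append(0.5*(m[0] + m[1]))       # horizontal analyzer
    O = np.array(O)
    O = O / O[:, :1]                          # normalize intensity column to 1
    D = np.linalg.pinv(O)
    return 1.0/np.sqrt(4*np.sum(D**2, axis=1))

LAMS = np.linspace(500.0, 900.0, 16)

def merit(p):
    """Max deviation of eps_{Q,U,V}/eps_I from 1/sqrt(3) over the passband."""
    worst = 0.0
    for lam in LAMS:
        e = efficiencies(*p, lam)
        worst = max(worst, float(np.max(np.abs(e[1:]/e[0] - 1/np.sqrt(3)))))
    return worst

design = (0.490, 0.248, 0.228, 112.9, 108.8)  # Table 1 FFR values
print("merit of the Table 1 design:", merit(design))

# Toy shrinking random search around the design (stand-in for LHS + simplex)
rng = np.random.default_rng(0)
best_p, best_m, width = np.array(design), merit(design), 0.02
for _ in range(3):
    for p in best_p + width*rng.uniform(-1, 1, size=(60, 5)):
        m = merit(p)
        if m < best_m:
            best_m, best_p = m, p
    width *= 0.5
print("refined merit:", best_m)
```

The efficiency formula follows \cite{2000ApOpt..39.1637D}: $\epsilon_i = (n\sum_k D_{ik}^2)^{-1/2}$ with $n=4$ modulation states and $\mathbf{D}$ the pseudo-inverse of the modulation matrix.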
The resulting designs are summarized in Table~\ref{tab:designs}.
\begin{table}[tbp]
\caption{\label{tab:designs}FRFR and FFR modulator designs.
The orientation value of the \edit1{FLCs} refers to the bisector of the two fast axis positions that are separated by $45\degr$.}
\centering
\begin{tabular}{lrr}
\hline
Component & Retardance & Orientation \\
& waves at 665~nm & degrees\\ \hline\hline
\multicolumn{3}{c}{FRFR} \\ \hline
\edit1{FLC} 1 & $0.429$ & $0$ \\
Retarder 1 & $0.181$ & $121.0$ \\
\edit1{FLC} 2 & $0.324$ & $17.5$ \\
Retarder 2 & $0.543$ & $109.6$ \\ \hline\hline
\multicolumn{3}{c}{FFR} \\ \hline
\edit1{FLC} 1 & $0.490$ & $0$ \\
\edit1{FLC} 2 & $0.248$ & $112.9$ \\
Retarder 1 & $0.228$ & $108.8$ \\
\end{tabular}
\end{table}
The next step is to evaluate the robustness of the design using a Monte-Carlo tolerancing method.
Efficiencies were calculated for a total of 1000 modulator realizations with parameters chosen from a uniform distribution around the design values.
The width of the distributions was chosen to be the vendor-supplied accuracies for the retardances of the devices of $50~\mathrm{nm}$ for the \edit1{FLCs} and $2~\mathrm{nm}$ for the retarder.
\edit1{For the orientation we assume an error of up to 1~degree.
Experience has shown that alignment of the optics with this accuracy is possible by hand with a simple lab setup.}
The switching angle of the \edit1{FLCs} is assumed to be 45~degrees with insignificant error.
\edit1{FLCs} typically have large manufacturing errors in their retardances.
Since as-built retardances will be known prior to assembly of the modulator, the tolerancing process re-optimizes the angles of the components after the retardances have been chosen.
As an example, a realization of the FFR modulator might have \edit1{FLCs} with retardances of $0.513$ and $0.236$ waves, and a retarder with a value of $0.243$ waves, at the reference wavelength of $665~\mathrm{nm}$.
The tolerancing procedure would then re-optimize the modulator design to find optimal angles of $115.5$ and $109.9\degr$ for the second \edit1{FLC} and the retarder.
Then, the procedure will perturb all the angles to account for mounting errors, to, say, $-0.6$, $114.8$, and $109.8\degr$, and finally calculate the efficiencies of this modulator realization.
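The tolerancing loop can be sketched as follows, under the same simplifying assumptions as above (ideal retarders, $1/\lambda$ retardance scaling, design values from Table~\ref{tab:designs}); for brevity this sketch of ours omits the per-realization re-optimization of the angles, so it somewhat overestimates the spread.

```python
import numpy as np

def rot(th):
    c, s = np.cos(2*th), np.sin(2*th)
    return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1.0]])

def ret(delta, th):
    """Ideal linear retarder Mueller matrix (retardance/axis in radians)."""
    c, s = np.cos(delta), np.sin(delta)
    m = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, c, s], [0, 0, -s, c]], float)
    return rot(th) @ m @ rot(-th)

def effs(waves, angs, lam, lam0=665.0):
    """Efficiencies of an FFR stack; waves in units of lam0, angs in radians."""
    k = 2*np.pi*lam0/lam
    half = np.radians(22.5)
    O = []
    for s1 in (-1, 1):
        for s2 in (-1, 1):
            m = (ret(waves[2]*k, angs[2])
                 @ ret(waves[1]*k, angs[1] + half*s2)
                 @ ret(waves[0]*k, angs[0] + half*s1))
            O.append(0.5*(m[0] + m[1]))       # horizontal analyzer
    O = np.array(O)
    O = O / O[:, :1]
    D = np.linalg.pinv(O)
    return 1.0/np.sqrt(4*np.sum(D**2, axis=1))

rng = np.random.default_rng(1)
waves0 = np.array([0.490, 0.248, 0.228])      # Table 1 FFR design
angs0 = np.radians([0.0, 112.9, 108.8])
lam = 700.0
samples = []
for _ in range(1000):
    # vendor retardance tolerances (50 nm FLCs, 2 nm retarder, converted to
    # waves at 665 nm) and a 1 deg mounting error on each element
    dw = rng.uniform(-1, 1, 3) * np.array([50.0, 50.0, 2.0]) / 665.0
    da = np.radians(rng.uniform(-1, 1, 3))
    samples.append(effs(waves0 + dw, angs0 + da, lam))
samples = np.array(samples)
print("eps_V spread at %.0f nm: %.3f to %.3f"
      % (lam, samples[:, 3].min(), samples[:, 3].max()))
```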
The resulting expected modulator performance is shown in Figs.~\ref{fig:FRFR} and~\ref{fig:FFR}.
An even better result can be achieved by re-optimizing the retardances of the fixed retarders in addition to the orientations after the as-built \edit1{FLC} retardances are known.
This was not pursued for the CRISP modulator due to time constraints, and because the design is shown to be tolerant to expected manufacturing errors.
Figures~\ref{fig:FRFR} and~\ref{fig:FFR} show that both designs are well-behaved.
The nominal FRFR design exhibits better overall performance than the FFR design, which is not surprising in view of its higher number of degrees of freedom.
The tolerance analysis shows that the FRFR design is considerably less resistant to manufacturing errors than the FFR design, particularly in $\epsilon_V$ between 500 and 800~nm.
We show this design here to demonstrate the importance of performing a tolerance analysis.
It is possible to find other FRFR designs that have slightly worse performance, but are more robust against manufacturing errors.
However, the FFR design performs very well over this wavelength range and has the benefit of one less component, and thus results in a thinner stack of optics with fewer interfaces.
In our case, we select the FFR design primarily because the modulator must fit in a tight space in the existing CRISP optical setup.
\narrowfig{f1}{fig:FRFR}{Theoretical $I$, $Q$, $U$, and $V$ efficiencies with tolerances for the FRFR design.
Solid curves: design performance of the modulator as a function of wavelength.
Horizontal dotted lines: theoretical efficiencies for a perfectly balanced and optimally efficient modulator.
Vertical dashed lines: lower and upper bound of the design wavelength range.
The grayscale background shows the expected spread of performance as a result of component and construction tolerances.}
\narrowfig{f2}{fig:FFR}{Theoretical $I$, $Q$, $U$, and $V$ efficiencies with tolerances for the FFR design in the same format as Fig.~\ref{fig:FRFR}}
There is considerable freedom to pick a reference wavelength.
Our experience has shown that a wavelength at or slightly below the middle of the operational range is a good choice for practical reasons.
Here, we picked $665~\mathrm{nm}$, also because the program chooses to use \edit1{FLCs} with retardances that are equal to $\lambda/2$ and $\lambda/4$ at that wavelength within the margin of error.
We fix these components at those values \edit1{out of convenience} and optimize the fixed retarder and component orientations.
We find that the orientation of the 2nd \edit1{FLC} does not change.
The fixed retarder value and orientation change slightly to $0.225\lambda$ and $109.1~\mathrm{degrees}$.
\edit1{FLCs with these specifications were procured from Citizen Finetech Miyota.
The FLC used in these devices is MX8068.
A polycarbonate retarder was procured from Meadowlark Optics.}
\section{Thermal Analysis}
The switching angle of \edit1{FLCs} is somewhat sensitive to temperature.
\cite{2003SPIE.4843...45G} measured it as a function of temperature and found a mostly linear relationship with a coefficient of $-0.4\degr/\mathrm{K}$.
\edit1{This coefficient is specific to the FLC.
\cite{Gisler2005} shows an example measurement with a coefficient of $-0.41\degr/\mathrm{K}$.
For this analysis we use the larger, more conservative value.}
We evaluate the effect of temperature change of the modulator following a procedure similar to \cite{2013SoPh..283..601L}.
\edit1{We follow the notation of \cite{2000ApOpt..39.1637D} and refer the reader to that work for a rigorous mathematical treatment of polarimetric measurements.}
The Stokes vector $\mathbf{S}$ is modulated into a vector of intensities $\mathbf{I}$.
The modulation can be described by a modulation matrix $\mathbf{O}$,
\begin{equation}
\mathbf{I} = \mathbf{O}\,\mathbf{S}.
\end{equation}
A demodulation matrix $\mathbf{D}$ is used to recover the Stokes vector,
\begin{equation}
\mathbf{S} = \mathbf{D}\,\mathbf{I}.
\end{equation}
A difference in temperature of the modulator during observations and calibrations will result in a mismatch of the modulation and demodulation matrices.
We denote with \edit1{$\mathbf{O}'$ and $\mathbf{D}'$ the modulation and demodulation matrices} derived from the calibration, and with $\mathbf{S}'$ the inferred Stokes vector,
\begin{equation}
\mathbf{S}' = \mathbf{D}'\,\mathbf{I}.
\end{equation}
We can then relate the inferred Stokes vector $\mathbf{S}'$ and the real Stokes vector $\mathbf{S}$ through an error matrix,
\begin{equation}
\mathbf{S} = \mathbf{X}\,\mathbf{S}'.
\end{equation}
It is easy to see that we now have
\begin{equation}
\mathbf{X}\,\mathbf{D}' = \mathbf{D},
\end{equation}
and using $\mathbf{D}'\,\mathbf{O}' = \mathbf{I}_4$ we find
\begin{equation}
\mathbf{X} = \mathbf{D}\,\mathbf{O}'.
\end{equation}
\edit1{For our error analysis,} we can calculate the modulation matrix $\mathbf{O}'$ from the unperturbed design, and demodulation matrices $\mathbf{D}$ for several switching angles to determine the permissible change in temperature.
Limits must be imposed on every element of the matrix $\mathbf{X}$.
The diagonal elements represent a scale error that is much less sensitive than crosstalk errors.
The scaling on $I$ is unconstrained after normalizing $\mathbf{S}$ by $I$.
Furthermore, the elements in the $Q$ and $U$ columns can be scaled by the maximum expected linear polarization signal, and those in the $V$ column can be scaled by the maximum expected circular polarization signal.
We follow \cite{2008SoPh..249..233I} and adopt maxima of $e=0.001$ \edit1{of $I$} for the crosstalk error \edit1{between Stokes $Q$, $U$, and $V$}, $a=0.05$ for scale error, $p_\mathrm{l}=15\%$ \edit1{of $I$} for linear polarization, and $p_\mathrm{v}=20\%$ \edit1{of $I$} for circular polarization.
We then find
\begin{equation}\label{eq:xtol}
|\mathbf{X}-\mathbf{I}_4|\le\begin{pmatrix}
&0.333&0.333&0.250\\
0.001&0.050&0.007&0.005\\
0.001&0.007&0.050&0.005\\
0.001&0.007&0.007&0.050
\end{pmatrix},
\end{equation}
\edit1{where, e.g., the second and third element of the top row are given by the ratio $a/p_\mathrm{l}$, and the second and third element of the last column are given by the ratio $e/p_\mathrm{v}$.}
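The error-matrix calculation can be illustrated with the same simplified modulator model used in our sketches above (ours, not the analysis code behind Fig.~\ref{fig:xmatrix}): both FLC fast-axis half-angles are shifted by half the switching-angle change $-0.4\degr/\mathrm{K}\times\Delta T$, and the error matrix is formed as $\mathbf{X}=\mathbf{D}\,\mathbf{O}'$ with $\mathbf{D}$ the inverse of the drifted (observed) modulation matrix and $\mathbf{O}'$ the nominal (calibrated) one.

```python
import numpy as np

def rot(th):
    c, s = np.cos(2*th), np.sin(2*th)
    return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1.0]])

def ret(delta, th):
    """Ideal linear retarder Mueller matrix (retardance/axis in radians)."""
    c, s = np.cos(delta), np.sin(delta)
    m = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, c, s], [0, 0, -s, c]], float)
    return rot(th) @ m @ rot(-th)

def modmat(half_angle, lam, lam0=665.0):
    """Modulation matrix of the Table 1 FFR design with FLC fast axes at
    bisector +/- half_angle (deg); crude 1/lambda retardance scaling."""
    k = 2*np.pi*lam0/lam
    O = []
    for s1 in (-1, 1):
        for s2 in (-1, 1):
            m = (ret(0.228*k, np.radians(108.8))
                 @ ret(0.248*k, np.radians(112.9 + half_angle*s2))
                 @ ret(0.490*k, np.radians(half_angle*s1)))
            O.append(0.5*(m[0] + m[1]))       # horizontal analyzer
    O = np.array(O)
    return O / O[:, :1]                       # normalize the intensity column

dT = 0.5                                      # temperature change, K
dswitch = -0.4*dT                             # switching-angle change, deg
lam = 500.0
O_cal = modmat(22.5, lam)                     # calibration (nominal switching)
O_obs = modmat(22.5 + dswitch/2, lam)         # observation (drifted)
X = np.linalg.inv(O_obs) @ O_cal              # X = D O' with D = inv(O_obs)
err = np.abs(X - np.eye(4))
print(np.round(err, 4))
```

As noted in the caption of Fig.~\ref{fig:xmatrix}, the first column of $\mathbf{X}-\mathbf{I}_4$ vanishes identically, which this sketch reproduces numerically.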
\narrowfig{f3}{fig:xmatrix}{The $|\mathbf{X}-\mathbf{I}_4|$ matrix elements for the error introduced by $0.5~\mathrm{K}$ change in temperature assuming a $-0.4\degr/\mathrm{K}$ coefficient for the switching angle of the \edit1{FLCs}.
The first column of the matrix is omitted because it is identically $0$.
Gray areas are outside the limits given in Eq.~\ref{eq:xtol}.}
Figure~\ref{fig:xmatrix} shows the $\mathbf{X}-\mathbf{I}_4$ matrix elements for the error introduced by the change in switching angle for a $0.5~\mathrm{K}$ change in temperature.
The $Q$-to-$U$ term is the worst offender and is just below the limit at $500~\mathrm{nm}$.
There are other contributors to $\mathbf{X}$ than changes in switching angle with temperature.
E.g., the retardances of the \edit1{FLCs} and the retarder also have a small temperature dependence.
The polarimetric calibration procedure also has a finite accuracy \citep{2008A&A...489..429V}.
We do not explicitly model these effects here, since the switching angle is expected to be the dominant source of error.
\edit1{For example, polycarbonate retarders have typical temperature coefficients around $0.4~\mathrm{pm}/\mathrm{nm}/\mathrm{K}$, and a $5~\mathrm{K}$ change in temperature is required to exceed the permissible error.}
Instead we assign a fraction of the permissible error to changes in switching angle and set the requirement for thermal stability to $\pm0.2~\mathrm{K}$.
\section{Opto-Mechanical Design}
Figure~\ref{fig:crosssection} shows a cross-section of the modulator.
The mechanical design borrows heavily from the HAO Lyot filter designs used in the CoMP, CoMP-S, SCD, and ChroMag instruments.
The modulator optics are glued with an RTV silicone into mounts that allow the optic to be oriented to any angle.
The mounts consist of two parts.
The inner part is round and holds the optic.
It can be oriented to the desired angle and glued to the hexagonal outer part that is indexed to the inner mount assembly.
The optics stack is assembled between parallel windows using index-matching gel.
The windows rest on O-rings in their mounts.
The entrance window mount is spring-loaded against the inner mount assembly with 4.4~N.
\narrowfig{f4}{fig:crosssection}{A cross-section view of the modulator with major components labeled.
\edit1{Optical components are shown in dark blue:}
1.~entrance window;
2.~\edit1{FLC} 1;
3.~\edit1{FLC} 2;
4.~fixed retarder; and
5.~exit window.
\edit1{The mechanical assembly is color-coded by its major components:}
6.~pressure plate \edit1{(dark green)};
7.~oven \edit1{(gray)};
8.~optic holders \edit1{(yellow)};
9.~inner mount assembly \edit1{(light green)}; and
10.~Delrin shell \edit1{(light blue)}.
See the text for details on the mechanical design.}
The inner mount assembly is inserted in an oven consisting of an aluminum tube with a silicone rubber heater element and aerogel insulation wrapped around it.
An off-the-shelf precision temperature controller is used to stabilize the oven to $35.0\degr\textrm{C}$ to better than $0.1\degr\textrm{C}$.
The modulator is encased in a Delrin housing.
Electrical connections for the \edit1{FLCs} and the heater system are routed to two D-subminiature connectors on the housing.
A custom controller based on an Arduino Uno microcontroller board was built to drive the \edit1{FLCs}.
The camera software sends a voltage sequence to the controller via a serial interface, which is preloaded into two Burr-Brown DAC714 digital-to-analog converters (DACs).
A synchronization pulse is then used to update the voltages when the chopper that controls the exposure of the cameras is closed.
The \edit1{FLCs} are primarily capacitive loads, with capacitance of about $80~\mathrm{nF}$.
The DACs are capable of driving $5~\mathrm{mA}$, which is more than adequate to drive the \edit1{FLCs} between states in under a millisecond.
The controller also resets the voltage to zero after a few seconds of inactivity as a safety feature because the \edit1{FLCs} may be damaged if driven by a constant voltage for a prolonged period of time.
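A back-of-envelope slew-time check supports this (our arithmetic; the full transition is taken as $16~\mathrm{V}$, i.e. the $\pm8~\mathrm{V}$ drive levels quoted with the LSPM measurements in Sect.~\ref{sec:performance}):

```python
# Slew-time check for the FLC drive: a DAC sourcing I = 5 mA into a
# C = 80 nF capacitive load ramps through a full 16 V transition in
# t = C * dV / I.
C = 80e-9   # F
I = 5e-3    # A
dV = 16.0   # V, -8 V to +8 V
t = C * dV / I
print("slew time: %.0f us" % (t * 1e6))  # 256 us, comfortably under 1 ms
```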
\section{Performance}\label{sec:performance}
The components of the modulator must be accurately aligned to ensure proper functioning of the assembled device.
HAO has a facility Lab Spectropolarimeter (LSPM) test setup for polarimetric characterization of optics that was used to test components of the CRISP modulator after they were mounted.
\narrowfig{f5}{fig:LSPM_tests}{Solid lines: retardances (orange) and fast axis positions (blue) of the $\frac12$-wave \edit1{FLC} (top panel), the $\frac14$-wave \edit1{FLC} (middle panel), and the retarder (bottom panel), as determined by a fit to the Mueller matrix inferred from LSPM measurements at room temperature.
The panels for the \edit1{FLCs} show two fast axis positions for $\pm8~\mathrm{V}$ drive voltages.
Dashed lines: fast axis bisector.
Dotted lines: design retardances and fast axis positions.
In these measurements the signal level drops below acceptable levels around $550~\mathrm{nm}$.}
The LSPM consists of relay optics that feed light from a halogen bulb through, in order, a calibration package that consists of a polarizer and a retarder in individual rotation stages, the sample under test, and a polychromatic polarization modulator and analyzer, into an Ocean Optics USB4000 fiber-fed spectrograph.
This setup allows for characterization of the full Mueller matrix of the sample as a function of wavelength.
The spectrograph covers the wavelength range from about $450~\textrm{nm}$ to about $1100~\textrm{nm}$, though signal levels are low under $550~\textrm{nm}$.
We solve for retardance and fast axis position of a linear retarder that matches the Mueller matrices derived from LSPM measurements as a function of wavelength.
Figure~\ref{fig:LSPM_tests} shows the results for the three CRISP modulator components.
The figure also shows the design retardances and fast axis positions.
The $\lambda/2$ and $\lambda/4$ \edit1{FLCs} have measured retardances of $0.47\lambda$ and $0.24\lambda$ at $665~\mathrm{nm}$, \edit1{and mean switching angles of $48.2\degr$ and $46.7\degr$.}
The fixed retarder is measured at $0.251\lambda$.
\edit1{All three components show a curious increase in the fast axis position at the shortest wavelengths.
However, the signal level is low, and it may be that the effect is the result of systematic errors in the measurement.}
As discussed in Sect.~\ref{sec:design}, we can re-optimize the design with these values.
However, we made our measurements at room temperature.
The measurement should have been performed at the operating temperature of $35\degr\mathrm{C}$ because component retardance has some temperature dependence.
Using the retardance values at room temperature we find that the fast axis angles for the 2nd \edit1{FLC} and the fixed retarder are $113.5$ and $108.5~\mathrm{degrees}$.
However, if we assume the retarder will have its design retardance at $35\degr\mathrm{C}$, the fast axis positions revert to the nominal design.
\edit1{The FLC switching angles are larger than the nominal $45\degr$ design, but also expected to decrease at a higher operating temperature, possibly to angles below $45\degr$ \citep{2003SPIE.4843...45G,Gisler2005}.}
We choose not to change the modulator design because of the unknown effect of temperature on the component retardance \edit1{and FLC switching angle,} and because only marginal improvement of performance is expected.
The modulator was first assembled with air gaps, and once the proper relative alignment of the components was confirmed using the LSPM, the modulator was assembled in its housing using Nye OCF-452 optical coupling fluid on the glass interfaces.
The purpose of the coupling fluid is to reduce internal Fresnel reflections between the surfaces of the optics.
Nye OCF-452 was used because it has a refractive index that is well-matched to the Corning XG glass of the \edit1{FLCs} and the BK7 glass of the retarder and windows.
Internal Fresnel reflections at the optical interfaces of the components are limited to below the $2\times10^{-5}$ level.
\narrowfig{f6}{fig:eff_wet}{\edit1{Modulation efficiencies of the modulator.
The top four panels show the efficiency in Stokes $I$, $Q$, $U$, and $V$.
The bottom panel shows the RSS of the efficiencies in Stokes $Q$, $U$, and $V$.}
Solid black lines: modulation efficiencies of the assembled modulator measured with the LSPM.
\edit1{Solid green lines: design efficiencies.}
Solid orange lines: modulation efficiencies expected from the measurements in Fig.~\ref{fig:LSPM_tests}.
Blue crosses and diamonds: modulation efficiencies measured at the telescope in the transmitted and reflected beams, resp.
The horizontal dotted line in each panel shows the theoretical maximum efficiency for a balanced modulation scheme.}
The fully assembled modulator was brought to operating temperature and tested again on the LSPM.
The LSPM produces measurements of the Mueller matrix of the modulator in each of its 4 states.
We simulate a perfect analyzer in $Q$ to calculate modulation efficiencies, shown as a function of wavelength in Fig.~\ref{fig:eff_wet}.
The measured efficiencies largely show the expected behavior when compared to the design \edit1{shown in solid green curves in Fig.~\ref{fig:eff_wet}}.
However, differences in the model and measured efficiencies cannot be fully attributed to as-built retardances and component alignment.
The differences are likely due to several factors that were not included in the tolerance analysis.
The model assumes that the components are perfect retarders with a known dispersion of birefringence, and that the \edit1{FLCs} have an exact $45\degr$ switching angle.
In reality, the components have imperfections such as chromatic variation of the fast axis, and the actual dispersion of birefringence is different from the model.
This can be seen in Fig.~\ref{fig:LSPM_tests}.
The solid blue lines are not horizontal, and the solid and dotted orange lines are not parallel.
The \edit1{FLC} switching angle is also not exactly $45\degr$.
Lastly, bulk rotational alignment of the modulator to the analyzer was not included in the analysis.
\edit1{The efficiencies of some} modulator designs, in particular the traditional rotating retarder, are invariant under rotation of the modulator with respect to the analyzer.
This design is not invariant, and rotation of the modulator results in depressed efficiencies.
Figure~\ref{fig:eff_wet} also shows the modulation efficiencies computed from the Mueller matrices of the measured components \edit1{in orange}.
They show good agreement with the measured efficiencies of the assembled modulator.
We attribute the differences mostly to the components not being measured at operating temperature.
There are also likely small differences in the relative orientation of the components in the assembled modulator compared to the individual measurements.
The CRISP instrument is intended for high-resolution imaging.
The modulator, therefore, must have low transmitted wavefront distortion (TWD).
Because the internal optics are coupled using an index-matching gel, the TWD is dominated by the entrance and exit windows.
Fortunately, excellent quality windows are inexpensive and commonly available.
The TWD of the assembled modulator was measured using a Zygo interferometer.
It was found to be $0.20~\mathrm{waves}$ at $632~\mathrm{nm}$ RMS over the clear aperture after removal of the tilt component, but including $0.14~\mathrm{waves}$ of power that introduces primarily a shift in focus position.
\begin{table}[tbp]
\caption{\label{tab:efficiencies}Modulation efficiencies measured at the telescope averaged over the field of view.}
\centering
\begin{tabular}{l*{5}{>{$}c<{$}}}
\hline
Wavelength & \epsilon_I & \epsilon_Q & \epsilon_U & \epsilon_V & \epsilon_{QUV}\\ \hline\hline
\multicolumn{6}{c}{Transmitted} \\\hline
$517.3~\mathrm{nm}$ & 0.992&0.500&0.588&0.560&0.954\\
$589.6~\mathrm{nm}$ & 0.996&0.573&0.591&0.539&0.984\\
$617.3~\mathrm{nm}$ & 0.991&0.506&0.608&0.554&0.966\\
$630.2~\mathrm{nm}$ & 0.993&0.576&0.579&0.536&0.977\\
$854.2~\mathrm{nm}$ & 0.933&0.570&0.504&0.501&0.911\\\hline\hline
\multicolumn{6}{c}{Reflected} \\\hline
$517.3~\mathrm{nm}$ & 0.988&0.485&0.572&0.543&0.926\\
$589.6~\mathrm{nm}$ & 0.997&0.557&0.572&0.524&0.955\\
$617.3~\mathrm{nm}$ & 0.986&0.506&0.607&0.549&0.962\\
$630.2~\mathrm{nm}$ & 0.994&0.576&0.579&0.530&0.974\\
$854.2~\mathrm{nm}$ & 0.951&0.563&0.497&0.493&0.898\\
\end{tabular}
\end{table}
The modulator was installed at the SST in October 2014.
\edit1{CRISP uses a polarizing beamsplitter to analyze the polarization signal in two orthogonal directions simultaneously \citep{2015A&A...573A..40D}.
This kind of setup is known as a dual-beam polarimeter and allows for the removal of crosstalk resulting from atmospheric seeing from Stokes $I$ to Stokes $Q$, $U$, and $V$ \citep{2012ApJ...757...45C}.}
\edit1{Measured modulation efficiency averaged over the field of view for the transmitted and reflected beams at five wavelengths commonly used for solar polarimetry are given in Table~\ref{tab:efficiencies} and also shown in Fig.~\ref{fig:eff_wet} as blue crosses and diamonds.}
\edit1{It is not possible to directly compare the efficiencies in the individual Stokes parameters.}
The telescope measurements include all the optics on the tables, which include a number of mirrors and lenses, a dichroic beamsplitter, the CRISP prefilter, a gray beamsplitter, and the CRISP etalons.
These elements cannot be separated from the modulator.
The calibration procedure fits all optics on the table between the calibration optics and the polarization analyzer as one modulation matrix \citep{2008A&A...489..429V}.
In effect, all optical elements between the calibration optics and the polarimetric analyzer together act as the modulator.
\edit1{There are oblique reflections that may cause some mixing of all Stokes parameters due to retardance of the mirror coatings, and the reference frame of polarization of these measurements is not normal to the optical table.
It is still valuable to compare the performance in the telescope to the design performance and lab measurements, but this can only be done in an aggregate way, i.e., by comparing the RSS of the efficiencies in Stokes $Q$, $U$, and $V$,
\begin{equation}
\epsilon_{QUV} = \sqrt{\epsilon_Q^2+\epsilon_U^2+\epsilon_V^2}.
\end{equation}
The overall agreement of the performance in the telescope setup and the assembled modulator is very good.
The largest difference in the RSS of the efficiencies in $Q$, $U$, and $V$ is $3.7\%$ in the reflected beam at $589.6~\mathrm{nm}$.}
The transmitted and reflected beams show very similar behavior.
\edit1{The differences in efficiencies between the beams are on the order of a few percent, and may result from differences in the contrast of the polarizing beamsplitter in the transmitted and reflected beams, or from polarizing components in the telescope such as the wide-band beamsplitter that has a highly uneven ratio of the transmitted and reflected light.}
The overall performance is excellent with the lowest efficiencies only slightly below $50\%$ (cf.~the optimum and balanced efficiency of $57.7\%$).
\edit1{The RSS of the efficiencies in Stokes $Q$, $U$, and $V$ are $94\%$, $97\%$, $96\%$, $98\%$, and $90\%$ for these 5 wavelengths.}
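As a quick consistency check (not part of the calibration pipeline), the $\epsilon_{QUV}$ values quoted above follow directly from the per-Stokes efficiencies in Table~\ref{tab:efficiencies}; the sketch below simply transcribes the transmitted-beam rows and applies the RSS definition:

```python
import math

# Transmitted-beam (eps_Q, eps_U, eps_V) values transcribed from Table 1.
transmitted = {
    "517.3 nm": (0.500, 0.588, 0.560),
    "589.6 nm": (0.573, 0.591, 0.539),
    "617.3 nm": (0.506, 0.608, 0.554),
    "630.2 nm": (0.576, 0.579, 0.536),
    "854.2 nm": (0.570, 0.504, 0.501),
}

def eps_quv(eq, eu, ev):
    """Root-sum-square modulation efficiency in Q, U, and V."""
    return math.sqrt(eq**2 + eu**2 + ev**2)

for wl, (eq, eu, ev) in transmitted.items():
    print(wl, round(eps_quv(eq, eu, ev), 3))
# Reproduces the eps_QUV column: 0.954, 0.984, 0.966, 0.977, 0.911.
```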
\section{Conclusion}\label{sec:conclusion}
The trade-offs and procedures described in this paper were employed to design the polarimetric modulator for the CRISP instrument, but can be applied to the design of modulators for other instruments.
We chose to omit some steps that could result in somewhat improved performance of the modulator.
If schedule permits, it is possible to incrementally optimize the design with measured optic properties.
We could have delayed the purchase of the retarder until the \edit1{FLCs}, which have the largest errors, had been characterized at operating temperature, so that the value of the retarder could have been optimized for the as-built \edit1{FLCs}.
While this design with only three components is robust, such incremental re-optimization may be necessary to guarantee acceptable efficiencies for modulator designs with more optical elements that cover larger wavelength ranges \citep{2012SPIE.8446E..25S}.
We did not specifically consider polarized spectral ``fringes'' in our design process.
A description of polarized spectral fringes can be found in reviews by \cite{1991sopo.work..166L}, \cite{2003A&A...401....1S}, and \cite{2004JOptA...6.1036C}.
They are interference patterns that are produced by reflections between parallel surfaces in a system with polarization optics, such as the components of the modulator, that are difficult to characterize \citep{2017JATIS...3d8001H} and remove \citep{2006ApJ...649..553R,2012ApJ...756..194C,2019ApJ...872..173C}.
\cite{2015SPIE.9613E..0GS} optimized components to suppress polarized fringes for their application by ensuring that the periods of the fringes are much smaller than the spectral resolution of their instrument.
\edit1{This approach cannot be applied in modulators using FLCs (or LCVRs) because the FLC layer thickness that determines the period of the fringes is set by the required retardance.
Because the FLC layer is very thin, fringes caused by reflections at the FLC-glass interfaces have periods of ten or more nanometers \citep{2003SPIE.4843...45G}, i.e., much larger than the CRISP bandpass of less than ten picometers \citep{2008ApJ...689L..69S}.
These fringes will consequently be relatively stable over the CRISP bandpass and therefore will be nearly completely removed in the polarimetric calibration.
Fringes caused by reflections between surfaces at larger optical distance have smaller periods and can be a problem.
For those,} the only available option \edit1{is to reduce the amplitude of fringes by reducing the amplitude of} Fresnel reflections from the interfaces of the optical elements.
The use of optical coupling fluid is therefore not only required to address etaloning, but also to suppress these fringes.
\widefig{f7}{fig:exdata}{Example spectro-polarimetric data of AR12471 in the \ion{Fe}{1} lines at $630.2~\mathrm{nm}$.
The observation was recorded on 2019-05-10 around 09:10~UT.
Top row: images of the Stokes parameter in the blue wing of the $630.25~\mathrm{nm}$ line at $-90~\mathrm{mA}$ from line center.
Spectra at the black, red, and blue crosses are shown in the bottom row.
The locations of the spectral sampling are indicated by crosses.}
The polarimetric modulator described here has been in use for science observations at the SST starting with the 2015 observing season.
Example data of a sunspot are shown in Fig.~\ref{fig:exdata}.
The data reduction procedures are described in detail by \cite{2015A&A...573A..40D} and \cite{2018arXiv180403030L}.
These data can be fit using forward-modeling procedures to derive quantitative measures of atmospheric parameters, most notably the strength and direction of magnetic field.
For example, \cite{2018ApJ...860...10K} studied the structure and evolution of temperature and magnetic field in a flaring active region using full-Stokes CRISP observations in the \ion{Ca}{2} line at $854.2~\mathrm{nm}$, \cite{2019A&A...627A.101V} used similar data in combination with data from the IRIS mission \citep{2014SoPh..289.2733D} to study Ellerman bombs and UV bursts, \cite{2020arXiv200901537V} inferred the photopheric and chromospheric magnetic field vector in a flare target and studied their differences, \cite{2019A&A...621A..35L} used CRISP observations in the \ion{He}{1} $\mathrm{D}_3$ line in a study of a flare, \cite{2020arXiv200614487M} and \cite{2020arXiv200614486P} studied chromospheric magnetic fields in plage targets and estimated a canopy mean field strength of $400~\mathrm{G}$ in the chromosphere, and \cite{2020A&A...641L...5J} studied very small-scale reconnection in the solar photosphere using CRISP polarimetry and CHROMIS \citep{2019A&A...626A..55S} observations in H$\beta$.
Figure~\ref{fig:qs} shows a region of quiet sun with the line-of-sight component of the magnetic field inferred from full-Stokes observations of the \ion{Fe}{1} $617.3~\mathrm{nm}$ line profile using a spatially-regularized Milne-Eddington inversion method \citep{2019A&A...631A.153D}.
This example highlights the power of CRISP combined with this modulator.
Quiet-sun magnetic fields are weak and difficult to detect.
Telescopes and instruments that achieve high spatial resolution, have adequate spectral resolving power, and have high system efficiency are required to study them.
We refer the interested reader to \cite{2019LRSP...16....1B} for a comprehensive review of observations of quiet-sun magnetic field.
\narrowfig{f8}{fig:qs}{Intensity and line-of-sight magnetic field strength for a quiet sun region observed on 2020-07-14 around 08:41~UT during a period of very good seeing conditions.
The magnetic field strength was inferred from observations of the \ion{Fe}{1} $617.3~\mathrm{nm}$ line and is scaled between $-25$ and $25~\mathrm{G}$.}
The high throughput and efficiency of CRISP with this modulator also enables observations in many lines with polarimetry while maintaining sufficient cadence for studies of dynamic events.
Such multi-line observations were used by \cite{2018A&A...612A..28L} in a study of chromospheric heating in an emerging flux region.
They used the STiC code \citep{2019A&A...623A..74D} to simultaneously interpret the signals from several lines.
\cite{2019ApJ...870...88E} used the same code in a similar way to study penumbral microjets.
\acknowledgments{We acknowledge R.~Casini for the development of the codes used for optimization and tolerancing of the modulator designs.
This material is based upon work supported by the National Center for Atmospheric Research, which is a major facility sponsored by the National Science Foundation under Cooperative Agreement No. 1852977.
CRISP and the modulator were funded by the Marianne and Marcus Wallenberg Foundation.
This research has made use of NASA's Astrophysics Data System,
NumPy \citep{2011CSE....13b..22V},
matplotlib, a Python library for publication quality graphics \citep{2007CSE.....9...90H},
Astropy, a community-developed core Python package for Astronomy \citep{2018AJ....156..123A, 2013A&A...558A..33A},
and the IPython package \citep{2007CSE.....9c..21P}.
The acknowledgements were compiled using the Astronomy Acknowledgement Generator.}
\section{Introduction}
In the pioneering works by A.A.~Andronov and E.A.~Leontovich \cite{AL1,AL2}
all main bifurcations of stable periodic orbits of dynamical systems in a plane had been studied: the emergence
of a limit cycle from a weak focus,
the saddle-node bifurcation through a merger of a stable limit cycle with an unstable one and their consecutive
annihilation, the birth of a limit cycle from a separatrix loop to a saddle, as well as from a separatrix loop
to a saddle-node equilibrium. Later, in the 50-60s these bifurcations were generalized for the multi-dimensional case,
along with two additional bifurcations: period doubling and the birth of a two-dimensional torus. Apart from that,
in \cite{lp1,lp2} L.~Shilnikov had studied the main bifurcations of saddle periodic orbits out of homoclinic loops
to a saddle and discovered a novel bifurcation of homoclinic loops to a saddle-saddle\footnote{an equilibrium state, alternatively called a Shilnikov saddle-node,
due to a merger of two saddles of different topological types}.
Nevertheless, an open problem still remained: could there be other types of codimension-one bifurcations
of periodic orbits? Clearly, the emphasis was put on bifurcations of {\em stable} periodic orbits, as only
they generate robust self-sustained periodic oscillations, the original paradigm of
nonlinear dynamics. One can pose the problem as follows:\\ {\em
In a one-parameter family $X_{\mu}$ of systems of differential equations,
can both the period and the length of a structurally stable periodic orbit ${\cal L}_\mu$
tend to infinity as the parameter $\mu$ approaches some bifurcation value, say $\mu_0=0$?} \\
Here, structural stability means that none of the multipliers of the periodic orbit ${\cal L}_\mu$ crosses the unit circle, i.e. ${\cal L}_\mu$
does not bifurcate at $\mu\neq\mu_0$. Of particular interest is the case where ${\cal L}_\mu$ is stable, i.e.
all the multipliers are strictly inside the unit circle.
A similar formulation was given by J.~Palis and Ch.~Pugh \cite{PP} (the notable Problem~37); however, the structural stability
requirement was missing there. Exemplary bifurcations of a periodic orbit whose period becomes arbitrarily large
while the length remains finite as the bifurcation moment is approached are
a homoclinic bifurcation of a saddle with a negative saddle value and that
of a saddle-node \cite{lp0,book2}. These were well-known at the time, so in \cite{PP} an additional condition
was imposed, in order to ensure that the sought bifurcation is really of a new type: the periodic orbit ${\cal L}_\mu$ must stay
away from any equilibrium states (this would immediately imply that the length of the orbit grows to infinity
in proportion to the period). As R.~Abraham put it, the periodic orbit must ``disappear in the blue-sky'' \cite{Ab}.
In fact, a positive answer to ``Problem 37'' could be found in an earlier paper \cite{F}. In explicit form, a solution
was proposed by V.~Medvedev \cite{Me}. He constructed examples of flows on a torus and a Klein bottle with
stable limit cycles whose lengths and periods tend to infinity as $\mu\to\mu_0$, while at $\mu=\mu_0$
both the periodic orbits disappear and new, structurally unstable saddle-node periodic orbits appear
(at least two of them, if the flow is on a torus). The third example of \cite{Me} was a flow on a 3-dimensional torus
all of whose orbits are periodic and degenerate; for the limit system the torus is foliated by two-dimensional invariant tori.
Medvedev's examples are not of codimension-1: this is obvious for the torus case that requires at least two saddle-nodes, i.e.
$X_{\mu_0}$ is of codimension 2 at least. In the case of the Klein bottle, one may show \cite{book2,AfS,TSh3,Li,Il}
that for a generic perturbation of the Medvedev family the periodic orbits existing at $\mu\neq\mu_0$
will not remain stable for all $\mu$ as they undergo an infinite sequence of
forward and backward period-doubling bifurcations (this is a typical behavior of fixed points of a non-orientable
diffeomorphism of a circle).
A blue-sky catastrophe of codimension 1 was found only in 1995 by L.~Shilnikov and D.~Turaev \cite{TSh3,TSh1,TSh2,ShT}.
The solution was based on the study of bifurcations of a saddle-node periodic orbit whose entire unstable manifold
is homoclinic to it. The study of this bifurcation was initiated by V.~Afraimovich and L.~Shilnikov \cite{AfS,AfS1,AfS2,AfS3}
for the case where the unstable manifold of the saddle-node is a torus or a Klein bottle (see Fig.~\ref{fig1}).
As soon as the saddle-node disappears, the Klein bottle may persist, or it may break down to cause chaotic
dynamics in the system \cite{AfS4,NPT,TSh,Sync}. In these works, most of the attention was paid to the torus case,
as its breakdown provides a geometrical model of the quasiperiodicity-toward-chaos transition encountered
universally in Nonlinear Dynamics, including the onset of turbulence \cite{Sh00}.
\begin{figure}[htb!]
\begin{center}
\includegraphics[width=0.8\textwidth]{fig1.jpg}
\end{center}
\caption{Two cases of the unstable manifold $W^u_L$ homoclinic to the saddle-node periodic orbit $L$:
a 2D torus (A) or a Klein bottle (B).}
\label{fig1}
\end{figure}
In the hunt for the blue sky catastrophe, other distinct configurations of the unstable manifold of the saddle-node were suggested
in \cite{TSh1}. In particular, it was shown that in the phase space of dimension 3 and higher
the homoclinic trajectories may spiral back onto the saddle-node orbit in the way shown in Fig.~\ref{fig2}.
If we have a one-parameter family $X_\mu$ of systems of differential equations
with a saddle-node periodic orbit at $\mu=\mu_0$ which possesses this special kind of the homoclinic unstable
manifold and satisfy certain additional conditions, then as the saddle-node disappears the inheriting attractor
consists of a single stable periodic orbit ${\cal L}_\mu$ which
undergoes no bifurcation as $\mu\to\mu_0$ while its length tends to infinity. Its topological limit, $M_0$, is
the entire unstable manifold of the saddle-node periodic orbit.
\begin{figure}[htb!]
\begin{center}
\includegraphics[height=0.45\textheight]{fig2.jpg}
\end{center}
\caption{Original construction of the blue sky catastrophe from \cite{TSh1}.}
\label{fig2}
\end{figure}
The conditions found in \cite{TSh1} for the behavior of the homoclinic orbits ensuring the blue-sky catastrophe are open,
i.e. a small perturbation of the one-parameter family $X_\mu$ does not destroy the construction. This implies
that such a blue-sky catastrophe occurs any time a family of systems of differential equations crosses
the corresponding codimension-1 surface in the Banach space of smooth dynamical systems. This surface constitutes
a stability boundary for periodic orbits. This boundary is drastically new compared to those
known since the 30-60s and has no analogues in planar systems. There are reasons to conjecture that this
type of the blue-sky catastrophe closes the list of main stability boundaries for periodic orbits (i.e. any new stability
boundary will be of codimension higher than 1).
In addition, another version of blue-sky catastrophe leading to the birth of a uniformly-hyperbolic strange attractor
(the Smale-Williams solenoid \cite{Sm,W}) was also discovered in \cite{TSh1,TSh2}. This codimension-1 bifurcation
of a saddle-node corresponds to yet another configuration of the homoclinic unstable manifold of the periodic orbit
(the full classification is presented in \cite{book2}). Here, the structurally stable attractor existing all the way
up to $\mu=\mu_0$ does not bifurcate so that the length of each and every (saddle) periodic orbit in it tends to
infinity as $\mu\to\mu_0$.
Initially we believed that the corresponding configuration of the unstable manifold would be too exotic for
the blue-sky catastrophe to occur naturally in a plausible system. Nevertheless, soon after, the first explicit
example of the codimension-1 blue-sky catastrophe
was proposed by N.~Gavrilov and A.~Shilnikov \cite{GSh}, in the form of a family of 3D systems of differential equations
with polynomial right-hand sides. A real breakthrough came when the blue-sky catastrophe turned out to be a typical
phenomenon for slow-fast systems. Namely, in \cite{book2,mmj} we described a number of very general scenarios leading
to the blue-sky catastrophe in such
systems with at least two fast variables; for systems with one fast variable the blue-sky catastrophe was found in \cite{GKR}.
In this way, the blue-sky catastrophe has found numerous applications in mathematical neuroscience, namely, it explains a smooth
and reversible transition between tonic spiking and bursting in exact Hodgkin-Huxley type models of
interneurons \cite{leech1,leech2} and in mathematical models of square-wave bursters \cite{hr}.
The great variability of the burst duration near the blue-sky catastrophe
was shown to be the key mechanism ensuring the diversity of rhythmic patterns generated by small neuron complexes
that control invertebrate locomotion \cite{DG1,DG2,DG3}.
In fact, the term ``blue sky catastrophe" should be naturally treated in a broader way. Namely, we use this term
to embrace a whole class of dynamical phenomena that are all due to the existence of a stable (or, more generally, structurally stable)
periodic orbit, ${\cal L}_\mu$, depending continuously on the parameter $\mu$ so that both the length and the period of ${\cal L}_\mu$ tend
to infinity as the bifurcation parameter value is reached. As far as the topological limit, $M_0$,
of the orbit ${\cal L}_\mu$ is concerned, it may possess a rather degenerate structure; in particular, nothing prohibits $M_0$ from
containing equilibrium states. As such, the periodic regime ${\cal L}_\mu$ could emerge as a composite construction
made transiently of several quasi-stationary states: nearly constant, periodic, quasiperiodic, and even chaotic fragments.
As one of the motivations (which we do not pursue here) one may think of a slow-fast model where the fast
3D dynamics is driven by a periodic motion in a slow subsystem.
\section{Results}
In this paper we focus on an infinitely degenerate case where $M_0$
is comprised of a saddle periodic orbit with a continuum of homoclinic trajectories.
Namely, we consider a one-parameter family of sufficiently smooth systems of differential equations
$X_\mu$ defined in $R^{n+1}$, $n\geq 2$, for which we need to make a number of assumptions as follows.\\
\noindent {\bf (A)} There exists a saddle periodic orbit $L$ (we assume the period equals $2\pi$) with the
multipliers\footnote{the eigenvalues of the linearization of the Poincar\'e map} $\rho_1,\dots,\rho_n$. Let the multipliers satisfy
\begin{equation}\label{rh1}
\max_{i=2,\dots,n-1} |\rho_i|<|\rho_1|<\;1\;<|\rho_n|.
\end{equation}
Once this property is fulfilled at $\mu=0$, it implies that the saddle periodic orbit $L=L_\mu$ exists
for all small $\mu$ and smoothly depends on $\mu$. Condition (\ref{rh1}) also holds for all small $\mu$.
This condition implies that the
stable manifold $W^s_\mu$ is $n$-dimensional\footnote{the intersection of $W^s_\mu$ with any cross-section to $L_\mu$ is $(n-1)$-dimensional}
and the unstable manifold $W^u_\mu$ is two-dimensional.
If the unstable multiplier $\rho_n$ is positive (i.e. $\rho_n>1$), then
the orbit $L_\mu$ divides $W^u_\mu$ into two halves, $W^+_\mu$ and $W^-_\mu$, so
$W^u_\mu=L_\mu\cup W^+_\mu\cup W^-_\mu$. If $\rho_n$ is negative ($\rho_n<-1$), then
$W^u_\mu$ is a M\"obius strip, so $L_\mu$ does not divide $W^u_\mu$; in this case we denote
$W^+_\mu=W^u_\mu\backslash L_\mu$.
Concerning the stable manifold, condition (\ref{rh1}) implies that in $W^s_\mu$ there
exists (at $n\geq 3$) an $(n-1)$-dimensional strong-stable invariant manifold $W^{ss}_\mu$ whose
tangent at the points of $L_\mu$ contains the eigen-directions
corresponding to the multipliers $\rho_2,\dots,\rho_{n-1}$, and the orbits in $W^s_\mu\backslash W^{ss}_\mu$
tend to $L_\mu$ along the direction which corresponds to the leading multiplier $\rho_1$.\\
\noindent {\bf (B)} At $\mu=0$ we have $W^+_0\subset W^s_0\backslash W^{ss}_0$,
i.e. we assume that {\em all} orbits from $W^+_0$ are
homoclinic to $L$. Moreover, as $t\to +\infty$, they tend to $L$ along the leading direction.\\
\noindent {\bf (C)} We assume that the flow near $L$ contracts three-dimensional volumes, i.e.
\begin{equation}\label{contr}
|\rho_1\rho_n| <1.
\end{equation}
This condition is crucial, as the objects that we obtain by bifurcations of the homoclinic surface $W^+_0\cup L$ are meant to
be attractors. Note that this condition is similar to the negativity of the saddle value condition from the theory of
homoclinic loops to a saddle equilibrium \cite{AL1,AL2,lp0}, see (\ref{sadl}).\\
\noindent {\bf (D)} We assume that one can introduce linearizing coordinates near $L$. Namely, a small neighborhood $U$ of $L$
is a solid torus homeomorphic to $S^1\times R^n$, i.e. we can coordinatize it by an angular variable $\theta$
and by normal coordinates $u\in R^n$. Our assumption is that these coordinates are chosen so that
the system in the small neighborhood of $L$ takes the form
\begin{equation}\label{lfr}
\dot u=C(\theta,\mu) u, \qquad \dot \theta=1,
\end{equation}
where $C$ is $2\pi$-periodic in $\theta$. The smooth linearization is not always possible, and our results
can be obtained without this assumption. We, however, will avoid discussing the general case here, in order
to make the construction more transparent.
It is well-known that by a $4\pi$-periodic transformation of the coordinates $u$ system (\ref{lfr}) can be brought to
the time-independent form. Namely, we may write the system as follows
\begin{equation}\label{lcfr}
\begin{array}{l}
\dot x=-\lambda(\mu) x, \qquad \dot y=B(\mu) y,\\
\dot z=\gamma(\mu) z,\\
\dot \theta=1,\end{array}
\end{equation}
where $x\in R^1$, $y\in R^{n-2}$, $z\in R^1$, and $\lambda=-\frac{1}{2\pi}\ln|\rho_1|>0$, $\gamma=\frac{1}{2\pi}\ln|\rho_n|>0$
and, if $n\geq 3$, $B(\mu)$ is an $(n-2)\times(n-2)$-matrix such that
\begin{equation}\label{nev}
\|e^{Bt}\|=o(e^{-\lambda t}) \qquad (t\to+\infty).
\end{equation}
Note also that condition {\bf (C)} implies
\begin{equation}\label{sadl}
\gamma-\lambda<0.
\end{equation}
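For a quick numerical illustration of condition {\bf (C)} and inequality (\ref{sadl}): given hypothetical multipliers $\rho_1$, $\rho_n$ with $|\rho_1\rho_n|<1$, the exponents $\lambda$ and $\gamma$ defined above automatically satisfy $\gamma-\lambda<0$, and hence the ratio $\nu=\lambda/\gamma$ used in the proof below exceeds $1$:

```python
import math

def saddle_exponents(rho_1, rho_n):
    """Exponents of the linearized flow recovered from the leading
    multipliers of L: lam = -(1/2pi) ln|rho_1|, gam = (1/2pi) ln|rho_n|."""
    lam = -math.log(abs(rho_1)) / (2.0 * math.pi)
    gam = math.log(abs(rho_n)) / (2.0 * math.pi)
    return lam, gam

# Hypothetical multipliers with |rho_1| < 1 < |rho_n| and |rho_1 * rho_n| < 1:
lam, gam = saddle_exponents(rho_1=0.4, rho_n=2.0)
print(gam - lam < 0)   # volume contraction, condition (C)
print(lam / gam > 1)   # hence nu = lambda/gamma > 1, used in the proof
```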
By (\ref{lcfr}), the periodic orbit $L(\mu)$ is given by $x=0$, $y=0$, $z=0$, its local stable manifold is given
by $z=0$, and the leading direction in the stable manifold is given by $y=0$; the local unstable manifold
is given by $\{x=0,y=0\}$.
Recall that the $4\pi$-periodic transformation we used to bring system (\ref{lfr}) to the autonomous form (\ref{lcfr})
is, in fact, $2\pi$-periodic or $2\pi$-antiperiodic. Namely, the points $(\theta,x,z,y)$ and $(\theta+2\pi,\sigma(x,z,y))$
are equal (they represent the same point in the solid torus $U$), where $\sigma$
is an involution which changes signs
of some of the coordinates $x,z,y_1,\dots,y_{n-2}$. More precisely, $\sigma$ changes the orientation of each of the directions
which correspond to the real negative multipliers $\rho$. In particular, if all the multipliers $\rho$ are positive, then $\sigma$
is the identity, i.e. our coordinates are $2\pi$-periodic in this case.\\
\begin{figure}[htb!]
\begin{center}
\includegraphics[width=0.8\textwidth]{fig4.jpg}
\end{center}
\caption{Poincar\'e map $T_1$ takes a cross-section $S_1$ transverse to the unstable manifold $W^u$
to a cross-section $S_0$ transverse to the stable manifold $W^s$.}
\label{fig3}
\end{figure}
\noindent {\bf (E)} Consider two cross-sections $S_0:\{x=d,\quad \|y\|\leq \varepsilon_1,\quad |z|\leq \varepsilon_1\}$ and
$S_1:\{z=d,\quad \|y\|\leq\varepsilon_2,\quad |x|\leq\varepsilon_2\}$ for some small positive $d$ and $\varepsilon_{1,2}$.
Denote the coordinates on $S_0$ as $(y_0,z_0,\theta_0)$ and the coordinates on $S_1$ as $(x_1,y_1,\theta_1)$.
The set $S_0$ is divided by the stable manifold $W^s$ into two regions, $S_0^+:\{z_0>0\}$ and $S_0^-:\{z_0<0\}$.
Since $W^+_0\subset W^s_0$ by assumption {\bf (B)}, it follows that the orbits starting at $S_1$
define a smooth map $T_1:S_1\to S_0$ (see Fig.~\ref{fig3}) for all small $\mu$:
\begin{equation}\label{glom}
\begin{array}{l}
z_0 =f(x_1,y_1,\theta_1,\mu)\\
y_0 =g(x_1,y_1,\theta_1,\mu)\\
\theta_0 =m\theta_1 + h(\theta_1,\mu)+\tilde h(x_1,y_1,\theta_1,\mu),
\end{array}
\end{equation}
where $f,g,h,\tilde h$ are smooth functions $4\pi$-periodic in $\theta_1$, and the function $\tilde h$ vanishes at $(x_1=0,y_1=0)$.
Condition $W^+_0\subset W^s_0$ reads as
$$f(0,0,\theta_1,0)\equiv 0.$$
We assume that
\begin{equation}\label{qqfff}
f(0,0,\theta_1,\mu)=\mu\alpha(\theta_1,\mu),
\end{equation}
where
\begin{equation}\label{alpt}
\alpha(\theta_1,\mu)>0
\end{equation}
for all $\theta_1$, i.e. {\em all the homoclinics are split simultaneously and in the same direction}, and
the intersection $W^+_\mu\cap S_0$ moves inside $S_0^+$ with a non-zero velocity as $\mu$ grows across zero.
The coefficient $m$ in the last equation of (\ref{glom}) is an integer. In order to see this, recall that
two points $(\theta,x,z,y)$ and $(\hat\theta,\hat x,\hat z,\hat y)$ in $U$ are the same if and only if
$\hat\theta=\theta+2\pi k, (\hat x,\hat z,\hat y)=\sigma^k (x,z,y)$ for an integer $k$. Thus, if we increase $\theta_1$ by $4\pi$
in the right-hand side of (\ref{glom}), then the corresponding value of $\theta_0$ in the left-hand side
may change only by an integer multiple of $2\pi$, i.e. $m$ must be an integer or a half-integer. Let us show that
half-integer values of $m$ are forbidden by our assumption (\ref{alpt}). Indeed, if the multiplier $\rho_n$ is positive, then
the involution $\sigma$ keeps the corresponding variable $z$ constant. Thus, $(z=d,\theta=\theta_1, x=0, y=0)$ and
$(z=d,\theta=\theta_1+2\pi, x=0, y=0)$ correspond, in this case, to the same point on $W^+_\mu\cap S_1$, hence their image
by (\ref{glom}) must give the same point on $S_0$, i.e. the corresponding values of $\theta_0$ must differ by an integer multiple
of $2\pi$, which means that $m$ must be an integer. If $\rho_n<0$, then $\sigma$ changes the sign of $z$, i.e. if two values of
$\theta_0$ which correspond to the same point on $S_0$ differ by $2\pi k$, then the corresponding values of $z$ differ by a factor of
$(-1)^k$. Now, since the increase of $\theta_1$ by $4\pi$ leads to the increase of $\theta_0$ by $4\pi m$ in (\ref{glom}),
we find that $f(0,0,4\pi,\mu)=(-1)^{2m}f(0,0,0,\mu)$ in the case $\rho_n<0$. This implies
that if $m$ is a half-integer, then $f(0,0,\theta)$ must have zeros at any $\mu$ and (\ref{alpt}) cannot be satisfied.
The number $m$ determines the shape of $W^+\cap S_0$. Namely, the equation of the curve $W^+_0\cap S_0$ is
$$\theta_0 =m\theta_1 + h(\theta_1,0),\qquad y_0 =g(0,0,\theta_1,0), \qquad z_0=0,$$
so $|m|$ defines the homotopic type of this curve in $S_0\cap W^s_0$, and the sign of $m$ is responsible for the orientation.
In the case $n=2$, i.e. when the system is defined in $R^3$, the only possible case is $m=1$. At $n=3$ (the system in $R^4$)
the curve $W^+_0\cap S_0$ lies in the two-dimensional intersection of $W^s$ with $S_0$. This is either an annulus (if $\rho_1>0$),
or a M\"obius strip (if $\rho_1<0$). Since the smooth curve $W^+_0\cap S_0$ cannot have self-intersections, it follows that
the only possible cases are $m=0,\pm1$ when $W^s\cap S_0$ is a two-dimensional annulus and $m=0,\pm1,\pm2$ when
$W^s\cap S_0$ is a M\"obius strip. At larger $n$ (the system in $R^5$ and higher) all integer values of $m$ are possible.\\~\\
Now we can formulate the main results of the paper.\\
\noindent{\bf Theorem.}
Let conditions {\bf (A-E)} hold. Consider a sufficiently small neighborhood $V$ of the homoclinic surface $\Gamma=W^+_0\cup L$.\\
\begin{enumerate}
\item If $m=0$ and, for all $\theta$,
\begin{equation}\label{bsky}
|h'(\theta,0)-\frac{\alpha'(\theta,0)}{\gamma \alpha(\theta,0)}|<1,
\end{equation}
then a single stable periodic orbit ${\cal L}_\mu$ is born as $\Gamma$ splits. The orbit ${\cal L}_\mu$ exists at all small $\mu>0$; its
period and length tend to infinity as $\mu\to+0$.
All orbits which stay in $V$ for all positive times and
which do not lie in the stable manifold of the saddle orbit $L_\mu$ tend to ${\cal L}_\mu$.\\
\item If $|m|=1$ and, for all $\theta$,
\begin{equation}\label{tor}
1+m \left[h'(\theta,0)-\frac{\alpha'(\theta,0)}{\gamma \alpha(\theta,0)}\right]>0,
\end{equation}
then a stable two-dimensional invariant torus (at $m=1$) or a Klein bottle (at $m=-1$) is born as $\Gamma$ splits. It exists
at all small $\mu>0$ and attracts all the orbits which stay in $V$ and which do not lie in the stable manifold of $L_\mu$.\\
\item If $|m|\geq 2$ and, for all $\theta$,
\begin{equation}\label{hypat}
\left|m+h'(\theta,0)-\frac{\alpha'(\theta,0)}{\gamma \alpha(\theta,0)}\right|>1,
\end{equation}
then, for all small $\mu>0$, the system has a hyperbolic attractor (a Smale-Williams solenoid) which is an $\omega$-limit set
for all orbits which stay in $V$ and
which do not lie in the stable manifold of $L_\mu$. The flow on the attractor is topologically conjugate to
suspension over the inverse spectrum limit of a degree-$m$ expanding map of a circle. At $\mu=0$, the attractor
degenerates into the homoclinic surface $\Gamma$.\\
\end{enumerate}
\noindent{\em Proof.} Solution of (\ref{lcfr}) with the initial conditions $(x_0=d,y_0,z_0,\theta_0)\in S_0$ gives
$$\begin{array}{l}
x(t)=e^{-\lambda t} d, \qquad y(t)=e^{B t} y_0,\\
z(t)=e^{\gamma t} z_0,\\
\theta(t)=\theta_0+t.\end{array}
$$
The flight time to $S_1$ is found from the condition
$$d = e^{\gamma t} z_0,$$
which gives $\displaystyle t=-\frac{1}{\gamma}\ln\frac{z_0}{d}$. Thus the orbits in $U$ define the map $T_0: S_0^+\to S_1$:
$$\begin{array}{l}
x_1=d^{1-\nu} z_0^\nu, \qquad y_1=Q(z_0) y_0,\\
\theta_1=\theta_0-\frac{1}{\gamma}\ln\frac{z_0}{d}\end{array}
$$
where $\nu=\lambda/\gamma>1$ and $\|Q(z_0)\|=o(z_0^\nu)$ (see (\ref{nev}),(\ref{sadl})). By (\ref{glom}), we may write the
map $T=T_0T_1$ on $S_1$ as follows (we drop the index ``$1$''):
$$\begin{array}{l}
\bar x=d^{1-\nu} (\mu\alpha(\theta,\mu)+O(x,y))^\nu, \qquad \bar y=Q(\mu\alpha+O(x,y)) g(x,y,\theta,\mu),\\
\bar\theta=m\theta+h(\theta,\mu)-\frac{1}{\gamma}\ln(\frac{\mu}{d}\alpha(\theta,\mu)+O(x,y))+O(x,y).\end{array}
$$
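As an illustrative sanity check of the local map $T_0$ (the parameter values below are ours, chosen purely for illustration), the flight time and landing coordinate can be verified numerically against the closed-form expressions above:

```python
import math

def local_map_T0(d, lam, gam, z0, theta0):
    """Follow the linear flow x' = -lam*x, z' = gam*z, theta' = 1 from the
    cross-section S0 = {x = d} until it hits S1 = {z = d}."""
    t = math.log(d / z0) / gam      # flight time: solves d = e^{gam*t} * z0
    x1 = d * math.exp(-lam * t)     # x(t) = e^{-lam*t} * d
    return t, x1, theta0 + t

# Illustrative values (not from the paper): nu = lam/gam = 2 > 1.
d, lam, gam, z0 = 1.0, 2.0, 1.0, 0.1
nu = lam / gam
t, x1, _ = local_map_T0(d, lam, gam, z0, theta0=0.0)
assert math.isclose(x1, d ** (1 - nu) * z0 ** nu)   # x1 = d^{1-nu} z0^nu
assert math.isclose(t, -math.log(z0 / d) / gam)     # t = -(1/gam) ln(z0/d)
```

The landing point $x_1=d^{1-\nu}z_0^\nu$ with $\nu>1$ makes the strong contraction in $x$ explicit: halving $z_0$ shrinks $x_1$ by more than a factor of two.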
For every orbit which stays in $V$, its consecutive intersections with the cross-section $S_1$ constitute an orbit of
the diffeomorphism $T$. Since $\nu>1$, the map $T$ is contracting in $x$ and $y$, and it is easy to see
that all the orbits eventually enter a neighborhood of $(x,y)=0$ of size $O(\mu^\nu)$.
We therefore rescale the coordinates $x$ and $y$ as follows:
$$x=d^{1-\nu}\mu^\nu X,\qquad y=\mu^\nu Y.$$
The map $T$ takes the form
\begin{equation}\label{mapt}
\begin{array}{l}
\bar X= \alpha(\theta,0)^\nu +o(1), \qquad \bar Y=o(1),\\
\bar\theta=\omega(\mu)+m\theta +h(\theta,0)-\frac{1}{\gamma}\ln\alpha(\theta,0)+o(1),
\end{array}
\end{equation}
where $o(1)$ stands for terms which tend to zero as $\mu\to+0$, along with their first derivatives,
and $\omega(\mu)=\frac{1}{\gamma}\ln(d/\mu)\to+\infty$ as $\mu\to+0$. Recall that $\alpha>0$ for all $\theta$
and that $\alpha$ and $h$ are periodic in $\theta$.
It is immediately seen from (\ref{mapt}) that all orbits
eventually enter an invariant solid torus $\{|X-\alpha(\theta,0)^\nu|< K_\mu,\;\|Y\|<K_\mu\}$
for appropriately chosen $K_\mu$, $K_\mu\to 0$ as $\mu\to +0$ (see Fig.~\ref{fig4}). Thus, there is an attractor in $V$ for
all small
positive $\mu$, and it merges into $\Gamma$ as $\mu\to+0$. Our theorem claims that the structure of the attractor depends
on the value of $m$, so we now consider different cases separately.
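As a numerical illustration of this trichotomy (with our own toy choices $\gamma=1$, $\alpha(\theta)=1+0.1\cos\theta$, $h(\theta)=0.2\sin\theta$, not taken from the paper), one can iterate the $\theta$-component of map (\ref{mapt}) with the $o(1)$ terms dropped: for $m=0$ it is a contraction of the circle, while for $m=2$ its derivative exceeds $1$ everywhere, which is the expansion behind the solenoid.

```python
import math

GAMMA = 1.0
def alpha(th): return 1.0 + 0.1 * math.cos(th)
def h(th): return 0.2 * math.sin(th)

def f(th, m, omega=0.3):
    # theta-component of map (mapt) with the o(1) terms dropped
    return (omega + m * th + h(th) - math.log(alpha(th)) / GAMMA) % (2 * math.pi)

def deriv(th, m):
    # m + h'(theta) - alpha'(theta)/(gamma*alpha(theta))
    return m + 0.2 * math.cos(th) + 0.1 * math.sin(th) / (GAMMA * alpha(th))

# m = 0: |f'| < 1 everywhere, so two different initial points converge
# to the same stable fixed point (case 1 of the theorem).
a, b = 0.5, 4.0
for _ in range(200):
    a, b = f(a, 0), f(b, 0)
assert abs(a - b) < 1e-10

# m = 2: f' > 1 on a fine grid -- a degree-2 expanding circle map (case 3).
assert all(deriv(2 * math.pi * k / 1000, 2) > 1 for k in range(1000))
```

Here $|h'-\alpha'/(\gamma\alpha)|\le 0.2+0.1/0.9<1$, so conditions (\ref{bsky}) and (\ref{hypat}) hold for $m=0$ and $m=2$ respectively with these toy choices.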
\begin{figure}[htb!]
\begin{center}
\includegraphics[width=0.8\textwidth]{fig5.jpg}
\end{center}
\caption{Case $m=0$: the image of the solid torus is contractible to a point; case $m = 1$: contraction transverse to the longitude;
case $m = 2$: the solid torus is squeezed,
doubly stretched and twisted within the original, and so on, producing the solenoid in the limit.}
\label{fig4}
\end{figure}
If $m=0$ and (\ref{bsky}) holds, then map (\ref{mapt}) is, obviously, contracting at small $\mu$,
hence it has a single stable fixed point. This fixed point corresponds to the sought periodic orbit
${\cal L}_\mu$. Its period tends to infinity as $\mu\to+0$: the orbit intersects both the
cross-sections $S_0$ and $S_1$, and the flight time from $S_0$ to $S_1$ is of order $\frac{1}{\gamma}|\ln\mu|$.
The length of the orbit also tends to infinity, since the phase velocity never vanishes in $V$.
In the case $m=\pm 1$ we prove the theorem by referring to the ``annulus principle'' of \cite{AfS3}. Namely, consider a map
$$\bar r=p(r,\theta),\qquad \bar\theta=q(r,\theta)$$
of a solid torus into itself (here $\theta$ is the angular variable and $r$ is the vector of normal variables).
Let the map $r\mapsto p(r,\theta)$ be a contraction for every fixed $\theta$, i.e.
$$\left\|\frac{\partial p}{\partial r}\right\|_\circ<1$$
(where by $\|\cdot\|_\circ$ we denote the supremum of the norm over the solid torus under consideration)
and let the map $\theta\mapsto q(r,\theta)$ be a diffeomorphism of a circle for every fixed $r$. Then
it is well-known \cite{AfS3,book2} that if
$$1-\left\|\left(\frac{\partial q}{\partial \theta}\right)^{-1}\right\|_\circ \cdot
\left\|\frac{\partial p}{\partial r}\right\|_\circ >
2\sqrt{\left\|\left(\frac{\partial q}{\partial \theta}\right)^{-1}\right\|_\circ \cdot
\left\|\frac{\partial q}{\partial r}\right\|_\circ
\left\|\frac{\partial p}{\partial \theta}\left(\frac{\partial q}{\partial \theta}\right)^{-1}\right\|_\circ},$$
then the map has a stable, smooth, closed invariant curve $r=r^*(\theta)$ which attracts all orbits from the solid torus.
These conditions are clearly satisfied by map (\ref{mapt}) at $|m|=1$ if (\ref{tor}) is true (here $r=(X,Y)$,
$p=(\alpha(\theta,0)^\nu +o(1), o(1))$, $q=\omega(\mu)+m\theta +h(\theta,0)-\frac{1}{\gamma}\ln\alpha(\theta,0)+o(1)$).
Thus, the map $T$ has a closed invariant curve in this case. The restriction of $T$ to the invariant curve preserves
orientation if $m=1$, while at $m=-1$ it is orientation-reversing. Therefore, this invariant curve on the cross-section
corresponds to an invariant torus of the flow at $m=1$ or to a Klein bottle at $m=-1$.
It remains to prove the theorem for the case $|m|\geq 2$. The proof is based on the following result.\\
\noindent{\bf Lemma.} Consider a diffeomorphism $T:(r,\theta)\mapsto (\bar r,\bar\theta)$ of a solid torus, where
\begin{equation}\label{maptr}
\bar r=p(r,\theta),\qquad \bar\theta=m\theta+s(r,\theta)=q(r,\theta),
\end{equation}
where $s$ and $p$ are periodic functions of $\theta$.
Let $|m|\geq 2$, and assume that
\begin{equation}\label{frc}
\left\|\frac{\partial p}{\partial r}\right\|_\circ <1,
\end{equation}
\begin{equation}\label{cndir}
\left(1-\left\|\frac{\partial p}{\partial r}\right\|_\circ\right)
\left(1-\left\|\left(\frac{\partial q}{\partial \theta}\right)^{-1}\right\|_\circ\right)>
\left\|\frac{\partial p}{\partial \theta}\right\|_\circ\; \left\|\left(\frac{\partial q}{\partial \theta}\right)^{-1}
\frac{\partial q}{\partial r}\right\|_\circ.
\end{equation}
Then the map has a uniformly hyperbolic attractor, a Smale-Williams solenoid,
on which it is topologically conjugate to the inverse spectrum limit
of $\bar \theta=m\theta$, a degree-$m$ expanding map of the circle.\\
\noindent{\em Proof.} It follows from (\ref{frc}),(\ref{cndir}) that
$\|(\frac{\partial q}{\partial \theta})^{-1}\|$ is uniformly bounded. Therefore, $\theta$
is a uniquely defined smooth function of $(\bar\theta, r)$, so we may rewrite (\ref{maptr})
in the ``cross-form''
\begin{equation}\label{crmps}
\bar r=p^\times(r,\bar\theta),\qquad \theta=q^\times(r,\bar\theta),
\end{equation}
where $p^\times$ and $q^\times$ are smooth functions. It is easy to see that conditions (\ref{frc}),
(\ref{cndir}) imply
\begin{equation}\label{frc0}
\left\|\frac{\partial p^\times}{\partial r}\right\|_\circ <1,\qquad
\left\|\frac{\partial q^\times}{\partial \bar\theta}\right\|_\circ <1
\end{equation}
\begin{equation}\label{cncrs}
\left(1-\left\|\frac{\partial p^\times}{\partial r}\right\|_\circ\right)
\left(1-\left\|\frac{\partial q^\times}{\partial \bar\theta}\right\|_\circ\right)\geq
\left\|\frac{\partial p^\times}{\partial \bar\theta}\right\|_\circ\; \left\|\frac{\partial q^\times}{\partial r}\right\|_\circ.
\end{equation}
These inequalities imply the uniform hyperbolicity of the map $T$ (note that (\ref{cndir}) coincides with
the hyperbolicity condition for the Poincar\'e map for the Lorenz attractor from \cite{ABS}).
Indeed, it is enough to show that there exists
$L>0$ such that the derivative $T'$ of $T$ takes every cone $\|\Delta r\|\leq L\|\Delta \theta\|$ inside
$\|\Delta \bar r\|\leq L\|\Delta \bar \theta\|$ and is uniformly expanding in $\theta$ in this cone,
and that the inverse of $T'$ takes
every cone $\|\Delta \bar\theta\|\leq L^{-1}\|\Delta \bar r\|$ inside
$\|\Delta \theta\|\leq L^{-1}\|\Delta r\|$ and is uniformly expanding in $r$ in this cone.
Let us check these properties. When $\|\Delta r\|\leq L\|\Delta \theta\|$, we find from (\ref{crmps}) that
\begin{equation}
\|\Delta\theta\|\leq \frac{\left\|\frac{\partial q^\times}{\partial \bar\theta}\right\|_\circ}
{1-L\left\|\frac{\partial q^\times}{\partial r}\right\|_\circ} \|\Delta\bar\theta\|
\end{equation}
and
\begin{equation}
\|\Delta\bar r\|\leq \left\{\frac{L \left\|\frac{\partial p^\times}{\partial r}\right\|_\circ
\left\|\frac{\partial q^\times}{\partial \bar\theta}\right\|_\circ}
{1-L\left\|\frac{\partial q^\times}{\partial r}\right\|_\circ} +
\left\|\frac{\partial p^\times}{\partial \bar\theta}\right\|_\circ\right\}
\|\Delta\bar\theta\|.
\end{equation}
Similarly, if
$\|\Delta \bar\theta\|\leq L^{-1}\|\Delta \bar r\|$, we find from (\ref{crmps}) that
\begin{equation}
\|\Delta\bar r\|\leq \frac{\left\|\frac{\partial p^\times}{\partial r}\right\|_\circ}
{1-L^{-1}\left\|\frac{\partial p^\times}{\partial \bar\theta}\right\|_\circ} \|\Delta r\|
\end{equation}
and
\begin{equation}
\|\Delta\theta\|\leq \left\{\frac{L^{-1} \left\|\frac{\partial q^\times}{\partial \bar\theta}\right\|_\circ
\left\|\frac{\partial p^\times}{\partial r}\right\|_\circ}
{1-L^{-1}\left\|\frac{\partial p^\times}{\partial \bar\theta}\right\|_\circ} +
\left\|\frac{\partial q^\times}{\partial r}\right\|_\circ\right\}
\|\Delta r\|.
\end{equation}
Thus, we will prove hyperbolicity if we show that there exists $L$ such that
$$\left\|\frac{\partial q^\times}{\partial \bar\theta}\right\|_\circ < 1-
L\left\|\frac{\partial q^\times}{\partial r}\right\|_\circ$$
and
$$\left\|\frac{\partial p^\times}{\partial r}\right\|_\circ < 1-
L^{-1}\left\|\frac{\partial p^\times}{\partial \bar\theta}\right\|_\circ.$$
These conditions are solved by any $L$ such that
$$\frac{\left\|\frac{\partial p^\times}{\partial \bar\theta}\right\|_\circ}
{1-\left\|\frac{\partial p^\times}{\partial r}\right\|_\circ}<L<
\frac{1-\left\|\frac{\partial q^\times}{\partial \bar\theta}\right\|_\circ}
{\left\|\frac{\partial q^\times}{\partial r}\right\|_\circ}.$$
It remains to note that such an $L$ indeed exists when (\ref{frc0}) and (\ref{cncrs}) are satisfied.
We have proved that the attractor $A$ of the map $T$ is uniformly hyperbolic. Such attractors are structurally stable,
so $T|_A$ is topologically conjugate to the restriction to the attractor of any diffeomorphism which
can be obtained by a continuous deformation of the map $T$ without violation of conditions
(\ref{frc}) and (\ref{cndir}). An obvious example of such a diffeomorphism is given by the map
\begin{equation}\label{epd}
\bar r=p(\delta r,\theta),\qquad \bar\theta=q(\delta r,\theta)
\end{equation}
for any $0<\delta\leq 1$. Fix small $\delta>0$ and consider a family of maps
$$\bar r=p(\delta r,\theta),\qquad \bar\theta=q(\varepsilon r,\theta),$$
where $\varepsilon$ runs from $\delta$ to zero. When $\delta$ is sufficiently small, every map in this family is a diffeomorphism
(otherwise the curve $\{\bar r=p(0,\theta), \bar\theta= q(0,\theta)\}$ would have points of self-intersection,
which is impossible since this curve is the image of the circle $r=0$ under the diffeomorphism $T$), and each satisfies
inequalities (\ref{frc}),(\ref{cndir}). This family is a continuous deformation of map (\ref{epd}) to the map
\begin{equation}\label{skd}
\bar r=p(\delta r,\theta),\qquad \bar\theta=q(0,\theta)=m\theta+s(0,\theta).
\end{equation}
Thus, we find that $T|_A$ is topologically conjugate to the restriction of diffeomorphism (\ref{skd}) to its attractor.
It remains to note that map (\ref{skd}) is a skew-product map of the solid torus, which contracts along the fibers $\theta=\mathrm{const}$
and, in the base, is an expanding degree-$m$ map of a circle. By definition, the attractor of such a map is the sought
Smale-Williams solenoid \cite{Sm,W}. This completes the proof of the lemma.

Now, in order to finish the proof of the theorem, just note that map (\ref{mapt}) satisfies the conditions of the Lemma
when (\ref{hypat}) is fulfilled.
\section*{Acknowledgment}
This work was supported by RFFI Grant No.~08-01-00083 and Grant 11.G34.31.0039 of the Government of the Russian Federation
for state support of ``Scientific research conducted under supervision of leading scientists in
Russian educational institutions of higher professional education'' (to L.S.); NSF grant DMS-1009591 and
MESRF ``Attracting leading scientists to Russian universities'' project 14.740.11.0919 (to A.S.); and
the Royal Society Grant ``Homoclinic bifurcations'' (to L.S. and D.T.).
\section{Introduction}
\label{ph_sec_introduction}
\input{ph_sec_introduction}
\section{Dantzig-Wolfe decomposition}
\label{ph_sec_dantzig_wolfe_decomposition}
\input{ph_sec_dantzig_wolfe_decomposition}
\section{Primal heuristics}
\label{ph_sec_primal_heuristics}
\input{ph_sec_primal_heuristics}
\section{Numerical experiments}
\label{ph_sec_numerical_experiments}
\input{ph_sec_numerical_experiments}
\section{Conclusion}
\label{ph_sec_conclusion}
\input{ph_sec_conclusion}
\printbibliography
\input{ph_appendix}
\end{document}
\section{Problem formulation}
\label{sec_appendix_problem_formulation}
We follow one of the standard formulations in the literature and include the following constraints:
\begin{itemize}
\item \textbf{Load balance}: Generators have to meet all the demand in each time period (generation shedding at zero cost is allowed).
\item \textbf{Reserve}: To deal with contingencies, it is required to keep a sufficient amount of backup generation in each time period, which can be activated quickly.
\item \textbf{Power output bounds}: Each generator's power output has to be within its limit.
\item \textbf{Ramp rate}: Generators can only change their outputs within the ramp rates.
\item \textbf{Minimum up/downtime}: If switched on (off), each generator has to stay on (off) for a given minimum period.
This is to avoid thermal stress in the generators which may cause wear and tear of the turbines.
\end{itemize}
The formulation of the model is as follows.
\begin{itemize}
\item{Parameters}
\begin{itemize}
\item $G$: The number of generators
\item $T$: The number of time periods where decisions are taken
\item $C^{\mathrm{nl}}_{g}$: no-load cost of generator $g$
\item $C^{\mathrm{mr}}_{g}$: marginal cost of generator $g$
\item $C^{\mathrm{up}}_{g}$: startup cost of generator $g$
\item $P^{\max/\min}_{g}$: maximum/minimum generation limit of generator $g$
\item $P^{\mathrm{ru}/\mathrm{rd}}_{g}$: operating ramp up/down limits of generator $g$
\item $P^{\mathrm{su}/\mathrm{sd}}_{g}$: startup/shutdown ramp limits of generator $g$
\item $T^{\mathrm{u}/\mathrm{d}}_{g}$: minimum uptime/downtime of generator $g$
\item $P^{\mathrm{d}}_{t}$: power demand at time $t$
\item $P^{\mathrm{r}}_{t}$: reserve requirement at time $t$
\end{itemize}
\item{Variables}
\begin{itemize}
\item $\alpha_{gt} \in \{0, 1\}$: 1 if generator $g$ is on in period $t$, and 0 otherwise
\item $\gamma_{gt} \in \{0, 1\}$: 1 if generator $g$ starts up in period $t$, and 0 otherwise
\item $\eta_{gt} \in \{0, 1\}$: 1 if generator $g$ shuts down in period $t$, and 0 otherwise
\item $p_{gt} \ge 0$: power output of generator $g$ in period $t$
\end{itemize}
\item The objective is the total cost
\[
\min \sum_{t = 1}^T \sum_{g = 1}^G
\left( C^{\mathrm{nl}}_g \alpha_{gt} + C^{\mathrm{mr}}_g p_{gt} +
C^{\mathrm{up}}_g \gamma_{gt}
\right).
\]
This is to be minimised subject to the following constraints.
\item Load balance
\begin{equation*}
\sum_{g = 1}^G p_{gt} \ge P^{\mathrm{d}}_{t}
\qquad t = 1, 2, \ldots, T.
\label{eq:uc_first_constraint}
\end{equation*}
\item Reserve
\begin{equation*}
\sum_{g = 1}^G (P^{\max}_{g} \alpha_{gt} - p_{gt})
\ge P^{\mathrm{r}}_t
\qquad t = 1, 2, \ldots, T.
\end{equation*}
\item Power output bounds
\begin{equation*}
P^{\min}_{g} \alpha_{gt} \le p_{gt} \le P^{\max}_{g} \alpha_{gt}
\qquad g = 1, 2, \ldots, G, t = 1, 2, \ldots, T
\end{equation*}
\item Ramp rate
\begin{equation*}
p_{gt} - p_{g \, t-1} \le P^{\mathrm{ru}}_g \alpha_{g \, t-1}
+ P^{\mathrm{su}}_g \gamma_{gt}
\qquad g = 1, 2, \ldots, G, t = 2, 3, \ldots, T.
\end{equation*}
\begin{equation*}
p_{g \, t-1} - p_{gt} \le P^{\mathrm{rd}}_g \alpha_{gt}
+ P^{\mathrm{sd}}_g \eta_{gt}
\qquad g = 1, 2, \ldots, G, t = 2, 3, \ldots, T.
\end{equation*}
\item Minimum up/downtime
\begin{equation*}
\sum_{u=\max\{t-T^\mathrm{u}_g+1, 1\}}^t \gamma_{gu} \le \alpha_{gt}
\qquad g = 1, 2, \ldots, G, t = 1, 2, \ldots, T
\end{equation*}
\begin{equation*}
\sum_{u=\max\{t-T^\mathrm{d}_g+1, 1\}}^t \eta_{gu} \le 1 - \alpha_{gt}
\qquad g = 1, 2, \ldots, G, t = 1, 2, \ldots, T
\end{equation*}
\item Polyhedral/Switching constraints (to enforce binaries to work as we expect)
\begin{equation*}
\alpha_{gt} - \alpha_{g \, t-1} = \gamma_{gt} - \eta_{gt}
\qquad g = 1, 2, \ldots, G, t = 2, 3, \ldots, T
\end{equation*}
\begin{equation*}
1 \ge \gamma_{gt} + \eta_{gt}
\qquad g = 1, 2, \ldots, G, t = 2, 3, \ldots, T
\label{eq:uc_last_constraint}
\end{equation*}
\end{itemize}
We note that the complicating constraints are inequalities in the above formulation, but the discussion in this paper (e.g.\ the number of fractional values in the solution to the \gls{RMP}) still holds.
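As an illustrative aid (this is our own toy code, not part of the paper's experiments), the constraints above can be collected into a small feasibility checker for a candidate schedule; all parameter values below are made up, and generators are assumed to be on before the horizon, so initial conditions are ignored.

```python
def check_uc_schedule(alpha, gamma, eta, p, Pmax, Pmin, Pru, Prd,
                      Psu, Psd, Tu, Td, Pd, Pr):
    """Return True iff (alpha, gamma, eta, p) satisfies the constraints above
    (toy checker, 0-indexed time; no initial-condition handling)."""
    G, T = len(alpha), len(alpha[0])
    for t in range(T):
        if sum(p[g][t] for g in range(G)) < Pd[t]:                  # load balance
            return False
        if sum(Pmax[g] * alpha[g][t] - p[g][t] for g in range(G)) < Pr[t]:
            return False                                            # reserve
    for g in range(G):
        for t in range(T):
            if not Pmin[g] * alpha[g][t] <= p[g][t] <= Pmax[g] * alpha[g][t]:
                return False                                        # output bounds
            if sum(gamma[g][u] for u in range(max(t - Tu[g] + 1, 0), t + 1)) > alpha[g][t]:
                return False                                        # minimum uptime
            if sum(eta[g][u] for u in range(max(t - Td[g] + 1, 0), t + 1)) > 1 - alpha[g][t]:
                return False                                        # minimum downtime
        for t in range(1, T):
            if p[g][t] - p[g][t - 1] > Pru[g] * alpha[g][t - 1] + Psu[g] * gamma[g][t]:
                return False                                        # ramp up
            if p[g][t - 1] - p[g][t] > Prd[g] * alpha[g][t] + Psd[g] * eta[g][t]:
                return False                                        # ramp down
            if alpha[g][t] - alpha[g][t - 1] != gamma[g][t] - eta[g][t]:
                return False                                        # switching logic
    return True

# Toy instance: two generators, three periods, both on throughout.
kw = dict(Pmax=[50, 100], Pmin=[10, 20], Pru=[20, 20], Prd=[20, 20],
          Psu=[50, 100], Psd=[50, 100], Tu=[2, 2], Td=[2, 2],
          Pd=[60, 80, 70], Pr=[10, 10, 10])
on = [[1, 1, 1], [1, 1, 1]]
off = [[0, 0, 0], [0, 0, 0]]
p = [[20, 30, 25], [40, 50, 45]]
assert check_uc_schedule(on, off, off, p, **kw)
assert not check_uc_schedule(on, off, off, p, **{**kw, "Pd": [200, 80, 70]})
```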
\subsection{Evaluation: Time to close gap with Dantzig-Wolfe decomposition}
\subsubsection{Experimental Setups}
In this experiment we used the primal heuristics alongside Dantzig-Wolfe decomposition and measured the computational time needed to find a primal feasible solution and to prove that its suboptimality was smaller than a prescribed tolerance.
To prepare the dataset used by the neural network and nearest neighbour partial-fixing, as many training instances as possible were solved to 0.25\% optimality in the training phase.
The training budget was 24 hours on 8 CPU cores.
To solve each training instance we used Dantzig-Wolfe decomposition (Algorithm \ref{alg_plain_dw}) with the feasibility recovery local search primal heuristic.
The number of solved instances is reported in Table \ref{tab_nn_training_summary}.
After the dataset was constructed, a neural network model was trained to predict the values of the binary variables.
We used a feed-forward neural network with two hidden layers of 400 units each and the ReLU activation function.
For simplicity, the time to train a neural network model, which is shown in Table \ref{tab_nn_training_summary}, is not included in the training budget of 24 hours.
When solving a test instance, we used the threshold values of 0.8, 0.9, 0.95, 0.99 and 1.0.
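The excerpt does not spell out the exact fixing rule, so the following is only an assumed, commonly used scheme: a binary variable is fixed when the predicted probability is within the threshold's distance of $0$ or $1$, and left free otherwise.

```python
def partial_fix(probs, tau):
    """Hypothetical fixing rule: fix variable i to 1 if its predicted
    probability is >= tau, to 0 if <= 1 - tau; leave it free otherwise."""
    fixed = {}
    for i, q in enumerate(probs):
        if q >= tau:
            fixed[i] = 1
        elif q <= 1.0 - tau:
            fixed[i] = 0
    return fixed

# With threshold 0.9, only confident predictions are fixed; 0.50 stays free.
assert partial_fix([0.99, 0.05, 0.50, 0.92], 0.9) == {0: 1, 1: 0, 3: 1}
```

Under this reading, a larger threshold fixes fewer variables, leaving a larger (but more flexible) partially-fixed \gls{MILP}.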
\begin{table}
\begin{center}
\small
\caption{Statistics on the training of neural network models}
\label{tab_nn_training_summary}
\begin{tabular*}{9cm}{
@{\extracolsep{\fill}}lrrr}
\toprule
& 200 & 600 & 1000 \\
\midrule
number of training instances & 15,419 & 7,241 & 4,886 \\
time to train a model (s) & 757 & 1,514 & 1,837 \\
\bottomrule
\end{tabular*}
\end{center}
\end{table}
The nearest neighbour method used the same dataset as the neural network model.
When solving a test instance, its parameters were compared with those of the training instances, and the 50 closest instances (in Euclidean distance) were chosen to compute the average values of the binary variables.
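A minimal sketch of this nearest neighbour averaging (our own pure-Python illustration, with a toy $k=2$ in place of the paper's 50):

```python
def knn_average(test_param, train_params, train_solutions, k=50):
    """Average the binary solutions of the k training instances whose
    parameter vectors are closest (in Euclidean distance) to the test one."""
    d = [sum((a - b) ** 2 for a, b in zip(tp, test_param)) for tp in train_params]
    nearest = sorted(range(len(d)), key=d.__getitem__)[:k]
    width = len(train_solutions[0])
    return [sum(train_solutions[i][j] for i in nearest) / len(nearest)
            for j in range(width)]

# Tiny illustration: the two closest instances agree on the first variable
# (average 1.0) and disagree on the second (average 0.5).
params = [[0.0], [0.1], [5.0]]
sols = [[1, 0], [1, 1], [0, 0]]
assert knn_average([0.05], params, sols, k=2) == [1.0, 0.5]
```

The resulting averages play the same role as the neural network's predicted probabilities and can be fed to the same threshold-based fixing.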
To solve a test instance, Dantzig-Wolfe decomposition (Algorithm \ref{alg_plain_dw}) was used and the primal heuristics were run at the end of each column generation iteration.
The time limit of the primal heuristics was set to
\begin{equation}
\text{(primal heuristic time limit)} = \text{(time spent to solve pricing subproblems)} \cdot \left( \frac{k}{10} + 2\right),
\label{ph_eq_time_limit}
\end{equation}
where $k$ is the iteration number.
When the iteration number is large, the lower bound is typically tight, while the upper bound provided by the primal heuristics is loose unless more time is allocated to them.
Thus, we allowed the primal heuristics to use more time in later iterations.
We note that some primal heuristics may stop before reaching the time limit.
For example, the partially-fixed \gls{MILP} solved in the \gls{RMP} partial-fixing primal heuristic was often solved before the time limit was reached.
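As a small worked example of the budget rule (\ref{ph_eq_time_limit}):

```python
def heuristic_time_limit(pricing_time, k):
    """Time budget from rule (ph_eq_time_limit): it grows linearly with
    the column generation iteration number k."""
    return pricing_time * (k / 10 + 2)

# If the pricing subproblems took 4 seconds, iteration 0 gets 8 seconds of
# heuristic time, and iteration 10 gets 12 seconds.
assert heuristic_time_limit(4.0, 0) == 8.0
assert heuristic_time_limit(4.0, 10) == 12.0
```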
For comparison, we also ran CPLEX on the original \gls{MILP} problem without decomposition.
\subsubsection{Results}
In this experiment 40 test instances are solved.
The demand data to construct the test instances are sampled from a different year than those of the instances used to train the neural network and the nearest neighbour model.
Table \ref{ph_tab_dw_summary} shows the number of instances solved within the time limit of 20 minutes, the average computational time and the average number of column generation iterations to solve the instances or to reach the time limit.
When calculating the averages, instances that are not solved within the time limit are counted with a time of 20 minutes and with the number of iterations completed when the time limit was reached.
\begin{table}
\begin{center}
\caption{Computational time and required number of iterations}
\label{ph_tab_dw_summary}
\begin{tabular*}{\textwidth}{
@{\extracolsep{\fill}}l@{\hskip0.1cm}l@{\hskip0.1cm}
r@{\hskip0.1cm}r@{\hskip0.1cm}r
r@{\hskip0.1cm}r@{\hskip0.1cm}r
r@{\hskip0.1cm}r@{\hskip0.1cm}r}
\toprule
& & \multicolumn{3}{l}{tol: 0.5\%} & \multicolumn{3}{l}{0.25\%} & \multicolumn{3}{l}{0.1\%} \\
\cmidrule{3-5}
\cmidrule{6-8}
\cmidrule{9-11}
size & method & solved & time & iter & solved & time & iter & solved & time & iter \\
\midrule
200 & feasibility recovery & 40 & 15.9 & 3.0 & 40 & 56.0 & 11.3 & 4 & 551.8 & 68.6 \\
& column combination & 40 & 32.8 & 7.1 & 40 & 33.6 & 7.2 & 30 & 276.3 & 35.6 \\
& RMP partial-fixing & 40 & 22.8 & 5.5 & 40 & 24.0 & 5.8 & 40 & \textbf{37.5} & 8.4 \\
& network partial-fixing & 40 & \textbf{9.3} & 1.0 & 40 & \textbf{12.0} & 1.3 & 40 & 56.3 & 5.2 \\
& nearest partial-fixing & 40 & 11.6 & 1.2 & 40 & 15.5 & 1.6 & 40 & 74.4 & 6.3 \\
& CPLEX & 40 & 215.8 & 0.0 & 37 & 336.0 & 0.0 & 18 & 558.0 & 0.0 \\
\midrule
600 & feasibility recovery & 40 & 42.2 & 2.0 & 40 & 84.4 & 5.4 & 30 & 263.5 & 16.8 \\
& column combination & 40 & 83.9 & 5.2 & 40 & 86.1 & 5.3 & 40 & 105.6 & 6.4 \\
& RMP partial-fixing & 40 & 60.7 & 4.4 & 40 & 62.8 & 4.6 & 40 & \textbf{76.8} & 5.7 \\
& network partial-fixing & 40 & \textbf{39.6} & 1.3 & 40 & \textbf{41.7} & 1.3 & 40 & 128.0 & 4.2 \\
& nearest partial-fixing & 40 & 51.8 & 1.6 & 40 & 60.2 & 1.8 & 40 & 136.0 & 4.3 \\
& CPLEX & 5 & 589.7 & 0.0 & 5 & 589.7 & 0.0 & 1 & 593.5 & 0.0 \\
\midrule
1000 & feasibility recovery & 40 & \textbf{66.6} & 1.8 & 40 & 117.9 & 4.2 & 37 & 236.1 & 9.4 \\
& column combination & 40 & 114.1 & 3.9 & 40 & 120.1 & 4.1 & 40 & 166.4 & 5.6 \\
& RMP partial-fixing & 40 & 85.5 & 3.8 & 40 & 91.4 & 4.1 & 40 & \textbf{113.7} & 5.2 \\
& network partial-fixing & 40 & 70.8 & 1.3 & 40 & \textbf{75.1} & 1.4 & 40 & 181.6 & 3.9 \\
& nearest partial-fixing & 40 & 91.6 & 1.8 & 40 & 94.3 & 1.9 & 40 & 208.8 & 4.3 \\
& CPLEX & 1 & 597.4 & 0.0 & 1 & 597.4 & 0.0 & 0 & 600.0 & 0.0 \\
\bottomrule
\end{tabular*}
\end{center}
\end{table}
When the tolerance is loose, such as 0.5\% or 0.25\%, the neural network partial-fixing usually performs best.
We observe that the number of column generation iterations in these cases is very small, with all averages less than 2.
That is, on typical instances the neural network partial-fixing very quickly finds a primal feasible solution satisfying the suboptimality tolerance, and Dantzig-Wolfe decomposition provides a lower bound to assert that the suboptimality is indeed smaller than the tolerance.
The nearest neighbour partial-fixing is second best in half of the cases.
However, in all cases the average performance of the neural network partial-fixing is better than that of the nearest neighbour partial-fixing, in terms of both the computational time and the number of column generation iterations.
The other primal heuristics are based on decomposition and require more column generation iterations to find primal feasible solutions of acceptable suboptimality.
This results in longer computational time for many instances.
If the target tolerance is tight (0.1\%), the results are different.
The \gls{RMP} partial-fixing outperforms the other methods in all the cases.
The neural network partial-fixing requires a smaller number of column generation iterations but needs longer computational time.
This is because the \gls{RMP} partial-fixing does not use all the time allocated to the primal heuristics, whereas the neural network partial-fixing always runs until the time limit.
The neural network and nearest neighbour partial-fixing primal heuristics found a primal solution with suboptimality smaller than 0.1\% for all the test instances.
However, the neural network partial-fixing is on average slower than the \gls{RMP} partial-fixing in all cases, and the nearest neighbour partial-fixing primal heuristic is even slower.
The column combination primal heuristic failed to find a primal solution for some test instances in the 200-generator case.
However, it successfully found a primal solution on all the test instances of size 600 and 1,000, and its average computational time is smaller than that of the neural network partial-fixing primal heuristic.
The feasibility recovery local search primal heuristic also has better performance for larger test instances but is still inferior to the other primal heuristics.
\subsubsection{Analysis of the effect of training budget}
In this section, we study the effect of the training budget.
We consider cases where the training budget is 6, 12, 36 or 48 hours instead of 24 hours and observe how the performance of the methods is affected.
To this end, the neural network model is trained using the dataset generated within each of these training budgets.
The performance of the models is then evaluated as before and the results are reported in Table \ref{tab_training_budget}.
In all cases, all of the test instances are solved to within 0.1\% tolerance.
In many cases, both the neural network model and the nearest neighbour model tend to perform better with a larger training dataset.
Compared with the neural network models trained for 6 hours, those trained for 48 hours are on average faster in every case.
However, there is not a systematic improvement in the performance beyond 24-hour of training.
The room for additional performance gains from a larger training budget therefore seems limited.
A similar observation holds for the results of the nearest neighbour model.
\begin{table}
\caption{Performance of models with different training budgets}
\label{tab_training_budget}
\begin{tabular*}{\textwidth}{
@{\extracolsep{\fill}}
l@{\hskip0.1cm}l@{\hskip0.1cm}l@{\hskip0.1cm}
r@{\hskip0.1cm}r@{\hskip0.1cm}r
r@{\hskip0.1cm}r@{\hskip0.1cm}r
r@{\hskip0.1cm}r@{\hskip0.1cm}r}
\toprule
& & & \multicolumn{3}{l}{tol: 0.5\%} & \multicolumn{3}{l}{0.25\%} & \multicolumn{3}{l}{0.1\%} \\
\cmidrule{4-6}
\cmidrule{7-9}
\cmidrule{10-12}
size & method & budget & solved & time & iter & solved & time & iter & solved & time & iter \\
\midrule
200 & network & 6 & 40 & 7.7 & 1.1 & 40 & 9.4 & 1.7 & 40 & 30.5 & 6.8 \\
& & 12 & 40 & 7.8 & 1.1 & 40 & 8.9 & 1.5 & 40 & 29.5 & 6.6 \\
& & 24 & 40 & 7.5 & 1.1 & 40 & 9.3 & 1.6 & 40 & 27.8 & 6.3 \\
& & 36 & 40 & 7.5 & 1.1 & 40 & 8.6 & 1.4 & 40 & 25.4 & 6.0 \\
& & 48 & 40 & 7.3 & 1.0 & 40 & 8.4 & 1.4 & 40 & 26.3 & 6.3 \\
\midrule
& neighbour & 6 & 40 & 8.0 & 1.2 & 40 & 10.2 & 1.9 & 40 & 41.9 & 8.6 \\
& & 12 & 40 & 7.7 & 1.1 & 40 & 10.7 & 2.1 & 40 & 40.8 & 8.6 \\
& & 24 & 40 & 7.6 & 1.1 & 40 & 9.8 & 1.8 & 40 & 38.2 & 8.2 \\
& & 36 & 40 & 7.6 & 1.1 & 40 & 9.3 & 1.6 & 40 & 34.0 & 7.6 \\
& & 48 & 40 & 7.9 & 1.2 & 40 & 10.2 & 1.9 & 40 & 31.4 & 7.3 \\
\midrule
600 & network & 6 & 40 & 31.2 & 1.5 & 40 & 33.7 & 1.8 & 40 & 71.2 & 5.3 \\
& & 12 & 40 & 27.8 & 1.2 & 40 & 32.1 & 1.5 & 40 & 65.8 & 4.8 \\
& & 24 & 40 & 27.5 & 1.1 & 40 & 30.3 & 1.4 & 40 & 64.6 & 4.7 \\
& & 36 & 40 & 29.7 & 1.4 & 40 & 30.6 & 1.4 & 40 & 64.0 & 4.7 \\
& & 48 & 40 & 30.2 & 1.4 & 40 & 30.2 & 1.4 & 40 & 64.5 & 4.7 \\
\midrule
& neighbour & 6 & 40 & 33.6 & 1.8 & 40 & 37.4 & 2.1 & 40 & 83.7 & 6.0 \\
& & 12 & 40 & 32.7 & 1.6 & 40 & 36.8 & 2.0 & 40 & 75.4 & 5.6 \\
& & 24 & 40 & 32.0 & 1.5 & 40 & 35.0 & 1.9 & 40 & 76.1 & 5.5 \\
& & 36 & 40 & 32.4 & 1.6 & 40 & 35.3 & 1.9 & 40 & 71.7 & 5.4 \\
& & 48 & 40 & 34.2 & 1.8 & 40 & 37.3 & 2.1 & 40 & 73.8 & 5.6 \\
\midrule
1000 & network & 6 & 40 & 50.3 & 1.5 & 40 & 53.9 & 1.7 & 40 & 105.7 & 5.0 \\
& & 12 & 40 & 50.5 & 1.4 & 40 & 53.2 & 1.6 & 40 & 97.9 & 4.5 \\
& & 24 & 40 & 44.6 & 1.2 & 40 & 45.1 & 1.2 & 40 & 89.7 & 4.2 \\
& & 36 & 40 & 45.5 & 1.2 & 40 & 45.9 & 1.3 & 40 & 91.3 & 4.3 \\
& & 48 & 40 & 47.2 & 1.4 & 40 & 48.4 & 1.4 & 40 & 89.5 & 4.2 \\
\midrule
& neighbour & 6 & 40 & 56.8 & 1.9 & 40 & 63.3 & 2.4 & 40 & 143.5 & 6.6 \\
& & 12 & 40 & 55.2 & 1.9 & 40 & 59.7 & 2.1 & 40 & 122.4 & 5.7 \\
& & 24 & 40 & 62.5 & 2.1 & 40 & 69.5 & 2.5 & 40 & 132.6 & 6.0 \\
& & 36 & 40 & 47.8 & 1.4 & 40 & 52.4 & 1.6 & 40 & 110.7 & 5.1 \\
& & 48 & 40 & 52.1 & 1.6 & 40 & 57.3 & 1.9 & 40 & 112.9 & 5.2 \\
\bottomrule
\end{tabular*}
\end{table}
\subsubsection{Analysis on neural network model architecture}
In the following, the effect of the model architecture is studied.
In the previous experiments, small feed-forward neural network models with two hidden layers of 400 units per layer were considered.
Here, we additionally train deeper neural network models and measure their performances.
The deeper model consists of four hidden layers of 1000 units each.
We use the same training dataset, which was generated with the training budget of 24 hours.
The performance of the models is evaluated similarly and the results are reported in Table \ref{tab_deeper_model}.
The difference in performance is relatively small.
Although we observed that the neural network model performs noticeably better than the nearest neighbour model, there is no systematic advantage to using the deeper, more expressive model.
\begin{table}
\begin{center}
\caption{Performance of the original and deeper neural network models}
\label{tab_deeper_model}
\begin{tabular*}{\textwidth}{
@{\extracolsep{\fill}}l@{\hskip0.1cm}l@{\hskip0.1cm}
r@{\hskip0.1cm}r@{\hskip0.1cm}r
r@{\hskip0.1cm}r@{\hskip0.1cm}r
r@{\hskip0.1cm}r@{\hskip0.1cm}r}
\toprule
& & \multicolumn{3}{l}{tol: 0.5\%} & \multicolumn{3}{l}{0.25\%} & \multicolumn{3}{l}{0.1\%} \\
\cmidrule{3-5}
\cmidrule{6-8}
\cmidrule{9-11}
size & model & solved & time & iter & solved & time & iter & solved & time & iter \\
\midrule
200 & original & 40 & 7.5 & 1.0 & 40 & 9.3 & 1.6 & 40 & 27.8 & 6.4 \\
& deeper & 40 & 7.4 & 1.0 & 40 & 8.6 & 1.4 & 40 & 28.5 & 6.6 \\
\midrule
600 & original & 40 & 27.5 & 1.2 & 40 & 30.3 & 1.4 & 40 & 64.6 & 4.7 \\
& deeper & 40 & 28.6 & 1.2 & 40 & 33.3 & 1.6 & 40 & 63.1 & 4.6 \\
\midrule
1000 & original & 40 & 44.6 & 1.2 & 40 & 45.1 & 1.2 & 40 & 89.7 & 4.2 \\
& deeper & 40 & 43.9 & 1.1 & 40 & 44.3 & 1.2 & 40 & 97.5 & 4.6 \\
\bottomrule
\end{tabular*}
\end{center}
\end{table}
\subsection{Evaluation: Best upper bound within time limit}
\subsubsection{Setups}
All of the discussions so far aimed to find a primal feasible solution with guaranteed suboptimality (e.g.\ 0.1\%).
In this section, we consider a case where we are only interested in obtaining as good a primal feasible solution as possible within a prescribed time budget.
This is of interest when we do not necessarily have enough time to achieve a given proven tolerance.
To evaluate the performance of the primal heuristics in this setup, we compare them by the quality (suboptimality) of feasible solutions found within a prescribed time limit.
The neural network and nearest neighbour partial-fixing primal heuristics do not require Dantzig-Wolfe decomposition to be run and so are used as stand-alone methods.
That is, we formulate the partially-fixed \gls{MILP} instances and solve them sequentially (from those with small threshold values).
We note that even though we are not interested in computing a lower bound, the other primal heuristics still require Dantzig-Wolfe decomposition to be run.
We use the same neural network and nearest neighbour models as in the previous experiments.
The configurations of the other primal heuristics are the same as well.
\subsubsection{Results}
For evaluation, the same 40 test instances are used.
The results are shown in Table \ref{ph_tab_standalone_summary}.
The columns labelled `solved' give the number of test instances where the primal heuristics found a feasible solution within the time limits of 1, 2 and 5 minutes respectively.
Among the instances where the primal heuristics found a feasible solution, the gaps between the best upper bounds found by the primal heuristics and the best lower bounds found by running CPLEX for 4 hours are computed and shown in the columns labelled `suboptimality'.
When the time limit is 1 minute, just finding a primal feasible solution may not be trivial.
On the instances of size 200, all of the primal heuristics can find feasible solutions.
However, on larger instances, such as those of size 600 or 1000, only the neural network partial-fixing and the feasibility recovery local search primal heuristics found a feasible solution on all of the test instances.
We also note that the neural network partial-fixing found primal feasible solutions of smaller suboptimality compared with the feasibility recovery local search primal heuristic on average.
The column combination primal heuristic failed to find any primal feasible solutions on more than half of the test instances of size 1000 within 1 minute.
With a longer time limit, the primal heuristics are more likely to find primal feasible solutions.
In the 200-generator case, the neural network partial-fixing performs best on average for all time limits.
However, in the 600- and 1000-generator cases, given a sufficiently long time limit such as 5 minutes, the \gls{RMP} partial-fixing finds the primal feasible solutions with the smallest suboptimality on average, and the neural network partial-fixing finds the second-best solutions.
We note that in all setups the nearest neighbour partial-fixing performs worse than the neural network partial-fixing.
\begin{table}
\begin{center}
\caption{Quality of feasible solutions found within time limits}
\label{ph_tab_standalone_summary}
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}} llrrrrrr}
\toprule
& & \multicolumn{2}{l}{1 minute} & \multicolumn{2}{l}{2 minutes} & \multicolumn{2}{l}{5 minutes} \\
\cmidrule{3-4}
\cmidrule{5-6}
\cmidrule{7-8}
size & method & solved & subopt. & solved & subopt. & solved & subopt. \\
\midrule
200 & feasibility recovery & 40 & 0.208 & 40 & 0.159 & 40 & 0.147 \\
& column combination & 40 & 0.094 & 40 & 0.084 & 40 & 0.078 \\
& RMP partial-fixing & 40 & 0.056 & 40 & 0.050 & 40 & 0.047 \\
& network partial-fixing & 40 & \textbf{0.039} & 40 & \textbf{0.034} & 40 & \textbf{0.028} \\
& nearest partial-fixing & 40 & 0.054 & 40 & 0.043 & 40 & 0.032 \\
\midrule
600 & feasibility recovery & 40 & 0.317 & 40 & 0.172 & 40 & 0.088 \\
& column combination & 10 & 0.107 & 32 & 0.062 & 40 & 0.024 \\
& RMP partial-fixing & 22 & 0.117 & 40 & \textbf{0.028} & 40 & \textbf{0.021} \\
& network partial-fixing & 40 & \textbf{0.067} & 40 & 0.039 & 40 & 0.026 \\
& nearest partial-fixing & 39 & 0.242 & 40 & 0.046 & 40 & 0.032 \\
\midrule
1000 & feasibility recovery & 40 & 0.301 & 40 & 0.277 & 40 & 0.072 \\
& column combination & 0 & - & 23 & 0.112 & 40 & 0.033 \\
& RMP partial-fixing & 5 & 0.220 & 40 & 0.079 & 40 & \textbf{0.014} \\
& network partial-fixing & 40 & \textbf{0.243} & 40 & \textbf{0.042} & 40 & 0.028 \\
& nearest partial-fixing & 39 & 0.550 & 40 & 0.052 & 40 & 0.035 \\
\bottomrule
\end{tabular*}
\end{center}
\end{table}
\subsection{Review of primal heuristics based on decomposition}
\label{ph_subsec_review_of_primal_heuristics_based_on_decomposition}
\citet{MerlinAndSnadrin1983} solved \gls{UC} by applying Lagrangian relaxation and using a subgradient method for the dual problem.
In each iteration of the subgradient method, the pricing subproblems \eqref{eq_pricing_problem} were solved.
They then constructed a primal solution using the pricing subproblem solutions and tested if this was feasible for \eqref{problem_eq_mip}.
If the generation capacity was not sufficient to meet the demand or the spinning reserve at some time periods, they modified the dual step direction to increase the dual variable corresponding to the violated constraint to encourage more generators to be on at the shortage periods.
We note that this modification affects the optimisation of the dual variable and the resulting lower bounds may be suboptimal.
Instead of modifying the dual step direction, \citet{Guanetal1992} fixed infeasibility of the primal solutions by applying local search.
In each iteration, a primal solution was obtained from the subproblem solutions.
If this was infeasible, the cheapest available generators were committed to meet the demand and the spinning reserve.
After a feasible commitment decision was found, the amount of power output of each generator was optimised.
That is, the values of the binary variables in the original problem were fixed to the values corresponding to the feasible commitment and the resulting \gls{LP} was solved to compute the amount of power output.
In this paper, we refer to this approach as the {\em feasibility recovery local search primal heuristic}.
Another heuristic based on decomposition is the {\em column combination primal heuristic}.
In this heuristic, the solutions to the pricing subproblems are stored in a pool.
Then a constraint is added to the problem \eqref{problem_eq_mip} to restrict the pattern of the solution to those in the pool.
Let $\{ x_{\idxcomponent i}' = (u_{\idxcomponent i}, z_{\idxcomponent i}) \in \mathbb{R}^n \times \{0, 1\}^m \mid i \in J_{\idxcomponent} \}$ be the pool of solutions to pricing subproblem $\idxcomponent$ where $J_\idxcomponent$ is the index set.
Then, binary variables $w_{\idxcomponent i}$ ($\oneofidxcomponents$, $i \in J_\idxcomponent$) are added to the original problem \eqref{problem_eq_mip} together with the following constraints
\begin{gather*}
z_\idxcomponent = \sum_{i \in J_\idxcomponent} w_{\idxcomponent i} z_{\idxcomponent i}', \qquad \oneofidxcomponents, \\
\sum_{i \in J_\idxcomponent} w_{\idxcomponent i} = 1, \qquad \oneofidxcomponents, \\
w_{\idxcomponent i} \ge 0, \qquad \oneofidxcomponents, i \in J_\idxcomponent.
\end{gather*}
This is referred to as the restricted master IP by \citet{Vanderbeck2005}.
A standard \gls{MILP} solver can be used to solve the problem but the solution space is much smaller than the original problem.
This method was used to solve \gls{UC} by \citet{Takritietal2000} and to solve a stochastic version of \gls{UC} problems by \citet{Schulzeetal2017}.
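As an illustration of the column combination idea, the following sketch enumerates the restricted solution space by brute force: it picks exactly one column per subproblem (the role of the $w_{\idxcomponent i}$ variables) and keeps the cheapest combination whose total committed capacity covers demand in every period. The data layout and function name are our own; a real implementation would instead hand the restricted master IP to an \gls{MILP} solver.

```python
from itertools import product

def column_combination(pools, demand):
    """Brute-force sketch of the column combination primal heuristic.

    pools[s]  -- list of (cost, schedule) pairs for subproblem s, where
                 schedule[t] is the capacity committed in period t.
    demand[t] -- demand that must be covered in period t.

    Returns the cheapest feasible choice of one column per subproblem.
    """
    best_cost, best_choice = float("inf"), None
    for choice in product(*pools):          # one column per subproblem
        total = [sum(col[1][t] for col in choice)
                 for t in range(len(demand))]
        # the complicating (demand) constraints of the original problem
        if all(total[t] >= demand[t] for t in range(len(demand))):
            cost = sum(col[0] for col in choice)
            if cost < best_cost:
                best_cost, best_choice = cost, choice
    return best_cost, best_choice
```

Exhaustive enumeration is exponential in the number of subproblems, which is exactly why the restricted master IP is solved with a standard solver in practice; the sketch only makes the restricted search space explicit.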
\subsection{RMP partial-fixing}
\label{ph_subsec_primal_heuristics_based_on_rmp}
In this section, we introduce a new primal heuristic which uses the \gls{RMP} to construct primal feasible solutions.
In the following, we assume that the \gls{RMP} \eqref{problem_eq_rmp} is feasible, which may be ensured by adding some artificial columns.
First we solve the \gls{RMP} \eqref{problem_eq_rmp} without regularisation and for each subproblem $\idxcomponent$ compute a weighted average of columns
\begin{equation*}
\hat{x}_\idxcomponent := \sum_{i \in \hat{I}_\idxcomponent} x_{\idxcomponent i} p_{\idxcomponent i}, \qquad \oneofidxcomponents,
\end{equation*}
where $\hat{I}_\idxcomponent$ is the index set of columns $\{x_{\idxcomponent i} \mid i \in \hat{I}_\idxcomponent\}$ used to formulate the \gls{RMP} for each $\idxcomponent$.
Using solutions $\hat{x}_\idxcomponent$ for each subproblem $\idxcomponent$, we construct a primal solution $\hat{x}$.
Although the integer variables in $x_{\idxcomponent i}$ satisfy the integrality constraint for any $\idxcomponent$ and $i \in \hat{I}_\idxcomponent$, those in $\hat{x}$ may not.
In this primal heuristic, we check whether the elements of $\hat{x}$ that correspond to binary decisions satisfy the integrality constraint and fix those which do.
In this way, we obtain a partially-fixed \gls{UC} problem, which is then solved by an \gls{MILP} solver.
In each iteration of the column generation procedure, we repeat the above process.
We refer to this primal heuristic as the {\em \gls{RMP} partial-fixing primal heuristic}.
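For illustration, the averaging-and-fixing step for a single subproblem might be sketched as follows. This is a simplified stand-in: we pass in only the binary parts of the columns, and the function name and tolerance are our own.

```python
def rmp_partial_fix(columns, weights, tol=1e-6):
    """Average the binary parts of one subproblem's columns using the RMP
    weights and report which entries are (near-)integral and hence fixable.

    columns -- list of 0/1 vectors (the binary part of each column)
    weights -- the convex combination weights p from the RMP solution
    """
    dim = len(columns[0])
    x_hat = [sum(w * col[j] for col, w in zip(columns, weights))
             for j in range(dim)]
    # entries within `tol` of 0 or 1 are fixed to the nearest integer;
    # fractional entries are left free for the partially-fixed MILP
    fixed = {j: round(x_hat[j]) for j in range(dim)
             if min(x_hat[j], 1.0 - x_hat[j]) < tol}
    return x_hat, fixed
```

When exactly one weight is positive (and hence equal to 1), every entry of the average is integral and the whole subproblem is fixed, which is the situation the counting argument below shows holds for at least $S - C$ subproblems.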
Remarkably, the number of elements of $\hat{x}$ which violate the integrality constraint is bounded by a constant independent of the number of generators, assuming the simplex method is used to solve the \gls{RMP}.
Thus, the difficulty of solving the partially-fixed \gls{MILP} is bounded.
This is a notable feature of this approach compared with typical primal heuristics.
For example, the problem solved in the column combination primal heuristic gets harder as the number of generators increases.
The above property can be shown using an argument similar to the one given by \citet{Bertsekasetal1983}.
Let $C$ be the number of complicating constraints in \eqref{problem_eq_mip}.
That is, $a \in \mathbb{R}^C$.
If we use the formulation shown in Appendix \ref{sec_appendix_problem_formulation}, $S$ is the number of generators and $C$ is double the number of time periods.
In the following we assume that $S > C$.
We note that in the \gls{RMP} \eqref{problem_eq_rmp}, there are $S + C$ equality constraints.
Thus, any basic solution to the \gls{RMP} has at most $S + C$ variables positive.
The second constraint in \eqref{problem_eq_rmp} ensures that for each $\idxcomponent$ at least one variable among $\{p_{\idxcomponent i}\}_{i \in \hat{I}_\idxcomponent}$ is positive.
Therefore, at most $C$ generators have two or more positive values among $\{p_{\idxcomponent i}\}_{i \in \hat{I}_\idxcomponent}$.
In other words, at least $S - C$ generators have exactly one positive value of $\{p_{\idxcomponent i}\}_{i \in \hat{I}_\idxcomponent}$.
In such a case, exactly one of $\{p_{\idxcomponent i}\}_{i \in \hat{I}_\idxcomponent}$ is equal to 1 and all of the others are equal to 0.
Thus, $\hat{x}_\idxcomponent$ equals exactly one of $\{x_{\idxcomponent i}\}_{i \in \hat{I}_\idxcomponent}$ and has integer values.
In practice the number of generators with multiple non-zeros in $\{p_{\idxcomponent i}\}_{i \in \hat{I}_\idxcomponent}$ may be smaller than $C$.
Furthermore, even if multiple elements in $\{p_{\idxcomponent i}\}_{i \in \hat{I}_\idxcomponent}$ are positive, only a small part of $\hat{x}_\idxcomponent$ may have fractional values.
We note that the \gls{RMP} \eqref{problem_eq_rmp} is not necessarily feasible.
Typically we need to run a few column generation iterations to gather enough columns to make the \gls{RMP} feasible.
Furthermore, even if the \gls{RMP} is feasible, the partially-fixed \gls{MILP} is not necessarily feasible.
In our experiments on \gls{UC} instances with practical data, we do not observe instances where the partially-fixed schedule is infeasible and typically the above primal heuristic successfully finds a primal feasible solution.
However, on some instances, it fails to provide a solution with small suboptimality, such as 0.1\%.
To handle such cases, we modify the method as follows.
Each time we run the method, we record the upper bound.
If the \gls{RMP} is feasible but the upper bound has not improved for a prescribed number of successive iterations (3 iterations in our implementation), we also unfix integer variables that are adjacent in time to those with fractional values.
For example, if the on-off status of generator $g$ at time $t$ is left free, we unfix the on-off status of the same generator at times $t-1$ and $t+1$.
By relaxing more binary variables, the partially-fixed \gls{UC} becomes harder to solve but is more likely to yield a better solution.
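A sketch of this unfixing rule is given below; representing the free variables as (generator, period) pairs is our own illustrative choice.

```python
def expand_free_set(free, num_periods):
    """Unfix the on/off variables temporally adjacent to already-free ones.

    free        -- set of (generator, period) pairs currently left unfixed
    num_periods -- length of the planning horizon
    """
    expanded = set(free)
    for g, t in free:
        if t > 0:
            expanded.add((g, t - 1))   # unfix the previous period
        if t + 1 < num_periods:
            expanded.add((g, t + 1))   # unfix the next period
    return expanded
```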
\subsection{Partial-fixing based on machine learning}
\label{ph_subsec_primal_heuristics_with_pretrained_model}
All of the primal heuristics discussed above use information gathered through the execution of the column generation procedure (or the subgradient methods).
In this section, we consider a primal heuristic based on machine learning.
We assume that the problem \eqref{problem_eq_mip} is to be solved repeatedly with different demand data $\omega$.
In the training phase, we sample $\omega$ (e.g.\ from historical data) and solve as many training instances as possible.
In this way, we obtain a data set consisting of the sampled $\omega$ and the corresponding optimal values of the binary variables.
Then, we use the data set to create a prediction model which takes $\omega$ as input and predicts the value of the binary variables.
We consider two alternatives: a neural network model and a nearest neighbour model.
The neural network model we consider in this paper is a feed-forward neural network \cite{Bishop2006}.
The model takes $\omega$ as input and outputs a vector, each element of which is the predicted probability that the corresponding binary variable equals 1.
The neural network model is trained as a standard binary classification problem.
The nearest neighbour model is also considered as an alternative model to predict the values of binary variables.
When solving a new instance, the nearest neighbour model compares the problem parameter $\omega$ with those in the training data set.
A prescribed number of the closest neighbours are selected, and the average of the values of the binary variables is computed and used as the predicted probability.
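A minimal sketch of this nearest neighbour prediction is shown below (pure Python, squared Euclidean distance; the helper name and data layout are our own).

```python
def knn_predict(train, omega, k=3):
    """Nearest-neighbour sketch: average the binary labels of the k
    training instances whose parameter vectors are closest to omega.

    train -- list of (parameters, binary_labels) pairs
    omega -- parameter vector of the new instance
    """
    def dist(a, b):
        # squared Euclidean distance between parameter vectors
        return sum((x - y) ** 2 for x, y in zip(a, b))

    nearest = sorted(train, key=lambda item: dist(item[0], omega))[:k]
    m = len(nearest[0][1])
    # element-wise average of the neighbours' binary labels
    return [sum(item[1][j] for item in nearest) / k for j in range(m)]
```

The averaged labels play the same role as the neural network's output probabilities and feed into the same threshold-based fixing scheme described below.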
The output of a prediction model can be used to find a feasible solution.
The simplest approach is to round the prediction to the nearest integer.
However, such a solution is usually infeasible when the problem is highly constrained.
Instead, as described below, we use the prediction to fix only a subset of binary variables so that the problem size is reduced.
Similar ideas have been explored by \citet{Xavieretal2020} and \citet{Wang2021}.
Pick a threshold value $\alpha \in (0.5, 1]$.
If the prediction is larger than $\alpha$ (smaller than $1 - \alpha$), fix the corresponding binary variables to 1 (0), and leave all the other variables unfixed.
Then the resulting partially-fixed \gls{MILP} is solved with an \gls{MILP} solver.
Choosing a suitable threshold value is a subtle task.
If we fix many binary variables the problem becomes small and can be solved quickly.
However, fixing too many variables may result in infeasibility or unacceptably large suboptimality.
On the other hand, fixing fewer variables results in a harder problem which takes longer to solve.
Instead of fixing the threshold value to a single value a priori, we try various values adaptively.
Namely, we first try a small threshold value and solve the partially-fixed \gls{MILP}.
If the resulting problem is infeasible, or once it has been solved, we move on to a larger threshold value.
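The adaptive threshold loop can be sketched as follows. The `solve_fixed` callback stands in for formulating and solving the partially-fixed \gls{MILP} with an optimisation solver; all names here are illustrative.

```python
def adaptive_partial_fixing(prediction, thresholds, solve_fixed):
    """Try increasingly conservative threshold values.

    prediction  -- predicted probability for each binary variable
    thresholds  -- increasing values in (0.5, 1], aggressive (small) first
    solve_fixed -- callback: given {index: value} fixings, returns an
                   objective value, or None if the fixed problem is
                   infeasible (or no solution was found in time)

    Returns the best objective found and the record of attempts.
    """
    best, history = None, []
    for alpha in thresholds:
        # fix confident predictions; leave the rest free
        fixing = {j: 1 for j, p in enumerate(prediction) if p > alpha}
        fixing.update(
            {j: 0 for j, p in enumerate(prediction) if p < 1.0 - alpha})
        obj = solve_fixed(fixing)
        history.append((alpha, obj))
        if obj is not None and (best is None or obj < best):
            best = obj
    return best, history
```

Smaller thresholds fix more variables, so the early iterations are cheap but may be infeasible or poor; later iterations fix fewer variables and trade speed for solution quality, exactly as discussed above.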
We refer to the methods based on the neural network and the nearest neighbour model as the {\em neural network partial-fixing primal heuristic} and the {\em nearest neighbour partial-fixing primal heuristic} respectively.
In the context of Dantzig-Wolfe decomposition, the above method can be combined with the column generation procedure.
At the end of each iteration of the column generation procedure, we run one of the above primal heuristics and solve partially-fixed \gls{MILP}s using an optimisation solver.
We may impose a limit on the amount of time spent by the primal heuristic.
Namely, after the solver spends a certain amount of time, it is halted and the next iteration of the column generation procedure is executed.
In the next run of the primal heuristic, the solver is resumed from where it was interrupted in the previous iteration.
\section{The Interstellar Medium}
The Interstellar Medium (ISM) constitutes the reservoir of matter, that was and still is turned into stars and planets and gave also rise to the existence of our own solar system and our world.
Even if its study were not already interesting for that reason alone, there are many complex processes that impact it chemically as well as energetically.
The ISM is being enriched with heavier elements by the more massive stars in their late evolutionary phases, but also diluted by the influx of extragalactic matter.
Feedback from the different phases of stellar life, as well as cosmic rays and AGN, injects energy, which is released by emission in many atomic and molecular lines ([C\,{\sc ii}], [O\,{\sc i}], C$_2$H$_2$, H$_2$O, PAHs, etc.) as well as thermal emission by different kinds of dust.
Even though the general scenario of star formation is reasonably well understood, the details of the complex interplay of stellar radiation, gravitation, turbulence and magnetic fields, which determine the timescales and the interstellar mass function, are not.
A large number of these lines as well as the peak of the thermal emission are located in the mid- to far-infrared (MIR 3-30$\mu$m, FIR 30-300$\mu$m) wavelength range as illustrated in Fig.~\ref{Meixner:fig:ismsed} (top), making this portion of the electromagnetic spectrum key to studying the ISM and a multitude of related scientifically interesting phenomena.
However, this is also a rather difficult spectral range to observe as shown in the lower part of Fig.~\ref{Meixner:fig:ismsed}, which illustrates the atmospheric transmission at the levels of the Atacama Large Millimeter/submillimeter Array (ALMA) and the Stratospheric Observatory for Infrared Astronomy (SOFIA). Telluric water vapor and ozone leave only certain windows in the MIR and sub-millimeter ranges, while the FIR is effectively unobservable from the ground and requires observatories in the stratosphere or in space.
\begin{figure}[htp]
\begin{minipage}{0.50\textwidth}
\includegraphics[width=\textwidth]{MeixnerIsmSED.pdf}
\end{minipage}
\hspace{0.5cm}
\begin{minipage}[t]{0.4\textwidth}
\caption{The Spectral Energy Distribution (SED) of the interstellar medium from mid- to far-infrared wavelengths (top) and the corresponding transmission spectra of the Earth atmosphere at the operating altitudes of ALMA and SOFIA (bottom).}
\label{Meixner:fig:ismsed}
\end{minipage}
\end{figure}
\section{JWST}
Launched at the end of 2021, the James Webb Space Telescope (JWST) has provided access to the full MIR spectrum from space since its first science data were released in July 2022.
Its high spatial resolution, similar to that of Hubble in the visible spectrum, and its access to PAH emission as well as the ro-vibrational lines of molecular hydrogen and water, make JWST an excellent probe of star formation regions.
JWST images of 30 Doradus, aka the Tarantula Nebula, show the stars, molecular hydrogen and PAHs with NIRCam (0.6-5~$\mu$m) and warm dust and PAHs with MIRI (4.9 to 28.8~$\mu$m).
These kinds of maps provide an unprecedented amount of detail at those wavelengths and will play an important role in further investigating the hot and warm ISM.
The spectroscopic capabilities of JWST are considerable as well, yet limited in terms of spectral resolution with R~$\approx 2700$ \citep{meixner:cite:Boeker2022} for NIRSpec and R~$\approx 1300$ to 3700 for MIRI \citep{meixner:cite:Wells2015}.
This is where SOFIA provided complementary high spectral resolution spectroscopy (R~$\approx10^5$) with EXES \citep{meixner:cite:Richter2018}, even though at lower spatial resolution and sensitivity.
\section{SOFIA}
\subsection{Importance and Successes}
With its five exchangeable scientific instruments (SIs), SOFIA nicely filled the large spectral gap in the FIR between JWST and ALMA and provided further complementary capabilities, like high spectral resolution at JWST wavelengths with EXES and the ability to observe very bright sources with FORCAST, which filled in the overexposed areas in MIR maps of the Galactic Center region made by Spitzer \citep{meixner:cite:Hankins2020}.
The recent discovery of water on the sunlit surface of the Moon by \cite{meixner:cite:Honniball2021} falls into that category as well.
The heterodyne instrument GREAT covers such important atomic fine structure lines as [C\,{\sc ii}]~and [O\,{\sc i}]~at the highest spectral resolutions of up to R~$\approx 10^6$ and fills in the spectral gaps that are inaccessible for ALMA due to atmospheric extinction.
This enabled not only the discovery of new molecules in the ISM, like helium hydride \citep{meixner:cite:Guesten2019}, but also very detailed kinematic studies, e.g.\ of the feedback processes in Orion by \cite{meixner:cite:Pabst2019}, which triggered a very successful SOFIA legacy program by \cite{meixner:cite:Schneider2020}.
When sensitivity became an issue and could be gained by sacrificing spectral resolution, in particular for extragalactic work, the FIFI-LS spectrometer provided a good alternative for observations of fine structure lines as shown by \cite{meixner:cite:Fadda2021}, \cite{meixner:cite:Spinoglio2022} or \cite{meixner:cite:Pineda2018}.
Last but not least, where very high sensitivity was required to reveal the peak of the cold dust emission of high redshift objects, HAWC+ provided the FIR imaging capability.
HAWC+, however, also provided an entire new dimension for ISM studies, that had before only briefly been available with ISO in the FIR.
Polarization mapping revealed the vectors of magnetic fields in the ISM thanks to the FIR emission of aligned, elongated dust particles. The resulting publications sparked a great deal of new observational as well as theoretical interest in this previously rather dormant field \citep{meixner:cite:Pillai2020, meixner:cite:Lopez-Rodriguez2021, meixner:cite:Zielinski2021}.
\subsection{Mission Success and Conclusion}
In the face of the tremendous scientific successes of this true ISM-Machine, the decision by NASA and DLR to end the SOFIA mission after only 9 observing cycles is certainly very hard to understand.
Following the recommendations from the Flagship Mission Review from 2019, the project has transformed since then with a tremendous growth in science productivity as demonstrated in the SOFIA Status and Future Prospects Report \citep{meixner:cite:Rangwala2022}\footnote{This report was already prepared for NASA's Senior Review Process.}.
Annual publication rates for SOFIA have doubled over the past three years on topics ranging from the Earth to high-z galaxies \citep{meixner:cite:Schmelz2021}.
The Decadal Survey Astro~2020 recommended to NASA to terminate the SOFIA mission, which unfortunately was based on outdated ($>2$~years) and incorrect information\footnote{SOFIA science addresses 50\% of Astro~2020 key science questions, not 10\%.}.
NASA holds Astro 2020 recommendations as superior to Senior Review process results and hence removed SOFIA from the Senior Review.\footnote{This avoided potentially ending up with two contradicting recommendations.}
Arguments that SOFIA's science productivity was insufficient are easily refuted by comparing the average observing time per refereed publication with that of Herschel. Eight years after launch, Herschel had provided about 23,500 hours of observing time and produced 2,145 publications, i.e.\ $\approx$11~h$/$paper. SOFIA, with 3,458 hours and 330 publications in the 8 years since achieving full science operational capability in 2014, arrives at a very similar 10.5~h$/$paper.
Fortunately the last year was particularly productive in terms of observations, so there is a considerable amount of science data in the IRSA archive. As there is only a minimal post-operational phase of one year planned by NASA at this point, we hope DLR will provide the means to conduct data reprocessing also for the time before Cycle~5, advanced water vapor and pointing analysis and more comprehensive corrections, which are currently not included in the plans. In the next section we'll lay out that the time to the next FIR mission might be rather long. Already collected FIR photons might thus be even more valuable for astronomy and funds for maximizing their scientific usability will be well spent.
\section{Future Far-Infrared Observatories}
\subsection{History and Guidance}
Fig.~\ref{Meixner:fig:firhistory} illustrates the history of FIR astronomy by showing the operational phases of all major observatories as green boxes, starting in the sixties until today and the current outlook towards 2045. Up to today, there was an almost continuous capability to supply astronomers with current FIR observations except for the few years between ISO and Spitzer. With the sudden cancellation of SOFIA, which was originally scheduled to continue until 2034, and the cancellation of SPICA by ESA in 2021, the opportunities for FIR data collection have become sparse.
In \cite{meixner:cite:Rangwala2022} Page~4, a traceability matrix can be found, that links Astro 2020 science questions to key measurements in the MIR and FIR, that could have been performed with SOFIA.
This list should still be useful as a collection of science requirements for the design of future stratospheric- and space-observatories.
\subsection{New Opportunities in Space}
Even though Astro~2020 recommended the cancellation of SOFIA, it acknowledged the importance of the FIR spectral region for astrophysics and recommended the launch of a Probe space mission for 2030 that will specialize either in FIR- or X-ray- astronomy.
NASA followed this up by issuing an announcement of opportunity with a proposal deadline of October 2023, a downselection at the end of 2025, a cost cap of 1B$\$$ excluding the launcher, and a launch date no later than 2032 \citep{meixner:cite:NASA-AO2022}.
If history is a guide, such a schedule is highly optimistic. In reality a launch might rather be expected in the mid 2030s, not to mention that continuing the SOFIA mission until its planned end would have cost substantially less, especially when taking into account the launcher as well.
Given that the X-ray community is also competing for this opportunity, it is anything but a done deal that NASA's Probe mission will be dedicated to the FIR.
If that doesn't happen, then also the dream of a more ambitious true observatory for the FIR such as ORIGINS in the 2040s \citep{meixner:cite:Meixner2019} may become unrealistic with observational FIR astronomy having lost a lot of its expertise by then.
Therefore at this point it is quite important for the FIR community to look towards the future which at least in space will be the Probe mission. There are four mission proposals for the FIR named PRIMA (PI, Jason Glenn)\footnote{\url{https://prima.ipac.caltech.edu}}, SPICE (PI Lee Mundy)\footnote{\url{https://asd.gsfc.nasa.gov/spice/index.html}}, FIRSST (PI Asantha Cooray) and SALTUS (PI Chris Walker), which were presented at the IR Astrophysics Workshop 2022 in Colorado. The concepts comprise more traditional space observatories with cold telescopes like PRIMA and FIRSST, and more unusual ones like the interferometer SPICE or the large inflatable telescope concept SALTUS. Details as presented at the workshop are available at the workshop website \citep{meixner:cite:Irstig2022}.
\subsection{Stratospheric Opportunities}
In the meantime the FIR community should also investigate other opportunities to reclaim a permanent capability in that part of the spectrum.
This will in particular enable more time-dependent FIR astronomy, which we consider to be still in its infancy.
The fairly short life spans of FIR missions so far have been a hindrance while time-domain astronomy has really taken off in other parts of the electromagnetic spectrum.
The astrophysical community should investigate the available potential in the FIR.
SOFIA was likely the last airplane observatory, and future stratospheric platforms will probably be of the lighter-than-air category.
Current balloon experiments are, however, rather short lived, extremely weather dependent with very few launch opportunities, can't stay in a particular region for long and have only a 50~\% survival rate upon landing.
Such missions are still seen rather as serving technology maturation and the training of instrumentalists than being able to support serious general observatory type projects for the astronomical community.
This school of thought needs to change as better technologies become available that could address many of the shortcomings mentioned above.
Longer lived robotic stratospheric platforms with propulsion may also be interesting to a wider community including UV- and FIR-astronomy but also climate research and general Earth observation \citep{meixner:cite:Miller2014}.
\section{Conclusion}
Even though the end of SOFIA is a blow to FIR astronomy, the mission and its team have performed excellently and are concluding at peak performance with much data in the archive that await analysis and publication.
JWST is the observatory now to study the ISM in warm/hot conditions, while there will be new opportunities for observatories that can study the cold ISM from space or from the stratosphere.
\section{INTRODUCTION}
Hot channel (HC) refers to the high temperature structure that is
revealed first by coronal images of AIA (Atmospheric Imaging
Assembly) 131 \AA \ passband (sensitive to temperature of $\sim$
10 MK), while the structure is invisible from cooler temperature
images, e.g., images of the AIA 171 \AA \ passband (sensitive to
temperature of $\sim$ 0.6 MK) (Zhang et al. 2012; Cheng et al.
2013a, 2013b, 2014a, 2014b, 2014c; Li \& Zhang 2013). HC appears
as a hot blob structure if observed along the channel axis (Cheng
et al. 2011; Patsourakos et al. 2013; Song et al. 2014a, 2014b)
due to the projection effect. Hereafter, we will use HC to refer
to both hot channel and hot blob structures.
HC has been generally regarded as a proxy of magnetic flux rope
(MFR, a volumetric plasma structure with the magnetic field lines
wrapping around a central axis) since its discovery with AIA on
board \textit{Solar Dynamics Observatory (SDO)}. This is supported
by the following observational studies: (1) Cheng et al. (2014a)
observed an HC that showed helical threads winding around an axis.
In the meantime, cool filamentary materials descended spirally
down to the chromosphere, providing direct observational evidence
of the intrinsically helical structure of the HC; (2) Cheng et al. (2011)
reported that HC can grow during the eruption, similar to the MFR
growth process according to the classical magnetic reconnection
scenario in eruptive flares; Song et al. (2014a) presented the
formation process of an HC during a CME and found that the HC was
formed from coronal arcades through magnetic reconnection. Their
works further support that the HC is an MFR structure based on the
relation between HC and magnetic reconnection; (3) Cheng et al.
(2014b) found an HC was initially cospatial with a prominence,
then a separation of the HC top from that of the prominence was
observed during the eruption initiated by the ideal kink
instability (T\" or\" ok et al. 2004). It is widely accepted that
prominence/filament can exist at the dip of a flux rope (Rust \&
Kumar 1994). Therefore, this observation offered another important
support that the HC is an MFR.
Besides the HC, several other structures observed in the lower corona have
also been proposed as MFRs, including the sigmoid structure in active
region (Titov \& D\'{e}moulin 1999; Mckenzie \& Canfield 2008) and
coronal cavity in quiescent region (Wang \& Stenborg 2010). A
sigmoid has either a forward or reverse S-shape with enhanced
X-ray emissions (implying an entity of high temperature) with its
center straddling along the polarity inversion line of the hosting
active region. Zhang et al. (2012) showed that the HC initially
appeared like a sigmoidal structure and then changed to a
semi-circular shape. Therefore, sigmoid and HC might represent the
same structure, their different shapes are likely from different
perspectives and evolution phase. Both structures are featured by
high temperature, a possible result of flare magnetic reconnection
(e.g., Song et al. 2014a, 2014b). A coronal cavity, on the other
hand, observed as a dark circular or oval structure above the solar limb
in coronal images, with temperatures close to those of the background corona
(Fuller et al. 2008; Gibson et al. 2010; Kucera et al. 2012), is
also interpreted as an MFR. As mentioned, the long-studied feature of
solar filament/prominence shown best in H$\alpha$ images has been
interpreted as being situated along the dips of an MFR. Therefore,
it is not rare to find a prominence lying in the dip of a coronal cavity. The
eruption of a coronal cavity (or filament) from a quiescent region
does not show a high-temperature signature like the HC, which might be
attributed to the lack of appreciable heating from the weak
magnetic reconnection (e.g., Song et al. 2013).
According to the descriptions above, at least two different types
of MFRs can be identified in the inner corona depending on their
temperatures, i.e., high-temperature MFR like HC and
low-temperature MFR like coronal cavity. Note that it is possible
that the HC has a low initial temperature but heated later by
flare magnetic reconnection during the eruption (e.g., Song et al.
2014a, 2014b). One obvious question arises as what the difference
is between these two MFR structures when they are detected in situ
near 1 AU. Magnetic cloud (MC), with lower temperature than the
background solar wind, is a well known interplanetary structure
(Burlaga et al. 1981; Lepping et al. 1990). Can the HC remain
hotter than the background solar wind at 1 AU, or will it
evolve into a cool MC? In this paper, we will try to address this
question with instruments on board \textit{Solar TErrestrial
RElations Observatory (STEREO)} through tracing an HC eruption
from the Sun to $\sim$ 1 AU. In Section 2, we introduce the
instruments. The observations and discussion are presented in
Section 3, followed by a summary in the last section.
\section{INSTRUMENTS}
Our event was observed by three spacecraft including \textit{SDO},
\textit{SOHO (Solar and Heliospheric Observatory)}, and
\textit{STEREO}. The AIA on board \textit{SDO} provides images of the
solar atmosphere in 10 narrow UV and EUV passbands with a high
cadence (12 seconds), high spatial resolution (1.2 arcseconds) and
large FOV (1.3 R$_\odot$). The AIA passbands cover a large
temperature range from 0.6 to 20 MK (O'Dwyer et al. 2010; Del
Zanna et al. 2011; Lemen et al. 2012). During an eruption, the
131~\AA\ passband is sensitive to the hot plasma from flare
regions and erupting HC (e.g., Zhang et al. 2012; Cheng et al.
2011; Song et al. 2014a, 2014b). AIA's high cadence and broad
temperature coverage make it possible to construct
differential emission measure (DEM) models of coronal plasma (Cheng
et al. 2012 and references therein). In addition, the COR
coronagraph instrument (Howard et al. 2008) on board
\textit{STEREO} (Kaiser et al. 2008) and LASCO on board
\textit{SOHO} (Domingo et al. 1995) provide CME images in the
outer corona from different perspectives. Heliospheric Imager (HI,
Howard et al. 2008) on board \textit{STEREO} images the whole
propagation process of the associated ICME from near the Sun to
$\sim$ 1 AU. PLASTIC and IMPACT on board \textit{STEREO} measure
the solar wind properties and interplanetary magnetic field. Data
from the above instruments are analyzed in the following section.
\section{OBSERVATIONS AND DISCUSSION}
On 2012 January 27, an X1.7 class soft X-ray (SXR) flare was
recorded by the \textit{Geostationary Operational Environmental
Satellite (GOES)}, which started at 17:37 UT and peaked at 18:37
UT. The flare location was at $\sim$N33W85 (NOAA 11402) from the
perspective of the Earth. Figure 1 shows the positions of
different spacecraft in the ecliptic plane, including
\textit{SDO/SOHO}, \textit{STEREO} A and B. During this flare,
\textit{STEREO} A and B were 107.8$^{\circ}$ west and
114.5$^{\circ}$ east of the Earth with a distance of 0.96 AU and
1.06 AU, respectively. Therefore, the source location on the Sun
was $\sim$23$^{\circ}$ east of the central meridian as viewed from
\textit{STEREO} A, whereas $\sim$70$^{\circ}$ behind the west limb
for \textit{STEREO} B. Obviously, \textit{STEREO} A provides the
best disk observation of the active region, while \textit{SDO} and
\textit{SOHO} give the limb views of the eruption.
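The viewing angles quoted above follow from simple longitude arithmetic. The sketch below (a hedged illustration, with west taken as positive and the nearest-limb distance reported without specifying which limb) reproduces them:

```python
flare_lon = 85.0               # flare at ~W85 as seen from Earth (deg)
sep_A, sep_B = 107.8, -114.5   # STEREO A west / B east of Earth (deg)

# Longitude of the source relative to each spacecraft's central meridian.
lon_from_A = flare_lon - sep_A                 # -22.8 -> ~23 deg east of CM
lon_from_B = flare_lon - sep_B                 # 199.5 deg, i.e. far side
dist_B = min(abs(lon_from_B), 360.0 - abs(lon_from_B))  # 160.5 deg
behind_limb_B = dist_B - 90.0                  # ~70 deg behind the limb
print(lon_from_A, behind_limb_B)
```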
\subsection{HC Eruption in the Inner Corona}
For this event, a very clear HC can be observed during the
eruption, rising from 17:37 UT onward and arriving at the rim of
AIA FOV at 18:15 UT. The HC showed an interesting morphological
evolution from a channel with a twisted or writhed axis (Figure
2(a)) to a channel with a loop-like axis (Figure 2(c)), as indicated
by the dotted lines. This morphological evolution is very similar
to the event reported by Zhang et al. (2012). During the
evolution, the two footpoints of the evolving HC remained fixed on
the Sun (see the first animation accompanying Figure 2 for the
whole process). To describe the overall thermal properties of the
HC, DEM-weighted temperature maps (see Cheng et al. 2012 and Song
et al. 2014b for the validation and other details) are
reconstructed and presented in Figures 2(b) and (d), which show
the HC temperature is around 10 MK at the times of Figures 2(a)
and (c), respectively. \textbf{Here we also obtain the HC density
through DEM analysis (see Cheng et al. 2012 for the method), which
is around 10$^{9}$ cm$^{-3}$, much higher than the density of
the surrounding corona at the same altitude.} By carefully
inspecting the AIA and LASCO animations, one can deduce that the
HC eruption induced a CME (see the second animation accompanying
Figure 2), which was recorded by LASCO and COR from three distinct
perspectives, as described in the following subsection. With the
combined observations of \textit{SDO}, \textit{STEREO} A and B, we
confirm that no other CMEs or large blowout jets took place
during the time of interest (see the third animation accompanying
Figure 2), which demonstrates that the CME was caused by the HC
eruption.
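The DEM-weighted temperature used in the maps above can be illustrated with a minimal sketch (the DEM curve below is synthetic and purely illustrative; the actual analysis inverts the AIA EUV passband intensities following Cheng et al. 2012):

```python
import numpy as np

# Synthetic DEM on a temperature grid, peaked near 10 MK (illustrative only).
logT = np.linspace(5.5, 7.5, 41)                 # log10(T/K)
T = 10.0 ** logT                                 # K
dem = np.exp(-0.5 * ((logT - 7.0) / 0.2) ** 2)   # arbitrary units

# DEM-weighted temperature: <T> = int(DEM*T dT) / int(DEM dT)
dT = np.gradient(T)
T_mean = (dem * T * dT).sum() / (dem * dT).sum()
print(f"DEM-weighted temperature: {T_mean / 1e6:.1f} MK")
```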
\subsection{CME Observations in the Outer Corona}
In the outer corona, the CME was well observed by the LASCO, COR-A
and COR-B instruments as shown in Figures 3(a)-(c) (also see the
accompanying animation). The CME appeared in LASCO C2 FOV first at
18:27 UT, and its linear speed was 2508 km s$^{-1}$ in the LASCO
C2/C3 FOV. The three viewpoints provide three distinct projections
of the CME. We can distinguish a coherent bright structure and a
preceding CME front region in all three perspectives. The CME
front region ahead of the MFR likely consists of three components:
plasma pile-up of the MFR, an outer diffuse shock front and the
sheath region between them (Vourlidas et al. 2013; Cheng et al.
2014a). Through inspecting the HC eruption and CME propagation in
LASCO FOV carefully, we believe that the coherent bright structure
and preceding front region are the HC and pile-up plasma,
respectively, which is consistent with the conclusions of Cheng et
al. (2014a). This is further supported by the graduated
cylindrical shell (GCS) model (Thernisien et al. 2006).
Using the GCS model, we can
reconstruct the three-dimensional (3D) morphology of the HC. The
model depends on six parameters: the source Carrington longitude
($\phi$) and latitude ($\theta$), the MFR tilt angle ($\gamma$),
height ($r$) and aspect ratio ($\kappa$), as well as the
half-angle ($\alpha$) between the two legs of MFR. We first
estimate $\phi$ (186$^{\circ}$), $\theta$ (37$^{\circ}$), and
$\gamma$ (79$^{\circ}$) using the location and neutral line of the
active region through the Extreme Ultraviolet Imager (EUVI) 195
\AA \ images, then vary $\alpha$ (57$^{\circ}$), $\kappa$ (0.17),
and $r$ (5.6 R$_\odot$) till we achieve the best visual fit in the
three coronagraph images simultaneously. The numbers in the
brackets are the final positioning and model parameters of the HC
for the time shown in Figure 3. The results are displayed in
Figures 3(d)-(f). It is clear that LASCO and COR-A were observing
the HC face on, and COR-B edge on. Therefore, the HC appeared as a
bright channel in the LASCO and COR-A FOVs and as a bright blob in the
COR-B FOV. Our CME is clearly a limb event from the Earth
perspective, with the HC lying almost along the west solar limb. With
the GCS fitting results, assuming that the HC
experienced a self-similar expansion (M\"ostl et al. 2014) and that the CME
propagated outward radially in the ecliptic plane along the red
solid line in Figure 1, we find that the longitude range of the HC
does not exceed 40 degrees, as shown with the red dashed lines in
Figure 1. However, we note that the CME might
deflect in the corona and interplanetary space (Wang et al. 2004,
2013; Gopalswamy et al. 2009; Shen et al. 2011; Gui et al. 2011).
Figure 1 shows that the MFR will be likely detected by
\textit{STEREO A}, with the spacecraft trajectory far away from
its center, which might influence the in-situ detection of the MFR
(D\'{e}moulin et al. 2013; Riley \& Richardson 2013). The in-situ
observations will be discussed in Section 3.4.
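As a rough consistency check on the fitted parameters (a sketch; we assume the face-on and edge-on angular-width relations $2(\alpha+\delta)$ and $2\delta$ with $\delta=\arcsin\kappa$ commonly quoted for the GCS model):

```python
import math

kappa = 0.17        # fitted GCS aspect ratio
alpha = 57.0        # fitted half-angle between the legs (deg)

delta = math.degrees(math.asin(kappa))  # half-width of one leg (deg)
width_edge = 2.0 * delta                # edge-on angular width, ~20 deg
width_face = 2.0 * (alpha + delta)      # face-on angular width, ~134 deg
print(f"edge-on: {width_edge:.1f} deg, face-on: {width_face:.1f} deg")
```

With the axis tilt of 79$^{\circ}$ (nearly meridional), the ecliptic-plane extent is closer to the edge-on width of $\sim$20$^{\circ}$, consistent with the $\le$40$^{\circ}$ longitude range estimated above.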
It is widely accepted that the typical morphology of a normal CME
contains the so-called three-part structure: a bright front loop,
a dark cavity and an embedded bright core (Illing \& Hundhausen,
1985), corresponding to the pile-up plasma, the MFR, and the erupting
filament (House et al. 1981), respectively. However, for a CME
induced by an HC eruption without a filament, the embedded bright
part corresponds to the HC instead of the filament. In this case,
the CME will show a bright front loop and a coherent bright
structure, corresponding to the pile-up plasma and the HC (or MFR),
respectively. This is reasonable because the HC is not only hotter,
but also denser than the background plasma (Cheng et al. 2012).
The shock can be generated if CMEs move fast enough. In our event,
the shock, pile-up plasma, and HC (MFR) can be observed directly
in the coronagraphic FOV as depicted with arrows in Figure 3(c).
Usually, the diffuse front ahead of the pile-up region is
interpreted as a shock structure (e.g., Vourlidas et al. 2003,
2013; Feng et al. 2012, 2013), and the diffusive layer corresponds
to the sheath region. A type II solar radio burst associated with
this event was detected (not shown here), which further confirmed
the existence of a shock. Therefore, in this event we expect that
the shock, sheath, pile-up plasma (front region), HC (MFR), and
remainder of the ICME (rear region) are all observed by the
coronagraphs, and may have their corresponding in-situ
counterparts (e.g., Kilpua et al. 2013), as will be presented
later.
\subsection{ICME Propagation in Interplanetary Space}
The CME propagation in interplanetary space was well observed by
HI-1 and HI-2, as presented in Figures 4(a) and (b). The ICME
first appeared in the HI-1A FOV at 19:29 UT on January 27, and in
the HI-2A FOV at 02:09 UT on January 28. We produce a
time-elongation map by stacking the running difference images
within a slit along the ecliptic plane as shown in Figures 4(a)
and (b) with the red rectangle, and present it in Figure 4(c).
To trace the propagation of the ICME in interplanetary space, we
use only the HI-1 and HI-2 images. Note that the elongation angles are
plotted in a logarithmic scale to expand HI-1 data, so tracks are
not J-like as in traditional linear-linear plots (Liu et al.
2010). The time-elongation map shows one obvious and continuous
track as indicated with the red dotted line. The vertical red line
in Figure 4(c) depicts the arrival time of the ICME shock to
\textit{STEREO} A, which is 13:04 UT on January 29. No other
ICME was observed by HI propagating from near the Sun to $\sim$1
AU during these days (see the animation accompanying Figure 4 for
the whole propagation process). These observations show that the
ICME detected by \textit{STEREO} A is the one we are tracing.
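The construction of the time-elongation map can be sketched as follows (synthetic data standing in for the HI running-difference images; only the stacking logic is illustrated):

```python
import numpy as np

n_t, n_e = 120, 200
elong = np.logspace(np.log10(4.0), np.log10(80.0), n_e)   # elongation (deg)
rng = np.random.default_rng(0)
stack = rng.normal(0.0, 0.05, (n_t, n_e))                 # noise background

# Inject a bright outward-moving front, one slit sample per time step.
for i in range(n_t):
    front = 4.0 + 0.6 * i                                 # front position (deg)
    stack[i] += np.exp(-0.5 * ((elong - front) / 1.5) ** 2)

# Stacking the slit samples row by row *is* the time-elongation map;
# the ridge of maximum brightness traces the ICME front.
track = elong[np.argmax(stack, axis=1)]
```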
\subsection{ICME (HC) Detection near 1 AU}
Figure 5 shows the in situ measurements from the IMPACT and
PLASTIC instruments on board \textit{STEREO} A at 0.96 AU. From
top to bottom, the panels show the normalized pitch angle (PA)
distribution of 93.47 eV electrons (with electron flux values
descending from red to black), the proton bulk speed (black line)
and ratios of three components to the total speed, magnetic field
strength (black line) and its three components, proton density and
temperature, plasma $\beta$ and total pressure, and entropy. Note
the velocity (panel b) and magnetic field (panel c) components are
plotted in RTN coordinates, where R (red line) points from the Sun
center to the spacecraft, T (green line) is parallel to the solar
equatorial plane and along the direction of planet motion and N
(blue line) completes the right-handed system.
As mentioned in Section 3.2, we expect that the shock, sheath,
pile-up plasma, HC (MFR), and remainder of ICME can be detected
one by one with in situ measurements. An obvious forward shock
(depicted with 1 in panel b) passed \textit{STEREO} A at 13:04 UT
on January 29. The transit time is 43.5 h, taking the flare start
time (17:37 UT on January 27) as the CME launch time. One ICME
can be identified from the magnetic field data behind the shock.
The PA distributions in panel a distinguish the different parts of
ICME. The sheath region is very turbulent (e.g., Burlaga et al.
1981), so electrons presented PAs spread between 0$^{\circ}$ and 180$^{\circ}$ in
this region (depicted with 2 in panel b, the left shaded region),
while for the pile-up region, the anti-parallel electron flow
dominated (depicted with 3 in panel b, between the two shaded
regions), similar to the background solar wind, supporting that it
is the pile-up materials of background plasma. Bidirectional
electrons (BDEs) appeared within a high-temperature structure
(HTS, $\sim$1.5 MK, as depicted with 4 in panel b in the right
shaded region), indicating that it corresponds to a magnetic
structure with both footpoints anchored on the Sun. The remainder
of ICME is depicted with 5 in panel b. The final part likely ends
around 18:00 UT on January 30, as indicated with the vertical blue
dot-dashed line, when the magnetic field, temperature, and total
pressure approach the background values.
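The 43.5 h transit time quoted above is straightforward to verify (the launch time is the flare start, as assumed in the text; the implied mean speed is our own illustrative by-product):

```python
from datetime import datetime

launch = datetime(2012, 1, 27, 17, 37)    # flare start, taken as CME launch
arrival = datetime(2012, 1, 29, 13, 4)    # shock arrival at STEREO A

transit_s = (arrival - launch).total_seconds()
print(f"transit time: {transit_s / 3600.0:.1f} h")

# Implied Sun-to-spacecraft mean speed over 0.96 AU, well below the
# initial 2508 km/s, consistent with strong deceleration en route.
v_mean = 0.96 * 1.495978707e8 / transit_s   # km/s
```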
\subsection{Discussion}
The total magnetic field strengths in the shock sheath and the HTS
remain around $\sim$45 nT and $\sim$20 nT, respectively, and vary
between 30 and 50 nT in the plasma pile-up region. The R and T
components of the field in the HTS remain almost constant, while the
direction of the N component shows an irregular rotation, which will
be explained later.
The density of the HTS is $\sim$15 cm$^{-3}$, higher than that of the
background solar wind but lower than that of the sheath
and plasma pile-up region (panel d), due to its expansion during
propagation from near the Sun to $\sim$1 AU. Based on its BDEs,
high temperature, strong magnetic field strength, high density,
and its association with the shock, sheath, and plasma pile-up
region, we suggest that the HTS is the interplanetary counterpart
of the HC observed in lower corona as shown in Figure 2. The
presence of the embedding high Fe charge state further supports
this conclusion, as will be discussed later. The HC passage started at
19:00 UT and ended at 23:50 UT, and the average bulk velocity was 570
km s$^{-1}$ during this period (panel b), so the radial scale of the
measured HC is around 14 R$_\odot$. The plasma $\beta$ in the HC
is around 1 (panel e), which means the thermal pressure is nearly
equal to the magnetic pressure. The high thermal pressure is
attributed to the high temperature. The entropy in the HC region
is considerably higher than in its surroundings (panel f). From the above
descriptions, we find that the temperature and density of the HC decreased
from $\sim$10 MK and $\sim$10$^{9}$ cm$^{-3}$ near the Sun to $\sim$1.5 MK and
$\sim$15 cm$^{-3}$ near 1 AU, respectively.
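The scale and plasma-$\beta$ values above follow from simple estimates; the sketch below assumes a proton-only thermal pressure, so the $\beta$ it returns should be read as order unity rather than exact:

```python
import math

# Radial scale of the HC from passage duration and bulk speed.
duration_s = 4 * 3600 + 50 * 60         # 19:00-23:50 UT -> 4 h 50 min
v_bulk_km_s = 570.0                     # average speed (panel b)
R_SUN_KM = 6.957e5
scale_rsun = duration_s * v_bulk_km_s / R_SUN_KM   # ~14 solar radii

# Plasma beta in the HTS (proton thermal pressure vs. magnetic pressure).
n = 15e6          # m^-3 (~15 cm^-3)
T = 1.5e6         # K
B = 20e-9         # T (~20 nT)
K_B, MU_0 = 1.380649e-23, 4.0e-7 * math.pi
beta = (n * K_B * T) / (B ** 2 / (2.0 * MU_0))     # ~2, i.e. order unity
```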
According to the ICME list provided on the \textit{STEREO}
website\footnote{\texttt{http://stereo-ssc.nascom.nasa.gov/data/ins\_data/impact/level3/}},
this ICME is sorted into Group 3, which means the spacecraft
passed far away from the ICME center, displaying a rapid rise and
then gradual decay in total pressure (Jian et al. 2006). This is
consistent with our CME propagation analysis in Figure 1, and may
lead to two consequences, as mentioned above: First, the scale of
the measured HC is small compared to the typical MC structure near
1 AU, which is around 0.25 AU (over 50 R$_\odot$) (see, e.g.,
Lepping et al. 2006); Second, it is not easy to observe a regular
rotation of the magnetic field. Therefore, we do not obtain a clear
MFR structure with the Grad-Shafranov (GS) reconstruction method
(Hu \& Sonnerup 2002), which works best for spacecraft passing
near the ICME center. The weakening of the MFR signature with
increasing distance of the spacecraft from the ICME center has
been demonstrated by multi-spacecraft observations (Cane et al.,
1997; Kilpua et al. 2011), consistent with our observations.
As mentioned above, an MC (Burlaga et al. 1981) can be frequently
identified in ICME structures, usually behind the shock, sheath,
and plasma pile-up region. The magnetic field vectors in a typical
MC are observed to have a large rotation, consistent with the
passage of an MFR. The field strength is high, and the density and
temperature are relatively low with a low plasma $\beta$ (less
than 0.1, see Lepping et al. 1997). The total pressure inside the
cloud is higher than outside, causing the cloud to expand with its
propagation, even to a distance beyond 1 AU (Burlaga et al. 1981).
However, in our case, an ICME structure with a much higher
temperature ($\sim$1.5 MK) and irregular rotation of Bn was
detected, and the associated plasma $\beta$ was around 1, which
obviously is not the traditional MC. \textbf{According to a very
recent statistical study based on 325 ICMEs from 1996 to 2008
(Mitsakou \& Moussas 2014), the temperatures of ICMEs at 1 AU are
usually lower than 0.25 MK, and their average value is only 0.076
MK.} We conjecture that there exist two types of interplanetary
MFR (IMFR) structures mainly according to their temperatures,
i.e., the low-temperature IMFR (or MC), corresponding to an MFR (e.g., a
coronal cavity) without obvious heating during its eruption (e.g.,
Song et al. 2013), and the high-temperature IMFR, corresponding to an
MFR (e.g., an HC) with significant heating during or before its
eruption (e.g., Song et al. 2014a, 2014b). In our event, the latter
kept its temperature higher than the background even at 1 AU. It
might seem puzzling that the temperature of the HC did not decrease
below that of the background wind through its expansion
in interplanetary space. To address this, we note that the
total pressure ahead of the HC is much higher (see Figure 5(e))
than in the usual solar wind, which might prevent the HC from
expanding freely.
According to statistical studies (Richardson \& Cane 2010; Wu \&
Lepping 2011), MCs are detected in only about 30\% of ICMEs. Riley
and Richardson (2013) listed several explanations for why some
ICMEs are observed to be MCs and others are not, e.g., the
observational selection effect of ICMEs, the interactions of an
MFR with itself or between neighboring MFRs, the effect of
evolutionary process of MFRs, and the different initiation
mechanisms of CMEs. As mentioned above, several observational
features have been proposed as proxies of MFRs in the lower corona,
e.g., filaments/prominences, coronal cavities, sigmoid structures,
and hot channels. Therefore, it is natural to argue that ICMEs with
or without MCs might correspond to different coronal structure
eruptions. Our results indicate that the HC eruption might not
evolve into a typical MC under some special conditions. More
events are necessary to conclude this point.
If the HTS really corresponds to the HC in the lower corona, then a
high Fe charge state should be detectable with in
situ measurements, because the charge state distribution is fully
established within a few solar radii from the Sun, and remains
frozen in after that (e.g., Esser \& Edgar, 2001; Chen et al.
2004). Unfortunately, high temporal resolution Fe charge state
data are not available for this event. The ICME list provided on the
\textit{STEREO} website (same address as above) indicates
that there was a significant increase of the Fe charge state during
our event, which hints at the coronal origin of the HTS and supports
our conclusion.
It should be mentioned that a weak shock was observed at 2:13 UT
on January 29, before the ICME shock (see the red arrow in Figure
5(b)). It seems to be a forward shock generated by a corotating
interaction region (CIR, see e.g., Wu et al. 2014), \textbf{whose
presence is supported by the appearance of a low latitude coronal
hole ahead of NOAA active region 11402 according to the
observations of the X-ray telescope on board \textit{HINODE}}. As
mentioned, this CIR structure is the reason for the presence of
the high-pressure region ahead of the HC, \textbf{which acts as an
obstacle and inhibits the HC expansion. We suggest that a
preceding CIR (or ICME, e.g., Liu et al. 2014) may be a
necessary condition for the presence of an HC at 1 AU.} It is
likely that the CME-driven shock ran into the CIR, which makes the
interplanetary transient look complex, as presented in Figure 5.
Regions 2 and 3 in Figure 5 might include the compressed CIR
plasma. Nevertheless, we believe that the ICME-CIR interaction
will not change our interpretation of the detected HTS based on
the descriptions and discussion of BDEs, magnetic field,
temperature, and total pressure. As mentioned, the different
trajectories of spacecraft through an ICME make its observational
characteristics different. For this event, it might also seem
that regions 2, 3, and 4 all belong to the sheath, and that
only region 5 corresponds to the ejecta, according to Figure 5(b).
However, we consider this possibility unlikely because the total
magnetic field in region 5 is at the background level, and the
BDE analysis in Figure 5(a) does not support this interpretation either.
\section{SUMMARY}
In this paper, an HC eruption associated with an X1.7 class SXR
flare was recorded by \textit{SDO} and \textit{GOES}. The
corresponding fast CME can be well observed from three distinct
viewpoints by coronagraphs on board \textit{SOHO}, \textit{STEREO}
A and B. The shock, pile-up region and HC can be well observed in
the coronagraphic FOVs, and the HC (the coherent bright structure) in
coronagraph images can be well fitted with the GCS model. The CME
propagation into interplanetary space can be traced with the
HI-1/2 instruments, and detected in situ by instruments on board
\textit{STEREO} A. Furthermore, no other ICME propagated through the
HI FOV during these days, which confirms that the ICME observed by HI
is the HC eruption we are tracing. For the first time, we may have
detected an HC in interplanetary space, which is mainly identified by its high
temperature, appearance behind shock, sheath and pile-up region,
and the BDEs. The preliminary Fe charge-state report from the
\textit{STEREO} team further supports that the high temperature
property observed near 1 AU has its origin in the inner corona.
Compared with the background solar wind, the interplanetary HC has
a strong magnetic field and shows obvious BDE flows, indicating
that its two footpoints are still connected to the Sun. This supports
the interpretation of the interplanetary HC as an MFR structure. Nevertheless,
it is likely that the spacecraft passed far from the ICME
center, so the rotation of the magnetic field components was not
obvious and it is difficult to obtain a clear flux-rope structure
with the GS reconstruction method. In future studies, we expect
that a suitable event will enable us to observe the known MFR
signatures in the aftermath of a HC eruption.
\acknowledgments We thank the referee for constructive comments
that have greatly improved this manuscript. We are grateful to L.
Jian, B. Li, Q. Hu, Q. M. Lu, C. L. Shen and C. L. Tang for their
valuable discussions. \textit{SDO} is a mission of NASA's Living
With a Star Program, \textit{SOHO} is a mission of international
cooperation between ESA and NASA, and \textit{STEREO} is the third
mission in NASA's Solar Terrestrial Probes program. This research
is supported by the 973 program 2012CB825601, NNSFC grants
41274177, 41274175, and 41331068. J. Zhang is supported by NSF
grant ATM-0748003, AGS-1156120 and AGS-1249270. G. Li is supported
by ATM-0847719 and AGS-1135432.
\section{Introduction}
We consider one-dimensional solutions and higher-dimensional radial
solutions of the following Klein-Gordon equation:
\begin{equation}\label{equrn}
\left\{
\begin{array}{l}
\partial_t^2 U =\Delta U+|U|^{p-1}U-U,\\
U(0)=U_0\mbox{ and }U_t(0)=U_1,
\end{array}
\right.
\end{equation}
where $U(t):x\in\R^N \rightarrow U(x,t)\in\R$, $U_0\in \rm H^1_{\rm loc,u}$
and $U_1\in \rm L^2_{\rm loc,u}$.\\
The space $\rm L^2_{\rm loc,u}$ is the set of all $v$ in
$\rm L^2_{\rm loc}$ such that
\[
\|v\|_{\rm L^2_{\rm loc,u}}\equiv\d\sup_{a\in
\R^N}\left(\int_{|x-a|<1}|v(x)|^2{\mathrm{d}}x\right)^{1/2}<+\infty,
\]
and the space ${\rm H}^1_{\rm loc,u}= \{ v\;|\;v, \nabla v \in {\rm L}^2_{\rm loc,u}\}$.
\bigskip
The nonlinear Klein-Gordon equation appears as a model of self-focusing waves in nonlinear optics (see Bizo\'n, Chmaj and Szpak \cite{BCS11}).
\bigskip
More generally, we consider the following semilinear wave
equation:
\begin{equation}\label{gen}
\left\{
\begin{array}{l}
\partial_t^2 U =\Delta U+|U|^{p-1}U+f(U)+g(|x|,t,\nabla U.\frac{x}{|x|},\partial_t U ),\\
U(0)=U_0\mbox{ and }U_t(0)=U_1.
\end{array}
\right.
\end{equation}
We assume that the functions $f$ and $g$ are ${\cal {C}}^1$
functions, where $f:\R\rightarrow \R $ and $g:\R^{4}\rightarrow \R
$ satisfy the following conditions
\begin{eqnarray*}
(H_f)& |{f(u)}|\le M(1+|u|^q), \ &{\textrm {for all }}\ y\in \R \ \qquad{{\textrm {with}}}\ \ (q<p,\ \ M>0),\\
(H_g)& |{g(X,t,v,z)}|\le M(1+|v|+|z|), & {\textrm {for all
}}\ X, t,v,z\in \R \ \qquad{{\textrm {with}}}\ (M>0).
\end{eqnarray*}
We assume also that the function $g$ is globally Lipschitz. Finally, we assume that
\begin{equation}\label{condp}
1<p\mbox{ and }p\le 1+\frac 4{N-1}\mbox{ if } N\ge 2.
\end{equation}
Since $U$ is radial if $N\ge2$, we introduce
\begin{equation}\label{defu}
u(r,t) = U(r,t)\mbox { for }r\in \R, \mbox { if } N=1
\end{equation}
and
\begin{equation}\label{defu1}
u(r,t) = U(x,t)\mbox { if }r=|x| \mbox { and } N\ge2
\end{equation}
and rewrite \eqref{gen} as
\begin{equation}\label{equ}
\left\{
\begin{array}{l}
\partial^2_{t} u =\partial^2_r u+\frac{(N-1)}r \pr u+|u|^{p-1}u+f(u)+g(r,t,\partial_r u, \partial_{t}u),\\
\pr u(0,t)=0 \mbox { if } N\ge2,\\
u(r,0)=u_0(r)\mbox{ and }u_t(r,0)=u_1(r),
\end{array}
\right.
\end{equation}
where $u(t):r\in I \rightarrow u(r,t)\in\R$, with $I=\R^+$ if $N\ge2$ and $I=\R$ if $N=1$.
\bigskip
The Cauchy problem of equation \eqref{gen} is solved in
$H^{1}_{loc,u}\times L^{2}_{loc,u}$. This follows from the finite
speed of propagation and the wellposedness in $H^{1} \times
L^{2}$ (see for example Georgiev and Todorova
\cite{GTjde94}), valid whenever $1<p<1+\frac{4}{N-2}$. The existence of
blow-up solutions $u(t)$ of \eqref{gen} follows from energy techniques (see for example
Levine and Todorova
\cite{LT}
and Todorova
\cite{Tn00}).
\bigskip
If $u$ is a blow-up solution of \eqref{equ}, we define (see for
example Alinhac \cite{Apndeta95}) a 1-Lipschitz curve
$\Gamma=\{(r,T(r))\}$ where $r\in I $
such that the maximal influence domain $D$ of $u$ (or the domain of definition of $u$) is written as
\begin{equation}\label{defdu}
D=\{(r,t)\;|\; r\in I,\ t< T(r)\}.
\end{equation}
$\Gamma$ is called the blow-up graph of $u$.
A point $r_0\ge 0$ is a non-characteristic point if there are
\begin{equation}\label{nonchar}
\delta_0\in(0,1)\mbox{ and }t_0<T(r_0)\mbox{ such that }
u\;\;\mbox{is defined on }{\cal C}_{r_0, T(r_0), \delta_0}\cap \{t\ge t_0\}\cap\{r\in I\}
\end{equation}
where ${\cal C}_{\bar r, \bar t, \bar \delta}=\{(r,t)\;|\; t< \bar t-\bar \delta|r-\bar r|\}$.
We denote by $\RR\subset I$ (resp. $\SS\subset I$) the set
of non-characteristic (resp. characteristic) points.
\bigskip
In the case $(f,g)\equiv (0,0)$, equation (\ref{gen}) reduces to the semilinear wave equation:
\begin{equation}\label{par}
\left\{
\begin{array}{l}
\partial_t^2 U =\Delta U+|U|^{p-1}U,\\
U(0)=U_0\mbox{ and }U_t(0)=U_1.
\end{array}
\right.
\end{equation}
In a series of papers \cite{MZjfa07}, \cite{MZcmp08}, \cite{MZajm10} and \cite{MZisol10}
(see also the note \cite{MZxedp10}), Merle and Zaag give a full picture of the blow-up
for solutions of \eqref{par} in one space dimension. Recently, in
\cite{CZarxiv}, C\^ote and Zaag refine some of those results and construct a blow-up solution with a
characteristic point $a$, such that the asymptotic behavior of the
solution near $(a, T (a))$ shows a decoupled sum of $k$ solitons
with alternate signs.
Moreover, in \cite{MZbsm11},
Merle and Zaag extend all their results to higher dimensions in the radial case, outside the origin.
Our aim in this work is to generalize the result obtained for
equation (\ref{par}) in \cite{MZbsm11} to equation (\ref{gen}).
Let us note that all our results and proofs hold in both cases of (\ref{equ}) ($N=1$ and $N\ge 2$).
However, the situation is a bit more delicate when $N\ge 2$,
since we have to
avoid the origin which brings a singular term
$\frac{N-1}r\partial_ru$ to (\ref{equ}). Thus, for completeness, we focus on the case $N\ge 2$ avoiding $r=0$, and stress the fact that all our results hold in the case $N=1$, even when $r=0$, and with no symmetry assumptions.
\bigskip
Throughout this paper, we consider $U(x,t)$ a radial blow-up
solution of equation \eqref{gen}, and use the notation $u(r,t)$
introduced in \eqref{defu}.
We proceed in 3 sections:\\
- in Section \ref{seclyap}, we give a new Lyapunov functional for equation \eqref{equ} and bound the solution in the energy space.\\
- in Section \ref{secnonchar}, we study $\RR$, in particular the blow-up behavior of the solution and the regularity of the blow-up set there.\\
- in Section \ref{secchar}, we focus on $\SS$, from the point
of view of the blow-up behavior, the regularity of the blow-up set
and the construction of a multi-soliton solution.
\bigskip
We are aware that our analysis is a
generalization of the radial case of equation \eqref{par} treated by
Merle and Zaag in \cite{MZbsm11}. For that reason, we will give the
statements of the results for equation \eqref{gen} and focus only on
how to deal with the new perturbation terms appearing in
\eqref{gen}. Let us add that we believe our contribution is
nontrivial and introduces a new approach for perturbed problems.
Moreover, it proves a number of results, especially for the
Klein-Gordon equation (\ref{equrn}).
\bigskip
\section{A Lyapunov functional for equation \eqref{eqw} and a blow-up criterion in the radial case}\label{seclyap}
\bigskip
We showed in \cite{HZlyap10} and \cite{HZlyapc10} that the argument
of Antonini, Merle and Zaag in \cite{AMimrn01}, \cite{MZajm03},
\cite{MZma05} and \cite{MZimrn05} extends through a perturbation
method to equation \eqref {gen}
with no gradient terms, even for non-radial solutions. The key idea is to modify the Lyapunov functional of
\cite{AMimrn01} with exponentially small terms and define a new
functional which is decreasing in time and gives a
blow-up criterion. In \cite{MZbsm11}, Merle and Zaag successfully
used our ideas to derive a Lyapunov functional for the radial
case with no perturbations (i.e. for equation (\ref{equ}) with
$(f,g)\equiv(0,0)$). Here, we further refine our argument in
\cite{HZlyap10} and \cite{HZlyapc10} to derive a
Lyapunov functional for equations bearing the two features: the
presence of perturbation terms and the radial symmetry.
For the reader's convenience,
we briefly recall the argument in the following.
\bigskip
Given $r_0>0$, we recall the following similarity variables' transformation
\begin{equation}\label{defw}
w_{r_0}(y,s)=(T(r_0)-t)^{\frac 2{p-1}}u(r,t),\;\;y=\frac{r-r_0}{T(r_0)-t},\;\;
s=-\log(T(r_0)-t).
\end{equation}
The function $w=w_{r_0}$ satisfies the following equation for all
$y\in (-1,1)$
and $s\ge \max\left(-\log T(r_0), -\log r_0\right)$:
\begin{eqnarray}
\label{eqw}
\partial^2_{s}w&=& \L w-\frac{2(p+1)}{(p-1)^2}w+|w|^{p-1}w
-\frac{p+3}{p-1}\partial_sw-2y\partial^2_{y,s}
w\nonumber\\
&&+e^{-s}\frac{(N-1)}{r_0+ye^{-s}} \py w
+e^{-\frac{2ps}{p-1}}f\Big(e^{\frac{2s}{p-1}}w\Big)\\ &&+
e^{-\frac{2ps}{p-1}}g\Big(r_0+ye^{-s},T_0-e^{-s},e^{\frac{(p+1)s}{p-1}}\partial_yw,e^{\frac{(p+1)s}{p-1}}(\partial_sw+y\partial_y
w+\frac{2}{p-1}w)\Big) ,\nonumber
\end{eqnarray}
\begin{equation}\label{defro}
\mbox{where }\L w = \frac 1\rho \py \left(\rho(1-y^2) \py w\right)\mbox{ and }
\rho(y)=(1-y^2)^{\frac 2{p-1}}.
\end{equation}
In the whole paper, we denote
\begin{equation}\label{F}
F(u)=\int_0^uf(v){\mathrm{d}}v.
\end{equation}
\bigskip
Let us recall that for the case $(f,g)\equiv(0,0)$, the
Lyapunov functional in one space dimension is
\begin{equation}\label{defE}
E_0(w(s))=\iint \left(\frac 12 (\ps w)^2 + \frac 12
\left(\partial_y w\right)^2 (1-y^2)+\frac{(p+1)}{(p-1)^2}w^2 - \frac
1{p+1} |w|^{p+1}\right)\!\rho {\mathrm{d}}y,
\end{equation}
which is defined in the Hilbert space
\begin{equation}\label{defnh0}
\H = \left\{q \in {\rm H^1_{loc}}\times {\rm L^2_{loc}(-1,1)}
\;\;|\;\;\|q\|_{\H}^2\equiv \int_{-1}^1 \left(q_1^2+\left(q_1'\right)^2
(1-y^2)+q_2^2\right)\rho {\mathrm{d}}y<+\infty\right\}.
\end{equation}
Introducing
\begin{eqnarray}
\label{defF} E(w(s),s)&=&E_0(w(s))+I(w(s),s)+J(w(s),s)
\end{eqnarray}
where,
\begin{eqnarray}
\label{f2} \quad
I(w(s),s)&=&- e^{-\frac{2(p+1)s}{p-1}}\displaystyle\iint F(e^{\frac{2s}{p-1}}w)\rho
{\mathrm{d}}y,
\end{eqnarray}
\begin{eqnarray}
\label{f3} \quad
J(w(s),s)&=& -e^{-\gamma s}\displaystyle\iint w\ps w\rho
{\mathrm{d}}y,
\end{eqnarray}
with \begin{eqnarray}\label{gamma} \gamma=\min(\frac12,\frac{p-q}{p-1}
)>0,
\end{eqnarray}
we claim the following:
\begin{prop}[A new functional for equation \eqref{eqw}]\label{prophamza}
$ $\\
(i) There exist $C=C(p,N,M,q)>0$ and $S_0(p,N,M,q)\in\R$ such that
for all $r_0>0$ and for all $s\ge \max\left(-\log T(r_0), S_0,
-4\log r_0,-\log \frac{r_0}2\right)$,
\begin{equation}\label{edoF}
\frac d{ds}E(w_{r_0}(s),s) \le \frac{p+3}2 e^{-\gamma
s}E(w_{r_0}(s),s) -\frac 3{p-1}\iint (\ps w_{r_0})^2
\frac{\rho}{1-y^2} {\mathrm{d}}y+Ce^{-2\gamma s}.
\end{equation}
(ii) {\bf (A blow-up criterion)}
There exists $S_1(p,N,M,q)\in \R$ such that, for all $s\ge
\max\left(-\log T(r_0), S_1\right)$, we have $H(w_{r_0}(s),s)\ge 0$.
\end{prop}
{\bf Remark}: From (i), we see that the Lyapunov functional
for equation \eqref{eqw} is in fact $H(w_{r_0}(s),s)$ where
\begin{equation}\label{defH}
H(w(s),s)=E(w(s),s)e^{\frac{ p+3}{2\gamma} e^{-\gamma s}}+\mu
e^{-2\gamma s}
\end{equation}
for some large enough constant $\mu$, and not $E(w_{r_0}(s),s)$ nor $E_0(w_{r_0}(s))$. \\
{\bf Remark}:
We already know from \cite{HZlyap10} and \cite{HZlyapc10} that even
in the non-radial setting, equation \eqref{equ} has a Lyapunov
functional given by a perturbed form of a natural extension to
higher dimensions of $E(w_{r_0}(s),s)$ \eqref{defE}. Unfortunately,
already for the non-perturbed case of (\ref{gen}) with $(f,g)\equiv
(0,0)$, due to the lack of information
on stationary solutions in similarity variables in dimensions $N\ge 2$,
it wasn't possible to go further in the analysis, and the
investigation
had to stop at the step of bounding the solution
in similarity variables. On the contrary, when $N=1$,
Merle and Zaag could obtain a very precise characterization of blow-up in \cite{MZjfa07}, \cite{MZcmp08},
\cite{MZajm10}, \cite{MZisol10} (with some refinements by C\^ote and Zaag in \cite{CZarxiv}).\\
Here, considering perturbations as stated in
(\ref{gen}) and
restricting ourselves
to one dimensional solutions or higher dimensional
radial solutions,
we find a different Lyapunov
functional.
Considering arbitrary blow-up points in one space
dimension (including the origin) and any non-zero blow-up point
in higher dimensions,
the characterization of stationary solutions in one space dimension is enough, and
we are able to go in our analysis as far as in the one-dimensional case.
\bigskip
Following \cite{MZajm03} and \cite{MZjfa07}, together with our
techniques to handle perturbations in \cite{HZlyap10} and
\cite{HZlyapc10}, we derive with no difficulty the following:
\begin{prop}\label{boundedness}{\bf (Boundedness of the solutions of equation
\eqref{eqw} in the energy space)} For all $r_0>0$,
there exist $C_2(r_0)>0$ and $S_2(r_0)\in\R$ such that for all $r\in[\frac{r_0}2, \frac{3r_0}2]$ and $s\ge S_2(r_0)$,
\[\iint\left((\py w_r)^2 (1-y^2) + (w_r)^2+ (\partial_s w_r)^2+|w_r|^{p+1}\right) \rho {\mathrm{d}}y\le C_2(r_0).\]
\end{prop}
{\it Proof}: The adaptation is straightforward from \cite{MZajm03}
and Proposition 3.5 page 66 in \cite{MZjfa07}. The only difference
is in
the justification of the limit at infinity of $E_0(w_{r_0}(s))$,
which follows from the limit of $H(w_{r_0}(s),s)$ defined in \eqref{defH}.
In fact, we know from Proposition \ref{prophamza} that $H(w_{r_0}(s),s)$ is
decreasing and bounded from below, whereas such information is unavailable for $E_0(w_{r_0}(s))$.
\Box
\bigskip
{\it Proof of Proposition \ref{prophamza}}:\\
(i) Consider $r_0>0$, $s\ge \max(-\log T(r_0),0, -\log \frac{r_0}2,
-4\log {r_0}
)$ and write $w=w_{r_0}$ for simplicity. From the similarity
variables' transformation \eqref{defw}, we see that
\begin{equation}\label{range}
r=r_0+ye^{-s}\in\left[\frac{r_0}2, \frac{3r_0}2\right].
\end{equation}
Multiplying equation \eqref{eqw} by $\ps w\rho$ and integrating
for $y\in(-1,1)$, we see by \eqref{defE} and \eqref{f2} that
\begin{multline}\label{Eprime}
\frac d{ds}\big(E_0(w(s))+I(w(s),s)\big) =\frac {-4}{p-1} \iint
\frac{(\ps w)^2} {1-y^2}\rho {\mathrm{d}}y +\underbrace{(N-1)
e^{-s} \iint \ps w\py w
\frac{\rho}r {\mathrm{d}}y}_{I_1(s)}\\
+\underbrace{\frac{2(p+1)}{p-1}e^{-\frac{2(p+1)s}{p-1}}\iint
F\Big(e^{\frac{2s}{p-1}}w\Big)\rho {\mathrm{d}}y}_{I_2(s)}
+\underbrace{\frac{2}{p-1}e^{-\frac{2ps}{p-1}}\iint f\Big(e^{\frac{2s}{p-1}}w\Big)w\rho {\mathrm{d}}y }_{I_3(s)}\\
+\underbrace{ e^{-\frac{2ps}{p-1}}\! \iint\!\!
g\Big(r_0+ye^{-s},T_0-e^{-s},e^{\frac{(p+1)s}{p-1}}\partial_yw,e^{\frac{(p+1)s}{p-1}}(\partial_sw+y\partial_y
w+\frac{2w}{p-1})\Big) \ps w\rho
{\mathrm{d}}y}_{I_4(s)},
\end{multline}
where $r$ is defined in \eqref{range}.
Using \eqref{range}, we write
\begin{eqnarray}
\label{CS1} |I_1(s)|\le Ce^{-s}\iint (\py w)^2\rho(1-y^2)
{\mathrm{d}}y + \frac{Ce^{-s}}{r_0^2}\iint (\ps w)^2
\frac{\rho}{1-y^2} {\mathrm{d}}y.
\end{eqnarray}
Using the fact that
\begin{equation}\label{born} |{F(x)}|+|x{f(x)}|\le C(
1+|x|^{q+1})\le C( 1+|x|^{p+1}),
\end{equation}
where $F$ and $f$ are defined in
\eqref{F} and \eqref{gen}, we obtain that
\begin{eqnarray}\label{I10}
|I_2(s)|+|I_3(s)| &\le& Ce^{-\frac{2(p-q)s}{p-1}}+
Ce^{-\frac{2(p-q)s}{p-1}}\iint |w|^{p+1}\rho {\mathrm{d}}y.
\end{eqnarray}
Using the inequality $ab\le \frac{a^2}{2}+\frac{b^2}{2}$ and the
hypothesis $ (H_g)$, we write that
\begin{eqnarray}\label{I122}
|I_4(s)|\le Ce^{-s} \iint \Big((\ps w)^2+(w^2)\Big)\rho
{\mathrm{d}}y+ Ce^{-s}\iint|\py w ||\ps w|\rho {\mathrm{d}}y+
C e^{-s}.\qquad
\end{eqnarray}
Similarly, we prove that
\begin{eqnarray}\label{I13}
\iint|\py w ||\ps w|\rho {\mathrm{d}}y\le \iint (\ps
w)^2\frac{\rho}{1-y^2}{\mathrm{d}}y +\iint (\py
w)^2(1-y^2)\rho{\mathrm{d}}y.
\end{eqnarray}
Combining (\ref{I122}) and (\ref{I13}), we conclude that
\begin{eqnarray}\label{I16}
|I_4(s)|\le Ce^{-s} \iint\!\! \Big( (\py w)^2(1-|y|^2) +
\frac{(\ps w)^2}{1-y^2}+w^2\Big)\rho {\mathrm{d}}y+
Ce^{-s}.
\end{eqnarray}
Then, by using (\ref{Eprime}), (\ref{CS1}), (\ref{I10}) and
(\ref{I16}), we deduce that
\begin{eqnarray}\label{E}
\frac d{ds}\big(E_0(w(s))+I(w(s),s)\big) &\le&(-\frac
4{p-1}+Ce^{-\frac{s}2})
\iint (\ps w)^2 \frac{\rho}{1-y^2} {\mathrm{d}}y\nonumber\\
&& + Ce^{-s}\iint
\Big( (\py w)^2(1-|y|^2) +w^2\Big)\rho {\mathrm{d}}y \nonumber\\
&&+ Ce^{-2 \gamma s}\iint |w|^{p+1}\rho
{\mathrm{d}}y+Ce^{-2\gamma s}.
\end{eqnarray}
Considering $J(w(s),s)$ defined in \eqref{f3}, we obtain from
equation (\ref{eqw}) and integration by parts
\begin{multline}\label{t1}
e^{\gamma s}\frac{d}{ds}J(w(s),s)= - \iint (\ps
w)^2\rho{\mathrm{d}}y + \iint(\py w)^2(1-y^2)\rho{\mathrm{d}}y
+\frac{2p+2}{(p-1)^2}\iint w^2\rho{\mathrm{d}}y\\
- \iint |w|^{p+1}\rho{\mathrm{d}}y+
(\gamma +\frac{p+3}{p-1}-2N) \iint w \ps w (s) \rho{\mathrm{d}}y-2\iint w\ps wy \rho'{\mathrm{d}}y\qquad\qquad\\
-2\iint \ps w\py w
y\rho{\mathrm{d}}y-e^{-\frac{2ps}{p-1}}\iint
wf\Big(e^{\frac{2s}{p-1}}w\Big){\rho}{\mathrm{d}}y-(N-1) e^{-s} \iint w\py w \frac{\rho}r {\mathrm{d}}y\\
- e^{-\frac{2ps}{p-1}}\iint w
g\Big(r_0+ye^{-s},T_0-e^{-s},e^{\frac{(p+1)s}{p-1}}\partial_yw,e^{\frac{(p+1)s}{p-1}}(\partial_sw+y\partial_y
w+\frac{2}{p-1}w)\Big){\rho}{\mathrm{d}}y.
\end{multline}
Combining (\ref{defF}), (\ref{f2}) and (\ref{t1}), we write
\begin{eqnarray}\label{theta}
&e^{\gamma s}\frac{d}{ds}J(w(s),s) \le \frac{p+3}2
\big(E_0(w(s))+I(w(s),s)\big) -\frac{p-1}{4} \iint (\py
w)^2(1-y^2)\rho{\mathrm{d}}y&\nonumber\\
&-\frac{p+1}{2(p-1)}\iint w^2\rho{\mathrm{d}}y
-\frac{p-1}{2(p+1)}\iint |w|^{p+1}\rho{\mathrm{d}}y&\nonumber\\
&+
\underbrace{(\gamma +\frac{p+3}{p-1}-2N+\frac{p+3}2 e^{-\gamma s}) \iint w\ps w \rho{\mathrm{d}}y}_{J_1(s)}&\\
&\underbrace{+\frac8{p-1} \iint w\ps w\frac{y^2}{1-y^2}
\rho{\mathrm{d}}y}_{J_2(s)} \underbrace{-2\iint \ps w\py w
y\rho{\mathrm{d}}y}_{J_3(s)}
\underbrace{-e^{-\frac{2ps}{p-1}}\iint
wf\Big(e^{\frac{2s}{p-1}}w
\Big){\rho}{\mathrm{d}}y}_{J_4(s)}&\nonumber\\ &\underbrace{ -
e^{-\frac{2ps}{p-1}}\iint w
g\Big(r_0+ye^{-s},T_0-e^{-s},e^{\frac{(p+1)s}{p-1}}\partial_yw,e^{\frac{(p+1)s}{p-1}}(\partial_sw+y\partial_y
w+\frac{2}{p-1}w)\Big) {\rho}{\mathrm{d}}y}_{J_5(s)}&
\nonumber\\
&+ \underbrace{\frac{p+3}2
e^{-\frac{2(p+1)s}{p-1}}\displaystyle\iint
F(e^{\frac{2}{p-1}s}w)\rho {\mathrm{d}}y}_{J_6(s)}\underbrace{-(N-1)
e^{-s} \iint w\py w \frac{\rho}r
{\mathrm{d}}y}_{J_7(s)}.\nonumber
\end{eqnarray}
We now estimate each of the terms $J_1(s),\dots,J_7(s)$. To bound $J_1(s)$ and
$J_2(s)$, we use the Cauchy-Schwarz inequality:
\begin{eqnarray}\label{J1}
|J_1(s)|& \le&
Ce^{\frac{\gamma s}2}\iint (\ps w)^2\frac{\rho}{1-y^2}{\mathrm{d}}y+
C e^{-\frac{\gamma s}2}\iint w^2\rho
{\mathrm{d}}y.\qquad\qquad
\end{eqnarray}
\begin{eqnarray*}
|J_2(s)| &\le&
Ce^{\frac{\gamma s}2}\iint (\ps w)^2\frac{\rho}{1-y^2}{\mathrm{d}}y+
C e^{-\frac{\gamma s}2}\iint w^2\frac{y^2\rho}{1-y^2}
{\mathrm{d}}y.
\end{eqnarray*}
Recalling the following Hardy-Sobolev estimate (see Appendix B
page 1163 in \cite{MZajm03} for the proof):
\begin{equation}\label{hs}
\iint h^2 \frac \rho{1-y^2} {\mathrm{d}}y \le C\iint h^2 \rho
{\mathrm{d}}y + C \iint (h'(y))^2 \rho(1-y^2){\mathrm{d}}y,
\end{equation}
we conclude that
\begin{eqnarray}\label{J22}
|J_2(s)| &\le& C e^{\frac{\gamma s}2} \iint (\ps
w)^2\frac{\rho}{1-y^2}{\mathrm{d}}y+ C e^{-\frac{\gamma
s}2}\iint w^2\rho {\mathrm{d}}y\nonumber\\
&&+ C e^{-\frac{\gamma
s}2}\iint \!(\py w)^2\rho(1-y^2) {\mathrm{d}}y.
\end{eqnarray}
Using the Cauchy-Schwarz inequality, we have
\begin{eqnarray}\label{J3}
|J_3(s)| \le
C e^{\frac{\gamma s}2}\iint (\ps w)^2\frac{\rho}{1-y^2}{\mathrm{d}}y+ C e^{-\frac{\gamma s}2}\iint \!(\py w)^2\rho(1-y^2) {\mathrm{d}}y.
\end{eqnarray}
From \eqref{born}, we write
\begin{eqnarray}\label{J4}
|J_4(s)|+|J_6(s)| &\le&C e^{-\gamma s}+
C e^{-\gamma s}\iint |w|^{p+1}\rho{\mathrm{d}}y.
\end{eqnarray}
In a similar way, using the hypothesis $(H_g) $ and (\ref{hs}),
we have
\begin{eqnarray}\label{J5}
|J_5(s)|&\le& Ce^{-\gamma s} \iint (\ps w)^2\frac{\rho}{1-y^2}
{\mathrm{d}}y +C e^{-\gamma s}\iint
(\py w)^2\rho(1-y^2){\mathrm{d}}y\nonumber\\
&& +Ce^{-\gamma s}\iint w^2\rho {\mathrm{d}}y+ C e^{-\gamma
s}.
\end{eqnarray}
Using \eqref{range} and (\ref{hs}), we write
\begin{eqnarray}\label{CS}
|J_7(s)| \le Ce^{-\frac{s}2}\iint (\py w)^2\rho(1-y^2)
{\mathrm{d}}y +Ce^{-\frac{s}2} \iint w^2 {\rho} {\mathrm{d}}y.
\end{eqnarray}
Finally, using (\ref{theta}), (\ref{J1}), (\ref{J22}), (\ref{J3}),
(\ref{J4}), (\ref{J5}) and (\ref{CS}), we deduce that
\begin{eqnarray}\label{Jprime}
e^{\gamma s}\frac{d}{ds}J(w(s),s) &\le &\frac{p+3}2
\Big(E_0(w(s))+I(w(s),s)\Big)\\
&&+\Big(C e^{-\frac{\gamma s}2}-\frac{p-1}{4}\Big) \iint (\py w)^2(1-y^2)\rho{\mathrm{d}}y\nonumber\\
&&+\Big(C e^{-\frac{\gamma s}2}-\frac{p+1}{2(p-1)}\Big)\iint
w^2\rho{\mathrm{d}}y\nonumber\\
&&+\Big(C e^{-\frac{\gamma s}2}-\frac{p-1}{2(p+1)}\Big)\iint |w|^{p+1}\rho{\mathrm{d}}y\nonumber\\
&&+C e^{\frac{\gamma s}2} \iint (\ps
w)^2\frac{\rho}{1-y^2}{\mathrm{d}}y+C e^{-\gamma s}.\nonumber
\end{eqnarray}
From (\ref{E}) and (\ref{Jprime}), we obtain
\begin{eqnarray}
\frac d{ds}E(w(s),s)&\le &Ce^{-2\gamma s}+\frac{p+3}2 e^{-\gamma s}E(w(s),s)\nonumber\\
&&+\Big(Ce^{-\frac{\gamma
s}2}-\frac 4{p-1}\Big) \iint (\ps w)^2 \frac{\rho}{1-y^2} {\mathrm{d}}y \nonumber\\
&&+\Big(C e^{-\frac{\gamma s}2}-\frac{p+1}{2(p-1)}\Big)e^{-\gamma
s}\iint w^2\rho{\mathrm{d}}y
\nonumber\\
&& +\Big(C e^{-\frac{\gamma s}2}-\frac{p-1}{4}\Big) e^{-\gamma s}\iint (\py w)^2(1-|y|^2) \rho {\mathrm{d}}y\\
&& +\Big(C e^{-\frac{\gamma s}2}-\frac{p-1}{2(p+1)}\Big)e^{-
\gamma s}\iint |w|^{p+1}\rho{\mathrm{d}}y . \nonumber
\end{eqnarray}
We now choose $S_0\ge 0$ large enough so that for all $s\ge
S_0$, we have
\begin{eqnarray*}
\frac{p-1}{4}-C e^{-\frac{\gamma s}2} \ge 0, \
\frac{p+1}{2(p-1)}
-C e^{-\frac{\gamma s}2} \ge 0, \ \frac{p-1}{2(p+1)}-C
e^{-\frac{\gamma s}2}\ge 0,\ \frac 1{p-1}-C e^{-\frac{\gamma
s}2}\ge0.
\end{eqnarray*}
Then, we deduce that, for all $s\ge \max(S_0,-\log T(r_0),-\log
\frac{r_0}2, -4\log r_0)$, we have
\begin{eqnarray}\label{E111}
\frac d{ds}E(w(s),s)\le Ce^{-2\gamma s}+\frac{p+3}2 e^{-\gamma s}E(w(s),s)-\frac 3{p-1} \iint (\ps w)^2 \frac{\rho}{1-y^2}
{\mathrm{d}}y.\qquad
\end{eqnarray}
This yields (i) of Proposition
\ref{prophamza}.
\medskip
(ii)
\no We finish the proof of Proposition \ref{prophamza} here. More
precisely, we
prove that there exists $S_1(p,N,M,q)\in \R$ such that, for all $ x_0\in \R^N$ and $ T_0\in (0,T(x_0)]$,
\begin{equation}\label{254}
\forall \ s\ge \max\left(-\log T_0, S_1\right)
\ \ \ H(w_{x_0,T_0}(s),s)\ge 0.
\end{equation}
We give the proof only in the case where $x_0$ is a non
characteristic point. Note that the case where $x_0$ is a
characteristic point can be done exactly as in Appendix A page 119 in \cite{MZjfa07}.
If $x_0$ is a non
characteristic point, the argument is the
same as in the corresponding part in \cite{AMimrn01}. We write the
proof for completeness. Arguing by contradiction, we assume that
there exists a non characteristic point $ x_0\in \R^N$, $ T_0\in
(0,T(x_0)]$ and $ s_1\ge \max\left(-\log T_0, S_1\right)$ such that $H(w(s_1),s_1)<0$, where
$w=w_{x_0,T_0}$.
By
definition \eqref{defH} of $H$, we write
\begin{eqnarray*}
&&H(w(s),s)\ge\mu e^{-2\gamma s} -{e^{\frac{p+3}{2}e^{-\gamma s}}}\Big(\frac1{p+1}+Ce^{-2\gamma s}\Big)\iint |w|^{p+1} \rho {\mathrm{d}}y\\
&& +e^{\frac{p+3}2e^{-\gamma s}}\left(\left(\frac 12 -C e^{-\gamma
s}\right)\iint (\ps w)^2 \rho {\mathrm{d}}y
+\left(\frac{p+1}{(p-1)^2} - Ce^{-\gamma s}\right)\iint w^2 \rho {\mathrm{d}}y\right)\\
&\ge & -\frac 2{p+1}\iint |w|^{p+1} \rho {\mathrm{d}}y,
\end{eqnarray*}
provided that $s\ge S_2(p,N,q,M)\ge S_1(p,N,q,M)$ for some $S_2(p,N,q,M)\in\R$ large enough.
Using this inequality together with the fact that $H(w(s),s)$ is
decreasing by the remark following Proposition \ref{prophamza}, we
see that the argument used by Antonini and Merle in Theorem 2 page
1147 in \cite{AMimrn01} for the equation \eqref{par} works here and
we get the blow-up criterion. This concludes the proof of
Proposition \ref{prophamza}.\Box
\section{Blow-up results related to non-characteristic points}\label{secnonchar}
Let us first introduce for all $|d|<1$ the following solitons defined by
\begin{equation}\label{defkd}
\kappa(d,y)=\kappa_0 \frac{(1-d^2)^{\frac 1{p-1}}}{(1+dy)^{\frac 2{p-1}}}\mbox{ where }\kappa_0 =
\left(\frac{2(p+1)}{(p-1)^2}\right)^{\frac 1{p-1}} \mbox{ and }|y|<1.
\end{equation}
Note that $\kappa(d)$ is a stationary solution of \eqref{eqw}, in
the particular case where $(f,g)\equiv (0,0)$ and in one space
dimension.
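To illustrate the definition \eqref{defkd}, here is a small numerical sanity check (a sketch, not part of the proof) that $\kappa(d,\cdot)$ solves the stationary equation $\L w-\frac{2(p+1)}{(p-1)^2}w+|w|^{p-1}w=0$ on $(-1,1)$; the values $p=3$ and $d=0.3$ are arbitrary illustrative choices.

```python
import numpy as np

# Illustrative parameters (any subcritical p > 1 and |d| < 1 would do).
p, d = 3.0, 0.3
kappa0 = (2 * (p + 1) / (p - 1) ** 2) ** (1 / (p - 1))

def kappa(y):
    """Soliton kappa(d, y) from (defkd)."""
    return kappa0 * (1 - d ** 2) ** (1 / (p - 1)) / (1 + d * y) ** (2 / (p - 1))

def rho(y):
    """Weight rho(y) = (1 - y^2)^{2/(p-1)} from (defro)."""
    return (1 - y ** 2) ** (2 / (p - 1))

h = 1e-4
y = np.linspace(-0.9, 0.9, 181)      # stay away from the singular endpoints

def flux(y):
    """rho * (1 - y^2) * kappa', by a central difference."""
    return rho(y) * (1 - y ** 2) * (kappa(y + h) - kappa(y - h)) / (2 * h)

# L kappa = (1/rho) * d/dy [rho * (1 - y^2) * kappa'], again by central difference.
L_kappa = (flux(y + h) - flux(y - h)) / (2 * h) / rho(y)
residual = L_kappa - 2 * (p + 1) / (p - 1) ** 2 * kappa(y) \
           + np.abs(kappa(y)) ** (p - 1) * kappa(y)
print(np.max(np.abs(residual)))      # ~ 0, up to discretization error
```

The residual vanishes up to the $O(h^2)$ discretization error, consistent with $\kappa(d,\cdot)$ being an exact stationary solution.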
\medskip
\noindent Adapting the analysis of \cite{MZjfa07} and \cite{MZcmp08}, we claim the following:
\begin{theor}[Blow-up behavior and regularity of the blow-up set on $\RR$]\label{thbb}$ $\\
(i) {\bf (Regularity related to $\RR$)} $\RR\neq \emptyset$, $\RR\cap \R^*_+$ is an open set, and $x\mapsto T(x)$ is of class $C^1$ on $\RR\cap \R^*_+$.\\
(ii) {\bf (Blow-up behavior in similarity variables)} There exist $\mu_0>0$ and $C_0>0$ such that for all $r_0\in\RR\cap \R^*_+$, there exist
$\theta(r_0)=\pm 1$ and $s_0(r_0)\ge - \log T(r_0)$ such that for all $s\ge s_0$:
\begin{equation}\label{profile}
\left\|\vc{w_{r_0}(s)}{\partial_s w_{r_0}(s)}-\theta(r_0)\vc{\kappa(T'(r_0))}{0}\right\|_{\H}\le C_0 e^{-\mu_0(s-s_0)}.
\end{equation}
Moreover, $E_0(w_{r_0}(s)) \to E_0(\kappa_0)$ as $s\to \infty$.
\end{theor}
{\bf Remark}: As stated in the introduction, this result holds also when $N=1$, with no symmetry assumptions on initial data, for all $r_0\in \R$, even $r_0=0$; when $N\ge 2$ and if $0\in\RR$, the asymptotic behavior of $w_0$ remains open.\\
{\it Proof}: As in the non-perturbed radial case (take
$(f,g)\equiv(0,0)$ in \eqref{equ}) treated in \cite{MZbsm11}, we
need to make some minor adaptations to the one-dimensional
non-perturbed case treated in \cite{MZjfa07} and \cite{MZcmp08}. It
happens that the same adaptation pattern works in the present case,
so we do not repeat it here; instead, we refer the
reader to the proof of Theorem 1 on page 358 of \cite{MZbsm11}. The
only points to check are the following:
- {\it Continuity with respect to the scaling parameter}: Due to the fact that equation \eqref{equ} is no longer invariant
under the scaling
\[
\lambda \mapsto u_\lambda(\xi, \tau) = \lambda^{\frac 2{p-1}}u(\lambda \xi, \lambda \tau),
\]
we need to understand the continuous dependence of the solutions of the following family of equations
\begin{eqnarray}\label{eqxl}
\partial^2_{t} u =\partial^2_r u+\frac{(N-1)\lambda}{x+\lambda r} \pr u+|u|^{p-1}u
+\lambda^{\frac{2p}{p-1}}f\Big(\lambda^{\frac{-2}{p-1}}u\Big)\nonumber\\ +
\lambda^{\frac{2p}{p-1}}g\Big(\lambda r,\lambda
t,\lambda^{-\frac{p+1}{p-1}}\partial_ru,\lambda^{-\frac{p+1}{p-1}}\partial_tu\Big),
\end{eqnarray}
with respect to initial data and the parameters $x\ge 0$ and $\lambda>0$ (including the limit as $\lambda \to 0$);
this is a classical estimate.
\medskip
- {\it A new statement for the trapping result}:
This
is due to the fact that equation \eqref{eqw} in similarity variables depends on
a parameter $r_0>0$ and contains new terms of order $e^{-\gamma s}$, where $\gamma$ is defined in \eqref{gamma}; in particular, it is no longer autonomous.
This is the trapping result in
our setting:
\begin{theor}{\bf (Trapping near the set of non zero stationary solutions of \eqref{eqw})}\label{thtrap}
\\
For all $\rho_0>0$, there exist positive $\epsilon_0$,
$\mu_0$ and $C_0$ such that for all $\epsilon^*\le \epsilon_0$, there exists
$s_0(\epsilon^*)$ such that if $r_0\ge \rho_0$, $s^*\ge s_0$ and $w\in C([s^*, \infty), \H)$ is a solution of equation \eqref{eqw} with
\begin{equation}\label{highenergy}
\forall s\ge s^*,\;\;E(w(s),s)\ge E_0(\kappa_0)-e^{-\frac {\gamma s}2},
\end{equation}
and
\begin{equation*}
\left\|\vc{w(s^*)}{\partial_s w(s^*)}-\omega^*\vc{\kappa(d^*,\cdot)}{0}\right\|_{\H}\le \epsilon^*
\end{equation*}
for some $d^*\in(-1,1)$ and $\omega^*=\pm 1$,
then there exists $d_\infty\in (-1, 1)$ such that
\[\left|\argth{d_\infty} - \argth{d^*}\right|\le C_0 \epsilon^*,\]
and for all $s\ge s^*$,
\begin{equation*}
\left\|\vc{w(s)}{\partial_s w(s)}-\omega^*\vc{\kappa(d_\infty, \cdot)}{0}\right\|_{\H}\le C_0 \epsilon^* e^{-\mu_0(s-s^*)}.
\end{equation*}
\end{theor}
{\it Proof}: The proof follows the pattern of the radial case treated in \cite{MZbsm11}. For that reason, we refer the reader to the Proof of Theorem 2 page 360 in that paper, and focus in the following only on how to treat the new terms generated by the perturbations $f$ and $g$ in \eqref{gen}. With respect to the pure power case in one space dimension, the difference comes from the linearization of \eqref{eqw} around the stationary solutions $\kappa(d,y)$ in \eqref{defkd}, where we see the following lower order terms:
\begin{eqnarray}
&&\left|\frac{(N-1)e^{-s}}{r_0+ye^{-s}}\py w\right|\le \frac
2{\rho_0} (N-1) e^{-s}|\py w|;\label{small}\\
&&e^{-\frac{2ps}{p-1}}\left|f\left(e^{\frac{2s}{p-1}}w\right)\right|
\le CMe^{-\frac{2(p-q)s}{p-1}}
+CMe^{-\frac{2(p-q)s}{p-1}}\big|w\big|^p;\nonumber\\
&&e^{-\frac{2ps}{p-1}}
\Big|g\Big(r_0+ye^{-s},T_0-e^{-s},e^{\frac{(p+1)s}{p-1}}\partial_yw,e^{\frac{(p+1)s}{p-1}}(\partial_sw+y\partial_y
w+\frac{2}{p-1}w)\Big)\Big|\nonumber\\
&&\le CMe^{-s}\Big(1+\big|\partial_sw\big|+\big|\partial_y
w\big|+\big|w\big|\Big),\nonumber
\end{eqnarray}
as soon as $r_0\ge \rho_0>0$ and $s\ge -\log \frac{\rho_0}2$.
For more details on the adaptation, we refer the reader to the proof
of Theorem 1 on page 358 of \cite{MZbsm11}.\Box
\section{Blow-up results related to characteristic points}\label{secchar}
The first question in this case is of course the existence of examples of initial data with $\SS \not= \emptyset$.
If the perturbation $g$ introduced in \eqref{gen} does not depend on $|x|$, then the existence of such an example
follows from the knowledge of the blow-up behavior at non-characteristic points, as in the pure power nonlinearity
case \eqref{par}. If $g$ depends on $|x|$, then we need to apply the constructive method of C\^ote and Zaag \cite{CZarxiv},
which relies fundamentally on the knowledge of the blow-up behavior near a characteristic point. For that reason, we leave
the existence issues to the end of the section, and start with the description of the blow-up features near characteristic
points. More precisely, we proceed in two sections:\\
- In Section \ref{description}, we consider arbitrary blow-up solutions having a non-zero characteristic point, and we give
a full description of its blow-up behavior and its blow-up set near this characteristic point.\\
- In Section \ref{existence}, we prove the existence of such a solution, and also give some criteria for the existence
or the non-existence of characteristic points.
\subsection{Description of the blow-up behavior and the blow-up set near a characteristic point}\label{description}
Now, given $r_0\in\SS \cap \R^*_+$, we have the same description of
the asymptotic behavior of $w_{r_0}$ as in the one-dimensional case with no
perturbations (i.e. for equation (\ref{equ}) with $(f,g)\equiv(0,0)$)
refined recently by C\^ote and Zaag in \cite{CZarxiv}. In order to state the result, let us introduce
\begin{equation}\label{solpart}
\bar \zeta_i(s) = \left(i-\frac{(k+1)}2\right)\frac{(p-1)}2\log s + \bar\alpha_i(p,k)
\end{equation}
where the sequence $(\bar\alpha_i)_{i=1,\dots,k}$ is uniquely determined by the fact that $(\bar \zeta_i(s))_{i=1,\dots,k}$ is an explicit solution with zero center of mass for the following ODE system:
\begin{equation} \label{eq:tl}
\frac 1{c_1}\dot \zeta_i = e^{ - \frac{2}{p-1} (\zeta_i - \zeta_{i-1}) } - e^{- \frac{2}{p-1} (\zeta_{i+1} - \zeta_i) },
\end{equation}
where $c_1=c_1(p)>0$ and, by convention, the boundary terms vanish, i.e. $e^{-\frac 2{p-1}(\zeta_1-\zeta_0)}\equiv e^{-\frac 2{p-1}(\zeta_{k+1}-\zeta_k)}\equiv 0$ (see Section 2 in \cite{CZarxiv} for a proof of this fact). Note that $c_1=c_1(p)>0$ is a constant appearing in system \eqref{eqz}, itself
inherited from Proposition 3.2 of \cite{MZajm10}.
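As a quick sanity check on \eqref{solpart}, one can integrate this ODE system numerically (here for $k=2$, with the boundary interaction terms for $i=1$ and $i=k$ dropped, which is the convention under which \eqref{solpart} is a solution) and observe $\zeta_2(s)-\zeta_1(s)\sim\frac{p-1}2\log s$ with conserved center of mass. This is only an illustration; the values of $p$, $c_1$ and the initial data are arbitrary.

```python
import numpy as np

p, c1, k = 3.0, 0.5, 2            # illustrative values; c1 = c1(p) in the text

def rhs(z):
    """Right-hand side of the system (eq:tl); the boundary terms
    involving zeta_0 and zeta_{k+1} are dropped."""
    dz = np.zeros(k)
    for i in range(k):
        attract = np.exp(-2 / (p - 1) * (z[i] - z[i - 1])) if i > 0 else 0.0
        repel = np.exp(-2 / (p - 1) * (z[i + 1] - z[i])) if i < k - 1 else 0.0
        dz[i] = c1 * (attract - repel)
    return dz

# Integrate in u = log s with RK4, to resolve the slow logarithmic dynamics.
z = np.array([-0.5, 0.5])          # zero center of mass
U = np.log(1e4)                    # s runs from 1 to 1e4
u, du = 0.0, 1e-3
f = lambda uu, zz: np.exp(uu) * rhs(zz)
while u < U - 1e-12:
    h = min(du, U - u)
    k1 = f(u, z); k2 = f(u + h / 2, z + h / 2 * k1)
    k3 = f(u + h / 2, z + h / 2 * k2); k4 = f(u + h, z + h * k3)
    z = z + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    u += h

print((z[1] - z[0]) / U)           # -> (p-1)/2 = 1.0, approximately
print(z[0] + z[1])                 # center of mass stays ~ 0
```

For $k=2$ the gap $D=\zeta_2-\zeta_1$ solves $\dot D=2c_1e^{-\frac2{p-1}D}$ explicitly, so the observed ratio converging to $\frac{p-1}2$ matches \eqref{solpart}.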
With this definition, we can state our result
(for the statement in one space dimension, see Theorem 6 in
\cite{MZajm10} and Theorem 1 in \cite{CZarxiv}):
\begin{theor}\label{new}
{\bf (Description of the behavior of $w_{r_0}$ where $r_0$ is
characteristic)} Consider
$r_0\in \SS\cap \R^*_+$. Then, there is $\zeta_0(r_0)\in \R$
such that
\begin{equation}\label{cprofile00}
\left\|\vc{w_{r_0}(s)}{\ps w_{r_0}(s)}
- \theta_1\vc{\d\sum_{i=1}^{k(r_0)} (-1)^{i+1}\kappa(d_i(s),\cdot)}0\right\|_{\H} \to 0\mbox{ and }E_0(w_{r_0}(s))\to k(r_0)E_0(\kappa_0)
\end{equation}
as $s\to \infty$, for some
\begin{eqnarray}
&&k(r_0)\ge 2,\label{pb}\\
&&\theta_i=\theta_1(-1)^{i+1},\ \theta_1=\pm 1\label{ei}
\end{eqnarray}
and continuous $d_i(s) = -\tanh \zeta_i(s)$ with
\begin{equation}\label{equid}
\zeta_i(s) = \bar \zeta_i(s) + \zeta_0,
\end{equation}
where $\bar \zeta_i(s)$ is introduced above in \eqref{solpart}.
\end{theor}
{\bf Remark}:
As stated in the introduction, this result holds also when $N=1$, with no symmetry assumptions on initial data, for all $r_0\in \R$, even $r_0=0$; when $N\ge2$ and if $0\in\SS$, the asymptotic behavior of $w_0$ remains open.\\
{\it Proof}: As in the one-dimensional case with no perturbations
(i.e. for equation (\ref{equ}) with $(f,g)\equiv(0,0)$),
the proof of the asymptotic behavior and the geometric
results on $\SS$ (see Theorem \ref{thgeo} below) go side by side.
Evidently the refined description given by \eqref{equid} is
obtained as in \cite{CZarxiv}. We leave the proof after the
statement of Theorem \ref{thgeo}.
\Box
Let us note that
we get the following result on the energy behavior from the asymptotic behavior
at a non-characteristic point (see (ii) of Theorem \ref{thbb}) and at a characteristic point (see Theorem \ref{new}):
\begin{coro}[A criterion for non-characteristic points]\label{corcriterion}$ $\\
For all $r_0>0$, there exist $C_3(r_0)>0$ and $S_3(r_0)\in\R$ such that:\\
(i) For all $r\in[\frac{r_0}2, \frac{3r_0}2]$ and $s\ge S_3$, we have
\[
E_0(w_r(s))\ge k(r)E_0(\kappa_0)-C_3(r_0)e^{-\gamma s}.
\]
(ii) If for some $r\in[\frac{r_0}2, \frac{3r_0}2]$ and $s\ge S_3$, we have
\[
E_0(w_r(s))<2 E_0(\kappa_0)-C_3(r_0)e^{-\gamma s},
\]
then $r\in \RR$.
\end{coro}
{\bf Remark}: With respect to the statement in one space dimension with no perturbations (Corollary 7 in \cite{MZajm10}), this statement has additional exponentially small terms. This comes from the fact that the functional $E(w(s),s)$ is no longer decreasing, and that one has to work instead with the functional $H(w(s),s)$ \eqref{defH}, which is decreasing and differs from $E(w(s),s)$ by exponentially small terms, uniformly controlled for $r\in[\frac{r_0}2, \frac{3r_0}2]$ thanks to the uniform estimates of Proposition \ref{boundedness}.\\
{\it Proof}: If one replaces $E(w(s),s)$ by $H(w(s),s)$, then the proof is
straightforward from Theorems \ref{thbb} and \ref{new} together with
the monotonicity of $H(w(s),s)$ (see \eqref{defH} and
Proposition \ref{prophamza}). Since the difference between the two functionals
is exponentially small, uniformly for $r\in[\frac{r_0}2,
\frac{3r_0}2]$ (see \eqref{defH}, \eqref{edoF} and Proposition
\ref{boundedness}), we get the conclusion of Corollary
\ref{corcriterion}.
\Box
\bigskip
Finally, we give in the following some geometric information related to
characteristic points (for the statement in one space dimension, see Theorem 1,
Theorem 2 and the following remark in \cite{MZisol10}):
\begin{theor}\label{thgeo}{\bf (Geometric considerations on $\SS$)}\\
(i) {\bf (Isolatedness of characteristic points)} Any $r_0\in \SS\cap \R^*_+$ is isolated.\\
(ii) {\bf (Corner shape of the blow-up curve at characteristic
points)} If $r_0\in \SS\cap \R^*_+$ with $k(r_0)$ solitons
and $\zeta_0(r_0)\in \R$ as the center of mass of the solitons' centers,
as shown in \eqref{cprofile00} and \eqref{equid}, then
\begin{eqnarray}
T'(r)+\theta(r)&\sim&\frac{\theta(r)\nu e^{-2\theta(r)\zeta_0(r_0)}}{|\log|r-r_0||^{\frac{(k(r_0)-1)(p-1)}2}}\label{cor1}\\
T(r)-T(r_0)+|r-r_0|&\sim&\frac{\nu
e^{-2\theta(r)\zeta_0(r_0)}|r-r_0|}{|\log|r-r_0||^{\frac{(k(r_0)-1)(p-1)}2}}\label{cor0}
\end{eqnarray}
as $r\to r_0$, where $\theta(r) = \frac{r-r_0}{|r-r_0|}$ and
$\nu=\nu(p)>0$.
\end{theor}
{\it Proof}: See below.\\
{\bf Remark}:
As stated in the remark after Theorem 3, our result holds for $N=1$, with no symmetry assumptions on initial data, for all $r_0\in \R$, even $r_0=0$; when $N\ge2$, and if $0\in\SS$, the asymptotic behavior of $w_0$ remains open.\\
{\bf Remark}: Note from (i) that the multi-dimensional version $U(x,t)=u(|x|,t)$ has a finite number
of concentric spheres of characteristic points in the set $\{\frac 1R < |x|<R\}$ for every $R>1$.
This is consistent with our conjecture in \cite{MZisol10} where we guessed that in dimension $N\ge 2$,
the $(N-1)$-dimensional Hausdorff measure of $\SS$ is bounded in compact sets of $\R^N$. Note that this
conjecture is related to the result of Vel\'azquez who proved in \cite{Viumj93}
that the $(N-1)$-dimensional Hausdorff measure of the blow-up set for the semilinear heat
equation with subcritical power nonlinearity is bounded in compact sets of $\R^N$.
\bigskip
As a consequence of our analysis, particularly the lower bound on $T(r)$ in \aref{cor0},
we have the following estimate on the blow-up speed in the backward light cone with
vertex $(r_0, T(r_0))$ where $r_0> 0$ (for the statement in one space dimension, see Corollary 3 in \cite{MZisol10}):
\begin{coro}\label{corspeed}{\bf (Blow-up speed in the backward light cone)} For all $r_0>0$,
there exists $C_4(r_0)>0$ such that for all $t\in[0, T(r_0))$, we have
\[
\frac{|\log(T(r_0)-t)|^{\frac{k(r_0)-1}2}}{C_4(r_0)(T(r_0)-t)^{\frac 2{p-1}}}\le
\sup_{|x-r_0|<T(r_0)-t}|u(x,t)|\le \frac{C_4(r_0) |\log(T(r_0)-t)|^{\frac{k(r_0)-1}2}}{(T(r_0)-t)^{\frac 2{p-1}}}.
\]
\end{coro}
{\bf Remark}: Note that when $r_0\in\RR\cap \R^*_+$, the blow-up rate of $u$ in the backward light cone with vertex $(r_0, T(r_0))$ is given by the solution of the associated ODE $u''=u^p$. When $r_0\in\SS\cap \R^*_+$, the blow-up rate is higher and quantified, according to $k(r_0)$, the number of solitons appearing in the decomposition \eqref{cprofile00}.\\
{\it Proof}: When $r_0\in\RR$, the result follows from the fact that the convergence in \eqref{profile} holds also in $L^\infty\times L^2$,
thanks to the Sobolev embedding in one dimension. When $r_0\in\SS$, see the proof of Corollary 3 of \cite{MZisol10} given in Section 3.3 of that paper.
\Box
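As a quick illustration of the remark above, one can verify numerically that $u(t)=\kappa_0(T-t)^{-\frac2{p-1}}$ solves the associated ODE $u''=u^p$; this is only a sketch, with $p=3$ and $T=1$ as arbitrary illustrative choices.

```python
import numpy as np

p, T = 3.0, 1.0                    # illustrative choices
kappa0 = (2 * (p + 1) / (p - 1) ** 2) ** (1 / (p - 1))

def u(t):
    """ODE blow-up profile kappa0 * (T - t)^{-2/(p-1)}."""
    return kappa0 * (T - t) ** (-2 / (p - 1))

h = 1e-5
t = np.linspace(0.1, 0.9, 81)
u_tt = (u(t + h) - 2 * u(t) + u(t - h)) / h ** 2   # central second difference
rel_residual = np.abs(u_tt - u(t) ** p) / u(t) ** p
print(np.max(rel_residual))        # ~ 0, up to discretization error
```

The relative residual is small uniformly on $[0.1,0.9]$, even as $u$ grows near $t=T$, reflecting that the constant $\kappa_0$ in \eqref{defkd} is exactly calibrated so that $u''=u^p$.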
\bigskip
{\it Proof of Theorems \ref{new} and \ref{thgeo}}: The proof follows the pattern of the original proof, given in \cite{MZjfa07}, \cite{MZajm10}, \cite{MZisol10} and \cite{CZarxiv}. In the following, we recall its different parts.
\bigskip
{\bf Part 1: Proof of \eqref{cprofile00} without \eqref{pb} nor \eqref{ei} and with the estimate
\begin{equation}\label{decuple}
\zeta_{i+1}(s)-\zeta_i(s) \to \infty\mbox{ as }s\to\infty
\end{equation}
instead of \eqref{equid}} (note that
\eqref{decuple}
is
meaningful only when $k(r_0)\ge 2$).
The original statement of this part is given in Theorem 2 (B) page 47 in \cite{MZjfa07} and the proof in Section 3.2 page 66 in that paper. Note that this part doesn't exclude the possibility of having $k(r_0)=0$ or $k(r_0)=1$. The adaptation is straightforward. As in the non-characteristic case above, one has to use the Duhamel formulation in the radial case, which may be derived from \cite{SSnyu98}.
\bigskip
\label{pagepart2}{\bf Part 2: Assuming that \eqref{pb} is true, we prove \eqref{ei} with the estimates
\begin{eqnarray}
|\zeta_i(s)-\bar \zeta_i(s)|&\le & C,\label{equidnew}\\
T(r)-T(r_0)+|r-r_0|&\le &\frac{C|r-r_0|}{|\log|r-r_0||^{\frac{(k(r_0)-1)(p-1)}2}}\nonumber
\end{eqnarray}
instead of \eqref{equid} and \eqref{cor0}}.
The original statement is given in Propositions 3.1 and 3.13 in \cite{MZajm10}. The reader has to read Section 3 and Appendices B and C in that paper.
The adaptation is straightforward, except for the effect of the new terms in equation \eqref{eqw},
which produce exponentially small terms in many parts of the proof (see \eqref{small}). In particular, Lemma 3.11 of \cite{MZajm10} has to be changed by adding $Ce^{-\gamma s}$ where $\gamma$ is defined in \eqref{gamma} to the right of all the differential inequalities.
\bigskip
{\bf Part 3: Proof of \eqref{pb} and the fact that the interior of $\SS$ is empty}.
The original statement is given in Proposition 4.1 of \cite{MZajm10}. The adaptation is as delicate as in \cite{MZbsm11}.
In particular, it involves ruling out the case where, locally near the origin, the blow-up set of the multi-dimensional version $U(x,t)$ is a forward light cone with vertex $(0,T(0))$
(see Lemma 4.5 page 367 in \cite{MZbsm11}). As in that paper, the proof of the non-occurrence of this case is based in particular on a local energy estimate by Shatah and Struwe \cite{SSnyu98}. For the reader's convenience, we adapt in Appendix \ref{appenergy} that energy estimate to our case \eqref{gen}, namely to perturbations of the pure power equation \eqref{par}. For the other arguments, we refer to the corresponding part in \cite{MZbsm11} (see Part 3 page 336 in that paper).
\bigskip
{\bf Part 4: Proof of Theorem \ref{thgeo} with \eqref{cor1} and \eqref{cor0} replaced by
\begin{eqnarray*}
\d\frac{1}{C_0|\log(r-r_0)|^{\frac{(k(r_0)-1)(p-1)}2}}\le& T'(r)+\frac{r-r_0}{|r-r_0|} &\le \frac{C_0}{|\log(r-r_0)|^{\frac{(k(r_0)-1)(p-1)}2}},\\
\d\frac{|r-r_0|}{C_0|\log(r-r_0)|^{\frac{(k(r_0)-1)(p-1)}2}}\le &T(r)- T(r_0)+|r-r_0|& \le \frac{C_0|r-r_0|}{|\log(r-r_0)|^{\frac{(k(r_0)-1)(p-1)}2}}.
\end{eqnarray*}
}
The analogous statement in one space dimension with no perturbations is given in Theorems 1 and 2
in \cite{MZisol10}. Thus, one needs to say how to adapt the analysis of the paper \cite{MZisol10} to the present case. As in \cite{MZbsm11}, three ingredients are needed in the proof:\\
- the trapping result stated in Theorem \ref{thtrap};\\
- the energy criterion stated in Corollary \ref{corcriterion};\\
- the dynamics of equation \eqref{eqw} around a decoupled sum of solitons performed in \cite{MZisol10} and presented in Part 3 above.
Note that we have already adapted all these ingredients to the present context. With this fact, the adaptation given in \cite{MZbsm11} works here. See Part 4 page 371 in that paper for more details.
\bigskip
{\bf Part 5: Proof of \eqref{equid}, \eqref{cor1} and \eqref{cor0}}
This part corresponds to the contributions brought in \cite{CZarxiv} in the one-dimensional case. The original statements in the one-dimensional case are given in Theorem 1.1 and Corollary 1.4 of that paper. Following Part 2, where we proved that \eqref{cprofile00} holds with \eqref{equid} replaced by \eqref{equidnew}, a crucial step in one space dimension was to prove that the solitons' centers satisfy the following ODE system for $s$ large enough:
\begin{equation}\label{eqz}
\frac 1{c_1}\dot \zeta_i = e^{ - \frac{2}{p-1} (\zeta_i - \zeta_{i-1}) } - e^{- \frac{2}{p-1} (\zeta_{i+1} - \zeta_i) }+O\left(\frac 1{s^{1+\eta}}\right)
\end{equation}
for some $\eta>0$. In \cite{CZarxiv}, we were able to use some ODE tools (particularly the Lyapunov convergence theorem) to further refine estimate \eqref{equidnew} and prove that
\[
\zeta_i(s) = \bar \zeta_i(s) + \zeta_0+o\left(\frac 1{s^\eta}\right)\mbox{ as }s \to \infty.
\]
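As a quick numerical illustration (our own sketch: we drop the $O(1/s^{1+\eta})$ remainder and take $c_1=1$, $p=3$), integrating the system \eqref{eqz} shows the gaps $\zeta_{i+1}(s)-\zeta_i(s)$ growing like $\frac{p-1}2\log s$, in agreement with the explicit solution $\bar\zeta_i(s)$:

```python
import numpy as np

def soliton_centers(k=3, p=3.0, c1=1.0, s0=1.0, s1=1.0e4, nsteps=200000):
    """Forward-Euler integration of the soliton-center system
    (1/c1) dzeta_i/ds = exp(-a(zeta_i - zeta_{i-1})) - exp(-a(zeta_{i+1} - zeta_i)),
    with a = 2/(p-1) and the convention zeta_0 = -inf, zeta_{k+1} = +inf
    (so the corresponding interaction terms vanish). The O(1/s^{1+eta})
    remainder of the true system is dropped."""
    a = 2.0 / (p - 1.0)
    zeta = np.linspace(-1.0, 1.0, k)        # arbitrary ordered initial centers
    ds = (s1 - s0) / nsteps
    for _ in range(nsteps):
        inter = np.exp(-a * np.diff(zeta))  # nearest-neighbour interactions
        push_right = np.concatenate(([0.0], inter))
        push_left = np.concatenate((inter, [0.0]))
        zeta = zeta + ds * c1 * (push_right - push_left)
    return zeta

p = 3.0
zeta = soliton_centers(p=p)
gaps = np.diff(zeta) / np.log(1.0e4)
print(gaps)  # each ratio approaches (p-1)/2 = 1.0
```

The repulsive exponential interactions drive the centers apart until each gap tracks $\frac{p-1}2\log s$, which is exactly the equal-spacing behavior of $\bar\zeta_i(s)$.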
Since we have for all $|d_1|<1$ and $|d_2|<1$
\[
\|\kappa(d_1)-\kappa(d_2)\|_{\H}\le C|\argth d_1 - \argth d_2|
\]
(see estimate (174) page 101 in \cite{MZjfa07} for a proof of this fact),
estimate \eqref{cprofile00} remains unchanged if one slightly modifies $\zeta_i(s)$ by setting $\zeta_i(s) = \bar \zeta_i(s) + \zeta_0$ which is the desired estimate in \eqref{equid}. That was the argument in one space dimension.\\
In our setting, since our perturbative terms contribute with additional exponentially decaying terms to the equation (see \eqref{small} and Part 2 above), we obtain that $\zeta_i(s)$ satisfy the same ODE system \eqref{eqz}. Thus, the refinements of \cite{CZarxiv} hold here with no need for any further adaptations, and \eqref{equid} holds.
\medskip
As for estimates \eqref{cor1} and \eqref{cor0}, let us point out that in one space dimension, they are derived in \cite{CZarxiv} as direct consequences of \eqref{equid} on the one hand, and on the other hand a small improvement of the last argument of the paper \cite{MZisol10} based on the equation in similarity variables. Since in our setting, \eqref{equid} holds and the equation in similarity variables differs from the one dimensional case with exponentially decaying terms (see \eqref{small}), the same argument holds. See Section 2 in \cite{CZarxiv}.
\subsection{Existence and non-existence of characteristic points}\label{existence}
Proceeding as in \cite{CZarxiv}, we have the following result:
\begin{theor}\label{mainth} {\bf (Existence of a solution with prescribed blow-up behavior at a characteristic point)} For any $r_0>0$ and $k\ge 2$,
there exists a blow-up solution $u(r,t)$ to equation \eqref{equ}
with $r_0\in\SS$ such that
\begin{equation}\label{cprofile0}
\left\|\vc{w_{r_0}(s)}{\ps w_{r_0}(s)} - \vc{\ds\sum_{i=1}^{k} (-1)^{i+1}\kappa(d_i(s))}0\right\|_{\H} \to 0\mbox{ as }s\to \infty,
\end{equation}
with
\begin{equation}\label{refequid1}
d_i(s) = -\tanh \zeta_i(s), \quad
\zeta_i(s) = \bar \zeta_i(s) + \zeta_0
\end{equation}
for some $\zeta_0\in \R$, where $\bar \zeta_i(s)$ is defined in \eqref{solpart}.
\end{theor}
{\bf Remark}: When $N=1$, we can take $r_0=0$. When $N\ge 2$, the multi-dimensional version $U(x,t)=u(|x|,t)$ has a sphere of characteristic points. Note also that this result uses the same argument as for Theorem \ref{new}, in particular, the analysis of the ODE system \eqref{eqz}.
If we simply want an argument for the existence of a blow-up solution with a characteristic point without caring about the number of solitons, then we have a more elementary proof which holds, however, only when $g$ does not depend on $|x|$. See the remark following Theorem \ref{thexis} below.\\
{\bf Remark}:
Note from \eqref{refequid1} and \eqref{solpart} that the barycenter of $\zeta_i(s)$ is fixed, in the sense that
\begin{equation}\label{barycenter}
\frac{\zeta_1(s)+ \dots +\zeta_k(s)}k= \frac{\bar\zeta_1(s)+ \dots +\bar\zeta_k(s)}k+\zeta_0=\zeta_0,\;\;\forall s\ge -\log T(0).
\end{equation}
Note that unlike in the one-dimensional case with a pure power nonlinearity treated in \cite{CZarxiv}, we are unable to prescribe the barycenter. Indeed, our equation \eqref{equ} is not invariant under the Lorentz transform.\\
{\bf Remark}: We are unable to say whether this solution has other characteristic points or not. In particular, we have been unable to find a solution with $\SS$ exactly equal to $\{0\}$. Nevertheless, let us remark that from the finite speed of propagation, we can prescribe more characteristic points, as follows:
\begin{coro}[Prescribing more characteristic points]\label{cormore} Let $J=\{1,...,n_0\}$ or $J=\N$ and for all $n\in J$, $r_n>0$, $T_n>0$ and $k_n \ge 2$
such that
\begin{equation}\label{necess}
r_n+T_n<r_{n+1}-T_{n+1}.
\end{equation}
Then, there exists a blow-up solution $u(r,t)$ of equation \eqref{gen}
with $\{r_n\;|\; n\in J\} \subset \SS$, $T(r_n)=T_n$ and for all $n\in J$,
\begin{equation*}
\left\|\vc{w_{r_n}(s)}{\ps w_{r_n}(s)} - \vc{\ds\sum_{i=1}^{k_n} (-1)^{i+1}\kappa(d_{i,n}(s))}0\right\|_{\H} \to 0\mbox{ as }s\to \infty,
\end{equation*}
with
\begin{equation*}
\forall i=1,\dots,k_n,\;\;d_{i,n}(s) = -\tanh \zeta_{i,n}(s),\;\;
\zeta_{i,n}(s) = \bar \zeta_i(s) + \zeta_{0,n}
\end{equation*}
for some $\zeta_{0,n}\in \R$, where $\bar \zeta_i(s)$ is defined in \eqref{solpart}.
\end{coro}
{\bf Remark}: Again, we are unable to construct a solution with $\SS = \{r_n\;|\; n\in J\}$. When $N=1$, we may take $r_n\in \R$.
\bigskip
{\it Proof of Theorem \ref{mainth} and Corollary \ref{cormore}}: First, note that thanks to condition \eqref{necess}, which asserts that the sections at $t=0$ of the backward light cones with vertices $(r_n, T_n)$ do not overlap, Corollary \ref{cormore} follows from Theorem \ref{mainth} by the finite speed of propagation. As for the proof of Theorem \ref{mainth}, we claim that it follows as in \cite{CZarxiv}, since the ingredients of that paper are available here, thanks to the adaptations we performed in the previous sections:\\
- {\bf the analysis of the ODE system \eqref{eqz}}: let us emphasize the fact that we still encounter this system in our setting. Indeed, that system appears as a projection on the null modes of the linearization of equation \eqref{eqw} around the sum of decoupled solitons, and, as we said in Part 2 page \pageref{pagepart2}, that equation differs from the pure power case only by exponentially small terms (see \eqref{small}), which are absorbed in the $O(\frac 1{s^{1+\eta}})$ in \eqref{eqz};\\
- {\bf a reduction to a finite dimensional problem}: this is done thanks to the analysis of the dynamics of the equation in similarity variables \eqref{eqw} around the sum of decoupled solitons, which we did already for the proof of the isolatedness of characteristic points (see (i) of Theorem \ref{thgeo}; see \cite{MZisol10} for the analysis in one space dimension);\\
- {\bf a topological argument to solve the finite dimensional problem}: this argument is based on a different formulation of Brouwer's Theorem. It is independent of the equation.\\
Note however that one argument of \cite{CZarxiv} does not work here: the argument that allows us to prescribe the barycenter of $\zeta_i(s)$. Indeed, that argument uses the invariance of the pure power wave equation under the Lorentz transform, which is no longer the case for equation \eqref{equ}.\Box
\bigskip
Let us give in the following a criterion about the existence of characteristic points:
\begin{theor}[Existence and generic stability of characteristic points]
\label{thexis}
$ $\\
(i) {\bf (Existence)} Let $0<a_1<a_2$ be two non-characteristic points such that
\[
w_{a_i}(s) \to \theta(a_i)\kappa(d_{a_i},\cdot)\mbox{ as }s\to \infty\mbox{ with }\theta(a_1)\theta(a_2)=-1
\]
for some $d_{a_i}$ in $(-1,1)$, in the sense \eqref{profile}. Then, there exists a characteristic point $c\in (a_1,a_2)$.\\
(ii) {\bf (Stability)} There exists $\epsilon_0>0$ such that if $\|(\tilde U_0,\tilde U_1)- (U_0, U_1)\|_{\h1\times \l2(\R^N)}\le \epsilon_0$, then $\tilde u(r,t)$, the solution of equation \eqref{equ} with initial data $(\tilde u_0,\tilde u_1)(r)=(\tilde U_0,\tilde U_1)(x)$ where $r=|x|$, blows up and has a characteristic point $\tilde c\in [a_1,a_2]$.
\end{theor}
{\bf Remark}: This statement (valid for $N\ge2$) differs from the original one (Theorem 2 in \cite{MZajm10}) in two minor, natural respects:
we take positive points $a_1$ and $a_2$ in (i), and
we use the multi-dimensional norm in (ii) (of course, from the finite speed of propagation, it is enough to take a localized norm instead). When $N=1$, we don't need the restriction $a_1>0$.\\
{\bf Remark}: If one needs a quick argument for the existence of a blow-up solution for equation \eqref{equ} with a characteristic point, then this theorem allows us to avoid the heavy machinery of \cite{MZisol10}, namely the linearization of equation \eqref{eqw} around the sum of decoupled solitons. Indeed, we have a more elementary argument, based on the knowledge of the blow-up behavior at a non-characteristic point on the one hand, and on (i) of this theorem on the other hand. However, such an argument uses the fact that solutions of the ODE \eqref{ODE} associated to \eqref{equ} are also solutions of \eqref{equ}
and this is possible only if $g$ defined in \eqref{gen} does not depend on $|x|$. For the statement with no perturbations, see Proposition 3 page 362 in \cite{MZbsm11}. For a further justification, see the Proof of Theorem \ref{thexis} below.
\medskip
{\it Proof of
Theorem \ref{thexis}}: As in \cite{MZbsm11}, there is no difficulty in adapting to the present context the proof of Theorem 2 of \cite{MZajm10} given in Section 2 of that paper, except maybe for some natural extensions to the radial case.
Concerning the second remark following Theorem \ref{thexis},
the only delicate point is to find initial data $(u_0,u_1)$ satisfying the
hypothesis of (i) in Theorem \ref{thexis}. If $g$ does not depend on $|x|$, then, any solution of the ODE
\begin{equation}\label{ODE}
U'' =|U|^{p-1}U+f(U)+g(t,0,U'),
\end{equation}
is also a solution of the PDE \eqref{equ}, and it is enough to take initial data $(u_0,u_1)$ with large plateaus of
opposite signs. If $g$ does depend on $|x|$, then this simple idea breaks down, and the existence of initial data with characteristic points holds thanks to Theorem \ref{mainth}.
\Box
\bigskip
We also have the following result which relates the existence of characteristic points to the sign-change of the solution:
\begin{theor}
{\bf (Non-existence of characteristic points if the sign is constant)}
Consider $u(r,t)$ a blow-up solution of \eqref{equ} such that $u(r,t)\ge 0$
for all $r\in (a_0,b_0)$ and $t_0\le t< T(r)$ for some real $0\le a_0<b_0$ and $t_0\ge 0$. Then, $(a_0, b_0)\subset \RR$.
\end{theor}
{\bf Remark}: When $N=1$, we don't need the restriction $a_0\ge 0$.\\
{\it Proof}: This result follows from Theorem \ref{new} above
exactly as in one space dimension with no perturbations (i.e. for
equation (\ref{equ}) with $(f,g)\equiv(0,0)$). See the proof of
Theorem 4 given in Section 4.2 in \cite{MZajm10}.\Box
\section{Further Data from Numerical Simulations}
\label{appn::data}
Here we provide the average population size and average loss obtained from every run of our numerical simulations.\footnote{Every run was for 2000 iterations.}
\subsection{Reasonable Strategy}
\begin{itemize}
\item Discrete Setting:\\
$n=500$, $T=100:$
\begin{center}
\begin{tabular}{||c c c||}
\hline
Run number & Average population size & Average Loss/T\\ [0.5ex]
\hline\hline
1 & 1544.1 & 10.07\\
\hline
2 & 1507.7 & 9.96\\
\hline
3 & 1558.7 & 10.13\\
\hline
4 & 1458.1 & 9.8\\
\hline
5 & 1456.4 & 9.82\\
\hline
6 & 1565.5 & 10.16\\
\hline
7 & 1487.6 & 9.91\\
\hline
8 & 1567 & 10.17\\
\hline
9 & 1499.5& 9.93\\
\hline
10 & 1508.3 & 9.99\\
\hline
\end{tabular}
\end{center}
\noindent
Average Population Size $= 1511.7\pm 3.7\%$.
\newline
Average Loss/T $= 9.99\pm 1.9\%$.
\item Continuum Setting ($n=500$, $T=100$):\\
\noindent
Average Population Size $=1180.9$.
\newline
Average Loss/T $= 8.89$.
\end{itemize}
\subsection{Modified Reasonable Strategy}
\begin{itemize}
\item Discrete Setting:
\begin{itemize}
\item $n=500$, $T=100$:\\
\begin{center}
\begin{tabular}{||c c c||}
\hline
Run number & Average population size & Average Loss/T\\ [0.5ex]
\hline\hline
1 & 4733.1 & 22.04\\
\hline
2 & 4735 & 22.06\\
\hline
3 & 4666.8 & 21.82\\
\hline
4 & 4827.4 & 22.38\\
\hline
5 & 4726 & 22.04\\
\hline
6 & 4685.4 & 21.87\\
\hline
7 & 4785.1 & 22.25\\
\hline
8 & 4732.8 & 22.06\\
\hline
9 & 4681.4 & 21.89\\
\hline
10 & 4721.2 & 22.02\\
\hline
\end{tabular}
\end{center}
\smallskip
\noindent
Average Population Size $= 4747.1\pm 1.7\%$.
\newline
Average Loss/T $= 22.1\pm 1.7\%$.
\item $n=500$, $T=200$:\\
\begin{center}
\begin{tabular}{||c c c||}
\hline
Run number & Average population size & Average Loss/T\\ [0.5ex]
\hline\hline
1 & 6972.6& 32.39\\
\hline
2 & 7194.9& 33.17\\
\hline
3 & 7210.3& 33.19\\
\hline
4 & 7148.8& 33.01\\
\hline
5 & 7336.1& 33.67\\
\hline
6 & 7085.7& 32.79\\
\hline
7 & 7153.2& 32.97\\
\hline
8 & 7119.2& 32.94\\
\hline
9 & 7195.2& 33.11\\
\hline
10 & 7086.6 & 32.79\\
\hline
\end{tabular}
\end{center}
\smallskip
\noindent
Average Population Size $= 7154.4\pm 2.6\%$.
\newline
Average Loss/T $= 33.03\pm 2\%$.
\item $n=500$, $T=300$:\\
\begin{center}
\begin{tabular}{||c c c||}
\hline
Run number & Average population size & Average Loss/T\\ [0.5ex]
\hline\hline
1 & 8963.4& 41.25\\
\hline
2 & 8930.5& 41.21\\
\hline
3 & 8937& 41.17\\
\hline
4 & 8744.5& 40.49\\
\hline
5 & 8889.2& 41.07\\
\hline
6 & 8798.6& 40.75\\
\hline
7 & 8854.8& 40.89\\
\hline
8 & 8783.7& 40.68\\
\hline
9 & 8682.9& 40.34\\
\hline
10 & 8705.3& 40.37\\
\hline
\end{tabular}
\end{center}
\smallskip
\noindent
Average Population Size $= 8823.2\pm 1.6\%$.
\newline
Average Loss/T $= 40.8\pm 1.2\%$.
\item $n=500$, $T=400$:\\
\begin{center}
\begin{tabular}{||c c c||}
\hline
Run number & Average population size & Average Loss/T\\ [0.5ex]
\hline\hline
1 & 10554.7& 48.4\\
\hline
2 & 10281& 47.22\\
\hline
3 & 10060.9& 46.52\\
\hline
4 & 10168.1& 47\\
\hline
5 & 10748.9& 48.9\\
\hline
6 & 10295.3& 47.39\\
\hline
7 & 10185.9& 46.86\\
\hline
8 & 10094.6& 46.75\\
\hline
9 & 10555.1& 48.3\\
\hline
10 & 10185.3& 46.99\\
\hline
\end{tabular}
\end{center}
\noindent
\smallskip
Average Population Size $= 10404.9 \pm 3.4\%$.
\newline
Average Loss/T $= 47.71\pm 2.5\%$.
\item $n=500$, $T=500$:\\
\begin{center}
\begin{tabular}{||c c c||}
\hline
Run number & Average population size & Average Loss/T\\ [0.5ex]
\hline\hline
1 & 11186.2& 52.09\\
\hline
2 & 11268.1& 52.38\\
\hline
3 & 11530& 53.44\\
\hline
4 & 12209.8& 55.8\\
\hline
5 & 12198.1& 55.63\\
\hline
6 & 12027.3& 55.07\\
\hline
7 & 11710.77& 54.03\\
\hline
8 & 11372.4& 52.78\\
\hline
9 & 11493.2& 53.27\\
\hline
10 & 11378.5& 52.77\\
\hline
\end{tabular}
\end{center}
\smallskip
\noindent
Average Population Size $= 11698\pm 4.4\%$.
\newline
Average Loss/T $= 53.95\pm 3.5\%$.
\end{itemize}
In summary:\\
\begin{center}
\begin{tabular}{||c c c c||}
\hline
n & T & Average population size & Average Loss/T\\ [0.5ex]
\hline\hline
500 & 100& $4747.1\pm 1.7\%$ & $22.1\pm 1.7\%$\\
\hline
500 & 200& $7154.4\pm 2.6\%$ & $33.03\pm 2\%$\\
\hline
500 & 300& $8823.2\pm 1.6\%$ & $40.8\pm 1.2\%$\\
\hline
500 & 400& $10404.9\pm 3.4\%$& $47.71\pm 2.5\%$\\
\hline
500 & 500& $11698\pm 4.4\%$& $53.95\pm 3.5\%$\\
\hline
\end{tabular}
\end{center}
\item Continuum Setting ($n=500$, $T=100$):\\
\noindent
Average Population Size $= 4484.8$.
\newline
Average Loss/T $= 20.85$.
\end{itemize}
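One plausible reading of the $\pm$ figures in the tables above is the largest relative deviation of a single run from the mean of the ten runs. The helper below is a hypothetical reconstruction of that convention (the convention actually used may differ slightly), applied to the Average Loss/T column for $n=500$, $T=100$:

```python
# Hypothetical reconstruction of the summary statistics: the reported value
# as the mean over the runs, and the +-x% figure as the largest relative
# deviation of any single run from that mean. The convention actually used
# in the paper may differ slightly.
def summarize(runs):
    mean = sum(runs) / len(runs)
    spread = 100.0 * max(abs(x - mean) for x in runs) / mean
    return mean, spread

# the ten Average Loss/T values for n=500, T=100 (modified strategy)
loss_runs = [22.04, 22.06, 21.82, 22.38, 22.04, 21.87, 22.25, 22.06, 21.89, 22.02]
mean, pct = summarize(loss_runs)
print(round(mean, 2), round(pct, 2))
```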
\section{Deferred Calculation from Section 5}
\label{appn::loss_prob_calculation}
\begin{proof} (Final calculation for Theorem~\ref{thm::lb-loss-two_sex})~
\begin{align*}
\operatorname{Pr}[{\mathcal E}] \cdot\left(1-\tau T e^{-\frac{\delta^2 nw^2}{6T^2}}-\tau e^{-\frac{\epsilon^2 n}{4}}\right)
& \ge \Big(1-e^{-\frac{\delta^2nw}{6T}}\Big)^{\tau\frac{T}{w}}\cdot\Big(1-\tau T e^{-\frac{\delta^2nw^2}{6T^2}}-\tau e^{-\frac{\epsilon^2 n}{4}}\Big)\\
&\geq \Big[1-\Big(\frac{1}{3n^cT\tau}\Big)^{4\sqrt{T}}\Big]^{4\tau\sqrt{T}}\cdot\Big(1-\frac{1}{3n^c}-\frac{1}{3n^{c}}\Big)\\
&\geq \Big[1 - 4\tau\sqrt{T}\Big(\frac{1}{3n^cT\tau}\Big)^{2} \Big] \cdot\Big(1-\frac{1}{3n^c}-\frac{1}{3n^{c}}\Big) \\
&\geq \Big[1 - \Big(\frac{1}{3n^c}\Big)^{4 \sqrt{T}}\Big]\cdot\Big(1-\frac{2}{3n^c}\Big)
\geq \Big( 1 - \frac{1}{n^c}\Big).
\end{align*}
\end{proof}
\section{Deferred Proofs from Section \ref{sec::prelim}}
\label{appn::prelim}
\begin{proof} (Of Lemma~\ref{lem::negative_dependence_two_sex})~
Let $N=\max\{N_1,N_2\}.$
Consider any subset $S\subseteq[n]$ where $|S|=k$. W.l.o.g.\ let $S=[k]$. Then,
\begin{align*}
E\Big[\prod_{i\in S} X_i\Big] &= \operatorname{Pr}\Big[{\prod_{i\in S} X_i=1}\Big]=\pr{X_1=1,X_2=1,\ldots, X_k=1}\\
&=\pr{X_1=1}\cdot \pr{X_2=1|X_1=1}\cdots\pr{X_k=1|X_1=1,X_2=1,\ldots, X_{k-1}=1}\\
&=\frac{r}{N}\cdot\frac{r-1}{N-1}\cdots\frac{r-k+1}{N-k+1}.
\end{align*}
Hence,
$$E\Big[\prod_{i\in S} X_i\Big]\leq \Big(\frac{r}{N}\Big)^k,$$
while
$$\prod_{i\in S} E[X_i]=\Big(\frac{r}{N}\Big)^k.$$
Similarly,
\begin{align*}
E\Big[\prod_{i\in S} (1-X_i)\Big] &= \operatorname{Pr}\Big[{\prod_{i\in S}(1-X_i)=1}\Big]=\pr{X_1=0,X_2=0,\ldots, X_k=0}\\
&=\pr{X_1=0}\cdot \pr{X_2=0|X_1=0}\cdots\pr{X_k=0|X_1=0,X_2=0,\ldots, X_{k-1}=0}\\
&=\frac{N-r}{N}\cdot\frac{N-r-1}{N-1}\cdots\frac{N-r-k+1}{N-k+1}.
\end{align*}
Hence,
$$E\Big[\prod_{i\in S}(1-X_i)\Big]\leq \Big(\frac{N-r}{N}\Big)^k,$$
while
$$\prod_{i\in S} E[1-X_i]=\Big(\frac{N-r}{N}\Big)^k.$$
Thus the set $\{X_i\}$ is negative cylinder dependent.
By \cite[Theorem 3.4]{PanconesiSrinivasan} with $\lambda=1$, Chernoff bounds for sums of independent random variables apply to the sums of negative cylinder dependent random variables as well.
This concludes the proof.
\end{proof}
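The lemma can also be probed numerically. The following Monte Carlo sketch (the parameters are ours, chosen arbitrarily) samples $r$ agents without replacement and compares the empirical upper tail of $\sum X_i$ with the Chernoff bound $e^{-\delta^2\mu/3}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo illustration (parameters are made up): X_i indicates that
# person i, out of n people in a pool of size N, is among the r agents
# chosen uniformly without replacement. The X_i are negative cylinder
# dependent, so the Chernoff upper-tail bound exp(-delta^2 mu / 3)
# should dominate the empirical tail probability.
def tail_vs_chernoff(n=200, N=500, r=150, delta=0.5, trials=5000):
    mu = n * r / N                           # E[sum X_i]
    hits = 0
    for _ in range(trials):
        chosen = rng.choice(N, size=r, replace=False)
        if np.count_nonzero(chosen < n) >= (1 + delta) * mu:
            hits += 1
    return hits / trials, np.exp(-delta**2 * mu / 3.0)

emp, bnd = tail_vs_chernoff()
print(emp, bnd)
```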
\begin{proof} (Of Lemma~\ref{lem::chernoff_bound})~
First we prove that
\begin{align*}
\operatorname{Pr}\left[\sum X_i \geq (1 + \delta) \overline{\mu} \right] \leq e^{- \frac{\delta^2 \overline{\mu}}{3}}.
\end{align*}
Let $\overline{\mu}=(1+\theta)\mu$. By Lemma \ref{lem::negative_dependence_two_sex},
\begin{equation}
\label{eqn::chernoff_with_upper_bounded_mean}
\begin{aligned}
\operatorname{Pr}\left[\sum X_i \geq (1 + \gamma) \mu \right] \leq e^{- \frac{\gamma^2 \mu}{3}}.
\end{aligned}
\end{equation}
Set $(1+\gamma)= (1+\theta)(1+\delta)$. Then, from equation \eqref{eqn::chernoff_with_upper_bounded_mean} we obtain,
\begin{equation*}
\begin{aligned}
\operatorname{Pr}\left[\sum X_i \geq (1 + \delta) \overline{\mu} \right] &\leq \exp\bigg[- \frac{(\theta+\delta+\theta\cdot\delta)^2 \mu}{3}\bigg]\leq \exp\bigg[- \frac{(\delta+\theta\cdot\delta)^2 \mu}{3}\bigg]\\
&\leq \exp\bigg[- \frac{\delta^2(1+\theta)^2 \mu}{3}\bigg]\leq \exp\bigg[- \frac{\delta^2(1+\theta) \mu}{3}\bigg]\leq \exp\bigg[- \frac{\delta^2 \overline{\mu}}{3}\bigg]
\end{aligned}
\end{equation*}
which proves the claim.
We will now prove that
\begin{align*}
\operatorname{Pr}\left[\sum X_i \leq \mu - \delta \overline{\mu} \right] \leq e^{- \frac{\delta^2 \overline{\mu}}{2}}.
\end{align*}
By Lemma \ref{lem::negative_dependence_two_sex},
\begin{align*}
\operatorname{Pr}\left[\sum X_i \leq \mu - \delta \mu \right] \leq e^{- \frac{\delta^2 \mu}{2}}.
\end{align*}
Let $\gamma=\frac{\overline{\mu}}{{\mu}}$. Note that $\gamma\geq1$. Then,
\begin{equation*}
\begin{aligned}
\operatorname{Pr}\left[\sum X_i \leq \mu-\delta\overline{\mu} \right]&= \operatorname{Pr}\left[\sum X_i \leq \mu-\gamma\delta\mu \right] \leq \exp\bigg[- \frac{(\gamma\delta)^2 \mu}{2}\bigg]\leq \exp\bigg[- \frac{\delta^2\gamma\overline{\mu}}{2}\bigg]\leq \exp\bigg[-\frac{\delta^2\overline{\mu}}{2}\bigg]\\
\end{aligned}
\end{equation*}
which completes the proof.
\end{proof}
\section{Deferred Proofs from Section \ref{sec::upper_bound}}
\begin{proof} (Of Lemma~\ref{lem::ind-bound}.)~
We complete the sketch proof by bounding the failure probability. Per time step,
Theorems~\ref{thm::lb-size}, \ref{thm::total_size_upper_bound}, \ref{type1_upper}, \ref{type2_upper}
all have failure probability of at most $1/n^{2c+1}$, Theorem~\ref{thm::imbalance_bound} has
failure probability at most $2/n^{2c+1}$,
and Theorem~\ref{thm::initialization}, which is
applied once,
has failure probability at most $1/n^{c+1}$.
Theorem~\ref{thm::last_strip_size_upper_bound}
does not introduce any additional possibility of
failure.
Multiplying the per-step bounds by the $n^c$ possible time steps and adding the one-time initialization bound gives a total failure probability of at most $7/n^{c+1} < 1/n^c$.
\end{proof}
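The union-bound bookkeeping above can be replayed mechanically; the following sketch (with an arbitrary admissible pair $n$, $c$) checks the arithmetic:

```python
from fractions import Fraction

# Union-bound bookkeeping from the proof above: per time step, four theorems
# fail with probability at most 1/n^(2c+1) each and one with at most
# 2/n^(2c+1); the initialization bound 1/n^(c+1) is paid once.
def total_failure(n, c):
    per_step = Fraction(4 + 2, n**(2 * c + 1))
    return n**c * per_step + Fraction(1, n**(c + 1))

n, c = 100, 1
print(total_failure(n, c) == Fraction(7, n**(c + 1)))   # True
print(total_failure(n, c) < Fraction(1, n**c))          # True
```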
\subsection{Upper Bound on Loss due to a Match}
\label{appn::loss_from_match}
\begin{proof} (Of Lemma~\ref{lem::loss_in_strip}.)~
Consider an agent (Agent $1$) at value $v$ and time $t$. Suppose they match with another agent (Agent $2$) who is present in the same strip.
The worst location for Agent 2 is to be on the low value strip boundary, and on this boundary to be at
one of the endpoints.
\noindent
\emph{Type $1$ strip}.
If Agent $2$ is at the top endpoint,
Agent $1$ obtains utility $w\cdot T-t$, where $w$ is the value at the top endpoint.
We can see that $w\ge v- \sqrt{T} -2t$ (move from $v$ horizontally to the lower boundary, a distance of at most $\sqrt{T}$ and then move up to the $t=0$ location, which subtracts $2t$ from the value).
Thus the utility Agent $1$ receives is at least
$(T-t)(v- \sqrt{T} -2t)$. Therefore the loss is at most $t(v+2T+\sqrt{T}) \le 4tT + 2T\sqrt{T}$.
If Agent $2$ is at the lower endpoint of a Type 1 strip, we argue as follows.
The $v\cdot T-t$ product is equal at the two endpoints of a boundary,
and therefore the loss is greatest at the top endpoint,
for the utility garnered by Agent $1$ would be $w\cdot (T-t)$ and not $w\cdot T$,
whereas at the bottom endpoint the garnered utility is $2T\cdot w/2 =wT$.
\smallskip
\noindent
\emph{Type $2$ Strip}.
We define the following $t$ values:
$a$ is the value for the left end of the top boundary of the strip,
$b$ the value for the left end of the bottom boundary,
and $c$ the value for the right end of the bottom boundary.
Then $a\le (2t+T-v)/2$, $b \leq 2a+\sqrt{T}$, and $c = T/2 + b \leq T/2 + 2a + \sqrt{T}$.
If Agent $2$ is at the lower endpoint, then
Agent 1 would receive utility $2T\cdot (T-c) \ge 2T(v -T/2 -2t -\sqrt{T})$.
Thus the loss is at most $T(T-v) + 4tT + 2T\sqrt{T}
\le 4tT + 2T\sqrt{T}$.
If Agent $2$ is at the top endpoint and Agent 1 is older than Agent 2,
then Agent $1$ receives utility $T(T-t)$.
As we are in a Type 2 strip, $(v-T)\le 2t$ or $v\le T+2t$.
So Agent $1$ incurs a loss of at most $vT-T(T-t) \le (T+2t)T - T(T-t) \le 3tT$.
While if Agent $1$ is no older than Agent $2$,
then Agent $1$ receives utility $T(T-b)\ge T(v - 2t - \sqrt{T})$.
Thus the loss is at most $vT - T(v - 2t - \sqrt{T}) = 2Tt + T\sqrt{T}$.
\end{proof}
\subsection{Lower Bound on the Total Population}
\label{appn::lower_bound_on size}
\begin{proof} (Of Theorem~\ref{thm::lb-size}.)~
The agents enter with one of $T$ values chosen uniformly at random and are equally likely to be men or women. Hence, for all $n^c$ time steps, for each value $v$,
$$\operatorname{Pr}\Big[\text{At most $\frac{n(1+\epsilon)}{2T}$ men enter with value $v$}\Big]\geq 1- n^c Te^{-\frac{\epsilon^2n}{6T}}.$$
Call this event $\mathcal E$. Henceforth we condition on $\mathcal E$.
Let's consider those agents that enter at times in the range $[\tau-\sqrt{T}+1,\tau]$ for some $\tau \le n^c$. We want to lower bound the number of these agents who are present in the pool for the match at time $\tau$.
In fact, henceforth we will only consider men with values in the range $[T+\sqrt{T},2T)$. Among these men, consider those who have been in the pool for $t$ time steps, where $0\le t < \sqrt{T}$.
Let $p_i$ be the probability that, during their $t$th time step, the men in strip $i$ are offered a match in their own strip (we suppress the dependence on $t$ in the notation). Even if all these men were still present in the matching pool,
\begin{align*}
\operatorname{Pr}\Big[\text{\# of these men matched in strip $i$ at age $t$ } \le \frac{n(1+\delta)(1+\epsilon)\cdot p_i\cdot w_i}{2T} \Big]\geq 1-e^{-\frac{\delta^2np_iw_i}{6T}},
\end{align*}
where $w_i$ is the horizontal width of strip $i$ occupied by these men when aged $t$.
For every Type 1 strip, $w_i \le \sqrt{T}$.
For the one Type 2 strip, since all values are
at least $T+\sqrt{T}$,
for ages up to $\sqrt{T}$, $w_i \le \sqrt{T}$.
By applying $\overline{\mu} = \frac{n(1 + \epsilon) \max\{p_i, \frac{1}{T}\} \sqrt{T}}{2 T}$ in Lemma~\ref{lem::chernoff_bound}, it follows that:
\begin{align*}
\operatorname{Pr}\Big[\text{\# of these men matched in strip $i$ at age $t$ } \le \frac{n(1+\delta)(1+\epsilon)\cdot \max\{p_i, \frac{1}{T}\}}{2\sqrt{T}} \Big]&\geq 1-e^{-\frac{\delta^2n \max\{p_i, \frac{1}{T}\} \sqrt{T}}{6T}} \\
&\geq 1-e^{-\frac{\delta^2n}{6T^{1.5}}}.
\end{align*}
The sum of the match probabilities $p_i$ is at most $1$. Notice that at any fixed time we only need to consider $\sqrt{T}$ strips, because at any time step the men we are considering occupy only $\sqrt{T}$ many strips. This implies $\sum \max\{p_i, \frac{1}{T}\} \leq 1 + \frac{1}{\sqrt{T}}$. Therefore,
$$\operatorname{Pr}\Big[\begin{array}{l}\text{\# of these men being matched} \\ \text{over all the strips at age $t$}\end{array} \le \frac{ (1+\frac{1}{\sqrt{T}})n(1+\delta)(1+\epsilon)}{2\sqrt{T}}\Big]\geq 1-\sqrt{T}\cdot e^{-\frac{\delta^2n}{6T^{1.5}}}.$$
Hence, we can bound the probability of the number of men who entered at time $\tau -\Delta+1$ and left by time $\tau$, for any $\Delta \le \sqrt{T}$, as follows:
$$\operatorname{Pr}\Big[\begin{array}{l}\text{\# men being matched} \\ \text{in their first $\Delta$ steps}\end{array} \le \frac{ (1+\frac{1}{\sqrt{T}})n\Delta(1+\delta)(1+\epsilon)}{2\sqrt{T}}\Big]\geq 1- \Delta T\cdot e^{-\frac{\delta^2n}{6T^{1.5}}}.$$
Consequently, we can bound the probability for the number of men that enter in the time interval $[\tau -\sqrt{T}+1,\tau-1]$ and are matched no later than time $\tau-1$ as follows, where
we sum over all $\tau \le n^c$:
\begin{equation*}
\begin{aligned}
&\operatorname{Pr}\bigg[\begin{array}{l}\text{\# men who entered and were} \\ \text{matched in a $\sqrt{T}-1$ time window}\end{array} \le \frac{(1+\frac{1}{\sqrt{T}})n(1+\delta)(1+\epsilon)(\sqrt{T}-1)}{4} \bigg] \geq 1- \frac 12 n^c T^{1.5} e^{-\frac{\delta^2n}{6T^{1.5}}}.
\end{aligned}
\end{equation*}
We set $\delta=\Big[\frac{6T^{1.5}}{n} \ln \big(10n^{2c+1}n^c T^{1.5}\big)\Big]^{1/2}$
and $\epsilon=\Big[\frac{6T}{n}\ln\big(20n^{2c+1}n^c T\big)\Big]^{1/2}$. By constraint~\eqref{eqn::constraints}, $c\geq 1$, $400 \leq T\leq n$, and $n\geq 96T^2(3c+3)\ln n$,
therefore $\delta\leq 1/4$ and $\epsilon\leq {1}/{64}$.
This yields the bound:
\begin{align*}
\operatorname{Pr}\Big[\begin{array}{l}\text{\# men who entered in a $\sqrt{T}-1$} \\ \text{ window being matched }\end{array} \ge \frac{\frac{65}{64} \cdot \frac 54 n\sqrt{T}} {4} \Big]
\geq 1-\frac{1}{20n^{2c+1}}.
\end{align*}
Since we have been conditioning on $\mathcal E$, this bound holds with probability at least $1-\frac{1}{10n^{2c+1}}$.
The same bound applies to the women.
Recalling that we excluded the men with values less than $T+\sqrt{T}$,
this yields the following lower bound on the total population size, throughout this $n^c$ time period:
$$n\sqrt{T} - \frac{n(1 + \epsilon)}{2\sqrt{T}} - 0.635n\sqrt{T}\geq \frac 13 n\sqrt{T},$$
with probability at least $1-\frac{1}{5n^{2c+1}}.$
\end{proof}
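The parameter choices in the proof can be sanity-checked numerically. The sketch below (an illustration only) takes $c=1$, $T=400$ and the smallest $n$ with $n\ge 96T^2(3c+3)\ln n$, obtained by fixed-point iteration, and confirms that $\delta\le 1/4$ and $\epsilon\le 1/64$:

```python
import math

# Sanity check (not part of the proof) of the settings of delta and epsilon
# in the proof of the theorem above, for the admissible choice c = 1, T = 400
# and the smallest n satisfying n >= 96 T^2 (3c+3) ln n.
c, T = 1, 400
n = 10.0
for _ in range(100):                 # fixed-point iteration for n = 96 T^2 (3c+3) ln n
    n = 96 * T**2 * (3 * c + 3) * math.log(n)

delta = math.sqrt(6 * T**1.5 / n * math.log(10 * n**(2 * c + 1) * n**c * T**1.5))
eps = math.sqrt(6 * T / n * math.log(20 * n**(2 * c + 1) * n**c * T))
print(delta, eps)  # both well below 1/4 and 1/64 respectively
```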
\subsection{Upper Bound on The Total Population}
\label{appn::total_upper}
\begin{proof} (Of Theorem~\ref{thm::total_size_upper_bound}.)~
Let $P(t)$ be the total population at the start of time step $t$. Let $N$ be the total number of strips. By Constraint~\ref{eqn::constraints}, $T\geq 676$, so $N\leq \sqrt{T}+\log_2 \sqrt{T} + 1 \leq 5\sqrt{T}/4.$
If $P(t)\le \frac{3}{2}nN$, then $P(t+1) \le \frac{3}{2}nN+n$.
So we will only consider the case that $P(t)>\frac{3}{2}nN$.
In this case, the average strip population at the start of step $t$ is more than $\frac{3}{2}n$.
Next, we upper bound the number of men in the population; the same bound applies to the number of women.
By $H(t)$, clause~\ref{itm::pop-imb},
the excess of men over women in each strip is
at most $n/25\sqrt{T}$ except for the last Type $2$ strip.
So the excess over all these $N - 1$ strips is at most $n(N-1)/25\sqrt{T}$. For the last Type $2$ strip, the population is less than $60n / \sqrt{T}$ which is smaller than $40 P(t) / 676$ as $T \geq 676$.
Consequently, there are at most $P(t)/2 + nN/50\sqrt{T} + 20 P(t) / 676 \le 11P(t)/20$
men in the total population.
Let $s_i$ denote the population of the $i$-th strip, and let $\mu_i$ denote the number of matches in strip $i$. Its expectation is given by
$$E[\mu_i] =\frac{(\text{ number of women in strip }i) \times (\text{ number of men in strip } i)}{\text{number of men in the whole population}}.$$
The denominator is at most $\tfrac{11}{20} P(t)$ and at least $\tfrac{1}{2} P(t)$. The numerator is minimized when the number of women and men in the strip are as far apart as possible. So, for the strips other than the last Type $2$ strip, the numerator is at least
$(s_i/2 + n/50\sqrt{T})(s_i/2 - n/50\sqrt{T}) =
s_i^2/4 - n^2/2500T^2$.
The numerator is maximized when the numbers of women and men are equal. Therefore,
\begin{align}
\label{eqn::bounds-on-Pt}
\frac{s_i^2/4 - n^2/2500T}{\frac{11}{20} P(t)}\leq E[\mu_i] \leq \frac{s_i^2}{2 P(t)}.
\end{align}
Consider an indicator random variable for each man in strip $i$, which is $1$ if that man gets matched. By Lemma \ref{lem::negative_dependence_two_sex} we can use a Chernoff bound to obtain:
\begin{align}\operatorname{Pr}\Big[\text{number of matches in strip } i \leq E[\mu_i](1-\epsilon)\Big]\leq e^{-\epsilon^2E[\mu_i]/2}. \label{ineq::chernoff_upper_total}
\end{align}
For $E[\mu_i] \ge \alpha n / \sqrt{T}$,
\begin{align*}
\operatorname{Pr}\big[\text{number of matches in strip } i \leq E[\mu_i] (1-\epsilon)\big]\leq e^{-\epsilon^2E[\mu_i] /2} \leq e^{-\epsilon^2 \alpha n / (2\sqrt{T})}.
\end{align*}
Let $\epsilon = \sqrt{\frac{2\sqrt{T}}{\alpha n}\ln (N n^{2c + 1}) }$. Then $\operatorname{Pr}\big[\text{number of matches in strip } i \leq E[\mu_i](1-\epsilon)\big] \leq \frac{1}{N n^{2c + 1}}$.
For $E[\mu_i] < \alpha n / \sqrt{T}$, set $\epsilon = \frac{\theta}{E[\mu_i]}$; then \eqref{ineq::chernoff_upper_total} becomes
\begin{align*}
\operatorname{Pr}\big[\text{number of matches in strip } i \leq E[\mu_i] - \theta \big]\leq e^{-\theta^2/(2 E[\mu_i] )} \leq e^{-\theta^2 \sqrt{T}/(2 \cdot \alpha n )}.
\end{align*}
Let $\theta = \sqrt{\frac{2 \alpha n}{\sqrt{T}} \ln (N n^{2c + 1})} = \alpha \epsilon \frac{n}{\sqrt{T}}$. Then $\operatorname{Pr}\big[\text{number of matches in strip } i \leq E[\mu_i] - \theta \big]\leq \frac{1}{N n^{2c + 1}}$.
Let $\texttt{NL}$ be the set of all strips except for the last Type $2$ strip. Then, with probability at least $1 - \frac{1}{n^{2c+1}}$, the number of matches is larger than $(1 - \epsilon) \sum_{i \in \texttt{NL}} E[\mu_i] - N \theta$. In addition, by \eqref{eqn::bounds-on-Pt}, $\sum_{i \in \texttt{NL}} E[\mu_i] $ is lower bounded by:
\begin{align*}
\sum_{i \in \texttt{NL}} \frac{s_i^2/4 - n^2/2500T}{\tfrac {11}{20} P(t)} &\geq \sum_{i \in \texttt{NL}} \frac{\frac{1}{4} \left(\frac{9 P(t)}{10N}\right)^2 - n^2 /2500T}{\tfrac {11}{20} P(t)} \\
&\geq \left(\frac{81}{220} \frac{P(t)}{N} - \frac{n^2 N}{1375 T P(t)}\right) \\
& \geq \left(\frac{243}{440} n - \frac{2n}{4125 T}\right) \\
& \geq \frac{11}{20} n.
\end{align*}
The first inequality follows as $\sum_{i \in \texttt{NL}} s_i \ge \frac{9}{10}P(t)$, and the next to last inequality follows as $P(t) \geq \frac{3}{2}n N$. Let $\alpha = 0.4$. Since $\epsilon \leq \frac{1}{22}$, as $n \geq 2420 \sqrt{T} (2c + 2) \ln n$, and $N \theta \leq \frac{5}{4} \alpha \epsilon n$, as $N \leq \frac{5}{4}\sqrt{T}$ and $\theta = \alpha \epsilon n / \sqrt{T}$,
\begin{align*}
(1 - \epsilon) \sum_{i \in \texttt{NL}} E[\mu_i] - N \theta \geq (1 - \epsilon) \frac{11}{20} n - \frac{5}{4} \alpha \epsilon n \geq \frac{n}{2}.
\end{align*}
This means the total number of people matched in the market is at least $n$, matching the number of people entering; this completes the proof.
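As an illustrative sanity check on these constants (not part of the proof), one can verify numerically that the chosen $\alpha$ and the constraint on $n$ do yield $\epsilon \le 1/22$ and at least $n/2$ matches; the particular values of $c$, $T$ and $n$ below are assumptions chosen to satisfy the stated constraints.

```python
import math

# Illustrative parameters (assumptions): c = 1, T = 676, and n large
# enough that n >= 2420*sqrt(T)*(2c+2)*ln(n), as required in the proof.
c, T = 1, 676
n = 4.0e6
assert n >= 2420 * math.sqrt(T) * (2 * c + 2) * math.log(n)

alpha = 0.4
N = 5 * math.sqrt(T) / 4            # upper bound N <= (5/4) sqrt(T)
eps = math.sqrt(2 * math.sqrt(T) / (alpha * n) * math.log(N * n ** (2 * c + 1)))
assert eps <= 1 / 22                # the bound used in the text

# (1 - eps) * (11/20) * n - (5/4) * alpha * eps * n >= n / 2
lhs = (1 - eps) * 0.55 * n - 1.25 * alpha * eps * n
assert lhs >= n / 2
```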
\hide{
The upper bound on $\sum_{i\in L}(1-\epsilon)\mu_i$ is minimized when every $s_i$ is equal, i.e.\ $s_i=\frac{19P(t)}{20N_l}$, where $N_l$ is the number of strips in $L$.
Recall that for $i\in L$, $s_i \ge n/\sqrt{T}$; also, $P(t) \le \frac{3}{2}nN+n$.
Thus,
$$\exp[-(\epsilon^2 \mu_i)/2)] \le \exp\left[-\epsilon^2 \frac{2}{5} n^2 \frac{1}{\frac{3}{2}nN+n}\right]
\le \exp\left[-\frac{\frac {4}{15}\epsilon^2\frac nN} {1 + \frac {2}{3N}}\right].$$
So, by a union bound,
$$\pr{\text{ total number of matches } \ge N_l\times \frac{2P(t)(1-\epsilon)}{5N_l^2}=\frac{2 P(t)(1-\epsilon)}{5 N_l}} \ge 1 - N_l \exp\left[-\frac{\frac {4}{15}\epsilon^2\frac nN} {1 + \frac {2}{3N}}\right].$$
The worst case both for the probability and for the number of matches in the above bound occurs when $N_l=N$. Thus,
$$\pr{\text{ total number of matches } \ge \frac{2P(t) (1-\epsilon)}{5N}} \ge 1 - N \cdot \exp\left[-\frac{\frac {4}{15}\epsilon^2\frac nN} {1 + \frac {2}{3N}}\right].$$
Recall that by assumption, $\frac{3}{2}nN < P(t) \le \frac{3}{2}nN+n$.
Let $\delta =\frac{3}{2}nN+n-P(t)$; so $0\le \delta < n$.
We now upper bound $P(t+1)$.
$$P(t+1)\le \frac{3}{2}nN+n-\delta+n- 2 \cdot(\text{number of matches in step } t+1).$$
Thus,
\begin{align*}
P(t+1) &\le \frac{3}{2}nN+n-\delta+n- \frac{4(\frac{3}{2}nN+n-\delta)(1-\epsilon)}{5N}\\
& \le (\frac{3}{2}nN+n)+n-\frac{6}{5}n(1-\epsilon)-\delta+\frac{4\delta(1-\epsilon)}{5N}\\
& \le (\frac{3}{2}nN+n)-n\left(\frac{1}{5}-\frac{6}{5}\epsilon\right)\\
& \le \frac{3}{2}nN+n,
\end{align*}
if $\epsilon \le \tfrac 16$.
\smallskip
Thus,
$$\pr{P(t+1) \le \frac{3}{2}nN+n} \ge 1 - N \cdot \exp\left[-\frac{\frac {4}{15}\epsilon^2\frac nN} {1 + \frac {2}{3N}}\right].$$
We set $\epsilon=\Big[\ln(n^{2c+1}N)\cdot \frac{15N}{4n}\cdot \left(1 + \frac {2}{3N}\right)\Big]^{1/2}$.
Thus it suffices to have $\Big[\ln(n^{2c+1} N)\cdot \frac{15N}{4n}\cdot \left(1 + \frac {2}{3N}\right)\Big]^{1/2} \le \frac{1}{6}$.
As $N \le \frac{3}{2}\sqrt{T}$, it suffices that $\Big[\ln[n^{2c+1}(\frac{3}{2}\sqrt{T})]\cdot \frac{3\sqrt{T}}{2n}\cdot \left(1 + \frac {2}{3N}\right)\Big]^{1/2}\le \frac{1}{6}$. With this choice of $\epsilon$,
$$\pr{P(t+1) \le \frac{3}{2}nN+n} \ge 1-N\cdot \frac{1}{n^{2c+1} N}.$$
So the upper bound is maintained with probability at least $1-\frac{1}{n^{2c+1}}$, for any constant $c\ge 1$, as long as $T\ge 400$ and
$\Big[\ln[n^{2c+1}(\frac{3}{2}\sqrt{T})]\cdot \frac{3\sqrt{T}}{2n}\cdot \left(1 + \frac {2}{3N}\right)\Big]^{1/2}\le \frac{1}{6}.$
Next, we confirm that $4n^2/9\ge T\ge 400$ and $n/\ln n\ge 56(2c+2)\sqrt{T}$ suffice. Note that $N\ge \sqrt{T}$, and as $N$ is integral, therefore $N\ge 20$.
Thus,
\begin{align*}
\ln[n^{2c+1}(\frac{3}{2}\sqrt{T})]\cdot \frac{3\sqrt{T}}{2n}\cdot \left(1 + \frac {2}{3N}\right)
& \leq (2c+2)\ln n\frac{1.55\sqrt{T}}{n} \le\frac{1.55}{56}\le \frac {1}{36},
\end{align*}
as required.
}
\end{proof}
\subsection{Upper Bound on the Size of a Type 1 Strip}
\label{appn::type1_upper}
\begin{proof} (Of Theorem~\ref{type1_upper})~
Consider any Type $1$ strip $s$. For any two points $(v,t_1)$ and $(v,t_2)$ in $s$ which have the same value $v$, we have $|t_2-t_1|\leq \frac{\sqrt{T}}{2}$.
Let $s'$ be the strip immediately to the right of $s$.
We are going to lower bound the number of matches in time step $t$. Let's consider the agents who will be in strip $s$ at time $t+1$. They will all enter $s$ during a length $\sqrt{T}/2$ time interval ending at time $t+1$.
Let $P_{t-\sqrt{T}+1}$ be the set of agents in strip $s'$ at the start of step $t-\sqrt{T}+1$.
By the inductive hypothesis, applied to $s'$ at time $t-\sqrt{T}$, we know that $|P_{t-\sqrt{T}+1}| \le dn$.
We are going to track the subset of these agents who remain
in the system after each step of matches, for the next $\sqrt{T}$ steps, along with the new agents who join the diagonals used
by this subset of agents.
Define $S_{t-\sqrt{T}+i}$ to be the rightmost $\sqrt{T}+1 -i$ diagonals in $s'$ plus the leftmost $i-1$ diagonals in $s$,
for $1\le i \le \sqrt{T}+1$.
Let $P_{t-\sqrt{T}+i}$ be the population occupying $S_{t-\sqrt{T}+i}$
at the start of step $t-\sqrt{T}+i$.
Then $P_{t-\sqrt{T}+i+1}$ is obtained from $P_{t-\sqrt{T}+i}$ by removing matched agents and then adding the new agents
for the diagonals in $S_{t-\sqrt{T}+i+1}$.
Our analysis will show that, with high probability,
for each of these $\sqrt T$ steps,
the number of matches is at least the number of new agents.
This implies the upper bound on the strip population
continues to hold.
By means of a Chernoff bound, we observe that the number of new agents per step can be bounded with high probability as follows:
\begin{equation}
\label{eqn::type1_entering}
\begin{aligned}
\pr{\text{\# new agents } \leq \frac{n}{\sqrt{T}}(1+\delta)} \ge 1-e^{-\frac{n\delta^2}{3\sqrt{T}}}.
\end{aligned}
\end{equation}
Let $\delta = \sqrt{\frac{3\sqrt{T}}{n} \ln (N n^{2c + 1})}$.
As $n\ge 60 T (2c+2) \ln n$, $\delta \leq \frac{1}{20}$, which yields
\begin{align}
\label{eqn::type-one-new-agent-bound}
\operatorname{Pr}\Big[\text{\# new agents} \leq \frac{41n}{40\sqrt{T}} =1.025\frac{n}{\sqrt{T}}\Big] \ge 1-\frac{1}{N n^{2c + 1}}.
\end{align}
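As a hedged numeric check (the sample $c$, $T$ and $n$ below are assumptions, not values from the text), the constraint $n \ge 60T(2c+2)\ln n$ indeed forces $\delta \le 1/20$:

```python
import math

# Assumed sample parameters: c = 1, T = 676; pick n just satisfying
# n >= 60*T*(2c+2)*ln(n).
c, T = 1, 676
n = 2.5e6
assert n >= 60 * T * (2 * c + 2) * math.log(n)

N = 5 * math.sqrt(T) / 4  # N <= (5/4) sqrt(T)
delta = math.sqrt(3 * math.sqrt(T) / n * math.log(N * n ** (2 * c + 1)))
assert delta <= 1 / 20    # so 1 + delta <= 41/40, as claimed
```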
By \eqref{eqn::upper_and_lower_TotBound},
the maximum of the number of men and number of women in the market is at most $n\sqrt{T}$.
\hide{
Now, we will bound the number of agents matched in a single step. We assume that at the start of the round, $|P| \leq d n$.
We need to compute lower bounds on the match rates.
For this, we need the upper bound on the total population, from Theorem~\ref{thm::total_size_upper_bound},
of $(3/2)nN+n\le (15/8)n\sqrt{T} +n\le (751/400)n\sqrt{T}$, as by
Constraint~\ref{eqn::constraints}, $n\ge T\ge 400$. Therefore, the number of men and the number of women in the system is at most $\frac{1}{2} ((751/400)n\sqrt{T} + (288 / T) (n \sqrt{T}) + n N / (25 \sqrt{T})) \le 13\sqrt{T}/10$.
The second term is from the last Type $2$ strip. $T \geq 400$ and $N \leq \frac{5}{4} \sqrt{T}$ are used in the last inequality.
}
Let $P_{t'',s}$ and $P_{t'',s'}$ be the portions of population $P_{t''}$ at time $t''$ in strips $s$ and $s'$, resp., for $t+1-\sqrt{T} \le t''\le t$.
By Lemma~\ref{lem::upper_strip_tech}, the matches remove at least the following number of people from $P_{t'',s}$:
\begin{align*}
\frac{|P_{t'',s}|^2/2 - (n / 25 \sqrt{T})^2 / 2}{n \sqrt{T}} = \frac{ |P_{t'',s}|^2 - n^2 / 625 T}{2 n \sqrt{T}}.
\end{align*}
A similar bound applies to the matches involving
$P_{t'',s'}$.
To minimize the sum of the terms $|P_{t'',s}|^2/2n\sqrt{T}$ and $|P_{t'',s'}|^2/2n\sqrt{T}$,
we should make $|P_{t'',s}|$ and $|P_{t'',s'}|$ equal.
Thus the expected number of matches of population $P_{t''}$ is at least
\begin{align}
\label{eqn::type1-matches-min}
\frac{|P_{t''}|^2} {4n\sqrt{T}} - \frac{ n}{625 T \sqrt{T}}.
\end{align}
Next, we want to obtain a high probability bound.
There are four sets of people, resp.\ the men and women in each of $P_{t'',s}$ and $P_{t'',s'}$. Let $\mu$ be the number of matches involving one of these sets. If $E[\mu] \geq \frac{\alpha n}{\sqrt{T}}$, then
\begin{align*}
\Pr\big[\mu \ge E[\mu](1-\epsilon) \big] \ge 1 - e^{-(E[\mu] \epsilon^2)/2} \ge 1 - e^{-\epsilon^2 \alpha n / (2 \sqrt{T})};
\end{align*}
letting $\epsilon = \sqrt{\frac{2 \sqrt{T}}{\alpha n} \ln (4 T n^{2c + 1})}$ yields $\Pr\big[\mu \ge E[\mu](1-\epsilon) \big] \ge 1 - \frac{1}{4 T n^{2c + 1}}$.
Otherwise, $E[\mu] \leq \frac{\alpha n}{\sqrt{T}}$, and
\begin{align*}
\Pr\big[\mu \ge E[\mu] - \theta \big] \ge 1 - e^{-(\theta^2)/(2 E[\mu])} \ge 1 - e^{-(\sqrt{T} \theta^2) /(2 \alpha n)};
\end{align*}
setting $\theta = \sqrt{\frac{2\alpha n}{\sqrt{T}} \ln (4 T n^{2c + 1})} = \alpha \epsilon \frac{n}{\sqrt{T}}$ yields $\Pr\Big[\mu \ge E[\mu] - \theta \Big] \ge 1 - \frac{1}{4 T n^{2c + 1}}$.
Recall \eqref{eqn::type-one-new-agent-bound}, the high probability bound that the number of new agents is at most $1.025 \frac{n}{\sqrt{T}}$.
By \eqref{eqn::type1-matches-min}, the number of people matched is at least $(1 - \epsilon)\left[\frac{|P_{t''}|^2} {4n\sqrt{T}} - \frac{ n}{625 T \sqrt{T}}\right] - 4 \theta$.
Recall that $|P_{t''}| \leq d n$; we let $x = d n - |P_{t''}|$. Then, the number of people left is at most:
\begin{align*}
d n - x - \left[(1 - \epsilon)\left[\frac{(d n - x)^2} {4n\sqrt{T}} - \frac{ n}{625 T \sqrt{T}}\right] - 4 \theta \right] \leq dn - \left[(1 - \epsilon)\left[\frac{(d n)^2} {4n\sqrt{T}} - \frac{ n}{625 T \sqrt{T}}\right] - 4 \theta \right],
\end{align*}
if $1 \geq (1 - \epsilon) d / 2\sqrt{T}$. This number is upper bounded by
$d n - (1 - \epsilon) (\frac{ d^2}{4} - \frac{1}{422,500}) \frac{n}{\sqrt{T}} + 4 \alpha \epsilon \frac{n}{\sqrt{T}}$ as $T \geq 676$ and $\theta = \alpha \epsilon \frac{n}{\sqrt{T}}$.
Let $d = 2.6$ and $\alpha = \frac{3}{16}$. Also, $\epsilon \leq \frac{1}{10}$ as $n \ge 27 (2c + 2) T\ln n$ and $676 \le T \leq n$.
A final calculation shows that $(1 - \epsilon) (\frac{ d^2}{4} - \frac{1}{422,500}) \frac{n}{\sqrt{T}} - 4 \alpha \epsilon \frac{n}{\sqrt{T}}$ is at least $1.025 \frac{n}{\sqrt{T}}$, demonstrating the result.
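The final calculation can be confirmed numerically (a sketch only; $\epsilon = 1/10$ is the worst case here since the expression is decreasing in $\epsilon$):

```python
# Verify (1 - eps)*(d^2/4 - 1/422500) - 4*alpha*eps >= 1.025,
# with d = 2.6, alpha = 3/16 and the worst case eps = 1/10.
d, alpha, eps = 2.6, 3 / 16, 1 / 10
value = (1 - eps) * (d * d / 4 - 1 / 422500) - 4 * alpha * eps
assert value >= 1.025  # matches the 1.025*n/sqrt(T) bound on new agents
```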
\end{proof}
\subsection{Upper Bound on the Size of a Type 2 Strip}
\label{appn::type2_upper}
\begin{proof} (Of Theorem~\ref{type2_upper})~
Consider any Type $2$ strip $s$. If $s$ is the topmost Type $2$ strip, clearly we can upper bound its size by twice the bound on the size of a Type $1$ strip given in Theorem~\ref{type1_upper}. In addition, if $s$ is the Type $2$ strip next to the topmost Type $2$ strip, then the size of strip $s$ is less than that of the topmost Type $2$ strip at time $t + 1 - \sqrt{T}$, which completes the proof for this strip too.
We now assume that $s$ has at least two Type $2$ strips above it. Let $s'$ be the strip immediately above $s$, and let $h$ denote the height of $s$. Then the height of $s'$ is $h/2$ and $h \geq \sqrt{T}$.
Let's consider the agents who will be in strip $s$ at time $t+1$. They will all enter $s$ during a length $h$ time interval ending at time $t+1$. They can be partitioned into two sets as follows:
\begin{itemize}
\item $Y_{t+1} = \{\text{agents that will have spent less than $h/2$ time steps in strip $s$ by time $t+1$}\}$.
\item $O_{t+1} =\{\text{agents that will have spent at least $h/2$ time steps in strip $s$ by time $t+1$}\}$.
\end{itemize}
The agents in $Y_{t+1}$ were all present at time $t'=t+1-h/2$ as part of the population of strip $s'$ at that time.
By the inductive hypothesis, applied to $s'$ at time $t'$, we know that there were at
most $2 \textnormal{\textsl{g}} n\sqrt{T}/h$ agents in $s'$ at that time.
Let $P_y$ denote this population.
The agents in $O_{t+1}$ were all present at time $t'=t+1-h$ as part of the population of strip $s'$ at that time.
By the inductive hypothesis, applied to $s'$ at time $t'$, we know that there were at
most $2\textnormal{\textsl{g}} n\sqrt{T}/h$ agents in $s'$ at that time.
Let $P_o$ denote this population.
Let $P^{y}_{t'',s}$ and $P^y_{t'',s'}$ be the remainder of population $P_y$ at time $t''$ in strips $s$ and $s'$, resp., for $t+1-h/2 \le t''\le t$.
Also, let $P^y_{t''} = P^y_{t'',s} \cup P^y_{t'',s'}$. Similarly, define $P^{o}_{t'',s}$, $P^o_{t'',s'}$ and $P^o_{t''}$, for $t+1-h \le t''\le t$.
\hide{We will now show that both populations $P_y$ and $P_o$ must have shrunk substantially over $h/2$ and $h$ steps respectively.}
To this end, we need to compute lower bounds on the match rates.
By \eqref{eqn::upper_and_lower_TotBound},
the maximum of the number of men and number of women in the market is at most $n\sqrt{T}$.
\hide{
We recall the upper bound on the total population, from Theorem~\ref{thm::total_size_upper_bound},
of $(3/2)nN+n\le (15/8)n\sqrt{T} +n\le (751/400)n\sqrt{T}$, as by
Constraint~\ref{eqn::constraints}, $n\ge T\ge 400$. Therefore, the number of men and number of women in the market should not exceed $\frac{1}{2} ((751/400)n\sqrt{T} + (288 / T) (n \sqrt{T}) + n N / (25 \sqrt{T})) \leq 13 n\sqrt{T}/10 $. The second term is from the last type $2$ strip. $T \geq 400$ and $N \leq \frac{5}{4} \sqrt{T}$ is used in the last inequality.
}
We divide the period $[t+1-h,t+1)$ into two phases; Phase 1, $[t+1-h,t+1-h/2)$, and Phase $2$, $[t+1-h/2,t+1)$. We will show that the size of $P^y_{t''}$ at the end of Phase $1$ is at most $\textnormal{\textsl{g}}_1n\sqrt{T}/h$. We will specify $\textnormal{\textsl{g}}_1$ later. Then, at the start of Phase $2$ the size of $P^o\cup P^y$ is at most $2\textnormal{\textsl{g}} n\sqrt{T}/h + \textnormal{\textsl{g}}_1n\sqrt{T}/h$. We claim that after Phase $2$, the size of $P^o\cup P^y$ is at most $\textnormal{\textsl{g}} n\sqrt{T}/h$.
We analyze Phase $1$ first.
Consider the set $P^o_{t'',s}$ and time $t'' \in [t + 1 - h, t +1 - h /2)$.
By Lemma~\ref{lem::upper_strip_tech}, at time $t''$ these matches remove, in expectation, at least the following number of people from $P^o_{t'',s}$:
\begin{equation}
\label{eqn::type2_matches}
\begin{aligned}
\frac{ |P^o_{t'',s}|^2 - n^2 / 625 T}{2 n \sqrt{T}}.
\end{aligned}
\end{equation}
A similar bound holds for the set $P^o_{t'',s'}$.
Notice that the total expected number of matches from $P^o_{t''}$ is minimized if $|P^o_{t'',s}|=|P^o_{t'',s'}|$. Thus we obtain that the size of $P^o_{t''}$ reduces, in expectation, by at least
\begin{equation}
\label{eqn::type2_old_reduction_late_phase}
\begin{aligned}
\frac{|P^o_{t''}|^2} {4 n\sqrt{T}}- \frac{n}{625 T\sqrt{T}}.
\end{aligned}
\end{equation}
As in the analysis for the Type $1$ strip, we then give a high probability bound.
We have four sets of people, the men and the women in the sets $P^o_{t'',s'}$ and $P^o_{t'',s}$, respectively. Let $\mu$ be the number of matches in one of these sets at time $t''$.
If $E[\mu]\geq \alpha n\sqrt{T}/h^2$, by Lemma \ref{lem::negative_dependence_two_sex},
\begin{equation*}
\begin{aligned}
\Pr\Big[\mu \ge E[\mu](1-\epsilon) \Big] \ge 1 - e^{-(E[\mu] \epsilon^2)/2} \ge 1 - e^{-\epsilon^2 \alpha n \sqrt{T}/2h^2}.
\end{aligned}
\end{equation*}
Setting $\epsilon=\bigg[\frac{2h^2}{\alpha n\sqrt{T}}\ln(T (\log_2 \sqrt{T} + 1) n^{2c+1})\bigg]^\frac{1}{2}$ yields $\Pr\Big[\mu \ge E[\mu](1-\epsilon) \Big] \ge 1 - \frac{1}{T (\log_2 \sqrt{T} + 1) n^{2c+1}}$.
Otherwise, $E[\mu]\leq \alpha n\sqrt{T}/h^2$, so by Lemma \ref{lem::negative_dependence_two_sex},
\begin{equation*}
\begin{aligned}
\Pr\Big[\mu \ge E[\mu] - \theta \Big] \ge 1 - e^{-(\theta^2)/(2 E[\mu])} \ge 1 - e^{-(\theta^2 h^2) /(2 \alpha n\sqrt{T})}.
\end{aligned}
\end{equation*}
Setting $\theta=\bigg[\frac{2\alpha n\sqrt{T}}{h^2}\ln(T (\log_2 \sqrt{T} + 1) n^{2c+1})\bigg]^\frac{1}{2} = \epsilon \alpha \frac{n \sqrt{T}}{h^2}$ yields $\Pr\Big[\mu \ge E[\mu]-\theta \Big] \ge 1 - \frac{1}{T (\log_2 \sqrt{T} + 1) n^{2c+1}}$.
For each of the four sets we can use one of the two bounds above.
We can set $\alpha=0.1$ and $\epsilon\leq 0.1$ by imposing the constraints $c\geq 1$, $400\leq T\leq n$, and $n\geq 125(2c+2.5)T\sqrt{T}\ln n$, which are provided by the constraints in \eqref{eqn::constraints}, together with $h \leq T/4$ and $\log_2 \sqrt{T} + 1\leq \sqrt{T}/4$.
Let $\textnormal{\textsl{g}}(\cdot)$ be a real-valued function.
Suppose the size of the set $P^o_{t''}$ at round $t''$ is smaller than $\textnormal{\textsl{g}}(t'') \cdot n \sqrt{T} / h$ and let $X = \textnormal{\textsl{g}}(t'') \cdot n \sqrt{T} / h - |P^o_{t''}|$. If $(1 - \epsilon) \textnormal{\textsl{g}}(t'') / (2 h) \leq 1$,\footnote{Note that this is satisfied when $\textnormal{\textsl{g}}(t'') \leq 21.5$ and $h \geq \sqrt{T} \geq 26$.} then the size of the set $P^o_{t'' + 1}$ at round $t'' + 1$ is at most
\begin{align*}
&\textnormal{\textsl{g}}(t'') n \sqrt{T} / h - X - (1 - \epsilon) \left[\frac{(\textnormal{\textsl{g}}(t'') n \sqrt{T} / h - X)^2} {4 n\sqrt{T}}- \frac{ n}{625 T\sqrt{T}}\right] + 4 \epsilon \alpha \frac{n \sqrt{T}}{h^2} \\
&\leq \textnormal{\textsl{g}}(t'') n \sqrt{T} / h - (1 - \epsilon) \left[\frac{(\textnormal{\textsl{g}}(t'') n \sqrt{T} / h )^2} {4 n\sqrt{T}}- \frac{ n}{625 T\sqrt{T}}\right] + 4 \epsilon \alpha \frac{n \sqrt{T}}{h^2} \\
&\leq \textnormal{\textsl{g}}(t'') n \sqrt{T} / h - \frac{9\textnormal{\textsl{g}}(t'')^2 n \sqrt{T} } {40 h^2 } + \frac{9 n}{6250 T\sqrt{T}} + \frac{n \sqrt{T}}{25 h^2} \\
&\leq n \sqrt{T} / h \left[ \textnormal{\textsl{g}}(t'') - \frac{1}{h} \left(\frac{9\textnormal{\textsl{g}}(t'')^2 } {40} - 0.041\right)\right].
\end{align*}
The last inequality uses the constraint that $h \leq T / 4$.
Let $\textnormal{\textsl{g}}(t + 1 - h) = 2\textnormal{\textsl{g}}$ and $\textnormal{\textsl{g}}(t''+ 1) = \left[ \textnormal{\textsl{g}}(t'') - \frac{1}{h} \left(\frac{9\textnormal{\textsl{g}}(t'')^2 } {40} - 0.041\right)\right]$ for $t'' \in [t + 1 - h, t + 1 - h/2)$; then we have shown that the size of the set $P^o_{t''}$ at round $t + 1 - h/2$ is at most $\textnormal{\textsl{g}}(t+1 - h/2) \cdot n \sqrt{T} / h$. One way to bound $\textnormal{\textsl{g}}(\cdot)$ is via a differential equation.
Consider the differential equation $\texttt{d} \bar{\textnormal{\textsl{g}}} / \texttt{d} t = - \frac{1}{h} \left(\frac{9\bar{\textnormal{\textsl{g}}}^2 } {40} - 0.041\right)$ and $\bar{\textnormal{\textsl{g}}}(t + 1 - h) = 2\textnormal{\textsl{g}}$. Note that $\bar{\textnormal{\textsl{g}}}(t'') \geq \textnormal{\textsl{g}}(t'')$ for all $t'' \in [t + 1 - h, t+ 1 - h/2]$.\footnote{Suppose it is not true. Since $\bar{\textnormal{\textsl{g}}}(t + 1 - h) = \textnormal{\textsl{g}}(t + 1 - h) = 2\textnormal{\textsl{g}}$, there exists a $t'$ such that $\bar{\textnormal{\textsl{g}}}(t') \geq \textnormal{\textsl{g}}(t')$ and $\bar{\textnormal{\textsl{g}}}(t' + 1) < \textnormal{\textsl{g}}(t'+1)$. Then, there exists a $t'' \in [t', t' + 1)$ such that $\bar{\textnormal{\textsl{g}}}(t'') = \textnormal{\textsl{g}}(t')$. After time $t''$, $\texttt{d} \bar{\textnormal{\textsl{g}}}(t'') / \texttt{d} t \geq - \frac{1}{h} \left(\frac{9\bar{\textnormal{\textsl{g}}}(t'')^2 } {40} - 0.041\right) = \textnormal{\textsl{g}}(t' + 1) - \textnormal{\textsl{g}}(t')$. Therefore, $\bar{\textnormal{\textsl{g}}}(t'+1) = \bar{\textnormal{\textsl{g}}}(t'') + \int_{s = t''}^{t'+1} \texttt{d} \bar{\textnormal{\textsl{g}}}(s) \geq \textnormal{\textsl{g}}(t') + \int_{s = t''}^{t'+1} [\textnormal{\textsl{g}}(t' + 1) - \textnormal{\textsl{g}}(t')] \texttt{d} s \geq \textnormal{\textsl{g}}(t' + 1)$, which contradicts the assumption.}
Therefore, in order to prove $\textnormal{\textsl{g}}(t+1 - h/2) \leq \textnormal{\textsl{g}}_1$, we only need $\bar{\textnormal{\textsl{g}}}(t+1 - h/2) \leq \textnormal{\textsl{g}}_1$. We look at the total time for $\bar{\textnormal{\textsl{g}}}$ to reduce from the value $\textnormal{\textsl{g}}(t+1 - h) = 2\textnormal{\textsl{g}}$ to $\textnormal{\textsl{g}}_1$: $\texttt{d} t = - h \texttt{d} \bar{\textnormal{\textsl{g}}} / (\frac{9 \bar{\textnormal{\textsl{g}}}^2}{40} - 0.041) $. Therefore, the total time is $\int_{\bar{\textnormal{\textsl{g}}} = \textnormal{\textsl{g}}_1}^{2\textnormal{\textsl{g}}} h \texttt{d} \bar{\textnormal{\textsl{g}}} / (\frac{9 \bar{\textnormal{\textsl{g}}}^2}{40} - 0.041) \leq \int_{\bar{\textnormal{\textsl{g}}} = \textnormal{\textsl{g}}_1}^{2\textnormal{\textsl{g}}} h \texttt{d} \bar{\textnormal{\textsl{g}}} / (\frac{9 \bar{\textnormal{\textsl{g}}}^2}{40} - 0.041 \frac{\bar{\textnormal{\textsl{g}}}^2}{\textnormal{\textsl{g}}_1^2}) = (h / (\frac{9 }{40} - \frac{0.041}{\textnormal{\textsl{g}}_1^2}))(1/\textnormal{\textsl{g}}_1 - 1 / (2\textnormal{\textsl{g}}))$. To have this be at most $h/2$ (the total duration of Phase $1$), we only need $2(1/\textnormal{\textsl{g}}_1 - 1 / (2\textnormal{\textsl{g}})) \leq 9/40 - 0.041/(\textnormal{\textsl{g}}_1)^2$, which is satisfied by letting $\textnormal{\textsl{g}} = 7.5$ and $\textnormal{\textsl{g}}_1 = 6.5$.
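The Phase $1$ bookkeeping can be checked numerically by iterating the recurrence itself; the choice $h = 169 = T/4$ with $T = 676$ is an illustrative assumption, not a value fixed by the proof:

```python
# Simulate g(t''+1) = g(t'') - (1/h)*((9/40)*g(t'')^2 - 0.041)
# for h/2 steps starting from 2g = 15 (illustrative h = T/4 = 169).
h = 169
g = 15.0
for _ in range(h // 2):
    g = g - (1 / h) * ((9 / 40) * g * g - 0.041)
assert g <= 6.5  # ends below g_1 = 6.5

# Closed-form sufficient condition used in the text:
# 2*(1/g1 - 1/(2g)) <= 9/40 - 0.041/g1^2 with g = 7.5, g1 = 6.5.
g0, g1 = 7.5, 6.5
assert 2 * (1 / g1 - 1 / (2 * g0)) <= 9 / 40 - 0.041 / g1 ** 2
```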
We consider Phase $2$ next. Consider the set $P^o_{t'',s} \cup P^y_{t'',s}$ and time $t'' \in [t + 1 - h/2, t +1)$. The analysis is exactly the same as that for Phase $1$.
By Lemma~\ref{lem::upper_strip_tech}, these matches remove, in expectation, at least the following number of people from $P^o_{t'',s} \cup P^y_{t'',s}$:
\begin{equation}
\begin{aligned}
\frac{ |P^o_{t'',s} \cup P^y_{t'',s}|^2 - n^2 / 625 T}{2 n \sqrt{T}}
\end{aligned}
\end{equation}
Similar bounds will hold for the set $P^o_{t'',s'} \cup P^y_{t'',s'}$. Thus the size of $P^o_{t''} \cup P^y_{t''}$ is reduced, in expectation, by at least
\begin{equation}
\begin{aligned}
\frac{|P^o_{t''} \cup P^y_{t''}|^2} {4 n\sqrt{T}}- \frac{ n}{625 T\sqrt{T}}.
\end{aligned}
\end{equation}
Then, as in Phase $1$, suppose the size of the set $P^o_{t''} \cup P^y_{t''}$ at time $t''$ is smaller than $\textnormal{\textsl{g}}(t'') n \sqrt{T} / h$ and let $X = \textnormal{\textsl{g}}(t'') n \sqrt{T} / h - |P^o_{t''} \cup P^y_{t''}|$. If $(1 - \epsilon) \textnormal{\textsl{g}}(t'') / (2 h) \leq 1$, then the size of the set $P^o_{t''} \cup P^y_{t''}$ at round $t'' + 1$ is at most
\begin{align*}
n \sqrt{T} / h \left[ \textnormal{\textsl{g}}(t'') - \frac{1}{h} \left(\frac{9\textnormal{\textsl{g}}(t'')^2 } {40} - 0.041\right)\right].
\end{align*}
Let $\textnormal{\textsl{g}}(t + 1 - h/2) = 2\textnormal{\textsl{g}} + \textnormal{\textsl{g}}_1$ and $\textnormal{\textsl{g}}(t''+ 1) = \left[ \textnormal{\textsl{g}}(t'') - \frac{1}{h} \left(\frac{9\textnormal{\textsl{g}}(t'')^2 } {40} - 0.041\right)\right]$ for $t'' \in [t + 1 - h/2, t + 1)$; then we have shown that the size of the set $P^o_{t''} \cup P^y_{t''}$ at round $t + 1$ is at most $\textnormal{\textsl{g}}(t + 1) \cdot n \sqrt{T} / h$. We consider the same differential equation here, $\texttt{d} \bar{\textnormal{\textsl{g}}} / \texttt{d} t = - \frac{1}{h} \left(\frac{9\bar{\textnormal{\textsl{g}}}^2 } {40} - 0.041\right)$, with $\bar{\textnormal{\textsl{g}}}(t + 1 - h/2) = 2\textnormal{\textsl{g}} + \textnormal{\textsl{g}}_1$. Note that $\bar{\textnormal{\textsl{g}}}(t'') \geq \textnormal{\textsl{g}}(t'')$ for all $t'' \in [t + 1 - h/2, t+ 1]$.
Therefore, in order to prove $\textnormal{\textsl{g}}(t+1) \leq \textnormal{\textsl{g}}$, we only need $\bar{\textnormal{\textsl{g}}}(t+1) \leq \textnormal{\textsl{g}}$. We look at the total time for $\bar{\textnormal{\textsl{g}}}$ to reduce from the value $2\textnormal{\textsl{g}} + \textnormal{\textsl{g}}_1$ to $\textnormal{\textsl{g}}$: $\texttt{d} t = - h \texttt{d} \bar{\textnormal{\textsl{g}}} / (\frac{9 \bar{\textnormal{\textsl{g}}}^2}{40} - 0.041) $. Therefore, the total time is $\int_{\bar{\textnormal{\textsl{g}}} = \textnormal{\textsl{g}}}^{2\textnormal{\textsl{g}} + \textnormal{\textsl{g}}_1} h \cdot \texttt{d} \bar{\textnormal{\textsl{g}}} / (\frac{9 \bar{\textnormal{\textsl{g}}}^2}{40} - 0.041) \leq \int_{\bar{\textnormal{\textsl{g}}} = \textnormal{\textsl{g}}}^{2\textnormal{\textsl{g}} + \textnormal{\textsl{g}}_1} h \cdot \texttt{d} \bar{\textnormal{\textsl{g}}} / (\frac{9 \bar{\textnormal{\textsl{g}}}^2}{40} - 0.041 \frac{\bar{\textnormal{\textsl{g}}}^2}{\textnormal{\textsl{g}}^2}) = (h / (\frac{9 }{40} - \frac{0.041}{\textnormal{\textsl{g}}^2}))(1/\textnormal{\textsl{g}} - 1 / (2\textnormal{\textsl{g}} + \textnormal{\textsl{g}}_1))$. To have this be at most $h/2$ (the total duration of Phase $2$), we only need $2(1/\textnormal{\textsl{g}} - 1 / (2\textnormal{\textsl{g}} + \textnormal{\textsl{g}}_1)) \leq 9/40 - 0.041/(\textnormal{\textsl{g}})^2$, which is also satisfied by letting $\textnormal{\textsl{g}} = 7.5$ and $\textnormal{\textsl{g}}_1 = 6.5$.
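The analogous Phase $2$ check, again with the illustrative assumption $h = 169 = T/4$:

```python
# Phase 2: start from 2g + g1 = 21.5 and iterate the same recurrence
# for h/2 steps; the population factor should drop back below g = 7.5.
h = 169
g = 21.5
for _ in range(h // 2):
    g = g - (1 / h) * ((9 / 40) * g * g - 0.041)
assert g <= 7.5

# Sufficient condition: 2*(1/g - 1/(2g + g1)) <= 9/40 - 0.041/g^2.
g0, g1 = 7.5, 6.5
assert 2 * (1 / g0 - 1 / (2 * g0 + g1)) <= 9 / 40 - 0.041 / g0 ** 2
```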
Finally, as there are $\log_2 \sqrt{T} + 1$ Type $2$ strips, and, for each Type $2$ strip, we consider $h \leq T/4$ steps, the success probability is at least $1 - \frac{T/4 \cdot 4 \cdot (\log_2 \sqrt{T} + 1)}{T(\log_2 \sqrt{T} + 1) n^{2c+1}} = 1 - \frac{1}{n^{2c + 1}}$.
\end{proof}
\begin{proof} (Of Theorem~\ref{thm::last_strip_size_upper_bound}.)~
Let's call the bottommost Type $2$ strip $s$ and the Type $2$ strip immediately above it strip $s'$.
Any agent in the population in $s$ at time $t+1$ must belong to one of the following categories:
\begin{itemize}
\item The agent was present in $s'$ at time $t+1-T/4$.
\item The agent was present in $s'$ at time $t+1-T/2$.
\end{itemize}
However, by our inductive hypothesis $H(t)$, we know that at all time steps before $t+1$, the size of $s'$ was always less than $\frac{7.5n\sqrt{T}}{\text{height of strip $s'$}}\leq 4\cdot 7.5 n/\sqrt{T}.$
Thus, the size of $s$ at time $t+1$ is bounded by $8\cdot 7.5n/\sqrt{T} = 60n/\sqrt{T}$, which concludes the proof.
\end{proof}
\subsection{Bound on the Imbalance}
\label{appn::imbalance}
\begin{proof} (Of Claim~\ref{clm::update-to-I}).
The expected number of matches at time $\tau$ between men in diagonal $d_i$ and women in diagonal $d_j$ is
$$\frac{(2A_i+I_i+X_i)(2A_j-I_j-X_j)}{4R}.$$
Similarly, the expected number of women in $d_i$ that match with men in $d_j$ is
$$\frac{(2A_i-I_i-X_i)(2A_j+I_j+X_j)}{4R}.$$
Thus $I'(d_i,\tau)$ is given by:
\begin{align*}
& I_i+X_i - \sum_{d_j \in s}\Big[ \frac{(2A_i+I_i+X_i)(2A_j-I_j-X_j)}{4R}-\frac{(2A_i-I_i-X_i)(2A_j+I_j+X_j)}{4R}\Big]\\
&~~= I_i+X_i - \sum_{d_j \in s} \Big[ X_i\frac{(2A_j-I_j-X_j)}{2R}-X_j\frac{(2A_i-I_i-X_i)}{2R}+I_i\frac{(2A_j-I_j-X_j)}{2R}-I_j\frac{(2A_i-I_i-X_i)}{2R}\Big].
\end{align*}
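The bracketed difference of products can be sanity-checked numerically: expanding, $(2A_i+I_i+X_i)(2A_j-I_j-X_j)-(2A_i-I_i-X_i)(2A_j+I_j+X_j)$ collapses to $4\big[A_j(I_i+X_i)-A_i(I_j+X_j)\big]$. The snippet below is a check of this algebra only, using random values:

```python
import random

# Numerically verify the product expansion used above:
# (2Ai+Ii+Xi)(2Aj-Ij-Xj) - (2Ai-Ii-Xi)(2Aj+Ij+Xj)
#   = 4*(Aj*(Ii+Xi) - Ai*(Ij+Xj))
random.seed(0)
for _ in range(1000):
    Ai, Aj, Ii, Ij, Xi, Xj = (random.uniform(-10, 10) for _ in range(6))
    lhs = (2*Ai + Ii + Xi) * (2*Aj - Ij - Xj) - (2*Ai - Ii - Xi) * (2*Aj + Ij + Xj)
    rhs = 4 * (Aj * (Ii + Xi) - Ai * (Ij + Xj))
    assert abs(lhs - rhs) < 1e-9
```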
\end{proof}
\begin{proof} (Of Claim~\ref{clm::bound-on-Y-one-strip}.)~
We first give a high probability bound on $\sum Y(d_i, \tau)$.
Let $m(d_i, \tau)$ be the number of men entering the market on diagonal $d_i$ at time $\tau$. Note that $d_T$ is the last Type $1$ diagonal. Let $d_r$ be a diagonal in Type $1$ strip $s$; then,
\begin{align*}
\operatorname{Pr}\bigg[\Big|\sum_{r \leq i \leq T} Y(d_i, \tau)\Big| > \Delta \bigg] = \operatorname{Pr}\bigg[\Big|\sum_{r \leq i \leq T} m(d_i, \tau) - (T-r+1)\cdot \frac{n}{2T}\Big| > \Delta / 2\bigg].
\end{align*}
Note that
\begin{align*}
&\operatorname{Pr}\bigg[\Big|\sum_{r \leq i \leq T} m(d_i, \tau) - (T-r+1)\cdot \frac{n}{2T}\Big| > \Delta / 2\bigg]\\
&\leq 2 \exp\Big[-\Delta^2 / \Big(3 \cdot (T-r+1)\cdot \frac{n}{2T}\Big)\Big] \leq 2 \exp \Big[-\Delta^2 / \Big(\frac{3n}{2}\Big)\Big].
\end{align*}
The last inequality follows as $T - r + 1 \leq T$. Letting $\Delta = \sqrt{\frac{3n}{2} \ln \left(2 T n^{3c + 1}\right)}$ yields
\begin{align*}
\operatorname{Pr}\bigg[\Big|\sum_{r \leq i \leq T} Y(d_i, \tau)\Big| > \sqrt{\frac{3n}{2} \ln \left(2 T n^{3c + 1}\right)} \bigg] \leq \frac{1}{T n^{3c + 1}}.
\end{align*}
Therefore, with probability at least $1-\frac{1}{n^{3c+1}}$, for all $r$ such that $d_r$ is a diagonal in a Type $1$ strip, $\left|\sum_{r \leq i \leq T} Y(d_i, \tau)\right| \leq \sqrt{\frac{3n}{2} \ln \left(2 T n^{3c + 1}\right)}$.
With this result in hand, we prove the claim as follows.
Let $d_{r(s)}$ be the rightmost (lowest index) diagonal in $s$ and $d_{l(s)}$ be the leftmost (highest index) diagonal in $s$. Let $w_i = \sum_{j\ge r(s)} Y(d_i,\tau,d_j,\tau')/Y(d_i,\tau)$.
Let's consider
$\sum_{d_i; j \ge r(s)} Y(d_i,\tau,d_j,\tau')
= \sum_{d_i} w_i\cdot Y(d_i,\tau)
$.
By Claim~\ref{clm::distr-of-X}, $w_i \le w_k$, for $i<k$. Thus,
\begin{align*}
\Big|\sum_{d_i; j \ge r(s)} Y(d_i,\tau,d_j,\tau')\Big|=
\Big|\sum_{i = 1}^{T} w_i\cdot Y(d_i,\tau)\Big|\leq w_T \max_{r \leq T}\Big| \sum_{r\le i \le T} Y(d_i,\tau)\Big| \leq \sqrt{\frac{3n}{2} \ln \left(2 T n^{3c + 1}\right)}.
\end{align*}
Finally,
\begin{align*}
\Big|\sum_{d_i; d_j\in s} Y(d_i,\tau,d_j,\tau') \Big| =
\Big| \sum_{d_i; j \ge r(s)} Y(d_i,\tau,d_j,\tau') -
\sum_{d_i; j \ge l(s)+1} Y(d_i,\tau,d_j,\tau') \Big| \le 2\sqrt{\frac{3n}{2} \ln \left(2 T n^{3c + 1}\right)},
\end{align*}
which completes the proof.
\end{proof}
\begin{proof} (Of Claim~\ref{clm::remain::type::2}.)~
We begin by bounding $\sum_{j: d_j \in s} (2A_j - I_j - X_j) / 4R$ for any Type $2$ strip $s$; let $H$ denote the height of $s$.
\begin{align}
\label{eqn::match-rate-bound-second}
\sum_{j: d_j \in s} \frac{(2A_j - I_j - X_j)}{4R} \leq \frac{1}{2} \frac{\frac{3.75n\sqrt{T}}{H}+n/50\sqrt{T}}{\frac{n\sqrt{T}}{6}}<\frac{23}{2H}\hspace*{0.2in}\text{(as $\sqrt{T}\ge 26$ by constraint~\ref{eqn::constraints})}.
\end{align}
Consider $X(d,\tau,d',\tau')$. If $d'$ is in a Type $2$ strip then by \eqref{eqn::match-rate-bound-second} at most $\frac{23}{2H}$ of it disperses to some location in the same strip and at least $1-\frac{23}{2H}$ of it moves down distance one. This implies that, within $H$ time, a Type $2$ strip loses at least $e^{-23/2}$ of the $X(d,\tau,d',\tau')$ that had been present within it at time $\tau'$. Let $K_2=e^{12}\ln 2$. Therefore, by time $\tau'+ K_2 H$ ($\leq \tau' + K_2 T / 4$) at least half of the $X(d,\tau,d',\tau')$ in a Type $2$ strip has moved out of the strip.
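These constants can be checked numerically for an illustrative case; the values $T = 676$ and $H = T/4$ below are assumptions, not fixed by the text:

```python
import math

# Illustrative parameters: T = 676, strip height H = T/4.
T = 676
H = T // 4
n = 10 ** 6  # any sufficiently large n works for this check

# Match-rate bound from the display above: stays below 23/(2H).
rate = 0.5 * ((3.75 * n * math.sqrt(T) / H + n / (50 * math.sqrt(T)))
              / (n * math.sqrt(T) / 6))
assert rate < 23 / (2 * H)

# For this H, a fraction >= (1 - 23/(2H))^H >= e^{-12} exits every H steps,
# so after K2 = e^12 * ln 2 rounds of H steps at most half the mass remains.
exit_frac = (1 - 23 / (2 * H)) ** H
assert exit_frac >= math.exp(-12)
remaining = (1 - math.exp(-12)) ** (math.exp(12) * math.log(2))
assert remaining <= 0.5
```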
We can now carry out the same kind of argument across the Type $2$ strips. Recall that after $2e^2(\ln 2) \sqrt{T}(\sqrt{T}+\log_{2}(2n^k))$ time there is at most a $\frac{1}{2n^k}$ fraction of $X(d,\tau)$ in the Type $1$ strips. We focus on the remaining
$1 - \frac{1}{2n^k}$ portion of $X(d,\tau)$, which has already entered the Type $2$ strips.
Number the Type $2$ strips from top to bottom\footnote{Our argument doesn't involve the last Type $2$ strip, so we will end at the second to last strip.}. Now consider $\gamma$ as a distribution of the rest of the $X(d,\tau)$ where $\gamma_i$ is the fraction of $X(d,\tau)$ in strip number $i$. Recall that there are
$\log_2(\sqrt{T})$ Type $2$ strips other than the bottom Type $2$ strip. We consider the worst case where all the remaining $(1 - \frac{1}{2 n^k}) \cdot X(d,\tau)$ starts out at the topmost Type $2$ strip. Define a potential function $\phi(\gamma)=\sum_{i=1}^{\log_2 \sqrt{T} + 1}\gamma_i\cdot 2^{(\log_2 \sqrt{T}) - i + 1}$. For the remaining $(1 - \frac{1}{2 n^k}) \cdot X(d,\tau)$, the initial potential is at most $\sqrt{T}$. Every $K_2 T / 4$ time steps, the potential is multiplied by a factor of at most $3/4$. Therefore, after $\frac{1}{\log_2 (4/3)} K_2T/4\log_{2}(2n^k\sqrt{T})$ time steps, the potential will have been reduced to at most $\frac{1}{2n^k}$.
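The potential computation is essentially an exact identity, which a quick check for sample values (the particular $T$, $n$, $k$ are assumptions) confirms:

```python
import math

# After m = ceil(log2(2 * n^k * sqrt(T)) / log2(4/3)) rounds, each of which
# multiplies the potential by at most 3/4, an initial potential of sqrt(T)
# drops to at most 1/(2 n^k).
T, n, k = 676, 1000, 3
m = math.ceil(math.log2(2 * n ** k * math.sqrt(T)) / math.log2(4 / 3))
assert (3 / 4) ** m * math.sqrt(T) <= 1 / (2 * n ** k)
```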
There is also a $\frac{1}{2n^k}$ fraction which might still be in the Type $1$ strips. Thus the fraction of $X(d,\tau)$ in any strip other than the bottommost Type 2 strip after $\frac{1}{\log_2 (4/3)} K_1\sqrt{T}(\sqrt{T}+\log_{2}(2n^k))+\frac{1}{\log_2 (4/3)} K_2T/4\log_{2}(2n^k\sqrt{T})$ time is at most $\frac{1}{n^k}$.
\end{proof}
\begin{proof} (Final Calculation in Theorem~\ref{thm::imbalance_bound})\\
As $\kappa = \frac{e^2 \ln 2}{\log_2 (4/3)} \sqrt{T}(\sqrt{T}+\log_{2}(2n^k)) + \frac{e^{12} \ln 2}{4 \log_2 (4/3)} T\log_{2}(2n^k\sqrt{T}) \leq 12.35 (T + \sqrt{T} + (c+4) \sqrt{T}\log_2 n ) + \frac{e^{12}}{2} (T + (c + 4) T \log_2 n + 0.5 T \log_2 T)$, the total imbalance is at most
\begin{align*}
\frac{15T}{n^3} + &\Big[12.35 (T + \sqrt{T} + (c+4) \sqrt{T}\log_2 n ) + \frac{e^{12}}{2} (T + (c + 4) T \log_2 n + 0.5 T \log_2 T) \Big] \cdot \\
&~~~~~~~~~~~~~\bigg(192\sqrt{ \frac{n\ln(4n^{3c+1} (T^2/32 + T/8) N)}{\sqrt{T}}} +2\sqrt{\frac{3n}{2} \ln \left(2 T n^{3c + 1}\right)} \bigg).
\end{align*} In order to make it smaller than $\frac{n}{25 \sqrt{T}}$, we only need that
\begin{align*}\frac{375 T \sqrt{T}}{n^3} + &\Big[309 (T + \sqrt{T} + (c+4) \sqrt{T}\log_2 n ) + \frac{25 e^{12}}{2} (T + (c + 4) T \log_2 n + 0.5 T \log_2 T)\Big] \cdot \\
&~~~~~~~~~~~~\bigg(192\sqrt{ \sqrt{T}\ln[4n^{3c+1} (T^2/32 + T/8) N]} +2\sqrt{\frac{3 T}{2} \ln \left(2 T n^{3c + 1}\right)} \bigg) \leq \sqrt{n}.
\end{align*}
As $375 T\sqrt{T} / n^3 \leq 0.0012 \sqrt{n}$ from the constraint $n \geq T \geq 676$, we need
\begin{align*} &\Big[309 (T + \sqrt{T} + (c+4) \sqrt{T}\log_2 n ) + \frac{25 e^{12}}{2} (T + (c + 4) T \log_2 n + 0.5 T \log_2 T)\Big] \cdot \\
&~~~~~~~~~~~~\bigg(192\sqrt{ \sqrt{T}\ln[4n^{3c+1} (T^2/32 + T/8) N]} +2\sqrt{\frac{3 T}{2} \ln \left(2 T n^{3c + 1}\right)} \bigg) \leq 0.9988\sqrt{n}. \numberthis \label{ineq::final::1}
\end{align*}
In addition, as $n \geq T \geq 676$,
\begin{align*}
&\Big[309 (T + \sqrt{T} + (c+4) \sqrt{T}\log_2 n ) + \frac{25 e^{12}}{2} (T + (c + 4) T \log_2 n + 0.5 T \log_2 T)\Big] \\
&~~~~~~~~~~~~~~~~~~\leq (86.61 + 12.876c + 57.62 e^{12} + 12.5 e^{12} c) T \log_2 n, \numberthis \label{ineq::final::2}
\end{align*}
and, as $n \geq T \geq 676$ and $n \geq N$,
\begin{align*}
\bigg(192\sqrt{ \sqrt{T}\ln[4n^{3c+1} (T^2/32 + T/8) N]} +2\sqrt{\frac{3 T}{2} \ln \left(2 T n^{3c + 1}\right)} \bigg) \leq 42 \sqrt{T (3c + 4) \ln n}. \numberthis \label{ineq::final::3}
\end{align*}
By inequalities \eqref{ineq::final::1}, \eqref{ineq::final::2}, and \eqref{ineq::final::3}, it suffices that
\begin{align*}
(86.61 + 12.876c + 57.62 e^{12} + 12.5 e^{12} c) T \log_2 n \cdot 42 \sqrt{T (3c + 4) \ln n} \leq 0.9988\sqrt{n}.
\end{align*}
Therefore, $n \geq (3654 + 2436e^{12} + 546(e^{12} + 1) c)^2(3c+4) T^3 (\log_2 n)^2 \ln n$ suffices.
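To see where the stated condition comes from (a routine squaring; the constant $C$ below is introduced just for this check): the last inequality has the form $C\,T(\log_2 n)\sqrt{T(3c+4)\ln n}\le 0.9988\sqrt{n}$ with $C=42\,(86.61 + 12.876c + 57.62 e^{12} + 12.5 e^{12} c)$, which on squaring becomes
\begin{align*}
n \;\geq\; \frac{C^2}{0.9988^2}\,(3c+4)\,T^3 (\log_2 n)^2 \ln n,
\end{align*}
and the coefficients in $(3654 + 2436e^{12} + 546(e^{12} + 1) c)^2$ dominate those of $(C/0.9988)^2$ term by term.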
\end{proof}
\subsection{Initialization}
\label{appn::init}
\begin{proof} (Of Theorem~\ref{thm::initialization}.)~
At any point in the first $\sqrt{T}$ time steps:
\begin{itemize}
\item The total population in the entire matching pool is clearly less than $n\sqrt{T}<nN$, as at most this many agents could have entered the matching pool.
\item In any single Type $1$ strip,
$$\pr{\text{Number of agents that entered the strip from the top}\leq \frac{n\sqrt{T}(1+\epsilon)}{\sqrt{T}}}\geq 1-e^{-\frac{n\epsilon^2}{3}}.$$
However, the agents that enter a Type $1$ strip during the first $\sqrt{T}$ time steps must either have entered from the top of that strip, or have entered from the top of the previous strip and crossed over. Thus, by a union bound,
$$\pr{\text{Number of agents that entered any Type $1$ strip}\leq 2n(1+\epsilon)}\geq 1-\sqrt{T}e^{-\frac{n\epsilon^2}{3}}.$$
So by setting $\epsilon=\sqrt{\frac{3}{n}\ln (n^{c+1}\sqrt{T})}$, and imposing the constraints $c\ge 1$, $T\le n$, and $n\ge 35(c+2)\ln n$ that guarantee that $\epsilon< 0.3$ (from \eqref{eqn::constraints}), we obtain that with probability $1-\frac{1}{n^{c+1}}$ every Type $1$ strip has a population $<2.6n$.
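To verify the probability bound with this choice of $\epsilon$ (a routine substitution):
\begin{align*}
\sqrt{T}\,e^{-\frac{n\epsilon^2}{3}} \;=\; \sqrt{T}\,e^{-\ln(n^{c+1}\sqrt{T})} \;=\; \frac{\sqrt{T}}{n^{c+1}\sqrt{T}} \;=\; \frac{1}{n^{c+1}},
\end{align*}
and then $2n(1+\epsilon)<2.6n$ follows from $\epsilon<0.3$.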
\item The agents in the first Type $2$ strip after $\sqrt{T}$ time steps (the only Type $2$ strip with any population after $\sqrt{T}$ time) could only be those agents that entered the leftmost two Type $1$ strips from the top. However the previous bound already guarantees that this number is also $<2.6n$.
\item Also, the population in the bottommost Type 2 strip will be 0.
\item Now it remains only to show that in each of the strips, except possibly the bottommost Type 2 strip, $|\text{number of men}-\text{number of women}|\leq \frac{n}{25\sqrt{T}}$.
\end{itemize}
We will follow the proof of Theorem \ref{thm::imbalance_bound}, though the proof will be simplified by the fact that we only need to consider $\sqrt{T}$ many time steps.
We divide each strip into thin diagonals of width $1$. Let the diagonal include the bottom but not the top boundary. Notice that for each value, a diagonal contains at most one grid point.
As in Theorem \ref{thm::imbalance_bound}, we introduce the following notation w.r.t.\ diagonal $d$ at time step $\tau$, where we are conditioning on the outcome of step $\tau-1$.
\begin{align*}
I(d,\tau) &= E[(\text{number of men at time $\tau$}-\text{number of women at time $\tau$})]\\
X(d,\tau) &= (\text{number of men matching at time $\tau$}-\text{number of women matching at time $\tau$})\\
&\hspace*{0.2in} - E[(\text{number of men matching at time $\tau$}-\text{number of women matching at time $\tau$})]\\
Y(d,\tau) &= \text{number of men entering at time $\tau$} - \text{number of women entering at time $\tau$}
\\
A(d,\tau) &= (\text{number of men matching at time $\tau$}+\text{number of women matching at time $\tau$})/2.
\end{align*}
$I(d,\tau)$ is measured after the entry of the new agents at time $\tau$ but prior to the match for this step. Also, note that $Y(d, \tau) = 0$ if $d$ is in a Type $2$ strip.
In addition, observe that the imbalance $\operatorname{Imb}(s)$ at the start of step $t$ equals $\sum_{d\in s} I(d,t)$.
We observe that a match between two agents in distinct diagonals of the same strip
will increment the $(\text{number of men } - \text{ number of women})$
in one diagonal and decrement it in the other.
Thus there is a zero net change over all the diagonals
in the strip due to the matches. However, as the agents all age by 1 unit during a step, some agents enter the strip and some leave, which can cause changes to the imbalance within a strip.
In addition, the entry of new agents can introduce new imbalances.
We will need to understand more precisely how these imbalances evolve.
It is convenient to number the diagonals as $d_1,d_2,d_3,\ldots$, in right to left order.
We recall the following claims from the proof of Theorem \ref{thm::imbalance_bound}.
\begin{claim}
\label{clm::copy_update-to-I}
Let $d_i$ and $d_j$ be two diagonals in the same strip $s$. For brevity, let $I_i\triangleq I(d_i,\tau-1)$,
$I_j\triangleq I(d_j,\tau-1)$,
$A_i\triangleq A(d_i,\tau-1)$,
$A_j\triangleq A(d_j,\tau-1)$,
$X_i\triangleq X(d_i,\tau-1)$,
$X_j\triangleq X(d_j,\tau-1)$.
Finally, let $R$ denote the maximum of the total number
of men and the total number of women in the system
at time $\tau-1$.
Then the new imbalance on diagonal $d_i$, prior to every unmatched agent adding 1 to their age (which causes the agents on $d_i$ to move to $d_{i+1}$), denoted by $I'(d_i,\tau)$, is given by:
\begin{align*}
&I'(d_i,\tau)= \\
&~~~~I_i + X_i - \sum_{d_j \in s}\Big[ X_i\frac{(2A_j-I_j-X_j)}{4R}-X_j\frac{(2A_i-I_i-X_i)}{4R}+I_i\frac{(2A_j-I_j-X_j)}{4R}-I_j\frac{(2A_i-I_i-X_i)}{4R}\Big]; \\
&\text{~~and~~}I(d_i, \tau) = I'(d_{i-1}, \tau - 1) + Y(d_i,\tau).
\end{align*}
\end{claim}
$X(d,\tau)$ and $Y(d,\tau)$ are generated at diagonal $d$ at time $\tau$ and, by
Claim \ref{clm::copy_update-to-I}, at any subsequent time step, $X(d,\tau)$ and $Y(d,\tau)$ will be redistributed over other diagonals.
\begin{enumerate}
\item Due to the expected matching at time $\tau'\ge \tau$, each $X(d,\tau)$ and $Y(d,\tau)$ spreads to other diagonals in the same strip.
\item At the end of time step $\tau'$ the portions of $X(d,\tau)$ and $Y(d,\tau)$ present on diagonal $d_i$ move to diagonal $d_{i+1}$.
\end{enumerate}
Building on these observations, we will show our bound on the imbalance by bounding the total contribution from $X(\cdot, \tau)$ and $Y(\cdot, \tau)$ to strip $s$ at time $\tau'$.
Notice that $\sum_{d_i\in s} I'(d_i,\tau) = \sum_{d_i\in s} I(d_i,\tau-1)$, for the coefficients
multiplying $X_i$ cancel, as they also do for $I_i$. Thus we can think of this process as redistributing the imbalance, but not changing the total imbalance.
Over time an imbalance $X(d_i,\tau)$ will be redistributed over many diagonals. We write
$X(d_i,\tau,d_j,\tau')$ to denote the portion of
$X(d_i,\tau)$ on diagonal $d_j$ at time $\tau'$.
$d_j$ need not be in the same strip as $d_i$.
Note that $\sum_{d_j} X(d_i,\tau,d_j,\tau') = X(d_i,\tau)$ for all $\tau'\ge \tau$. $Y(d_i,\tau,d_j,\tau')$ is defined analogously.
For the purposes of the following claim, we treat the final strip as a single diagonal, and in addition ignore the fact that people depart at age $T$ (which means that once an imbalance appears in this strip it remains there). The reason this strip is different is that it covers the whole of the bottom boundary and so is the only strip from which people leave the system by aging out.
\begin{claim}
\label{clm::copy_distr-of-X}
For all $\ell$, for all $i<k$, and for all $\tau'\ge \tau$,
$\big|\sum_{j>\ell} X(d_i,\tau,d_j,\tau')\big| \le \big|\sum_{j>\ell} X(d_k,\tau,d_j,\tau')\big|$.
The same property holds for the $Y(d_i,\tau,d_j,\tau')$.
\end{claim}
Later, we will show a common bound $B$ on the sums
$\big| \sum_{i\le j \le k} X(d_j,\tau)\big|$,
which holds for all $d_i$ and $d_k$ in the same strip and all $\tau$.
With this bound and Claim~\ref{clm::copy_distr-of-X} in hand, for each strip $s$, we can bound the contribution of the $X(d_i,\tau,d_j,\tau')$ summed
over all $d_i$ and over $d_j\in s$ by $2B$.
\begin{claim}
\label{clm::copy_bound-on-X-one-strip}
For all $\tau'\ge \tau$, for every strip $s$,
$\big|\sum_{d_i; d_j\in s} X(d_i,\tau,d_j,\tau') \big| \le 2B$.
\end{claim}
This allows us to bound the imbalance in a strip $s$ at any time $\tau'\leq\sqrt{T}$ by summing the contributions $\big|\sum_{d_i; d_j\in s} X(d_i,\tau,d_j,\tau')\big|$ and $\big|\sum_{d_i; d_j\in s} Y(d_i,\tau,d_j,\tau')\big|$ over all possible previous times $\tau$ (of which there are at most $\sqrt{T}$).
Regarding the contribution of $Y$, we also have the following claim.
\begin{claim}
\label{clm::copy_bound-on-Y-one-strip}
With probability at least $1 - \frac{1}{n^{2c+1}}$, for all $\tau'\ge \tau$, for every strip $s$,
$\big|\sum_{d_i; d_j\in s} Y(d_i,\tau,d_j,\tau') \big| \le 2\sqrt{\frac{3n}{2} \ln \left(2 T n^{3c + 1}\right)}$.
\end{claim}
Thus,
\begin{equation}
\label{eqn::init_strip_variance}
\begin{aligned}
|\operatorname{Imb}(s,\tau')|\leq \Big[2B+2\sqrt{\frac{3n}{2} \ln \left(2 T n^{3c + 1}\right)}\Big]\sqrt{T}
\end{aligned}
\end{equation}
We now calculate $B$.
\begin{claim}
\label{clm:: B_for_init}
For any diagonal $d_j$ and any $d_i$ and $d_k$ that lie in the same strip, at any time $\tau \leq \sqrt{T}$,
$$\Pr\bigg[\Big|\sum_{i\leq j\leq k} X(d_j,\tau)\Big| \geq 2\sqrt{\frac{n(1+\epsilon)^2\sqrt{3}\ln (16T\sqrt{T}(\sqrt{T}+1)n^{c+1})}{0.48\sqrt{T}}}\bigg] \leq \frac{1}{4n^{c+1}}.$$
\end{claim}
\begin{proof}
The agents enter with one of $T$ values chosen uniformly at random and are equally likely to be men or women. Hence, for all $\tau\leq \sqrt{T}$ time steps, for each value $v$,
$$\operatorname{Pr}\Big[\text{At most $\frac{n(1+\epsilon)}{2T}$ men enter with value $v$}\Big]\geq 1-\tau Te^{-\frac{\epsilon^2n}{6T}}\geq 1-T\sqrt{T} e^{-\frac{\epsilon^2n}{6T}}.$$
Call this event $\mathcal E_m$.
Similarly,
$$\operatorname{Pr}\Big[\text{At most $\frac{n(1+\epsilon)}{2T}$ women enter with value $v$}\Big]\geq 1-\tau Te^{-\frac{\epsilon^2n}{6T}}\geq 1-T\sqrt{T} e^{-\frac{\epsilon^2n}{6T}}.$$
Call this event $\mathcal E_w$.
Henceforth we condition on $\mathcal E_m$ and $\mathcal E_w$.
Consider some time $\tau$. At this time, let $M=\max\{\text{total number of men},\text{total number of women}\}$, let $m=\text{number of men in strip $s$}$, and let $w=\text{number of women in strip $s$}$.
Using Lemmas~\ref{lem::negative_dependence_two_sex} and~\ref{lem::match rate}, we obtain the following bound on the deviation from the expected number of men in $s$ matched in a given time step, $\tau$:
\begin{equation}
\label{eqn::init_strip_deviation_bound}
\begin{aligned}
&\Pr\bigg[\Big|
\begin{array}{l}
\text{number of men matched}\\
\hspace*{0.2in}-E[\text{number of men matched}]
\end{array}
\Big|> \frac{mw\delta}{M}\bigg] \leq 2e^{-{mw\delta^2}/{3M}}.
\end{aligned}
\end{equation}
We will later prove the following claim,
\begin{claim}
\label{claim::early_total_size_lower_bound}
For all times $0\leq t\leq \sqrt{T}$, $0.12nt\leq M\leq nt$, with probability at least $1-\frac{1}{5n^{2c+1}}$.
\end{claim}
Call this event $\mathcal E_M$. Henceforth, we further condition on $\mathcal E_M$.
Thus, from equation \eqref{eqn::init_strip_deviation_bound}, we obtain
\begin{equation*}
\begin{aligned}
&\Pr\bigg[\Big|
\begin{array}{l}
\text{number of men matched}\\
\hspace*{0.2in}-E[\text{number of men matched}]
\end{array}
\Big|> \frac{mw\delta}{0.12nt}\bigg] \leq 2e^{-{mw\delta^2}/{3nt}}.
\end{aligned}
\end{equation*}
Setting $\delta=\Big[\frac{3nt}{mw}\ln (n^cA(n,T)) \Big]^{1/2}$, we obtain
\begin{equation*}
\begin{aligned}
&\Pr\bigg[\Big|
\begin{array}{l}
\text{number of men matched}\\
\hspace*{0.2in}-E[\text{number of men matched}]
\end{array}
\Big|> \sqrt{\frac{mw\sqrt{3}\ln (n^cA(n,T))}{0.12nt}}\bigg] \leq \frac{2}{A(n,T)n^c}.
\end{aligned}
\end{equation*}
We will specify $A(n,T)$ later.
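As a quick check of the stated failure probability: with $\delta=\Big[\frac{3nt}{mw}\ln (n^cA(n,T)) \Big]^{1/2}$, the tail bound evaluates to
\begin{align*}
2e^{-mw\delta^2/(3nt)} \;=\; 2e^{-\ln(n^c A(n,T))} \;=\; \frac{2}{A(n,T)\,n^c}.
\end{align*}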
It is easy to see that because of $\mathcal E_m$ and $\mathcal E_w$, $m\leq nt(1+\epsilon)/2\sqrt{T}$ and $w\leq nt(1+\epsilon)/2\sqrt{T}$. So we obtain,
\begin{equation*}
\begin{aligned}
&\Pr\bigg[\Big|
\begin{array}{l}
\text{number of men matched}\\
\hspace*{0.2in}-E[\text{number of men matched}]
\end{array}
\Big|> \sqrt{\frac{nt(1+\epsilon)^2\sqrt{3}\ln (n^cA(n,T))}{0.48T}}\bigg] \leq \frac{2}{A(n,T)n^c}.
\end{aligned}
\end{equation*}
Since $t\leq \sqrt{T}$,
\begin{equation*}
\begin{aligned}
&\Pr\bigg[\Big|
\begin{array}{l}
\text{number of men matched}\\
\hspace*{0.2in}-E[\text{number of men matched}]
\end{array}
\Big|> \sqrt{\frac{n(1+\epsilon)^2\sqrt{3}\ln (n^cA(n,T))}{0.48\sqrt{T}}}\bigg] \leq \frac{2}{A(n,T)n^c}.
\end{aligned}
\end{equation*}
We can perform the same argument for the women to obtain,
\begin{equation*}
\begin{aligned}
&\Pr\bigg[\Big|
\begin{array}{l}
\text{number of women matched}\\
\hspace*{0.2in}-E[\text{number of women matched}]
\end{array}
\Big|> \sqrt{\frac{n(1+\epsilon)^2\sqrt{3}\ln (n^cA(n,T))}{0.48\sqrt{T}}}\bigg] \leq \frac{2}{A(n,T)n^c}.
\end{aligned}
\end{equation*}
From this it immediately follows that
\begin{equation*}
\begin{aligned}
&\Pr\bigg[\Big|\sum_{d\in S}X(d,\tau)
\Big|> 2\sqrt{\frac{n(1+\epsilon)^2\sqrt{3}\ln (n^cA(n,T))}{0.48\sqrt{T}}}\bigg] \leq \frac{4}{A(n,T)n^c}.
\end{aligned}
\end{equation*}
where $S$ is any set of consecutive diagonals in strip $s$. We have to consider $\sqrt{T}$ many possible times, $T$ many diagonals $d_j$, and up to $(\sqrt{T}+1)$ many strips. Thus setting $A(n,T)=16T\sqrt{T}(\sqrt{T}+1)n$ proves the claim.
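For completeness, the union bound behind this choice of $A(n,T)$: at most $\sqrt{T}$ times, $T$ diagonals, and $(\sqrt{T}+1)$ strips give at most $\sqrt{T}\cdot T\cdot(\sqrt{T}+1)$ events, each failing with probability at most $\frac{4}{A(n,T)n^c}$, for a total failure probability of
\begin{align*}
\sqrt{T}\cdot T\cdot(\sqrt{T}+1)\cdot \frac{4}{A(n,T)\,n^c}
\;=\; \frac{4\,T\sqrt{T}(\sqrt{T}+1)}{16\,T\sqrt{T}(\sqrt{T}+1)\,n^{c+1}}
\;=\; \frac{1}{4n^{c+1}},
\end{align*}
matching the bound in the claim.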
\end{proof}
From equation \eqref{eqn::init_strip_variance} we obtain, for every strip $s$ and $\tau'\leq \sqrt{T}$, the following bound on $|\operatorname{Imb}(s,\tau')|$:
\begin{equation*}
\begin{aligned}
|\operatorname{Imb}(s,\tau')|&\leq \Big[2B+2\sqrt{\frac{3n}{2} \ln \left(2 T n^{3c + 1}\right)}\Big]\sqrt{T}\\
&\leq 8\sqrt{n(1+\epsilon)^2\sqrt{T}\ln(16T\sqrt{T}(\sqrt{T}+1)n^{c+1})} + \sqrt{6nT \ln \left(2 T n^{3c + 1}\right)}.
\end{aligned}
\end{equation*}
We are conditioning on $\mathcal E_m$, $\mathcal E_w$ and $\mathcal E_M$. Set $\epsilon= \Big[\frac{6T}{n}\ln(4T\sqrt{T}n^{3c+1})\Big]^{1/2}$.
We choose constraints so that $\epsilon\leq 1$. So, for every strip $s$ and $\tau\leq \sqrt{T}$,
\begin{equation*}
\begin{aligned}
&\Pr\bigg[\big|\operatorname{Imb}(s,\tau')\big|\leq 16\sqrt{n\sqrt{T}\ln (16T\sqrt{T}(\sqrt{T}+1)n^{c+1})} + \sqrt{6nT \ln \left(2 T n^{3c + 1}\right)}\bigg]\\
&\geq \bigg(1-\frac{1}{4n^{c+1}}-\frac{1}{n^{2c+1}}\bigg)\bigg(1-\frac{1}{5n^{2c+1}}-\frac{1}{2n^{2c+1}}\bigg).
\end{aligned}
\end{equation*}
Then we obtain,
\begin{equation*}
\begin{aligned}
&\Pr\bigg[\big|\operatorname{Imb}(s,\tau')\big|\leq 16\sqrt{n\sqrt{T}\ln (16n^{c+1}T\sqrt{T}(\sqrt{T}+1))} + \sqrt{6nT \ln \left(2 T n^{3c + 1}\right)}\bigg]\\
&\geq \bigg(1-\frac{1}{4n^{c+1}}-\frac{1}{n^{2c+1}}\bigg)\bigg(1-\frac{7}{10n^{2c+1}}\bigg)\geq 1-\frac{1}{n^{c+1}}.
\end{aligned}
\end{equation*}
We desire that
$$16\sqrt{n\sqrt{T}\ln (16n^{c+1}T\sqrt{T}(\sqrt{T}+1))} + \sqrt{6nT \ln \left(2 T n^{3c + 1}\right)}\leq \frac{n}{25\sqrt{T}}$$
for which it suffices (by constraints \eqref{eqn::constraints}) that
$$32\cdot 25\cdot T\sqrt{3c+3}\,\ln n\leq \sqrt{n},$$
or,
$$n\geq (3c+3)(25\cdot 32\cdot T\ln n)^2,$$
which is also provided by the constraints \eqref{eqn::constraints}.
\end{proof}
It now remains to prove Claim \ref{claim::early_total_size_lower_bound}. We proceed exactly as in the proof of Theorem \ref{thm::lb-size}.
\begin{proof} (Of Claim~\ref{claim::early_total_size_lower_bound}.)~
Let's consider those agents that enter at times in the range $[0,t]$ for some $t \le \sqrt{T}$. We want to lower bound the number of these agents who are present in the pool for the match at time $t$.
Henceforth, we will only consider men with values in the range $[T+\sqrt{T},2T)$. Among these men, consider those who have been in the pool for $t'$ time, where $0\le t' < \sqrt{T}$.
Let $p_i$ be the probability that, during their $t'$-th time step, the men in strip $i$ are offered a match in their own strip (we suppress the dependence on $t'$ in the notation). Even if all these men were still present in the matching pool,
\begin{align*}
\operatorname{Pr}\Big[\text{\# of these men matched in strip $i$ at age $t'$ } \le \frac{n(1+\delta)(1+\epsilon)\cdot p_i\cdot w_i}{2T} \Big]\geq 1-e^{-\frac{\delta^2np_iw_i}{6T}},
\end{align*}
where $w_i$ is the horizontal width of strip $i$ occupied by these men when aged $t'$.
For every Type 1 strip, $w_i \le \sqrt{T}$.
For the one Type 2 strip, since all values are
at least $T+\sqrt{T}$,
for ages up to $\sqrt{T}$, $w_i \le \sqrt{T}$.
By applying $\overline{\mu} = \frac{n(1 + \epsilon) \max\{p_i, \frac{1}{T}\} \sqrt{T}}{2 T}$ in Lemma~\ref{lem::chernoff_bound}, it follows that:
\begin{align*}
\operatorname{Pr}\Big[\text{\# of these men matched in strip $i$ at age $t'$ } \le \frac{n(1+\delta)(1+\epsilon)\cdot \max\{p_i, \frac{1}{T}\}}{2\sqrt{T}} \Big]&\geq 1-e^{-\frac{\delta^2n \max\{p_i, \frac{1}{T}\} \sqrt{T}}{6T}} \\
&\geq 1-e^{-\frac{\delta^2n}{6T^{1.5}}},
\end{align*}
The sum of the match probabilities---the $p_i$'s---is at most $1$. Notice that at any fixed time we only need to consider $\sqrt{T}$ strips, because at any time step, the men we are considering will occupy only $\sqrt{T}$ many strips. This implies $\sum_i \max\{p_i, \frac{1}{T}\} \leq 1 + \frac{1}{\sqrt{T}}$. Therefore,
$$\operatorname{Pr}\Big[\begin{array}{l}\text{\# of these men being matched} \\ \text{over all the strips at age $t'$}\end{array} \le \frac{ (1+\frac{1}{\sqrt{T}})n(1+\delta)(1+\epsilon)}{2\sqrt{T}}\Big]\geq 1-\sqrt{T}\cdot e^{-\frac{\delta^2n}{6T^{1.5}}}.$$
Hence, for any $\Delta \le t$, we can bound the probability for the number of men who entered at time $t -\Delta+1$ and left by time $t$, as follows:
$$\operatorname{Pr}\Big[\begin{array}{l}\text{\# men being matched} \\ \text{in their first $\Delta$ steps}\end{array} \le \frac{ (1+\frac{1}{\sqrt{T}})n\Delta(1+\delta)(1+\epsilon)}{2\sqrt{T}}\Big]\geq 1- \Delta \sqrt{T}\cdot e^{-\frac{\delta^2n}{6T^{1.5}}}.$$
Consequently, we can bound the probability for the number of men that enter in the time interval $[0,t-1]$ and are matched no later than time $t-1$ as follows:
\begin{equation*}
\begin{aligned}
&\operatorname{Pr}\bigg[\begin{array}{l}\text{\# men who entered and were} \\ \text{matched in a $t-1$ time window}\end{array} \le \frac{(1+\frac{1}{\sqrt{T}})n(1+\delta)(1+\epsilon)(t-1)}{4} \bigg] \geq 1- \frac 12 t^2\sqrt{T} e^{-\frac{\delta^2n}{6T^{1.5}}}.
\end{aligned}
\end{equation*}
We set $\delta=\Big[\frac{6T^{1.5}}{n} \ln \big(10n^{2c+1} T^{1.5}\big)\Big]^{1/2}$. Note that $t\leq \sqrt{T}$. We already chose $\epsilon= \Big[\frac{6T}{n}\ln(4T\sqrt{T}n^{3c+1})\Big]^{1/2}$.
By constraint~\eqref{eqn::constraints}, $c\geq 1$, $400 \leq T\leq n$, and $n\geq 96T^2(2c+3)\ln n$,
$\delta\leq 1/4$ and $\epsilon\leq {1}/{64}$. This yields the bound:
\begin{align*}
\operatorname{Pr}\Big[\begin{array}{l}\text{\# men who entered in a $t-1$} \\ \text{ window being matched }\end{array} \le \frac{\frac{65}{64} \cdot \frac 54 nt} {4} \Big]
\geq 1-\frac{1}{20n^{2c+1}}.
\end{align*}
Since we have been conditioning on $\mathcal E$, this bound holds with probability at least $1-\frac{1}{10n^{2c+1}}$.
The same bound applies to the women.
Recalling that we excluded the men with values less than $T+\sqrt{T}$,
this yields the following lower bound on the total population size at time $t$:
$$nt - \frac{n(1 + \epsilon)}{\sqrt{T}} - 0.635nt\geq 0.25 nt,$$
with probability at least $1-\frac{1}{5n^{2c+1}}.$
Thus $0.12 nt\leq M\leq nt$ for all $t\leq \sqrt{T}$, with probability at least $1-\frac{1}{5n^{2c+1}}$, which proves the claim.
\end{proof}
\section{Bound on Benefits from Deviation}
\subsection{Bound on Imbalance}
\label{sec::imbalance_bound}
\begin{theorem}
\label{thm::imbalance_bound}
Suppose that $H(\tau)$ and the constraints in \eqref{eqn::constraints} hold. If all agents follow the modified reasonable strategy,
then with probability at least $1-2/n^{2c+1}$, in every strip $s$ (except possibly the bottommost Type $2$ strip),
$\operatorname{Imb}(s) \le n/25\sqrt{T}$.
\end{theorem}
\begin{proof}
We divide each strip into thin diagonals of width $1$. Let the diagonal include the bottom but not the top boundary. Notice that for each value, a diagonal contains at most one grid point.
We introduce the following notation w.r.t.\ diagonal $d$ at time step $\tau$, where we are conditioning on the outcome of step $\tau-1$.
\begin{align*}
I(d,\tau) &= E[(\text{number of men at time $\tau$}-\text{number of women at time $\tau$})]\\
X(d,\tau) &= (\text{number of men matching at time $\tau$}-\text{number of women matching at time $\tau$})\\
&\hspace*{0.2in} - E[(\text{number of men matching at time $\tau$}-\text{number of women matching at time $\tau$})]\\
Y(d,\tau) &= \text{number of men entering at time $\tau$} - \text{number of women entering at time $\tau$}
\\
A(d,\tau) &= (\text{number of men matching at time $\tau$}+\text{number of women matching at time $\tau$})/2.
\end{align*}
$I(d,\tau)$ is measured after the entry of the new agents at time $\tau$ but prior to the match for this step. Also, note that $Y(d, \tau) = 0$ if $d$ is in a Type $2$ strip.
In addition, observe that the imbalance $\operatorname{Imb}(s)$ at the start of step $t$ equals $\sum_{d\in s} I(d,t)$.
We observe that a match between two agents in distinct diagonals of the same strip
will increment the $(\text{number of men } - \text{ number of women})$
in one diagonal and decrement it in the other.
Thus there is a zero net change over all the diagonals
in the strip due to the matches. However, as the agents all age by 1 unit during a step, some agents enter the strip and some leave, which can cause changes to the imbalance within a strip.
In addition, the entry of new agents can introduce new imbalances.
We will need to understand more precisely how these imbalances evolve.
It is convenient to number the diagonals as $d_1,d_2,d_3,\ldots$, in right to left order.
\begin{claim}
\label{clm::update-to-I}
Let $d_i$ and $d_j$ be two diagonals in the same strip $s$. For brevity, let $I_i\triangleq I(d_i,\tau-1)$,
$I_j\triangleq I(d_j,\tau-1)$,
$A_i\triangleq A(d_i,\tau-1)$,
$A_j\triangleq A(d_j,\tau-1)$,
$X_i\triangleq X(d_i,\tau-1)$,
$X_j\triangleq X(d_j,\tau-1)$.
Finally, let $R$ denote the maximum of the total number
of men and the total number of women in the system
at time $\tau-1$.
Then the new imbalance on diagonal $d_i$, prior to every unmatched agent adding 1 to their age (which causes the agents on $d_i$ to move to $d_{i+1}$), denoted by $I'(d_i,\tau)$, is given by:
\begin{align*}
&I'(d_i,\tau)= \\
&~~~~I_i + X_i - \sum_{d_j \in s}\Big[ X_i\frac{(2A_j-I_j-X_j)}{4R}-X_j\frac{(2A_i-I_i-X_i)}{4R}+I_i\frac{(2A_j-I_j-X_j)}{4R}-I_j\frac{(2A_i-I_i-X_i)}{4R}\Big]; \\
&\text{~~and~~}I(d_i, \tau) = I'(d_{i-1}, \tau - 1) + Y(d_i,\tau).
\end{align*}
\end{claim}
This claim is shown by considering the expected number of matches involving agents in diagonals $d_i$ and $d_j$.
The proof can be found in Appendix \ref{appn::imbalance}.
The expression $X_i(2A_j-I_j-X_j)/4R$ reflects the reduction
of the contribution of $X_i$ to the total imbalance on diagonal $d_i$ and the corresponding increase on diagonal $d_j$.
Thus it is convenient to view the multiplier $(2A_j-I_j-X_j)/4R$ as indicating the fraction of $X_i$ that is being moved to diagonal $j$; the remaining fraction of $X_i$ remains on $d_i$.
$X(d,\tau)$ and $Y(d,\tau)$ are generated at diagonal $d$ at time $\tau$. In each subsequent time step the portion on each diagonal where it is present will be further redistributed:
\begin{enumerate}
\item Due to the expected matching at time $\tau'\ge \tau$, each portion of $X(d,\tau)$ and $Y(d,\tau)$ spreads to other diagonals in the same strip.
\item At the end of time step $\tau'$ the portions of $X(d,\tau)$ and $Y(d,\tau)$ present on diagonal $d_i$ move to diagonal $d_{i+1}$.
\end{enumerate}
Building on these observations, we will show our bound on the imbalance by means of the following two arguments. Specifically, we show that:
\begin{enumerate}
\item For any $\tau$ and $\tau'$, the total contribution from $X(\cdot, \tau)$ and $Y(\cdot, \tau)$ to strip $s$ at time $\tau'$ is bounded.
\item For times $\tau'\ge \tau + \Omega(T\log n)$, the remaining portions of $X(\cdot, \tau)$ and $Y(\cdot, \tau)$ in the market are small.
\end{enumerate}
\paragraph{Bound on the contribution of $X$ to the strip $s$}
Notice that $\sum_{d_i\in s} I'(d_i,\tau) = \sum_{d_i\in s} I(d_i,\tau-1)$, for the coefficients
multiplying $X_i$ cancel, as they also do for $I_i$. Thus we can think of this process as redistributing the imbalance, but not changing the total imbalance.
Over time an imbalance $X(d_i,\tau)$ will be redistributed over many diagonals. We write
$X(d_i,\tau,d_j,\tau')$ to denote the portion of
$X(d_i,\tau)$ on diagonal $d_j$ at time $\tau'$.
$d_j$ need not be in the same strip as $d_i$.
Note that $\sum_{d_j} X(d_i,\tau,d_j,\tau') = X(d_i,\tau)$ for all $\tau'\ge \tau$. $Y(d_i,\tau,d_j,\tau')$ is defined analogously.
An important property concerns the relative
distribution of the $X(d_i,\tau,d_j,\tau')$ and
the $X(d_k,\tau,d_j,\tau')$. In a sense made precise in the following claim, if $k>i$ the $d_k$ terms
remain to the left of the $d_i$ terms.
For the purposes of the following claim, we treat the final strip as a single diagonal, and in addition ignore the fact that people depart at age $T$ (which means that once an imbalance appears in this strip it remains there). The reason this strip is different is that it covers the whole of the bottom boundary and so is the only strip from which people leave the system by aging out.
\begin{claim}
\label{clm::distr-of-X}
For all $\ell$, for all $i<k$, and for all $\tau'\ge \tau$,
$\big|\sum_{j>\ell} X(d_i,\tau,d_j,\tau')\big| \le \big|\sum_{j>\ell} X(d_k,\tau,d_j,\tau')\big|$.
The same property holds for the $Y(d_i,\tau,d_j,\tau')$.
\end{claim}
\begin{proof}
We prove the result for the $X$ terms by induction on $\tau'$; the same argument applies to the $Y$ terms.
Clearly the property holds for $\tau'=\tau$.
Let $x_{ij} \triangleq X(d_i,\tau,d_j,\tau')/X(d_i,\tau)$, and define $x_{kj}$ analogously. Our claim states that $\sum_{j>\ell} x_{ij} \le \sum_{j>\ell} x_{kj}$; we need to show it holds at time $\tau'+1$ also.
We view the $x_{ij}$ as sitting on the unit interval, with $x_{ij}$ taking a portion of length $x_{ij}$, ordered by increasing $j$, and likewise for the $x_{kj}$.
We map aligned portions of the $x_{ij}$ and $x_{kj'}$ to each other.
This mapping has the property that the $j$ index in the $x_{ij}$ term is always equal to or smaller than the $j'$ index in the $x_{kj'}$ term.
Let's look at how aligned portions of $x_{ij}$ and $x_{kj'}$ are dispersed
in the next step. If they are in distinct strips, then $j< j'$ and this property is maintained for all the dispersed portions.
We view the multiplier $(2A_j-I_j-X_j)/4R$ in Claim~\ref{clm::update-to-I} as specifying the fraction of $X_i$ that moves from diagonal $i$ to diagonal $j$. Notice that this multiplier is the same for every diagonal in this strip.
We also note that $I_i$ consists of a sum of terms $X(d_i,\tau,d_j,\tau')$
and $Y(d_i,\tau,d_j,\tau')$ for diagonals $d_j$ in the same strip as $d_i$ or to the right of $d_i$. Furthermore, the multiplier $(2A_j-I_j-X_j)/4R$ specifies
the fraction of each of these terms that moves from diagonal $i$ to diagonal $j$. Thus if $d_j$ and $d_{j'}$ are in the same strip, the $X$ terms corresponding to the aligned portions of $x_{ij}$ and $x_{kj'}$ are redistributed identically, thereby maintaining the property for these fragments. Naturally, the property also continues to hold for undispersed fragments.
Finally, shifting down by one diagonal, as is done following the dispersal, will leave the property unaffected.
\end{proof}
Later, we will show a common bound $B$ on the sums
$\big| \sum_{i\le j \le k} X(d_j,\tau)\big|$,
which holds for all $d_i$ and $d_k$ in the same strip and all\footnote{The calculation for the bound proved in Claim \ref{clm::bound-on-X-one-strip::B} only applies to $|\sum_{i\le j \le k} X(d_j,\tau)|$, where $\tau>\sqrt{T}$. However for times in the initial $\sqrt{T}$ steps, the bound is only better. A calculation of this bound for times in this initial period is done in the proof of Theorem \ref{thm::initialization}; see Claim \ref{clm:: B_for_init} in Appendix \ref{appn::init}.} $\tau$.
With this bound and Claim~\ref{clm::distr-of-X} in hand, for each strip $s$, we can bound the contribution of the $X(d_i,\tau,d_j,\tau')$ summed
over all $d_i$ and over $d_j\in s$ by $2B$.
\begin{claim}
\label{clm::bound-on-X-one-strip}
For all $\tau'\ge \tau$, for every strip $s$,
$\big|\sum_{d_i; d_j\in s} X(d_i,\tau,d_j,\tau') \big| \le 2B$.
\end{claim}
\begin{proof}
Let $d_{r(s)}$ be the rightmost (lowest index) diagonal in $s$ and $d_{l(s)}$ be the leftmost (highest index) diagonal in $s$. Let $w_i = \sum_{j\ge r(s)} X(d_i,\tau,d_j,\tau')/X(d_i,\tau)$.
Let's consider
$\sum_{d_i\in s'; j \ge r(s)} X(d_i,\tau,d_j,\tau')
= \sum_{d_i\in s'} w_i\cdot X(d_i,\tau)
$.
Notice that $\sum_{r(s')\leq i\leq l(s')} X(d_i,\tau)=0$. By Claim~\ref{clm::distr-of-X}, $w_i \le w_k$, for $i<k$. Thus,
\begin{align*}
\Big|\sum_{d_i\in s'; j \ge r(s)} X(d_i,\tau,d_j,\tau')\Big|=
\Big|\sum_{d_i\in s'} w_i\cdot X(d_i,\tau)\Big| &\leq
\sum_{r(s') \le r < l(s')} (w_{r+1} -w_{r})\Big|\sum_{r\le i \le l(s')} X(d_i,\tau)\Big|\\
&\le (w_{l(s')}-w_{r(s')})\cdot \max_{r \geq r(s')}\Big| \sum_{r\le i \le l(s')} X(d_i,\tau)\Big|.
\end{align*}
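For clarity, the first inequality above is summation by parts: writing $w_i = w_{r(s')} + \sum_{r(s')\le r<i}(w_{r+1}-w_r)$ and using the fact that the $X(d_i,\tau)$ sum to zero over the strip $s'$,
\begin{align*}
\sum_{d_i\in s'} w_i\, X(d_i,\tau) \;=\; \sum_{r(s')\le r<l(s')} (w_{r+1}-w_r) \sum_{r< i\le l(s')} X(d_i,\tau).
\end{align*}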
We apply this bound to the diagonals from every strip to obtain:
\begin{equation}
\label{eqn::canceled_out_spread}
\begin{aligned}
\Big|\sum_{d_i; j \ge r(s)} X(d_i,\tau,d_j,\tau')\Big|=
\Big|\sum_{s'}\sum_{d_i\in s'; j \ge r(s)} X(d_i,\tau,d_j,\tau')\Big| \leq \sum_{s'}(w_{l(s')}-w_{r(s')})\cdot B\leq B.\
\end{aligned}
\end{equation}
Using the same argument, $\big|\sum_{d_i; j \ge l(s)+1} X(d_i,\tau,d_j,\tau')\big| \le B$, since $l(s)+1=r(s'')$ where $s''$ is the strip immediately below $s$. Therefore,
\begin{align*}
\Big|\sum_{d_i; d_j\in s} X(d_i,\tau,d_j,\tau') \Big| =
\Big| \sum_{d_i; j \ge r(s)} X(d_i,\tau,d_j,\tau') -
\sum_{d_i; j \ge l(s)+1} X(d_i,\tau,d_j,\tau') \Big| \le 2B.
\end{align*}
\end{proof}
\begin{claim}\label{clm::bound-on-X-one-strip::B}
For any time $\tau \leq n^c$, with probability at least $1 - \frac{1}{n^{2c+1}}$, $B \leq 96\Big[ \frac{n\ln(4n^{3c+1} (T^2/32 + T/8) N)}{\sqrt{T}}\Big]^{1/2}$.
\end{claim}
\begin{proof}
First we bound $|\sum_{d\in S} X(d,\tau)|$ for any subset $S$ of consecutive diagonals in a strip $s$. Suppose the total number of men in $S$ is $m$ and the total number of women is $w$.
By Theorem \ref{thm::lb-size}, the total population is at least $1/3 \cdot n\sqrt{T}$. By Theorem \ref{thm::total_size_upper_bound}, it is at most $3nN/2+n$. In addition, by the inductive hypothesis, the total imbalance is bounded by the bottommost strip population plus the individual strip imbalances, which is at most $60n/\sqrt{T} + nN/(25\sqrt{T})$. Therefore,
$$\frac{n\sqrt{T}}{6}\leq\max
\Big\{
\begin{array}{l}
\text{total number of men},\\ \hspace*{0.2in}\text{total number of women}
\end{array}
\Big\}
\leq \frac 12 \Big(\frac{3n(\sqrt{T}+\log_2\sqrt{T} + 1)}{2}+n + \frac{60n}{\sqrt{T}} + \frac{nN}{25\sqrt{T}}\Big).$$
As $\sqrt{T}\geq 26$ by constraint \eqref{eqn::constraints}, %
\begin{equation}
\label{eqn::upper_and_lower_TotBound}
\begin{aligned}
\frac{n\sqrt{T}}{6}\leq\max
\Big\{
\begin{array}{l}
\text{total number of men},\\ \hspace*{0.2in}\text{total number of women}
\end{array}
\Big\}
\leq n\sqrt{T}.
\end{aligned}
\end{equation}
Let $M=\max\{\text{total number of men}, \text{total number of women} \}$.
Lemmas~\ref{lem::negative_dependence_two_sex} and~\ref{lem::match rate} yield the following bound on the deviation from the expected number of men in $S$ matched in a given time step:
\begin{equation}
\label{eqn::man_deviation_general_population}
\begin{aligned}
&\Pr\bigg[\Big|
\begin{array}{l}
\text{number of men matched}\\
\hspace*{0.2in}-E[\text{number of men matched}]
\end{array}
\Big|> \frac{mw\epsilon}{M}\bigg] \leq 2e^{-{mw\epsilon^2}/{3M}}.
\end{aligned}
\end{equation}
By the lower bound on $M$ provided by \eqref{eqn::upper_and_lower_TotBound}:
\begin{align*}
&\Pr\bigg[\Big|
\begin{array}{l}
\text{number of men matched}\\
\hspace*{0.1in}-E[\text{number of men matched}]
\end{array}
\Big|> \frac{6mw\epsilon}{n\sqrt{T}}\bigg] \leq \Pr\bigg[\Big|
\begin{array}{l}
\text{number of men matched}\\
\hspace*{0.1in}-E[\text{number of men matched}]
\end{array}
\Big|> \frac{mw\epsilon}{M}\bigg].
\end{align*}
And by the upper bound on $M$ given by \eqref{eqn::upper_and_lower_TotBound}, $2e^{-{mw\epsilon^2}/{3M}}\leq
2e^{-{mw\epsilon^2}/{3n\sqrt{T}}}$.
We now apply these two bounds to equation \eqref{eqn::man_deviation_general_population} to obtain:
\begin{align*}
&\Pr\bigg[\Big|
\begin{array}{l}
\text{number of men matched}\\
\hspace*{0.2in}-E[\text{number of men matched}]
\end{array}
\Big| > \frac{6mw\epsilon}{n\sqrt{T}}\bigg] \leq 2e^{-{mw\epsilon^2}/{3n\sqrt{T}}}.
\end{align*}
The same reasoning can be applied to the number of women matched in $S$.
We set $\epsilon=\big[\frac{3n\sqrt{T}}{mw}\ln(4n^{3c+1} (T^2/32 + T/8) N)\big]^{1/2}$.
By the inductive hypothesis, $m+w\leq 7.5n$, and therefore $mw \leq (15n/4)^2$. We obtain:
\begin{align*}
& \frac{6mw\epsilon}{n\sqrt{T}}
= 6\Big[\frac{3mw\ln(4n^{3c+1} (T^2/32 + T/8) N)}{n\sqrt{T}}\Big]^{1/2}
\le \frac{45\sqrt{3}}{2}
\Big[\frac{n\ln(4n^{3c+1} (T^2/32 + T/8) N)}{\sqrt{T}}\Big]^{1/2},\\
&\text{and}\hspace*{0.2in} 2e^{-{mw\epsilon^2}/{3n\sqrt{T}}}
\le \frac{1}{2n^{3c+1} (T^2/32 + T/8) N}.
\end{align*}
On adding the bounds for the numbers of men and women, this yields:
\begin{align}
\label{eqn::new_variance}
\Pr\bigg[\big|\sum_{d\in S} X(d,\tau)\big|
> 45\sqrt{3}\Big[ \frac{n\ln(4n^{3c+1} (T^2/32 + T/8) N)}{\sqrt{T}}\Big]^{1/2}\bigg]
\leq \frac{1}{n^{3c+1} (T^2/32 + T/8) N}.
\end{align}
Recall that there are $N$ strips, at most $n^c$ rounds, and, for each strip, there are at most $(T^2/32 + T/8)$ choices of $l$ and $r$. Therefore, the total failure probability is at most $\frac{1}{n^{2c+1}}$.
\end{proof}
\paragraph{Bound on the contribution of $Y$ to strip $s$.}
As for $X$, we define $Y(d_i, \tau, d_j, \tau')$ to be the portion of $Y(d_i, \tau)$ on diagonal $d_j$ at time $\tau'$.
\begin{claim}
\label{clm::bound-on-Y-one-strip}
With probability at least $1 - \frac{1}{n^{2c+1}}$, for all $\tau'\ge \tau$, for every strip $s$,
$\big|\sum_{d_i; d_j\in s} Y(d_i,\tau,d_j,\tau') \big| \le 2\sqrt{\frac{3n}{2} \ln \left(2 T n^{3c + 1}\right)}$.
\end{claim}
The proof of this claim is similar in spirit to that of
Claim~\ref{clm::bound-on-X-one-strip::B}. We defer it to Appendix \ref{appn::imbalance}.
\paragraph{Remaining $X$ and $Y$ in the market.}
Next, we want to show that after $O(T)$ time the portions of $X$ and $Y$ remaining in the market are small.
\begin{claim}\label{clm::remain::type::1}
$\frac{e^2 \ln 2}{\log_2 (4/3)} \sqrt{T}(\sqrt{T}+\log_{2}(2n^k))$ time after their creation, at most a $\frac{1}{2n^k}$ fraction of $X(d,\tau)$ and $Y(d,\tau)$ remains in the Type $1$ strips.
\end{claim}
\begin{proof}
Consider some $X(d,\tau)$ or $Y(d, \tau)$ generated in a Type $1$ strip.
We first bound $\sum_{j: d_j \in s} (2A_j - I_j - X_j) / 4R$ for any Type $1$ strip $s$.
By Theorem \ref{thm::lb-size}, the total size of the population is lower bounded by $(1/3)n\sqrt{T}$. By the inductive hypothesis, any Type $1$ strip $s$ has total size at most $2.6n$. The term $\sum_{j: d_j \in s} (2A_j - I_j - X_j)$ is $2$ times the total number of women in strip $s$. By the inductive hypothesis, the number of women in $s$ is at most $1.3n+n/50\sqrt{T}$. Lemma \ref{lem::match rate} provides the following upper bound on the probability
that a man receives a match in a Type $1$ strip:
\begin{align}
\label{enq::match-rate-bound}
\sum_{j: d_j \in s} \frac{(2A_j - I_j - X_j)}{4R} \leq \frac{1}{2} \cdot \frac{1.3n+ \frac{n}{50\sqrt{T}}}{\frac{1}{6}n\sqrt{T}}<\frac{4}{\sqrt{T}},\hspace*{0.2in}\text{since $\sqrt{T}\ge 26$ by constraint~\eqref{eqn::constraints}}.
\end{align}
Consider any $X(d,\tau,d',\tau')$. If $d'$ is in a Type $1$ strip then by \eqref{enq::match-rate-bound} in one step at most $\frac{4}{\sqrt{T}}$ of it disperses to some location in the same strip, and at least $1-\frac{4}{\sqrt{T}}$ of it moves down distance one. This implies that in $\sqrt{T}/2$ time a Type $1$ strip loses at least $e^{-2}$ of the $X(d,\tau,d',\tau')$ that had been present within it at time $\tau'$. Let $K_1=e^{2}\ln 2$. By time $\tau'+ K_1\sqrt{T}/2$ at least half of the $X(d,\tau,d',\tau')$ in a Type $1$ strip has moved out of the strip.
We number the Type $1$ strips from top to bottom. Let $\gamma$ be the distribution of $X(d,\tau)$ (or $Y(d,\tau)$), where $\gamma_i$ is the fraction of $X(d,\tau)$ (or $Y(d,\tau)$) in strip $i$. Recall that there are $\sqrt{T}$ Type $1$ strips. We consider the worst case: the $X(d,\tau)$ starts out in the topmost strip. Define a potential function $\phi(\gamma)=\sum_{i=1}^{\sqrt{T}}\gamma_i\cdot 2^{\sqrt{T} - i + 1}.$ Any fraction of $X(d,\tau)$ that has left the bottommost Type $1$ strip contributes nothing to the potential. The initial potential is $2^{\sqrt{T}}$. Every $K_1\sqrt{T}$ time steps, the potential decreases by at least a quarter of its value, i.e., it is multiplied by a factor of at most $3/4$. Therefore, after $\frac{1}{\log_2 (4/3)} K_1\sqrt{T}\log_{2}(2^{\sqrt{T}}2n^k)$ time, the potential has been reduced to at most $\frac{1}{2n^k}$, which means that the fraction of $X(d,\tau)$ (or $Y(d,\tau)$) remaining in the Type $1$ strips after $\frac{1}{\log_2 (4/3)} K_1\sqrt{T}(\sqrt{T}+\log_{2}(2n^k))$ time is at most $\frac{1}{2n^k}$.
\end{proof}
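The potential-function step can be sanity-checked numerically. The toy sketch below is not the market dynamics; it simply enforces the worst-case behaviour used in the proof, namely that at least half of the mass in each strip moves down one strip per period (here, exactly half), and verifies the factor-$3/4$ decay of $\phi$:

```python
# Toy check of the potential argument. Mass starts in the topmost of S
# strips; each period, half the mass in every strip moves down one strip.
S = 10  # stands in for the sqrt(T) Type 1 strips

def phi(gamma):
    # Strips are numbered top (0) to bottom (S-1); mass that has left the
    # Type 1 strips (collected in gamma[S]) contributes nothing.
    return sum(gamma[i] * 2 ** (S - i) for i in range(S))

gamma = [1.0] + [0.0] * S  # all mass in the top strip; gamma[S] = departed mass
for _ in range(5 * S):
    old = phi(gamma)
    new = [0.0] * (S + 1)
    for i in range(S):
        new[i] += gamma[i] / 2       # at most half stays put
        new[i + 1] += gamma[i] / 2   # at least half moves down one strip
    new[S] += gamma[S]               # departed mass stays departed
    gamma = new
    assert phi(gamma) <= 0.75 * old + 1e-12  # factor-3/4 decay per period
```

After $k$ periods the potential is at most $(3/4)^k \cdot 2^{S}$, matching the claim.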
We will analyze the progress through the Type 2 strips, apart from the bottommost one, in a similar way. The proof can be found in Appendix \ref{appn::imbalance}.
\begin{claim}\label{clm::remain::type::2}
$\frac{e^2 \ln 2}{\log_2 (4/3)} \sqrt{T}(\sqrt{T}+\log_{2}(2n^k)) + \frac{e^{12} \ln 2}{4 \log_2 (4/3)} T\log_{2}(2n^k\sqrt{T})$ time after their creation, at most a $\frac{1}{n^k}$ fraction of $X(d,\tau)$ and $Y(d,\tau)$ remains in any strip other than the bottommost Type 2 strip.
\end{claim}
\paragraph{The total bound on imbalance.}
Now we can bound the total imbalance in a strip $s$ at time $\tau'$. Let $\kappa = \frac{e^2 \ln 2}{\log_2 (4/3)} \sqrt{T}(\sqrt{T}+\log_{2}(2n^k)) + \frac{e^{12} \ln 2}{4 \log_2 (4/3)} T\log_{2}(2n^k\sqrt{T})$.
We divide the time interval $[0, \tau']$ into two periods: $\left[0, \tau' - {\kappa}\right]$ and $\left[\tau' - {\kappa} + 1, \tau'\right]$.
\begin{itemize}
\item In the first period, we bound each $|X(d,\tau)|$ and $|Y(d,\tau)|$ by $7.5n$ as no strip can have more than $7.5n$ agents on it by Lemma~\ref{lem::ind-bound}. By Claims~\ref{clm::remain::type::1} and \ref{clm::remain::type::2}, the total imbalance for this period is at most $15n T n^c / n^k$;
\item For the second period, using Claims \ref{clm::bound-on-X-one-strip}, \ref{clm::bound-on-X-one-strip::B}, and \ref{clm::bound-on-Y-one-strip}, the total imbalance is at most $\ceil{\kappa} \cdot \Big(192\sqrt{ \frac{n\ln(4n^{3c+1} (T^2/32 + T/8) N)}{\sqrt{T}}} +2\sqrt{\frac{3n}{2} \ln \left(2 T n^{3c + 1}\right)} \Big)$.
\end{itemize}
We choose $k = c + 4$ and sum the two contributions. We want their total to be at most $n/(25\sqrt{T})$. Using $n\geq T\geq 676$ (by constraint \eqref{eqn::constraints}), this condition reduces to the following sufficient bound:
$$n \geq (3654 + 2436e^{12} + 546(e^{12} + 1) c)^2(3c+4) T^3 (\log_2 n)^2 \ln n.$$
The details of this calculation can be found in Appendix \ref{appn::imbalance}.
Finally, the failure probability of $2/n^{2c+1}$ arises from Claims~\ref{clm::bound-on-X-one-strip::B} and~\ref{clm::bound-on-Y-one-strip}, which each have failure probability at most $1/n^{2c+1}$.
\end{proof}
\subsubsection{Initialization}
\label{sec::initialization}
\begin{theorem}
\label{thm::initialization}
Suppose that constraint~\eqref{eqn::constraints} holds. If all agents follow the modified reasonable strategy, then $H(\sqrt{T})$ holds with probability at least $1-\frac{1}{n^{c+1}}$.
\end{theorem}
The proof is similar to the earlier analysis and can be found in Appendix \ref{appn::init}.
\section{Introduction}
\label{sec::intro}
What strategies make sense when deciding whether to commit to a long-term relationship? We are interested
in pairings between members of two sets of agents, such as an employer offering a job and a worker accepting, a woman (or man) proposing marriage to a person of the opposite sex,\footnote{Single-sex marriages could also be studied, but then there would be just one set of agents. In fact, this does not appear to significantly affect our results, but in this work we have focused on the case of two sets of agents.} or a landlord agreeing to rent an apartment to a potential renter.
The key feature of these relationships is that the longer they last, the greater the utility they provide;
for simplicity, we assume this utility is linear in the duration of the match. Nonetheless, as a rule agents do not choose to match as soon as they receive a proposal, for different potential partners may provide different utilities. An employer may be supportive or not, a marriage may be happy or not; the possibilities are myriad. Agents seek to assess the utility of a proposed match and then decide whether to accept or keep searching (such an assessment might be implicit). These judgements can be based on some combination of idiosyncratic factors and commonly shared perspectives. Both sides of a potential match are making this assessment, and a match happens only if both sides accept it.
Assessing potential matches takes time, and therefore an agent can consider only a relatively small number of potential matches at any one time. In many circumstances, choices are offered on a take-it-or-lose-it basis. Typically, job offers are made with a short decision window.
While marriage or its equivalents have many cultural variations, as a rule offers of marriage when made are accepted or declined; it would be unusual to collect multiple offers and only then decide (in the somewhat unlikely event the parties on the other side would be willing to wait).
Again, for simplicity, we assume agents can consider only one match at a time.
Furthermore, agents are aware of time slipping by. An unemployed worker cannot afford to stay unemployed indefinitely.
Businesses wish to fill open positions promptly as they need workers to carry out the duties of these open positions.
Many men and women appear to want to pair sooner rather than later (whether the pairing is called marriage or not).
We see two forces at work here: one is the ongoing utility from a match, which starts only when the match is formed.
The second is that at least in some circumstances partners become less desirable as they become older.
We are interested in two questions:
\begin{center}
What decision rules make sense, and how can their effectiveness be measured?
\end{center}
Each potential decision rule provides a balance between the urge to form a match soon, so as to have a longer time in which to enjoy it, and the desire to continue searching in the hope of finding a better match.
The equilibrium properties of decision rules have been studied previously in models with a continuum population, a continuum model for short~\cite{Adachi03,BurdettC97,BurdettC99,BurdettW98, smith2006marriage,shimer2000assortative,bloch2000two,eeckhout1999bilateral,lauermann2014stable, mcnamara1990job, damiano2005unravelling}.
In these works, agents are assumed to arrive according to a variety of processes, such as a Poisson process.
In some of these works, they are also assumed to use time discounting of future utility.
Either they have infinite lifetimes in which to seek matches or they depart---die---according to another process.
We discuss this in more detail in the related work section below.
Each agent has an intrinsic appeal, a numeric value, called \emph{charm} in Burdett and Coles~\cite{BurdettC99}.
The utility an agent derives from a match is assumed to be an increasing function of their partner's charm.
Agents receive match proposals at a fixed rate and agents either accept or reject a match immediately; for a match to succeed both participating agents must agree to it.
One natural class of agent strategies is the class of reservation strategies: an agent accepts a proposed match exactly when the partner has charm at least $c$. Typically the chosen $c$ is a function of the agent's own charm.
The right choices of reservations $c$ yield equilibrium strategies.
In contrast, we study this problem in a discrete, albeit stochastic, setting.
By this we mean that a finite number of agents arrive at each time step; we also choose time to be discrete.
In addition, we model lifetimes differently, viewing all lives as having duration $T$.
This has the effect of making agents less demanding over time, which we believe is a real
effect, and one that does not arise when the departure rate stays the same over time.
Discreteness introduces variance, which leads to localized imbalances in the numbers of men and women
(by localized, we mean agents of a given age and charm). The analysis and bounding of these imbalances
are the largest challenge we face, and while asymptotically small, for moderate values
of our parameters these are non-trivial quantities, as confirmed by our simulation results.
This is in sharp contrast to a continuum setting, where there will be no variance.
Finally, it is not clear that our setting will converge to an equilibrium or near-equilibrium, and while our simulations for moderate parameter values suggest a certain level of stability, they also show that there is continuing substantial variability.
In any event, our concern is to understand the quality of the outcomes:
in a sense we make precise shortly, our model achieves near-optimal utility with high probability.
\paragraph{Roadmap} In section~\ref{sec::model} we formally define our setting, and in section~\ref{sec::results} we state our results.
Following some preliminaries in section~\ref{sec::prelim},
we present our lower bound in section~\ref{sec::lower-bound}, and outline
the construction for our upper bound in section~\ref{sec::upper_bound}.
In section~\ref{sec::simulations} we describe our simulation results and
we conclude in section~\ref{sec:open-problems} with some additional remarks.
Many proofs are deferred to the appendix.
\section{Lower Bound on the Loss for Any Strategy}
\label{sec::lower-bound}
The intuition for this result is fairly simple.
If an agent remains unmatched for $\sqrt{T}$ steps, then any subsequent proposed match would cause a loss of at least $\sqrt{T}$ to one of the participating agents.
Thus to avoid having average losses of
$\Omega(\sqrt{T})$, most matches would need to occur during an agent's first $\sqrt{T}$ steps.
But we will show that for at least a constant fraction of the agents, the matches they are offered during their first $\sqrt{T}$ steps will all have the property that the values of the two agents differ by at least $\sqrt{T}$, and consequently one of the participating agents would suffer a $\sqrt{T}$ loss.
The overall result follows.
This second claim is not immediate because the probability that an agent is offered a close-in-value match might vary significantly from agent to agent and over time.
We somewhat optimize the constants, and consequently consider a time period $w = \Theta(\sqrt{T})$ and value differences of $w$, instead of precisely the value $\sqrt{T}$ used in the outline above.
\begin{proof}[Proof of Theorem~\ref{thm::lb-loss-two_sex}]
We divide the grid into width $w$
columns, where a column includes the low-value side boundary, but not the high-value boundary; one end column may be narrower. We will set the parameter $w$ later.
We consider the set of proposed matches at some arbitrary time $t$.
We say a proposed match is \emph{safe} if the paired agents are in the same or adjacent columns.
We also define the male match rate $p_i$ for column $i$ to be the probability that a man in the column has a safe match. By Lemma~\ref{lem::match rate}, this is at most the number of women in columns $i-1$, $i$, and $i+1$ divided by the maximum of the total number of women and the total number of men, which is at most the number of women in these columns divided by the total number of women.
Clearly the sum of the male match rates over all the columns is at most $3$.
The same claim holds for the analogous female match rates.
Consider the men entering the system at time $t$, which we call the \emph{new} men.
Each column contains at most $w$ points at which agents enter the market, namely the points along the column's top edge, and each entering agent is equally likely to be a man or a woman. By applying a Chernoff bound, we see that for any given column $i$,
\vspace*{-0.1in}
$$\operatorname{Pr}\Big[\text{\# of new men in column $i$ at time $t$} \ge \frac{(1+\delta)nw}{2T} \Big]\le e^{-\delta^2nw/6T}.$$
Applying this bound to every column over $\tau$ consecutive time steps yields:
\begin{align*}
&\operatorname{Pr}\Big[\text{every column receives at most } \frac{(1+\delta)nw}{2T} \text{ new men} \\[-10pt]
&\hspace*{1.5in}\text{for each of $\tau$ consecutive time steps}\Big]
\ge\Big(1-e^{-\delta^{2}nw/6T}\Big)^{\tau T/w}.
\end{align*}
Call this event $\mathcal E$.
Henceforth we condition on $\mathcal E$.
Now suppose that every time an agent was offered a safe match, they accepted it. Recall that $p_i$ is the match rate for column $i$.
By Lemma \ref{lem::negative_dependence_two_sex}, for the new men at time $t$ in column $i$, for any $t$,
$$\operatorname{Pr}\Big[\text{\# safely matched men } \le \frac{(1+\delta)n w p_i}{2T}\Big] \ge1-e^{-\delta^2 n w p_i/6T}.$$
In fact, agents may not accept every proposed safe match; but this only reduces the number of agents safely matched, and therefore the bound on the probability continues to hold.
Furthermore, by Lemma~\ref{lem::chernoff_bound}, with $\overline{\mu} = \frac{n w \max\{p_i, \frac{w}{T}\}}{2T}$, we obtain
\begin{align*}
\operatorname{Pr}\Big[\text{number of safely matched men } \le \frac{(1+\delta)n w \max\{p_i, \frac{w}{T}\}}{2T}\Big] \ge1-e^{-\frac{\delta^2 n w \max\{p_i, \frac{w}{T}\}}{6T}} \ge 1-e^{-\frac{\delta^2 n w^2}{6T^2}}.
\end{align*}
Recalling that $\sum_i p_i \le 3$, and applying a union bound over all $T/w$ strips
for $w$ successive steps,
we obtain, for any given set of new men
entering at some time $t$,
over their first $w$ time steps,
\begin{align}
\label{eqn::prog-num-safe-match}
\operatorname{Pr}\Big[\text{\# of safely matched men} \leq\frac{2(1+\delta)nw^2}{T}\Big]\ge1- \frac{T}{w} w e^{-\delta^2nw^2/6T^2}.
\end{align}
In addition, for any given set of new men, on applying a Chernoff bound, we know that
\begin{align}
\label{eqn::prob:num-new-men}
\operatorname{Pr}\Big[\text{\# of new men} \geq\frac{n(1-\epsilon)}{2}\Big]\ge 1 - e^{-\epsilon^2n/4}.
\end{align}
For each remaining man in each of the first $\tau-w$ sets of new men---of which there are at least $(\tau - w)\big(\frac{n(1-\epsilon)}{2} - \frac{2(1 + \delta)nw^2}{T}\big)$---
one of the following two cases must apply.
\begin{itemize}
\item He has not been matched after spending $w$ time in the system. Now, if and when he is matched, the only way he can avoid suffering a $wT$ loss is to match with a sufficiently higher value woman. In this case the higher value woman suffers at least a $wT$ loss.
\item He has been matched within $w$ time but it was not a safe match. In such a match whichever agent had the higher value suffered at least a $wT$ loss.
\end{itemize}
Since the system runs for $\tau$ time steps, this argument can be applied to all agents except those that enter the system during the last $w$ time steps.
We deduce that the total loss generated by all these agents is at least
$(\tau-w)(\frac{n(1-\epsilon)}{2}- \frac{2(1 + \delta)nw^2}{T})\cdot wT$.
Note that this loss is being shared by up to $n\tau$ agents.
Hence there is an average loss of at least $\frac 12\big(wT(1-\epsilon) - \frac{4(1 + \delta)w^3T}{T}\big)\cdot\frac{\tau-w}{\tau}.$
Setting
$w = \frac{\sqrt{T}}{4}$,
and using the lower bound on $\tau$ ($\tau\geq T$), we obtain:
$$\text{average loss per agent}\geq
\frac 12 \Big(\frac{T\sqrt{T}(1-\epsilon)}{4}-\frac{T\sqrt{T}(1+\delta)}{16}\Big)\cdot\Big(1-\frac{\sqrt{T}}{4T}\Big).$$
Now we set $\delta=\sqrt{\frac{6T^2}{nw^2}\ln(3n^{c}T\tau)}$ and $\epsilon=\sqrt{\frac 4n\ln(3\tau n^{c})}$. We would like to have $\delta\leq1$, which we enforce by our choice of constraints on $n,T,\tau$ and $c$ (namely $16\leq T\leq n, c\geq1,T\leq \tau \leq n^c$ and $n\geq 96T(2c+2)\ln n$). These constraints also ensure that $\epsilon\leq 1/16$. Substituting $\delta \le 1$ and $\epsilon\leq 1/16$ yields:
$$\text{average loss per agent}\geq\frac{7T\sqrt{T}}{128}\cdot\frac{15}{16}\geq \frac{T\sqrt{T}}{20}. $$
By \eqref{eqn::prog-num-safe-match}
and \eqref{eqn::prob:num-new-men},
this bound holds with probability at least
\begin{align*}
\operatorname{Pr}[{\mathcal E}] \cdot\Big(1-\tau T e^{-\frac{\delta^2 nw^2}{6T^2}}-\tau e^{-\frac{\epsilon^2 n}{4}}\Big)
& \ge \Big( 1 - \frac{1}{n^c}\Big).
\end{align*}
The detailed calculation can be found in Appendix \ref{appn::loss_prob_calculation}.
\end{proof}
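As a quick numeric check of the constants in the final step of the proof above (a sketch: we plug in the worst values $\delta = 1$ and $\epsilon = 1/16$ allowed by the constraints, and the $15/16$ bound on the last factor):

```python
from fractions import Fraction as F

delta, eps = F(1), F(1, 16)  # worst-case values permitted by the constraints
# Coefficient of T*sqrt(T) in the average-loss bound:
coef = F(1, 2) * ((1 - eps) / 4 - (1 + delta) / 16)
assert coef == F(7, 128)
# For T >= 16, the factor (1 - sqrt(T)/(4T)) is at least 15/16:
assert coef * F(15, 16) >= F(1, 20)  # i.e., average loss >= T*sqrt(T)/20
```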
\section{The Model}
\label{sec::model}
We consider a setting in which, at each time step, $n$ agents enter a matching pool. Agents exit the pool either when they are matched or once they have been in the pool for $T$ time steps. There are two types of agents, called men and women. Each match pairs a man with a woman. At each time step the agents are paired uniformly at random, and each pair comprises a proposed match. Each agent in a pair can accept or reject the proposed match as they prefer; a match occurs only if both agents accept it.
In a discrete setting, a random pairing seems more natural than having pairs arrive one by one, for the process
of pairing will proceed in parallel, and pairs are necessarily mutually exclusive.
While in practice the pairings under consideration at any one time will not cover the
whole of the smaller side of the population, considering a maximal matching seems a reasonable simplification.
We assume agents evaluate their potential partners using
cardinal values, and furthermore these are common values: every agent of the opposite type (gender) has the same value $v_i$ for agent $i$. In the terminology of Burdett and Coles, this is agent $i$'s charm.
We associate two parameters $v_i$ and $t_i$ with agent $i$. $v_i$ is the agent's charm and $t_i$ is the total time remaining before agent $i$ is forced to exit the pool. Agent $i$ derives utility $v_j\cdot \min(t_i,t_j)$ when matched with agent $j$.
We assume that the values lie in the range $[T,2T)$, and that an agent's value, chosen when it enters the pool, is one of $\{T, T+1,\ldots, 2T-1\}$, picked uniformly at random.
We note that the relative utilities of an agent are scale free;
in other words, the range assumption is equivalent to assuming the values lie in the range $[1,2]$.
We could have used a separate discretization for the values, but we preferred to avoid an additional parameter. Furthermore, it would not affect the results qualitatively.
Entering agents are either male or female with equal probability.
Throughout this work it will be useful to view the market as a $T\times T$ size box, with agents
located at grid points. The box is indexed by value
on the horizontal axis and
by time on the vertical axis.
Consider the set of $T$ points on the top edge: $\{(T,0),(T+1,0),\ldots,(2T-1,0)\}$. Agents enter the market at one of these points, picked uniformly at random. At each time step, an agent either matches and leaves the box or moves down vertically by $1$ unit. After $T$ steps, if still unmatched, the agent exits the box (at the bottom).
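To make the dynamics concrete, here is a minimal sketch of one round of the pool (the helper names `enter`, `step`, and `accept` are ours, not the paper's; `accept` stands in for whatever strategy the agents play):

```python
import random

T, n = 16, 8  # toy parameters

def enter(pool):
    """n agents arrive: uniform charm in {T, ..., 2T-1}, T steps to live."""
    for _ in range(n):
        pool.append({
            "value": random.randrange(T, 2 * T),
            "left": T,
            "man": random.random() < 0.5,
        })

def step(pool, accept):
    """One time step: random maximal pairing, mutual accept/reject, aging."""
    men = [a for a in pool if a["man"]]
    women = [a for a in pool if not a["man"]]
    random.shuffle(men)
    random.shuffle(women)
    matched = set()
    for m, w in zip(men, women):            # a uniformly random pairing
        if accept(m, w) and accept(w, m):   # a match needs both sides to agree
            matched.update((id(m), id(w)))
    survivors = []
    for a in pool:
        if id(a) in matched:
            continue                        # matched agents leave the box
        a["left"] -= 1
        if a["left"] > 0:                   # otherwise the agent exits at the bottom
            survivors.append(a)
    return survivors
```

For example, `pool = []; enter(pool); pool = step(pool, lambda me, other: True)` runs one round in which every proposal is accepted.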
\paragraph{A Reasonable Notion of Loss}
In a single gender version of this setting, the total utility derived by the $n$ agents that enter at any one time step is at most $\sum_i v_i\cdot T$; in the two-gender case, by applying a Chernoff bound, one can obtain a similar bound with high probability.
This bound can easily be achieved if all agents simply accept whatever match is proposed to them in the very first step in which they enter the matching pool. However such behavior seems implausible for high value agents, as their expected utility would be much smaller than what they might reasonably hope to achieve. Consequently, we set $v_i \cdot T$ as a reasonable target for $i$'s achieved utility. Based on this, we define the \emph{total loss} suffered by the agents to be:
$$\sum_{\substack{i:\ \text{agent $i$ obtains utility}\\ \text{less than } v_i\cdot T}} \big(v_i \cdot T-\text{utility obtained by agent } i\big).$$
This measure captures the intuition that agents who obtain less than their worth due either to a lower value partner, or to accepting a match only later on in the process, are suffering losses. We want to capture how much utility is lost compared to the benchmark in which each agent gets an equal value partner for the whole length $T$ time period. It also addresses what is implausible about the naive solution, in which all agents immediately accept whatever match is proposed to them, and which maximizes the usual notion of social welfare.
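In code, the loss measure is a direct transcription of this definition (the function name and the `(value, utility)` outcome pairs are our own framing):

```python
def total_loss(outcomes, T):
    """Sum of v*T - u over agents whose obtained utility u fell short of v*T.

    outcomes: list of (value, utility_obtained) pairs, one per agent.
    Agents who meet or exceed the v*T benchmark contribute nothing.
    """
    return sum(v * T - u for v, u in outcomes if u < v * T)
```

For instance, with $T=10$, an agent of value $10$ who obtains utility $50$ contributes $100-50=50$ to the total loss, while one who obtains $200$ contributes nothing.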
It is not clear how to determine an optimal strategy, let alone whether it can be computed feasibly.
For a truly optimal strategy would incorporate the effects of past variance, a level of knowledge that seems implausible in practice;
and even an ex-ante optimal strategy seems out of reach.
Instead, we will present a strategy, which we call the \emph{reasonable strategy}, which seeks to ensure that if it is followed by all the players, then the total loss will be at most a constant factor larger than what could be achieved by the optimal strategy. Actually, we introduce two strategies, and the second one, called the \emph{modified reasonable strategy}, is the one we analyze.
\section{Numerical Simulations}
\label{sec::simulations}
We have demonstrated a strategy which is asymptotically close to optimal with regard to minimizing the average loss experienced by agents. Complementing this, in this section we simulate the evolution of the system for moderately large values of $n$ and $T$. In order to gain a sense of the overall stability of the system, we track the total population over time.
We now discuss some observations based on our simulations.\footnote{For every pair of $n$ and $T$ that we considered in the discrete setting, we ran the simulation $10$ times; letting each run for $2000$ iterations. The error ranges mentioned below are obtained from the range of values we obtained over these $10$ runs. The values for each run can be found in Appendix \ref{appn::data}.}
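For reference, the population-tracking experiment can be sketched in a few self-contained lines (toy parameters, and a naive accept-every-proposal rule standing in for the modified reasonable strategy; all names are ours):

```python
import random

random.seed(1)
n, T, rounds = 50, 16, 200
pool, sizes = [], []
for _ in range(rounds):
    # n agents enter, each male or female with equal probability
    pool += [{"left": T, "man": random.random() < 0.5} for _ in range(n)]
    men = [a for a in pool if a["man"]]
    women = [a for a in pool if not a["man"]]
    random.shuffle(men)
    random.shuffle(women)
    # everyone accepts, so every proposed pair matches and leaves
    matched = {id(a) for pair in zip(men, women) for a in pair}
    pool = [a for a in pool if id(a) not in matched]
    for a in pool:
        a["left"] -= 1
    pool = [a for a in pool if a["left"] > 0]  # age out after T steps
    sizes.append(len(pool))
avg = sum(sizes[T:]) / len(sizes[T:])  # long-run average population
```

Swapping in an actual strategy for the accept-everyone rule, and recording matches' values and durations, yields the loss figures discussed below.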
\begin{figure}[thb]
\centering
\begin{minipage}[b]{0.45\textwidth}
\centering
\includegraphics[width=1\textwidth]{noStripPlot.png}
\caption{\label{fig:one}The evolution of total population over time in the discrete (blue) and continuum (orange) settings for $n=500$, $T=100$, using the reasonable strategy.}
\end{minipage}
\hspace{0.08\textwidth}
\begin{minipage}[b]{0.45\textwidth}
\centering
\includegraphics[width=1\textwidth]{StripPlot.png}
\caption{\label{fig:two}The evolution of total population over time in the discrete (blue) and continuum (orange) settings for $n=500$, $T=100$, using the modified reasonable strategy.}
\end{minipage}
\end{figure}
For the continuum model we obtain reasonably rapid convergence---in about $T$ time---whereas for the discrete model in a similar time the system reaches its long-term average value, but with somewhat chaotic oscillations about this value, as shown in Figures~\ref{fig:one} and~\ref{fig:two}. In addition, the long-term average population for the discrete case is a bit larger than the continuum equilibrium value. (This is not surprising, for both variance and male/female imbalances will reduce the match rate.)
For moderate values of $n$ and $T$, the average loss under the modified reasonable strategy is better than our asymptotic bound. For example, consider the case $n=500$, $T=100$: we prove an upper bound on the average loss of $11T\sqrt{T}$, while the simulation achieves an average loss of just $2.21T\sqrt{T}$ ($\pm 1.7\%$). The total populations are significantly closer (an upper bound of close to $1.5nN$ in our theorem vs.\ close to $n\sqrt{T}$ in the simulation).
For the case where agents use the modified reasonable strategy, we also examine the average population size and average loss for various $T$ (with $n$ fixed at $500$). Figure \ref{fig:three} shows a plot of $\text{Average Population}/n\sqrt{T}$ and $\text{Average Loss}/\sqrt{T}$ for five different values of $T$. The result is quite consistent with the $n\sqrt{T}$ scaling of the average total size and the $\sqrt{T}$ scaling of the average loss that we prove hold asymptotically,
even though these are only moderately large values of $n$ and $T$, and even though we are not in the $n$ much greater than $T$ regime of our analysis.
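All of these simulations rest on the same matching primitive from our model: at each step the two sides are paired uniformly at random, and a proposed pair becomes a match only when both agents accept. A minimal sketch of one such step (the acceptance predicate here is a placeholder for the strip-membership test; the function and argument names are ours, not fixed notation):

```python
import random

def one_matching_step(men, women, accepts, rng=random):
    """One step of the two-sided market: propose a uniformly random pairing
    (surplus agents on the longer side receive no proposal), then keep only
    the pairs in which both agents accept.  Returns the list of matched
    (man, woman) pairs; all other agents remain in the pool."""
    if len(men) <= len(women):
        proposals = list(zip(men, rng.sample(women, len(men))))
    else:
        proposals = [(m, w) for w, m in zip(women, rng.sample(men, len(women)))]
    return [(m, w) for m, w in proposals if accepts(m, w) and accepts(w, m)]
```

In the simulations reported above, `accepts` tests whether the two agents lie in the same strip.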
\begin{figure}[htb]
\centering
\begin{minipage}[tb]{0.45\textwidth}
\centering
\includegraphics[width=1\textwidth]{Scaling.png}
\caption{\label{fig:three} $\text{Average Population}/n\sqrt{T}$ (blue) and $\text{Average Loss}/\sqrt{T}$ (red) for five values of $T$, using the modified reasonable strategy.}
\end{minipage}
\hspace{0.08\textwidth}
\begin{minipage}[tb]{0.45\textwidth}
\centering
\includegraphics[width=1\textwidth]{LocalVar.png}
\caption{\label{fig:four}The evolution of the population on a ``typical diagonal'' over time in the discrete setting (for $n=500$, $T=100$), using the modified reasonable strategy.}
\end{minipage}
\end{figure}
Finally, we examine the population on a ``typical'' diagonal\footnote{Here we consider the diagonal region with width $1$ that starts at value $3T/2$ at the top ($t=0$) boundary.} in the case that all agents are following the modified reasonable strategy (Figure~\ref{fig:four}). Notice that the range of oscillations for this value is large compared to the range of oscillations in the total population (Figure~\ref{fig:two}). Furthermore, these
oscillations proceed at a much faster rate than
the changes in the overall population.
The results also indicate that while the system
remains within reasonable bounds, there is
substantial ongoing variation, particularly at a local level.
\section{Open Problems}
\label{sec:open-problems}
Two natural extensions of our model come to mind.
\begin{itemize}
\item The values in our model are common to all agents, but in reality agents will have individual preferences. This could be captured with a model in which each agent $a$ has a value for agent $b$ given by $v_a + w_{a,b}$, where $v_a$ is a common public value while $w_{a,b}$ is an idiosyncratic private value of $a$ for $b$. The combining of public and private values has been studied in the literature on matchings in other settings~\cite{ashlagi2020clearing, lee2016incentive}.
\item In our model, agents receive match proposals that are generated by choosing agents from the other side of the market uniformly at random. It would be interesting to consider a more sophisticated method of recommending matches, with recommended matches being localised in value and time around the agent.
\end{itemize}
Another intriguing direction concerns the stability of this system. We have shown that if the agents play the modified reasonable strategy then with high probability the strip sizes, the total size, and the imbalance between men and women in any strip, all remain within some range. But we conjecture that if any of these parameters were to have a large deviation which took it outside its typical range, then with high probability it would soon return
to being within this range.
\subsubsection{Population Upper Bound}
\label{sec::total_size_upper_bound}
\begin{theorem}
\label{thm::total_size_upper_bound}
Suppose $H(t)$ and the constraints in \eqref{eqn::constraints}
hold. If all agents follow the modified reasonable strategy, then at the start of time step $t+1$,
with probability at least $1 - 1/n^{2c+1}$,
the total population of the matching pool will be at most $(3/2)nN+n$, where $N$ is the total number of strips.
\end{theorem}
\begin{proof} (Idea.)~
We seek to lower bound the number of matches in one time step.
If it exceeds the number of incoming agents, then the total population reduces.
The expected number of matches is minimized when the strip populations are equal, and on applying Lemma~\ref{lem::match rate}, this yields the following
lower bound on the number of matched women (or men):
$N\cdot (P/(2N))^2/(P/2) = P/(2N)$, where $P$ is the upper bound on the population.
This yields the condition $P/N \le n$, or $P\le nN$.
The argument is completed by taking account of the deviations needed to ensure a high-probability bound. The full proof can be found in Appendix \ref{appn::total_upper}.
\end{proof}
\section{Preliminaries}
\label{sec::prelim}
We review the notion of negative cylinder dependence and make a simple observation regarding the matching procedure.
\begin{lemma}
\label{lem::match rate}
Suppose there are $m$ men and $w$ women in total.
Further suppose that for a given man $x$, there are $w'$ women for which a proposed
match would be accepted by both sides.
Then a random match will provide man $x$ such a match with probability
$w'/\max\{m,w\}$.
\end{lemma}
\begin{proof}
If there are at least as many women as men, every man will be offered a match,
and the probability that it is accepted by both sides is $w'/w$.
While if there are more men, a man will be offered a match with probability
$w/m$, and thus the probability that he is offered an acceptable match
is $w/m \cdot w'/w = w'/m$.
\end{proof}
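Lemma~\ref{lem::match rate} is easy to spot-check by Monte Carlo. The sketch below (parameter values arbitrary) estimates the probability that a fixed man receives a mutually acceptable partner under a uniformly random matching and compares it with $w'/\max\{m,w\}$:

```python
import random

def match_prob_estimate(m, w, w_acc, trials=20000, seed=0):
    """Monte Carlo estimate of the probability that a fixed man (man 0)
    receives a mutually acceptable match under a uniformly random pairing
    of m men and w women, when w_acc of the women are acceptable
    (labelled 0..w_acc-1 without loss of generality)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        if w >= m:
            # every man is proposed to; his partner is uniform over the women
            partner = rng.randrange(w)
        else:
            # only a uniform w-subset of the men receives proposals
            if 0 not in rng.sample(range(m), w):
                continue
            partner = rng.randrange(w)
        if partner < w_acc:
            hits += 1
    return hits / trials
```

For instance, with $m=5$, $w=8$, $w'=3$ the estimate should be close to $3/8$, and with $m=8$, $w=5$, $w'=2$ close to $2/8$.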
\paragraph{Negative Dependence}
Consider a set of $0$-$1$ valued random variables $\{X_i\}_{i=1}^{n}$. The set $\{X_i\}$ is $\lambda$-\emph{correlated} if
\begin{align*}
E\Big[ \prod_{i=1}^{n} X_i\Big]\leq\lambda\cdot\prod_{i=1}^n E\Big[X_i\Big],
\end{align*}
where $\lambda\geq 1$.
The set $\{X_i\}$ is \emph{negative cylinder dependent} if $\{X_i\}$ and $\{1-X_i\}$ are both $1$-correlated. In our arguments we will apply Chernoff-like bounds to negative cylinder dependent variables. We will use the following lemmas; their proofs are deferred to Appendix \ref{appn::prelim}.
\begin{lemma}
\label{lem::negative_dependence_two_sex}
Let $S_m$ and $S_w$ be two sets of $N_1$ and $N_2$ agents respectively. Suppose that $N_1\leq N_2$. Let $S_a=\{a_1,a_2,\ldots, a_n\}\subseteq S_m$ and $S_b =\{b_1,b_2,\ldots, b_r\} \subseteq S_w$. Consider a matching between $S_m$ and $S_w$ chosen uniformly at random.
Let $X_i$ be an indicator variable which equals $1$ if agent $a_i$ is paired with an agent in $S_b$, and $0$ otherwise. Then the set $\{X_i\}$ is negative cylinder dependent and, with $\mu = E[\sum X_i]$, for any $\delta>0$,
\begin{align*}
\operatorname{Pr}\left[\sum X_i \geq (1 + \delta) \mu \right] \leq e^{- \frac{\delta^2 \mu}{3}} \text{~~and~~}
\operatorname{Pr}\left[\sum X_i \leq (1 - \delta) \mu \right] \leq e^{- \frac{\delta^2 \mu}{2}}.
\end{align*}
\end{lemma}
\begin{lemma} \label{lem::chernoff_bound}
If $\{X_i\}_{i = 1}^n$ are $1$-correlated random variables taking value $\{0, 1\}$ and $\overline{\mu}$ is an upper bound on $\mu = E[\sum X_i]$, then, for any $\delta>0$,
\begin{align*}
\operatorname{Pr}\left[\sum X_i \geq (1 + \delta) \overline{\mu} \right] \leq e^{- \frac{\delta^2 \overline{\mu}}{3}} \text{~~and~~}
\operatorname{Pr}\left[\sum X_i \leq \mu-\delta \overline{\mu} \right] \leq e^{- \frac{\delta^2 \overline{\mu}}{2}}.
\end{align*}
\end{lemma}
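Independent indicators are in particular $1$-correlated and negative cylinder dependent, so the bounds above can be sanity-checked numerically against independent Bernoulli trials. The following sketch (parameters arbitrary) compares the empirical upper tail with the bound $e^{-\delta^2\mu/3}$:

```python
import math
import random

def upper_tail_check(n=200, p=0.3, delta=0.5, trials=5000, seed=0):
    """Compare the empirical tail Pr[sum X_i >= (1+delta)*mu], for
    independent Bernoulli(p) indicators (a special case of negative
    cylinder dependence), with the bound exp(-delta^2 * mu / 3)."""
    rng = random.Random(seed)
    mu = n * p
    threshold = (1 + delta) * mu
    exceed = 0
    for _ in range(trials):
        s = sum(rng.random() < p for _ in range(n))
        if s >= threshold:
            exceed += 1
    return exceed / trials, math.exp(-delta ** 2 * mu / 3)
```

The empirical frequency should fall below the analytic bound, as the lemma predicts.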
\subsection{Related Work}
\label{sec::related}
Rogerson~\cite{RogersonSW05} surveyed issues of search cost and bargaining in job markets.
More recently, Chade, Eeckout and Smith~\cite{ChadeES17} gave a broad survey of matching in economic models,
covering search with and without costs, and settings with and without transferable utility.
We focus on settings with search costs and no transferable utility.
Even in this domain there are many works.
We characterize these works w.r.t.\ multiple dimensions.
The first is the treatment of time, both as regards arrivals and departures.
Most papers assume agents remain in the market till they are matched.
A few allow matches to be broken via a Poisson process (e.g., jobs end, partners divorce) and then the agents
return to the market; see Shimer and Smith~\cite{shimer2000assortative} and Smith~\cite{smith2006marriage}.
Others have agents ending their participation via various random processes:
Burdett and Wright~\cite{BurdettW98} use a Poisson process, Adachi~\cite{Adachi03} uses an exponential random variable, and Lauermann and Noldeke~\cite{lauermann2014stable} use an exogenous rate.
Arrivals are similarly varied: Poisson processes in
Burdett and Coles~\cite{BurdettC97}, Smith~\cite{smith2006marriage}, and Shimer and Smith~\cite{shimer2000assortative}; cloning, in which agents who leave due to a match are replaced by clones, thereby keeping the available matches unchanged, in Adachi~\cite{Adachi03} and Burdett and Wright~\cite{BurdettW98}; fixed arrival rates in Eeckhout~\cite{eeckhout1999bilateral} and Lauermann and Noldeke~\cite{lauermann2014stable}; and, finally, no new arrivals in Damiano, Hao and Suen~\cite{damiano2005unravelling} and McNamara and Collins~\cite{mcnamara1990job}.
The second dimension is the choice of utility model.
These are all functions of the partner's charm, though there is considerable variation.
The most common is that the utility an agent gains is a non-decreasing function, either linear~\cite{BurdettC97} or more general~\cite{smith2006marriage,eeckhout1999bilateral}; some papers allow for time discounting~\cite{Adachi03,bloch2000two};
the utility can be the product of the partners' charms~\cite{damiano2005unravelling};
or it is given by independent random variables for each pair of agents~\cite{BurdettW98,mcnamara1990job};
another option is that the agents obtain their utility by dividing
a reward which is a function of their individual charms~\cite{shimer2000assortative}.
The final dimension is the choice of equilibrium model.
Most of the papers consider a steady state equilibrium;
McNamara and Collins~\cite{mcnamara1990job} consider Nash Equilibria,
and Damiano, Hao and Suen~\cite{damiano2005unravelling} analyze
a multi-round dynamic equilibrium.
The tension between taking a choice now and waiting for potentially better options arises in multiple other domains,
including secretary problems~\cite{Ferguson89}, online matching~\cite{KarpVV90}, matching market thickening~\cite{AkbarpourLG20,BaccaraLY20},
and regret minimization~\cite{blumlearning}.
In spirit, the secretary problem seems the most analogous as it involves a single decision, albeit by just a single agent. We discuss it briefly in the next paragraph. In contrast, online matching has a centralized decision maker that seeks to optimize the outcome of many choices. Regret minimization occurs in a distributed setting, however here each agent makes multiple decisions over time, with the goal of achieving a cumulatively good outcome; again, this seems quite distinct from our setting.
Market thickening is used in contexts where a global matching is being computed, which seem unlike the random matches on offer in our setting.
The standard secretary problem is expressed in terms of ranks.
A cardinal version was considered by Bearden~\cite{Bearden06};
here the goal is to maximize the expected value of the chosen
secretary, with values uniform on $[0,1]$.
For each applicant the decision maker learns whether they are the best so far.
Bearden shows the optimal strategy is to reject the first $\sqrt{n}-1$ candidates,
and then choose the first candidate
who meets the ``best so far'' criterion.
Clearly, the expected value of the selected secretary is
$1-\Theta(1/\sqrt{n})$,
which is analogous to the bounds we obtain, although the settings appear quite distinct.
Bearden argued that the payoff rule in this version of the
problem is more natural than in the classic version.
The problem of maximizing the duration of a relatively best choice
has also been considered~\cite{Ferguson89}.
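Bearden's rule and its $1-\Theta(1/\sqrt{n})$ payoff are easy to reproduce numerically. The following sketch (our own illustration, not Bearden's code) rejects the first $\sqrt{n}-1$ applicants and then accepts the first ``best so far'':

```python
import math
import random

def bearden_value(n, rng):
    """One run of the cardinal secretary problem: i.i.d. uniform[0,1]
    values, reject the first sqrt(n)-1 applicants, then accept the first
    'best so far'; if none appears, the last applicant is kept."""
    cutoff = math.isqrt(n) - 1
    best_so_far = 0.0
    v = 0.0
    for i in range(n):
        v = rng.random()
        if i >= cutoff and v > best_so_far:
            return v
        best_so_far = max(best_so_far, v)
    return v  # no record after the cutoff: stuck with the last applicant

rng = random.Random(0)
avg = sum(bearden_value(100, rng) for _ in range(2000)) / 2000
```

For $n=100$ the average selected value comes out well above $0.8$, consistent with $1-\Theta(1/\sqrt{n})$.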
\section{Results}
\label{sec::results}
We obtain a lower bound on the total loss suffered by agents; no matter their behavior, they will, with high probability, suffer an average loss of $\Omega(T\sqrt{T})$.
\begin{theorem}
\label{thm::lb-loss-two_sex}
Suppose the matching market runs for $\tau$ time steps. If $16\leq T+1\leq n$, $c\geq 1$, $T\leq \tau \leq n^c$, and $n\geq 96T(2c+2)\ln n$, then, over $\tau$ time steps, whatever strategies the agents use, with probability at least $1-\frac{1}{4n^c}$,
the average loss per agent is at least $\frac{T\sqrt{T}}{20}$.
\end{theorem}
On the other hand, we construct a strategy profile, which if followed by all the agents, leads, with high probability, to a total loss of at most $\mathrm{O}(T\sqrt{T})$.
\begin{theorem}
\label{thm::ub-loss-two_sex}
Suppose $2T\leq\tau\leq n^c$, $c\geq 1$, $676\leq T$, and $n \geq (3654 + 2436e^{12} + 546(e^{12} + 1) c)^2(3c+4) T^3 (\log_2 n)^2 \ln n.$ Then, over $\tau$ time steps, if all agents follow the modified reasonable strategy, with probability at least $1-\frac{1}{n^c}$,
the average loss per agent is at most $11T\sqrt{T}$.
\end{theorem}
Our results hold for large $n$ and $T$. Furthermore, Theorem \ref{thm::ub-loss-two_sex} applies only when $n$ is much larger than $T$. However, our numerical simulations suggest that similar results hold even for quite moderate values of $n$ and $T$ and also do not require $n$ to be much bigger than $T$.
To simplify the presentation, we assume that $T= 4^i$ for some integer $i>0$, though the bounds extend to all values of $T$, possibly with somewhat larger constants.
\subsubsection{Total Size Lower Bound}
\begin{theorem}
\label{thm::lb-size}
Suppose $H(t)$ and the constraints in~\eqref{eqn::constraints} hold.
If all agents follow the modified reasonable strategy, then with probability at least $1- 1/n^{2c+1}$,
for every time $t\in[\sqrt{T}, n^c]$, the population in the matching pool is at least $\frac 13 n\sqrt{T}$.
\end{theorem}
\begin{proof} (Idea.)~
We consider only the new agents that entered the matching pool over the last $\sqrt{T}$ time steps. We then bound how many of these agents could have been matched in this time period. Suppose that at any particular time step $t$, the match rate experienced by the men in strip $i$ is $p_i$. The critical observation is that the sum of the $p_i$ is at most $1$. The same is true for the women. This allows us to prove that even if we could set the match rates in an adversarial manner, only about $n/\sqrt{T}$ of the agents that entered at any one time could be matched in any single time step (in the discussion here, we neglect the effects of variance). This allows us to show that, of the agents we consider, only about $\sum_{i=1}^{\sqrt{T}}in/\sqrt{T} \approx n\sqrt{T}/2$ could have been matched over the last $\sqrt{T}$ time steps. This provides a lower bound on the total size of roughly $n\sqrt{T}/2$. Accounting for the variance that can occur when achieving a high probability bound causes the bound on the number of matches to degrade to $n\sqrt{T}/3$.
The full proof can be found in Appendix \ref{appn::lower_bound_on size}.
\end{proof}
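The counting step in this argument can be tallied explicitly. The following short computation (illustrative only, ignoring variance) reproduces the $\approx n\sqrt{T}/2$ total:

```python
import math

def matched_bound(n, T):
    """Variance-free upper bound on how many of the agents who entered in
    the last sqrt(T) steps can already have left: the cohort that entered
    i steps ago can lose at most n/sqrt(T) agents per step, for i steps."""
    s = math.isqrt(T)
    return sum(i * n / s for i in range(1, s + 1))  # = n*(s+1)/2
```

For $n=500$, $T=100$ this gives $500\cdot 11/2 = 2750$, close to $n\sqrt{T}/2 = 2500$.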
\subsubsection{Upper Bound on the Size of a Strip.}
We begin with a technical lemma.
\begin{lemma} \label{lem::upper_strip_tech}
Let $s$ be a strip, and let $S$ be an arbitrary subset of the men and women in $s$. Let $m$ be the number of men and $w$ be the number of women in $S$. In addition, let $X$ be the imbalance for the whole of $s$. Then the expected number of people in $S$ that are matched in a single step is at least
\begin{align*}
\frac{\frac{(m + w)^2}{2} - \frac{X^2}{2}}{\max \{\text{\# of men, \# of women}\} \text{ in the whole population } }
\end{align*}
\end{lemma}
\begin{proof}
We need only consider the case that $|X| \leq m + w$.\footnote{Otherwise, the bound is negative.} Let $m_{t}$ denote the total number of men in this strip and $w_t$ the total number of women. In addition, let $\Delta \triangleq m - w$, $P \triangleq m + w$, $Q \triangleq m_t + w_t$ and $X = m_t - w_t$. Then, $m = \frac{P + \Delta}{2}$, $w = \frac{P - \Delta}{2}$, $m_t = \frac{Q + X}{2}$ and $w_t = \frac{Q - X}{2}$. The expected number of people matched in this subset of men and women is
\begin{align*}
\frac{m w_t + m_t w}{\max \{\text{\# of men, \# of women}\} \text{ in the whole population } }.
\end{align*}
We now focus on the numerator: $m w_t + m_t w = (PQ - X \Delta) / 2 = (P^2 + P(Q - P) - X^2 - X (\Delta - X)) / 2$. In order to show this is larger than $\frac{P^2}{2} - \frac{X^2}{2}$, it suffices to show $P(Q - P) \geq X (\Delta - X)$.
As $m_t \geq m$ and $w_t \geq w$, we have $Q - P \geq \Delta - X$ and $Q - P \geq X - \Delta$.
Recall that it suffices to consider the case $|X| \leq m + w = P$. If $X \geq 0$, then $P(Q - P) \geq X(Q - P) \geq X(\Delta - X)$; while if $X < 0$, then $P(Q - P) \geq -X(Q - P) \geq -X(X - \Delta) = X(\Delta - X)$. In either case $P(Q - P) \geq X (\Delta - X)$, which proves the result.
\end{proof}
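The inequality at the heart of this proof, $2(mw_t + m_tw) \ge P^2 - X^2$ under the stated conditions, can also be verified exhaustively over small integer instances:

```python
def check_numerator_bound(limit=8):
    """Exhaustively check 2*(m*w_t + m_t*w) >= P^2 - X^2 for all
    0 <= m <= m_t <= limit, 0 <= w <= w_t <= limit with |X| <= P,
    where P = m + w and X = m_t - w_t."""
    for m_t in range(limit + 1):
        for w_t in range(limit + 1):
            for m in range(m_t + 1):
                for w in range(w_t + 1):
                    P, X = m + w, m_t - w_t
                    if abs(X) <= P:
                        assert 2 * (m * w_t + m_t * w) >= P * P - X * X
    return True
```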
Next, we give an upper bound on the size of a Type $1$ strip.
\begin{theorem}
\label{type1_upper}
Suppose $H(t)$ and the constraints in \eqref{eqn::constraints} hold. If all agents follow the modified reasonable strategy, then at time $t+1$, right after the new agents have entered,
with probability $>1-\frac{1}{n^{2c+1}}$, each Type $1$ strip will continue to have population at most $dn$, where $d=2.6$.
\end{theorem}
\begin{proof} (Sketch).
Consider a strip $s'$ and its successor strip $s$ (the strip immediately to its left). We will follow the collection of agents occupying $\sqrt T$ adjacent diagonals over $\sqrt{T}$ steps, beginning with the at most $dn$ agents in strip $s'$ and ending in strip $s$, with the remainder of these agents plus any new agents who have entered these diagonals. The heart of our proof is to show that in a single step
we maintain the $dn$ bound on the number of agents in this collection of advancing diagonals. The basic idea is straightforward:
we compute a lower bound on the expected number of matches using Lemma~\ref{lem::upper_strip_tech}, taking into account the maximum possible imbalance, add the incoming agents, and correct for variance.
One more important detail is that the expected number of matches
is minimized if, in the collection of agents we are tracking,
half are in strip $s$ and half are in $s'$;
so this is the value we use in these calculations.
The actual proof can be found in Appendix \ref{appn::type1_upper}.
\end{proof}
\section{Upper Bound on the Loss when Using the Modified Reasonable Strategy}
\label{sec::upper_bound}
The lower bound suggests that plausible agent strategies will yield a constant probability of matching
every $\sqrt{T}$ steps. This would imply that the number of agents present decreases geometrically
with agent age; more precisely, there would be a constant factor decrease for every $\sqrt{T}$ increment in
age. Then, in order to maintain match probabilities, all agents would have to be willing to match
with young agents who will accept them. In fact, the decreases we just described are far from uniform,
which makes the analysis quite non-trivial. Nonetheless, the above intuition informed the design of the following agent strategies.
The first strategy, which we call ``a reasonably good strategy,'' seems quite natural, but for ease of analysis we consider a modified strategy which we prove to be asymptotically within a constant factor of optimal.
We define the \emph{worth} of an agent to be $v_i\cdot (T-t_i)$;
this is the maximum utility its partner
could derive from a match with this agent.
Note that the worth of an agent decreases as it ages.
\vspace*{-0.05in}
\paragraph{A Reasonably Good Strategy}
In this strategy an agent accepts a proposed match if it gives the agent utility at least $v_i\cdot (T-t_i)\cdot(1-\frac{1}{\sqrt{T}}-\frac{t_i}{T})$.
The terms $1/\sqrt{T}$ and $t_i/T$ are present to approximately balance
the expected loss of utility from not matching in a single step with the marginal gain in utility agent $i$ could receive from being more demanding in terms of the minimum worth it will accept in a partner.
\vspace*{-0.05in}
\paragraph{The Modified Reasonable Strategy}
We partition the $T\times T$ size space into the regions defined below,
as shown in Figure~\ref{fig:strips}.
In the modified strategy, an agent accepts a proposed match exactly if the proposed partner lies
in the same region.
This partition uses regions of two kinds, which we call \emph{strips}.
\begin{itemize}
\item
\emph{Type $1$ strips}: these are strips that have new people entering the strip at the top. The $i$-th Type $1$ strip is defined as the region between the parallel lines $v=2(t-1)+T+(i-1)\sqrt{T}$ and $v=2(t-1)+T+i\sqrt{T}$; they have $\sqrt{T}$ width and $\sqrt{T}/2$ height. Points on the first (left) line are included in the strip, but points on the second (right) line are excluded. There are $\sqrt{T}$ Type $1$ strips.
\item
\emph{Type $2$ strips}: these strips do not touch the top boundary of the box. The strips are again defined by parallel lines. They have successive heights $\sqrt{T}$,
$\sqrt{T}$, $2\sqrt{T}$, and then repeatedly doubling up to $T/2$.
Here the points on the first (upper) line are excluded from the strip and the points on the second (lower) line are included in the strip. There are $\log_2 \sqrt{T} +1$ Type $2$ strips.
\end{itemize}
\begin{figure}[bht]
\centering
\includegraphics[scale=0.45]{Strips_finalversion.png}
\caption{The two types of strips used to partition the matching pool.}
\label{fig:strips}
\end{figure}
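To make the Type $1$ geometry concrete, the following sketch (purely illustrative; it assumes $T=4^i$ so that $\sqrt{T}$ is an integer, and treats $v$, $t$ as integers) indexes the Type $1$ strip containing a point $(v,t)$ directly from the bounding-line equations above:

```python
import math

def type1_strip_index(v, t, T):
    """Return i if (v, t) lies in the i-th Type 1 strip, i.e.
    2(t-1) + T + (i-1)*sqrt(T) <= v < 2(t-1) + T + i*sqrt(T),
    and None if the point lies outside all sqrt(T) Type 1 strips."""
    s = math.isqrt(T)                 # sqrt(T); exact since T = 4^i
    offset = v - (2 * (t - 1) + T)    # distance from the leftmost line
    if 0 <= offset < s * s:           # the sqrt(T) strips jointly span width T
        return offset // s + 1        # left line included, right line excluded
    return None
```

Note how a point on a strip's right boundary line is assigned to the next strip, matching the inclusion convention above.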
We note that with the previously stated reasonable strategy, agents would be willing to match with some agents outside their strip and would reject some agents in the same strip. However, using the modified strategy simplifies the analysis, for if all agents use the modified strategy, agents will definitely get accepted when they accept a match. We will prove that the modified strategy is not much worse than the optimal strategy in terms of the average loss of value suffered by an agent.
\vspace*{-0.05in}
\paragraph{Outline of the proof of the upper bound}
Our analysis assumes the following constraints on $n$ and $T$.
\begin{align}
\label{eqn::constraints}
\begin{array}{l}
c\geq 1,\\
T \ge 676,\\
n \geq (3654 + 2436e^{12} + 546(e^{12} + 1) c)^2(3c+4) T^3 (\log_2 n)^2 \ln n.
\end{array}
\end{align}
The result follows from a high-probability inductive bound on the overall population, the strip populations, and the male-female imbalances in each strip.
We start at time $t=0$. Time $t$ will refer to the moment after the new agents have entered in this step, but before the match occurs.
\begin{lemma}\label{lem::ind-bound}
Let $N$ denote the total number of strips.
Suppose that the constraints in \eqref{eqn::constraints} hold.
Then, with probability at least $1-1/n^c$, the following
inductive hypothesis $H(t)$ holds at the start of
every time step $t$, immediately following the entry of the new agents at time $t$, for $\sqrt{T}\le t \le n^c$.
\begin{enumerate}
\item\label{itm::tot-pop}
The total population is at most $\frac{3}{2}nN+n$.
\item The population of every Type $1$ strip is at most $2.6n$.
\item The population of every Type $2$ strip is at most $\frac{7.5n\sqrt{T}}{\text{maximum height of the strip}}$.
\item The population in the bottommost Type $2$ strip is no more than $60n/\sqrt{T}$.
\item\label{itm::pop-imb}
In every strip $s$, except possibly the bottommost Type 2 strip,
the imbalance, $\operatorname{Imb}(s,t) = \big|\text{the number of men in $s$}
-\text{the number of women in $s$}%
\big|
\leq n/(25\sqrt{T})$.
\end{enumerate}
\end{lemma}
\begin{proof} (Sketch.)~
We will show in Theorems~\ref{thm::total_size_upper_bound}
and \ref{type1_upper}--\ref{thm::imbalance_bound}
that each of the above five clauses
holds with high probability.
The last of these results also requires a high-probability lower bound, Theorem~\ref{thm::lb-size},
on the population size in the same time range.
In addition, in Theorem~\ref{thm::initialization}, we show that, with high probability, the inductive hypothesis is true initially. Summing the failure probabilities proves the lemma. This calculation can be found in Appendix \ref{appn::imbalance}.
\end{proof}
With this result in hand we can upper bound the average agent loss.
\subsection{The Theorems and Proof Sketches}
Let $\widetilde{\mathcal{E}}$ be the event that the inductive hypothesis $H(t)$ holds at the start of
time step $t$ immediately following the arrival of the new agents in this step, for $\sqrt{T}\le t \le n^c$.
\subsubsection{Bounding the loss}
We first bound an individual agent's loss based on its match time. We then obtain an overall bound on the loss. As argued below, Theorem~\ref{thm::ub-loss-two_sex} follows immediately.
\begin{lemma}
\label{lem::loss_in_strip}
In the modified reasonable strategy, if an agent with value $v$ matches at time $t$, its utility loss is at most
$4Tt +2t\sqrt{T}$.
\end{lemma}
This result follows by a simple calculation based on the strip geometry. The proof is in Appendix \ref{appn::loss_from_match}.
\begin{theorem}
\label{thm::upper_bound_on_loss}
Suppose the constraints in \eqref{eqn::constraints} hold.
Also, suppose that all agents follow the modified reasonable strategy.
In addition, suppose the system runs for $\tau \ge 2T$ time steps, where $\tau\leq n^c$.
Then the average loss per departing agent over these $\tau$ steps will be at most $11 T\sqrt{T}$.
\end{theorem}
\begin{proof}
Consider the first $\tau$ time steps of the matching process.
Let $n_i$ denote the number of agents who match and thereby leave the pool at age $i$ during these $\tau$ steps. By Lemma~\ref{lem::loss_in_strip}, and since $i\le T$, each such agent suffers a loss of at most $4Ti+2T\sqrt{T}$. Thus the total loss is bounded by:
\begin{align*}
\text{Total loss} \le \sum_{i=0}^{T-1} \left(4 Ti\cdot n_i + 2T\sqrt{T}\cdot n_i\right).
\end{align*}
Each agent who is matched at age $i$ is present in the matching
pool for $i+1$ steps. By clause~\ref{itm::tot-pop} of the inductive hypothesis in Lemma~\ref{lem::ind-bound},
at each time during this period, the population of the matching pool is at most $\frac{3}{2}nN+n\leq \frac{3}{2}n(\sqrt{T}+\log_2\sqrt{T}+1)+n\leq 2n\sqrt{T}$, where the last inequality follows from $\sqrt{T} \ge 26$ due to constraint \eqref{eqn::constraints}.
Thus,
\begin{align*}
\sum_{i=0}^{\tau-1} (i+1) n_i \le 2n\sqrt{T}\cdot \tau.
\end{align*}
Therefore,
\begin{align*}
\text{Total loss} \le 8nT\sqrt{T}\cdot \tau + \sum_{i=0}^{T-1} 2T(\sqrt{T}-2)\cdot n_i.
\end{align*}
Let $D\triangleq \sum_{i=0}^{\tau-1} n_i$, the number of agents that leave during the first $\tau $ steps.
We observe that $D$ is at most $n\tau$, the number of agents that entered during this period.
Also, as the population of the pool at any time is at most $ 2
n\sqrt{T}$, we see that $D\geq n\tau- 2
n\sqrt{T}$. By assumption, $\tau\ge 2T$ and $\sqrt{T}\ge 26$,
so
\begin{align*}
\frac {12}{13}n\tau \le D \le n\tau.
\end{align*}
This yields the following bound on the total loss:
\begin{align*}
\text{Total loss}\leq 8
n\tau T\sqrt{T}+ 2n\tau T\sqrt{T} \le 10
n\tau T\sqrt{T}.
\end{align*}
And therefore,
\begin{align*}
\text{Average loss per agent}= \frac{\text{Total loss}} {D}
\leq \frac{10
nT\tau\sqrt{T}} {\frac{12}{13}
n\tau}< 11 T\sqrt{T}.
\end{align*}
\end{proof}
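The two purely numeric steps in this proof, namely that $\frac{3}{2}(\sqrt{T}+\log_2\sqrt{T}+1)+1 \le 2\sqrt{T}$ once $\sqrt{T}\ge 26$, and that $10\cdot\frac{13}{12} < 11$, can be checked directly:

```python
import math

def check_constants(max_T=10**6):
    """Verify the numeric inequalities used in the average-loss bound,
    for all integer values of sqrt(T) >= 26 with T up to max_T."""
    assert 10 * 13 / 12 < 11                     # final averaging step
    for s in range(26, math.isqrt(max_T) + 1):   # s = sqrt(T)
        lhs = 1.5 * (s + math.log2(s) + 1) + 1
        assert lhs <= 2 * s
    return True
```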
\begin{proof}(Of Theorem~\ref{thm::ub-loss-two_sex})~
This follows immediately from Lemma \ref{lem::ind-bound} and Theorem \ref{thm::upper_bound_on_loss}.
\end{proof}
\section{Introduction}
\lettrine{R}{eference} Governors (RGs) \cite{garone2017reference} are add-on schemes to nominal closed-loop systems used
to enforce pointwise-in-time state and control constraints. An RG acts as a safety supervisor for the reference commands (set-points) given to the closed-loop system by a human
operator or by a higher-level planner in autonomous vehicles. The RG monitors the reference commands to which the controller responds and, if a command creates a danger of constraint violation, modifies it so as to preserve safety.
RGs are an attractive option for practitioners, as they make it possible to use a variety of control system design techniques that do not explicitly handle constraints, while relying on the RG for constraint enforcement.
Reference \cite{garone2017reference}
surveys the literature on aerospace and other proposed applications of RGs, the RG theory
and compares RG with alternative approaches to constraint handling such as Model Predictive Control \cite{borrelli2017predictive}; we do not replicate this survey here due to limited space.
A Command Governor (CG), first proposed in \cite{bemporad1997nonlinear} for linear discrete-time systems,
is a particular type of RG that modifies the reference command by finding a minimum norm projection of the original reference command
onto a safe set of commands for the given state, i.e., a cross section of the safe set of state-command pairs. This safe set is constructed as an invariant subset of the maximum output admissible set (MOAS) \cite{gilbert1991linear}, i.e., the set of all initial states and constant commands that yield response satisfying the imposed constraints. When this safe set is polyhedral (this is the case, e.g., if CG is designed based on a linear discrete-time model and state and control constraints are linear), the minimum norm projection is computed by solving a quadratic programming (QP) problem at each discrete-time instant. Even though this QP problem is low dimensional, it typically has a large number of constraints due to the typically large number of affine inequalities needed to define the safe set. Consequently, solving this QP problem online is challenging. Extensions of CG to nonlinear systems have been presented in \cite{bemporad1998reference}.
In this Note we present a simple but very meaningful modification of the CG which enables it to operate with non-invariant safe sets and ensures feasibility and convergence properties even when the optimization is inexact. This modification significantly extends the applicability of the CG to practical problems where the construction of accurate invariant approximations of the MOAS and/or exact optimization may not be feasible due to model complexity or limited available onboard computing power. We will illustrate the impact of this modification on an F-16 aircraft longitudinal flight control example in terms of computational time and memory reduction.
The paper is organized as follows.
In Section~\ref{sec:1.5} we formally introduce the CG and further explain the significance of our contribution.
In Section~\ref{sec:2} we highlight
the mechanism by which convergence of the reference command modified by the CG to the original reference command is achieved in the existing CG theory. This informs the modification to the CG presented in Section~\ref{sec:3}, for which
we prove similar convergence results. A simulation example of
longitudinal control of an F-16 aircraft is reported in Section~\ref{sec:4}.
Finally, concluding remarks are made in Section~\ref{sec:5}.
\section{Preliminaries}\label{sec:1.5}
A CG is an add-on algorithm to a nominal closed-loop system (Plant + Controller) represented by a discrete-time model (system of difference equations),
\begin{equation}\label{equ:dynamics}
\bo{x}(t+1)=f(\bo{x}(t),\bo{v}(t)),
\end{equation}
where $\bo{x}(t) \in \mathbb{R}^n$ is the state vector aggregating the states of both the Plant and Controller, $\bo{v}(t) \in \mathbb{R}^m$ is the set-point command / reference vector,
$m,n \in \mathbb{Z}_{>0}$ are positive integers, and $t \in \mathbb{Z}_{ \geq 0} $ is a non-negative integer which designates the discrete time instant.
The CG enforces pointwise-in-time constraints expressed as
\begin{equation}\label{equ:cnr}
(\bo{x}(t),\bo{v}(t)) \in \mathcal{C}~~\mbox{for all $t \in \mathbb{Z}_{\geq 0}$},
\end{equation}
where $\mathcal{C} \subset \mathbb{R}^{n+m}$ is a specified constraint set.
This is done by monitoring and modifying the original reference command (set-point) $\bo{r}(t) \in \mathbb{R}^m$ to a safe reference command $\bo{v}(t) \in \mathbb{R}^m$, see Figure~\ref{fig:basicRG}.
Note that as Eq. (\ref{equ:dynamics}) is a closed-loop system model,
Eq. (\ref{equ:cnr}) can represent both state and control constraints for the Plant \cite{garone2017reference}.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=12cm]{RefGov2.pdf}
\caption{Command governor augmenting a nominal closed-loop system consisting of a Plant and a nominal Controller.}
\label{fig:basicRG}
\end{center}
\end{figure}
The MOAS, typically denoted as $O_\infty,$ is the set of all initial states and constant reference commands for which the subsequent response satisfies the constraints for all future times \cite{gilbert1991linear},
$$O_\infty=\{(\bo{x}(0),\bo{v}):~ \big(\bo{x}(t;\bo{x}(0),\bo{v}),\bo{v}\big) \in \mathcal{C}, \, \forall t \in \mathbb{Z}_{\geq 0} \},$$
where $\bo{x}(t;\bo{x}(0),\bo{v})$ denotes the state trajectory of the system represented by Eq.~(\ref{equ:dynamics}) resulting from the initial state $\bo{x}(0)$ and the application of a constant reference command, $\bo{v}(t)=\bo{v}$ for all $t \in \mathbb{Z}_{\geq 0}$ (in the linear case, $\bo{x}(t;\bo{x}(0),\bo{v})=A^t \bo{x}(0) + \sum_{j=0}^{t-1}A^j B \bo{v}$).
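As a concrete illustration of this definition, for a linear closed-loop model $\bo{x}(t+1)=A\bo{x}(t)+B\bo{v}$ with box constraints on an output $\bo{y_c}=C_c\bo{x}+D_c\bo{v}$ (the setting of the example in Section~\ref{sec:4}), membership of a pair $(\bo{x}(0),\bo{v})$ in $O_\infty$ can be tested over a finite horizon $T$ by direct simulation; this is exact for $T$ large enough when $A$ is Schur and the steady-state constraint is suitably tightened. The Python sketch below is illustrative only; the function name and interface are assumptions, not from the paper.

```python
import numpy as np

def in_O_infinity(A, B, Cc, Dc, y_lo, y_hi, x0, v, T):
    """Finite-horizon membership test of (x0, v) in O_infinity:
    propagate x(t+1) = A x(t) + B v with the constant command v and
    check that y_c = Cc x + Dc v stays inside the box [y_lo, y_hi]
    at every step t = 0, ..., T."""
    x = np.asarray(x0, dtype=float)
    v = np.asarray(v, dtype=float)
    for _ in range(T + 1):
        y = Cc @ x + Dc @ v
        if np.any(y < y_lo) or np.any(y > y_hi):
            return False            # constraint violated along the response
        x = A @ x + B @ v
    return True
```

For example, for the scalar system $x(t+1)=0.5x(t)+v$ with $|x|\leq 1$, the steady state is $2v$, so $v=0.4$ is admissible from $x(0)=0$ while $v=0.6$ is not.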
Any subset $P$ of $O_\infty$ is constraint admissible (satisfies constraints); such a subset is referred to as
``invariant for a fixed $\bo{v}$'' (or simply as ``invariant'')
if $(\bo{x},\bo{v}) \in P$ implies $(f(\bo{x},\bo{v}),\bo{v}) \in P$.
The Scalar Reference Governor (SRG) is the simplest RG algorithm, which searches for the closest admissible reference along the line segment connecting $\bo{v}(t-1)$ and $\bo{r}(t)$ by solving at each time instant the optimization problem:
\begin{maxi}|l|
{}{\kappa~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~}{}{} \label{srg1}
\addConstraint{0 \leq \kappa \leq 1}{}{}
\addConstraint{\big( \bo{x}(t), \bo{v}(t-1)+\kappa ( \bo{r}(t)-\bo{v}(t-1) ) \big) \in P}{}{}
\end{maxi}
At each time instant $t$, $\bo{v}(t)$ is assigned as the optimal $\bo{v}^*,$ i.e. $\bo{v}(t)=\bo{v}^*=\bo{v}(t-1)+\kappa^* (\bo{r}(t)-\bo{v}(t-1)),$ where $\kappa^*$ is the optimal solution to the optimization problem (\ref{srg1}).
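When $P$ is polyhedral, say $P=\{(\bo{x},\bo{v}):\,H_x\bo{x}+H_v\bo{v}\leq \bo{h}\}$, the constraint in (\ref{srg1}) is affine in $\kappa$, so $\kappa^*$ can be computed in closed form row by row. A minimal Python sketch of this computation, assuming $(\bo{x}(t),\bo{v}(t-1))\in P$ (the matrices and the function name are illustrative, not from the paper):

```python
import numpy as np

def srg_kappa(Hx, Hv, h, x, v_prev, r):
    """Closed-form SRG step for the polyhedral safe set
    P = {(x, v) : Hx x + Hv v <= h}.  Returns the largest
    kappa in [0, 1] such that v_prev + kappa*(r - v_prev)
    is admissible, assuming (x, v_prev) is itself in P."""
    a = Hx @ x + Hv @ v_prev      # row values at kappa = 0 (all <= h)
    b = Hv @ (r - v_prev)         # row sensitivities to kappa
    kappa = 1.0
    for ai, bi, hi in zip(a, b, h):
        if bi > 1e-12:            # only rows that tighten as kappa grows
            kappa = min(kappa, (hi - ai) / bi)
    return max(0.0, kappa)
```

Rows with $b_i\leq 0$ can be skipped because they only become slacker as $\kappa$ grows from the feasible point $\kappa=0$.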
The first SRG proposed in the literature for discrete-time systems assumed
the model in Eq.~(\ref{equ:dynamics}) to be linear, and $P$ in the optimization problem (\ref{srg1}) was chosen equal to $\tilde{O}_\infty$ -- a finitely-determined, invariant, constraint-admissible, inner approximation of $O_\infty$.
In later versions of the SRG, the requirement for $P$ in the optimization problem (\ref{srg1}) to be
$\tilde{O}_\infty$ or even
invariant was removed \cite{gilbert1999fast}, allowing much greater freedom in selecting $P \subseteq \tilde{O}_\infty$. Note that in this case, the optimization problem~(\ref{srg1}) is not guaranteed to be recursively feasible. However, under reasonable assumptions, it is possible to prove that if no feasible solution to (\ref{srg1}) exists and the reference is held constant, $\bo{v}(t)=\bo{v}(t-1),$ then the optimization problem \eqref{srg1} will become feasible again after a finite number of steps.
This property is particularly useful when,
to reduce the computational time and memory,
sets $P$ that are simple subsets of $\tilde{O}_\infty$ are used. Systematic procedures to generate
simpler $P$ from $\tilde{O}_\infty$ by removing almost redundant inequalities from its description and applying a pull-in transformation have been proposed \cite{gilbert1999fast}.
One of the main strengths of SRG is its capability to manage constraints while being very computationally efficient. This is mainly due to the fact that in the SRG the selection of $\bo{v}(t)$ is reduced to the selection of the single scalar variable $\kappa \in [0,\, 1]$ which can be performed very efficiently (often in closed form) for many types of constraints, see e.g. \cite{nicotra2016fast}.
Unfortunately, in the case $m>1$, this comes at the price of potentially slowing down the system response as certain
$v_i(t)$ may be able to converge to the corresponding $r_i(t)$, $i=1,\cdots,m,$ faster than others and thus $\bo{v}(t)$ could be made closer to $\bo{r}(t)$ if not constrained to the line segment between $\bo{v}(t-1)$ and $\bo{r}(t).$
The above limitation can be overcome by using the CG instead of the SRG. Unlike the SRG, the CG has more flexibility in choosing the reference $\bo{v}(t)$ as
the solution to the following optimization problem:
\begin{argmini}|l|
{\bo{v}}{\|\bo{r}(t)-\bo{v}\|^2_Q}{}{\bo{v^*}=}
\addConstraint{(\bo{x}(t),\bo{v}) \in P}{}{} \label{cg1}
\end{argmini}
where $Q=Q^{\rm T} \succ 0$, $
\|\bo{r}(t)-\bo{v}\|^2_Q=(\bo{r}(t)-\bo{v})^{\rm T}Q (\bo{r}(t)-\bo{v}),$
and $P \subseteq O_\infty$ is invariant. In this case, at each time instant $t,$ the applied command is $\bo{v}(t)=\bo{v^*}.$
The price to be paid for using a CG instead of an SRG is that the optimization problem to be solved is no longer a simple single-variable optimization problem as in the SRG case and, typically,
must be solved using an iterative
optimization algorithm. For this reason, in practice, the CG is used almost exclusively when the set $P \subseteq O_\infty$ is convex in $\bo{v}$ for any fixed $\bo{x}.$ Furthermore, to ensure the correct behaviour of the CG scheme (e.g., recursive feasibility of the optimization problem (\ref{cg1})), the set $P$ must be invariant.
The primary objective of this paper is to propose a modification of the conventional CG
(\ref{cg1}) which:
\begin{enumerate}
\item Allows the CG to use a non-invariant set $P \subseteq O_\infty$;
\item Allows the optimization problem (\ref{cg1}) to be solved inexactly while still ensuring constraint satisfaction and finite-time convergence.
\end{enumerate}
This modification significantly extends the applicability of the CG to practical problems where finding invariant sets may be problematic and where exact optimization may not be feasible due to unreliability of the optimizers or limited computing power.
\section{CG Convergence}\label{sec:2}
The conventional CG convergence theory makes use of the following assumptions:
\begin{itemize}
\item [A1] $P$ is positively invariant for any fixed $\bo{v}$, i.e., $(\bo{x},\bo{v}) \in P$ implies $(f(\bo{x},\bo{v}),\bo{v}) \in P$;
\item [A2] For each $\bo{v}$ there exists a unique equilibrium $\bo{x_v}$ associated to a constant reference $\bo{v}$ such that $f(\bo{x_v},\bo{v})=\bo{x_v}$ and
$\bo{x_v}$ is Lipschitz continuous with respect to $\bo{v}$. It is further assumed that the sets $R_P=\{\bo{v}|(\bo{x_v},\bo{v}) \in P\}$ and $P_x=\{\bo{v}|(\bo{x},\bo{v}) \in P\}$ are closed and convex
for all $\bo{x}$;
\item [A3] There exists a scalar $\varepsilon>0$ such that for any fixed $\bo{v} \in R_P$ the set $P_v=\{\bo{x} | (\bo{x},\bo{v})\in P \}$ contains a ball of radius $\varepsilon$ centered at $\bo{x_v}$;
{ \item [A4]
$P \subseteq O_\infty \subseteq \mathcal{C}$};
\item[A5] If $\bo{v}(t)-\bo{v}(t-1) \to 0$ as $t \to \infty$ then the solutions of
Eq. (\ref{equ:dynamics}) satisfy
$\bo{x}(t) \to \bo{x}_{\bo{v}(t)}$ as $t \to \infty$.
\end{itemize}
Assumptions A1-A4 are reasonable and are typically
made in the study of reference and command governors.
Assumption A5 is also reasonable. For instance, if the discrete-time dynamics are linear,
$\bo{x}(t+1)=A \bo{x}(t)+B \bo{v}(t)$, and $A$ is Schur (all eigenvalues are inside the unit disk of the complex plane), then
$\bo{x}_{\bo{v}}=(I-A)^{-1}B\bo{v}$,
$$\bo{x}(t+1)-\bo{x}_{\bo{v}(t+1)}
=A(\bo{x}(t)-\bo{x}_{\bo{v}(t)})+(I-A)^{-1}B (\bo{v}(t)-\bo{v}(t+1)),$$
and A5 holds.
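As a quick sanity check (an illustration, not part of the paper's argument), the error recursion above can be verified numerically for a randomly generated system; the identity holds for any $A$ such that $I-A$ is invertible.

```python
import numpy as np

# Numerical check of the error recursion
#   x(t+1) - x_{v(t+1)} = A (x(t) - x_{v(t)}) + (I - A)^{-1} B (v(t) - v(t+1)),
# which holds whenever I - A is invertible.
rng = np.random.default_rng(0)
n, m = 4, 2
A = 0.4 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
x = rng.standard_normal(n)
v_now, v_next = rng.standard_normal(m), rng.standard_normal(m)

x_bar = lambda v: np.linalg.solve(np.eye(n) - A, B @ v)   # equilibrium x_v
lhs = (A @ x + B @ v_now) - x_bar(v_next)                 # x(t+1) - x_{v(t+1)}
rhs = A @ (x - x_bar(v_now)) + np.linalg.solve(np.eye(n) - A, B @ (v_now - v_next))
assert np.allclose(lhs, rhs)
```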
For general nonlinear systems this property is similar to the discrete-time incremental Input-to-State Stability (ISS) property \cite{tran2016incremental}.
Under these assumptions it is possible to prove the following properties:
\begin{theorem}\label{TheoremCG1}
Let the applied reference $\bo{v}(t)$ be managed by a CG based on solving at each sampling time the optimization problem in Eq.~\eqref{cg1}
and let A1-A5 hold. If a $\bo{v}(0)$ exists such that $(\bo{x}(0),\bo{v}(0)) \in P$ then:
\begin{itemize}
\item[1.] Constraints are satisfied at all time instants, $t \geq 0$;
\item[2.] If the desired reference is constant, $\bo{r}(t)=\bo{r_0}$ for $t \geq \hat{t}$,
and, moreover, $\bo{r_0} \in R_P,$ then the sequence of $\bo{v}(t)$ converges in finite time to $\bo{r_0}$;
\item[3.] If the desired reference is constant, $\bo{r}(t)=\bo{r_0}$ for $t \geq \hat{t}$, and
$\bo{r_0} \notin R_P,$ then the sequence of $\bo{v}(t)$ converges in finite time to the best approximation of $\bo{r_0}$ in $R_P,$ i.e., to $\bo{r^*} = \arg \min_{\bo{v}\in R_P} \|\bo{v}-\bo{r_0}\|_Q^2$.
\end{itemize}
\end{theorem}
Here we review several elements of the proof as they inform subsequent modifications to the CG:
\begin{proof}
1) Because of assumption A1, $P$ is invariant for any fixed $\bo{v}$. Then $(\bo{x}(t),\bo{v}(t-1))$ is always a feasible solution for the optimization problem \eqref{cg1}. Constraint satisfaction at all time instants can be concluded from assumption A4.
2),3) The starting point is to note that, because of assumption A1 and since $\bo{v}(t-1)$ is feasible at time $t$, the function $V(t)=||\bo{v}(t)-\bo{r_0}||_Q^2$ for $t>\hat{t}$ is non-increasing and bounded from above and below. Consequently, $\lim_{t \rightarrow \infty} V(t)$ exists and is finite, which also implies that $$\lim_{t \rightarrow \infty} \big[ V(t)-V(t-1) \big]=0.$$
Note that in the general case this does not imply that
$\lim_{t \rightarrow \infty} \bo{v}(t)$ exists.
However, because of the convexity and closedness of $P_{\bo{x}(t)}$ (assumption A2) it is possible to prove that
\begin{equation}\label{equ:key1}
V(t-1)\geq V(t)+||\bo{v}(t)-\bo{v}(t-1)||_Q^2,
\end{equation}
which implies that $\lim_{t \rightarrow \infty}
\big[\bo{v}(t)-\bo{v}(t-1) \big]=0$,
and hence $\bo{x}(t) \to \bo{x}_{\bo{v}(t)}$ as $ t \to \infty$ (assumption A5).
The rest of the proof is completed using assumption A3, that ensures feasibility, i.e., that $(\bo{x}(t),\bo{v}) \in P$, of any $\bo{v}$ such that $\|\bo{v} - \bo{v}(t-1)\| < \delta$ where $\delta>0$ is sufficiently small. Hence the only possible value for $\lim_{t \rightarrow \infty} \bo{v}(t)$ is $\bo{r_0}$ in the case $\bo{r_0} \in R_P$ and
$\lim_{t \rightarrow \infty} \bo{v}(t) = \bo{r^*}$ otherwise. Furthermore, these limits are reached in finite time.
A key argument of the entire proof is inequality (\ref{equ:key1}). To prove this inequality, the first step is to note that
since $P$ is invariant, $\bo{v}(t-1)$ is a feasible solution to optimization problem~\eqref{cg1} while $\bo{v}(t)$ is the optimal solution. Note that $P_{\bo{x}(t)}$ is closed and convex, $\bo{v^-}=\bo{v}(t-1) \in P_{\bo{x}(t)}$ and $\bo{v^*}=\bo{v}(t)$ is the minimizer of $F(\bo{v})=\|\bo{r_0}-\bo{v}\|_Q^2$ over $ \bo{v}
\in P_{\bo{x}(t)}$. Then the necessary condition for optimality of $\bo{v^*}$ implies that $d_+F(\bo{v^*};\bo{v} - \bo{v^*}) =(\nabla F(\bo{v^*}))^{\rm T} (\bo{v}-\bo{v^*}) \geq 0$ for any $\bo{v} \in P_{\bo{x}(t)}$, where $d_+$ stands for the G\^{a}teaux differential (directional derivative). Thus
$$d_+F(\bo{v^*};\bo{v^-} - \bo{v^*})=-2 (\bo{r_0}-\bo{v^*})^{\rm T} Q (\bo{v^-} - \bo{v^*}) \geq 0,$$
and hence,
\begin{equation}\label{equ:gateaux} (\bo{r_0} - \bo{v(t)})^{\rm T} Q (\bo{v^-} - \bo{v}(t)) \leq 0.
\end{equation}
Transforming inequality (\ref{equ:gateaux}) as
$$(\bo{r_0}-\bo{v}(t)-\bo{v^-} + \bo{v^-})^{\rm T} Q(\bo{v^-} - \bo{r_0} + \bo{r_0} - \bo{v}(t) ) \leq 0,$$
expanding and applying inequality (\ref{equ:gateaux}) again, it follows that
\begin{equation}\label{equ:key2}
||\bo{v^-} - \bo{r_0}||_Q^2 \geq ||\bo{v}(t) - \bo{r_0}||_Q^2 + ||\bo{v^-} - \bo{v}(t)||_Q^2,
\end{equation}
which implies the inequality (\ref{equ:key1}).
\end{proof}
{\bf Remark~1: } It is worth remarking that, unlike the SRG, which requires at time zero the knowledge of a feasible $\bo{v}(0)$ to start the algorithm, the CG only requires that a feasible $\bo{v}(0)$ exists, as the CG itself will be able to compute it. This not only simplifies the start-up relative to the SRG, but also means that the CG has some implicit reconfiguration capability in case of impulsive disturbances. Indeed, if an impulsive disturbance changes the state of the system in such a way that the previously applied reference $\bo{v}(t-1)$ is no longer feasible (i.e., $(\bo{x}(t),\bo{v}(t-1))\notin P$), the CG is able (whenever possible) to reconfigure the reference $\bo{v}(t)$ in such a way that $(\bo{x}(t),\bo{v}(t))\in P$. For this reason, the CG has also been used in fault-tolerant control schemes \cite{casavola2007fault}. It must be mentioned, however, that, depending on the application, erratic jumps in $\bo{v}(t)$ due to occasional infeasibility caused by model mismatch or impulsive disturbances may not necessarily be preferable to maintaining the
previously applied reference, provided it is not permanently stuck, so this property is not necessarily an advantage.
\section{Modified Command Governor}\label{sec:3}
The proof of Theorem~\ref{TheoremCG1} reveals that the convergence results follow from the condition (\ref{equ:key2}) which, in the conventional CG case, is ensured thanks to the invariance of $P$ and the assumption that the CG is able to compute the optimal solution of the optimization problem \eqref{cg1}.
The key observation behind this note is that if the condition (\ref{equ:key2}) is satisfied in some other way, the results of Theorem~\ref{TheoremCG1} still follow without assuming invariance of $P$ or relying on exact optimization.
A simple way to ensure that the condition (\ref{equ:key2}) holds is to use the following logic-based condition for accepting a sub-optimal solution of the optimization problem~\eqref{cg1}.
{\textbf{Modified CG}} Let $\bo{v^\prime}$ be a possibly sub-optimal solution
of the optimization problem \eqref{cg1}. Then $\bo{v^\prime}$ is \textit{accepted}, i.e.,
$\bo{v}(t)=\bo{v^\prime}$, if:
\begin{itemize}
\item $\bo{v^\prime}$ is feasible, i.e. $(\bo{x}(t),\bo{v^\prime}) \in P$
\item $\bo{v^\prime}$ satisfies \begin{equation}\label{equ:cond2}
||\bo{v^\prime}-\bo{r}(t)||_Q^2 \leq ||\bo{v}(t-1)-\bo{r}(t)||_Q^2-||\bo{v^\prime}-\bo{v}(t-1)||_Q^2.
\end{equation}
\end{itemize}
Otherwise, $\bo{v^\prime}$ is \textit{rejected} and the previous value of the reference is held, i.e. $\bo{v}(t)=\bo{v}(t-1)$.
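The acceptance test above amounts to a few lines of code. In the Python sketch below, \texttt{is\_feasible} stands for the membership test $(\bo{x}(t),\bo{v}')\in P$ and must be supplied by the user; the function name and interface are illustrative assumptions:

```python
import numpy as np

def modified_cg_step(v_candidate, v_prev, r, Q, is_feasible):
    """Modified CG acceptance logic: apply a (possibly sub-optimal)
    candidate v' only if it is feasible and satisfies
    ||v' - r||_Q^2 <= ||v_prev - r||_Q^2 - ||v' - v_prev||_Q^2;
    otherwise hold the previously applied command."""
    def nQ(z):                                  # squared Q-weighted norm
        return float(z @ Q @ z)
    if is_feasible(v_candidate) and \
       nQ(v_candidate - r) <= nQ(v_prev - r) - nQ(v_candidate - v_prev):
        return v_candidate
    return v_prev
```

For instance, with $Q=I$, $\bo{v}(t-1)=(0,0)$ and $\bo{r}=(2,0)$, the candidate $(1,0)$ passes the test while $(0,2)$ is rejected and the previous command is held.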
The following theorem shows that, under very mild conditions on the solver properties, this simple logic makes it possible to retain all of the properties of the conventional CG even if the solution of the optimization problem is inexact and $P$ is not invariant.
\begin{theorem}\label{TheoremCG2}
Let the set $P$ satisfy assumptions A2-A4, and let assumption A5 also hold. Consider a system
where $\bo{v}(t)$ is managed according to the
\textbf{modified CG}. Under the only condition that there exist two scalars $\varepsilon^\prime,\delta^\prime>0$ such that whenever
\begin{equation}\label{equ:9pre}
\|\bo{x}(t)-\bo{x}_{\bo{v}(t-1)}\|<\varepsilon^\prime,
\end{equation}
then the sub-optimal solution of the optimization problem \eqref{cg1} provides a feasible solution $\bo{v}(t)$ such that
\begin{equation}\label{inexactCond}
||\bo{v}(t)-\bo{r^*}(t)||_Q^2 \leq \max\left\{0, ||\bo{v}(t-1)-\bo{r^*}(t)||_Q^2-{ (\delta^\prime)^2} \right\} ,
\end{equation}
where $\bo{r^*}(t)=\arg \min_{\bo{v}\in R_P} ||\bo{v}-\bo{r}(t)||_Q^2,$
then if $\bo{v}(0)$ is such that $(\bo{x}(0),\bo{v}(0)) \in P$:
\begin{itemize}
\item[1.] Constraints are satisfied at all time instants $t \geq 0$;
\item[2.] If the desired reference is constant, $\bo{r}(t)=\bo{r_0}$ for $t \geq \hat{t}$,
and, moreover, $\bo{r_0} \in R_P,$ then the sequence $\bo{v}(t)$ converges in finite time to $\bo{r_0}$;
\item[3.] If the desired reference is constant, $\bo{r}(t)=\bo{r_0}$ for $t \geq \hat{t}$, and
$\bo{r_0} \notin R_P,$ then the sequence $\bo{v}(t)$ converges in finite time to the best approximation of $\bo{r_0}$ in $R_P,$ i.e. $\bo{r^*}=\arg \min_{\bo{v}\in R_P} ||\bo{v}-\bo{r_0}||_Q^2$.
\end{itemize}
\end{theorem}
\begin{proof}
The constraints are satisfied as the property $(\bo{x}(t),\bo{v}(t)) \in O_\infty$ is maintained for all $t \in \mathbb{Z}_{ \geq 0}$
despite the fact that $(\bo{x}(t),\bo{v}(t)) \in P$ may not hold.
The acceptance/rejection logic based on the condition (\ref{equ:cond2})
ensures
$\|\bo{v}(t)-\bo{r}(t)\|_Q^2 \leq \|\bo{v}(t-1)-\bo{r}(t)\|_Q^2-\|\bo{v}(t)-\bo{v}(t-1)\|_Q^2$ for all $t \in \mathbb{Z}_{> 0}$.
This, coupled with the condition (\ref{inexactCond}), ensures that properties 2 and 3 follow by the same arguments as in the proof of Theorem~1.
\end{proof}
According to Theorem~2, the only condition that a sub-optimal solver must satisfy in order to ensure the correct behaviour of the CG is that the inequality \eqref{inexactCond} holds whenever $||\bo{x}(t)-\bo{x}_{\bo{v}(t-1)}||<\varepsilon^\prime$.
This condition is very reasonable in the CG setting. In fact, because of assumption A3 and the Lipschitz continuity of $\bo{x}_{\bo{v}}$ with respect to $\bo{v}$ (assumption A2), for any $\varepsilon^\prime <\varepsilon$ there exists a $\delta^{\prime \prime}$ such that, whenever $||\bo{x}(t)-\bo{x}_{\bo{v}(t-1)}||<\varepsilon^\prime,$ any $\bo{v}\in R_P$ satisfying $||\bo{v}-\bo{v}(t-1)|| \leq \delta^{\prime \prime}$ is feasible; thus $\bo{v}$ can either be moved in the direction of $\bo{r^*}(t)$ by a distance of $\delta^{\prime \prime}$ or be set equal to $\bo{r^*}(t)$.
This, in turn, guarantees the existence of a $\delta^\prime$ ensuring the inequality \eqref{inexactCond}.
This observation also allows building the following algorithm, which ensures the correct behaviour of the CG when integrated with an arbitrary optimization solver.
\begin{algorithm}[H]
\SetAlgoLined
\setstretch{0.65}
\KwData{$\bo{x}(t)$, $\bo{r}(t)$, $\bo{v}(t-1)$}
\KwResult{$\bo{v}(t)$}
Compute $\bo{r}^*(t)=\arg \min_{\bo{v}\in R_P} ||\bo{v}-\bo{r}(t)||_Q^2$ \;
Compute an approximate solution $\bo{v^\prime}$ of
$$ \begin{array} {lcl}
\bo{v^*} &=& \arg \min\limits_{\bo{v}} ||\bo{r^*}(t)-\bo{v}||_Q^2 \\
& & \text{subject}\,\,\, \text{to} \\
& & (\bo{x}(t),\bo{v}) \in P.
\end{array}$$
\eIf{$(\bo{x}(t),\bo{v^\prime}) \in P$ ~\mbox{AND}~ $\!|\!|\bo{v^\prime}\!-\!\!\bo{r}(t)|\!|_Q^2 \!\! \leq \!\! |\!|\bo{v}(t\!-\!1)\!-\!\bo{r}(t)|\!|_Q^2\!\!-\!|\!|
\bo{v^\prime}\!-\!\bo{v}(t\!-\!1)|\!|_Q^2\!$}{
set $\bo{v^{\prime \prime}}=\bo{v^\prime}$ \;
}{
set $\bo{v^{\prime \prime}} = \bo{v}(t-1)$ \;
}
\eIf{$||\bo{x}(t)-\bo{x}_{\bo{v}(t-1)}|| \leq \varepsilon^\prime$ ~\mbox{AND}~ $||\bo{v^{\prime \prime}}-\bo{v}(t-1)||_Q < \delta^\prime$}
{return $\bo{v}(t)= \bo{v}(t-1)+\min\left\{1,\frac{\delta^{\prime \prime }}{||\bo{r^*}(t)-\bo{v}(t-1)||}\right\}(\bo{r^*}(t)-\bo{v}(t-1))$}{return $\bo{v}(t)=\bo{v^{\prime \prime}}$\;}
\caption{Command Governor for non-exact solver}
\label{AlgoCG6}
\end{algorithm}
Note that if $Q=I$ then $\delta^\prime = \delta^{\prime \prime} $.
Note also that an SRG can be viewed as a special case of an inexact CG in which $Q=I,$ $\bo{r}(t)\in R_P,$ and a search over the line segment between $\bo{v}(t-1)$ and $\bo{r}(t)$ is used as an inexact solution. In the SRG approach, the inequality (\ref{inexactCond}) is satisfied whenever the condition (\ref{equ:9pre}) holds.
{\bf Remark 2:}
The requirement of inequality (\ref{inexactCond}) holding whenever the condition (\ref{equ:9pre}) holds can be relaxed. For instance, it is sufficient that there exists $N \in \mathbb{Z}_{>0}$ such that the inequality (\ref{inexactCond}) holds at least once in every sequence of length $N$ of consecutive time steps $t$ for which condition (\ref{equ:9pre}) holds.
{\bf Remark~3: } Note that this modified CG algorithm loses the ``reconfiguration'' capabilities mentioned in Remark~1. In fact, whenever $(\bo{x}(t),\bo{v}(t-1))\notin P$ we are implicitly assuming that by keeping the command constant, the constraints are always satisfied.
{\bf Remark~4: } Depending on the shape of $P$, in order to reduce the number of discarded $\bo{v}(t)$ resulting from violation of the condition (\ref{equ:cond2}), the following optimization problem can be used in place of the optimization problem \eqref{cg1}:
\begin{mini}|l|
{\bo{v}}{\|\bo{r}(t)-\bo{v}\|_Q^2+\|\bo{v}(t-1)-\bo{v}\|_Q^2}{}{}, \label{cg1mod}
\addConstraint{(\bo{x}(t),\bo{v}) \in P}{}{}.
\end{mini}
When $P$ is polyhedral, this optimization problem is also a QP.
\section{F-16 Aircraft Longitudinal Flight Control Example}\label{sec:4}
In this section we consider an example of longitudinal control of an F-16 aircraft based on the continuous-time aircraft model presented in \cite{sobel1985design}. This model represents linearized closed-loop aircraft dynamics at an altitude of $3000$ ft and
$0.6$ Mach number in straight and level subsonic flight.
The model has been converted to discrete-time assuming a sampling period of $5$ msec.
The resulting discrete-time model has
the form of Eq. (\ref{equ:dynamics}) with
$$f(\bo{x},\bo{v})=A\bo{x}+B\bo{v},$$
where
$$
{\small
A= { \left(\begin{array}{ccccc} 0.9998 & 3.126\times 10^{-5} & 0.006366 & 0.0008041 & 0.001198\\ -0.01104 & 0.9928 & 0.1892 & -0.07997 & -0.00731\\ 0.0002201 & 0.004952 & 0.9941 & -0.001009 & -0.001217\\ 0.3035 & 0.0844 & 0.6711 & 0.8547 & -0.007991\\ -0.5769 & -0.08625 & -0.953 & 0.04102 & 0.9148 \end{array}\right),~ }}
{\small
B=\left(\begin{array}{cc} 5.314 \times 10^{-6} & 0.0002335\\ 0.01105 & -2.445 \times 10^{-5}\\ 1.334 \times 10^{-5} & -0.0002335\\ -0.2676 & -0.03565\\ 0.1873 & 0.3896 \end{array}\right).}
$$
The components of the state vector $\bo{x} \in \mathbb{R}^5$ are: the flight path angle (deg),
the pitch rate (deg/sec), the angle of attack (deg), the elevator deflection (deg), and the flaperon deflection (deg), respectively. The components of the reference command vector $\bo{v} \in \mathbb{R}^2$ are: the commanded pitch angle (deg) and the commanded flight path angle (deg), respectively.
Upper and lower bound constraints are prescribed on the
elevator deflection,
flaperon deflection,
elevator deflection rate,
flaperon deflection rate,
and angle of attack.
These constraints can be written as
\begin{equation}\label{equ:cnr10}
(\bo{x},\bo{v}) \in \mathcal{C}=\{(\bo{x},\bo{v}):~ \bo{y_c} = C_c \bo{x} + D_c \bo{v} \in Y_c\},
\end{equation}
with $$C_c=\left(\begin{array}{ccccc} 0 & 0 & 0 & 1.0 & 0\\ 0 & 0 & 0 & 0 & 1.0\\ 65.0 & 17.82 & 142.3 & -30.5 & -1.68\\ -122.0 & -17.95 & -200.6 & 8.412 & -17.89\\ 0 & 0 & 1.0 & 0 & 0 \end{array}\right),
D_c=\left(\begin{array}{cc} 0 & 0\\ 0 & 0\\ -57.6 & -7.34\\ 40.4 & 81.6\\ 0 & 0 \end{array}\right)$$
and $$Y_c=\{\bo{y_c}:~ \Gamma \bo{y_c} - \bo{\gamma} \leq \bo{0}\}$$
$$=
[-25,25] \times [-20, 20] \times
[-42, 42] \times [-56, 56] \times [-4, 4],$$
where the set $Y_c$ is the Cartesian product of the intervals restricting the range of each of the components of $\bo{y_c}$ in Eq.~(\ref{equ:cnr10}), which is a $5 \times 1$ vector whose components have units of deg, deg, deg/sec, deg/sec and deg, respectively. The matrix $\Gamma$ and the vector $\bo{\gamma}$ are
$10 \times 5$ and $10 \times 1$, respectively.
The limits on the elevator and flaperon deflection and deflection rates are based on
\cite{sobel1985design}. The angle of attack limit of $\pm 4$ deg has been made tighter than usual to create a more challenging scenario for the CG. In practice, tight limits on angle of attack could be imposed by the flight envelope protection system when flying in presence of significant wind gusts or high turbulence \cite{richardson2013envelopes} (especially near a trim condition already at a high angle of attack), during aerial refueling, when flying in tandem with a drone or if wing icing has occurred.
In this example we compare three CG implementations. The first implementation (conventional CG) is based on the optimization problem (\ref{cg1mod}) with $P=\tilde{O}_\infty$ defined by $748$ inequalities; the primal-dual active set algorithm
{\tt qpkwik}, implemented in Matlab, was used to solve the QP problem (\ref{cg1mod}) with the maximum number of iterations limited to $200$. This solver was chosen as it appears to be one of the fastest options for QP problems that, like the CG's, have a small number of optimization variables and a large number of constraints.
The second implementation (modified CG) was the same as the first except that the maximum number of iterations was limited to $3$. The QP solver was warm-started in both Implementations 1 and 2. The third implementation (also a modified CG) was based on $P$ with $106$ inequalities obtained from $\tilde{O}_\infty$ by systematic elimination of the almost redundant inequalities and a pull-in procedure \cite{gilbert1999fast}. This reduction in the number of inequalities translates into more than a $7$-fold reduction in the ROM size needed to store $P$. In this third implementation, we use an approximate solver based on a modified scalar reference governor
update (\ref{srg1}) that assumes the following form
\begin{equation}\label{equ:modsrg}
\bo{v} = \bo{v}(t-1) +\kappa \bo{E}(t) (\bo{r}(t) - \bo{v}(t-1)),
\end{equation}
where $\bo{E}(t)$ is alternating between
$$\left\{ \left[\begin{array}{c} 1 \\ 0 \end{array} \right], \left[\begin{array}{c} 0 \\ 1 \end{array} \right],
\left[\begin{array}{c} 1 \\ 1 \end{array} \right]
\right\}.$$
This strategy is motivated by the idea of applying time distributed coordinate descent. Since only a scalar parameter $\kappa$ is optimized, the minimizer can be easily found by evaluating an explicit expression \cite{gilbert1999fast}. To guarantee convergence to constant constraint admissible inputs, every third update is made along the line segment connecting $\bo{v}(t-1)$ and $\bo{r}(t)$; this ensures that the relaxed condition in Remark 2 holds with $N=3$.
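The alternation of $\bo{E}(t)$ can be sketched in a few lines of Python (names are illustrative; $m=2$ as in the example):

```python
import numpy as np

# Selectors E(t) for the time-distributed coordinate descent: the first
# two steps move one command channel at a time; every third step moves
# along the full segment toward r(t), so the relaxed condition of
# Remark 2 holds with N = 3.
SELECTORS = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0]), np.eye(2)]

def coordinate_descent_update(t, v_prev, r, kappa):
    """One step of the modified SRG update v = v_prev + kappa*E(t)*(r - v_prev)."""
    E = SELECTORS[t % 3]
    return v_prev + kappa * (E @ (r - v_prev))
```

For example, with $\bo{v}(t-1)=(0,0)$, $\bo{r}=(2,4)$ and $\kappa=0.5$, the three consecutive selector choices move the command to $(1,0)$, $(0,2)$ and $(1,2)$ respectively.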
In each of these three implementations, the logic-based condition (\ref{equ:cond2}) is applied.
The responses are shown in Figures~\ref{fig:sim11}-\ref{fig:sim14}.
The time history of the maximum of the constraint values at each time instant, i.e., of
$\max \{\Gamma( C_c \bo{x}(t)+ D_c \bo{v}(t) )- \bo{\gamma} \}$, is plotted in Figure~\ref{fig:sim13}.
As this maximum remains less than or equal to zero (designated by a dashed black line in Figure~\ref{fig:sim13}), the constraints are satisfied by each of the implementations. Figure~\ref{fig:sim14} shows that the condition (\ref{equ:cond2}) is activated sparingly for Implementation 2 and frequently for Implementation 3.
The computational time statistics were
tallied from $19$ simulations run in sequence in Matlab under
Windows 10 on a Microsoft Surface 7 tablet for
each of the implementations. The computation
times (averaged over time instants and $19$ runs) were $9.539$ msec for Implementation 1, $4.978$ msec for
Implementation 2 and $1.9087$ msec for Implementation 3.
As is clear from Figures~\ref{fig:sim11}-\ref{fig:sim12}, the response is slightly slower in case of Implementation 3 as compared to Implementations 1 and 2.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=5cm]{CG_OnfinityTilde_fig1.pdf}~~
\includegraphics[width=5cm]{CG_OnfinityTilde_fig1_maxIter3.pdf}~~
\includegraphics[width=5cm]{CGCD_P_fig1.pdf}
\caption{Time histories of pitch angle command, modified pitch angle command and actual pitch angle: Implementation 1 (a), Implementation 2 (b), Implementation 3 (c). }
\label{fig:sim11}
\end{center}
\end{figure}
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=5cm]{CG_OnfinityTilde_fig2.pdf}~~
\includegraphics[width=5cm]{CG_OnfinityTilde_fig2_maxIter3.pdf}~~
\includegraphics[width=5cm]{CGCD_P_fig2.pdf}
\caption{Time histories of flight path angle command, modified flight path angle command and actual flight path angle:
Implementation 1 (a), Implementation 2 (b), Implementation 3 (c).
}
\label{fig:sim12}
\end{center}
\end{figure}
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=5cm]{CG_OnfinityTilde_fig3_maxonly.pdf}~~
\includegraphics[width=5cm]{CG_OnfinityTilde_fig3_maxIter3_maxonly.pdf}~~
\includegraphics[width=5cm]{CGCD_P_fig3_maxonly.pdf}
\caption{Time history of the maximum of the constraint values, $\max \{
\Gamma( C_c \bo{x}(t)+ D_c \bo{v}(t) )-
\bo{\gamma} \}$: Implementation 1 (a), Implementation 2 (b), Implementation 3 (c). }
\label{fig:sim13}
\end{center}
\end{figure}
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=5cm]{CG_OnfinityTilde_fig4.pdf}~~
\includegraphics[width=5cm]{CG_OnfinityTilde_fig4_maxIter3.pdf}~~
\includegraphics[width=5cm]{CGCD_P_fig4.pdf}
\caption{Time instants at which condition (\ref{equ:cond2}) is violated, $\bo{v}^\prime$ is rejected, and previous value of reference is held for Implementation 1 (a), Implementation 2 (b) and Implementation 3 (c). }
\label{fig:sim14}
\end{center}
\end{figure}
\section{Concluding Remarks}\label{sec:5}
The Command Governor (CG) is an add-on scheme to a nominal closed-loop system used to satisfy state and control constraints through reference command modification, maintaining the state-command pair in a safe set. By modifying the CG logic, it is possible to implement the CG without requiring this safe set to be invariant or relying on exact optimization, while retaining the finite-time convergence properties for constant commands that are typical of the CG. An F-16 longitudinal flight control simulation example has been reported that demonstrates the potential of the proposed approach for significant reduction in the computation time and in the ROM size required for implementation.
\section*{Acknowledgement} The authors would like to thank Dr. Dominic Liao-McPherson for
the code of primal-dual active set solver {\tt qpkwik} used in the numerical experiments. The second author acknowledges the support of
AFOSR under the grant number FA9550-20-1-0385 to the University
of Michigan.
\section{Introduction}
Given a positive integer $r$ and a graph $G$, the \emph{$r$-neighbour bootstrap process} begins with an initial set of ``infected'' vertices of $G$ and, at each step of the process, a vertex becomes infected if it has at least $r$ infected neighbours. More formally, if $A_0$ is the initial set of infected vertices, then the set of vertices that are infected after the $j$th step of the process for $j\geq1$ is defined by
\[A_j:=A_{j-1}\cup\left\{v\in V(G): \left|N_G(v)\cap A_{j-1}\right|\geq r\right\},\]
where $N_G(v)$ denotes the neighbourhood of $v$ in $G$. We say that $A_0$ \emph{percolates} if $\bigcup_{j=0}^\infty A_j=V(G)$. Bootstrap percolation was introduced by Chalupa, Leath and Reich~\cite{Chalupa} as a mathematical simplification of existing dynamic models of ferromagnetism, but it has also found applications in the study of other physical phenomena such as crack formation and hydrogen mixtures (see Adler and Lev~\cite{AdlerLev}). In addition, advances in bootstrap percolation have been highly influential in the study of more complex processes including, for example, the Glauber dynamics of the Ising model~\cite{RobIsing}.
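The update rule above is easy to simulate directly. The following Python sketch (included as an informal aid, not as part of the formal development) runs the $r$-neighbour bootstrap process on a graph given as an adjacency list and tests percolation; the hypercube $Q_d$ is built by flipping single bits.

```python
def bootstrap(neighbours, A0, r):
    """Run the r-neighbour bootstrap process from A0 to completion."""
    infected = set(A0)
    changed = True
    while changed:
        changed = False
        for v in neighbours:
            if v not in infected and sum(u in infected for u in neighbours[v]) >= r:
                infected.add(v)
                changed = True
    return infected

def hypercube(d):
    """Adjacency list of Q_d: vertices are 0 .. 2^d - 1, edges flip one bit."""
    return {v: [v ^ (1 << i) for i in range(d)] for v in range(1 << d)}

def percolates(neighbours, A0, r):
    return bootstrap(neighbours, A0, r) == set(neighbours)

# In Q_3 under the 3-neighbour rule, the four even-weight vertices infect
# every odd-weight vertex in one step (each odd vertex has all three of its
# neighbours at even weight).
Q3 = hypercube(3)
even = {v for v in Q3 if bin(v).count("1") % 2 == 0}
print(percolates(Q3, even, 3))  # True
```

This also illustrates why the parity construction cannot be improved naively: removing any vertex from the even-weight set leaves some odd vertex with only two infected neighbours.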
The main extremal problem in bootstrap percolation is to determine the minimum cardinality of a set which percolates under the $r$-neighbour bootstrap process on $G$; we denote this by $m(G,r)$. An important case is when $G$ is the \emph{$d$-dimensional hypercube} $Q_d$; i.e., the graph with vertex set $\{0,1\}^d$ in which two vertices are adjacent if they differ in exactly one coordinate. Balogh and Bollob\'{a}s~\cite{conj} (see also~\cite{Highdim, LinAlg}) made the following conjecture.
\begin{conj}[Balogh and Bollob\'{a}s~\cite{conj}]
\label{theConj}
For fixed $r\geq3$ and $d\to\infty$,
\[m(Q_d,r) = \frac{1+o(1)}{r}\binom{d}{r-1}.\]
\end{conj}
The upper bound of Conjecture~\ref{theConj} is not difficult to prove. Simply let $A_0$ consist of all vertices on ``level $r-2$'' of $Q_d$ and an approximate Steiner system on level $r$, whose existence is guaranteed by an important theorem of R\"{o}dl~\cite{Rodl}; see Balogh, Bollob\'{a}s and Morris~\cite{Highdim} for more details. Note that, under certain conditions on $d$ and $r$, the approximate Steiner system in this construction can be replaced with an exact Steiner system (using, for example, the celebrated result of Keevash~\cite{Keevash}). In this special case, the percolating set has cardinality $\frac{1}{r}\binom{d}{r-1} + \binom{d}{r-2}$ which yields
\begin{equation}
\label{Steiner}
m\left(Q_d,r\right) \leq \frac{d^{r-1}}{r!} + \frac{d^{r-2}(r+2)}{2r(r-2)!}+O\left(d^{r-3}\right).
\end{equation}
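To see how the second-order term in (\ref{Steiner}) arises, expand the cardinality $\frac{1}{r}\binom{d}{r-1}+\binom{d}{r-2}$ of this percolating set:
\[
\frac{1}{r}\binom{d}{r-1} = \frac{d^{r-1}}{r!} - \frac{(r-1)(r-2)}{2}\cdot\frac{d^{r-2}}{r!} + O\left(d^{r-3}\right), \qquad \binom{d}{r-2} = \frac{d^{r-2}}{(r-2)!} + O\left(d^{r-3}\right),
\]
so the coefficient of $d^{r-2}$ is
\[
\frac{1}{(r-2)!} - \frac{(r-1)(r-2)}{2\,r!} = \frac{2r(r-1)-(r-1)(r-2)}{2\,r!} = \frac{(r-1)(r+2)}{2\,r!} = \frac{r+2}{2r(r-2)!},
\]
as in (\ref{Steiner}).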
Lower bounds have been far more elusive; previously, the best known lower bound on $m\left(Q_d,r\right)$ for fixed $r\geq3$ was only linear in $d$ (see Balogh, Bollob\'{a}s and Morris~\cite{Highdim}). In this paper, we prove Conjecture~\ref{theConj}.
\begin{thm}
\label{hyper}
For $d\geq r\geq1$,
\[m\left(Q_d,r\right)\geq 2^{r-1} + \sum_{j=1}^{r-1}\binom{d-j-1}{r-j}\frac{j2^{j-1}}{r}\]
where, by convention, $\binom{a}{b}=0$ when $a<b$.
\end{thm}
For fixed $r\geq3$, Theorem~\ref{hyper} implies
\[m(Q_d,r)\geq \frac{d^{r-1}}{r!} + \frac{d^{r-2}(6-r)}{2r(r-2)!}+\Omega\left(d^{r-3}\right),\]
which differs from the upper bound in (\ref{Steiner}) by an additive term of order $\Theta\left(d^{r-2}\right)$. We will also provide a recursive upper bound on $m\left(Q_d,r\right)$, which improves on the second order term of (\ref{Steiner}). For $r=3$, we combine this recursive bound with some additional arguments to show that Theorem~\ref{hyper} is tight in this case.
\begin{thm}
\label{r=3Upper}
For $d\geq3$, we have $m(Q_d,3)=\left\lceil\frac{d(d+3)}{6}\right\rceil+1$.
\end{thm}
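For very small $d$, the value in Theorem~\ref{r=3Upper} can be confirmed by exhaustive search. The sketch below (our illustration, independent of the proofs in this paper) tries all initial sets in increasing order of size; note also that for $r=3$ the bound of Theorem~\ref{hyper} equals $d(d+3)/6+1$, so its ceiling matches Theorem~\ref{r=3Upper} exactly.

```python
from itertools import combinations
from math import ceil

def hypercube(d):
    return {v: [v ^ (1 << i) for i in range(d)] for v in range(1 << d)}

def percolates(nbrs, A0, r):
    infected, changed = set(A0), True
    while changed:
        changed = False
        for v in nbrs:
            if v not in infected and sum(u in infected for u in nbrs[v]) >= r:
                infected.add(v)
                changed = True
    return len(infected) == len(nbrs)

def m_bruteforce(d, r):
    """Smallest size of a percolating set, by exhaustive search (tiny d only)."""
    nbrs = hypercube(d)
    for size in range(1, len(nbrs) + 1):
        if any(percolates(nbrs, A0, r) for A0 in combinations(nbrs, size)):
            return size

for d in (3, 4):
    print(d, m_bruteforce(d, 3), ceil(d * (d + 3) / 6) + 1)  # 3 4 4, then 4 6 6
```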
In order to prove Theorem~\ref{hyper}, we will exploit a relationship between bootstrap percolation and the notion of weak saturation introduced by Bollob\'{a}s~\cite{wsat}. Given fixed graphs $G$ and $H$, we say that a spanning subgraph $F$ of $G$ is \emph{weakly $(G,H)$-saturated} if the edges of $E(G)\setminus E(F)$ can be added to $F$, one edge at a time, in such a way that each edge completes a copy of $H$ when it is added. The main extremal problem in weak saturation is to determine the \emph{weak saturation number} of $H$ in $G$ defined by
\[\operatorname{wsat}(G,H):=\min\left\{|E(F)|: F\text{ is weakly $(G,H)$-saturated}\right\}.\]
Weak saturation is very well studied (see, e.g.~\cite{Alon,Kalai1,Kalai2, MNS,MoshShap,Oleg}). Our proof of Theorem~\ref{hyper} relies on the following bound, which is easy to prove:
\begin{equation}\label{wsatperc}
m(G,r) \geq \frac{\operatorname{wsat}\left(G,S_{r+1}\right)}{r}
\end{equation}
where $S_{r+1}$ denotes the star with $r+1$ leaves. A slightly stronger version of (\ref{wsatperc}) is stated and proved in the next section. We obtain an exact expression for the weak saturation number of $S_{r+1}$ in the hypercube.
\begin{thm}
\label{wsatcube}
If $d \ge r \ge 0$, then
\[\operatorname{wsat}\left(Q_d, S_{r+1}\right) = r2^{r-1} + \sum_{j=1}^{r-1} \binom{d-j-1}{r-j}j 2^{j-1}.\]
\end{thm}
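For stars, weak saturation can be tested greedily: an edge $uv$ completes a copy of $S_{r+1}$ exactly when $u$ or $v$ already has degree at least $r$, and this condition is monotone under adding edges, so the order of additions does not matter. This makes tiny cases of Theorem~\ref{wsatcube} checkable by brute force; the following sketch (ours, not part of the proof) confirms the predicted values $1$ for $(d,r)=(3,1)$ and $2\cdot2+\binom{1}{1}=5$ for $(d,r)=(3,2)$.

```python
from itertools import combinations

def hypercube_edges(d):
    return [(v, v ^ (1 << i)) for v in range(1 << d)
            for i in range(d) if v < v ^ (1 << i)]

def is_weakly_saturated(all_edges, F, r):
    """Greedy closure: repeatedly add any missing edge with an endpoint of
    degree >= r; F is weakly (G, S_{r+1})-saturated iff all edges get added."""
    present = set(F)
    deg = {}
    for u, v in present:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    changed = True
    while changed:
        changed = False
        for u, v in all_edges:
            if (u, v) not in present and (deg.get(u, 0) >= r or deg.get(v, 0) >= r):
                present.add((u, v))
                deg[u] = deg.get(u, 0) + 1
                deg[v] = deg.get(v, 0) + 1
                changed = True
    return len(present) == len(all_edges)

def wsat_bruteforce(d, r):
    edges = hypercube_edges(d)
    for size in range(len(edges) + 1):
        if any(is_weakly_saturated(edges, F, r) for F in combinations(edges, size)):
            return size

print(wsat_bruteforce(3, 1), wsat_bruteforce(3, 2))  # 1 5
```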
Note that Theorem~\ref{hyper} follows directly from this theorem and (\ref{wsatperc}). More generally, we determine the weak saturation number of $S_{r+1}$ in the $d$-dimensional $a_1\times\cdots \times a_d$ grid, denoted by $\prod_{i=1}^d[a_i]$. We state this result here in the case $d\geq r$; an even more general result is expressed later in terms of a recurrence relation.
\begin{thm}
\label{genwsat}
For $d\geq r\geq1$ and $a_1,\dots,a_d\geq2$,
\begin{dmath*}{\operatorname{wsat}\left(\prod_{i=1}^d[a_i],S_{r+1}\right)} = \sum_{\substack{S\subseteq [d]\\ |S|\leq r-1}}\left(\prod_{i\in S}(a_i-2)\right)\left({(r-|S|)2^{r-|S|-1}} + {\sum_{j=1}^{r-|S|-1}\binom{d-|S|-j-1}{r-|S|-j}j2^{j-1}}\right).\end{dmath*}
\end{thm}
Observe that a lower bound on $m\left(\prod_{i=1}^d[a_i],r\right)$ follows from Theorem~\ref{genwsat} and (\ref{wsatperc}). To our knowledge, the combination of Theorem~\ref{genwsat} and (\ref{wsatperc}) implies all of the known lower bounds on the cardinality of percolating sets in multidimensional grids. In particular, it implies the (tight) lower bounds
\begin{equation}\label{d=r}m\left([n]^d,d\right)\geq n^{d-1},\end{equation}
and
\begin{equation}\label{r=2}m\left(\prod_{i=1}^d[a_i],2\right) \geq \left\lceil\frac{\sum_{i=1}^d(a_i-1)}{2}\right\rceil + 1.\end{equation}
established in~\cite{Pete} and~\cite{conj}, respectively.
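The closed form in Theorem~\ref{genwsat} is straightforward to evaluate, and two consistency checks are immediate: when all $a_i=2$, every factor $\prod_{i\in S}(a_i-2)$ vanishes unless $S=\emptyset$, recovering Theorem~\ref{wsatcube}; and for $r=2$ the value equals $\sum_i(a_i-1)+2$, so dividing by $r$ as in (\ref{wsatperc}) gives exactly (\ref{r=2}). A sketch (ours, for illustration only):

```python
from itertools import combinations
from math import comb

def wsat_star_grid(a, r):
    """Theorem genwsat closed form; requires d >= r >= 1 and all a_i >= 2."""
    d = len(a)
    assert d >= r >= 1 and all(x >= 2 for x in a)
    total = 0
    for s in range(r):                       # s = |S| <= r - 1
        for S in combinations(range(d), s):
            prod = 1
            for i in S:
                prod *= a[i] - 2
            inner = (r - s) * 2 ** (r - s - 1)
            inner += sum(comb(d - s - j - 1, r - s - j) * j * 2 ** (j - 1)
                         for j in range(1, r - s))
            total += prod * inner
    return total

def wsat_star_cube(d, r):
    """Theorem wsatcube closed form."""
    return r * 2 ** (r - 1) + sum(comb(d - j - 1, r - j) * j * 2 ** (j - 1)
                                  for j in range(1, r))

print(wsat_star_grid((2,) * 5, 3), wsat_star_cube(5, 3))                 # 23 23
print(wsat_star_grid((3, 4, 6), 2), sum(x - 1 for x in (3, 4, 6)) + 2)   # 12 12
```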
An important motivation for Conjecture~\ref{theConj} stems from its potential applications in a probabilistic setting. The most well studied problem in bootstrap percolation is to estimate the \emph{critical probability} at which a randomly generated set of vertices in a graph $G$ becomes likely to percolate. To be more precise, for $p\in [0,1]$, suppose that $A_0^p$ is a subset of $V(G)$ obtained by including each vertex randomly with probability $p$ independently of all other vertices and define
\[p_c(G,r):=\inf\left\{p: \mathbb{P}\left(A_0^p \text{ percolates}\right)\geq 1/2 \right\}.\]
The problem of estimating $p_c\left([n]^d,r\right)$ for fixed $d$ and $r$ and $n\to\infty$ was first considered by Aizenman and Lebowitz~\cite{Aiz} and subsequently studied in~\cite{3Dim,Cerf,Cerf2,sharper,Holroyd}. This rewarding line of research culminated in a paper of Balogh, Bollob{\'a}s, Duminil-Copin and Morris~\cite{sharp} in which $p_c\left([n]^d,r\right)$ is determined asymptotically for all fixed values of $d$ and $2\leq r\leq d$.
Comparatively, far less is known about the critical probability when $d$ tends to infinity. In this regime, the main results are due to Balogh, Bollob\'{a}s and Morris in the case $r=d$~\cite{majority} and $r=2$~\cite{Highdim}. In the latter paper, the extremal bound (\ref{r=2}) was applied to obtain precise asymptotics for $p_c\left([n]^d,2\right)$ whenever $d\gg \log(n)\geq 1$. In contrast, very little is known about the critical probability for fixed $r\geq3$ and $d\to\infty$. For example, the logarithm of $p_c\left(Q_d,3\right)$ is not even known to within a constant factor (see~\cite{Highdim}). As was mentioned in~\cite{LinAlg}, a stumbling block in obtaining good estimates for $p_c\left(Q_d,r\right)$ when $d\to\infty$ has been the lack of an asymptotically tight lower bound on $m\left(Q_d,r\right)$. In this paper, we provide such a bound.
The rest of the paper is organised as follows. In the next section, we outline our approach to proving Theorems~\ref{hyper} and~\ref{genwsat} and establish some preliminary lemmas. In Section~\ref{hyperSec}, we prove Theorem~\ref{wsatcube}. We then determine $\operatorname{wsat}\left(\prod_{i=1}^d[a_i],S_{r+1}\right)$ in full generality in Section~\ref{generalSec} using similar ideas (which become somewhat more cumbersome in the general setting). In Section~\ref{upperSec}, we provide constructions of small percolating sets in the hypercube and prove Theorem~\ref{r=3Upper}. Finally, we conclude the paper in Section~\ref{concl} by stating some open problems related to our work.
\section{Preliminaries}
\label{prelim}
We open this section by proving the following lemma, which improves on (\ref{wsatperc}) for graphs with vertices of degree less than $r$ (including, for example, the graph $\prod_{i=1}^d[a_i]$ for $d<r$).
\begin{lem}
\label{edge}
Let $G$ be a graph and let $F$ be a spanning subgraph of $G$ such that the set
\[A_0:=\left\{v\in V(G): d_F(v)\geq\min\left\{r,d_G(v)\right\}\right\}\]
percolates with respect to the $r$-neighbour bootstrap process on $G$. Then $F$ is weakly $\left(G,S_{r+1}\right)$-saturated.
\end{lem}
\begin{proof}
By hypothesis, we can label the vertices of $G$ by $v_1,\dots,v_{n}$ in such a way that
\begin{itemize}
\item $\left\{v_1,\dots,v_{|A_0|}\right\}= A_0$, and
\item for $|A_0|+1\leq i\leq n$, the vertex $v_i$ has at least $r$ neighbours in $\left\{v_1,\dots,v_{i-1}\right\}$.
\end{itemize}
Let us show that $F$ is weakly $(G,S_{r+1})$-saturated. We begin by adding to $F$ every edge of $E(G)\setminus E(F)$ which is incident to a vertex in $A_0$ (one edge at a time in an arbitrary order). For every vertex $v\in A_0$, we have that either
\begin{itemize}
\item there are at least $r$ edges of $F$ incident to $v$, or
\item every edge of $G$ incident with $v$ is already present in $F$.
\end{itemize}
Therefore, every edge of $E(G)\setminus E(F)$ incident to a vertex in $A_0$ completes a copy of $S_{r+1}$ when it is added.
Now, for each $i=|A_0|+1,\dots, n$ in turn, we add every edge incident to $v_i$ which has not already been added (one edge at a time in an arbitrary order). Since $v_i$ has at least $r$ neighbours in $\left\{v_1,\dots,v_{i-1}\right\}$ and every edge incident to a vertex in $\left\{v_1,\dots,v_{i-1}\right\}$ is already present, we get that every such edge completes a copy of $S_{r+1}$ when it is added. The result follows.
\end{proof}
For completeness, we will now deduce (\ref{wsatperc}) from Lemma~\ref{edge}.
\begin{proof}[Proof of (\ref{wsatperc})]
Let $A_0$ be a set of cardinality $m(G,r)$ which percolates with respect to the $r$-neighbour bootstrap process on $G$ and let $F$ be a spanning subgraph of $G$ such that $d_F(v)\geq \min\left\{d_G(v),r\right\}$ for each $v\in A_0$. Note that this can be achieved by adding at most $r$ edges per vertex of $A_0$ and so we can assume that $|E(F)|\leq r|A_0|=rm(G,r)$. By Lemma~\ref{edge}, $F$ is weakly $(G,S_{r+1})$-saturated and so
\[\operatorname{wsat}\left(G,S_{r+1}\right)\leq |E(F)|\leq rm(G,r)\]
as required.
\end{proof}
We turn our attention to determining the weak saturation number of stars in hypercubes and, more generally, in multidimensional rectangular grids. To prove an upper bound on a weak saturation number, one only needs to construct a \emph{single} example of a weakly saturated graph of small size. Our main tool for proving the lower bound is the following linear algebraic lemma of Balogh, Bollob\'{a}s, Morris and Riordan~\cite{LinAlg}. A major advantage of this lemma is that it allows us to prove the lower bound in a constructive manner as well. We include a proof for completeness.
\begin{lem}[Balogh, Bollob\'{a}s, Morris and Riordan~\cite{LinAlg}]
\label{linalg}
Let $G$ and $H$ be graphs and let $W$ be a vector space. Suppose that $\left\{f_e: e\in E(G)\right\}$ is a collection of vectors in $W$ such that for every copy $H'$ of $H$ in $G$ there exist non-zero coefficients $\left\{c_e:e\in E(H')\right\}$ such that $\sum_{e\in E(H')}c_ef_e = 0$. Then
\[\operatorname{wsat}(G,H) \geq \operatorname{dim}(\operatorname{span}\left\{f_e:e\in E(G)\right\}).\]
\end{lem}
\begin{proof}
Let $F$ be a weakly $(G,H)$-saturated graph and define $m:=|E(G)\setminus E(F)|$. By definition of $F$, we can label the edges of $E(G)\setminus E(F)$ by $e_1,\dots,e_m$ in such a way that, for $1\leq i\leq m$, there is a copy $H_i$ of $H$ in $F_i:=F\cup\left\{e_1,\dots,e_i\right\}$ containing the edge $e_i$. By the hypothesis, we get that
\[f_{e_i}\in \operatorname{span}\left\{f_e: e\in E(H_i)\setminus\left\{e_i\right\}\right\}\subseteq \operatorname{span}\left\{f_e: e\in E(F_i)\setminus\left\{e_i\right\}\right\}\]
for all $i$. Therefore,
\[|E(F)|\geq \operatorname{dim}\left(\operatorname{span}\left\{f_e: e\in E(F)\right\}\right) = \operatorname{dim}\left(\operatorname{span}\left\{f_e: e\in E\left(F_1\right)\right\}\right)\]
\[=\dots =\operatorname{dim}\left(\operatorname{span}\left\{f_e:e\in E\left(F_m\right)\right\}\right) = \operatorname{dim}\left(\operatorname{span}\left\{f_e:e\in E(G)\right\}\right).\]
The result follows.
\end{proof}
Lemma~\ref{linalg} was proved in a more general form and applied to a percolation problem in multidimensional square grids in~\cite{LinAlg}. It was also used by Morrison, Noel and Scott~\cite{MNS} to determine $\operatorname{wsat}\left(Q_d,Q_m\right)$ for all $d\geq m\geq1$. We remark that the general idea of applying the notions of dependence and independence in weak saturation problems is also present in the works of Alon~\cite{Alon} and Kalai~\cite{Kalai2}, where techniques involving exterior algebra and matroid theory were used to prove a tight lower bound on $\operatorname{wsat}(K_n,K_k)$ conjectured by Bollob\'{a}s~\cite{Bollobook}. For a more recent application of exterior algebra and matroid theory to weak saturation problems, see the paper of Pikhurko~\cite{Oleg}.
\section{The Hypercube Case}
\label{hyperSec}
Our goal in this section is to prove Theorem~\ref{wsatcube}. This will settle the case $a_1=\dots=a_d=2$ of Theorem~\ref{genwsat} and, as discussed earlier, imply Theorem~\ref{hyper} via (\ref{wsatperc}). First, we require some definitions.
\begin{defn}
Given $k\geq 1$, an index $i\in [k]$ and $x\in \mathbb{R}^k$, let $x_i$ denote the $i$th coordinate of $x$. The \emph{support} of $x$ is defined by $\operatorname{supp}(x):=\left\{i\in[k]: x_i\neq 0\right\}$.
\end{defn}
\begin{defn}
The \emph{direction} of an edge $e=uv\in E\left(Q_d\right)$ is the unique index $i\in [d]$ such that $u_i\neq v_i$. Given a vertex $v\in V(Q_d)$, we define $e(v,i)$ to be the unique edge in direction $i$ that is incident to $v$.
\end{defn}
Note that each edge of $Q_d$ receives two labels (one for each of its endpoints). Our approach will make use of the following simple linear algebraic fact.
\begin{lem}
\label{support}
Let $k\geq \ell\geq0$ be fixed. Then there exists a subspace $X$ of $\mathbb{R}^k$ of dimension $k-\ell$ such that $\left|\operatorname{supp}(x)\right|\geq \ell+1$ for every $x\in X\setminus \{0\}$.
\end{lem}
\begin{proof}
Define $X$ to be the span of a set $\left\{v_1,\dots,v_{k-\ell}\right\}$ of unit vectors of $\mathbb{R}^k$ chosen independently and uniformly at random with respect to the standard Lebesgue measure on the unit sphere $S^{k-1}$. Given a fixed subspace $W$ of $\mathbb{R}^k$ of dimension at most $\ell$ and $1\leq i\leq k-\ell$, the space
\[\operatorname{span}\left(W\cup\left\{v_1,\dots,v_{i-1}\right\}\right)\]
has dimension less than $k$. Thus, the unit sphere of this space has measure zero in $S^{k-1}$ and so, with probability one, $v_{i}\notin\operatorname{span}\left(W\cup\left\{v_1,\dots,v_{i-1}\right\}\right)$. It follows that $\operatorname{dim}(X)=k-\ell$ and $X\cap W=\{0\}$ almost surely. In particular, if we let $T\subseteq [k]$ be a fixed set of cardinality $\ell$ and define
\[W_T:=\left\{x\in\mathbb{R}^k: \operatorname{supp}(x)\subseteq T\right\},\]
then $X\cap W_T=\{0\}$ almost surely. Since there are only finitely many sets $T\subseteq [k]$ of cardinality $\ell$, we can assume that $X$ is chosen so that $X\cap W_T=\{0\}$ for every such set. This completes the proof.
\end{proof}
In the appendix, we provide an explicit (i.e., non-probabilistic) example of a vector space $X$ satisfying Lemma~\ref{support}. The following lemma highlights an important property of the space $X$.
\begin{lem}
\label{Tsupp}
Let $k\geq\ell\geq0$ and let $X$ be a subspace of $\mathbb{R}^k$ of dimension $k-\ell$ such that $|\operatorname{supp}(x)| \ge \ell + 1$ for every $x \in X\backslash \{0\}$. For every set $T \subseteq [k]$ of cardinality $\ell+1$, there exists $x \in X$ with $\operatorname{supp}(x) = T$.
\end{lem}
\begin{proof}
Let $T\subseteq[k]$ with $|T|=\ell+1$. Clearly, the space $\{x\in \mathbb{R}^k: \operatorname{supp}(x)\subseteq T\}$ has dimension $\ell+1$. Therefore, since $\operatorname{dim}(X)=k-\ell$, there must be a non-zero vector $x\in X$ with $\operatorname{supp}(x)\subseteq T$. However, this inclusion must hold with equality since $|\operatorname{supp}(x)|\geq \ell+1$.
\end{proof}
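One well-known explicit construction (offered here only as an illustration, and possibly different from the one in the appendix) is the evaluation space of low-degree polynomials: let $X=\{(p(1),\dots,p(k)) : \deg p\leq k-\ell-1\}$. A non-zero polynomial of degree at most $k-\ell-1$ has at most $k-\ell-1$ roots, so every non-zero vector of $X$ has support of size at least $\ell+1$. Since $\dim(X+W_T)=\dim X+\dim W_T-\dim(X\cap W_T)$, the condition $X\cap W_T=\{0\}$ for a set $T$ of size $\ell$ is equivalent to the stacked basis having full rank $k$, which the sketch below checks with exact rational arithmetic.

```python
from fractions import Fraction
from itertools import combinations

def rank(rows):
    """Rank over the rationals via Gaussian elimination."""
    rows = [list(map(Fraction, row)) for row in rows]
    r, n_cols = 0, len(rows[0])
    for c in range(n_cols):
        pivot = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                f = rows[i][c] / rows[r][c]
                rows[i] = [x - f * y for x, y in zip(rows[i], rows[r])]
        r += 1
    return r

def poly_basis(k, ell):
    """Basis of X = {(p(1), ..., p(k)) : deg p <= k - ell - 1}."""
    return [[Fraction(t) ** m for t in range(1, k + 1)] for m in range(k - ell)]

k, ell = 6, 2
X = poly_basis(k, ell)
assert rank(X) == k - ell                      # dim X = k - ell
for T in combinations(range(k), ell):          # X ∩ W_T = {0} for every |T| = ell
    e_T = [[Fraction(int(j == t)) for j in range(k)] for t in T]
    assert rank(X + e_T) == k
print("support condition verified for k = 6, ell = 2")
```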
We are now in a position to prove Theorem~\ref{wsatcube}. For notational convenience, we write
\[w:=r2^{r-1} + \sum_{j=1}^{r-1} \binom{d-j-1}{r-j}j 2^{j-1}.\]
Also, using Lemma~\ref{support}, let $X$ be a subspace of $\mathbb{R}^d$ of dimension $d-r$ such that $|\operatorname{supp}(x)|\geq r+1$ for every $x\in X\setminus\{0\}$. We deduce Theorem~\ref{wsatcube} from the following lemma, after which we will prove the lemma itself.
\begin{lem}
\label{theProp}
There is a spanning subgraph $F$ of $Q_d$ and a collection $\left\{f_e: e\in E\left(Q_d\right)\right\}\subseteq \mathbb{R}^w$ such that
\begin{enumerate}
\renewcommand{\theenumi}{Q\arabic{enumi}}
\renewcommand{\labelenumi}{(\theenumi)}
{\setlength\itemindent{4pt}\item \label{wsat} $F$ is weakly $\left(Q_d,S_{r+1}\right)$-saturated and $|E(F)| = w$,}
{\setlength\itemindent{4pt}\item\label{satis} $\sum_{i=1}^d x_i f_{e(v,i)} = 0$ for every $v\in V\left(Q_d\right)$ and $x\in X$, and}
{\setlength\itemindent{4pt}\item\label{BigDim} $\operatorname{span}\left\{f_e: e\in E\left(Q_d\right)\right\}=\mathbb{R}^w$. }
\end{enumerate}
\end{lem}
\begin{proof}[Proof of Theorem~\ref{wsatcube}]
Clearly, the existence of a graph $F$ satisfying (\ref{wsat}) implies the upper bound $\operatorname{wsat}(Q_d,S_{r+1})\leq w$. We show that the lower bound follows from (\ref{satis}), (\ref{BigDim}) and Lemma~\ref{linalg}. Note that the edge sets of copies of $S_{r+1}$ in $Q_d$ are precisely the sets of the form $\{e(v,i): i\in T\}$ where $v$ is a fixed vertex of $Q_d$ and $T$ is a subset of $[d]$ of cardinality $r+1$. By Lemma~\ref{Tsupp} we know that there exists some $x \in X$ with $\operatorname{supp}(x) = T$. By (\ref{satis}) we have
\[\sum_{i=1}^d x_i f_{e(v,i)} = \sum_{i\in T}x_if_{e(v,i)} = 0.\]
Therefore, by Lemma~\ref{linalg},
\[\operatorname{wsat}\left(Q_d,S_{r+1}\right)\geq \operatorname{dim}(\operatorname{span}\left\{f_e:e\in E(Q_d)\right\})\]
which equals $w$ by (\ref{BigDim}). The result follows.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{theProp}]
We proceed by induction on $d$. We begin by settling some easy boundary cases before explaining the inductive step.
\begin{case}
$r=0$.
\end{case}
In this case, $S_{r+1}\simeq K_2$. Also, $w=0$ and $X=\mathbb{R}^d$. We let $F$ be a spanning subgraph of $Q_d$ with no edges and set $f_e:=0$ for every $e\in E\left(Q_d\right)$. It is trivial to check that (\ref{wsat}), (\ref{satis}) and (\ref{BigDim}) are satisfied.
\begin{case}
$d=r\geq 1$.
\end{case}
In this case, $w=d2^{d-1}= \left|E\left(Q_d\right)\right|$ and $X=\{0\}$. We define $F:=Q_d$ and let $\left\{f_e: e\in E\left(Q_d\right)\right\}$ be a basis for $\mathbb{R}^w$. Clearly (\ref{wsat}), (\ref{satis}) and (\ref{BigDim}) are satisfied.
\begin{case}
$d>r\geq1$.
\end{case}
We begin by constructing $F$ in such a way that (\ref{wsat}) is satisfied. For $i\in \{0,1\}$, let $Q_{d-1}^i$ denote the subgraph of $Q_d$ induced by $\{0,1\}^{d-1}\times\{i\}$. Note that both $Q_{d-1}^0$ and $Q_{d-1}^1$ are isomorphic to $Q_{d-1}$. Let $F$ be a spanning subgraph of $Q_d$ such that
\begin{itemize}
\item the subgraph $F_0$ of $F$ induced by $V\left(Q_{d-1}^0\right)$ is a weakly $(Q_{d-1},S_{r+1})$-saturated graph of minimum size,
\item the subgraph $F_1$ of $F$ induced by $V\left(Q_{d-1}^1\right)$ is a weakly $(Q_{d-1},S_{r})$-saturated graph of minimum size, and
\item $F$ contains no edge in direction $d$.
\end{itemize}
Figure~\ref{Q5} contains a specific instance of this construction. Define $w_0:=\operatorname{wsat}\left(Q_{d-1},S_{r+1}\right)$ and $w_1:=\operatorname{wsat}\left(Q_{d-1},S_{r}\right)$. By construction, we have $|E(F)|=w_0+w_1$ which is equal to $w$ by the inductive hypothesis. Let us verify that $F$ is weakly $\left(Q_d,S_{r+1}\right)$-saturated. To see this, we add the edges of $E\left(Q_d\right)\setminus E(F)$ to $F$ in three stages. By construction, we can begin by adding all edges of $Q_{d-1}^0$ which are not present in $F_0$ in such a way that each edge completes a copy of $S_{r+1}$ in $Q_{d-1}^0$ when it is added. In the second stage, we add all edges of $Q_d$ in direction $d$ one by one in any order. Since every vertex of $Q_d$ has degree $d\geq r+1$ and every edge of $Q_{d-1}^0$ has already been added, we get that every edge added in this stage completes a copy of $S_{r+1}$ in $Q_d$. Finally, we add the edges of $Q_{d-1}^1$ which are not present in $F_1$ in such a way that each added edge completes a copy of $S_{r}$ in $Q_{d-1}^1$. Since the edges in direction $d$ have already been added, we see that every such edge completes a copy of $S_{r+1}$ in $Q_d$. Therefore, (\ref{wsat}) holds.
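The counting step $|E(F)|=w_0+w_1=w$ amounts to the identity $w(d,r)=w(d-1,r)+w(d-1,r-1)$ for the closed form of Theorem~\ref{wsatcube} when $d>r\geq1$, which can be checked by applying Pascal's rule to each binomial coefficient. A quick numerical confirmation (ours, for illustration):

```python
from math import comb

def w(d, r):
    """Closed form of Theorem wsatcube, for d >= r >= 0."""
    if r == 0:
        return 0
    return r * 2 ** (r - 1) + sum(comb(d - j - 1, r - j) * j * 2 ** (j - 1)
                                  for j in range(1, r))

for d in range(1, 13):
    assert w(d, d) == d * 2 ** (d - 1)          # base case: all of E(Q_d)
    for r in range(1, d):
        assert w(d, r) == w(d - 1, r) + w(d - 1, r - 1)
print("recursion w(d, r) = w(d-1, r) + w(d-1, r-1) verified for d <= 12")
```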
\begin{figure}[htbp]
\newcommand{\Height}{1.6}
\newcommand{\Shift}{2.5*\Height}
\tikzstyle{loosely dashed}= [dash pattern=on 5pt off 3pt]
\begin{center}
\begin{tikzpicture}
\coordinate (O1) at (0,0,0);
\coordinate (A1) at (0,1.3,0);
\coordinate (B1) at (0,1.3,1.6);
\coordinate (C1) at (0,0,1.6);
\coordinate (D1) at (2.1,0,0);
\coordinate (E1) at (2.1,1.3,0);
\coordinate (F1) at (2.1,1.3,1.6);
\coordinate (G1) at (2.1,0,1.6);
\draw[very thick] (O1) -- (C1) -- (G1) -- (D1) -- cycle;
\draw[very thick,dashed] (O1) -- (A1) -- (E1) -- (D1);
\draw[very thick,dashed] (A1) -- (B1) -- (C1);
\draw[very thick,dashed] (E1) -- (F1) -- (G1);
\draw[very thick] (B1) -- (F1);
\draw (2.1/2 +2.5*\Height/2, 1.3/2,1.6/2) node [rectangle, draw, thick, rounded corners, minimum height=6.2em,minimum width=18em]{};
\draw (2.1/2+2.5*\Height/2,0,1.6/2) node [label={[label distance=0.45cm]270:$F_{1}$}] {};
\coordinate (O2) at (0+2.5*\Height,0,0);
\coordinate (A2) at (0+2.5*\Height,1.3,0);
\coordinate (B2) at (0+2.5*\Height,1.3,1.6);
\coordinate (C2) at (0+2.5*\Height,0,1.6);
\coordinate (D2) at (2.1+2.5*\Height,0,0);
\coordinate (E2) at (2.1+2.5*\Height,1.3,0);
\coordinate (F2) at (2.1+2.5*\Height,1.3,1.6);
\coordinate (G2) at (2.1+2.5*\Height,0,1.6);
\draw[very thick,dashed] (O2) -- (C2) -- (G2) -- (D2) -- cycle;
\draw[very thick,dashed] (O2) -- (A2) -- (E2) -- (D2);
\draw[very thick,dashed] (A2) -- (B2) -- (C2);
\draw[very thick,dashed] (E2) -- (F2) -- (G2);
\draw[very thick] (B2) -- (F2);
\coordinate (O3) at (0,-2.5*\Height,0);
\coordinate (A3) at (0,1.3-2.5*\Height,0);
\coordinate (B3) at (0,1.3-2.5*\Height,1.6);
\coordinate (C3) at (0,-2.5*\Height,1.6);
\coordinate (D3) at (2.1,-2.5*\Height,0);
\coordinate (E3) at (2.1,1.3-2.5*\Height,0);
\coordinate (F3) at (2.1,1.3-2.5*\Height,1.6);
\coordinate (G3) at (2.1,-2.5*\Height,1.6);
\draw[very thick] (O3) -- (C3) -- (G3) -- (D3) -- cycle;
\draw[very thick] (O3) -- (A3) -- (E3) -- (D3);
\draw[very thick] (A3) -- (B3) -- (C3);
\draw[very thick] (E3) -- (F3) -- (G3);
\draw[very thick] (B3) -- (F3);
\draw (2.1/2 +2.5*\Height/2, 1.3/2 - 2.5*\Height,1.6/2) node [rectangle, draw, thick, rounded corners, minimum height=6.2em,minimum width=18em]{};
\draw (2.1/2+2.5*\Height/2,-2.5*\Height,1.6/2) node [label={[label distance=0.45cm]270:$F_{0}$}] {};
\coordinate (O4) at (0+2.5*\Height,0-2.5*\Height,0);
\coordinate (A4) at (0+2.5*\Height,1.3-2.5*\Height,0);
\coordinate (B4) at (0+2.5*\Height,1.3-2.5*\Height,1.6);
\coordinate (C4) at (0+2.5*\Height,0-2.5*\Height,1.6);
\coordinate (D4) at (2.1+2.5*\Height,0-2.5*\Height,0);
\coordinate (E4) at (2.1+2.5*\Height,1.3-2.5*\Height,0);
\coordinate (F4) at (2.1+2.5*\Height,1.3-2.5*\Height,1.6);
\coordinate (G4) at (2.1+2.5*\Height,0-2.5*\Height,1.6);
\draw[very thick] (O4) -- (C4) -- (G4) -- (D4) -- cycle;
\draw[very thick,dashed] (O4) -- (A4) -- (E4) -- (D4);
\draw[very thick,dashed] (A4) -- (B4) -- (C4);
\draw[very thick,dashed] (E4) -- (F4) -- (G4);
\draw[very thick] (B4) -- (F4);
\draw[line width=2,loosely dashed] (2.1 +0.17*\Shift, 1.3/2,1.6/2) -- (2.5*\Height-0.17*\Shift, 1.3/2,1.6/2);
\draw[line width=2,loosely dashed] (2.1 +0.17*\Shift, 1.3/2 - 2.5*\Height,1.6/2) -- (2.5*\Height-0.17*\Shift, 1.3/2 - 2.5*\Height,1.6/2);
\draw[line width=2,loosely dashed] (2.1/2, -0.17*\Shift ,1.6/2) -- (2.1/2, -2.5*\Height+0.17*\Shift +1.3,1.6/2);
\draw[line width=2,loosely dashed] (2.5*\Height + 2.1/2, -0.17*\Shift ,1.6/2) -- (2.5*\Height +2.1/2, -2.5*\Height+0.17*\Shift +1.3,1.6/2);
\end{tikzpicture}
\end{center}
\caption{A weakly $(Q_5,S_4)$-saturated graph $F$ constructed inductively from a weakly $(Q_4,S_4)$-saturated graph $F_0$ and a weakly $(Q_4,S_3)$-saturated graph $F_1$, each of which is also constructed inductively. }
\label{Q5}
\end{figure}
Thus, all that remains is to construct $\left\{f_e: e\in E\left(Q_d\right)\right\}$ in such a way that (\ref{satis}) and (\ref{BigDim}) are satisfied. Let $\pi:X\to \mathbb{R}^{d-1}$ be the standard projection defined by $\pi:\left(x_1,\dots,x_d\right)\mapsto \left(x_1,\dots,x_{d-1}\right)$. Let $z\in X$ be an arbitrary vector such that $d\in\operatorname{supp}\left(z\right)$ (such a vector exists by Lemma~\ref{Tsupp}) and let $T_{z}: X\to X$ be the linear map defined by
\[T_{z}(x):= x - \frac{x_d}{z_d}z\]
for $x\in X$. Define
\[X_0:= \pi\left(T_{z}(X)\right)\text { and}\]
\[X_1:= \pi(X).\]
Clearly, $\ker\left(T_{z}\right) = \operatorname{span}\left\{z\right\}$ and, since every $x\in X\setminus\{0\}$ has $|\operatorname{supp}(x)|\geq r+1 \geq2$, we have $\ker(\pi)=\{0\}$. This implies that $X_0$ has dimension $d-r-1$ and that $X_1$ has dimension $d-r$. Also, by construction, we have that $|\operatorname{supp}(x)|\geq r+1$ for every non-zero $x\in X_0$ and $|\operatorname{supp}(x)|\geq r$ for every non-zero $x\in X_1$.
Therefore, by the inductive hypothesis, there exists $\left\{f_e^0: e\in E\left(Q_{d-1}^0\right)\right\}$ in $\mathbb{R}^{w_0}$ and $\left\{f_e^1: e\in E\left(Q_{d-1}^1\right)\right\}$ in $\mathbb{R}^{w_1}$ such that
\begin{enumerate}
\renewcommand{\theenumi}{Q2.\arabic{enumi}}
\renewcommand{\labelenumi}{(\theenumi)}
\addtocounter{enumi}{-1}
{\setlength\itemindent{10pt}\item\label{satisfied0} $\sum_{i=1}^{d-1} x_i f_{e(v,i)}^0 = 0$ for every $v\in V\left(Q_{d-1}^0\right)$ and $x\in X_0$,}
{\setlength\itemindent{10pt}\item\label{satisfied1} $\sum_{i=1}^{d-1} x_i f_{e(v,i)}^1 = 0$ for every $v\in V\left(Q_{d-1}^1\right)$ and $x\in X_1$,}
\renewcommand{\theenumi}{Q3.\arabic{enumi}}
\renewcommand{\labelenumi}{(\theenumi)}
\addtocounter{enumi}{-2}
{\setlength\itemindent{10pt}\item\label{bigDim0} $\operatorname{span}\left\{f_e^0: e\in E\left(Q_{d-1}^0\right)\right\}=\mathbb{R}^{w_0}$, and}
{\setlength\itemindent{10pt}\item\label{bigDim1} $\operatorname{span}\left\{f_e^1: e\in E\left(Q_{d-1}^1\right)\right\}=\mathbb{R}^{w_1}$. }
\end{enumerate}
We will define the vectors $\left\{f_e: e\in E\left(Q_d\right)\right\}\subseteq \mathbb{R}^{w_0}\oplus\mathbb{R}^{w_1} \simeq \mathbb{R}^w$ satisfying (\ref{satis}) and (\ref{BigDim}) in three stages. First, if $e\in E\left(Q_{d-1}^0\right)$, then we set
\[f_e:=f_e^0\oplus 0.\]
Next, for each edge of the form $e = e(v,d)$ for $v\in V\left(Q_{d-1}^0\right)$, we let
\begin{equation}
\label{xstar}
f_e:= -\frac{1}{z_d}\sum_{i=1}^{d-1}z_if_{e(v,i)}
\end{equation}
(recall the definition of $z$ above). Finally, if $e=uv\in E\left(Q_{d-1}^1\right)$, then we let $e' = u'v'$ where $u'$ and $v'$ are the unique neighbours of $u$ and $v$ in $V\left(Q_{d-1}^0\right)$ and define
\[f_e:=f_{e'}^0\oplus f_e^1.\]
It is easily observed that $\operatorname{dim}\left(\operatorname{span}\left\{f_e: e\in E\left(Q_d\right)\right\}\right) = w_0 +w_1 = w$ by (\ref{bigDim0}), (\ref{bigDim1}) and the construction of $f_e$ given above. Therefore, (\ref{BigDim}) holds.
Finally, we prove that (\ref{satis}) is satisfied. First, let $v\in V\left(Q_{d-1}^0\right)$ and let $x\in X$ be arbitrary. Define $x^\dagger:=T_{z}(x)$ and note that $d\notin \operatorname{supp}\left(x^\dagger\right)$. We have
\[\sum_{i=1}^{d} x_i f_{e(v,i)} = \sum_{i=1}^{d-1} x^\dagger_i f_{e(v,i)} + \frac{x_d}{z_d}\sum_{i=1}^dz_if_{e(v,i)}\]
by definition of $T_{z}$. Both of the sums on the right side are zero by (\ref{satisfied0}) and (\ref{xstar}). Now, suppose that $v\in V\left(Q_{d-1}^1\right)$ and let $v'$ be the unique neighbour of $v$ in $V\left(Q_{d-1}^0\right)$. Given $x\in X$, we have
\[\sum_{i=1}^{d} x_i f_{e(v,i)} = \sum_{i=1}^{d}x_if_{e(v',i)} + \sum_{i=1}^{d-1}x_i \left(0\oplus f_{e(v,i)}^1\right) \]
which is zero by (\ref{satisfied1}) and the fact that $\sum_{i=1}^{d} x_i f_{e(v',i)}=0,$ which was proven above (as $v' \in V\left(Q_{d-1}^0\right)$). Therefore, (\ref{satis}) holds. This completes the proof of the lemma.
\end{proof}
\section{General Grids}
\label{generalSec}
Our objective in this section is to determine the weak saturation number of $S_{r+1}$ in $\prod_{i=1}^d[a_i]$ in full generality. We express this weak saturation number in terms of the following recurrence relation.
\begin{defn}
\label{recursion}
Let $d$ and $r$ be integers such that $0\leq r\leq 2d$ and let $a_1,\ldots,a_d \ge 2$. Define $w_r(a_1,\ldots,a_d)$ to be
\begin{itemize}
\item $0, \text{ if } r=0$;
\item \( \displaystyle \sum_{j=1}^d(a_j-1)\prod_{i\neq j}a_i, \text{ if }r=2d; \)
\item \( \displaystyle d2^{d-1}, \text{ if } {a_1=\dots=a_d=2\text{ and }d+1\leq r\leq 2d-1}; \)
\item \( \displaystyle r2^{r-1} + \sum_{j=1}^{r-1} \binom{d-j-1}{r-j}j 2^{j-1}, \text{ if } {a_1=\dots=a_d=2 \text{ and } 1\leq r\leq d}; \text{ and } \)
\item \( \displaystyle {w_r(a_1,\dots,a_{i-1},a_i-1,a_{i+1},\dots,a_d)} + {w_{r-1}(a_1,\dots,a_{i-1},a_{i+1},\dots,a_d)} \\+ {\sum_{\substack{S\subseteq[d]\setminus\{i\}\\ |S|\geq 2d-r}}2^{|S|}\prod_{j\in[d]\setminus(S\cup\{i\})}(a_j-2)},\text{ if }1\leq r\leq 2d-1\text{ and }a_i\geq3.\)
\end{itemize}
\end{defn}
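Definition~\ref{recursion} can be evaluated directly by memoised recursion (with the hypercube case taken to match Theorem~\ref{wsatcube}). In the last case we read the product as running over $j\in[d]\setminus(S\cup\{i\})$, i.e.\ over the complement of $S$ within $[d]\setminus\{i\}$; this reading is our interpretation, and with it the sketch below (ours, for illustration) agrees with the closed form of Theorem~\ref{genwsat} on small instances with $d\geq r$ and returns $|E(G)|$ for grids of maximum degree less than $r+1$, where the only weakly $(G,S_{r+1})$-saturated graph is $G$ itself.

```python
from functools import lru_cache
from itertools import combinations
from math import comb, prod

@lru_cache(maxsize=None)
def w_rec(r, a):
    """w_r(a_1, ..., a_d) from Definition recursion; a is a tuple with entries >= 2."""
    d = len(a)
    assert 0 <= r <= 2 * d
    if r == 0:
        return 0
    if r == 2 * d:
        # no vertex can have degree r + 1, so the only saturated graph is G itself
        return sum((a[j] - 1) * prod(a[i] for i in range(d) if i != j)
                   for j in range(d))
    if all(x == 2 for x in a):
        if r >= d + 1:
            return d * 2 ** (d - 1)
        return r * 2 ** (r - 1) + sum(comb(d - j - 1, r - j) * j * 2 ** (j - 1)
                                      for j in range(1, r))
    i = next(idx for idx, x in enumerate(a) if x >= 3)
    rest = tuple(j for j in range(d) if j != i)
    # product read over [d] \ (S ∪ {i}); this reading is our assumption
    extra = sum(2 ** len(S) * prod(a[j] - 2 for j in rest if j not in S)
                for size in range(max(0, 2 * d - r), d)
                for S in combinations(rest, size))
    return (w_rec(r, a[:i] + (a[i] - 1,) + a[i + 1:])
            + w_rec(r - 1, a[:i] + a[i + 1:])
            + extra)

# 4x2 and 5x2 grids have maximum degree 3, so wsat(G, S_4) = |E(G)|:
print(w_rec(3, (4, 2)), w_rec(3, (5, 2)))  # 10 13
```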
We prove the following.
\begin{thm}\label{wsatgrid}
For $0 \le r \le 2d$ and $a_1,\dots,a_d\geq2$, we have
$$\operatorname{wsat}\left(\prod_{i=1}^d[a_i], S_{r+1}\right) = w_r(a_1,\ldots,a_d).$$
\end{thm}
Before presenting the proof, let us remark that, for $d\geq r$, the expression in Theorem~\ref{genwsat} satisfies the recurrence in Definition~\ref{recursion}. Therefore, Theorem~\ref{wsatgrid} implies Theorem~\ref{genwsat}. Let $a_1,\dots,a_{d}\geq2$ and define $G:=\prod_{i=1}^d[a_i]$. In the proof of Theorem~\ref{wsatgrid}, we employ an inductive approach similar to the one used in the proof of Theorem~\ref{wsatcube}. The main difference is that a vertex $v$ of $G$ may be incident to either one or two edges in direction $i\in[d]$ depending on whether or not $v_i\in\left\{1,a_i\right\}$. With this in mind, we define a labelling of the edges of $G$.
\begin{defn}
Say that an edge $e=uv\in E(G)$ in direction $i\in [d]$ is \emph{odd} if $\min\left\{u_i,v_i\right\}$ is odd and \emph{even} otherwise. We label $e$ by $e(v,2i-1)$ if $e$ is odd and $e(v,2i)$ if $e$ is even.
\end{defn}
Note that each edge of $G$ receives two labels, one for each of its endpoints.
\begin{defn}
For $v\in V(G)$, define $I^G_v:=\left\{j\in [2d]: e(v,j)\in E(G)\right\}$.
\end{defn}
We are now in a position to prove Theorem~\ref{wsatgrid}. Using Lemma~\ref{support}, we let $X$ be a subspace of $\mathbb{R}^{2d}$ of dimension $2d-r$ such that $|\operatorname{supp}(x)|\geq r+1$ for every $x\in X\setminus\{0\}$. Define $w:=w_r(a_1,\ldots,a_d)$. As with the proof of Theorem~\ref{wsatcube}, we state a lemma from which we deduce Theorem~\ref{wsatgrid}, and then we prove the lemma.
\begin{lem}
\label{theGridProp}
There is a spanning subgraph $F$ of $G$ and a collection $\left\{f_e: e\in E\left(G\right)\right\}\subseteq \mathbb{R}^w$ such that
\begin{enumerate}
\renewcommand{\theenumi}{G\arabic{enumi}}
\renewcommand{\labelenumi}{(\theenumi)}
{\setlength\itemindent{4pt}\item \label{Gwsat} $F$ is weakly $\left(G,S_{r+1}\right)$-saturated and $|E(F)|=w$,}
{\setlength\itemindent{4pt}\item \label{Gsatisfied} $\sum_{i=1}^{2d} x_i f_{e(v,i)} = 0$ for every $v\in V\left(G\right)$ and $x\in X$ such that $\operatorname{supp}(x)\subseteq I^G_v$, and}
{\setlength\itemindent{4pt}\item \label{GbigDim} $\operatorname{span}\left\{f_e: e\in E\left(G\right)\right\}=\mathbb{R}^w$.}
\end{enumerate}
\end{lem}
\begin{proof}[Proof of Theorem~\ref{wsatgrid}.]
First observe that the existence of a graph $F$ satisfying (\ref{Gwsat}) implies $\operatorname{wsat}(G, S_{r+1}) \le w$. To obtain a matching lower bound, we apply Lemma \ref{linalg} as we did in the hypercube case. The edge sets of copies of $S_{r+1}$ in $G$ are the sets of the form $\{e(v,i): i \in T\}$, where $v \in V(G)$ and $T$ is a subset of $I^G_v$ of cardinality $r+1$. By applying Lemma~\ref{Tsupp} together with (\ref{Gsatisfied}), we see that the conditions of Lemma~\ref{linalg} are satisfied. Thus by (\ref{GbigDim}), $\operatorname{wsat}(G,S_{r+1}) \geq w$.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{theGridProp}]
We proceed by induction on $|V(G)|$. We begin with the boundary cases.
\begin{case3}
$r = 0$.
\end{case3}
In this case, $S_{r+1}\simeq K_2$. Also, $w=0$ and $X=\mathbb{R}^{2d}$. We let $F$ be a spanning subgraph of $G$ with no edges and set $f_e:=0$ for every $e\in E(G)$. Properties (\ref{Gwsat}), (\ref{Gsatisfied}) and (\ref{GbigDim}) are satisfied trivially.
\begin{case3}
$r=2d \geq 2$.
\end{case3}
In this case, $w=|E(G)|$ and $X=\{0\}$. We define $F:=G$ and let $\{f_e:e\in E(G)\}$ be a basis for $\mathbb{R}^w$. Clearly (\ref{Gwsat}), (\ref{Gsatisfied}) and (\ref{GbigDim}) are satisfied.
\begin{case3}
$a_1 = \ldots = a_d=2$ and $1\leq r\leq 2d-1$.
\end{case3}
In this case, $G$ is isomorphic to $Q_d$ and every edge of $G$ is odd. First, suppose that $d+1\leq r\leq 2d-1$. Then we have $w=|E(G)|$ and we define $F:=G$ and let $\{f_e:e\in E(G)\}$ be a basis for $\mathbb{R}^w$.
On the other hand, if $1\leq r\leq d$, then we let $X'$ be the subspace of $X$ consisting of all vectors $x$ of $X$ such that every element of $\operatorname{supp}(x)$ is odd. It is not hard to show that $X'$ has dimension $d-r$ and that every vector $x\in X'$ has $|\operatorname{supp}(x)|\geq r+1$. Thus, we are done by Lemma~\ref{theProp}.
\begin{case3}
$a_i\geq3$ for some $i\in [d]$ and $1\leq r\leq 2d-1$.
\end{case3}
Without loss of generality, assume that $a_d\geq3$. Define
\[G_1:=\prod_{i=1}^{d-1}[a_i]\times [a_d-1],\text{ and}\]
\[G_2:=G\setminus G_1.\]
Observe that every vertex of $G_2$ has a unique neighbour in $V(G_1)$. The edges with one endpoint in $G_1$ and the other in $G_2$ will play a particular role in the proof. We define
$$\tau := \left\{
\begin{array}{ll}
2d-1 & a_d - 1 \text{ is odd}\\
2d & a_d - 1 \text{ is even},
\end{array}\right.$$
and we write $\bar{\tau}$ for the unique element of $\{2d-1,2d\}\backslash \{\tau\}$. Observe that for $v \in V(G_2)$, we have that $\bar{\tau} \notin I^G_v$, and that $I^{G_2}_v = I^G_v\setminus\{\tau\}$. On the other hand, if $v\in V(G_1)$, then
\[I_v^{G_1} = \left\{\begin{array}{ll} I_v^G\setminus \{\tau\} & \text{if }v_d= a_d-1,\\
I_v^G & \text{otherwise}.\end{array}\right.\]
Define
\[Y := \left\{v \in V(G_1): v_d = a_d -1 \text{ and } d_{G_1}(v) < r\right\}.\]
It is not hard to see that
\[|Y|= {\sum_{\substack{S\subseteq[d-1]\\ |S|\geq 2d-r}}2^{|S|}\prod_{j\notin S}(a_j-2)}.\]
For brevity we write $y:=|Y|$ and
\[w_1:=\operatorname{wsat}(G_1,S_{r+1}),\]
\[w_2:=\operatorname{wsat}(G_2,S_r).\]
We construct a graph $F$ satisfying (\ref{Gwsat}). Define $F$ to be a spanning subgraph of $G$ such that
\begin{itemize}
\item the subgraph $F_1$ of $F$ induced by $V(G_1)$ is a weakly $(G_1,S_{r+1})$-saturated graph of minimum size,
\item the subgraph $F_2$ of $F$ induced by $V(G_2)$ is a weakly $(G_2,S_{r})$-saturated graph of minimum size, and
\item an edge $e$ from $V(G_1)$ to $V(G_2)$ is contained in $F$ if and only if $e$ is of the form $e(v,\tau)$ for $v\in Y$.
\end{itemize}
Applying the inductive hypothesis and Definition~\ref{recursion}, we see that $|E(F)| = w_1+w_2+y = w$, as required. To see that $F$ is weakly $(G,S_{r+1})$-saturated, we add the edges of $E(G)\backslash E(F)$ to $F$ in three stages. First, by definition of $F_1$, we can add the edges that are not present in $E(F_1)$ in such a way that every added edge completes a copy of $S_{r+1}$ in $G_1$. Next, we can add the edges of the form $e(v,\tau)$, where $v\notin Y$ and $v_d = a_d -1$, in any order. By definition of $Y$, we see that every such $v$ has at least $r$ neighbours in $G_1$. As every edge in $E(G_1)$ has already been added, the addition of $e(v,\tau)$ completes a copy of $S_{r+1}$ in $G$. Finally, we add the edges of $G_2$ that are not present in $F_2$ in such a way that each added edge completes a copy of $S_r$ in $G_2$. Every such edge completes a copy of $S_{r+1}$ in $G$ since every vertex in $G_2$ has a neighbour in $G_1$ and every edge between $G_1$ and $G_2$ is already present. Thus, (\ref{Gwsat}) holds.
It remains to find a collection $\{f_e: e \in E(G)\}$ satisfying (\ref{Gsatisfied}) and (\ref{GbigDim}). Let $\pi:X \rightarrow \mathbb{R}^{2d-2}$ be the projection defined by $\pi:(x_1, \ldots, x_{2d}) \mapsto (x_1,\ldots, x_{2d -2})$. Let $z$ be a fixed vector of $X$ such that $\bar{\tau}\in \operatorname{supp}\left(z\right)$ and define $T_{z}:X\to X$ by
\[T_{z}(x):= x-\frac{x_{\bar{\tau}}}{z_{\bar{\tau}}}z.\]
Define $X_1:=X$ and $X_2:= \pi\left(T_{z}(X)\right)$. Since $\ker\left(T_{z}\right)=\operatorname{span}\left\{z\right\}$ and $\ker(\pi)=\{0\}$ we see that $X_2$ has dimension $2d - r -1 = 2(d-1) - (r-1)$. Also, by construction, we have $|\operatorname{supp}(x)| \ge r$ for every non-zero $x \in X_2$. By applying the inductive hypothesis to both $G_1$ and $G_2$, we can find collections $\{f^1_e: e \in E(G_1)\}$ in $\mathbb{R}^{w_1}$ and $\{f^2_e: e \in E(G_2)\}$ in $\mathbb{R}^{w_2}$ such that
\begin{enumerate}
\renewcommand{\theenumi}{G2.\arabic{enumi}}
\renewcommand{\labelenumi}{(\theenumi)}
{\setlength\itemindent{10pt}\item\label{Gsatisfied0} $\sum_{i=1}^{2d} x_i f_{e(v,i)}^1 = 0$ for every $v\in V\left(G_1\right)$ and $x\in X_1$ with $\operatorname{supp}(x) \subseteq I^{G_1}_v$,}
{\setlength\itemindent{10pt}\item\label{Gsatisfied1} $\sum_{i=1}^{2d-2} x_i f_{e(v,i)}^2 = 0$ for every $v\in V\left(G_2\right)$ and $x\in X_2$ with $\operatorname{supp}(x) \subseteq I^{G_2}_v$,}
\renewcommand{\theenumi}{G3.\arabic{enumi}}
\renewcommand{\labelenumi}{(\theenumi)}
\addtocounter{enumi}{-2}
{\setlength\itemindent{10pt}\item\label{GbigDim0} $\operatorname{span}\left\{f_e^1: e\in E\left(G_1\right)\right\}=\mathbb{R}^{w_1}$, and}
{\setlength\itemindent{10pt}\item\label{GbigDim1} $\operatorname{span}\left\{f_e^2: e\in E\left(G_2\right)\right\}=\mathbb{R}^{w_2}$. }
\end{enumerate}
Using this, we will now construct a collection $\{f_e: e \in E(G)\} \subseteq \mathbb{R}^{w_1} \oplus \mathbb{R}^{w_2} \oplus \mathbb{R}^{y} \simeq \mathbb{R}^{w}$ in four steps. First, for $e \in E(G_1)$, we define
$$f_e := f_e^1 \oplus 0 \oplus 0.$$
Let $\{f_y^3: y \in Y\}$ be a basis of $\mathbb{R}^{y}$. Next, we consider edges $e = uv$, where $v \in V(G_1)$, and $u \in V(G_2)$. If $v$ is in $Y$, then we let
$$f_e := 0 \oplus 0 \oplus f^3_v.$$
If $v$ is not in $Y$, then let $z^v\in X$ be a vector such that $\operatorname{supp}(z^v)\subseteq I_v^G$ and $\tau\in \operatorname{supp}\left(z^v\right)$, which exists by Lemma~\ref{support}. Define
\begin{equation}
\label{xstar2}
f_e := -\frac{1}{z^v_\tau}\sum_{i\in [2d]\setminus\{\tau\}}z^v_i f_{e(v,i)}.
\end{equation}
Finally, if $e =uv \in E(G_2)$, then let $e' = u'v'$, where $u'$ and $v'$ are the unique neighbours of $u$ and $v$, respectively, in $V(G_1)$, and define
$$f_e := f^1_{e'}\oplus f^2_{e} \oplus 0.$$
It is clear from (\ref{GbigDim0}), (\ref{GbigDim1}) and the construction of $f_e$, that the dimension of $\operatorname{span} \{f_e:e \in E(G)\}$ is $w_1 + w_2 + y = w$. Thus (\ref{GbigDim}) is satisfied.
It remains to show that (\ref{Gsatisfied}) holds. Firstly, suppose $v \in V(G_1)$ and let $x \in X$ be such that $\operatorname{supp}(x) \subseteq I^G_v$. If $v_d < a_d - 1$, then $\sum_{i=1}^{2d} x_i f_{e(v,i)} = 0$ by (\ref{Gsatisfied0}). If $v_d = a_d - 1$ and $v \in Y$, then, by definition of $Y$, we have $|I^G_v| \le r$ and so it must be the case that $x=0$ and we are done. Now suppose that $v\notin Y$ and that $v_d = a_d -1$. Define
\[x^{\dagger} := x - \frac{x_\tau}{z^v_\tau}z^v.\]
We have,
\begin{equation}\label{G2sat}
\sum_{i=1}^{2d}x_if_{e(v,i)} = \sum_{i=1}^{2d} x^{\dagger}_i f_{e(v,i)} + \frac{x_{\tau}}{z^v_{\tau}}\sum_{i=1}^{2d}z_i^vf_{e(v,i)}.
\end{equation}
Note that $\tau \notin \operatorname{supp}(x^{\dagger})$ and thus $\operatorname{supp}(x^\dagger) \subseteq I^{G_1}_v$. Therefore, both of the sums on the right side of (\ref{G2sat}) are zero by (\ref{Gsatisfied0}) and (\ref{xstar2}).
Finally, consider $v \in V(G_2)$. Let $v'$ be the unique neighbour of $v$ in $V(G_1)$. Given $x \in X$, with $\operatorname{supp}(x) \subseteq I^G_{v}$ we have
$$\sum_{i=1}^{2d} x_i f_{e(v,i)} = \sum_{i=1}^{2d}x_i f_{e(v',i)} + \sum_{i=1}^{2d - 2}x_i \left(0 \oplus f^2_{e(v,i)} \oplus 0\right).$$
We have that $\sum_{i=1}^{2d}x_i f_{e(v',i)} = 0$ since $v' \in V(G_1)$, as proved above. The second sum on the right side is zero by (\ref{Gsatisfied1}), which is applicable as $\bar{\tau} \notin I_v^G\supseteq \operatorname{supp}(x)$, and so $x \in T_{z}(X)$. This completes the proof of the lemma.
\end{proof}
\section{Upper Bound Constructions}
\label{upperSec}
In this section, we prove a recursive upper bound on $m(Q_d,r)$ for general $d\geq r\geq 1$ and then apply it to obtain an exact expression for $m(Q_d,3)$.
\begin{lem}
\label{generalRUpper}
For $d\geq r\geq1$,
\[m\left(Q_d,r\right) \leq m\left(Q_{d-r},r\right) + (r-1)m\left(Q_{d-r},r-1\right) + \sum_{j=1}^{\left\lceil r/2\right\rceil-1}\binom{r}{2j+1}m\left(Q_{d-r},r-2j\right).\]
\end{lem}
\begin{proof}
Let $d\geq r$ be fixed positive integers. For $1\leq t\leq r$, let $B_t$ be a subset of $V\left(Q_{d-r}\right)$ of cardinality $m\left(Q_{d-r},t\right)$ which percolates with respect to the $t$-neighbour bootstrap process in $Q_{d-r}$.
Given $x\in V(Q_d)$, let $[x]_r$ and $[x]_{d-r}$ denote the vectors obtained by restricting $x$ to its first $r$ coordinates and last $d-r$ coordinates, respectively. We partition $\{0,1\}^r$ into $r+1$ sets $L_0,\dots,L_r$ such that $L_i$ consists of the vectors whose coordinate sum is equal to $i$. We construct a percolating set $A_0$ in $Q_d$. Given $x\in V(Q_d)$, we include $x$ in $A_0$ if one of the following holds:
\begin{itemize}
\item $[x]_r\in L_1$ and either
\begin{itemize}
\item $[x]_r = (1,0,\dots,0)$ and $[x]_{d-r}\in B_r$, or
\item $[x]_r \neq (1,0,\dots,0)$ and $[x]_{d-r}\in B_{r-1}$.
\end{itemize}
\item $[x]_r\in L_{2j+1}$ for some $1\leq j\leq \left\lceil r/2\right\rceil-1$ and $[x]_{d-r}\in B_{r-2j}$.
\end{itemize}
It is clear that
\[|A_0| = m\left(Q_{d-r},r\right) + (r-1)m\left(Q_{d-r},r-1\right) + \sum_{j=1}^{\left\lceil r/2\right\rceil-1}\binom{r}{2j+1}m\left(Q_{d-r},r-2j\right)\]
by construction. We will be done if we can show that $A_0$ percolates with respect to the $r$-neighbour bootstrap process.
We begin by showing that every vertex $x$ with $[x]_r\in L_0\cup L_1$ is eventually infected. First, we can infect every vertex $x$ such that $[x]_r = (1,0,\dots,0)$, one by one in some order, by definition of $B_r$. Next, consider a vertex $x$ such that $[x]_r\in L_0$ and $[x]_{d-r}\in B_{r-1}$. Then $x$ has $r-1$ neighbours $z\in A_0$ such that $[z]_r\neq (1,0,\dots,0)$, by construction, and one infected neighbour $y$ such that $[y]_r=(1,0,\dots,0)$. Thus, every such $x$ becomes infected. Now, by definition of $B_{r-1}$, the remaining vertices $x$ such that $[x]_r\in L_0$ can be infected since every such vertex has an infected neighbour $y$ such that $[y]_r=(1,0,\dots,0)$. Finally, each vertex $x$ such that $[x]_r\neq (1,0,\dots,0)$ and $[x]_r\in L_1$ becomes infected using the definition of $B_{r-1}$ and the fact that every vertex $y$ with $[y]_r\in L_0$ is already infected.
Now, suppose that, for some $1\leq j\leq \left\lceil r/2\right\rceil-1$ every vertex $x$ such that $[x]_r \in L_0\cup\cdots\cup L_{2j-1}$ is already infected. We show that every vertex $x$ with $[x]_r\in L_{2j}\cup L_{2j+1}$ is eventually infected. First, consider a vertex $x$ with $[x]_r\in L_{2j}$ and $[x]_{d-r}\in B_{r-2j}$. Such a vertex has $2j$ infected neighbours $y$ such that $[y]_r\in L_{2j-1}$ and $r-2j$ neighbours $z\in A_0$ such that $[z]_{r}\in L_{2j+1}$. Therefore, every such $x$ becomes infected. Now, by definition of $B_{r-2j}$, the remaining vertices $x$ such that $[x]_r\in L_{2j}$ can be infected since every such vertex has $2j$ infected neighbours $y$ such that $[y]_r\in L_{2j-1}$. Finally, each vertex $x$ such that $[x]_r\in L_{2j+1}$ becomes infected using the definition of $B_{r-2j}$ and the fact that every vertex $y$ with $[y]_r\in L_{2j-1}$ is already infected.
Finally, if $r$ is even, then we need to show that every vertex of $L_r$ becomes infected. Every such vertex has precisely $r$ neighbours in $L_{r-1}$. Thus, given that every vertex of $L_{r-1}$ is infected, $x$ becomes infected as well. This completes the proof.
\end{proof}
\begin{figure}[ht]
\tikzstyle{loosely dashed}= [dash pattern=on 5pt off 3pt]
\begin{center}
\begin{tikzpicture}
\node[circle,draw,minimum width=22pt] (A) at (0,0){};
\node[circle,draw,minimum width=22pt] (B) at (-2.1,1.3){$3$};
\node[circle,draw,minimum width=22pt] (C) at (0,2*1.3){};
\node[circle,draw,minimum width=22pt] (D) at (2.1,1.3){$2$};
\node[circle,draw,minimum width=22pt] (E) at (0,1.3){$2$};
\node[circle,draw,minimum width=22pt] (F) at (-2.1,2*1.3){};
\node[circle,draw,minimum width=22pt] (G) at (0,3*1.3){$1$};
\node[circle,draw,minimum width=22pt] (H) at (2.1,2*1.3){};
\draw[very thick] (A) -- (B) -- (C) -- (D) -- (A);
\draw[very thick] (A) -- (E);
\draw[very thick] (B) -- (F);
\draw[very thick] (C) -- (G);
\draw[very thick] (D) -- (H);
\draw[very thick] (E) -- (F) -- (G) -- (H) -- (E);
\end{tikzpicture}
\end{center}
\caption{An illustration of the set $A_0$ constructed in the proof of Lemma~\ref{generalRUpper} in the case $r=3$. Each node represents a copy of $Q_{d-3}$. The set $A_0$ consists of a copy of $B_i$ on each node labelled $i\in\{1,2,3\}$.}
\label{construction}
\end{figure}
We remark that the recursion in Lemma~\ref{generalRUpper} gives a bound of the form $m\left(Q_d,r\right)\leq \frac{(1+o(1))d^{r-1}}{r!}$ where the second order term is better than that of (\ref{Steiner}). Next, we prove Theorem~\ref{r=3Upper}.
\begin{proof}[Proof of Theorem~\ref{r=3Upper}]
The lower bound follows from Theorem~\ref{hyper}. We prove the upper bound by induction on $d$. First, we settle the cases $d\in\{3,\dots,8\}$. For notational convenience, we associate each element $v$ of $\{0,1\}^d$ with the subset of $[d]$ of which $v$ is the characteristic vector. Moreover, we identify each non-empty subset of $[d]$ with the concatenation of its elements (e.g. $\{1,3,7\}$ is written $137$). One can verify (by hand or by computer) that the set $A_0^d$, defined below, percolates with respect to the $3$-neighbour bootstrap process in $Q_d$ and that it has cardinality $\left\lceil\frac{d(d+3)}{6}\right\rceil+1$.
\begin{align*}A_0^3 &:= \{1,2,3,123\},\\
A_0^4 &:= \left(A_0^3\setminus\{3\}\right)\cup\{134,4,234\},\\
A_0^5 &:= \left(A_0^4\setminus\{134\}\right)\cup\{135,245,12345\},\\
A_0^6 &:= \left(A_0^5\setminus\{135,245\}\right)\cup\{346,12356,456,23456\},\\
A_0^7 &:= \left(A_0^6\setminus\{346\}\right)\cup\{13457,24567,12367,1234567\},\\
A_0^8 &:= \left(A_0^7\setminus\{13457,24567\}\right)\cup\{34568,1234578,34678,25678,2345678\}.\end{align*}
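The percolation and cardinality of the sets above can indeed be confirmed by computer. The following Python sketch simulates the $3$-neighbour bootstrap process on $Q_d$ with vertices encoded as bitmasks; the helper names are ours, and we check $A_0^3$, $A_0^4$ and $A_0^5$:

```python
def percolates(initial, d, r):
    """Run the r-neighbour bootstrap process on Q_d (vertices as bitmasks)
    and report whether every vertex eventually becomes infected."""
    infected = set(initial)
    changed = True
    while changed:
        changed = False
        for v in range(2 ** d):
            if v not in infected and sum(
                    (v ^ (1 << i)) in infected for i in range(d)) >= r:
                infected.add(v)
                changed = True
    return len(infected) == 2 ** d

def subset(*elts):
    """Bitmask of the subset of [d] listed by its elements, e.g. subset(1,2,3)."""
    m = 0
    for e in elts:
        m |= 1 << (e - 1)
    return m

A03 = {subset(1), subset(2), subset(3), subset(1, 2, 3)}
A04 = (A03 - {subset(3)}) | {subset(1, 3, 4), subset(4), subset(2, 3, 4)}
A05 = (A04 - {subset(1, 3, 4)}) | {subset(1, 3, 5), subset(2, 4, 5),
                                   subset(1, 2, 3, 4, 5)}
```

Each of the three sets percolates and has cardinality $\left\lceil d(d+3)/6\right\rceil+1$.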
Now, suppose $d\geq9$ and that the theorem holds for smaller values of $d$. If $d$ is odd, then we apply Lemma~\ref{generalRUpper} to obtain
\[m\left(Q_d,3\right)\leq m\left(Q_{d-3},3\right)+ 2m\left(Q_{d-3},2\right)+ m\left(Q_{d-3},1\right).\]
Clearly, $m\left(Q_{d-3},1\right)=1$ and it is easy to show that $m\left(Q_{d-3},2\right) \leq \frac{d-3}{2}+1$ (since $d-3$ is even). Therefore, by the inductive hypothesis,
\[m\left(Q_d,3\right)\leq \left\lceil\frac{(d-3)d}{6}\right\rceil+1 + 2\left(\frac{d-3}{2}+1\right)+1 = \left\lceil\frac{d(d+3)}{6}\right\rceil+1.\]
Now, suppose that $d\geq10$ is even. For $t\in\{1,2,3\}$, let $B_t$ be a subset of $V\left(Q_{d-6}\right)$ of cardinality $m\left(Q_{d-6},t\right)$ which percolates with respect to the $t$-neighbour bootstrap process on $Q_{d-6}$ and let $A_0^6$ be as above. Given a vector $x\in V(Q_d)$, let $[x]_6$ be the restriction of $x$ to its first six coordinates and $[x]_{d-6}$ be the restriction of $x$ to its last $d-6$ coordinates. We define a subset $A_0$ of $V(Q_d)$. We include a vertex $x\in V(Q_d)$ in $A_0$ if $[x]_6\in A_0^6$ and one of the following holds:
\begin{itemize}
\item $[x]_6 = (0,0,1,1,0,1)$ and $[x]_{d-6}\in B_3$.
\item $[x]_6 \neq (0,0,1,1,0,1)$ and we have $x_5=1$ and $[x]_{d-6}\in B_2$.
\item $x_5=x_6=0$ and $[x]_{d-6}\in B_1$.
\end{itemize}
The fact that $A_0$ percolates follows from arguments similar to those given in the proof of Lemma~\ref{generalRUpper}; we omit the details. By construction,
\[|A_0| = m\left(Q_{d-6},3\right) + 4m\left(Q_{d-6},2\right) + 5m\left(Q_{d-6},1\right)\]
which equals
\[\left\lceil\frac{(d-6)(d-3)}{6}\right\rceil+1 + 4\left(\frac{d-6}{2}+1\right) + 5 = \left\lceil\frac{d(d+3)}{6}\right\rceil+1\]
by the inductive hypothesis. The result follows.
\end{proof}
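The two arithmetic identities closing the odd and even cases of the induction above can be checked mechanically. A short Python sketch (the function name `m3` for the value $\left\lceil d(d+3)/6\right\rceil+1$ is ours):

```python
def m3(d):
    """Value ceil(d(d+3)/6) + 1 from the statement of the theorem."""
    return -(-d * (d + 3) // 6) + 1

# Odd d >= 9: m(Q_{d-3},3) + 2 m(Q_{d-3},2) + m(Q_{d-3},1) collapses to m3(d).
odd_ok = all(m3(d - 3) + 2 * ((d - 3) // 2 + 1) + 1 == m3(d)
             for d in range(9, 201, 2))
# Even d >= 10: m(Q_{d-6},3) + 4 m(Q_{d-6},2) + 5 m(Q_{d-6},1) collapses too.
even_ok = all(m3(d - 6) + 4 * ((d - 6) // 2 + 1) + 5 == m3(d)
              for d in range(10, 201, 2))
```

Both checks succeed for every $d$ in the tested range, matching the computations in the proof.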
\section{Concluding Remarks}
\label{concl}
In this paper, we have determined the main asymptotics of $m\left(Q_d,r\right)$ for fixed $r$ and $d$ tending to infinity and obtained a sharper result for $r=3$. We wonder whether sharper asymptotics are possible for general $r$.
\begin{ques}
For fixed $r\geq4$ and $d\to \infty$, does
\[\frac{m\left(Q_d,r\right)-\frac{d^{r-1}}{r!}}{d^{r-2}}\]
converge? If so, what is the limit?
\end{ques}
As Theorem~\ref{r=3Upper} illustrates, it may be possible to obtain an exact expression for $m\left(Q_d,r\right)$ for some small fixed values of $r$. The first open case is the following.
\begin{prob}
Determine $m\left(Q_d,4\right)$ for all $d\geq4$.
\end{prob}
Using a computer, we have determined that $m\left(Q_5,4\right) =14$, which is greater than the lower bound of $13$ implied by Theorem~\ref{hyper}. Thus, Theorem~\ref{hyper} is not tight for general $d$ and $r$. However, we wonder whether it could be tight when $r$ is fixed and $d$ is sufficiently large.
\begin{ques}
For fixed $r\geq4$, is it true that
\[m\left(Q_d,r\right)= 2^{r-1} + \left\lceil\sum_{j=1}^{r-1}\binom{d-j-1}{r-j}\frac{j2^{j-1}}{r}\right\rceil\]
provided that $d$ is sufficiently large?
\end{ques}
Another direction that one could take is to determine $\operatorname{wsat}\left(G,S_{r+1}\right)$ for other graphs $G$. For example, one could consider the $d$-dimensional torus $\mathbb{Z}_n^d$.
\begin{prob}
Determine $\operatorname{wsat}\left(\mathbb{Z}_n^d,S_{r+1}\right)$ for all $n,d$ and $r$.
\end{prob}
\begin{ack}
The authors would like to thank Eoin Long for encouraging us to work on this problem and for several stimulating discussions during the 18th Midrasha Mathematicae at the Institute of Advanced Studies in Jerusalem. We would also like to thank Micha{\l} Przykucki and Alex Scott for several enlightening discussions. In particular, we are grateful to the latter for bringing (\ref{wsatperc}) to our attention and for helpful comments regarding the presentation of this paper.
\end{ack}
\section{Introduction}
A minimal surface is a surface that locally minimizes its area. The theory of minimal surfaces originated with the work of Lagrange in $1762$, when he sought the surface $f=f(x,y)$ of least area bounded by a given closed curve; he found no solution other than the plane. In $1776$, Meusnier discovered that the helicoid and the catenoid are also solutions. He also showed that minimality is equivalent to the vanishing of the mean curvature, which initiated the study of the differential geometry of these surfaces.
The study of minimal surfaces in Riemannian manifolds has been extensively developed \cite{RS}. Many of the developed techniques have played key roles in geometry and partial differential equations. Examples include monotonicity and tangent cone analysis originating in the regularity theory
for minimal surfaces, estimates for nonlinear equations based on the maximum principle arising in Bernstein's classical work, and even Lebesgue's
definition of the integral that he developed in his thesis on the Plateau problem for minimal surfaces \cite{Ra1}. However, minimal surfaces in Finsler spaces have not been studied and developed at the same pace. The fundamental contribution to minimal surfaces in Finsler geometry was made by Shen \cite{ZS}. He introduced the notion of mean curvature for immersions into Finsler manifolds and established some of its properties. As in the Riemannian case, if the mean curvature is identically zero, then the immersion is said to be minimal.
\par The Randers metric is the simplest class of non-Riemannian Finsler metrics, defined as $F=\alpha+\beta$, where $\alpha$ is a Riemannian metric and $\beta$ is a one-form. M. Souza and K. Tenenblat studied rotational surfaces that are minimal in Minkowski space with a Randers metric \cite{MK}, and Souza et al. obtained a Bernstein type theorem on a Randers space \cite{MJK}. Subsequently, several other authors studied minimal surfaces in Randers spaces \cite{NC1, NC2, NC3, RK}. V. Balan studied rotational surfaces and graphs of smooth functions that are minimal in Minkowski space with a Kropina metric \cite{VB}. N. Cui and Y.B. Shen studied a special class of $(\alpha, \beta)$-metrics satisfying the system of differential equations \cite{NC4}
\begin{equation}
(\phi-s\phi')^{n-1}=1+p(s)+s^2q(s)
\end{equation}
\begin{equation}
(\phi-s\phi')^{n-2}\phi''=q(s)
\end{equation}
where $p(s)$ and $q(s)$ are arbitrary odd smooth functions. However, the Randers metric is the only metric they found that satisfies the above system.\\
The Matsumoto slope metric is another interesting class of $(\alpha,\beta)$-metrics, investigated by M. Matsumoto, motivated by a letter written to him by P. Finsler in 1969. Matsumoto considered the following problem:
a person walks on a horizontal plane with some velocity, while gravity acts perpendicularly to this plane. Suppose the person walks with the same velocity on a plane inclined to the horizontal sea level. Under the influence of gravity, along which trajectory should the person walk to reach a given destination in the shortest time?
Based on this, he formulated the slope principle \cite{MM1,MM2}. Matsumoto showed that for a hiker walking on the slope of a mountain under gravity, the most efficient time-minimizing paths are not the Riemannian geodesics, but the geodesics of the slope metric $F =\frac{\alpha^2}{\alpha-\beta}$.
\par In this paper, we study minimal surfaces given as the graph of a smooth function and as translation surfaces in Minkowski space with the Matsumoto slope metric, and we prove that in both cases the plane is the only minimal surface.
\section{Preliminaries}
Let $ M $ be an $n$-dimensional smooth manifold. $T_{x}M$ denotes the tangent space of $M$
at $x$. The tangent bundle of $ M $ is the disjoint union of tangent spaces $ TM:= \sqcup _{x \in M}T_xM $. We denote the elements of $TM$ by $(x,y)$ where $y\in T_{x}M $ and $TM_0:=TM \setminus\left\lbrace 0\right\rbrace $. \\
\begin{definition}
\cite{SSZ} A Finsler metric on $M$ is a function $F:TM \to [0,\infty)$ satisfying the following condition:
\\(i) $F$ is smooth on $TM_{0}$,
\\(ii) $F$ is positively 1-homogeneous on the fibers of the tangent bundle $TM$,
\\(iii) The Hessian of $\frac{F^2}{2}$ with element $g_{ij}=\frac{1}{2}\frac{\partial ^2F^2}{\partial y^i \partial y^j}$ is positive definite on $TM_0$.\\
The pair $(M,F)$ is called a Finsler space and $g_{ij}$ is called the fundamental tensor.
\end{definition}
A Matsumoto metric on $M$ is a Finsler structure $F$ on $TM$ given by $F=\frac{\alpha^2}{\alpha-\beta}$, where $\alpha=\sqrt{a_{ij}y^iy^j}$ is a Riemannian metric and $\beta=b_iy^i$ is a one-form of norm $b$ with $0<b<1/2$.
Let $(M^n,F)$ be an $n$-dimensional Finsler manifold. Then the Busemann-Hausdorff volume form is defined as $dV_{BH}=\sigma_{BH}(x)dx$, where
\begin{equation}\label{eqn9}
\sigma_{BH}(x)=\frac{Vol(B^n(1))}{Vol\left\lbrace (y^i)\in T_xM : F(x,y)< 1 \right\rbrace },
\end{equation}
$B^n$ is the Euclidean unit ball in $\mathbb{R}^n$ and $vol$ is the Euclidean volume.\\
Let $( \tilde{M}^m, \tilde{F})$ be a Finsler manifold, with local coordinates $(\tilde{x}^1, \dots, \tilde{x}^m)$ and
$\varphi : M^n \to (\tilde{M}^m, \tilde{F})$ be an immersion. Then $\tilde{F}$ induces a Finsler metric
on $M$, defined by
\begin{equation}\label{eqn2.1}
F(x,y)=\left( \varphi^*\tilde{F}\right) (x,y)=\tilde{F}\left( \varphi(x),\varphi_*(y)\right) ,\quad \forall (x,y)\in TM.
\end{equation}
We adopt the following convention: Greek letters $\epsilon, \eta, \gamma, \tau, \dots$ denote indices ranging from $1$ to $n$, and Latin letters $i,j,k,l,\dots$ denote indices ranging from $1$ to $n+1$.
\par A Minkowski space is a vector space $V^n$ equipped with a Minkowski norm $F$ whose indicatrix is strongly convex. Equivalently, we can say that $F(x, y)$ depends only on $y \in T_x(V^n)$. In this paper we will consider the hypersurface $M^n$ in the Minkowski Matsumoto space $V^{n+1}$ given by the immersion $\varphi : M^n \to (V^{n+1}, F_b)$, where
$F_b= \frac{\alpha^2}{\alpha-\beta}$, $\alpha$ is the Euclidean metric,
and $\beta$ is a one-form with norm $b$, $0 \le b < 1/2$. Without loss of generality we will
consider $\beta = b dx_{n+1}$. If $M^n$ has local coordinates $x = (x^{\epsilon}), \epsilon= 1,... , n$, and
$\varphi(x) =\left( \varphi^i(x^{\epsilon})\right)\in V $, $i = 1,\dots, n + 1$, we define
\begin{equation}\label{eqn2.01}
\mathcal{F}(x,z)=\frac{vol (B^n)}{vol (D^n_x)}, \quad z=\left( z^i_{\epsilon}\right)=\frac{\partial \varphi^i}{\partial x^{\epsilon}},
\end{equation}
where,
\begin{equation}
D^n_x=\left\lbrace (y^1,y^2,...,y^n)\in \mathbb{R}^n:F(x,y)<1\right\rbrace.
\end{equation}
The matrix
\begin{equation}
A=\left( A_{\epsilon \eta}\right)=\left( \sum\limits_{i=1}^{n+1}z^i_{\epsilon }z^i_{\eta}\right), \quad \textnormal{with inverse} \quad \left( A^{\epsilon \eta}\right) =\left( A_{\epsilon \eta}\right) ^{-1},
\end{equation}
is the metric induced on $M$ by the Euclidean metric $\alpha$. For the Matsumoto metric the indicatrix is not an ellipsoid, and the Euclidean volume of $D^n_x$ has no simple closed form for general $n$. In the surface case $n=2$, which is the one studied in this paper, one obtains (see Theorem~\ref{thm1} below)
\begin{equation}
vol\, D^2_x=\frac{\left(2+b^2A^{\epsilon \eta}z^{3}_{\epsilon}z^{3}_{\eta} \right)vol\, B^2 }{2\sqrt{det A}},
\end{equation}
so that the volume form $dV_{BH}$ is given by
\begin{equation}
dV_{BH}=\frac{2}{2+b^2A^{\epsilon \eta}z^{3}_{\epsilon}z^{3}_{\eta}}\sqrt{det A}\,dx^1dx^2.
\end{equation}
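As an illustration, for a graph immersion $\varphi(x^1,x^2)=(x^1,x^2,u(x^1,x^2))$ one has $A=I+\nabla u\,\nabla u^{T}$, hence $det A=1+|\nabla u|^2$ and $b^2A^{\epsilon\eta}z^3_{\epsilon}z^3_{\eta}=b^2|\nabla u|^2/(1+|\nabla u|^2)$. The following Python sketch (the sample gradient values and $b$ are hypothetical) evaluates the density $\frac{2}{2+B}\sqrt{det A}$ of $dV_{BH}$ obtained in Section~3:

```python
# Sketch for the graph immersion phi(x1, x2) = (x1, x2, u(x1, x2)).
def bh_density(ux, uy, b):
    """Density 2/(2+B) * sqrt(det A) of dV_BH at a point where the graph u
    has gradient (ux, uy); B = b^2 A^{eps eta} z^3_eps z^3_eta."""
    z = [(1.0, 0.0), (0.0, 1.0), (ux, uy)]          # rows are z^i = (z^i_1, z^i_2)
    A = [[sum(z[i][e] * z[i][h] for i in range(3)) for h in range(2)]
         for e in range(2)]
    detA = A[0][0] * A[1][1] - A[0][1] * A[1][0]    # equals 1 + ux^2 + uy^2
    Ainv = [[A[1][1] / detA, -A[0][1] / detA],
            [-A[1][0] / detA, A[0][0] / detA]]
    z3 = z[2]
    B = b * b * sum(Ainv[e][h] * z3[e] * z3[h]
                    for e in range(2) for h in range(2))
    return 2.0 / (2.0 + B) * detA ** 0.5, detA, B
```

For a flat graph ($\nabla u=0$) the density reduces to $1$, as expected, since the induced one-form vanishes on a horizontal plane.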
The mean curvature $\mathcal{H}_{\varphi}$, introduced by Z.Shen \cite{ZS} and is given by
\begin{equation}
\mathcal{H}_{\varphi}(v)=\frac{1}{\mathcal{F}}\left\lbrace \frac{\partial^2 \mathcal{F}}{\partial z^i_{\epsilon}\partial z^j_{\eta}} \frac{\partial^2 \varphi^j}{\partial x^{\epsilon}\partial x^{\eta}}+\frac{\partial^2 \mathcal{F}}{\partial z^i_{\epsilon}\partial \tilde{x}^j} \frac{\partial \varphi^j}{\partial x^{\epsilon}} -\frac{\partial \mathcal{F}}{\partial \tilde{x}^i}\right\rbrace v^i.
\end{equation}
Here $v=(v^i)$ is a vector field over $\tilde{M}$ and $\mathcal{H}_{\varphi}(v)$ depends linearly on $v$. Moreover, the mean curvature vanishes on $\varphi_*(TM)$. Whenever $(V ,F)$ is a Minkowski space, the expression of the mean curvature reduces to
\begin{equation}
\mathcal{H}_{\varphi}(v)=\frac{1}{\mathcal{F}}\left\lbrace \frac{\partial^2 \mathcal{F}}{\partial z^i_{\epsilon}\partial z^j_{\eta}} \frac{\partial^2 \varphi^j}{\partial x^{\epsilon}\partial x^{\eta}}\right\rbrace v^i.
\end{equation}
The immersion $\varphi$ is said to be minimal when $\mathcal{H}_{\varphi}=0$.
\section{The partial differential equation of minimal surfaces in Matsumoto spaces}
In this section we obtain the volume form of the Matsumoto metric and, using it, derive for an immersion $\varphi:M^2\to (V^3,F_b)$ the differential equation characterizing when $\varphi$ is minimal.
\begin{proposition}\label{prop1}\cite{XZ}
Let $F=\alpha\phi(s)$, $s=\beta/\alpha$, be an $(\alpha,\beta)$-metric on an $n$-dimensional manifold $M$. Let
\begin{equation}\label{eqn3.1}
f(b) := \begin{cases} \frac{\int\limits_{0}^{\pi}\sin^{n-2}(t)dt}{\int\limits_{0}^{\pi}\frac{\sin^{n-2}(t)}{\phi(b\cos (t))^n}dt} &\mbox{if } dV=dV_{BH} \\
\frac{\int\limits_{0}^{\pi}\sin^{n-2}(t)T(b\cos (t))dt}{\int\limits_{0}^{\pi}\sin^{n-2}(t)dt} & \mbox{if } dV=dV_{HT} \end{cases}
\end{equation}
Then the volume form $dV$ is given by $$dV=f(b)dV_{\alpha}$$
where $dV_{\alpha}=\sqrt{det(a_{ij})}dx$ denotes the Riemannian volume form of $\alpha$.
\end{proposition}
\begin{theorem}\label{thm1}
Let $F=\frac{\alpha^2}{\alpha-\beta}$ be the Matsumoto metric on a $2$-dimensional manifold $M$. Then the Busemann-Hausdorff volume form of $F$ is given by
$$dV_{BH}=\frac{2}{2+b^2}\sqrt{det(a_{ij})}dx.$$
\end{theorem}
\begin{proof}
For a Matsumoto surface we have $\phi(s)=\frac{1}{1-s}$ and $n=2$. Therefore, from \eqref{eqn3.1} we have
\begin{equation}\label{eqn3.01}
\begin{split}
f(b) &= \frac{\int\limits_{0}^{\pi}dt}{\int\limits_{0}^{\pi}(1-b\cos t)^2dt} \\ & =\frac{2}{2+b^2}
\end{split}
\end{equation}
Hence, the theorem follows.
\end{proof}
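The value $f(b)=\frac{2}{2+b^2}$ can also be recovered directly from the definition \eqref{eqn9}: taking $\alpha$ Euclidean and $\beta=b\,dx^2$ in the plane, the indicatrix $F<1$ in polar coordinates is $\rho<1-b\sin\theta$, so $vol\{F<1\}=\int_0^{2\pi}\tfrac{1}{2}(1-b\sin\theta)^2\,d\theta$. A Python sketch (the quadrature parameters are ours):

```python
import math

def f_numeric(b, n=4096):
    """vol(B^2) / vol{F < 1} for F = alpha^2/(alpha - beta) with alpha
    Euclidean and beta = b dx^2.  In polar coordinates F(y) < 1 is
    rho < 1 - b sin(theta); the indicatrix area is integrated by the
    midpoint rule, which is spectrally accurate for periodic integrands."""
    h = 2 * math.pi / n
    area = sum(0.5 * (1 - b * math.sin((k + 0.5) * h)) ** 2
               for k in range(n)) * h
    return math.pi / area
```

For every $0<b<1/2$ the numerical value agrees with $\frac{2}{2+b^2}$ to machine precision, confirming Theorem~\ref{thm1} in this flat model.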
\begin{theorem}
Let $\varphi:M^2 \to (V^3,F_b)$ be an immersion in a Matsumoto space with local coordinates $(\varphi^j(x))$. Then $\varphi$ is minimal if and only if
\begin{eqnarray}\label{MCE0}
\frac{\partial^2 \varphi^j}{\partial x^{\epsilon}\partial x^{\eta}}v^i\left[\frac{2C^2+3E}{(2C^2+E)^2} \frac{\partial^2 C^2}{\partial z^i_{\epsilon}\partial z^j_{\eta}}-\frac{2C^2}{(2C^2+E)^2}\frac{\partial^2 E}{\partial z^i_{\epsilon}\partial z^j_{\eta}}\nonumber \right.\\ \left.- \frac{2(4C^4+12C^2E-12C^3-3E^2)}{(2C^2+E)^3} \frac{\partial C}{\partial z^i_{\epsilon}}\frac{\partial C}{\partial z^j_{\eta}}+\frac{4C^2}{(2C^2+E)^3} \frac{\partial E }{\partial z^i_\epsilon}\frac{\partial E }{\partial z^j_\eta}\nonumber\right.\\ \left.+\frac{4C^3-6CE}{(2C^2+E)^3}\left(\frac{\partial C }{\partial z^i_\epsilon}\frac{\partial E }{\partial z^j_\eta}+\frac{\partial E }{\partial z^i_\epsilon}\frac{\partial C }{\partial z^j_\eta} \right) \right]&=&0 \hspace{1.0cm}
\end{eqnarray}
where $C=\sqrt{det(A)}$ and
\begin{equation}
E=b^2\sum\limits_{k=1}^{3}(-1)^{\gamma + \tau}z^{k}_{\bar{\gamma}}z^{k}_{\bar{\tau}}z^{3}_{\gamma}z^{3}_{\tau}.
\end{equation}
\end{theorem}
\begin{proof}
From the discussion in Section $2$ the volume form of a Matsumoto metric can be written as
\begin{equation}
dV_{BH}=\frac{2}{2+b^2A^{\epsilon \eta}z^3_{\epsilon}z^3_{\eta}}\sqrt{det(A)}dx^1dx^2.
\end{equation}
Let $B=b^2A^{\epsilon \eta}z^3_{\epsilon }z^3_{\eta}$. Then using \eqref{eqn3.01} in \eqref{eqn2.01} one can write
\begin{equation}\label{eqn3.5}
\mathcal{F}(x,z)=\frac{2}{2+B}C.
\end{equation}
Since $A_{\epsilon\eta}=\sum\limits_{i=1}^{3}z^i_{\epsilon}z^i_{\eta}$, its inverse matrix is given by,
\begin{equation*}
A^{\epsilon\eta}= \sum\limits_{i=1}^{3}\frac{(-1)^{\epsilon+\eta}}{det A}z^i_{\bar{\epsilon}}z^i_{\bar{\eta}}
\end{equation*}
Here the bar notation, defined for any Greek index ranging over $\{1,2\}$, is given by
\begin{equation*}
\bar{\tau}=\delta_{\tau 2}+2\delta_{\tau 1}.
\end{equation*}
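For instance,
\begin{equation*}
\bar{1}=\delta_{12}+2\delta_{11}=2 \quad \textnormal{and} \quad \bar{2}=\delta_{22}+2\delta_{21}=1,
\end{equation*}
so the bar simply interchanges the two surface indices.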
Hence we get
\begin{equation}
B=b^2 \sum\limits_{i=1}^{3}\frac{(-1)^{\gamma + \tau}}{\det A}z^{i}_{\bar{\gamma}}z^{i}_{\bar{\tau}}z^{3}_{\gamma}z^{3}_{\tau}.
\end{equation}
Therefore,
\begin{equation}
B=\frac{E}{C^2}
\end{equation}
where,
\begin{equation}
E=b^2\sum\limits_{k=1}^{3}(-1)^{\gamma + \tau}z^{k}_{\bar{\gamma}}z^{k}_{\bar{\tau}}z^{3}_{\gamma}z^{3}_{\tau}
\end{equation}
Therefore, \eqref{eqn3.5} becomes
\begin{equation}\label{eqn3.6}
\mathcal{F}(x,z)=\frac{2C^3}{2C^2+E}.
\end{equation}
Differentiating \eqref{eqn3.6} with respect to $z^i_{\epsilon}$ we get
\begin{equation}\label{eqn3.7}
\frac{\partial \mathcal{F} }{\partial z^i_{\epsilon}}= \frac{4C^4+6C^2E}{(2C^2+E)^2}\frac{\partial C }{\partial z^i_{\epsilon}} - \frac{2C^3}{(2C^2+E)^2}\frac{\partial E }{\partial z^i_{\epsilon}}.
\end{equation}
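Indeed, by the quotient rule,
\begin{equation*}
\frac{\partial \mathcal{F} }{\partial z^i_{\epsilon}}=\frac{6C^2(2C^2+E)-2C^3\cdot 4C}{(2C^2+E)^2}\frac{\partial C }{\partial z^i_{\epsilon}}-\frac{2C^3}{(2C^2+E)^2}\frac{\partial E }{\partial z^i_{\epsilon}},
\end{equation*}
and $6C^2(2C^2+E)-8C^4=4C^4+6C^2E$, which gives \eqref{eqn3.7}.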
Differentiating \eqref{eqn3.7} with respect to $z^j_{\eta}$ we get
\begin{equation}\label{eqn3.8}
\begin{split}
\frac{\partial^2 \mathcal{F} }{\partial z^i_{\epsilon}\partial z^j_{\eta}}
=\frac{4C^4+6C^2E}{(2C^2+E)^2}\frac{\partial^2 C}{\partial z^i_{\epsilon}\partial z^j_{\eta}}-\frac{2C^3}{(2C^2+E)^2}\frac{\partial^2 E}{\partial z^i_{\epsilon}\partial z^j_{\eta}}\hspace{2.0cm} \\+\frac{12CE^2-8C^3E}{(2C^2+E)^3}\frac{\partial C}{\partial z^i_{\epsilon}}\frac{\partial C}{\partial z^j_{\eta}}+\frac{4C^4-6C^2E}{(2C^2+E)^3}\left(\frac{\partial C }{\partial z^i_\epsilon}\frac{\partial E }{\partial z^j_\eta}+\frac{\partial E }{\partial z^i_\epsilon}\frac{\partial C }{\partial z^j_\eta} \right)\\+\frac{4C^3}{(2C^2+E)^3} \frac{\partial E }{\partial z^i_\epsilon}\frac{\partial E }{\partial z^j_\eta}.
\end{split}
\end{equation}
The Matsumoto metric has vanishing mean curvature iff
\begin{equation}\label{eqn3.9}
\frac{\partial^2 \mathcal{F} }{\partial z^i_{\epsilon}\partial z^j_{\eta}}\frac{\partial^2 \varphi^j}{\partial x^{\epsilon}\partial x^{\eta}}v^i=0.
\end{equation}
Therefore, using \eqref{eqn3.8} in \eqref{eqn3.9} we obtain \eqref{MCE0}.
\end{proof}
\section{The characterization of minimal surfaces which are the graph of a function}
In this section we study the graph of a function $M^2$ in a Matsumoto space $(V^3,F_b)$, where $V^3$ is a real vector space and $F_b=\frac{{\tilde{\alpha}}^2}{\tilde{\alpha}-\tilde{\beta}}$ is a Matsumoto metric, with $\tilde{\alpha}$ the Euclidean metric and $\tilde{\beta}=bdx^3$ a one-form. Here we consider the immersion $\varphi : U \subset \mathbb{R}^2 \to (V^3, F_b)$ given by $\varphi(x^1, x^2) =(x^1, x^2, f(x^1, x^2))$. At first we show that the pullback metric of $F_b$ by $\varphi$ is again a Matsumoto metric, and then find the characterization equation for the surface to be minimal.
\begin{proposition}
Let $F_b=\frac{{\tilde{\alpha}}^2}{\tilde{\alpha}-\tilde{\beta}}$ be a Matsumoto metric, where $\tilde{\alpha}$ is the Euclidean metric and $\tilde{\beta}=bdx^3$ is a one-form on the real vector space $V^3$, and suppose $\varphi : U \subset \mathbb{R}^2 \to (V^3, F_b)$ given by $\varphi(x^1, x^2) =(x^1, x^2, f(x^1, x^2))$, where $f$ is a real-valued smooth function, is an immersion. Then the pullback metric on $U$ defined by \eqref{eqn2.1} is again a Matsumoto metric.
\end{proposition}
\begin{proof}
We have,
\begin{equation}
F_b=\frac{{\tilde{\alpha}}^2}{\tilde{\alpha}-\tilde{\beta}}=\frac{(d\tilde{x}^1)^2+(d\tilde{x}^2)^2+(d\tilde{x}^3)^2}{\sqrt{(d\tilde{x}^1)^2+(d\tilde{x}^2)^2+(d\tilde{x}^3)^2}-bd\tilde{x}^3}
\end{equation}
Now,
\begin{equation}
\begin{split}
\varphi^*(d\tilde{x}^1)=dx^1, \quad \varphi^*(d\tilde{x}^2)=dx^2, \\ \varphi^*(d\tilde{x}^3)=(d(f(x^1, x^2)))=f_{x^1}dx^1+f_{x^2}dx^2
\end{split}
\end{equation}
Therefore,
\begin{equation}
\varphi^*F_b=\frac{(1+f^2_{x^1})(dx^1)^2+2f_{x^1}f_{x^2}dx^1dx^2+(1+f^2_{x^2})(dx^2)^2}{\sqrt{(1+f^2_{x^1})(dx^1)^2+2f_{x^1}f_{x^2}dx^1dx^2+(1+f^2_{x^2})(dx^2)^2}-b(f_{x^1}dx^1+f_{x^2}dx^2)}
\end{equation}
which is a Matsumoto metric of the form $\frac{\alpha^2}{\alpha-\beta}$, where
\begin{equation*}
\alpha^2=(1+f^2_{x^1})(dx^1)^2+2f_{x^1}f_{x^2}dx^1dx^2+(1+f^2_{x^2})(dx^2)^2
\end{equation*}
is a Riemannian metric and
\begin{equation*}
\beta=b(f_{x^1}dx^1+f_{x^2}dx^2)
\end{equation*}
is a one-form.
\end{proof}
\begin{theorem}\label{thm4.1}
An immersion $\varphi : U \subset \mathbb{R}^2 \to (V^3, F_b)$ given by $\varphi(x^1, x^2) =(x^1, x^2, f(x^1, x^2))$ is minimal, if and only if, $f$ satisfies
\begin{equation}\label{eqn4.0}
\sum\limits_{ \epsilon, \eta=1,2}\left[ T_b(T_b-2b^2)\left(\delta_{\epsilon \eta}-\frac{f_{x^{\epsilon}}f_{x^{\eta}}}{W^2} \right)+ 4b^2(T_b +4 b^2) \frac{f_{x^{\epsilon}}f_{x^{\eta}}}{W^2} \right] f_{x^{\epsilon}x^{\eta}}= 0,
\end{equation}
where,
$W^2 = 1 + f^2_{x^1}+ f^2_{x^2}, \qquad T_b = 2W^2 + b^2(W^2-1)$.
\end{theorem}
\begin{proof}
The mean curvature vanishes on tangent vectors of the immersion $\varphi$, so we need to consider a vector field $v$ such that the set $\left\lbrace v, \varphi_{x^1}, \varphi_{x^2} \right\rbrace $ is linearly independent. Hence we take $v=\varphi_{x^1}\times \varphi_{x^2}$, so that $v=\left( v^1,v^2,v^3\right)=\left(-f_{x^1}, -f_{x^2}, 1 \right)$.
Here we have
\begin{equation}\label{eqn4.1}
A =
\begin{pmatrix}
1+f^2_{x^1} & f_{x^1}f_{x^2} \\
f_{x^1}f_{x^2} & 1+f^2_{x^2} \\
\end{pmatrix},
\end{equation}
\begin{equation}\label{eqn4.2}
C=\sqrt{\det A}=W, \quad E=b^2\left( W^2-1\right).
\end{equation}
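Indeed,
\begin{equation*}
\det A=(1+f^2_{x^1})(1+f^2_{x^2})-f^2_{x^1}f^2_{x^2}=1+f^2_{x^1}+f^2_{x^2}=W^2,
\end{equation*}
so $C=W$, while evaluating the sum defining $E$ with $z^1_{\epsilon}=\delta_{\epsilon 1}$, $z^2_{\epsilon}=\delta_{\epsilon 2}$ and $z^3_{\epsilon}=f_{x^{\epsilon}}$ gives $E=b^2(f^2_{x^1}+f^2_{x^2})=b^2(W^2-1)$.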
By some simple calculations we can have
\begin{equation}\label{eqn4.3}
\frac{\partial C}{\partial z^i_{\epsilon}}v^i=0,
\end{equation}
\begin{equation}\label{eqn4.4}
\frac{\partial E}{\partial z^i_{\epsilon}}v^i= 2b^2(\delta_{\epsilon 1}f_{x^1}+\delta_{\epsilon 2}f_{x^2}),
\end{equation}
\begin{equation}\label{eqn4.5}
\frac{\partial C}{\partial z^j_{\eta}}\frac{\partial^2\varphi^j} {\partial x^{\epsilon}\partial x^{\eta}}v^i= \frac{f_{x^1}f_{x^{\epsilon}x^1}+f_{x^2}f_{x^{\epsilon}x^2}}{W},
\end{equation}
\begin{equation}\label{eqn4.6}
\frac{\partial E}{\partial z^j_{\eta}}\frac{\partial^2\varphi^j} {\partial x^{\epsilon}\partial x^{\eta}}= 2b^2(f_{x^1}f_{x^{\epsilon}x^1}+f_{x^2}f_{x^{\epsilon}x^2}),
\end{equation}
\begin{equation}\label{eqn4.7}
\frac{\partial^2 E}{\partial z^i_{\epsilon}\partial z^j_{\eta}}\frac{\partial^2\varphi^j} {\partial x^{\epsilon}\partial x^{\eta}}v^i= 2b^2\left[ (1+f^2_{x^2})f_{x^1x^1}-2f_{x^1}f_{x^2}f_{x^1x^2}+(1+f^2_{x^1})f_{x^2x^2}\right],
\end{equation}
\begin{equation}\label{eqn4.8}
\frac{1}{2}\frac{\partial^2 C^2}{\partial z^i_{\epsilon}\partial z^j_{\eta}}\frac{\partial^2\varphi^j} {\partial x^{\epsilon}\partial x^{\eta}}v^i= \left[ (1+f^2_{x^2})f_{x^1x^1}-2f_{x^1}f_{x^2}f_{x^1x^2}+(1+f^2_{x^1})f_{x^2x^2}\right].
\end{equation}
Using \eqref{eqn4.3} in \eqref{MCE0} we have
\begin{eqnarray}\label{MCE001}
\frac{\partial^2 \varphi^j}{\partial x^{\epsilon}\partial x^{\eta}}v^i\left[ \frac{\partial^2 C^2}{\partial z^i_{\epsilon}\partial z^j_{\eta}}(2C^2+3E)(2C^2+E)-\frac{\partial^2 E}{\partial z^i_{\epsilon}\partial z^j_{\eta}} 2C^2(2C^2+E)\nonumber \right.\\ \left.+\left\lbrace\frac{\partial E }{\partial z^i_\epsilon}\frac{\partial C }{\partial z^j_\eta} \left(4C^3-6CE \right)+4C^2 \frac{\partial E }{\partial z^i_\epsilon}\frac{\partial E }{\partial z^j_\eta} \right\rbrace \right]&=&0. \hspace{1.0cm}
\end{eqnarray}
Let $T_b=2C^2+E$. Then we have the followings:
\begin{equation}
\begin{split}
T_b=2W^2+b^2(W^2-1), \quad 2C^2+3E=2W^2+3b^2(W^2-1), \\ \left(4C^3-6CE \right)=2W\left\lbrace T_b-4b^2(W^2-1)\right\rbrace.
\end{split}
\end{equation}
Putting all these values in \eqref{MCE001} we get
\begin{equation}\label{eqn4.my}
\begin{split}
T_b(T_b-2b^2)\left[ (1+f^2_{x^2})f_{x^1x^1}-2f_{x^1}f_{x^2}f_{x^1x^2}+(1+f^2_{x^1})f_{x^2x^2}\right]\\+4b^2(T_b+4b^2)\left[ f^2_{x^1}f_{x^1x^1}+2f_{x^1}f_{x^2}f_{x^1x^2}+f^2_{x^2}f_{x^2x^2}\right]=0
\end{split}
\end{equation}
Equation \eqref{eqn4.my} can be written as
\begin{equation}
\begin{split}
\left[T_b(T_b-2b^2)\left(W^2-f^2_{x^1} \right)+ 4b^2(T_b+4b^2)f^2_{x^1} \right] f_{x^1x^1}\\ -2\left[ T_b(T_b-2b^2)-4b^2(T_b+4b^2)\right]f_{x^1}f_{x^2} f_{x^1x^2}\\
+\left[ T_b(T_b-2b^2)\left(W^2-f^2_{x^2} \right)+ 4b^2(T_b+4b^2)f^2_{x^2}\right] f_{x^2x^2}=0.
\end{split}
\end{equation}
The above equation is equivalent to \eqref{eqn4.0}. This completes the proof.
\end{proof}
\begin{theorem}\label{thm2}
An immersion $\varphi : U \subset \mathbb{R}^2 \to (V^3, F_b)$ given by $\varphi(x^1, x^2) =(x^1, x^2, f(x^1, x^2))$ is minimal, if and only if, $f$ satisfies
\begin{eqnarray}\label{eqn4.25}
\sum\limits_{ \epsilon, \eta =1,2}\left[ S_b(S_b-2b^2w^2)\left(\delta_{\epsilon\eta}-\frac{f_{x^{\epsilon}}f_{x^{\eta}}}{W^2} \right)\nonumber \right.\\ \left.+ 4b^2(S_b +4 b^2w^2)\left( k_{\epsilon}+\frac{f_{x^{\epsilon}}}{W^2}\right)\left( k_{\eta}+\frac{f_{x^{\eta}}}{W^2}\right) \right] f_{x^{\epsilon}x^{\eta}}&=& 0
\end{eqnarray}
where $k_i$ are real numbers such that $\sum\limits_{i=1}^{3}k^2_i=1$ and
\begin{equation}
W^2 = 1 + f^2_{x^1}+ f^2_{x^2},\quad S_b = b^2 + (2+b^2)W^2, \quad w=-k_1f_{x^1}-k_2f_{x^2}+k_3.
\end{equation}
\end{theorem}
\begin{proof}
The proof of this theorem is similar to that of the previous theorem. Suppose the immersion $\varphi$ is the graph of a function over an open
subset of a plane of $V^3$. Then $\varphi$ can be written in the form
\begin{equation}
\varphi(x^1,x^2)=\left( x^1,x^2,f(x^1,x^2)\right)\left( m_{ij} \right),
\end{equation}
where $(m_{ij} )$ is a $3 \times 3$ orthogonal matrix, $(x^1,x^2)\in U \subset \mathbb{R}^2$ and the surface is a graph over the plane $m_{31}x + m_{32}y + m_{33}z = 0$.\\
We now consider the vector field $v = (v^1, v^2, v^3)$ which is linearly independent with $\varphi_{x^1}$ and $\varphi_{x^2}$. Hence we consider $v=\varphi_{x^1}\times \varphi_{x^2}$. Therefore,
\begin{equation*}
v^i = -f_{x^1}m_{1i} -f_{x^2}m_{2i} + m_{3i} .
\end{equation*}
Now note that
\begin{equation}
z^i_{\eta}= \frac{\partial \varphi^i}{\partial x^{\eta}}=m_{\eta i}+f_{x^{\eta}}m_{3i}, \quad \frac{\partial^2 \varphi^i}{\partial x^{\epsilon}\partial x^{\eta}}=f_{x^{\epsilon}{x^{\eta}}}m_{3i}.
\end{equation}
Further, for all $i=1,2,3$ and $\eta,\gamma,\epsilon=1,2$, we have,
\begin{equation}
\sum\limits_{i=1}^{3}z^i_{\eta}v^i=0,\quad \sum\limits_{i=1}^{3}v^im_{3i}=1, \quad \sum\limits_{i=1}^{3}z^i_{\eta}m_{3i}=f_{x^{\eta}}, \quad \sum\limits_{i=1}^{3}z^i_{\gamma}\frac{\partial^2 \varphi^i}{\partial x^{\epsilon}\partial {x^{\eta}}}=f_{x^{\gamma}}f_{x^{\epsilon}{x^{\eta}}}.
\end{equation}
Here the values of $A$ and $C$ are as given in \eqref{eqn4.1} and \eqref{eqn4.2} respectively, while $E=b^2(W^2-w^2)$ with $w=v^3$. Let $m_{3i}=k_i$. Then, proceeding as in Theorem \ref{thm4.1}, we obtain the following:
\begin{equation}\label{eqn4.10}
\frac{\partial C}{\partial z^i_{\epsilon}}v^i=0,
\end{equation}
\begin{equation}\label{eqn4.11}
\frac{\partial E}{\partial z^i_{\epsilon}}v^i= 2b^2\left( z^3_{\epsilon}A_{\bar{\epsilon}\bar{\epsilon}}-z^3_{\bar{\epsilon}}A_{\epsilon\bar{\epsilon}}\right) w, \quad \forall \epsilon
\end{equation}
\begin{equation}\label{eqn4.12}
\frac{\partial C}{\partial z^j_{\eta}}\frac{\partial^2\varphi^j} {\partial x^{\epsilon}\partial x^{\eta}}v^i= \frac{f_{x^1}f_{x^{\epsilon}x^1}+f_{x^2}f_{x^{\epsilon}x^2}}{W}, \quad \forall \epsilon
\end{equation}
\begin{equation}\label{eqn4.13}
\frac{\partial E}{\partial z^j_{\eta}}\frac{\partial^2\varphi^j} {\partial x^{\epsilon}\partial x^{\eta}}= 2b^2\left[ \left( f_{x^1}+k_1w\right) f_{x^{\epsilon}x^1}+\left( f_{x^2}+k_2w\right)f_{x^{\epsilon}x^2}\right] , \quad \forall \epsilon
\end{equation}
\begin{eqnarray}\label{eqn4.14}
\frac{\partial^2 E}{\partial z^i_{\epsilon}\partial z^j_{\eta}}\frac{\partial^2\varphi^j} {\partial x^{\epsilon}\partial x^{\eta}}v^i= 2b^2\left[\left\lbrace 1+f^2_{x^2}-k_1\left( k_1W^2+f_{x^1}w\right) \right\rbrace f_{x^1x^1}\nonumber\right.\\ \left.-\left\lbrace \left(1+k^2_3 \right)f_{x^1}f_{x^2}+ k_1k_2W^2+k_1k_3f_{x^2}+k_2k_3f_{x^1}+k_1k_2\right\rbrace f_{x^1x^2}\nonumber\right.\\ \left.+ \left\lbrace 1+f^2_{x^1}-k_2\left( k_2W^2+f_{x^2}w\right) \right\rbrace f_{x^2x^2}\right],
\end{eqnarray}
\begin{equation}\label{eqn4.15}
\frac{1}{2}\frac{\partial^2 C^2}{\partial z^i_{\epsilon}\partial z^j_{\eta}}\frac{\partial^2\varphi^j} {\partial x^{\epsilon}\partial x^{\eta}}v^i= \left[ (1+f^2_{x^1})f_{x^2x^2}-2f_{x^1}f_{x^2}f_{x^1x^2}+(1+f^2_{x^2})f_{x^1x^1}\right].
\end{equation}
Using \eqref{eqn4.10} in \eqref{MCE0} we have
\begin{eqnarray}\label{MCE003}
\frac{\partial^2 \varphi^j}{\partial x^{\epsilon}\partial x^{\eta}}v^i\left[ \frac{\partial^2 C^2}{\partial z^i_{\epsilon}\partial z^j_{\eta}}(2C^2+3E)(2C^2+E)-\frac{\partial^2 E}{\partial z^i_{\epsilon}\partial z^j_{\eta}} 2C^2(2C^2+E)\nonumber \right.\\ \left.+\left\lbrace\frac{\partial E }{\partial z^i_\epsilon}\frac{\partial C }{\partial z^j_\eta} \left(4C^3-6CE \right)+4C^2 \frac{\partial E }{\partial z^i_\epsilon}\frac{\partial E }{\partial z^j_\eta} \right\rbrace \right]&=&0 \hspace{1.0cm}
\end{eqnarray}
Let $S_b=2C^2+E$. Then,
\begin{equation}
\begin{split}
S_b=2W^2+b^2(W^2-w^2), \quad 2C^2+3E=2W^2+3b^2(W^2-w^2), \\ \left(4C^3-6CE \right)=2W\left\lbrace S_b-4b^2(W^2-w^2)\right\rbrace.
\end{split}
\end{equation}
Putting all these values in \eqref{MCE003} we get \eqref{eqn4.25}.
\end{proof}
\begin{remark}
Observe that when $k_1 = k_2 = 0$ and $k_3 = 1$, equation \eqref{eqn4.25} reduces to \eqref{eqn4.0}.
\end{remark}
\begin{definition}\cite{LS1}
A differential equation is said to be an elliptic equation of mean curvature type on a domain $\Omega\subset \mathbb{R}^2$ if
\begin{equation}
\sum\limits_{ \epsilon,\eta=1,2}a_{\epsilon\eta}(x,f,\nabla f)f_{x^{\epsilon}x^{\eta}}=0
\end{equation}
where $a_{\epsilon\eta}, \epsilon,\eta = 1, 2$ are given real-valued functions on $\Omega \times \mathbb{R} \times \mathbb{R}^2$, $x\in \Omega$, $f:\Omega \to \mathbb{R}$, and there exists a constant $\mathcal{C}\ge 0$ with
\begin{equation}
|\xi|^2- \frac{(p\cdot \xi)^2}{1+|p|^2}\le\sum\limits_{ \epsilon,\eta=1,2}a_{\epsilon\eta}(x,u,p)\xi_{\epsilon}\xi_{\eta}\le\left(1+\mathcal{C} \right) \left[|\xi|^2- \frac{(p\cdot\xi)^2}{1+|p|^2}\right]
\end{equation}
for all $u \in \mathbb{R}$, $p \in \mathbb{R}^2$ and $\xi \in \mathbb{R}^2\setminus \left\lbrace 0 \right\rbrace $.
\end{definition}
\begin{theorem}\label{thm3}
Let $\varphi : U \subset \mathbb{R}^2 \to (V^3, F_b)$ be an immersion which is the graph of a function $f (x^1, x^2)$ over a plane. Then $\varphi$ is minimal, if and only if, $f$ satisfies the elliptic differential equation, of mean curvature type, given by
\begin{equation}\label{eqn4.34}
\sum\limits_{ \epsilon,\eta=1,2}a_{\epsilon\eta}(x,f,\nabla f)f_{x^{\epsilon}x^{\eta}}=0
\end{equation}
where,
\begin{equation}
a_{\epsilon\eta}=\delta_{\epsilon\eta}-\frac{f_{x^{\epsilon}}f_{x^{\eta}}}{W^2}+R_bW^2\left( k_{\epsilon}+\frac{f_{x^{\epsilon}}}{W^2}\right)\left( k_{\eta}+\frac{f_{x^{\eta}}}{W^2}\right),
\end{equation}
\begin{equation}
R_b=\frac{4b^2(S_b +4 b^2w^2)}{S_b(S_b-2b^2w^2)}.
\end{equation}
\end{theorem}
\begin{proof}
In Theorem \ref{thm2} we have already proved that $\varphi$ is minimal if and only if it satisfies \eqref{eqn4.25}. Since $0<b<1/2$ for a Matsumoto metric, it follows from the definition that $S_b>0$. Moreover,
\begin{equation}\label{eqn4.35}
(S_b-2b^2w^2)=b^2+(2+b^2)W^2-2b^2w^2=b^2+(2-b^2)W^2+2b^2(W^2-w^2)
\end{equation}
Now,
\begin{equation}\label{eqn4.36}
W^2-w^2=(k_2f_{x^1}-k_1f_{x^2})^2+(k_1+k_3f_{x^1})^2+(k_2+k_3f_{x^2})^2>0.
\end{equation}
Since, $0<b<1/2$, using \eqref{eqn4.36} in \eqref{eqn4.35}, we have, $(S_b-2b^2w^2)>0$.\\
Now dividing both sides of \eqref{eqn4.25} by $S_b(S_b-2b^2w^2)$, we get \eqref{eqn4.34}.\\
Let $\xi\in \mathbb{R}^2\setminus \left\lbrace 0 \right\rbrace $, $x,t\in \mathbb{R}^2$ and $u\in \mathbb{R}$, and define
\begin{equation}
h_{\epsilon\eta}(t)= \delta_{\epsilon\eta}-\frac{t_{\epsilon}t_{\eta}}{W^2},
\end{equation}
where now $W^2=1+|t|^2$.
Hence, we have,
\begin{equation}\label{eqn5.36}
\sum\limits_{\epsilon,\eta=1}^{2}h_{\epsilon\eta}(t)\xi_{\epsilon}\xi_{\eta}=\frac{|\xi|^2}{W^2}(1+|t|^2\sin^2 \theta),
\end{equation}
where $\theta$ is the angle between $t$ and $\xi$. We also have
\begin{equation}
\sum\limits_{\epsilon,\eta=1}^{2}a_{\epsilon\eta}(x,u,t)\xi_{\epsilon}\xi_{\eta}=\sum\limits_{\epsilon\eta=1}^{2}h_{\epsilon\eta}(t)\xi_{\epsilon}\xi_{\eta}+R_bW^2\left[(k_1,k_2)\cdot\xi+\frac{w}{W^2}t\cdot\xi \right]^2,
\end{equation}
where $\cdot$ represents the Euclidean inner product.\\
Since $R_b>0$, for all $\xi\in \mathbb{R}^2\setminus \left\lbrace 0 \right\rbrace $, from \eqref{eqn5.36} we have,
\begin{equation}\label{eqn4.37}
\sum\limits_{\epsilon,\eta=1}^{2}a_{\epsilon\eta}(x,u,t)\xi_{\epsilon}\xi_{\eta}\ge\sum\limits_{\epsilon,\eta=1}^{2}h_{\epsilon\eta}(t)\xi_{\epsilon}\xi_{\eta}\ge\frac{|\xi|^2}{W^2}>0.
\end{equation}
Hence, \eqref{eqn4.34} is an elliptic equation.
Now we prove that it is a differential equation of mean curvature type, for which we need to show that there exists a constant $\mathcal{C}$ such that, for all $\xi\in \mathbb{R}^2\setminus \left\lbrace 0 \right\rbrace $,
\begin{equation}
\sum\limits_{\epsilon, \eta=1}^{2}h_{\epsilon \eta}(x,u,t)\xi_{\epsilon}\xi_{\eta} \le \sum\limits_{\epsilon, \eta=1}^{2}a_{\epsilon \eta}(x,u,t)\xi_{\epsilon}\xi_{\eta}\le(1+\mathcal{C})\sum\limits_{\epsilon, \eta=1}^{2}h_{\epsilon \eta}(x,u,t)\xi_{\epsilon}\xi_{\eta}.
\end{equation}
The first inequality is immediate from \eqref{eqn4.37}. To prove the second inequality we need to show that
\begin{equation}
R_bW^2\left[(k_1,k_2)\cdot\xi+\frac{w}{W^2}t\cdot\xi \right]^2\le\mathcal{C}\sum\limits_{\epsilon,\eta=1}^{2}h_{\epsilon\eta}(x,u,t)\xi_{\epsilon}\xi_{\eta},
\end{equation}
where, $w=-k_1t_1-k_2t_2+k_3$.\\
From \eqref{eqn5.36} we have,
\begin{equation}
W^2\left[(k_1,k_2)\cdot\xi+\frac{w}{W^2}t\cdot\xi \right]^2=\frac{\left[ W^2|(k_1,k_2)|\cos \gamma+w|t|\cos \theta\right] ^2}{1+|t|^2\sin^2\theta}\sum\limits_{\epsilon, \eta=1}^{2}h_{\epsilon \eta}(x,u,t)\xi_{\epsilon}\xi_{\eta},
\end{equation}
where $\gamma$ is the angle between $(k_1,k_2)$ and $\xi$. Hence, we need to show that
\begin{equation}\label{eqn4.39}
R_b\frac{\left[ W^2|(k_1,k_2)|\cos \gamma+w|t|\cos \theta\right] ^2}{1+|t|^2\sin^2\theta}\le\mathcal{C}.
\end{equation}
It can be seen that $W^2\ge 1$. If $W^2 = 1$, then $t = 0$. In that case, we have
\begin{equation*}
0\le R_b\left[ |(k_1,k_2)|\cos \gamma\right] ^2\le R_b(0)(k_1^2+k_2^2).
\end{equation*}
Therefore, taking $\mathcal{C}=R_b(0)(k_1^2+k_2^2)$ we prove the inequality.\\
Now suppose $W^2 > 1$ and $\sin \theta = 0$. In that case $t \ne 0$ and the vectors $t$ and $\xi$ are parallel to each other. Hence,
\begin{equation}\label{eqn4.40}
\left[ W^2 |(k_1, k_2)| \cos \gamma + w|t| \cos \theta\right]^2 = \left[ |(k_1, k_2)| \cos \gamma + k_3|t| \cos \theta\right]^2.
\end{equation}
Equation \eqref{eqn4.40} implies that $R_b\frac{\left[ W^2|(k_1,k_2)|\cos \gamma+w|t|\cos \theta\right] ^2}{1+|t|^2\sin^2\theta}$ is a rational function of $|t|$ whose numerator is of degree less than or equal to $4$, and denominator is of degree $4$ and hence it is a bounded function as $|t|$ (or, equivalently $W$) tends to infinity.\\
Now suppose $W^2 > 1$ and $\sin \theta \ne 0$; then $t \ne 0$ and the vectors $t$ and $\xi$ are not parallel.
Therefore, $R_b\frac{\left[ W^2|(k_1,k_2)|\cos \gamma+w|t|\cos \theta\right] ^2}{1+|t|^2\sin^2\theta}$ is a rational function of $|t|$ whose numerator is of degree less than or equal to $6$, and whose denominator is of degree $6$. Therefore, it is a bounded function as $|t|$ (or, equivalently, $W$) tends to infinity. This proves the inequality \eqref{eqn4.39}, and hence the theorem.
\end{proof}
\par Now, from the theorem proved by L. Simon (Theorem 4.1 of \cite{LS2}) and from Theorem \ref{thm3}, we conclude the following:
\begin{theorem}
A minimal surface in a Matsumoto space $(V^3,F_b)$, which is a graph of a function defined on $\mathbb{R}^2$, is a plane.
\end{theorem}
\section{The characterization of minimal surfaces of translation surfaces}
In this section we study minimal translation surfaces $M^2$ in a Matsumoto space $(V^3,F_b)$, where $V^3$ is a real vector space and $F_b=\frac{{\tilde{\alpha}}^2}{\tilde{\alpha}-\tilde{\beta}}$ is a Matsumoto metric, with $\tilde{\alpha}$ the Euclidean metric and $\tilde{\beta}=bdx^3$ a one-form. Here we consider the immersion $\varphi : U \subset \mathbb{R}^2 \to (V^3, F_b)$ given by $\varphi(x^1, x^2) =(x^1, x^2, f(x^1)+g(x^2))$. At first we show that the pullback metric of $F_b$ by $\varphi$ is again a Matsumoto metric, and then find the characterization equation for the surface to be minimal.
\begin{proposition}
Let $F_b=\frac{{\tilde{\alpha}}^2}{\tilde{\alpha}-\tilde{\beta}}$ be a Matsumoto metric, where $\tilde{\alpha}$ is the Euclidean metric and $\tilde{\beta}=bdx^3$ is a one-form on the real vector space $V^3$, and suppose $\varphi : U \subset \mathbb{R}^2 \to (V^3, F_b)$ given by $\varphi(x^1, x^2) =(x^1, x^2, f(x^1)+g (x^2))$, where $f$ and $g$ are real-valued smooth functions, is an immersion. Then the pullback metric on $U$ defined by \eqref{eqn2.1} is again a Matsumoto metric.
\end{proposition}
\begin{proof}
We have,
\begin{equation}
F_b=\frac{{\tilde{\alpha}}^2}{\tilde{\alpha}-\tilde{\beta}}=\frac{(d\tilde{x}^1)^2+(d\tilde{x}^2)^2+(d\tilde{x}^3)^2}{\sqrt{(d\tilde{x}^1)^2+(d\tilde{x}^2)^2+(d\tilde{x}^3)^2}-bd\tilde{x}^3}
\end{equation}
Now,
\begin{equation}
\begin{split}
\varphi^*(d\tilde{x}^1)=dx^1, \quad \varphi^*(d\tilde{x}^2)=dx^2, \\ \varphi^*(d\tilde{x}^3)=d(f(x^1)+g (x^2))=f_{x^1}dx^1+g_{x^2}dx^2
\end{split}
\end{equation}
Therefore,
\begin{equation}
\varphi^*F_b=\frac{(1+f^2_{x^1})(dx^1)^2+2f_{x^1}g_{x^2}dx^1dx^2+(1+g^2_{x^2})(dx^2)^2}{\sqrt{(1+f^2_{x^1})(dx^1)^2+2f_{x^1}g_{x^2}dx^1dx^2+(1+g^2_{x^2})(dx^2)^2}-b(f_{x^1}dx^1+g_{x^2}dx^2)}
\end{equation}
which is a Matsumoto metric of the form $\frac{\alpha^2}{\alpha-\beta}$, where
\begin{equation*}
\alpha^2=(1+f^2_{x^1})(dx^1)^2+2f_{x^1}g_{x^2}dx^1dx^2+(1+g^2_{x^2})(dx^2)^2
\end{equation*}
is a Riemannian metric and
\begin{equation*}
\beta=b(f_{x^1}dx^1+g_{x^2}dx^2)
\end{equation*}
is a one-form.
\end{proof}
\par Let us consider the following immersion:
\begin{equation}
\varphi(x^1,x^2)=(\varphi^1,\varphi^2,\varphi^3)=\left( x^1,x^2,f(x^1)+g(x^2)\right)
\end{equation}
Then we can write
\begin{equation}
\varphi^j=\delta_{j1}x^1+\delta_{j2}x^2+(f+g)\delta_{j3}, \quad 1\le j \le 3.
\end{equation}
Therefore, we get
\begin{equation}\label{eqn5.1}
A =
\begin{pmatrix}
1+f^2_{x^1} & f_{x^1}g_{x^2} \\
f_{x^1}g_{x^2} & 1+g^2_{x^2} \\
\end{pmatrix},
\end{equation}
\begin{equation}\label{eqn5.2}
C=\sqrt{det A}=\sqrt{1+f^2_{x^1}+g^2_{x^2}} \quad \textnormal{and} \quad E=b^2(f^2_{x^1}+g^2_{x^2}).
\end{equation}
Here we choose $v=\varphi_{x^1}\times \varphi_{x^2} $. Then $v=(v^1,v^2,v^3)=(-f_{x^1},-g_{x^2},1)$. Hence, $v^i=-\delta_{i1}f_{x^1}-\delta_{i2}g_{x^2}+\delta_{i3}, \quad 1\le i\le 3$.\\
By some simple calculations we can have
\begin{equation}\label{eqn5.3}
\frac{\partial C}{\partial z^i_{\epsilon}}v^i=0,
\end{equation}
\begin{equation}\label{eqn5.4}
\frac{\partial E}{\partial z^i_{\epsilon}}v^i= 2b^2(\delta_{\epsilon 1}f_{x^1}+\delta_{\epsilon 2}g_{x^2}),
\end{equation}
\begin{equation}\label{eqn5.5}
\frac{\partial C}{\partial z^j_{\eta}}\frac{\partial^2\varphi^j} {\partial x^{\epsilon}\partial x^{\eta}}= \frac{\delta_{\epsilon 1}f_{x^1}f_{x^1x^1}+\delta_{\epsilon 2}g_{x^2}g_{x^2x^2}}{C},
\end{equation}
\begin{equation}\label{eqn5.6}
\frac{\partial E}{\partial z^j_{\eta}}\frac{\partial^2\varphi^j} {\partial x^{\epsilon}\partial x^{\eta}}= 2b^2(\delta_{\epsilon 1}f_{x^1}f_{x^1x^1}+\delta_{\epsilon 2}g_{x^2}g_{x^2x^2}),
\end{equation}
\begin{equation}\label{eqn5.7}
\frac{\partial^2 E}{\partial z^i_{\epsilon}\partial z^j_{\eta}}\frac{\partial^2\varphi^j} {\partial x^{\epsilon}\partial x^{\eta}}v^i= 2b^2\left[ (1+g^2_{x^2})f_{x^1x^1}+(1+f^2_{x^1})g_{x^2x^2}\right] ,
\end{equation}
\begin{equation}\label{eqn5.8}
\frac{\partial^2 C^2}{\partial z^i_{\epsilon}\partial z^j_{\eta}}\frac{\partial^2\varphi^j} {\partial x^{\epsilon}\partial x^{\eta}}v^i= 2\left[ (1+g^2_{x^2})f_{x^1x^1}+(1+f^2_{x^1})g_{x^2x^2}\right].
\end{equation}
Using \eqref{eqn5.3} in \eqref{MCE0} we have
\begin{eqnarray}\label{MCE002}
\frac{\partial^2 \varphi^j}{\partial x^{\epsilon}\partial x^{\eta}}v^i\left[ \frac{\partial^2 C^2}{\partial z^i_{\epsilon}\partial z^j_{\eta}}(2C^2+3E)(2C^2+E)-\frac{\partial^2 E}{\partial z^i_{\epsilon}\partial z^j_{\eta}} 2C^2(2C^2+E)\nonumber \right.\\ \left.+\left\lbrace\frac{\partial E }{\partial z^i_\epsilon}\frac{\partial C }{\partial z^j_\eta} \left(4C^3-6CE \right)+4C^2 \frac{\partial E }{\partial z^i_\epsilon}\frac{\partial E }{\partial z^j_\eta} \right\rbrace \right]&=&0. \hspace{1.0cm}
\end{eqnarray}
Therefore, using \eqref{eqn5.2} to \eqref{eqn5.8} in \eqref{MCE002} we obtain
\begin{equation}\label{eqn5.11}
\begin{split}
f_{x^1x^1}\Big[(1+g^2_{x^2})\left\lbrace 2+(2+b^2)(f^2_{x^1}+g^2_{x^2})\right\rbrace \left\lbrace 2(1-b^2)+(2+b^2)(f^2_{x^1}+g^2_{x^2})\right\rbrace \\ +6b^2f^2_{x^1}\left\lbrace 2+(2-b^2)(f^2_{x^1}+g^2_{x^2})\right\rbrace \Big] \\ +g_{x^2x^2}\Big[(1+f^2_{x^1})\left\lbrace 2+(2+b^2)(f^2_{x^1}+g^2_{x^2})\right\rbrace \left\lbrace 2(1-b^2)+(2+b^2)(f^2_{x^1}+g^2_{x^2})\right\rbrace \\ +6b^2g^2_{x^2}\left\lbrace 2+(2-b^2)(f^2_{x^1}+g^2_{x^2})\right\rbrace \Big]=0 .
\end{split}
\end{equation}
Hence, we have the following theorem,
\begin{theorem}
Let $\varphi:M^2 \to (V^3,F_b)$ be a translation surface immersion in a Matsumoto space, given by $\varphi(x^1,x^2)=(x^1,x^2,f(x^1)+g(x^2))$. Then $\varphi$ is minimal if and only if
\begin{equation}\label{eqn5.12}
\lambda f_{x^1x^1}+ \mu g_{x^2x^2}=0
\end{equation}
where,
\begin{equation}\label{eqn5.13}
\begin{split}
\lambda=(1+g^2_{x^2})\left[2+(2+b^2)(f^2_{x^1}+g^2_{x^2})\right]\left[ 2(1-b^2)+(2+b^2)(f^2_{x^1}+g^2_{x^2})\right]\\
+6b^2f^2_{x^1}\left\lbrace 2+(2-b^2) (f^2_{x^1}+g^2_{x^2})\right\rbrace
\end{split}
\end{equation}
and
\begin{equation}\label{eqn5.14}
\begin{split}
\mu= (1+f^2_{x^1})\left[2+(2+b^2)(f^2_{x^1}+g^2_{x^2})\right]\left[ 2(1-b^2)+(2+b^2)(f^2_{x^1}+g^2_{x^2})\right]\\
+6b^2g^2_{x^2}\left\lbrace 2+(2-b^2) (f^2_{x^1}+g^2_{x^2})\right\rbrace
\end{split}
\end{equation}
\end{theorem}
Now we want to solve the differential equation \eqref{eqn5.11}. Let $r=f^2_{x^1}$ and $s=g^2_{x^2}$. Then
\begin{equation}
f_{x^1x^1}=\frac{r_f}{2}, \qquad g_{x^2x^2}=\frac{s_g}{2}.
\end{equation}
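Indeed, by the chain rule,
\begin{equation*}
2f_{x^1}f_{x^1x^1}=\frac{d}{dx^1}\left(f^2_{x^1}\right)=r_f\, f_{x^1},
\end{equation*}
so that $f_{x^1x^1}=r_f/2$, and similarly for $g$.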
Then \eqref{eqn5.13} and \eqref{eqn5.14} become
\begin{equation}\label{eqn5.15}
\begin{split}
\lambda=(1+s)\left[2+(2+b^2)(r+s)\right]\left[ 2(1-b^2)+(2+b^2)(r+s)\right]\\
+6b^2r\left\lbrace 2+(2-b^2) (r+s)\right\rbrace
\end{split}
\end{equation}
and
\begin{equation}\label{eqn5.16}
\begin{split}
\mu=(1+r)\left[2+(2+b^2)(r+s)\right]\left[ 2(1-b^2)+(2+b^2)(r+s)\right]\\
+6b^2s\left\lbrace 2+(2-b^2) (r+s)\right\rbrace
\end{split}
\end{equation}
And \eqref{eqn5.12} becomes
\begin{equation}
r_f\lambda+s_g\mu=0
\end{equation}
Therefore, we have two cases:\\
\textbf{Case 1:} If $r_f=0$ or, $s_g=0$, then $r$ and $s$ are constant functions. And hence $f$ and $g$ are linear functions. Therefore, $M^2$ is a piece of plane in $(V^3, F_b)$.\\
\textbf{Case 2:} Let $r_f\ne 0$ and $s_g \ne 0$. Then we have, $\lambda\ne 0$ and $\mu \ne 0$. Suppose
\begin{equation}
\kappa =\frac{r_f}{\mu}=-\frac{s_g}{\lambda}.
\end{equation}
This implies that
\begin{equation*}
(r_f)_g=\mu_g\kappa+\mu\kappa_g=0 \quad \textnormal{and} \quad (s_g)_f=\lambda_f\kappa+\lambda\kappa_f=0
\end{equation*}
Hence, we have,
\begin{equation}
(\log\kappa)_f= \frac{\kappa_f}{\kappa}=-\frac{\lambda_f}{\lambda}\quad \textnormal{and} \quad (\log\kappa)_g=\frac{\kappa_g}{\kappa}=-\frac{\mu_g}{\mu}.
\end{equation}
Since $\left((\log{\kappa})_f\right)_g=\left((\log{\kappa})_g\right)_f$, we have,
\begin{equation}\label{eqn5.17}
\left(\frac{\lambda_f}{\lambda} \right)_g =\left( \frac{\mu_g}{\mu}\right)_f.
\end{equation}
We can easily observe that, $r_g=(r_f)_g=0$ and $s_f=(s_g)_f=0$. Therefore, we have,
\begin{equation}\label{eqn5.18}
\left(\frac{\lambda_f}{\lambda} \right)_g =\left( \frac{\lambda_rr_f}{\lambda}\right) _g=\left( \frac{\lambda_r}{\lambda}\right) _gr_f=\left( \frac{\lambda_r}{\lambda}\right) _sr_fs_g
\end{equation}
and
\begin{equation}\label{eqn5.19}
\left(\frac{\mu_g}{\mu} \right)_f =\left( \frac{\mu_ss_g}{\mu}\right) _f=\left( \frac{\mu_s}{\mu}\right) _fs_g=\left( \frac{\mu_s}{\mu}\right) _rr_fs_g
\end{equation}
Using \eqref{eqn5.18} and \eqref{eqn5.19} in \eqref{eqn5.17} we get,
\begin{equation}
\left( \frac{\lambda_r}{\lambda}\right) _s =\left( \frac{\mu_s}{\mu}\right) _r.
\end{equation}
That is,
\begin{equation}\label{eqn5.42}
\left( \log \frac{\lambda}{\mu}\right)_{rs}=0
\end{equation}
Let $p=r+s$ and $q=r-s$. Then we have
\begin{equation}
\lambda=K(p)-L(p)q, \qquad \mu=K(p)+L(p)q
\end{equation}
where,
\begin{equation}\label{eqn5.201}
K(p)=4(1-b^2)+\frac{p}{2}(20+8b^2-4b^4)+\frac{p^2}{2}(16+20b^2-6b^4)+\frac{p^3}{2}(2+b^2)^2
\end{equation}
\begin{equation}\label{eqn5.202}
L(p)=2(1-4b^2)+\frac{p}{2}(8-12b^2+4b^4)+\frac{p^2}{2}(2+b^2)^2
\end{equation}
Now from \eqref{eqn5.42} it follows that
\begin{equation}\label{eqn5.20}
\left( \log \frac{\lambda}{\mu}\right)_{rs}=\left( \log \frac{\lambda}{\mu}\right)_{pp}-\left( \log \frac{\lambda}{\mu}\right)_{qq}=0
\end{equation}
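This follows from the change of variables $p=r+s$, $q=r-s$, for which
\begin{equation*}
\partial_r=\partial_p+\partial_q, \qquad \partial_s=\partial_p-\partial_q, \qquad \textnormal{so that} \qquad \partial_r\partial_s=\partial^2_p-\partial^2_q.
\end{equation*}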
Now substitute the values of $\lambda$ and $\mu$ in \eqref{eqn5.20} we get
\begin{equation}
\begin{split}
q^3\left(K_{pp}L^3-KL^2L_{pp}-2K_pL_pL^2+2KLL^2_p \right) \\ +q\left( -K_{pp}K^2L+K^3L_{pp}-2K_pK^2L_p+2K^2_pKL-2KL^3\right) =0.
\end{split}
\end{equation}
Since $q$ is arbitrary, we get
\begin{equation}\label{eqn5.21}
K_{pp}L^3-KL^2L_{pp}-2K_pL_pL^2+2KLL^2_p=0
\end{equation}
\begin{equation}\label{eqn5.22}
-K_{pp}K^2L+K^3L_{pp}-2K_pK^2L_p+2K^2_pKL-2KL^3=0
\end{equation}
From \eqref{eqn5.21} and \eqref{eqn5.22} we can easily obtain
\begin{equation}\label{eqn5.23}
\left[ \left(\frac{K}{L} \right)_p \right]^2=1.
\end{equation}
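To see this, set $u=K/L$. Since
\begin{equation*}
\left(\frac{K}{L}\right)_{pp}=\frac{K_{pp}L^3-KL^2L_{pp}-2K_pL_pL^2+2KLL^2_p}{L^4},
\end{equation*}
equation \eqref{eqn5.21} states precisely that $u_{pp}=0$, and substituting $K=uL$ with $u_{pp}=0$ into \eqref{eqn5.22} reduces it to $2KL^3(u_p^2-1)=0$, whence $u_p^2=1$.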
Therefore,
\begin{equation}\label{eqn5.24}
\frac{K}{L}=p+\frac{8+32b^2-10b^4}{(2+b^2)^2}+\frac{4b^4}{T}\left(\frac{132-60b^2+9b^4}{(2+b^2)^2}p+\frac{2(66-21b^2)}{(2+b^2)^2} \right)
\end{equation}
where,
\begin{equation*}
T=(4-16b^2)+p(8-12b^2+4b^4)+p^2(2+b^2)^2
\end{equation*}
Now differentiating \eqref{eqn5.24} with respect to $p$ we get
\begin{equation}\label{eqn5.25}
\left( \frac{K}{L}\right)_p=1- \frac{4b^4}{T'}\left(\frac{9b^4-102b^2+264}{(b^2+2)^2}+(4b^4-12b^2+8+2(b^2+2)^2p)\right)
\end{equation}
where $T'=\left((4-16b^2)+(8-12b^2+4b^4)p+(2+b^2)^2p^2\right)^2=T^2$.\\
Now \eqref{eqn5.23} holds if and only if $b=0$. Hence, we obtain the following theorem:
\begin{theorem}
A minimal surface in a Matsumoto space $(V^3,F_b)$, which is the translation surface defined on $\mathbb{R}^2$, is a plane.
\end{theorem}
\section{Introduction}
In the companion paper~\cite{mass_function_paper} (referred to as Ref.~I) a model for the $q\bar q$ interaction was developed that uses the NJL mechanism to ensure that a pion bound state of zero mass exists whenever mass can be spontaneously generated through the self-interactions that dress a massless quark. A novel feature of this model is its simplicity; in momentum space the kernel is the sum of a pure vector $\delta$-function interaction and an interaction which provides confinement, so that even though a feature of the Covariant Spectator Theory (CST)~\cite{Gro69,Gro74,Gro82} is that one of the quarks can be on-shell (where both the $q$ and $\bar q$ will sometimes be referred to collectively as ``quarks''), both quarks in the pair can never be on shell simultaneously. The confining interaction can be a mixture of vector and scalar exchanges, but in the chiral limit (where the undressed mass of the quark, $m_0$, is zero) the scalar part of the confining interaction decouples, allowing the
chirally invariant
vector
interactions to preserve the features of chiral symmetry. In Ref.~I the mass function was calculated by fitting two model parameters to lattice data, and the bound-state $q\bar q$ equations were defined and their properties studied.
It is the purpose of this paper to show that the simple model introduced and fixed in Ref.~I can be used to calculate the pion form factor without modifications. This is the first demonstration showing how the model can be applied to a variety of interesting physics problems. Even though the CST has been well studied, and used in previous calculations of nuclear form factors~\cite{Gross:2006fg,Pinto:2009dh}, this calculation introduces a number of new issues never before encountered. The discussion here will not only lead to some interesting new results, but also extend understanding of how to use the CST. Discussion of the results, and comparison with some previous work, is saved for the last section.
\section{Pion form factor in the Bethe-Salpeter theory}
\label{sec:piff}
The electromagnetic pion form factor in the spacelike region has been calculated in a great variety of different approaches, see, e.g. Refs.~\cite{Maris:2000,Chang:2013nia,Coester:2005cv,Brodsky:2007hb,deMelo:2005cy,Carbonell:2008tz,Biernat:2009my,Ebert:2005es,Masjuan:2008,Ananthanarayan:2012aa,Ananthanarayan:2012tt,Ananthanarayan:2013dpa,Troitsky:2013aa}. We begin by reviewing the discussion of the pion form factor in the Bethe-Salpeter (BS) formalism. We will consider a positively charged $\pi^+$ consisting of a $u$ and a $\bar d$ quark; the form factor for the $\pi^-$ can be obtained by charge conjugation. In impulse approximation, the electromagnetic form factor of the $\pi^+$ is extracted from the sum of two triangle diagrams, in which the photon couples either to the $u$ or the $\bar d$ quark, as depicted in Fig.~\ref{fig:BStriangle}.
The top diagram, with the $\bar d$ quark as spectator, is weighted by the $u$-quark's electric charge $\frac23\mathrm e$, while the bottom diagram, with the $u$ quark as spectator, is weighted by the electric charge $-\frac13\mathrm e$ of the $d$ quark traveling backward in time. The sum of the two diagrams is
\begin{widetext}
\begin{eqnarray}\label{eq:BSpicurrent}
J^\mu(P_+,P_-)=\mathrm e F_\pi (Q^2) (P_++P_-)^\mu
&=&\frac23\mathrm e\int \frac{\mathrm d^4k}{(2\pi)^4}\,
\mathrm {tr}\Big[\overline{\Gamma}_{\rm BS}(k,p_+) S(p_+)
j^\mu (p_+,p_-) S(p_-) \Gamma_{\rm BS}(p_-, k)S(k)\Big]
\nonumber\\&&
-\frac13\mathrm e\int \frac{\mathrm d^4k}{(2\pi)^4}\,
\mathrm {tr}\Big[\Gamma_{\rm BS}(k,p_-') S(p_-')
j^\mu (p_-',p_+')S(p_+') \overline{\Gamma}_{\rm BS}(p_+',k)S(k)\Big]\,,
\end{eqnarray}
\end{widetext}
where $p_\pm=k+P_\pm$, $p'_\pm=k-P_\pm$, $j^\mu (p_+,p_-)$ is the dressed current for off-shell quarks (defined below), and $S(p)$ is the dressed propagator of a quark with momentum $p$
\begin{eqnarray}
S(p)=Z(p^2)\frac{M(p^2)+\slashed p}{M^2(p^2)-p^2-\mathrm i \epsilon} \, ,
\end{eqnarray}
with $M(p^2)$ being the quark mass function and $Z(p^2)$ the wave function renormalization, as discussed in Ref.~I and reviewed below. For the model considered here, $Z(p^2)=1$.
The quark mass function $M(p^2)$ is obtained from the solution of the CST Dyson equation for the self-energy, and the constituent (dressed) mass $m$ of the quark is then determined from the condition $M(m^2)=m$. We assume equal masses for the $u$- and $d$-quarks, so the $u$ and $d$ propagators are identical.
Finally, following the notation of Ref.~I, the vertex function $\Gamma_{\rm BS}(p_1,p_2)$ describes a $\pi^+$ coupling to an {\it outgoing\/} $u$ quark of momentum $p_1$ and an {\it incoming\/} $d$ quark of momentum $p_2$ (the same as an {\it outgoing\/} $\bar d$ quark of momentum $-p_2$), while $\overline{\Gamma}_{\rm BS}(p_1,p_2)$ describes a $\pi^+$ coupling to an {\it incoming\/} $u$ quark of momentum $p_1$ and an {\it outgoing\/} $d$ quark of momentum $p_2$ (the same as an {\it incoming\/} $\bar d$ quark of momentum $-p_2$).
Before turning to the CST formalism, we show how the second contribution to the form factor can be transformed into the first, and the two added together. To do this we need the following transformations of the vertex function and the current under charge conjugation
\begin{eqnarray}
{\cal C}\Gamma^{T}_{\rm BS}(p_1,p_2){\cal C}^{-1}&=&\Gamma_{\rm BS}(-p_2,-p_1)
\nonumber\\
{\cal C}j^{\mu {T}}(p',p){\cal C}^{-1}&=&-j^\mu(-p,-p')\, .
\label{eq:Ctrans}
\end{eqnarray}
While these relations can be derived from general principles, they also follow from the typical matrix structure of $\Gamma_{\rm BS}$ [such as $\Gamma_{\rm BS}(p_1,p_2)\sim (m-\slashed{p}_1)\gamma^5(m-\slashed{p}_2)$] or $j^\mu$ [such as $j^\mu(p',p)\sim (m-\slashed{p}')\gamma^\mu(m-\slashed{p})$]. Taking the transpose of the trace, inserting ${\cal C}{\cal C}^{-1}=1$ between the operators, and using the properties (\ref{eq:Ctrans}) converts the trace from the second term of Eq.~(\ref{eq:BSpicurrent}) into
\begin{eqnarray}
\mathrm {tr}\Big[\cdots\Big]&\to& -\mathrm {tr}\Big[\overline{\Gamma}_{\rm BS}(-k,-p_+')S(-p_+')
j^\mu (-p_+',-p_-')
\nonumber\\
&&\qquad\times S(-p_-')\Gamma_{\rm BS}(-p_-',-k) S(-k)\Big]\, .
\label{eq:tracebp}
\end{eqnarray}
Now changing $k\to -k$ under the integral, and noting that this converts $p_\pm'\to -p_\pm$, shows that the trace (\ref{eq:tracebp}) is identical to the trace in the first term of (\ref{eq:BSpicurrent}), except for the minus sign which converts the factor $-\frac13\to \frac13$. Hence the two terms are identical (except for the charges) and their sum equals the first term with the factor of $\frac23 \mathrm e$ replaced by $\mathrm e$. Note that the ability to change $k\to-k$ under the integral was essential to the argument.
This discussion can be easily extended to show that the $\pi^0$ form factor is identically zero, and that except for a sign, the $\pi^-$ form factor is identical to the $\pi^+$ form factor.
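The charge-conjugation argument above rests on the defining property of the matrix ${\cal C}$, namely ${\cal C}\gamma^{\mu T}{\cal C}^{-1}=-\gamma^\mu$, which underlies the transformations (\ref{eq:Ctrans}). As a cross-check (not part of the derivation), this property can be verified numerically in an explicit Dirac representation; the representation and the convention ${\cal C}=\mathrm i\gamma^2\gamma^0$ are our choices here, not fixed by the text:

```python
import numpy as np

# Dirac-representation gamma matrices (an explicit convention chosen for the check)
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
Z2 = np.zeros((2, 2), complex)
I2 = np.eye(2, dtype=complex)

g0 = np.block([[I2, Z2], [Z2, -I2]])
gamma = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]

# charge-conjugation matrix, C = i gamma^2 gamma^0 (a common convention)
C = 1j * gamma[2] @ gamma[0]
Cinv = np.linalg.inv(C)

# defining property: C (gamma^mu)^T C^{-1} = -gamma^mu for all mu
ok = all(np.allclose(C @ g.T @ Cinv, -g) for g in gamma)
print(ok)
```

Any other representation related by a unitary transformation would serve equally well; only the transposition property itself enters the argument in the text.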
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{BStriangleA}\vspace*{.5cm}
\includegraphics[width=0.45\textwidth]{BStriangleB}
\caption{The two triangle diagrams for the electromagnetic pion form factor. Here $P_\pm$ are the outgoing and incoming (on-shell) pion four-momenta, $q$ is the four-momentum of the virtual photon, $S$ is the dressed quark propagator, $\Gamma_{\rm BS}$ is the Bethe-Salpeter pion vertex function and $j^\mu$ is the electromagnetic off-shell quark current. The top diagram describes the interaction of the virtual photon with the $u$ quark, with the $\bar d$ quark (represented by a $d$ quark traveling backward in time with momentum $k$) as a spectator; the bottom diagram represents the interaction of the virtual photon with the $\bar d$ quark (again represented by a $d$ quark traveling backward in time) with the $u$ quark as the spectator.}\label{fig:BStriangle}
\end{center}
\end{figure}
\section{Pion form factor in the CST} \label{sec:FFandCST}
In the CST, the integral over the relative momentum of the two propagating particles is constrained by the requirement that one of the two particles must always be on shell, with contributions from terms when both particles are off-shell moved to higher order in the series of terms that define the relativistic two-body kernel. The motivation for this rearrangement of terms is that, in many examples, it can be shown that the off-shell terms (from box diagrams, for example) tend to cancel other higher order terms in the kernel (crossed box diagrams, for example) so that keeping one particle on-shell not only simplifies the equations, but also improves the convergence of the approximation to the underlying field theory.
The pion form factor in CST will therefore also involve triangle diagrams similar to the BS diagrams shown in Fig.~\ref{fig:BStriangle}, but with the internal particles constrained to their mass shell in the same way that they are constrained in the two-body bound-state equation. As shown in Ref.~I, a careful treatment of the pion bound state in the chiral limit requires a four-channel equation, with contributions from the positive and negative energy poles of both particles included. The full treatment of the triangle diagram for the $\pi^+$ form factor using the four-channel equation would therefore involve contributions from the positive- and negative-energy poles of both the $u$ and $\bar d$ quark.
In this first calculation of the pion form factor using CST, we chose to make some approximations that still preserve the important physics. To understand these approximations, study the location of the particle poles in the BS diagrams of Eq.~(\ref{eq:BSpicurrent}) and Fig.~\ref{fig:BStriangle}. First concentrate on the diagram where the photon couples to the $u$-quark and the $\bar d$-quark is spectator (top panel in Fig.~\ref{fig:BStriangle}). This diagram has six propagator poles in the complex $k_0$-plane, three of them in the lower- and three in the upper-half plane \cite{Gro83}. In the Breit frame,
where
\begin{eqnarray}
P_\pm&=&\{P_0,{\bf 0},\pm\frac12 Q\}
\nonumber\\
q&=&\{0,{\bf 0},Q\} \label{eq:BreitFrame}
\end{eqnarray}
with $P_0=\sqrt{\mu^2+\frac14Q^2}$, $\mu$ the pion mass and $Q$ the photon momentum transfer, the poles of the spectator $d$-quark are located at
$k_0=\pm E_k\mp \mathrm i \epsilon$, denoted $1^{\pm}$, where $E_k = (m^2+{\bf k}^2)^{1/2}$, and the poles of the struck $u$-quark with momenta $p_-$ and $p_+$ are at
\begin{eqnarray}
k_0&=& -P_0\pm \sqrt{m^2+{\bf k}_\perp^2+\left(k_z-\frac Q2\right)^2}\mp \mathrm i \epsilon \, ,
\nonumber\\
k_0&=&-P_0\pm \sqrt{m^2+{\bf k}_\perp^2+\left(k_z+\frac Q2\right)^2}\mp \mathrm i \epsilon \, ,
\label{eq:k0pp}
\end{eqnarray}
%
denoted $2^{\pm}$ and $3^{\pm}$, respectively. Since the square roots in the last two expressions are positive, recalling that $p_\pm=k+P_\pm$ means that $2^+$ and $3^+$ are the positive-energy poles and $2^-$ and $3^-$ are the negative-energy poles of the struck $u$-quark. The locations of these six poles in the complex $k_0$-plane depend on $m$, $\mu$, $Q$, $k_z$ and ${\bf k}_\perp=(k_x,k_y)$, and are shown in Fig.~\ref{fig:polesTriangle} for two different pion masses.
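For orientation, the six pole positions are easy to tabulate numerically. The sketch below simply encodes the formulas quoted in the text; the kinematics are illustrative only, and $\epsilon$ is kept finite purely for display, as in the figures:

```python
import numpy as np

def triangle_poles(m, mu, Q, kz, kperp, eps=1e-3):
    """k0 poles of the d-spectator triangle diagram in the Breit frame.
    eps is kept finite only for display, as in the figures."""
    Ek = np.sqrt(m**2 + kz**2 + kperp**2)
    P0 = np.sqrt(mu**2 + Q**2 / 4)
    Em = np.sqrt(m**2 + kperp**2 + (kz - Q / 2)**2)   # struck quark, momentum p_-
    Ep = np.sqrt(m**2 + kperp**2 + (kz + Q / 2)**2)   # struck quark, momentum p_+
    return {'1+': Ek - 1j * eps,       '1-': -Ek + 1j * eps,
            '2+': -P0 + Em - 1j * eps, '2-': -P0 - Em + 1j * eps,
            '3+': -P0 + Ep - 1j * eps, '3-': -P0 - Ep + 1j * eps}

# illustrative kinematics (GeV): small Q and |k|
for name, k0 in triangle_poles(m=0.308, mu=0.14, Q=0.1, kz=0.05, kperp=0.0).items():
    print(name, round(k0.real, 4), 'upper' if k0.imag > 0 else 'lower')
```

At $k_z=0$ the poles $2^\pm$ and $3^\pm$ coincide pairwise, and the spectator poles $1^\pm$ sit symmetrically about the origin, as the formulas require.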
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.45\textwidth]{TriangePoles1} \\
\vspace{0.5cm}
\includegraphics[width=0.45\textwidth]{TriangePoles2}
\caption{The locations of the six propagator poles in the complex $k_0$-plane of the diagram where the $d$-quark is spectator, shown here for both $Q$ and $|\bf k|$ small, with $m=0.308$ GeV. The top panel shows the case when $\mu=0.14$ GeV; the bottom shows the case when $\mu=0.42$ GeV. Note that large and different imaginary parts $\epsilon$ have been chosen for each pole in order to spread them out in the complex plane for better illustration, but in all cases $\epsilon\to0$ is implied.}\label{fig:polesTriangle}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.45\textwidth]{TriangePoles3}
\\
\vspace{0.5cm}
\includegraphics[width=0.45\textwidth]{TriangePoles4}
\caption{The locations of the same six propagator poles shown in Fig.~\ref{fig:polesTriangle}, with $m=0.308$ GeV, $\mu=0.14$ GeV, and ${k}_\perp=0$, but with $Q$ large. The top panel shows the case when $k_z\lesssim Q/2$ and the poles $1^-$ and $2^+$ pinch;
the lower panel shows the case when $k_z\gtrsim Q/2$ and the poles $1^-$ and $2^-$ get close to each other. }\label{fig:polesTriangle3}
\end{center}
\end{figure}
Before proceeding further, recall that the masses in the denominators of the propagators are not fixed, but are functions of the four-momenta, so that, for example, the denominator of the spectator propagator is $M^2(k^2)-k^2$ not $m^2-k^2$. {\it At the pole\/}, however, the mass condition $M(m^2)=m$ holds, so that the location and movement of the poles can be computed just as if the masses were fixed.
The full four-channel CST equation requires averaging the contributions from all of the propagator poles in the upper and lower half planes. Study of the bottom panel of Fig.~\ref{fig:polesTriangle} shows that, when $\mu$ is comparable to the dressed quark mass $m$, the largest contribution will come from the $1^-$ pole. For small $Q$ (and small $|\bf k|$) it is close to the poles at $2^+$ and $3^+$ in the lower half plane, and at the same time far away from the other poles. This is the on-shell contribution of the spectator, with the physical energy of the outgoing $\bar d$ antiquark in its {\it positive\/}-energy state (because the incoming $d$ quark is in its {\it negative\/} energy state). This approximation, used previously in the study of deuteron form factors, is known as the relativistic impulse approximation (RIA)~\cite{Gross:1965zz,Arn77,Arn80,VO95}.
The top panel of Fig.~\ref{fig:polesTriangle} shows that for small $\mu$ (and also small $Q$) all of the poles in the upper-half plane are close to each other (and will coalesce into a triple pole when both $Q=0$ and $\mu=0$). The requirement that the limit $\mu\to0$ be described correctly is precisely what led to the need for a four-channel CST equation in the first place, and these additional channels, included in contributions from the $2^-$ and $3^-$ poles, are also needed for a correct description of the form factor in the limit when {\it both\/} $\mu$ and $Q$ are small.
In this first calculation, we will use the RIA, and hence we cannot expect to be able to correctly describe the form factor in the limit when both $\mu$ and $Q$ are small, where the neglected contributions from the $2^-$ and $3^-$ poles cannot be ignored (in fact, the RIA becomes singular when both $Q$ and $\mu$ tend to zero).
The case when $Q$ is large poses an interesting issue. In this case the position of the poles is quite insensitive to the value of $\mu$, and therefore the RIA describes the form factor equally well for both large and small $\mu$.
However, as $k_z\to Q/2$, the integrand becomes large, with the precise role of the poles depending on whether $k_z$ is less than or greater than $Q/2$ [recall Eq.~(\ref{eq:k0pp})]. If $k_z
\lesssim Q/2$ the poles $1^-$ and $2^+$ pinch, as shown in the top panel of Fig.~\ref{fig:polesTriangle3}, while if $k_z\gtrsim Q/2$, the poles $1^-$ and $2^-$ are close together, as illustrated in the bottom panel of Fig.~\ref{fig:polesTriangle3}. In both cases it looks like the integral could be singular, but it remains finite (and small).
Briefly, to see what is happening, it is necessary to examine the behavior of the propagator with momentum $p_-$ when the residue of the pole $1^-$ is evaluated, i.e., when the spectator is on its negative-energy mass shell, with $k_0=-E_k+i\epsilon$.
The relevant integral is
\begin{eqnarray}
I&\sim& \int dk_z\,f(k_z) [m^2 - p_-^2]^{-1}
\nonumber\\
&=&\int dk_z\, f(k_z) [-\mu^2+2P_0E_k-Qk_z]^{-1} \, ,
\end{eqnarray}
where we have approximated $M^2(p_-^2)\simeq m^2$ because we are interested in the kinematics where $p_-^2$ is close to $m^2$, and $f(k_z)$ is the remainder of the integrand which provides the needed convergence when $k_z\to \infty$.
As $Q$ becomes very large, this integral peaks at very large $k_z$, but is still finite. To estimate it, expand the factors
\begin{eqnarray}
\lim_{Q\to\infty}I &\to&\int dk_z\frac{f(k_z)}{Qk_z} \Bigg[\Big(1+\frac{2\mu^2}{Q^2}\Big)\Big(1+\frac{E_\perp^2}{2k_z^2}\Big)-1\Bigg]^{-1}
\nonumber\\
&\simeq&\int dk_z\, f(k_z)\Bigg[\frac{2\mu^2k_z}{Q}+\frac{E_\perp^2Q}{2k_z} -\mu^2\Bigg]^{-1} \, ,
\label{eq:limI}
\end{eqnarray}
where $E_\perp=\sqrt{m^2+k_\perp^2}$.
This shows that the integrand peaks at $k_z=E_\perp Q/(2\mu)$, and it is finite there provided that $\mu < 2m$. Because the peak is located at large values of $k_z$ where the remainder of the integrand, $f(k_z)$, is already very small, the integral is small as well.
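The peak position can be checked directly: by the arithmetic-geometric mean inequality, the two $k_z$-dependent terms in the bracket of (\ref{eq:limI}) balance at $k_z=E_\perp Q/(2\mu)$, where the bracket takes its minimum value $\mu(2E_\perp-\mu)$, positive as long as $\mu<2E_\perp$. A minimal numerical confirmation (the parameter values are illustrative):

```python
import numpy as np

m_chi, mu, Q, kperp = 0.308, 0.14, 10.0, 0.2   # GeV; illustrative values
E_perp = np.sqrt(m_chi**2 + kperp**2)

# bracketed denominator of the large-Q estimate, Eq. (limI)
kz = np.linspace(1.0, 50.0, 500_000)
denom = 2 * mu**2 * kz / Q + E_perp**2 * Q / (2 * kz) - mu**2

kz_peak = kz[np.argmin(denom)]      # integrand peaks where the denominator is minimal
kz_pred = E_perp * Q / (2 * mu)     # predicted peak location E_perp Q / (2 mu)
print(kz_peak, kz_pred, denom.min())
```

The grid minimum reproduces the predicted location, and the minimum of the denominator stays strictly positive, so the integrand is large but finite there, as stated.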
The rapid peaking at high $Q$ plays a crucial role in giving the correct asymptotic behavior of the form factor, as will be discussed in Sec.~\ref{sec:highQ2limit} below. Examination of the other propagator in $p_+^2$ shows a similar behavior, but at negative $k_z$.
To summarize, for small $Q^2$ the RIA, by retaining only the spectator pole contribution $1^-$, is a good approximation to the CST triangle diagram only for sufficiently large pion masses. For large $Q^2$, on the other hand, the locations of the poles are insensitive to $\mu$ and therefore the RIA is good not only for large but also for small values of $\mu$ (the physical pion mass of $\mu=0.14$ GeV, for example) and even for vanishing pion mass in the chiral limit.
This concludes our discussion of the RIA contribution from the spectator $d$ quark.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.45\textwidth]{RIAtriangleA}\vspace*{.5cm}
\includegraphics[width=0.45\textwidth]{RIAtriangleB}
\caption{The two contributions to the $\pi^+$ form factor in the RIA.}\label{fig:polesTriangleRIA}
\end{center}
\end{figure}
Now we turn to the RIA contribution from the diagram where the $u$-quark is the spectator. The locations of the poles can be analyzed in the same way as in the first case; this will not be discussed in detail here. In essence, it is now the positive-energy pole of the spectator $u$-quark (in the lower-half $k_0$-plane) that plays the same role as the spectator pole from the $d$-quark discussed above. This contribution is represented diagrammatically in the lower panel of Fig.~\ref{fig:polesTriangleRIA}.
The diagram shown in the lower panel of Fig.~\ref{fig:polesTriangleRIA} can be transformed into the expression for the upper panel, except for a different charge factor.
As in the BS case, invariance under the transformation $\hat k \rightarrow -\hat k$, where $\hat k=\{E_k,{\bf k}\}$ is the on-shell spectator quark momentum, is needed for this transformation, and it is possible because the first contribution is obtained from the \emph{negative-energy} pole contribution of the spectator $d$-quark of the upper-half plane and the second from the \emph{positive-energy} pole contribution of the spectator $u$-quark of the lower-half plane. The first fixes $k_0=-E_k$ and the second $k_0=E_k$, and together with a change of the integration variable ${\bf k}\to -{\bf k}$ give the symmetry needed to relate the contributions from the $u$ and the $d$ spectators.
Adding the two contributions yields the $\pi^+$ form factor in RIA,
\begin{eqnarray}\label{eq:picurrentA}
J_{\rm RIA}^\mu(P_+,P_-)
&=&\mathrm e\,\int_k\,\mathrm {tr}\Big[
\bar\Gamma (-\hat k,p_+) S(p_+) j^\mu (p_+,p_-)\nonumber\\&&
\times S(p_-) \Gamma(p_-, -\hat k) \Lambda(-\hat k)\Big] \, .
\end{eqnarray}
Here, $p_\pm=P_\pm-\hat k$ are the off-shell quark momenta, $\Gamma(p,-\hat k)$ is the CST pion vertex function, $\Lambda(\hat k)$ is the on-shell projector
\begin{eqnarray}
\Lambda(\hat k)=\frac{m+\slashed{\hat k}}{2m} \, ,
\end{eqnarray}
and
the shorthand
\begin{eqnarray}
\int_k\equiv\int\frac{\mathrm d^3k}{(2\pi)^3}\frac{m}{E_k} \label{eq:kint}
\end{eqnarray}
is used for the momentum integration.
\section{Ingredients}
In this section the terms needed for the evaluation of the trace in Eq.~(\ref{eq:picurrentA}) are assembled. In the next section the trace is evaluated and its behavior
as $Q\to\infty$ is examined. Numerical results for the form factor are presented in Sec.~\ref{sec:Resuts}.
\subsection{Pion vertex function}
The pion vertex function $\Gamma(p,k)$ is required for the calculation of the trace in Eq.~(\ref{eq:picurrentA}). The complete CST pion vertex function is obtained by solving the full four-channel pion bound-state equation of Ref.~I.
This more ambitious task will be the subject of future work. Here we use an approximate pion vertex function that is an \emph{off-shell extension} of the solution near the chiral limit. Since the linear confining interaction does not contribute to the pseudoscalar bound-state equation in the chiral limit, as explained in Ref.~I, this estimate will be made using the vector interaction only, with the general form
\begin{eqnarray}
{\cal V}_V=\frac14 V_C\,\gamma^\mu\otimes\gamma_\mu \, ,
\end{eqnarray}
where the specific form of the scalar function $V_C$ will be given below.
The CST equation for the bound state (but with both external particles off shell) was already derived in Ref.~I; here we present an alternative derivation starting from the BS bound state equation written in the rest frame in the chiral limit (where $P=0$)
\begin{eqnarray}
\Gamma(p,p)=G(p^2)\gamma^5=i\int_{\rm poles}\frac{d^4k}{(2\pi)^4} {V}_C G(k^2) \frac{N}{D}\qquad
\label{eq:GinBSform}
\end{eqnarray}
where $G(p^2)$ is a scalar function, and the notation on the integral reminds us that the $k_0$ part of the integral is to be evaluated keeping {\it only\/} the $k_0$ poles from the quark propagators (and forsaking all others), and also anticipates the result in the chiral limit, where only the $\gamma^5$ structure will contribute to $\Gamma$. The numerator in the chiral limit therefore becomes
\begin{eqnarray}
N&=&\frac14 \gamma^\mu (M_\chi(k^2)+\slashed{k})\gamma^5(M_\chi(k^2)+\slashed{k})\gamma_\mu
\nonumber\\
&=&-(M_\chi^2(k^2)-k^2)\gamma^5
\end{eqnarray}
where $M_\chi(k^2)$ is the running mass function in the chiral limit (i.e. with $m_0=0$). The denominator is
\begin{eqnarray}
D=(M^2_\chi(k^2)-k^2-i\epsilon)^2\, .
\end{eqnarray}
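The reduction of the numerator uses $(M_\chi+\slashed k)\gamma^5(M_\chi+\slashed k)=\gamma^5(M_\chi^2-k^2)$ together with $\gamma^\mu\gamma^5\gamma_\mu=-4\gamma^5$. A numerical spot-check of this Dirac algebra is straightforward; the explicit representation and the values of $M$ and $k^\mu$ below are arbitrary choices made only for the test:

```python
import numpy as np

# Dirac representation (a convention choice for the check)
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
Z2 = np.zeros((2, 2), complex)
I2 = np.eye(2, dtype=complex)
g0 = np.block([[I2, Z2], [Z2, -I2]])
gamma = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]
g5 = 1j * gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]
eta = np.array([1.0, -1.0, -1.0, -1.0])          # metric signs (+,-,-,-)

M = 0.3                                          # stand-in for M_chi(k^2) at fixed k^2
k = np.array([0.7, 0.1, -0.2, 0.3])              # arbitrary (off-shell) k^mu
kslash = sum(eta[m] * k[m] * gamma[m] for m in range(4))
k2 = float(np.sum(eta * k * k))

A = M * np.eye(4) + kslash
lhs = 0.25 * sum(eta[m] * gamma[m] @ A @ g5 @ A @ gamma[m] for m in range(4))
rhs = -(M**2 - k2) * g5
print(np.allclose(lhs, rhs))
```

Since the mass function is a scalar at fixed $k^2$, holding $M$ constant in the check loses nothing.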
To obtain the CST equation we are instructed to take the poles of the propagators {\it only\/}, which, after the cancellation of the factor $M^2_\chi(k^2)-k^2$, are single poles at $k_0=\pm E_k =\pm\sqrt{m_\chi^2+{\bf k}^2}$ with $m_\chi$ the root of the mass equation in the chiral limit, $m_\chi=M_\chi(m_\chi^2)$. Now that one of the initial quarks is on-shell, it is possible to specify the scalar function $V_C$.
As discussed in Ref.~I, in the general case when the total four-momentum $P$ may not be zero,
\begin{eqnarray}
{V}_C(p_1,p_2;k_1,\hat k_2)&=& 2C\, \frac{E_k}{m}\, (2\pi)^3\delta^3(p-k)
\nonumber\\&&\times
h(p_1^2)h(p_2^2) h(k_1^2)h(m^2) \, ,\qquad
\label{eq:VC}
\end{eqnarray}
where $p_1=p+P/2$, $p_2=p-P/2$ (and similarly for $k_1$ and $k_2$),
$C$ is a constant, $h$ is the strong form factor that models the quark-gluon vertex, and in this example $\hat k_2^2=m^2$, with $m$ the dressed quark mass. The chiral limit of (\ref {eq:VC}) follows by setting $m\to m_\chi$, $P\to0$, and $h(m_\chi^2)=1$.
Returning to Eq.~(\ref{eq:GinBSform}), extracting the $\gamma^5$, and using the chiral limit of (\ref{eq:VC}) gives
\begin{eqnarray}
G(p^2)&=&\frac{C}{m_\chi}\, h^2(p^2)\int d^3k\, \delta^3(p-k)G(\hat k^2)
\nonumber\\
&=&\frac{C}{m_\chi}\, h^2(p^2) G_0 \label{eq:Gchiral}
\end{eqnarray}
where $\hat k=\{E_k,{\bf k}\}$ is the value of the four-vector $k$ at the spectator pole, and the second line
employs the definition of the chiral limit of the vertex function, $G(\hat k^2)\equiv G_0$. Note that placing the external particle on-shell gives a consistent equation only if $C=m_\chi$, which is another way of showing the constraint on $C$ in the chiral limit that was discussed in Ref.~I.
The result (\ref{eq:Gchiral}) suggests that, near the chiral limit, the vertex functions with one particle on-shell should be well approximated by
\begin{eqnarray}
\Gamma(p_1,\hat p_2)&=&\gamma^5 h(p_1^2) G_0
\nonumber\\
\Gamma(\hat p_1, p_2)&=&\gamma^5 h(p_2^2) G_0 \,.
\label{eq:redvertex}
\end{eqnarray}
The validity of this approximation depends on the observation that the most rapid variation of the scalar functions that define $\Gamma(p_1,p_2)$ is through its dependence on the strong form factors $h$.
\subsection{Off-shell quark current}
In order to calculate a conserved current for processes involving bound states, we employ the general framework introduced by Riska and Gross \cite{Gro87}. Here the strong form factors ($h$ in this paper) attached to the interaction vertices are moved to the propagators connecting neighboring vertices, where they provide an additional modification of the dressed quark propagators connecting two bare vertices. Consistency then requires that these form factors also be \lq \lq factored out'' of the quark current, leading to the introduction of a
\emph{reduced} or \emph{bare} electromagnetic current for the off-shell quarks defined by
\begin{eqnarray}
j^\mu_R (p',p)=h^{-1}(p'^2)j^{\mu} (p',p)h^{-1}(p^2)\,.
\end{eqnarray}
In order to ensure current conservation~\cite{Gro87,Gro96}, $j^\mu_R$ must satisfy the Ward-Takahashi (WT) identity:
\begin{eqnarray}
q^\mu j_{R\mu } (p',p)=\widetilde S^{-1}(p)-\widetilde S^{-1}(p')\,,\label{eq:WTI}
\end{eqnarray}
where $\widetilde S(p)$ is the dressed quark propagator multiplied by the square of the quark form factor
\begin{eqnarray}
\widetilde S(p)=h^2(p^2) S(p)\,.
\end{eqnarray}
The simplest form of the reduced current that can satisfy the WT identity (\ref{eq:WTI}) for a dressed propagator with a {\it momentum-dependent mass function\/} is a generalization of the current previously introduced in Ref.~\cite{Gro96}
\begin{eqnarray}
&& j_{R}^\mu (p',p)
\nonumber\\&&
\quad=f(p',p)
\bigg[{\cal G}_1^\mu(q)
+\kappa F_2(q^2)\frac{\mathrm i \sigma^{\mu\nu}q_{\nu}}{2m}\bigg]
\nonumber\\&&\qquad+ \delta(p',p)\Lambda(-p')\,{\cal G}_4^\mu(q)+\delta(p,p'){\cal G}_4^\mu(q)\,\Lambda(-p)
\nonumber\\&&\qquad+ g(p',p)\Lambda(-p')\,{\cal G}_3^\mu(q)\,\Lambda(-p)\,,
\end{eqnarray}
where $\Lambda(-p)=(M(p^2)-\slashed{p})/2M(p^2)$, and (for $i=1,3,4$)
\begin{eqnarray}
{\cal G}^\mu_i(q)\equiv \Big(F_i(q^2)-1\Big)\widetilde\gamma^\mu+\gamma^\mu\, .
\end{eqnarray}
Here the transverse gamma matrix, $\widetilde \gamma^\mu=\gamma^\mu-q^\mu \slashed{q}/q^2$, makes no contribution to the WT identity, and the $F_i(q^2)$ (with $i=1, \ldots,4$) are dressed quark form factors (including two new off-shell form factors $F_3$ and $F_4$). All of the quark form factors are constrained by $F_i(0)=1$, and $\kappa$ is the anomalous magnetic moment of the quark.
The functions $f$, $g$, and $\delta$ are fixed by the requirement that $j_{R}^\mu$ satisfies the WT identity~(\ref{eq:WTI}). Using the notation $h=h(p^2)$, $h'=h(p'^2)$, $M=M(p^2)$, and $M'=M(p'^2)$, with a propagator
\begin{eqnarray}
\widetilde S^{-1}(p)=\frac{M-\slashed{p}}{h^2}\, ,
\end{eqnarray}
a short calculation yields
\begin{eqnarray}
g(p',p)&=&\frac{4MM'}{h^2h'^2}\frac{(h^2-h'^2)}{(p'^2-p^2)} \\
\delta(p',p)&=&\frac{2M'}{h'^2}\frac{(M'-M)}{(p'^2-p^2)} \\
f(p',p)&=&\frac{M^2-p^2}{h^2(p'^2-p^2)}-\frac{M'^2-p'^2}{h'^2(p'^2-p^2)}\,.
\end{eqnarray}
%
Note that, if $M'=M$, $\delta$ vanishes and $f$ and $g$ reduce to results previously given in the literature. When contracted into a conserved current, or a physical photon, the terms proportional to $q^\mu$ vanish, reducing ${\cal G}_i^\mu(q)$ to
\begin{eqnarray}
&&{\cal G}_i^\mu(q)\to F_i(q^2)\gamma^\mu\,.
\end{eqnarray}
The four quark form factors $F_i$ can be calculated in the CST, but this exercise will be saved for another day. For now we will use the quark current in the chiral limit, where the mass function reduces to \cite{mass_function_paper}
\begin{eqnarray}
M_\chi(p^2)=m_\chi h^2(p^2) \, ,\label{eq:chiralmf}
\end{eqnarray}
and, as appropriate for a point-like bare quark, $\kappa=0$ and all form factors are set to unity. This simplifies the off-shell structure functions
\begin{eqnarray}
g_\chi(p',p)&=&-2\delta_\chi(p',p)=\frac{4m_\chi^2(h^2-h'^2)}{(p'^2-p^2)}\\
f_\chi(p',p)&=&\frac14g_\chi(p',p)+\frac{p'^2h^2-p^2h'^2}{h^2h'^2(p'^2-p^2)}
\nonumber\\
&=&\frac{M_\chi^2-p^2}{h^2(p'^2-p^2)}-\frac{M'^2_\chi-p'^2}{h'^2(p'^2-p^2)}
\end{eqnarray}
and reduces the current to
\begin{eqnarray}
j^\mu_{R_\chi}(p',p)&=&\frac{[\slashed{p}'\gamma^\mu\slashed{p}+p'^2\gamma^\mu]}{h'^2(p'^2-p^2)} -\frac{[\slashed{p}'\gamma^\mu\slashed{p} +p^2\gamma^\mu]}{h^2(p'^2-p^2)}
\nonumber\\
&=&\gamma^\mu\frac{(h^2p'^2-h'^2p^2)}{h^2h'^2(p'^2-p^2)} +\frac{\slashed{p}'\gamma^\mu\slashed{p}}{h^2h'^2}\frac{(h^2-h'^2)}{(p'^2-p^2)} \, .\qquad
\label{eq:reducedcurr}
\end{eqnarray}
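That the general WT-identity solutions for $f$, $g$, and $\delta$ collapse to the chiral-limit forms once $M(p^2)=m_\chi h^2(p^2)$ can be confirmed symbolically. A sketch, treating $h^2$ and $h'^2$ as independent variables:

```python
import sympy as sp

p2, pp2, mchi, h2, hp2 = sp.symbols('p2 pp2 m_chi h2 hp2', positive=True)
D = pp2 - p2
M, Mp = mchi * h2, mchi * hp2          # chiral-limit mass function M = m_chi h^2

# general solutions of the WT identity
g     = (4 * M * Mp / (h2 * hp2)) * (h2 - hp2) / D
delta = (2 * Mp / hp2) * (Mp - M) / D
f     = (M**2 - p2) / (h2 * D) - (Mp**2 - pp2) / (hp2 * D)

# chiral-limit forms quoted in the text: g_chi = -2 delta_chi, and f_chi
g_chi = 4 * mchi**2 * (h2 - hp2) / D
f_chi = sp.Rational(1, 4) * g_chi + (pp2 * h2 - p2 * hp2) / (h2 * hp2 * D)

print(sp.simplify(g - g_chi), sp.simplify(delta + g_chi / 2), sp.simplify(f - f_chi))
```

All three differences simplify to zero, confirming the reduction algebraically rather than by hand.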
It is interesting to compare this with the Ball-Chiu (BC) \cite{BC80} current used by Maris and Tandy~\cite{MT00}. In our notation, denoting $p'+p=2P$, their current is
\begin{eqnarray}
j^\mu_{\rm BC}(p',p)&=&\gamma^\mu \frac{h^2+h'^2}{2h^2h'^2}
+\frac{2\slashed{P}P^\mu}{h^2h'^2}\frac{h^2-h'^2}{p'^2-p^2}\, .
\end{eqnarray}
The difference between these two currents is
\begin{eqnarray}
j^\mu_{R_\chi}(p',p)-j^\mu_{\rm BC}(p',p)=\frac{X^\mu }{2h'^2h^2}\frac{(h'^2-h^2)}{(p'^2-p^2)}
\end{eqnarray}
where $X^\mu$ is purely transverse
\begin{eqnarray}
X^\mu=\slashed{p}'\gamma^\mu\slashed{p}=\slashed{p}\gamma^\mu\slashed{p}'-\slashed{q}q^\mu+q^2\gamma^\mu\, .\qquad
\end{eqnarray}
The fact that $X^\mu\ne0$ is a demonstration that the current cannot be uniquely determined by the WT identity alone. It was only after we derived our current that we became aware of the BC current used by Maris and Tandy. In fact, Maris and Tandy used this freedom to add a transverse $\rho$ contribution to their current. In the absence of a dynamical calculation, we know of no way to determine these transverse contributions.
In any case, the quark current can be computed from an integral equation that sums the $q\bar q$ interaction to all orders, and includes automatically contributions from the $\rho, \rho', \cdots$ tower of vector meson states. Since our formalism can be applied equally well to the time-like region where these states live, this will be explored in the near future.
\section{Final steps}
\subsection{Reduction of the current}
Substituting (\ref{eq:redvertex}) and (\ref{eq:reducedcurr}) into the pion current (\ref{eq:picurrentA}) gives
\begin{eqnarray}
\label{eq:pioncurrent3}
&& F_\pi(Q^2)\, 2P_0
\nonumber\\&&
\quad=- G_0^2\,\int_k
\mathrm {tr} \Big[\widetilde S(p_+) j_{R_\chi}^0 (p_+,p_-)\widetilde S(p_-)
\Lambda (\hat k)\Big], \qquad
\end{eqnarray}
where we keep the pion mass $\mu\ne0$, but take the chiral limit elsewhere so that the on-shell quark has mass $m_\chi$ (so that $E_k$ is now $\sqrt{m_\chi^2+{\bf k}^2}$), the $\gamma^5$s have been removed,
and we specialize to the Breit frame~(\ref{eq:BreitFrame}) where
\begin{eqnarray}
p^2_\pm&=&\mu^2+m_\chi^2-2P_0E_k\pm k_zQ\,.
\end{eqnarray}
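As a consistency check, the expression for $p_\pm^2$ follows directly from $p_\pm=P_\pm-\hat k$ and the Breit-frame vectors (\ref{eq:BreitFrame}); a minimal numerical verification with the metric $(+,-,-,-)$ and arbitrary illustrative momenta:

```python
import numpy as np

def dot(a, b):                       # Minkowski product, metric (+,-,-,-)
    return a[0] * b[0] - a[1:] @ b[1:]

m_chi, mu, Q = 0.308, 0.14, 1.5      # GeV; illustrative values
kvec = np.array([0.1, -0.2, 0.25])
Ek = np.sqrt(m_chi**2 + kvec @ kvec)
P0 = np.sqrt(mu**2 + Q**2 / 4)

khat = np.array([Ek, *kvec])                      # on-shell spectator momentum
Pp = np.array([P0, 0.0, 0.0,  Q / 2])             # P_+
Pm = np.array([P0, 0.0, 0.0, -Q / 2])             # P_-

for P, sign in ((Pp, +1), (Pm, -1)):
    p = P - khat
    formula = mu**2 + m_chi**2 - 2 * P0 * Ek + sign * kvec[2] * Q
    print(np.isclose(dot(p, p), formula))
```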
Evaluating the trace gives
\begin{eqnarray}
F_\pi(Q^2)=-\frac{G^2_0}{P_0}\int_k \frac{{\cal N}}{\mathcal D} \, ,
\label{eq:FFint}
\end{eqnarray}
where, using the notation $M_\pm=M_\chi(p_\pm^2)$ and $h_\pm=h_\chi(p_\pm^2)$,
\begin{eqnarray}
&&{\cal N}=-2h_\Delta\Bigg\{4P_0^2E_k^3-2P_0E_k^2(M_+M_-+3 \bar m^2 -2M_S)
\nonumber\\
&&+E_k\Big[2 \bar m^2 (M_+M_- + \bar m^2)-2m_\chi M_S(2P_0^2+\bar m^2)
\nonumber\\&&
\qquad+4m_\chi^2P_0^2-k_z^2Q^2+m_\chi k_z Q M_\Delta\Big]
\nonumber\\&&
-m_\chi P_0\Big[2m_\chi(M_+M_-+\bar m^2)-2\bar m^2 M_S +k_z Q M_\Delta \Big]\Bigg\}
\nonumber\\
&&+2 k_z Q \, h_S \Big[m_\chi P_0(M_S-2m_\chi)
\nonumber\\&&
\qquad\qquad+E_k(M_+M_- +\bar m^2-m_\chi M_S)\Big]
\end{eqnarray}
with $h_S=h_++h_-$, $h_\Delta=h_+-h_-$, $M_S=M_++M_-$, $M_{\Delta}=M_+-M_-$, and $\bar m^2=m_\chi^2+\mu^2$. The denominator is
\begin{eqnarray}
\mathcal D&=&2k_zQ(M_+^2-p_+^2)(M_-^2-p_-^2) \, .
\end{eqnarray}
To study the convergence of these integrals, look at the limit as $k$ becomes very large (in this section we use the notation $k \equiv |{\bf k}|$). In this limit, the running quark masses can be neglected compared to factors of $k$, giving
\begin{eqnarray}
{\cal N}&\stackrel{k\gg m_\chi}{\longrightarrow}&2k(h_+^2-h_-^2)\Big[k_z^2 Q^2-4k^2P_0^2\Big]
\nonumber\\
\mathcal D&\stackrel{k\gg m_\chi}{\longrightarrow}& 2k_zQ(4P_0^2k^2-k_z^2Q^2)\, .
\end{eqnarray}
Ignoring details, the most divergent term therefore behaves like
\begin{eqnarray}
\int_k\frac{{\cal N}}{\mathcal D}&\simeq&\int_k h^2 \, ,
\end{eqnarray}
guaranteeing that the integrals will converge if
\begin{eqnarray}
\lim_{k\to\infty}k^2 h^2=0\, . \label{eq:conv}
\end{eqnarray}
This requires that $h$ approach zero faster than $k^{-1}$.
\subsection{High-$Q^2$ limit}
\label{sec:highQ2limit}
The high-$Q^2$ limit of the form factor is particularly interesting. In preparation for this discussion, use the symmetry of the integrand ${\cal F}(k_z)$ of (\ref{eq:FFint}) under $k_z\to -k_z$ to convert the $k_z$ integration to
\begin{eqnarray}
\int_{-\infty}^\infty \, dk_z {\cal F}(k_z)=2\int_{0}^\infty dk_z \,{\cal F}(k_z)\, .
\end{eqnarray}
Next, study the arguments $p_+^2$ and $p_-^2$ in the limit of large $Q$. Keeping terms to order $1/Q$ in $p_+^2$ (which will turn out to be sufficient), gives
\begin{eqnarray}
p_+^2&\to& \mu^2+m_\chi^2-Q(E_k-k_z)-\frac{2\mu^2}{Q}E_k
\nonumber\\
p_-^2&\to& -Q(E_k+k_z)\, . \label{eq:ppmQ}
\end{eqnarray}
Note that, as $Q \rightarrow \infty$, $p_-^2\to -\infty$ for all values of $k_z$, and hence only the leading term is needed. The functions $h_-$ and $M_-$ vanish in that limit. In contrast, the behavior of $p_+^2$ at large $Q$ depends on the size of $k_z$. The approximate formula (\ref{eq:ppmQ}) shows that $p_+^2\to -\infty$ at the limits of the $k_z$ integration. We saw already in the discussion of Eq.~(\ref{eq:limI}) that the integrand only deviates significantly from zero in the vicinity of the critical value
\begin{eqnarray}
k_{z0}\simeq \frac{Q E_\perp}{2\mu}
\end{eqnarray}
for which $p_+^2$ reaches its maximum, and where $E_\perp=\sqrt{m_\chi^2+k_\perp^2}$. For large, finite $Q$, this is a large value of $k_z$ that cannot be ignored. At this critical point,
\begin{eqnarray}
E_k&\to&\frac{QE_\perp}{2\mu}\Big(1+\frac{2\mu^2}{Q^2}\Big) \, ,
\nonumber\\
p_+^2&\to& p_c^2=\mu^2+m_\chi^2- 2 \mu E_\perp\, .
\end{eqnarray}
To understand the integral it is convenient to introduce the momentum fraction
\begin{eqnarray}
x\equiv \frac{E_k-k_z}{2E_k}\, .
\end{eqnarray}
The $k_z$ integration will be replaced by an integration over $x$ and since $k_z>0$ the $x$-integration varies between 0 and $\frac12$, with the Jacobian
\begin{eqnarray}
\int_0^\infty \frac{dk_z}{E_k}=\int_0^\frac12 \frac{dx}{2x(1-x)}\, .
\end{eqnarray}
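This change of variables can be checked numerically: integrating $dk_z/E_k$ up to a cutoff and comparing with the $x$-integral over the image interval gives the same result. A minimal sketch (the value of $E_\perp$ is illustrative, not taken from the fit):

```python
import numpy as np

def trapz(y, x):
    # simple trapezoidal rule (avoids NumPy version differences)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

E_perp = 0.4                      # GeV, illustrative value
K = 5.0                           # cutoff for the k_z integration

# left-hand side: int_0^K dk_z / E_k with E_k = sqrt(E_perp^2 + k_z^2)
kz = np.linspace(0.0, K, 200001)
lhs = trapz(1.0 / np.sqrt(E_perp**2 + kz**2), kz)

# right-hand side: int_{x(K)}^{1/2} dx / (2 x (1-x)),
# with x = (E_k - k_z)/(2 E_k) evaluated at k_z = K
E_K = np.sqrt(E_perp**2 + K**2)
xK = (E_K - K) / (2.0 * E_K)
x = np.linspace(xK, 0.5, 200001)
rhs = trapz(1.0 / (2.0 * x * (1.0 - x)), x)

print(abs(lhs - rhs) < 1e-3)  # -> True
```

Both sides equal $\ln\big((E_K+K)/E_\perp\big)$ analytically, so the two quadratures agree to the accuracy of the grid.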
In terms of this variable,
\begin{eqnarray}
E_k&=&\frac{E_\perp}{2\sqrt{x(1-x)}}
\nonumber\\
k_z&=&\frac{E_\perp(1-2x)}{2\sqrt{x(1-x)}}
\nonumber\\
p_+^2&\to&\mu^2+m_\chi^2-\frac{Q E_\perp}{\sqrt{x(1-x)}}\Big(x+\frac{\mu^2}{Q^2}\Big)\, .
\end{eqnarray}
The last expression displays clearly how the non-leading term is needed to give the correct limit $p_+^2\to-\infty$ as $x\to0$, and that $p_+^2$ is finite (and hence $h_+^2$ large) only in the limited region of small $x\sim \mu^2/Q^2$. Hence we may assume $x\ll1$ and write
\begin{eqnarray}
p_+^2\to \mu^2+m_\chi^2 -\mu E_\perp\Big(\sqrt{y}+\frac1{\sqrt{y}}\Big)
\end{eqnarray}
with $y=x (Q/\mu)^2$. In terms of the variable $y$, the integrand peaks near $y\sim1$, as shown in Fig.~\ref{fig:integrand_peak}.
Evaluation of the form factor at high $Q$ is therefore ideally suited to a peaking approximation, with the slowly varying part of the integral evaluated at the peak.
Fig.~\ref{fig:integrand_peak} compares $h_+^2$ with the integrand, $\mathcal N/\mathcal D$, scaled by a constant factor. Both curves lie on top of each other showing that the peaking approximation works very well.
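The peak location is easy to confirm directly: since $\sqrt{y}+1/\sqrt{y}$ attains its minimum at $y=1$, the large-$Q^2$ form of $p_+^2$ above is maximized exactly there, and with it $h_+^2$. A quick numerical sketch (the values of $\mu$, $m_\chi$ and $k_\perp$ are illustrative, not fitted):

```python
import numpy as np

# Large-Q^2 form of p_+^2 in the variable y = x (Q/mu)^2; the integrand h_+^2
# is largest where p_+^2 is largest.  Parameter values are illustrative only.
mu, m_chi, k_perp = 0.42, 0.31, 0.2          # GeV (assumed values)
E_perp = np.sqrt(m_chi**2 + k_perp**2)

def p_plus_sq(y):
    return mu**2 + m_chi**2 - mu * E_perp * (np.sqrt(y) + 1.0 / np.sqrt(y))

y = np.linspace(0.01, 10.0, 100001)
y_peak = float(y[np.argmax(p_plus_sq(y))])
print(round(y_peak, 2))  # -> 1.0
```

The maximum sits at $y=1$ independently of the parameter values, since they only rescale the $\sqrt{y}+1/\sqrt{y}$ term.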
With these approximations, the form factor at large $Q$ becomes
\begin{eqnarray}
F_\pi(Q^2)\stackrel{Q^2\gg\mu^2}{\simeq}-\frac{2G_0^2}{Q}\int_{k_\perp}\int_0^\infty \frac{dy}{y} h^2(p_+^2) \Big[\frac{{\cal \tilde N}}{\mathcal D}\Big]_{\rm peak}\, .\qquad
\end{eqnarray}
Evaluation of the terms at the peak gives
\begin{eqnarray}
{\cal \tilde N}&\to&-Q^3\frac{2 E_\perp}{\mu^2}\Big( E_\perp^2 \mu-2 E_\perp (m_\chi^2-m_\chi M_++\mu^2)\nonumber\\&&+2 m_\chi \mu (m_\chi-M_+)\Big)
\nonumber\\
\mathcal D&\to& Q^4 \frac{E_\perp^2}{\mu^2}\Big(M_+^2-\mu^2-m_\chi^2+2\mu E_\perp\Big)\, .
\end{eqnarray}
Hence the form factor falls like $Q^{-2}$ at large $Q$, with the coefficient independent of the detailed structure of the strong form factor $h$.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.45\textwidth]{peak}
\caption{(Color online) ${\cal N}/\mathcal D$ times a constant (black solid line) compared with $h_+^2$ (red dashed line) for large $Q^2$. Both curves lie on top of each other and they are strongly peaked at $y=1$.}\label{fig:integrand_peak}
\end{center}
\end{figure}
\section{Results} \label{sec:Resuts}
\begin{figure}
\includegraphics[height=5.5cm]{piff1} \\\vspace{0.5cm}
\includegraphics[height=5.5cm]{piff1small}
\caption{(Color online)
The pion form factor $F_\pi (Q^2,\mu_i)$ scaled with $\lambda_i^2= (\mu_1/\mu_i)^2$ for different pion masses $\mu_1=0.42$ GeV (gray dashed line), $\mu_2=0.28$ GeV (red dotted line) and $\mu_3=0.14$ GeV (blue dotdashed line) compared with the data~\cite{Amendolia:1986wj,Brown:1973wr,Bebek:1974iz,Bebek:1976ww,Bebek:1978pe,Volmer:2000ek,Horn:2006tm,Tadevosyan:2007yd,Huber:2008id}, at high $Q^2$ (top) and low $Q^2$ (bottom).} \label{fig:piff}
\end{figure}
The numerical results for the pion form factor presented in this paper use the simple strong quark form factor $h(p^2)$,
\begin{eqnarray}
h(p^2)=\left(\frac{\Lambda_\chi^2-m_\chi^2}{\Lambda_\chi^2-p^2}\right)^n\,, \label{eq:hff}
\end{eqnarray}
obtained in Ref.~I. Here $\Lambda_\chi=2.04$ GeV is a mass parameter determined by a fit of the quark mass function to the lattice QCD data. The power $n=2$ is not inconsistent with the lattice data and ensures that the integrals will converge.
Note that $h(p^2)$ has a pole at $\Lambda_\chi^2=p^2$, but this point lies far outside of the region of the $k$ integration.
Our pion form factor is very insensitive to the particular choice of $h$, as long as $n>1$ for convergence. This remarkable property can be understood, at least for large $Q^2$, from the analysis of Sec.~\ref{sec:highQ2limit}, which revealed that the high-$Q^2$ behavior of the form factor integral is completely determined by its integrand evaluated at the peaking value $k_z=k_{z0}$ of $h$. In this work we have neglected the anomalous moment term in the quark current, proportional to $\kappa$. Conventional arguments suggest that it should be small at large $Q$ and it would vanish for point-like quarks.
We emphasize that our model, in its present form, gives reliable results only for pion masses in a limited range.
In particular, if $\mu$ is larger than the threshold value of $\mu_s=2m_\chi$, the dressed quark propagators develop poles, which allow \emph{both} quarks to go on-mass-shell \emph{at the same time} [recall the discussion following Eq.~(\ref{eq:limI})]. This can happen only because we have not yet included the confining part of the interaction. Once confinement is included this cut will vanish.
Since the value $\mu_s=2m_\chi$ is far above the physical pion mass this does not represent a serious limitation of the present model.
For values of $\mu$ below the threshold value $2m_\chi$, we showed in Sec.~\ref{sec:FFandCST} that the RIA used in this paper breaks down for small pion masses at small $Q^2$. This happens because the pole contributions from the struck quark, neglected in RIA, become large.
The form factor in RIA for a physical pion mass is therefore reasonable at large $Q^2$, but too large at small $Q^2$, and it does not reproduce the correct charge at $Q^2=0$.
For values of $\mu$ near the chiral quark mass $m_\chi$, however, the RIA is a good approximation.
Since the pion form factor depends on the pion mass, we adopt the notation $F_\pi(Q^2,\mu)$. In all cases, $F_\pi(0,\mu)=1$.
We found that the value $\mu$=0.42 GeV gave the best fit to the data over the full range of $Q^2$, so we adopted this form factor as a standard of comparison (whenever we do not explicitly specify the pion mass in the form factor argument, the value $\mu$=0.42 GeV is implied).
We find a remarkable scaling behavior at large $Q^2$:
\begin{eqnarray}
F_\pi(Q^2,\lambda \mu)\stackrel{Q^2\gg\mu^2}{\simeq}\lambda^{2} F_\pi(Q^2,\mu)\,, \label{eq:scaling}
\end{eqnarray}
where $\lambda$ is a scaling parameter. In particular, Fig.~\ref{fig:piff} shows the form factor results for $\mu_3=0.14$~GeV and $\mu_2=0.28$~GeV when scaled to fit the result for $\mu_1=0.42$~GeV. The three curves are compared with the experimental data~\cite{Amendolia:1986wj,Brown:1973wr,Bebek:1974iz,Bebek:1976ww,Bebek:1978pe,Volmer:2000ek,Horn:2006tm,Tadevosyan:2007yd,Huber:2008id}.
\begin{figure}
\includegraphics[height=5.5cm]{piff2}
\\\vspace{0.5cm}
\includegraphics[height=5.5cm]{piff2rescaled}
\caption{(Color online) Scaled form factors compared to the data~\cite{Amendolia:1986wj,Brown:1973wr,Bebek:1974iz,Bebek:1976ww,Bebek:1978pe,Volmer:2000ek,Horn:2006tm,Tadevosyan:2007yd,Huber:2008id}. The top panel shows $\lambda_i^2 Q^2 F_\pi(Q^2,\mu_i)$; the bottom shows $\lambda_i^2 Q^2 F_\pi(\lambda_i^2 Q^2,\mu_i)$. In both panels $\lambda_i=\mu_1/\mu_i$ for different pion masses $\mu_1=0.42$ GeV (gray dashed line), $\mu_2=0.28$ GeV (red dotted line) and $\mu_3=0.14$ GeV (blue dotdashed line). }
\label{fig:Q2piff}
\end{figure}
As $Q^2$ becomes smaller the three curves diverge as a consequence of the breakdown of the RIA in the small-$Q^2$ region for small $\mu$. In the high-$Q^2$ region, as shown in the top panel of Fig.~\ref{fig:Q2piff}, the curves almost lie on top of each other. Furthermore, if the $Q^2$ dependence is also scaled over the whole range of $Q^2$
\begin{eqnarray}
F_\pi(\lambda^2 Q^2,\lambda \mu)\simeq F_\pi(Q^2,\mu) \, \label{eq:scaling2}
\end{eqnarray}
the curves almost lie on top of each other, even at small $Q^2$. This expresses the fact that our form factor depends very weakly on the remaining scale-dependent quantities, $m_\chi$ and $\Lambda_\chi$.
The form factors exhibit a nearly monopole behavior
\begin{eqnarray}
F_\pi(Q^2)\stackrel{Q^2\gg\mu^2}{\sim} \frac{ 1}{Q^2+\nu^2}\, \label{eq:monoploe}
\end{eqnarray}
with a mass scale of $\nu\simeq 0.63$ GeV obtained from a fit to the $\mu_1=0.42$ GeV form factor.
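A simple way to extract $\nu$ from such a fit is to note that for a pure monopole $F=c/(Q^2+\nu^2)$ the inverse $1/F$ is linear in $Q^2$, with $\nu^2$ given by the ratio of intercept to slope. A minimal sketch with synthetic data (the normalization $c$ is arbitrary and the procedure is only illustrative of the fit):

```python
import numpy as np

# For F(Q^2) = c / (Q^2 + nu^2), 1/F is linear in Q^2:
# 1/F = Q^2/c + nu^2/c, so nu^2 = intercept / slope.
nu_true, c = 0.63, 0.5            # GeV; c is an arbitrary normalization
Q2 = np.linspace(1.0, 6.0, 50)    # GeV^2, the large-Q^2 region
F = c / (Q2 + nu_true**2)

slope, intercept = np.polyfit(Q2, 1.0 / F, 1)
nu_fit = float(np.sqrt(intercept / slope))
print(round(nu_fit, 2))  # -> 0.63
```

On real form-factor data the same linear fit applied only at large $Q^2$ would return the mass scale quoted above.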
\begin{figure}
\includegraphics[height=5.5cm]{Rho_pole}
\caption{(Color online) A $\rho$-pole contribution to the pion form factor, normalized to unity at $Q^2=0$ (purple dotted line), compared to our calculation with $\mu=0.42$ GeV (gray dashed line).} \label{fig:Rho}
\end{figure}
\section{Summary and conclusions}\label{sec:conclusions}
This paper uses the Covariant Spectator Theory (CST) to compute the pion form factor in the relativistic impulse approximation (RIA). The CST is formulated in Minkowski space, so that even though results for the form factor at space-like momentum transfer ($q^2=-Q^2<0$) are presented here, the theory can be used to calculate the form factor in the time-like region ($q^2>0$) as well. The manifestly covariant dynamical model for the $q\bar q$ interaction that is the foundation of the calculations presented here incorporates both spontaneous chiral symmetry breaking and confinement, and it is discussed in Ref.~I. Some features of this model were previously introduced by Gross, Milana and \c{S}avkli \cite{QQbar, Savkli:1999me}.
This first calculation of the pion form factor uses the quark mass function obtained in Ref.~I, and expresses the pion vertex function in terms of this mass function. This approximation is particularly good near the chiral limit.
We emphasize that this is a very simple picture for the pion. Still, when combined with the results of Ref.~I, we show that this simple picture can give results that are in good agreement with both the lattice data for the dressed quark mass and the experimental data for the pion electromagnetic form factor. We find some interesting scaling relations relating form factors with different values of $\mu$.
An interesting issue remains. This simple model is able to describe the data well, yet seems to include no contribution from the $\rho$ meson that is expected from vector meson dominance. (For a comparison of our model with what is expected from a simple $\rho$ pole, see Fig.~\ref{fig:Rho}.) Where is the $\rho$ contribution? It should be contained in the dressing of the quark current, $j^\mu$. Maris and Tandy~\cite{MT00} suggest that their Ball-Chiu current contains some of these contributions (in which case our dressed current probably also contains them). The balance between the triangle diagram with no $\rho$ contribution and contributions coming from the dynamical dressing of the quark current, including the $\rho$ pole, will best be understood once the dressed quark current has been calculated in both the time-like and space-like regions.
For a more quantitative study of the light-meson properties the solution of the complete four-channel CST equation and a fit to the light-meson spectrum is needed, which will be the subject of our future program.
\begin{acknowledgements}
T.P. is pleased to acknowledge valuable discussions with Gernot Eichmann. This work received financial support from Funda\c c\~ao para a Ci\^encia e a Tecnologia (FCT) under Grants No.~PTDC/FIS/113940/2009, No. CFTP-FCT (PEst-OE/FIS/U/0777/2013) and No. POCTI/ISFL/2/275. This work was also partially supported by the European Union under the HadronPhysics3 Grant No. 283286, and by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177.
\end{acknowledgements}
\bibliographystyle{h-physrev3}
\section{Introduction}
We consider the critical surface quasi-geostrophic (SQG) equation
\begin{equation}\tag{SQG} \label{eq:SQG}
\begin{cases}
{\partial}_t\theta +\boldsymbol{u} \cdot \nabla \theta+\kappa\Lambda\theta=f\\
\boldsymbol{u} = {\mathcal R}^\perp \theta = \nabla^\perp \Lambda^{-1} \theta\\
\theta(0)=\theta_0
\end{cases}
\end{equation}
posed on ${\mathbb T}^2=[0,1]^2$, where $\kappa\in (0,1]$ measures the strength of the dissipation, $\theta_0(x)$ is the initial datum, and $f(x)$ is a
time-independent force. As usual, $\nabla^\perp= (-\partial_2,\partial_1)$ and $\Lambda = (-\Delta)^{1/2}$ is the Zygmund operator. We assume that the datum and the force have zero mean, i.e.
$
\int_{{\mathbb T}^2} f(x) {\rm d} x = \int_{{\mathbb T}^2} \theta_0(x) {\rm d} x=0,
$
which immediately implies that \[ \int_{{\mathbb T}^2} \theta(x,t) {\rm d} x = 0\] for all $t>0$.
In this manuscript we consider the dynamical system $S(t)$ generated by \eqref{eq:SQG}
on the scale-invariant space $H^1({\mathbb T}^2)$. The main result of this paper
establishes the existence of a compact global attractor for $S(t)$, which uniformly attracts bounded
sets in $H^1({\mathbb T}^2)$.
\begin{theorem}\label{thm:globalattra}
Let $f\in L^\infty({\mathbb T}^2)\cap H^1({\mathbb T}^2)$. The dynamical system $S(t)$ generated by \eqref{eq:SQG}
on $H^1({\mathbb T}^2)$ possesses a unique global attractor ${\mathcal A}$ with the
following properties:
\begin{enumerate}
\item $S(t){\mathcal A}={\mathcal A}$ for every $t\geq0$, namely ${\mathcal A}$ is invariant.
\item ${\mathcal A}\subset H^{3/2}({\mathbb T}^2)$, and is thus compact.
\item For every bounded set $B\subset H^1({\mathbb T}^2)$,
$$
\lim_{t\to\infty}{\rm dist}(S(t)B,{\mathcal A})=0,
$$
where ${\rm dist}$ stands for the usual Hausdorff semi-distance between sets given by the
$H^1({\mathbb T}^2)$ norm.
\item ${\mathcal A}$ is minimal in the class of $H^1({\mathbb T}^2)$-closed attracting sets and
maximal in the class of $H^1({\mathbb T}^2)$-bounded invariant sets.
\item ${\mathcal A}$ has finite fractal dimension.
\end{enumerate}
\end{theorem}
The study of the long time behavior of hydrodynamics models in terms of finite dimensional global attractors
is closely related to questions regarding the turbulent behavior of viscous fluids, especially
in terms of statistical properties of solutions and invariant measures (see, e.g.~\cites{CF85,CFT85,CFT88,CF88,Hale,FJK96,FMRT01,FHT02,SellYou,T3} and references therein). Several fluid equations have been treated from this point of view, in contexts including 2D turbulence \cites{CF85,CFT85,CFT88,CF88,FHT02,FMRT01,IMT04,JT92,Z97,T3}, the 3D Navier-Stokes equations and regularizations thereof~\cites{Sell96,GT97,CTV07,CZG15,CF06,KT09}, and several other
geophysical models~\cites{CT03,FPTZ12,JT15,CD14,CTV13,WT13}, just to mention a few.
Concerning the critical SQG equation, in~\cite{CTV13} the authors have obtained the existence of a finite dimensional invariant attractor
\[\widetilde {\mathcal A} \subset H^{3/2}({\mathbb T}^2)\]
such that all point orbits converge to this attractor
$$
\lim_{t\to \infty}{\rm dist}(S(t)\theta_0, \widetilde {\mathcal A})=0, \qquad \forall \theta_0\in H^1({\mathbb T}^2),
$$
and all bounded sets in a slightly smoother space contract onto it
\begin{align}\label{eq:cptattr}
\lim_{t\to \infty}{\rm dist}(S(t)B, \widetilde {\mathcal A})=0, \qquad \forall B\subset H^{1+\delta}({\mathbb T}^2),\quad \delta>0.
\end{align}
The question of uniform attraction in $H^1({\mathbb T}^2)$ remained open in~\cite{CTV13}, and is now answered in the positive by
Theorem~\ref{thm:globalattra}. Moreover, it is in fact not hard to verify that
\begin{align}\label{eq:A:tilde:A}
{\mathcal A}=\widetilde{\mathcal A}.
\end{align}
Indeed, since ${\mathcal A}$ attracts bounded subsets
of $H^1({\mathbb T}^2)$ and $\widetilde{\mathcal A}$ is invariant, we have
$$
{\rm dist}(\widetilde{\mathcal A},{\mathcal A})={\rm dist} (S(t)\widetilde{\mathcal A},{\mathcal A})\to 0, \qquad \text{as } t\to \infty,
$$
implying $\widetilde{\mathcal A}\subset {\mathcal A}$, since ${\mathcal A}$ is closed. On the other hand, by the invariance
of ${\mathcal A}\subset H^{3/2}({\mathbb T}^2)$ and \eqref{eq:cptattr}, we have
$$
{\rm dist}({\mathcal A},\widetilde{\mathcal A})={\rm dist} (S(t){\mathcal A},\widetilde{\mathcal A})\to 0, \qquad \text{as } t\to \infty,
$$
proving the reverse inclusion ${\mathcal A}\subset \widetilde{\mathcal A}$. Hence, equality holds in \eqref{eq:A:tilde:A}.
Comparing Theorem~\ref{thm:globalattra} to the results in~\cite{CTV13}, the new ingredient of this manuscript is to obtain an absorbing ball for the dynamics on $H^1({\mathbb T}^2)$. That is, we prove the existence of a ball
$
B_a \subset H^1({\mathbb T}^2)
$
such that for any bounded set $B \subset H^1({\mathbb T}^2)$, there exists $t_B\geq 0$ with
\[
S(t) B \subset B_a
\]
for all $t\geq t_B$. The first difficulty here is that the space $H^1$ is critical, i.e. $\|\cdot \|_{\dot{H}^1}$ is invariant under the natural scaling of \eqref{eq:SQG}, and thus the time of local existence of a solution arising from an initial datum $\theta_0 \in H^1({\mathbb T}^2)$ is not known to depend merely on $\|\theta_0\|_{H^1}$ (rather, it may depend on the rate of decay of the Fourier coefficients, such as the rate at which $|k| |\hat \theta_0(k)| \to 0$ as $|k| \to \infty$). The second difficulty comes from the fact that the Sobolev embedding of $H^1({\mathbb T}^2)$ into $L^\infty({\mathbb T}^2)$ fails, and thus we may not directly consider the evolution of the $L^\infty$ norm of the solution.
To overcome these difficulties we proceed in three steps:
\begin{enumerate}
\item First, we use the $L^2$ to $L^\infty$ regularization given by the DeGiorgi iteration~\cites{CV10a,CD14,Sil10a} to obtain an $L^\infty$ absorbing set (cf.~Theorem~\ref{thm:absLinf}), with entry time that depends only on $\|\theta_0\|_{L^2}$ and on $\|f\|_{L^2\cap L^\infty}$ (cf.~Theorem~\ref{thm:Linfest}).
\item Second, we use a quantitative $L^\infty$ to $C^\alpha$ regularization~\cite{CZV14} via nonlinear maximum principles~\cite{CV12} to obtain a $C^\alpha$ absorbing set (cf.~Theorem \ref{thm:absLinf}) with entry time that depends only on $\|\theta_0\|_{L^\infty}$ (the solution already lies inside the $L^\infty$ absorbing set) and on $\|f\|_{L^\infty}$ (cf.~Theorem~\ref{thm:Calphaest}).
\item Lastly, we use~\cite{CTV13} to show the existence of an $H^1$ absorbing set (cf.~Theorem~\ref{thm:H1abs}) with entry time that depends only on $\|\theta_0\|_{C^\alpha}$ (the solution already lies in the $C^\alpha$ absorbing set) and on $\|f\|_{L^\infty \cap H^1}$.
\end{enumerate}
The existence of the global attractor then follows from the $H^{3/2}$ absorbing ball estimate obtained in~\cite{CTV13}. The remainder of the properties (1)--(5) stated in Theorem~\ref{thm:globalattra} follow along the lines of \cites{CFT85,Hale,PZ,SellYou,T3}, as summarized in Section~\ref{sec:globattr} below.
Lastly, we note that recently in~\cite{CD14} the authors have shown that the dynamics of weak $L^2({\mathbb T}^2)$ solutions to \eqref{eq:SQG} possesses a strong global attractor ${\mathcal A}_{L^2}$, which is a compact subset in $L^2({\mathbb T}^2)$. The proof in~\cite{CD14} uses the DeGiorgi regularization ideas of~\cite{CV10a}, the weak continuity property of the nonlinearity in \eqref{eq:SQG} for $L^\infty$ weak solutions (which may be established along the lines of~\cites{CCFS,CTV12}), and the compactness argument of~\cite{C09}. As noted in~\cite{CD14}, we have that ${\mathcal A} \subset {\mathcal A}_{L^2}$, but it is not clear whether the two attractors coincide, which remains an interesting open problem.
\section{The dynamical system generated by SQG}
We recall the following well-posedness result which summarizes the local in time existence and regularization results of~\cites{CCW00,CC04,Dong10,Ju07,Miu06,Wu07} and the global in time regularity established in~\cites{CV10a,CTV13,CV12,FPV09,KN09,KNV07}:
\begin{proposition}\label{prop:WP}
Assume that {$f\in L^\infty\cap H^1$}. Then, for all initial data $\theta_0\in H^1$ the initial value problem \eqref{eq:SQG}
admits a unique global solution
\[
\theta\in C([0,\infty);H^1)\cap L^2_{loc}(0,\infty;H^{3/2}).
\]
Moreover, $\theta$ satisfies the energy inequality
\begin{equation}\label{eq:energyin}
\| \theta(t)\|^2_{L^2}+\kappa\int_0^t \|\Lambda^{1/2}\theta(s)\|_{L^2}^2{\rm d} s\leq \|\theta_0\|^2_{L^2}
+\frac{1}{c_0\kappa}\|f\|^2_{L^2}t, \qquad \forall t\geq 0.
\end{equation}
and the decay estimate
\begin{equation}\label{eq:expdecayL2}
\|\theta(t)\|_{L^2}\leq \|\theta_0\|_{L^2}{\rm e}^{-c_0\kappa t}+\frac{1}{c_0\kappa}\|f\|_{L^2}, \qquad \forall t\geq 0,
\end{equation}
where $c_0>0$ is a universal constant. If furthermore $\theta_0\in L^\infty$, then cf.~\cites{CC04,CTV13} we have
\begin{equation}\label{eq:expdecayLinf}
\|\theta(t)\|_{L^\infty}\leq \|\theta_0\|_{L^\infty}{\rm e}^{-c_0\kappa t}+\frac{1}{c_0\kappa}\|f\|_{L^\infty} , \qquad \forall t\geq 0.
\end{equation}
\end{proposition}
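The decay estimates \eqref{eq:expdecayL2}--\eqref{eq:expdecayLinf} are Gr\"onwall-type bounds for the differential inequality $y'\leq -c_0\kappa y+\|f\|$. A quick numerical sanity check, with illustrative constants, that the bound is respected even when the inequality is saturated:

```python
import numpy as np

# Verify the Gronwall-type bound y(t) <= y0*exp(-c0*kappa*t) + ||f||/(c0*kappa)
# for the saturated inequality y' = -c0*kappa*y + ||f||, via explicit Euler.
# All constants are illustrative, not taken from the paper.
c0, kappa, f_norm, y0 = 1.0, 0.5, 0.3, 2.0
dt, T = 1e-4, 10.0
t, y = 0.0, y0
ok = True
while t < T:
    y += dt * (-c0 * kappa * y + f_norm)   # saturate the differential inequality
    t += dt
    bound = y0 * np.exp(-c0 * kappa * t) + f_norm / (c0 * kappa)
    ok = ok and (y <= bound + 1e-9)
print(ok)  # -> True
```

The trajectory relaxes to the level $\|f\|/(c_0\kappa)$, which is exactly the radius of the absorbing sets constructed below.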
Proposition \ref{prop:WP} translates into the existence of
the solution operators
\[
S(t):H^1\to H^1
\]
acting as
\[
\theta_0\mapsto S(t)\theta_0=\theta(t), \qquad \forall t\geq0.
\]
Since the forcing term $f$ is time independent, the family
$S(t)$ fulfills the semigroup property
\[
S(t+\tau)=S(t)S(\tau), \qquad \forall t,\tau\geq0.
\]
However, no continuous dependence estimate in $H^1$ is available, since the existence of solutions has been obtained as a stability result of the equation posed in $H^{1+\delta}$ (see \cites{Ju07,Miu06} for details). Consequently, it is not clear whether $S(t):H^1\to H^1$ is continuous in the $H^1$ topology for each fixed $t>0$. Along the lines of the classical references \cites{CFT85,Hale, SellYou, T3}, the theory of infinite-dimensional
dynamical systems has been adapted to more general classes of operators in recent
years (see \cites{C09,CCP12, CZ13, PZ}). It turns out that continuity for fixed $t>0$ is only needed to prove
invariance of suitable attracting sets, while their existence holds under no continuity assumptions
on $S(t)$. We shall however see in Section \ref{sec:globattr} that invariance of the attractor may be nonetheless recovered. We recall that:
\begin{definition}
A set $B_{a} \subset H^1$ is said to be an absorbing set for the semigroup $S(t)$ on $H^1$ if for every bounded set $B\subset H^1$
there exists an entering time $t_B\geq 0$ (depending only on $B$) such that
\[
S(t)B\subset B_{a}
\]
for all $t\geq t_B$.
\end{definition}
The absorbing set, besides giving a first rough estimate of the dissipativity of the system, is the crucial
preliminary step needed to prove the existence of the global attractor. In particular, a \emph{sufficient}
condition for the existence of the global attractor is the existence of a
\emph{compact} absorbing set.
\section{De Giorgi iteration yields an \texorpdfstring{$L^\infty$}{L infinity} absorbing set}
The first step towards the proof of the existence of a regular uniformly absorbing set
consists in showing that the dynamics can be restricted to uniformly bounded solutions. To put it
in different words, we aim to show the existence of an absorbing set $B_\infty\subset L^\infty \cap H^1$.
\begin{theorem}\label{thm:absLinf}
There exists $c_0>0$ a universal constant, such that the set
\begin{equation*}
B_{\infty}=\left\{\varphi\in L^\infty\cap H^1: \|\varphi\|_{L^\infty}\leq \frac{2}{c_0\kappa}\|f\|_{L^\infty}\right\}
\end{equation*}
is an absorbing set for $S(t)$. Moreover,
\begin{equation}\label{eq:estimateLinf}
\sup_{t\geq 0}\sup_{\theta_0\in B_\infty}\|S(t)\theta_0\|_{L^\infty}\leq \frac{3}{c_0\kappa}\|f\|_{L^\infty}.
\end{equation}
\end{theorem}
The above theorem is a consequence of Theorem \ref{thm:Linfest} below.
It is worth noticing that at this stage $B_{\infty}$ is unbounded in $H^1$.
The main result of this section is an $L^\infty$ estimate on the solutions to \eqref{eq:SQG} based
on a De Giorgi type iteration procedure and standard a priori estimates.
The proof closely follows that of \cite{CV10a} and \cite{CD14}.
\begin{theorem}\label{thm:Linfest}
Let $\theta(t)$ be the solution to \eqref{eq:SQG} with initial datum $\theta_0\in H^1$. Then
\begin{equation}\label{eq:linfty}
\| \theta(t)\|_{L^\infty}\leq \frac{c}{\kappa}\left[\|\theta_0\|_{L^2}+\frac{1}{\kappa^{1/2}}\|f\|_{L^2}\right]{\rm e}^{-c_0\kappa t}+\frac{1}{c_0\kappa}\|f\|_{L^\infty}
\end{equation}
for all $t \geq 1$, for some constant $c>0$.
\end{theorem}
\begin{proof}[Proof of Theorem~\ref{thm:Linfest}]
We split the proof in two steps: first, we prove that $\theta(\tau)\in L^\infty$ for almost every $\tau\in(1/2,1)$, and then we will exploit
\eqref{eq:expdecayLinf} to conclude the proof.
For $M\geq 2\|f\|_{L^\infty}$ to be fixed later, we denote by $\eta_k$ the levels
$$
\eta_k=M(1-2^{-k})
$$
and by $\theta_k$ the truncated function
$$
\theta_k(t)=(\theta(t)-\eta_k)_+ = \max\{ \theta(t) - \eta_k,0\}.
$$
Define also the time cutoffs
\begin{equation}\label{eq:timecut}
\tau_k=\frac12(1-2^{-k}).
\end{equation}
As observed in \cites{CV10a, CD14}, in view of the pointwise inequality of \cite{CC04}, the level set inequality
\begin{equation*}
\|\theta_k(t_2)\|^2_{L^2}+ 2\kappa \int_{t_1}^{t_2}\|\Lambda^{1/2}\theta_k(\tau)\|^2_{L^2}{\rm d} \tau\leq \|\theta_k(t_1)\|^2_{L^2}+
2\|f\|_{L^\infty}\int_{t_1}^{t_2}\|\theta_k(\tau)\|_{L^1}{\rm d} \tau
\end{equation*}
holds for any $t_2\geq t_1\geq 0$. Taking $t_1=s\in (\tau_{k-1},\tau_k)$ and $t_2=t\in(\tau_k,1]$, we then obtain
\begin{equation*}
\sup_{t\in[\tau_k,1]}\|\theta_k(t)\|^2_{L^2}+ 2\kappa \int_{\tau_k}^{1}\|\Lambda^{1/2}\theta_k(\tau)\|^2_{L^2}{\rm d} \tau\leq \|\theta_k(s)\|^2_{L^2}+
2\|f\|_{L^\infty}\int_{\tau_{k-1}}^{1}\|\theta_k(\tau)\|_{L^1}{\rm d} \tau.
\end{equation*}
Upon averaging over $s\in (\tau_{k-1},\tau_k)$, it follows that the quantity
$$
Q_k= \sup_{t\in[\tau_k,1]} \|\theta_k(t)\|^2_{L^2}+ 2\kappa \int_{\tau_k}^{1}\|\Lambda^{1/2}\theta_k(t)\|^2_{L^2}{\rm d} t,
$$
obeys the inequality
\begin{equation}\label{eq:iter1}
Q_k\leq 2^{k}\int_{\tau_{k-1}}^{1}\|\theta_k(s)\|^2_{L^2}{\rm d} s+2\|f\|_{L^\infty}\int_{\tau_{k-1}}^{1}\|\theta_k(t)\|_{L^1}{\rm d} t
\end{equation}
for all $k\in \mathbb{N}$. Moreover, due to \eqref{eq:energyin}, we also have
\begin{equation}\label{eq:iter00}
Q_0\leq \|\theta_0\|^2_{L^2}+\frac{1}{c_0\kappa}\|f\|^2_{L^2}.
\end{equation}
We now bound the right hand side by a power of $Q_{k-1}$.
By the H\"older inequality and the Sobolev embedding $H^{1/2}\subset L^4$, it is not hard to see that
\begin{equation}\label{eq:L3}
\|\theta_\ell\|_{L^3({\mathbb T}^2\times[\tau_\ell,1])}^2\leq \frac{c}{\kappa^{2/3}}Q_\ell, \qquad \forall \ell\in \mathbb{N}.
\end{equation}
Since
$$
\theta_{k-1}\geq 2^{-k}M, \quad \mbox{on the set}\quad \{(x,t):\,\theta_k(x,t)>0\},
$$
we deduce that
$$
\mathds{1}_{\{\theta_k>0\}}\leq \frac{2^k}{M}\theta_{k-1}.
$$
Using the fact that $\theta_k\leq\theta_{k-1}$ and that the bound \eqref{eq:L3} holds, we infer that
\begin{align*}
2^{k}\int_{\tau_{k-1}}^{1}\|\theta_k(s)\|^2_{L^2}{\rm d} s
&\leq 2^{k}\int_{\tau_{k-1}}^{1}\int_{{\mathbb T}^2}\theta^2_{k-1}(x,s) \mathds{1}_{\{\theta_k>0\}}{\rm d} x\,{\rm d} s\\
&\leq \frac{2^{2k}}{M}\int_{\tau_{k-1}}^{1}\int_{{\mathbb T}^2}\theta^3_{k-1}(x,s) {\rm d} x\,{\rm d} s \leq c\frac{2^{2k}}{M\kappa }Q_{k-1}^{3/2},
\end{align*}
and similarly,
\begin{align*}
\int_{\tau_{k-1}}^{1}\|\theta_k(t)\|_{L^1}{\rm d} t &\leq \int_{\tau_{k-1}}^{1}\int_{{\mathbb T}^2}\theta_{k-1}(x,s) \mathds{1}^2_{\{\theta_k>0\}}{\rm d} x\,{\rm d} s\\
&\leq \frac{2^{2k}}{M^2}\int_{\tau_{k-1}}^{1}\int_{{\mathbb T}^2}\theta^3_{k-1}(x,s) {\rm d} x\,{\rm d} s \leq c\frac{2^{2k}}{M^2 \kappa}Q_{k-1}^{3/2}.
\end{align*}
From \eqref{eq:iter1}, the above estimates and the fact that $M\geq 2\|f\|_{L^\infty}$, it follows that
\begin{equation}\label{eq:iter2}
Q_k\leq c\frac{2^{2k}}{M\kappa }Q_{k-1}^{3/2}.
\end{equation}
Hence, if we ensure
$$
M\geq \frac{c}{\kappa}\sqrt{Q_0},
$$
then $Q_k\to 0$ as $k\to \infty$. In light of \eqref{eq:iter00}, the above constraint is in particular satisfied if
\begin{equation}\label{eq:iter3}
M\geq \frac{c}{\kappa}\left[\|\theta_0\|_{L^2}+\frac{1}{\kappa^{1/2}}\|f\|_{L^2}\right].
\end{equation}
This implies that $\theta$ is bounded above by $M$. Applying the same argument to $-\theta$, we
infer the bound
\begin{equation*}
\|\theta(\tau)\|_{L^\infty}\leq \frac{c}{\kappa}\left[\|\theta_0\|_{L^2}+\frac{1}{\kappa^{1/2}}\|f\|_{L^2}\right], \qquad \text{a.e.}\ \tau\in (1/2,1).
\end{equation*}
Once $\theta(\tau)\in L^\infty$ for some $\tau\in (1/2,1)$, we can exploit the decay estimate
\eqref{eq:expdecayLinf} to deduce the uniform bound \eqref{eq:linfty}, thereby concluding
the proof.
\end{proof}
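The collapse of the iterates in \eqref{eq:iter2} under the condition \eqref{eq:iter3} is easy to see numerically. A minimal sketch (the constants are illustrative, and the prefactor in $M$ is taken comfortably above the threshold):

```python
# Numerical illustration of the De Giorgi recursion (eq:iter2):
# Q_k <= (c * 4**k / (M * kappa)) * Q_{k-1}**1.5.
# When M >= (c/kappa)*sqrt(Q0), as in (eq:iter3), the super-linear power
# beats the geometric factor 4**k and the sequence collapses to zero.
c, kappa, Q0 = 1.0, 0.5, 1.0
M = 64 * (c / kappa) * Q0**0.5     # comfortably above the threshold
Q = Q0
for k in range(1, 60):
    Q = (c * 4**k / (M * kappa)) * Q**1.5
print(Q < 1e-30)  # -> True
```

With $M$ below the threshold the same recursion blows up, which is why the constraint \eqref{eq:iter3} is needed.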
\begin{proof}[Proof of Theorem~\ref{thm:absLinf}]
Define
\begin{equation*}
B_{\infty}=\left\{\varphi\in L^\infty\cap H^1: \|\varphi\|_{L^\infty}\leq \frac{2}{c_0\kappa}\|f\|_{L^\infty}\right\}.
\end{equation*}
For a fixed bounded set $B\subset H^1$ let
$$
R=\|B\|_{H^1}=\sup_{\varphi\in B}\|\varphi\|_{H^1}.
$$
Thanks to \eqref{eq:linfty} and the Poincar\'e inequality,
we deduce that if $\theta_0\in B$ then
\begin{equation*}
\| S(t)\theta_0\|_{L^\infty}\leq \frac{c}{\kappa}\left[R+\frac{1}{\kappa^{1/2}}\|f\|_{L^2}\right]{\rm e}^{-c_0\kappa t}+\frac{1}{c_0\kappa}\|f\|_{L^\infty} , \qquad \forall t\geq 1.
\end{equation*}
Define the entering time $t_B = t_B(R,\|f\|_{L^2 \cap L^\infty})\geq 1$ so that
$$
\frac{c}{\kappa}\left[R+\frac{1}{\kappa^{1/2}}\|f\|_{L^2}\right]{\rm e}^{-c_0\kappa t_B}\leq\frac{1}{c_0\kappa}\|f\|_{L^\infty},
$$
for which we see that $S(t)B\subset B_\infty$ for all $t\geq t_B$. Thus $B_\infty$ is absorbing, and
Theorem \ref{thm:absLinf} is proven.
\end{proof}
\begin{remark}\label{rmk:L2toLinf}
If we replace the time cutoffs in \eqref{eq:timecut} with
$$
\tau_k=t_0 (1-2^{-k}), \qquad t_0\in (0,1),
$$
it follows that the solution regularizes from $L^2$ to $L^\infty$ instantaneously.
\end{remark}
\section{Nonlinear lower bounds yield H\"older absorbing sets}
We devote this section to the improvement of the regularity of absorbing sets, namely
from $L^\infty$ to $C^\alpha$, for $\alpha\in (0,1)$ small enough depending on $B_\infty$.
\begin{theorem}\label{thm:Calpha}
There exists
$\alpha=\alpha(\|f\|_{L^\infty},\kappa)\in (0,1/4]$ and a constant $c_1\geq 1$
such that the set
\begin{equation*}
B_{\alpha}=\left\{\varphi\in C^\alpha\cap H^1: \|\varphi\|_{C^\alpha}\leq \frac{c_1}{\kappa}\|f\|_{L^\infty}\right\}
\end{equation*}
is an absorbing set for $S(t)$. Moreover,
\begin{equation}\label{eq:Calphaunif}
\sup_{t\geq 0}\sup_{\theta_0\in B_\alpha}\|S(t)\theta_0\|_{C^\alpha}\leq \frac{2 c_1}{\kappa}\|f\|_{L^\infty},
\end{equation}
holds.
\end{theorem}
In light of Theorem \ref{thm:absLinf}, the solutions to \eqref{eq:SQG} emerging from data in a bounded subset of $H^1$ are absorbed in finite time
by a fixed subset of $L^\infty$. Therefore, in order to prove Theorem \ref{thm:Calpha}, it suffices
to restrict our attention to solutions emanating from initial data $\theta_0\in L^\infty$
and derive a number of a priori bounds solely in terms of $\|\theta_0\|_{L^\infty}$.
For convenience, in the course of this section we will set
\begin{equation}\label{eq:globalLinf}
K_\infty=\|\theta_0\|_{L^\infty}+\frac{1}{c_0\kappa}\|f\|_{L^\infty},
\end{equation}
so that in view of \eqref{eq:expdecayLinf} the solution originating from $\theta_0$ satisfies the global bound
\begin{equation}\label{eq:globalLinf2}
\|\theta(t)\|_{L^\infty}\leq K_\infty, \qquad \forall t\geq 0.
\end{equation}
The main result of this section is the following a priori estimate in suitable H\"older space.
\begin{theorem}\label{thm:Calphaest}
Assume that $\theta_0\in L^\infty\cap H^1$. There exists
$\alpha=\alpha(\|\theta_0\|_{L^\infty},\|f\|_{L^\infty},\kappa)\in (0,1/4]$
such that
\begin{equation}\label{eq:Calpha}
\|\theta(t)\|_{C^\alpha}\leq c \left[\|\theta_0\|_{L^\infty}+\frac{1}{c_0\kappa}\|f\|_{L^\infty}\right], \qquad \forall t\geq t_\alpha= \frac{3}{2(1-\alpha)}
\end{equation}
for some positive constant $c>0$.
\end{theorem}
The precise expression of $\alpha$ is given below in \eqref{eq:alpha}. The proof of Theorem \ref{thm:Calphaest}
requires several intermediate steps culminating in Lemma \ref{lem:Calpha}. For now, let us prove
Theorem \ref{thm:Calpha} assuming Theorem \ref{thm:Calphaest}.
\begin{proof}[Proof of Theorem \ref{thm:Calpha}]
We first show that there exists $\alpha\in(0,1/4]$ and $c_1\geq 1$ such that $B_\alpha$ is absorbing. Clearly,
it is enough to prove that the $L^\infty$-absorbing set $B_\infty$ is itself absorbed by $B_\alpha$.
Fix $\alpha$ as suggested by Theorem \ref{thm:Calphaest}, namely,
$$
\alpha=\alpha(\|B_\infty\|_{L^\infty},\|f\|_{L^\infty},\kappa), \quad \mbox{where} \quad \|B_\infty\|_{L^\infty}=\sup_{\varphi\in B_\infty}\|\varphi\|_{L^\infty}\leq \frac{2}{c_0\kappa}\|f\|_{L^\infty}.
$$
Take $\theta_0\in B_\infty$. By \eqref{eq:estimateLinf},
$$
\|S(t)\theta_0\|_{L^\infty}\leq \frac{3}{c_0\kappa}\|f\|_{L^\infty}, \qquad \forall t\geq 0.
$$
Consequently, \eqref{eq:Calpha} implies that
$$
\|S(t)\theta_0\|_{C^\alpha}\leq \frac{4c}{c_0\kappa}\|f\|_{L^\infty}, \qquad \forall t\geq t_\alpha,
$$
namely $S(t)\theta_0\in B_\alpha$ for all $t\geq t_\alpha$, upon choosing $c_1=4c/c_0$. The fact
that $t_\alpha$ depends only on $\|B_\infty\|_{L^\infty}$, $\|f\|_{L^\infty}$, and $\kappa$, implies that
$$
S(t)B_\infty\subset B_\alpha, \qquad \forall t\geq t_\alpha,
$$
as sought. The uniform estimate \eqref{eq:Calphaunif} follows from the propagation
of H\"older regularity proven in \cite{CTV13}, namely the property that if $\theta_0\in C^\alpha$,
then
\begin{equation}\label{eq:propag}
\|S(t)\theta_0\|_{C^\alpha}\leq [\theta_0]_{C^\alpha}+c \left[\|\theta_0\|_{L^\infty}+\frac{1}{c_0\kappa}\|f\|_{L^\infty}\right], \qquad \forall t\geq0.
\end{equation}
This concludes the proof of the theorem.
\end{proof}
The rest of the section is dedicated to the proof of Theorem \ref{thm:Calphaest}. The techniques employed
are in the spirit of those devised in \cites{CTV13,CV12}, while the approach is most closely related to that of \cite{CZV14},
used to prove eventual regularity for the supercritical SQG equation.
\subsection{Time dependent nonlinear lower bounds}
In order to estimate $C^\alpha$-seminorms it is natural to consider the finite difference
\begin{align*}
\delta_h\theta(x,t)=\theta(x+h,t)-\theta(x,t),
\end{align*}
which is periodic in both $x$ and $h$, where $x,h \in {\mathbb T}^2$. As in \cites{CV12,CTV13}, it follows that
\begin{equation}\label{eq:findiff}
L (\delta_h\theta)^2+ \kappa D[\delta_h\theta]=2\,\delta_hf\,\delta_h\theta,
\end{equation}
where $L$ denotes the differential operator
\begin{equation}
\label{eq:L:def}
L={\partial}_t+\boldsymbol{u}\cdot \nabla_x+(\delta_h\boldsymbol{u})\cdot \nabla_h+ \kappa\Lambda
\end{equation}
and
\begin{equation}
\label{eq:D:gamma:def}
D[\varphi](x)= c \int_{\mathbb{R}^2} \frac{\big[\varphi(x)-\varphi(x+y)\big]^2}{|y|^{3}}{\rm d} y.
\end{equation}
Let $\xi:[0,\infty)\to[0,\infty)$ be a bounded decreasing differentiable function to be determined later. For
\begin{equation*}
0<\alpha \leq \frac14
\end{equation*}
to be fixed later on,
we study the evolution of the quantity $v(x,t;h)$ defined by
\begin{equation}\label{eq:v}
v(x,t;h) =\frac{|\delta_h\theta(x,t)|}{(\xi(t)^2+|h|^2)^{\alpha/2}}.
\end{equation}
The main point is that when $\xi(t)=0$ we have that
$$
\|v(t)\|_{L^\infty_{x,h}}=\esup_{x,h\in{\mathbb T}^2} |v(x,t;h)| = \sup_{x\neq y \in{\mathbb T}^2} \frac{|\theta(x,t)-\theta(y,t)|}{|x-y|^{\alpha}} = [\theta(t)]_{C^\alpha}.
$$
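Note also that, once the normalization $\xi(0)=1$ is fixed in \eqref{eq:ODE} below, the definition \eqref{eq:v} together with the elementary bound $|\delta_h\theta_0|\leq 2\|\theta_0\|_{L^\infty}$ yields
$$
\|v(0)\|_{L^\infty_{x,h}}\leq 2\|\theta_0\|_{L^\infty},
$$
an observation that will be used in the proof of Lemma \ref{lem:Calpha}.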
From \eqref{eq:findiff} and a short calculation (see~\cite{CZV14}) we obtain that
\begin{align}
L v^2+\frac{\kappa D[\delta_h\theta] }{(\xi^2+|h|^2)^\alpha}
&=2\alpha |\dot\xi|\frac{\xi}{\xi^2+|h|^2}v^2 -2\alpha \frac{h}{\xi^2+|h|^2}\cdot \delta_h\boldsymbol{u} \, v^2+
\frac{\delta_hf}{(\xi^2+|h|^2)^{\alpha/2}} v\notag \\
&\leq 2\alpha |\dot\xi|\frac{\xi}{\xi^2+|h|^2}v^2 +2\alpha \frac{|h|}{\xi^2+|h|^2}|\delta_h\boldsymbol{u}|v^2+
\frac{2\|f\|_{L^\infty}}{(\xi^2+|h|^2)^{\alpha/2}} v\label{eq:ineq1}
\end{align}
where $\delta_h\boldsymbol{u}= {\mathcal R}^\perp \delta_h\theta$. We will bound the terms on the right-hand
side of \eqref{eq:ineq1} in such a way so that they can be compared with the dissipative term $D[\delta_h\theta]$
and its nonlinear lower bounds derived in the following lemma.
\begin{lemma}\label{lem:nonlinbdd}
There exists a positive constant
$c_2$ such that
\begin{equation}\label{eq:N1}
\frac{D[\delta_h\theta](x,t)}{(\xi(t)^2+|h|^2)^\alpha}
\geq \frac{|v(x,t;h)|^3}{c_2\|\theta(t)\|_{L^\infty}(\xi(t)^2+|h|^2)^{\frac{1-\alpha}{2}}}
\end{equation}
holds for any $x,h\in {\mathbb T}^2$ and any $t\geq 0$.
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lem:nonlinbdd}]
In the course of the proof, we omit the dependence on $t$ of all functions. It is understood
that every calculation is performed pointwise in $t$.
Arguing as in \cite{CTV13}, it can be shown that for $r\geq 4|h|$ there holds
$$
D[\delta_h\theta](x)\geq \frac{1}{2r}|\delta_h\theta(x)|^2-c|\delta_h\theta(x)|\|\theta\|_{L^\infty}\frac{|h|}{r^2},
$$
where $c\geq 1$ is an absolute constant.
A choice satisfying $r\geq 4(\xi^2+|h|^2)^{1/2}\geq 4|h|$ can be made as
$$
r=\frac{4c\|\theta\|_{L^\infty}}{|\delta_h\theta(x)|} (\xi^2+|h|^2)^{1/2},
$$
from which it follows that
\begin{align*}
D[\delta_h\theta](x)&\geq \frac{|\delta_h\theta(x)|^2}{2r}\left[1-\frac12\frac{|h|}{(\xi^2+|h|^2)^{1/2}}\right]\\
&\geq \frac{|\delta_h\theta(x)|^2}{4r}=\frac{|\delta_h\theta(x)|^3}{16c \|\theta\|_{L^\infty} (\xi^2+|h|^2)^{1/2}}.
\end{align*}
The lower bound \eqref{eq:N1} follows by dividing the above inequality by $(\xi^2+|h|^2)^\alpha$.
\end{proof}
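For later reference, we record the value of the constant produced by the proof: dividing the last lower bound by $(\xi^2+|h|^2)^\alpha$ and using $|\delta_h\theta(x)|^3=|v(x;h)|^3(\xi^2+|h|^2)^{3\alpha/2}$, we find
$$
\frac{D[\delta_h\theta](x)}{(\xi^2+|h|^2)^{\alpha}}\geq \frac{|v(x;h)|^{3}}{16c\,\|\theta\|_{L^\infty}(\xi^2+|h|^2)^{\frac{1-\alpha}{2}}},
$$
so that \eqref{eq:N1} holds with $c_2=16c$.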
The choice for the function $\xi$ is now closely related to the lower bound \eqref{eq:N1}. We assume that $\xi$
solves the ordinary differential equation
\begin{equation}\label{eq:ODE}
\dot\xi =- \xi^{{\frac{1+2\alpha}{3}}},\qquad \xi(0)=1.
\end{equation}
More explicitly,
\begin{equation}\label{eq:xi}
\xi(t)=\begin{cases}
\displaystyle \left[1-\frac23(1-\alpha) t\right]^{\frac{3}{2(1-\alpha)}}, \quad &\text{if } t\in [0,t_\alpha],\\ \\
0,\quad &\text{if } t \in (t_\alpha,\infty),
\end{cases}
\end{equation}
where
\begin{equation}\label{eq:regtime}
t_\alpha=\frac{3}{2(1-\alpha)}.
\end{equation}
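A direct computation confirms that \eqref{eq:xi} solves \eqref{eq:ODE}: for $t\in[0,t_\alpha)$,
$$
\dot\xi(t)=-\left[1-\frac23(1-\alpha) t\right]^{\frac{3}{2(1-\alpha)}-1}=-\left[1-\frac23(1-\alpha) t\right]^{\frac{1+2\alpha}{2(1-\alpha)}}=-\xi(t)^{\frac{1+2\alpha}{3}},
$$
since $\frac{3}{2(1-\alpha)}\cdot\frac{1+2\alpha}{3}=\frac{1+2\alpha}{2(1-\alpha)}$, while $t_\alpha$ in \eqref{eq:regtime} is precisely the time at which $\xi$ reaches zero.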
We then have the following result.
\begin{lemma}\label{lem:ODE}
Assume that the function $\xi:[0,\infty)\to[0,\infty)$ is given by \eqref{eq:xi}. Then
the estimate
\begin{equation}\label{eq:est1}
2\alpha |\dot\xi(t)|\frac{\xi(t)}{\xi(t)^2+|h|^2}|v(x,t;h)|^2\leq \frac{\kappa|v(x,t;h)|^3}{8c_2\|\theta(t)\|_{L^\infty}(\xi(t)^2+|h|^2)^{\frac{1-\alpha}{2}}} +\frac{c}{\kappa^2}\|\theta(t)\|^2_{L^\infty}
\end{equation}
holds pointwise for $x,h \in {\mathbb T}^2$ and $t\geq 0$, where $c_2$ is the same constant appearing in \eqref{eq:N1}.
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lem:ODE}]
We again suppress the $t$-dependence in all the estimates below.
In view of \eqref{eq:ODE} and the fact that $\alpha\leq 1/4$, a simple computation shows that
$$
2\alpha |\dot\xi|\frac{\xi}{\xi^2+|h|^2}|v(x;h)|^2\leq \frac12\frac{\xi^{{\frac{4+2\alpha}{3}}}}{\xi^2+|h|^2}|v(x;h)|^2
\leq \frac12\frac{|v(x;h)|^2}{(\xi^2+|h|^2)^{\frac{1-\alpha}{3}}}.
$$
Therefore, the $\varepsilon$-Young inequality
\begin{equation}\label{eq:young}
ab\leq \frac{2\varepsilon}{3} a^{3/2}+\frac{1}{3\varepsilon^2}b^3,\qquad a,b,\varepsilon>0
\end{equation}
with $\varepsilon=\kappa/(12c_2\|\theta\|_{L^\infty})$ implies that
$$
2\alpha |\dot\xi|\frac{\xi}{\xi^2+|h|^2}|v(x;h)|^2
\leq\frac{\kappa|v(x;h)|^3}{8c_2\|\theta\|_{L^\infty}(\xi^2+|h|^2)^{\frac{1-\alpha}{2}}}
+\frac{c}{\kappa^2}\|\theta\|^2_{L^\infty},
$$
which is what we claimed.
\end{proof}
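We observe that the $\varepsilon$-Young inequality \eqref{eq:young} used above is nothing but the classical Young inequality $AB\leq \frac{2}{3}A^{3/2}+\frac{1}{3}B^{3}$, applied with $A=\varepsilon^{2/3}a$ and $B=\varepsilon^{-2/3}b$, so that
$$
ab=AB\leq \frac{2\varepsilon}{3}\, a^{3/2}+\frac{1}{3\varepsilon^{2}}\, b^{3}.
$$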
In the same fashion, we can estimate the forcing term appearing in \eqref{eq:ineq1}.
\begin{lemma}\label{lem:force}
For every $x,h \in {\mathbb T}^2$ and $t\geq 0$ we have
\begin{equation}\label{eq:est2}
\frac{2\|f\|_{L^\infty}}{(\xi(t)^2+|h|^2)^{\alpha/2}} v(x,t;h)\leq
\frac{\kappa|v(x,t;h)|^3}{8c_2\|\theta(t)\|_{L^\infty}(\xi(t)^2+|h|^2)^{\frac{1-\alpha}{2}}}
+ c\kappa^{1/2} \|f\|_{L^\infty}^{3/2}\|\theta(t)\|^{1/2}_{L^\infty},
\end{equation}
where $c_2$ is the same constant appearing in \eqref{eq:N1}.
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lem:force}]
Applying once more the Young inequality \eqref{eq:young}
we infer that
$$
\frac{2\|f\|_{L^\infty}}{(\xi^2+|h|^2)^{\alpha/2}} v(x;h)\leq
\frac{\kappa|v(x;h)|^3}{8c_2\|\theta\|_{L^\infty}(\xi^2+|h|^2)^{\frac{1-\alpha}{2}}}+c (\xi^2+|h|^2)^{\frac{1-4\alpha}{4}}
\kappa^{1/2}\|f\|_{L^\infty}^{3/2}\|\theta\|^{1/2}_{L^\infty}.
$$
The conclusion follows from the assumption $\alpha\leq 1/4$ and the bounds $\xi,|h|\leq 1$.
\end{proof}
Applying the bounds \eqref{eq:est1} and \eqref{eq:est2} to \eqref{eq:ineq1}, we end up with
\begin{equation}\label{eq:ineq2}
\begin{aligned}
L v^2+\frac{\kappa}{2}\frac{ D[\delta_h\theta] }{(\xi^2+|h|^2)^\alpha}&+\frac{\kappa|v|^3}{4c_2\|\theta\|_{L^\infty}(\xi^2+|h|^2)^{\frac{1-\alpha}{2}}}\\
&\leq 2\alpha \frac{|h|}{\xi^2+|h|^2}|\delta_h\boldsymbol{u}|v^2+c\left[\|\theta\|^2_{L^\infty}+\kappa^{1/2} \|f\|_{L^\infty}^{3/2}\|\theta\|^{1/2}_{L^\infty}\right].
\end{aligned}
\end{equation}
In the next section, we provide an upper bound on the remaining term containing $\delta_h\boldsymbol{u}$.
\subsection{Estimates on the nonlinear term}
We stress once more that the only restriction imposed on $\alpha$ so far is
$\alpha\in(0,1/4]$, which was used in the proofs of Lemmas \ref{lem:ODE} and \ref{lem:force}.
In order to deal with the Riesz transform contained in $\delta_h\boldsymbol{u}$, the H\"older exponent will be
further restricted in terms of the initial datum $\theta_0$ and the forcing term $f$. It is crucial
that this restriction depends only on $\|\theta_0\|_{L^\infty}$ and $\|f\|_{L^\infty}$.
\begin{lemma}\label{lem:rieszbdd}
Suppose that $\theta_0\in L^\infty$, and set
\begin{equation}\label{eq:alpha}
\alpha=\min\left\{\frac{\kappa}{c_3K_\infty},\frac14\right\}, \qquad K_\infty=\|\theta_0\|_{L^\infty}+\frac{1}{c_0\kappa}\|f\|_{L^\infty},
\end{equation}
for a universal constant $c_3\geq 64$. Then
\begin{equation}\label{eq:est3}
2\alpha \frac{|h||\delta_h\boldsymbol{u}(x,t)|}{\xi(t)^2+|h|^2}|v(x,t;h)|^2\leq \frac{\kappa}{2} \frac{D[\delta_h\theta](x,t)}{(\xi(t)^2+|h|^2)^\alpha}
+ \frac{\kappa}{8c_2K_\infty(\xi(t)^2+|h|^2)^{\frac{1-\alpha}{2}}} |v(x,t;h)|^3,
\end{equation}
for every $x,h \in {\mathbb T}^2$ and $t\geq 0$, where $c_2$ is the same constant appearing in \eqref{eq:N1}.
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lem:rieszbdd}]
By the same arguments as in \cites{CTV13,CV12}, for $r\geq 4|h|$ it is possible to derive the upper bound
$$
|\delta_h\boldsymbol{u}(x)|\leq c \left[r^{1/2} \big[D[\delta_h\theta](x)\big]^{1/2} +\frac{\|\theta\|_{L^\infty}|h|}{r} \right],
$$
pointwise in $x,h\in {\mathbb T}^2$ and $t\geq 0$. Using the Cauchy-Schwarz inequality, we deduce that
\begin{align*}
\frac{2\alpha |h|}{\xi^2+|h|^2}|\delta_h\boldsymbol{u}(x)||v(x;h)|^2
&\leq \frac{2\alpha}{(\xi^2+|h|^2)^{1/2}}|\delta_h\boldsymbol{u}(x)||v(x;h)|^2\\
&\leq \frac{\kappa}{2} \frac{D[\delta_h\theta](x)}{(\xi^2+|h|^2)^\alpha} + c\left[\frac{\alpha^2}{\kappa(\xi^2+|h|^2)^{1-\alpha}}r |v(x;h)|^4 +\alpha \frac{\|\theta\|_{L^\infty}}{r}|v(x;h)|^2\right].
\end{align*}
We then choose $r$ as
$$
r=\frac{\kappa^{1/2}\|\theta\|^{1/2}_{L^\infty} (\xi^2+|h|^2)^{\frac{1-\alpha}{2}}}{\alpha^{1/2}v(x;h)}=
\frac{\kappa^{1/2}\|\theta\|^{1/2}_{L^\infty}(\xi^2+|h|^2)^{1/2}}{\alpha^{1/2}|\delta_h\theta(x)|}.
$$
In view of \eqref{eq:alpha}, this is a feasible choice, since
$$
r\geq \frac{\kappa^{1/2}\|\theta\|^{1/2}_{L^\infty}}{2\alpha^{1/2}\|\theta\|_{L^\infty}}|h|=
\frac{\kappa^{1/2}}{2\alpha^{1/2}\|\theta\|^{1/2}_{L^\infty}}|h|\geq
\frac{\kappa^{1/2}}{2\alpha^{1/2}K_\infty^{1/2}}|h|\geq 4|h|.
$$
Thus, thanks to \eqref{eq:alpha}, we obtain
\begin{align*}
2\alpha \frac{|h|}{\xi^2+|h|^2}|\delta_h\boldsymbol{u}(x)||v(x;h)|^2
&\leq \frac{\kappa}{2} \frac{D[\delta_h\theta](x)}{(\xi^2+|h|^2)^\alpha}
+ c\frac{\alpha^{3/2}\|\theta\|^{1/2}_{L^\infty} }{\kappa^{1/2}(\xi^2+|h|^2)^{\frac{1-\alpha}{2}}} |v(x;h)|^3\\
&\leq \frac{\kappa}{2} \frac{D[\delta_h\theta](x)}{(\xi^2+|h|^2)^\alpha}
+ c\frac{\alpha^{3/2}K_\infty^{1/2} }{\kappa^{1/2}(\xi^2+|h|^2)^{\frac{1-\alpha}{2}}} |v(x;h)|^3\\
&\leq \frac{\kappa}{2} \frac{D[\delta_h\theta](x)}{(\xi^2+|h|^2)^\alpha}
+ c\frac{\alpha}{(\xi^2+|h|^2)^{\frac{1-\alpha}{2}}} |v(x;h)|^3.
\end{align*}
By possibly further reducing $\alpha$ so that
$$
\alpha \leq \frac{\kappa}{8c c_2 K_\infty},
$$
we deduce that
\begin{align*}
2\alpha \frac{|h|}{\xi^2+|h|^2}|\delta_h\boldsymbol{u}(x)||v(x;h)|^2
\leq \frac{\kappa}{2} \frac{D[\delta_h\theta](x)}{(\xi^2+|h|^2)^\alpha}
+ \frac{\kappa}{8c_2K_\infty(\xi^2+|h|^2)^{\frac{1-\alpha}{2}}} |v(x;h)|^3,
\end{align*}
which concludes the proof.
\end{proof}
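Note that the further restriction on $\alpha$ imposed in the last step of the proof is automatically satisfied upon enlarging the universal constant $c_3$ in \eqref{eq:alpha}, namely by requiring
$$
c_3\geq \max\{64,\,8c\,c_2\},
$$
which affects neither the statement nor the conclusion of the lemma.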
We now proceed with the last step in the proof of Theorem \ref{thm:Calphaest}, which consists
of a uniform $C^\alpha$ estimate, where the exponent $\alpha$ is given by \eqref{eq:alpha}.
\subsection{Locally uniform H\"older estimates}
From the global bound \eqref{eq:globalLinf2}, inequality \eqref{eq:ineq2}, and the estimate \eqref{eq:est3},
it follows that, for $\alpha$ complying with \eqref{eq:alpha}, the function $v^2$ satisfies
\begin{equation*}
L v^2+\frac{\kappa|v|^3}{8c_2K_\infty(\xi^2+|h|^2)^{\frac{1-\alpha}{2}}}
\leq c\left[K_\infty^2+\kappa^{1/2} \|f\|_{L^\infty}^{3/2}K_\infty^{1/2}\right].
\end{equation*}
Taking into account that $\xi^2+|h|^2\leq 1 + {\rm diam}({\mathbb T}^2)^2 = 2$ for all $h \in {\mathbb T}^2$, and that $\|f\|_{L^\infty}\leq c_0\kappa K_{\infty}$, we arrive at
\begin{equation}\label{eq:ineq3}
L v^2+\frac{\kappa|v|^3}{16c_2K_\infty}
\leq cK_\infty^2,
\end{equation}
which holds pointwise for $(x,h) \in {\mathbb T}^2 \times {\mathbb T}^2$ and $t\geq 0$.
In the next lemma we show that the above inequality gives uniform control on the $C^\alpha$ seminorm
of the solution.
\begin{lemma}\label{lem:Calpha}
Assume that $\theta_0\in L^\infty$, and fix $\alpha$ as in \eqref{eq:alpha}. Then the solution to \eqref{eq:SQG}
with initial datum $\theta_0$ is $\alpha$-H\"older continuous for all $t\geq t_\alpha$, with $t_\alpha$ as in \eqref{eq:regtime}. Specifically,
\begin{equation*}
[\theta(t)]_{C^\alpha}\leq c \left[\|\theta_0\|_{L^\infty}+\frac{1}{c_0\kappa}\|f\|_{L^\infty}\right], \qquad \forall t\geq t_\alpha= \frac{3}{2(1-\alpha)}.
\end{equation*}
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lem:Calpha}]
Thanks to \eqref{eq:ineq3}, the function
$$
\psi(t)=\|v(t)\|_{L^\infty_{x,h}}^2
$$
satisfies the differential inequality
\begin{equation}\label{eq:diffineq}
\frac{{\rm d}}{{\rm d} t} \psi+\frac{\kappa}{16c_2K_\infty} \psi^{3/2}\leq cK_\infty^2.
\end{equation}
This can be justified as follows: $v^2$ is a bounded continuous function of $x$ and $h$, so that we
can evaluate \eqref{eq:ineq3} at a point
$(\bar x, \bar h) = (\bar x(t), \bar h(t)) \in {\mathbb T}^2\times {\mathbb T}^2$ at which $v^2(t)$ attains its maximum value.
Since at this point we have ${\partial}_hv^2={\partial}_xv^2=0$ and $\Lambda v^2\geq 0$, the inequality \eqref{eq:diffineq} holds in view of the Rademacher theorem (see
\cites{CC04,CTV13,CZV14} for details).
Moreover, by the very definition of $v$,
$$
\psi(0)\leq\frac{4\|\theta_0\|^2_{L^\infty}}{\xi(0)^{2\alpha}}=4\|\theta_0\|^2_{L^\infty}\leq 4K_\infty^2.
$$
From a standard comparison for ODEs it immediately follows that
\begin{equation}\label{eq:bddpsi}
\psi(t)\leq cK^2_\infty, \qquad \forall t\geq 0,
\end{equation}
for some sufficiently large constant $c>0$.
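Indeed, for any $\gamma,R>0$, a nonnegative absolutely continuous function satisfying $\dot\psi+\gamma\psi^{3/2}\leq R$ obeys
$$
\psi(t)\leq \max\left\{\psi(0),\left(\frac{R}{\gamma}\right)^{2/3}\right\},\qquad \forall t\geq 0,
$$
since $\psi$ is strictly decreasing whenever $\psi>(R/\gamma)^{2/3}$; here $\gamma=\frac{\kappa}{16c_2K_\infty}$, $R=cK_\infty^2$, and $\psi(0)\leq 4K_\infty^2$.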
With \eqref{eq:bddpsi} at hand, we have thus proven that
$$
[\theta(t)]_{C^\alpha}^2=\psi(t) \leq c K_\infty^2, \qquad \forall t\geq t_\alpha,
$$
where $t_\alpha$ is given by \eqref{eq:regtime}, thereby concluding the proof.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:Calphaest}]
The bound \eqref{eq:Calpha}
follows from estimate \eqref{eq:globalLinf2} for the $L^\infty$ norm and the bound of Lemma~\ref{lem:Calpha} for the H\"older seminorm
\begin{align*}
\|\theta(t)\|_{C^\alpha}&=\|\theta(t)\|_{L^\infty}+[\theta(t)]_{C^\alpha} \leq K_\infty+[\theta(t)]_{C^\alpha} \leq c K_\infty
\qquad \forall t\geq t_\alpha,
\end{align*}
and a sufficiently large constant $c>0$.
\end{proof}
\begin{remark}
The quantitative regularization estimate at time $t_\alpha$ from $L^\infty$ to $C^\alpha$ is given by the ODE \eqref{eq:ODE}.
More precisely, $t_\alpha$ is determined by the initial value $\xi(0)$, conveniently chosen to be $1$ in the proof above. If instead we let $\xi(0)=\xi_0>0$,
then
\begin{equation}\label{eq:xi2}
\xi(t)=\begin{cases}
\displaystyle \left[\xi_0^{\frac{2(1-\alpha)}{3}}-\frac23(1-\alpha) t\right]^{\frac{3}{2(1-\alpha)}}, \quad &\text{if } t\in [0,t_\alpha],\\ \\
0,\quad &\text{if } t \in (t_\alpha,\infty),
\end{cases}
\end{equation}
and
\begin{equation}\label{eq:regtime2}
t_\alpha=\frac{3 }{2(1-\alpha)}\xi_0^{\frac{2(1-\alpha)}{3}}.
\end{equation}
In particular, $t_\alpha$ can be made arbitrarily small by a suitable small choice of $\xi_0$. This observation,
together with Remark~\ref{rmk:L2toLinf}, recovers the result of~\cite{CV10a} and shows that solutions to \emph{forced}
\eqref{eq:SQG} regularize instantaneously from
$L^2$ to $C^\alpha$.
\end{remark}
\section{The absorbing set in \texorpdfstring{$H^1$}{H 1}}
With Theorem \ref{thm:Calpha} at hand, it is now possible to ensure the existence of a bounded
absorbing set in $H^1$.
\begin{theorem}\label{thm:H1abs}
There exists $\alpha=\alpha(\|f\|_{L^\infty},\kappa)\in (0,1/4]$
and a constant $R_1=R_1(\|f\|_{L^\infty\cap H^1}, \kappa)\geq 1$
such that the set
\begin{equation*}
B_1=\left\{\varphi\in C^\alpha\cap H^1: \|\varphi\|^2_{H^1}+\|\varphi\|^2_{C^\alpha}
\leq R_1^2\right\}
\end{equation*}
is an absorbing set for $S(t)$. Moreover,
\begin{equation}\label{eq:H1unif}
\sup_{t\geq 0}\sup_{\theta_0\in B_1}\left[\|S(t)\theta_0\|^2_{H^1}+\|S(t)\theta_0\|^2_{C^\alpha}+\int_t^{t+1}\|S(\tau)\theta_0\|^2_{H^{3/2}}{\rm d} \tau\right]\leq
2 R_1^2.
\end{equation}
The expression for $R_1$ can be computed explicitly from \eqref{eq:Calphaunif} and \eqref{eq:K1} below.
\end{theorem}
Since, in establishing the existence of an $H^1$-absorbing ball, the dynamics can be restricted to the $C^\alpha$-absorbing ball of Theorem \ref{thm:Calpha},
in order to prove Theorem~\ref{thm:H1abs} it is enough to establish an a priori estimate for initial data that are H\"older continuous.
\begin{lemma}\label{lem:H1absest}
Assume that $\theta_0\in H^1\cap C^\alpha$. Then
\begin{equation}\label{eq:H1exp}
\|\theta(t)\|^2_{H^1}\leq \|\theta_0\|^2_{H^1}{\rm e}^{-\frac{c_0\kappa}{4} t}+ K_1,
\end{equation}
where $K_1=K_1(\|f\|_{L^\infty\cap H^1}, \kappa, \|\theta_0\|_{C^\alpha})\geq 1$ is given in \eqref{eq:K1} below.
Moreover, for every $t\geq 0$ we have
\begin{equation}\label{eq:H32int}
\int_t^{t+1}\|\theta(\tau)\|^2_{H^{3/2}}{\rm d} \tau\leq \frac{c}{\kappa}\left[\|\theta_0\|^2_{H^1}+ K_1\right].
\end{equation}
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lem:H1absest}]
The proof closely follows the lines of \cite{CTV13}, and thus we omit many details.
We apply $\nabla$ to \eqref{eq:SQG} and take the inner product with $\nabla \theta$, to obtain
\begin{equation}\label{eq:gradeq}
({\partial}_t+\boldsymbol{u}\cdot \nabla+\kappa\Lambda)|\nabla\theta|^2+\kappa D[\nabla \theta]=-2{\partial}_\ell\boldsymbol{u}_j{\partial}_j\theta{\partial}_\ell\theta+2\nabla f\cdot \nabla \theta,
\end{equation}
pointwise in $x$, where, as before,
\begin{equation*}
D[\nabla \theta](x)= c \int_{\mathbb{R}^2} \frac{\big|\nabla \theta(x)-\nabla \theta(x+y)\big|^2}{|y|^{3}}{\rm d} y.
\end{equation*}
From \eqref{eq:propag}, we also know that
\begin{equation*}
\|\theta(t)\|_{C^\alpha}\leq M:=
c \left[\|\theta_0\|_{C^\alpha}+\frac{1}{c_0\kappa}\|f\|_{L^\infty}\right], \qquad \forall t\geq0.
\end{equation*}
Thanks to \cite{CV12}*{Theorem 2.2}, we then deduce the lower bound
\begin{equation}\label{eq:nonbdd}
D[\nabla \theta](x,t)\geq \frac{|\nabla \theta(x,t)|^{\frac{3-\alpha}{1-\alpha}}}{c_4 M^\frac{1}{1-\alpha}},
\end{equation}
for some positive constant $c_4$.
Arguing as in Lemma \ref{lem:rieszbdd}, for every $r>0$ we obtain the upper bound
$$
|\nabla\boldsymbol{u}(x,t)|\leq c \left[r^{1/2} \big[D[\nabla\theta](x,t)\big]^{1/2} +\frac{M}{r} \right].
$$
Choosing $r=\kappa^{1/2}M^{1/2}|\nabla\theta(x,t)|^{-1}$ and using the Cauchy-Schwarz inequality,
we then infer that
$$
|\nabla\boldsymbol{u}(x,t)||\nabla\theta(x,t)|^2\leq \frac\kappa2 D[\nabla\theta](x,t)+ \frac{c}{\kappa^{1/2}}M^{1/2}|\nabla\theta(x,t)|^3.
$$
From \eqref{eq:gradeq}, we have
\begin{align*}
({\partial}_t+\boldsymbol{u}\cdot \nabla+\kappa\Lambda)|\nabla\theta|^2+\frac\kappa2 D[\nabla \theta]
&\leq \frac{c}{\kappa^{1/2}}M^{1/2}|\nabla\theta(x,t)|^3+2|\nabla f||\nabla \theta| \notag\\
& \leq \frac\kappa4 \frac{ |\nabla \theta(x,t)|^{\frac{3-\alpha}{1-\alpha}}}{ c_4 M^\frac{1}{1-\alpha}} +
\left[\frac{c M}{\kappa}\right]^{\frac1{4\alpha}}+2|\nabla f||\nabla \theta|,
\end{align*}
so that together with \eqref{eq:nonbdd} we arrive at
\begin{equation}\label{eq:gradeq2}
({\partial}_t+\boldsymbol{u}\cdot \nabla+\kappa\Lambda)|\nabla\theta|^2+\frac\kappa4 D[\nabla \theta]\leq \left[\frac{c M}{\kappa}\right]^{\frac1{4\alpha}}+2|\nabla f||\nabla \theta|.
\end{equation}
Integrating over ${\mathbb T}^2$ and using the identity
$$
\frac12\int_{{\mathbb T}^2} D[\nabla \varphi](x){\rm d} x=\int_{{\mathbb T}^2} \nabla\varphi(x)\cdot \Lambda\nabla\varphi(x){\rm d} x=\|\varphi\|_{H^{3/2}}^2,
$$
we obtain the differential inequality
\begin{equation*}
{\frac{\dd}{\dd t}} \|\theta\|_{H^1}^2+\frac{\kappa}{2}\|\theta\|^2_{H^{3/2}}\leq \left[\frac{c M}{\kappa}\right]^{\frac1{4\alpha}}+2\| f\|_{H^1}\| \theta\|_{H^1}.
\end{equation*}
From the Poincar\'e inequality, we then have
\begin{equation}\label{eq:H1diff}
{\frac{\dd}{\dd t}} \|\theta\|_{H^1}^2+\frac{\kappa}{4}\|\theta\|^2_{H^{3/2}}\leq \left[\frac{c M}{\kappa}\right]^{\frac1{4\alpha}}+\frac{4}{c_0\kappa}\| f\|_{H^1}^2.
\end{equation}
Estimate \eqref{eq:H1exp} then follows from the above via the Poincar\'e inequality and the standard Gronwall lemma,
provided we set
\begin{equation}\label{eq:K1}
K_1:=\frac{4}{c_0\kappa}\left[\left(\frac{c M}{\kappa}\right)^{\frac1{4\alpha}}+\frac{4}{c_0\kappa}\| f\|_{H^1}^2
\right].
\end{equation}
By integrating \eqref{eq:H1diff} on $(t,t+1)$ and applying \eqref{eq:H1exp}, we also recover \eqref{eq:H32int}.
\end{proof}
\section{The global attractor}\label{sec:globattr}
Once the existence of an $H^1$-bounded absorbing set for $S(t)$ is established, we aim to prove
the existence of the global attractor by improving the regularity of the absorbing set to $H^{3/2}$
(see Theorem \ref{thm:H32abs} below).
Following the general theory recently developed in \cite{CCP12}, this automatically implies the
existence of a minimal compact attracting set for $S(t)$ which, however, might not be invariant,
due to the possible lack of continuity of $S(t)$, for fixed $t>0$, as a map acting on $H^1$ (see
\cites{CCP12,CZK15} for examples of non-invariant attractors).
Full invariance will be recovered in a subsequent step (Section \ref{sub:invariance}), by exploiting
the $H^{3/2}$-regularity of the absorbing set and a continuity estimate proven in
\cite{CTV13}*{Proposition 5.5}.
\subsection{Compact absorbing sets}
The existence and regularity of the attractor in Theorem \ref{thm:globalattra} follow from the existence of an
absorbing set bounded in $H^{3/2}$.
\begin{theorem}\label{thm:H32abs}
There exists a constant $R_2=R_2(\|f\|_{L^\infty\cap H^1}, \kappa)\geq 1$
such that the set
\begin{equation*}
B_2=\left\{\varphi\in H^{3/2}: \|\varphi\|_{H^{3/2}}
\leq R_2\right\}
\end{equation*}
is an absorbing set for $S(t)$. Moreover,
\begin{equation}\label{eq:H32unif}
\sup_{t\geq 0}\sup_{\theta_0\in B_2}\|S(t)\theta_0\|_{H^{3/2}}\leq
2 R_2.
\end{equation}
\end{theorem}
\begin{proof}[Proof of Theorem~\ref{thm:H32abs}]
As usual, it is enough to show that $B_2$ absorbs $B_1$, the $H^1$ absorbing set obtained in Theorem~\ref{thm:H1abs}. If $\theta_0\in B_1$, then
\eqref{eq:H1unif} implies that
\begin{equation}\label{eq:H32int2}
\sup_{t\geq 0}\int_t^{t+1}\|S(\tau)\theta_0\|^2_{H^{3/2}}{\rm d} \tau\leq 2 R^2_1.
\end{equation}
By testing \eqref{eq:SQG} with $\Lambda^3\theta$ and using standard arguments, we deduce that
\begin{align*}
{\frac{\dd}{\dd t}} \|\theta\|^2_{H^{3/2}}+\kappa\|\theta\|^2_{H^2}\leq \frac1\kappa\|f\|^2_{H^1}+
2\left|\int_{{\mathbb T}^2}\left[\Lambda^{3/2}(\boldsymbol{u}\cdot\nabla\theta)-\boldsymbol{u}\cdot \nabla\Lambda^{3/2}\theta\right]\Lambda^{3/2}\theta {\rm d} x\right|.
\end{align*}
By means of the commutator estimate
$$
\|\Lambda^{3/2} (\varphi \psi)-\varphi\Lambda^{3/2}\psi\|_{L^2}
\leq c\left[\|\nabla \varphi\|_{L^4}\|\Lambda^{1/2}\psi\|_{L^4}+\|\Lambda^{3/2}\varphi\|_{L^4}\|\psi\|_{L^4}\right],
$$
and the Sobolev embedding $H^{1/2}\subset L^4$, we therefore have
\begin{align*}
{\frac{\dd}{\dd t}} \|\theta\|^2_{H^{3/2}}+\kappa\|\theta\|^2_{H^2}
&\leq \frac1\kappa\|f\|^2_{H^1}+
c\|\theta\|_{H^{3/2}}\left[\|\Lambda \boldsymbol{u}\|_{L^4}\|\Lambda^{3/2}\theta\|_{L^4}+\|\Lambda^{3/2}\boldsymbol{u}\|_{L^4}\|\Lambda\theta\|_{L^4}\right]\\
&\leq \frac1\kappa\|f\|^2_{H^1}+c\|\theta\|_{H^{3/2}}^2\|\theta\|_{H^2}\\
&\leq \frac1\kappa\|f\|^2_{H^1}+\frac{c}{\kappa}\|\theta\|_{H^{3/2}}^4+\frac\kappa2\|\theta\|^2_{H^2}.
\end{align*}
Hence,
\begin{align*}
{\frac{\dd}{\dd t}} \|\theta\|^2_{H^{3/2}}+\frac\kappa2\|\theta\|^2_{H^2}\leq \frac1\kappa\|f\|^2_{H^1}+\frac{c}{\kappa}\|\theta\|_{H^{3/2}}^4.
\end{align*}
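We recall the uniform Gronwall lemma in the form needed here: if $y,g,h\geq 0$ satisfy $y'\leq g y+h$ on $(0,\infty)$ and
$$
\int_t^{t+1}g(\tau){\rm d}\tau\leq a_1,\qquad \int_t^{t+1}h(\tau){\rm d}\tau\leq a_2,\qquad \int_t^{t+1}y(\tau){\rm d}\tau\leq a_3,
$$
for all $t\geq 0$, then $y(t+1)\leq (a_2+a_3){\rm e}^{a_1}$ for all $t\geq 0$. In our case, $y=\|\theta\|^2_{H^{3/2}}$, $g=\frac{c}{\kappa}\|\theta\|^2_{H^{3/2}}$, and $h=\frac1\kappa\|f\|^2_{H^1}$.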
Thanks to the local integrability \eqref{eq:H32int2} and the above differential
inequality, the uniform Gronwall lemma implies
$$
\|S(t)\theta_0\|^2_{H^{3/2}}\leq \left[2R^2_1+ \frac1\kappa\|f\|^2_{H^1}\right]{\rm e}^{\frac{c}{\kappa}R^2_1}, \qquad \forall t\geq 1.
$$
Thus, setting
$$
R_2^2:=\left[2R^2_1+ \frac1\kappa\|f\|^2_{H^1}\right]{\rm e}^{\frac{c}{\kappa}R^2_1},
$$
we obtain that
$$
S(t)B_1\subset B_2, \qquad \forall t\geq 1,
$$
as we wanted.
\end{proof}
We summarize below the consequences of the above result,
as they follow from \cite{CCP12}*{Proposition 8}.
\begin{corollary}\label{cor:attr}
The dynamical system $S(t)$ generated by \eqref{eq:SQG} on $H^1$ possesses a unique
global attractor ${\mathcal A}$ with the
following properties:
\begin{itemize}
\item ${\mathcal A}\subset H^{3/2}$ and is the $\omega$-limit set of $B_2$, namely,
$$
{\mathcal A}=\omega(B_2)= \bigcap_{t\geq 0}\overline{\bigcup_{\tau\geq t} S(\tau)B_2}.
$$
\item For every bounded set $B\subset H^1$,
$$
\lim_{t\to\infty}{\rm dist}(S(t)B,{\mathcal A})=0,
$$
where ${\rm dist}$ stands for the usual Hausdorff semi-distance between sets given by the
$H^1$ norm.
\item ${\mathcal A}$ is minimal in the class of $H^1$-closed attracting sets.
\end{itemize}
\end{corollary}
\subsection{Invariance of the attractor}\label{sub:invariance}
To conclude the proof of Theorem \ref{thm:globalattra}, we establish the invariance
of the attractor obtained in Corollary \ref{cor:attr}. To this end, we recall the following continuity
result for $S(t)$.
\begin{proposition}[\cite{CTV13}*{Proposition 5.5}]
For every $t>0$, $S(t):B_2\to H^1$ is Lipschitz-continuous in the topology of $H^1$.
\end{proposition}
In other words, the restriction of $S(t)$ to the regular absorbing set $B_2\subset H^{3/2}$
is a continuous map. It turns out that this suffices to complete the proof of Theorem \ref{thm:globalattra}.
\begin{proposition}\label{prop:attr}
The global attractor ${\mathcal A}$ of $S(t)$ is fully invariant, namely
$$
S(t){\mathcal A}={\mathcal A}, \qquad \forall t\geq0.
$$
Moreover, ${\mathcal A}$ is maximal in the class of $H^1$-bounded invariant sets.
\end{proposition}
\begin{proof}[Proof of Proposition~\ref{prop:attr}]
The argument is classical; we recall the main steps.
Since the global attractor is
the $\omega$-limit set of $B_2$, we have that
$$
{\mathcal A}=\omega(B_2)= \big\{\eta\in H^1: S(t_n)\eta_{n}\to \eta \text{ for some } \eta_n\in B_2,\ t_n\to \infty \big\}.
$$
According to \cite{CCP12}*{Proposition 13}, full invariance of ${\mathcal A}$ follows if one can
show that ${\mathcal A}\subset S(t_0){\mathcal A}$ for some $t_0>0$.
Since $B_2$ is absorbing, we may fix $t_0>0$ such that $S(t)B_2\subset B_2$ for all $t\geq t_0$.
Let $\eta\in \omega(B_2)$. Then
there exist $t_n\to\infty$ and $\eta_n\in B_2$ such that
$$
S(t_n)\eta_n\to \eta \qquad \text{as } n\to \infty, \text{ strongly in } H^1.
$$
We may suppose $t_n\geq 2t_0$ for every $n\in \mathbb{N}$.
Since $\omega(B_2)$ is attracting,
we get in particular
$$
\lim_{n\to\infty}{\rm dist} (S(t_n-t_0)B_2,\omega(B_2))=0,
$$
which in turn implies
$$
\lim_{n\to\infty}\left[\inf_{\xi\in \omega(B_2)} \|S(t_n-t_0)\eta_n-\xi\|_{H^1}\right]=0.
$$
So there is a sequence $\xi_n\in \omega(B_2)$ such that
$$
\lim_{n\to\infty}\left[\|S(t_n-t_0)\eta_n-\xi_n\|_{H^1}\right]=0.
$$
But $\omega(B_2)$ is compact, thus, up to a subsequence, $\xi_n\to\xi\in \omega(B_2)$, which yields at
once
$$
S(t_n-t_0)\eta_n\to\xi.
$$
Note that $S(t_n-t_0)\eta_n\in B_2$ since $t_n\geq 2t_0$.
Using the continuity of $S(t_0)$ on $B_2$, we deduce that
$$
S(t_0)S(t_n-t_0)\eta_n\to S(t_0)\xi.
$$
On the other hand,
$$
S(t_0)S(t_n-t_0)\eta_n=S(t_n)\eta_n\to\eta.
$$
We conclude that $\eta=S(t_0)\xi$, i.e., $\eta\in S(t_0)\omega(B_2)$. Hence,
${\mathcal A}\subset S(t_0){\mathcal A}$, and full invariance follows. Once this is established,
the maximality with respect to invariance is classical.
\end{proof}
\section*{Acknowledgements}
The work of PC was supported in part by the NSF grants DMS-1209394 and DMS-1265132,
MCZ was supported in part by an AMS-Simons Travel Award,
while the work of VV was supported in part by the NSF grants DMS-1348193 and DMS-1514771, and an Alfred P. Sloan Research Fellowship.
\begin{bibdiv}
\begin{biblist}
\bib{CV10a}{article}{
author={Caffarelli, L.A.},
author={Vasseur, A.},
title={Drift diffusion equations with fractional diffusion and the
quasi-geostrophic equation},
journal={Ann. of Math. (2)},
volume={171},
date={2010},
pages={1903--1930},
}
\bib{CT03}{article}{
author={Cao, C.},
author={Titi, E.S.},
title={Global well-posedness and finite-dimensional global attractor for
a 3-D planetary geostrophic viscous model},
journal={Comm. Pure Appl. Math.},
volume={56},
date={2003},
pages={198--233},
}
\end{biblist}
\end{bibdiv}
\end{document}
\section{Computation as maximization in information space}
The early history of constraint processing
is written in three MIT theses:
Sutherland's, Waltz's, and Steele's \cite{sth63,wltz72,stl80}.
Already in this small selection one can discern
two radically different approaches.
Sutherland and Steele use relaxation:
starting form a guessed assignment of values to variables,
constraints are successively used
to adjust variables in such a way
as to better satisfy the constraint under consideration.
These authors followed an old idea brought into prominence
under the name of relaxation by Southwell \cite{sthwll40}.
Waltz adopted a radically different approach
(and was, to our knowledge, the first to do so).
He associated with each of the problem's variables
a \emph{domain}; that is, the set of all values
that are not \emph{a priori} impossible.
Each constraint is then used to eliminate values from
the domains of one or more variables affected by the constraint
that are incompatible with that constraint.
In this paper we are concerned with the latter method,
which we call the \emph{domain reduction} method.
The attraction of domain reduction
is its completeness for finite domains:
if a solution exists, then it will be found.
This in contrast with relaxation,
which can flounder forever\footnote{
But, as one may expect, domain reduction is no cure-all.
For some problems, relaxation quickly finds a solution,
and domain reduction requires an infeasible amount of time.
The $n$-queens problem for large $n$ is an example.
Van Hentenryck and Michel \cite{vhmchl05}, page 89,
mention $n = 10,000$ as a routine example for relaxation
in combination with their search technique.
}.
In this paper we present domain reduction
as an example of the view of
\emph{computation as monotonic gain of information}.
This view was pioneered by Dana Scott,
who was the first to make mathematical sense
\cite{scottPRG70} of a recursively defined function $f$.
He did this by associating with the definition of $f$
a sequence $a$ of partial functions.
If $x$ is such that $f(x)$ requires a recursion depth of at most
$n$, then $a_n(x)$ is defined and equal to $f(x)$;
otherwise $a_n(x)$ is undefined.
Thus $a$ is a sequence of partial functions
in which each function agrees with the previous one,
but is ``more defined''.
In general, if two partial functions $g$ and $h$ of the same
type are such that $h$ is defined wherever $g$ is
and such that they have the same value when both are defined,
then Scott proposed to regard $g$ as an approximation to $h$
and noted that this notion of approximation is a partial order
in the set of partial functions of the same type.
Moreover Scott proposed to transfer the information concept
from random variables, as it was in Shannon's information theory,
to partial functions,
noting that a partial function can be regarded as containing
more information than partial functions approximating it.
This approach to the semantics of recursive definitions
can be summarized by saying that the defined function
can be regarded as the limit of a sequence of approximations,
each containing more information about that limit
than the previous one.
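Scott's order on partial functions is concrete enough to compute with.
In the sketch below (Python; our illustration, with names of our own choosing),
a partial function is a dictionary,
\emph{approximates(g, h)} expresses $g \sqsubseteq h$,
and \emph{approximant(n)} is the depth-bounded approximant $a_n$
of a recursively defined factorial.

```python
def approximates(g, h):
    """g is approximated by h: h is defined wherever g is, and they agree there."""
    return all(k in h and h[k] == g[k] for k in g)

def approximant(n, domain=range(10)):
    """a_n: the partial function agreeing with factorial on every argument
    whose evaluation needs recursion depth at most n."""
    def f(x, depth):
        if depth == 0:
            return None              # "undefined": depth budget exhausted
        if x == 0:
            return 1
        sub = f(x - 1, depth - 1)
        return None if sub is None else x * sub
    return {x: f(x, n) for x in domain if f(x, n) is not None}

a1, a2, a3 = approximant(1), approximant(2), approximant(3)
# a1 = {0: 1}, a2 = {0: 1, 1: 1}, a3 = {0: 1, 1: 1, 2: 2}
assert approximates(a1, a2) and approximates(a2, a3)
```

Each $a_n$ is more defined than, and compatible with, its predecessor,
which is exactly the chain
$a_1 \sqsubseteq a_2 \sqsubseteq a_3 \sqsubseteq \cdots$ described above.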
Scott was aware that it might seem somewhat far-fetched to give
such an interpretation to the notion of ``information''.
As a justification Scott \cite{scott72} gave another example of a
set partially ordered by information:
that of numerical intervals.
Although this certainly strengthened the case,
this suggestion has not, as far as we know, been followed up.
In this paper we do so,
motivated by the opportunities for deeper understanding
of constraint solving.
In numerical applications
the view of computation as monotonic gain of information
is more than a theoretically interesting insight:
it adds an essential capability.
Suppose a conventional numerical computation is stopped
after 1,000 iterations and yields 1.912837465
and that it yields 1.912877134
when allowed to run for 10,000 iterations,
what do we know about the improvement obtained, if any?
If results, intermediate and final,
were expressed as intervals we would, say,
have [1.911, 1.938]\footnote{
Note the smaller number of decimals: with intervals
it becomes clear that additional decimals would be meaningless.
} after 1,000 iterations
and perhaps [1.9126, 1.9283]\footnote{
The smaller interval warrants another decimal.
} after 10,000 iterations.
Here we see that we \emph{know more} about the unknown solution
as a result of the additional computational work.
Rephrasing ``knowing more'' as ``gain in information''
suggests that the effect of iteration in interval arithmetic
can be described as ``monotonic gain of information''.
The important qualification ``monotonic'' is there
because in interval arithmetic we never need to settle for less information
as a result of additional computational work,
though we may fail to make a gain.
Moreover, such a stalling of progress
is a useful criterion for halting the iteration.
Because of the special importance of solving constraint satisfaction
problems over the reals by means of floating-point arithmetic,
we choose our example problem from this area.
Section~\ref{sec:IAvsIC} gives the needed review of interval
methods; section~\ref{sec:example} describes the example.
The new view of domain reduction as monotonic information gain
is used in Section~\ref{sec:solving}
to develop the method from first principles.
This suggests regarding the set of constraints in a constraint satisfaction
problem as a formula in predicate logic with a fixed interpretation
of predicate symbols.
The standard semantics only assigns meanings to closed formulas,
whereas here we have a formula with free variables.
Accordingly, in Section~\ref{sec:notTerm} we develop the
required extension of the semantics of predicate logic.
This requires a novel treatment of relations,
which is also developed in that section.
\section{Related work}
Following Mackworth's AC-3 algorithm \cite{mckwrth77}
there are many other papers concerned with converging
fair iterations \cite{aptEssence,bnldr97,vhmcbn98,vhsdl98,srrnpn91}.
For historical references
we refer to the textbooks \cite{dechter,aptBook2003}.
We address the connections with the work of Saraswat \emph{et al.}
\cite{srrnpn91} in Section~\ref{sec:furthWrk}.
\section{Interval arithmetic and interval constraints}
\label{sec:IAvsIC}
To facilitate the use of information in computation
we do not use interval arithmetic directly,
but indirectly via a constraint satisfaction problem
(CSP).
Such problems are solved by associating with each
unknown a \emph{set} of possible values
instead of the usual single value.
This is especially appropriate for real-valued
unknowns.
In combination with the use of floating-point arithmetic,
the sets of possible values
take the form of intervals
with floating-point numbers as bounds.
This special case of CSP solving is called
\emph{interval constraints} \cite{clr87,aptEssence}.
We introduce interval constraints by means of an example.
In interval arithmetic the rule for adding intervals is
$$
[a,b]+[c,d] = \seT{x+y}{x \in [a,b] \wedge y \in [c,d]}
$$
so that, e.g., $[0,2]+[0,2] = [0,4]$.
The analogous operation in interval constraints
starts by defining the constraint $\ensuremath{\mbox{{\it sum}}}(x,y,z)$
which holds between the reals
$x$, $y$, and $z$ iff $x+y=z$.
In other words, the formula $\ensuremath{\mbox{{\it sum}}}(x,y,z)$ is true
whenever $x+y=z$.
This leads to the following inference
\begin{center}
\begin{tabular}{c}
$\ensuremath{\mbox{{\it sum}}}(x,y,z)$ \\
$x \in [0,2] \wedge y \in [0,2] \wedge
z \in [-\infty,+\infty]$ \\
\hline
$x \in [0,2] \wedge y \in [0,2] \wedge
z \in [0,4]$ \\
\end{tabular}
\end{center}
We use here the conventional format for inference:
the premises above the horizontal line;
the conclusion below.
The above inference coincides,
in this special case, with interval arithmetic.
Only the interval for $z$ is narrowed.
In interval constraints we may have
\emph{a priori} constraints on \emph{all} variables,
as in
\begin{center}
\begin{tabular}{c}
$\ensuremath{\mbox{{\it sum}}}(x,y,z)$ \\
$x \in [0,2] \wedge y \in [0,2] \wedge
z \in [3,5]$ \\
\hline
$x \in [1,2] \wedge y \in [1,2] \wedge
z \in [3,4]$ \\
\end{tabular}
\end{center}
Here the intervals for all three variables are narrowed.
As a result, the effect of the operation
can no longer be exclusively characterized as an addition
or as its inverse:
the effect is a mixture of several operations.
We can formulate the effect algebraically as
applying an operator,
the \emph{contraction operator}
of the constraint $\ensuremath{\mbox{{\it sum}}}$,
that maps triples of intervals to triples of intervals,
in this case as
\begin{equation}\label{eq:shrinc}
([0,2],[0,2],[3,5])
\mapsto ([1,2],[1,2],[3,4]).
\end{equation}
The righthand side of (\ref{eq:shrinc})
is the smallest triple (``box'') that can be inferred:
any box that is strictly smaller would exclude
points that are possible according to the given premises of the
inference.
Thus this box is the optimal solution
to the given constraint-satisfaction problem.
The optimal solution is obtained
by one addition and two subtractions of interval arithmetic
plus a few bound comparisons.
Similarly efficient algorithms exist for some other constraints,
such as product, integer power, trigonometric and logarithmic
functions.
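The recipe of one addition and two subtractions can be written out
directly. The following sketch (Python; an interval is a pair of floats,
and the names are ours) reproduces both inferences above, including the
mapping (\ref{eq:shrinc}); for brevity it does not guard the
$\infty - \infty$ cases that a robust implementation must handle.

```python
INF = float('inf')

def contract_sum(x, y, z):
    """Optimal contraction for x + y = z on intervals (lo, hi):
    one interval addition and two interval subtractions."""
    z = (max(z[0], x[0] + y[0]), min(z[1], x[1] + y[1]))  # z ∩ (x + y)
    x = (max(x[0], z[0] - y[1]), min(x[1], z[1] - y[0]))  # x ∩ (z - y)
    y = (max(y[0], z[0] - x[1]), min(y[1], z[1] - x[0]))  # y ∩ (z - x)
    return x, y, z

# Plain interval addition: only the interval for z is narrowed.
assert contract_sum((0, 2), (0, 2), (-INF, INF)) == ((0, 2), (0, 2), (0, 4))
# A priori bounds on all three variables: all three intervals are narrowed,
# reproducing ([0,2],[0,2],[3,5]) -> ([1,2],[1,2],[3,4]).
assert contract_sum((0, 2), (0, 2), (3, 5)) == ((1, 2), (1, 2), (3, 4))
```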
We may express the contraction operator
for the \emph{sum} constraint as a mapping
from a tuple $B$ of intervals
to the least such tuple containing
the intersection of $B$ and the constraint.
In general a CSP is a conjunction of many constraints.
After applying the contraction operator for each of these once,
it is often the case that another round of applications
yields further contractions in the intervals
for some of the variables.
As the contractions are implemented
in floating-point interval arithmetic
and are assured valid by outward rounding,
there is a limit and it is reached
after a finite number of rounds of contractions.
In each of the rounds it may happen that a constraint is found
that does not contain variables for which a bound has changed.
In such cases the contraction operator
for that constraint has no effect and can be skipped.
Algorithms have been developed
that perform such optimizations \cite{aptEssence}.
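One such optimization can be sketched as a worklist algorithm (Python;
our illustration, not a specific published algorithm): each contraction
operator reports which variables it narrowed, and a constraint is
re-queued only when the interval of one of its variables has changed.
The two chained \ensuremath{\mbox{{\it sum}}}\ constraints in the
demonstration are an illustrative choice of our own.

```python
from math import inf

def contract_sum(box, xi, yi, zi):
    """Contract box (dict: var -> (lo, hi)) w.r.t. box[xi] + box[yi] = box[zi];
    return the set of variables whose interval changed."""
    x, y, z = box[xi], box[yi], box[zi]
    z2 = (max(z[0], x[0] + y[0]), min(z[1], x[1] + y[1]))
    x2 = (max(x[0], z2[0] - y[1]), min(x[1], z2[1] - y[0]))
    y2 = (max(y[0], z2[0] - x2[1]), min(y[1], z2[1] - x2[0]))
    changed = {v for v, old, new in
               [(xi, x, x2), (yi, y, y2), (zi, z, z2)] if old != new}
    box[xi], box[yi], box[zi] = x2, y2, z2
    return changed

def propagate(box, constraints):
    """constraints: list of (scope, contractor).  A constraint is re-queued
    only when one of the variables in its scope has changed."""
    queue = list(range(len(constraints)))
    while queue:
        i = queue.pop(0)
        _, contractor = constraints[i]
        changed = contractor(box)
        for j, (scope_j, _) in enumerate(constraints):
            if j != i and j not in queue and changed & set(scope_j):
                queue.append(j)
    return box

box = {'x': (0, 2), 'y': (0, 2), 's': (-inf, inf), 'z': (0, 2), 't': (3, 4)}
constraints = [
    (('x', 'y', 's'), lambda b: contract_sum(b, 'x', 'y', 's')),  # x + y = s
    (('s', 'z', 't'), lambda b: contract_sum(b, 's', 'z', 't')),  # s + z = t
]
propagate(box, constraints)
# the second constraint narrows s, so the first one is applied again
assert box['s'] == (1, 4) and box['t'] == (3, 4)
```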
\section{An example of solving by interval constraints}
\label{sec:example}
Let us consider the problem of determining the intersection
points of a parabola and a circle.
For example, to solve the system
\begin{equation}\label{eq:original}
\begin{array}{rcl}
y &=& x^2 \\
x^2 + y^2 &=& 1
\end{array}
\end{equation}
with $x \in [0,1]$ and $y \in [0,1]$.
One can eliminate $y$ and solve instead
$x^4 + x^2 = 1$,
which has two real roots.
However, for the purpose of illustrating
solving by interval constraints,
we ignore this opportunity for simplification
and we numerically solve the original system
(\ref{eq:original}).
The method of interval constraints applies to a class
of constraints in the form of
equalities or inequalities between real-valued expressions.
The \ensuremath{\mbox{{\it sum}}}\ constraint in Section~\ref{sec:IAvsIC} is an example:
it takes the form of the equation $x+y=z$.
As we mentioned in that section, there is an efficient
implementation of the optimal contraction operator for it.
The second equation in (\ref{eq:original}) is not primitive;
it has to be transformed
to an equivalent set of primitive constraints.
In this example the primitive constraints
\ensuremath{\mbox{{\it sq}}}, \ensuremath{\mbox{{\it sum}}}, and \ensuremath{\mbox{{\it one}}}\ are needed.
The constraint $\ensuremath{\mbox{{\it sq}}}(u,v)$ is defined as $u^2 = v$,
$\ensuremath{\mbox{{\it sum}}}(u,v,w)$ is defined as $u+v=w$,
and $\ensuremath{\mbox{{\it one}}}(u)$ is defined as $u=1$.
In this way (\ref{eq:original}) becomes the
following set of constraints:
\begin{equation}\label{eq:constraints}
\{\ensuremath{\mbox{{\it sq}}}(x,y), \ensuremath{\mbox{{\it sq}}}(y,z), \ensuremath{\mbox{{\it sum}}}(y,z,u), \ensuremath{\mbox{{\it one}}}(u)\}.
\end{equation}
The unknowns $x$, $y$, $z$, and $u$ are real numbers.
The introduction of $z$ and $u$ is the result of reducing
the given constraints to primitive ones.
In more typical cases the given constraints are so complex
that the introduced variables greatly outnumber the original ones.
In the example it is given that
$x$, $y$, $z$, and $u$
satisfy the above constraints.
From the original problem statement we have in addition
that $x \in [0,1]$ and $y \in [0,1]$.
Of the auxiliary unknowns $z$ and $u$
we initially know nothing:
$z \in [-\infty,+\infty]$ and
$u \in [-\infty,+\infty]$.
In effect, we have transformed
(\ref{eq:original})
to the system
\begin{equation}\label{eq:deriv}
\begin{array}{rcl}
y &=& x^2 \\
z &=& y^2 \\
y + z &=& u \\
u &=& 1
\end{array}
\end{equation}
Instead of solving the original system
(\ref{eq:original})
we solve equivalently the constraints
(\ref{eq:constraints}).
This is done by repeatedly applying in arbitrary order
the contraction operators until
there is no change in any of the intervals
associated with the unknowns.
Applying the contraction operators of $\ensuremath{\mbox{{\it sq}}}(y,z)$ and $\ensuremath{\mbox{{\it one}}}(u)$ results
in a drastic narrowing of the intervals for $z$ and $u$:
they change from
$[-\infty,+\infty]$ to $[0,1]$ for $z$ and to $[1,1]$ for $u$.
After this, none of the contraction operators
of the four constraints results in a change.
Therefore this is as far
as contraction operator application can take us.
To obtain more information about possibly existing solutions,
we split the CSP with interval $X = [0,1]$ for unknown $x$
into two CSPs that are identical
except for the intervals of $x$.
In the first CSP the interval for $x$ is the left half of $X$;
in the second CSP it is the right half.
Then we start another round of contraction operator applications
starting from one of the halves as initial box:
\begin{equation} \label{eq:box1}
x \in [0,\frac{1}{2}],
y \in [0,1],
z \in [0,1],
u \in [1,1].
\end{equation}
Applying the contraction operator
for $\ensuremath{\mbox{{\it sq}}}(x,y)$ results in $y \in [0,1/4]$.
Applying the contraction operator
for $\ensuremath{\mbox{{\it sq}}}(y,z)$ results in $z \in [0,1/16]$.
Applying the contraction operator
for $\ensuremath{\mbox{{\it sum}}}(y,z,u)$ causes the interval for $u$ to become empty:
the sum $y+z$ lies in $[0,\frac{5}{16}]$,
which does not intersect the current interval $[1,1]$ for $u$.
This proves that there is no solution in the initial box
(\ref{eq:box1}).
We now turn to the other half:
\begin{equation} \label{eq:box2}
x \in [\frac{1}{2},1],
y \in [0,1],
z \in [0,1],
u \in [1,1].
\end{equation}
Applying the contraction operator for $\ensuremath{\mbox{{\it sq}}}(x,y)$ results in
$y \in [\frac{1}{4}, 1]$.
Continuing in tabular form gives
\begin{center}
\begin{tabular}{c|c|c|c|c}
& \multicolumn{4}{c}{Interval} \\ \cline{2-5}
& $x$ & $y$ & $z$ & $u$ \\ \cline{2-5}
& $[0.5,1]$ & $[0,1]$ & $[0,1]$ & $[1,1]$ \\
\hline
Apply & & & & \\
\cline{1-1}
$\ensuremath{\mbox{{\it sq}}}(x,y)$ & & [$\frac{1}{4}$,1] & & \\
$\ensuremath{\mbox{{\it one}}}(u)$ & & & & $[1,1]$ \\
$\ensuremath{\mbox{{\it sum}}}(y,z,u)$ & & & [0,$\frac{3}{4}$] & \\
$\ensuremath{\mbox{{\it sq}}}(y,z)$ & & $[\frac{1}{4},\frac{1}{2}\surd 3]$
& $[\frac{1}{16},\frac{3}{4}]$ & \\
\end{tabular}
\end{center}
Now the intervals for $x$ and $y$ continue getting smaller
until the least floating-point box
that contains a solution has been reached:
the intervals for $x$ converge to
a small interval containing
$\surd (\frac{1}{2}(\surd 5 - 1))$,
while the intervals for $y$ converge to
a small interval containing
$\frac{1}{2} (\surd 5 -1 )$.
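The computation of this section can be reproduced in a few lines.
The sketch below (Python; our illustration) uses outward rounding via
\emph{math.nextafter}, so that floating-point error can never cut off a
solution; the contraction for \ensuremath{\mbox{{\it sq}}}\ assumes
nonnegative intervals, which suffices for this example. Starting from
the box (\ref{eq:box2}), the iteration converges to tight intervals
around $\surd(\frac{1}{2}(\surd 5 - 1))$ and $\frac{1}{2}(\surd 5 - 1)$.

```python
import math

INF = math.inf

def widen(lo, hi):
    # outward rounding: one ulp in each direction, so that rounding
    # errors can never exclude a genuine solution
    return math.nextafter(lo, -INF), math.nextafter(hi, INF)

def meet(a, b):
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    assert lo <= hi, "empty interval: no solution in this box"
    return lo, hi

def c_sq(x, y):                      # constraint x^2 = y, with x, y >= 0
    y = meet(y, widen(x[0] ** 2, x[1] ** 2))
    x = meet(x, widen(math.sqrt(y[0]), math.sqrt(y[1])))
    return x, y

def c_sum(x, y, z):                  # constraint x + y = z
    z = meet(z, widen(x[0] + y[0], x[1] + y[1]))
    x = meet(x, widen(z[0] - y[1], z[1] - y[0]))
    y = meet(y, widen(z[0] - x[1], z[1] - x[0]))
    return x, y, z

# the box (box2): x in [1/2,1], y in [0,1], z in [0,1], u in [1,1]
x, y, z, u = (0.5, 1.0), (0.0, 1.0), (0.0, 1.0), (1.0, 1.0)
for _ in range(1000):
    old = (x, y, z, u)
    x, y = c_sq(x, y)                # sq(x, y)
    y, z = c_sq(y, z)                # sq(y, z)
    y, z, u = c_sum(y, z, u)         # sum(y, z, u)
    u = meet(u, (1.0, 1.0))          # one(u)
    if (x, y, z, u) == old:         # fixed point: a full round changes nothing
        break

phi = (math.sqrt(5) - 1) / 2         # the solution: y* = (sqrt(5) - 1)/2
assert y[0] <= phi <= y[1] and y[1] - y[0] < 1e-9
assert x[0] <= math.sqrt(phi) <= x[1] and x[1] - x[0] < 1e-9
```

Running the same loop on the other half, the box (\ref{eq:box1}),
raises the empty-interval assertion in \emph{meet},
mirroring the proof above that that box contains no solution.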
\section{Notation and terminology for relations and constraints}
\label{sec:notTerm}
We take it that (\ref{eq:constraints})
is intuitively clear,
but how do we characterize mathematically
any solutions that such a CSP may have
and how do we characterize mathematically
an algorithm for obtaining such a solution?
Consider for example the constraints $\ensuremath{\mbox{{\it sq}}}(x,y)$ and $\ensuremath{\mbox{{\it sq}}}(y,z)$.
They clearly have something in common: $\ensuremath{\mbox{{\it sq}}}$,
which must be some kind of relation.
But the constraints are different from each other
(otherwise their conjunction could be simplified
by dropping either of them)
and also different from $\ensuremath{\mbox{{\it sq}}}$, whatever \emph{that} may be.
In this section we develop
a set-theoretic formulation of constraint-satisfaction problems
and illustrate it by the example in Section~\ref{sec:example}.
We find that such a formulation is facilitated
by a treatment of relations and operations on them
that is in the spirit of the conventional treatment,
but differs in details.
In particular, we need to clarify the difference
between relations and constraints
as well as the connection between these.
\subsection{Functions}
We denote by $S \to T$ the set of total functions
that are defined on $S$ and have values in $T$.
If $f \in (S \to T)$ we say that $f$
``has type'' $S \to T$.
If $S' \subseteq S$, then we define $f_{S'}$,
the \emph{restriction} of $f$ to $S'$
as the function in $S' \to T$
such that for all $x \in S'$
we have $f_{S'}(x) = f(x)$.
\subsection{Tuples}
As is the case conventionally,
our relations are sets of tuples of the same arity.
However, we need the possibility to index tuples
either by variables
or by the conventional indexes \set{0,1,2, \ldots}.
Hence we define a tuple
as an element of the function set $I \to T$,
where $I$ is an arbitrary set to serve as index set.
$I \to T$ is the \emph{type} of the tuple.
\emph{Example}
If $t$ is a tuple in $\set{x,y} \to \ensuremath{\mathcal{R}}$,
then we may have $t(x) = 1.1$ and $t(y) = 1.21$.
\emph{Example}
$t \in \ensuremath{\mathbf{3}} \to \set{a,b,c}$,
where $\ensuremath{\mathbf{3}} = \set{0,1,2}$
and $t(0) = b$, $t(1) = c$, and $t(2) = c$.
In cases like this, where the index set is an ordinal,
we use the compact notation $t = [b,c,c]$.
In general, we write \ord{n}\ for $\set{0,\ldots, n-1}$.
When a function is regarded as a tuple,
then the restriction operation on functions
is called \emph{projection}.
E.g. if $t = [2,1,3]$ and $t' = t_{\{0,2\} }$,
then $t'(0) = 2$ and $t'(2) = 3$;
$t'(1)$ is not defined.
\subsection{Approximation structures}
In \cite{scott72} Dana Scott proposed that computation steps
be viewed as transitions in a partially ordered space of data.
In his view computation consists of generating a time-ordered
sequence $d_0, d_1, d_2, \ldots$ in which each datum is approximated
by the previous one: it holds information about the limit of the
sequence that is compatible with, and at least as informative as,
the information held by its predecessor.
We write
$d_0 \sqsubseteq d_1 \sqsubseteq d_2 \sqsubseteq \cdots$
where $\sqsubseteq$ is the partial order.
Scott was primarily interested in using his approach
to model mathematically the evaluation of recursively
defined functions. This requires mathematically rather
sophisticated constructions.
However, the idea also applies to situations
covered by the following definition.
\begin{definition}\label{def:apprStruct}
An \emph{approximation structure} for a set $D$
is a set $A$ of subsets of $D$ such that
(1) $A$ is closed under finite intersection,
(2) $A$ is closed under intersection of (possibly infinite)
$\subseteq$-descending chains of subsets,
and (3) $A$ contains $D$ as an element.
The information order $\sqsubseteq$ of $A$ is defined
as the inverse of the inclusion $\subseteq$ of subsets.
An \emph{approximation domain} is a pair $\langle D, A \rangle$
formed by a set $D$ and an approximation structure $A$ on $D$.
It turns out to be tiresome to say
``an approximation domain $(D,A)$ for some $A$'',
so that we may speak of
``an approximation domain $D$''
when no ambiguity arises regarding $A$.
\end{definition}
\begin{lemma}\label{lem:leastElt}
If $D' \subseteq D$,
then there exists in any approximation structure for $D$
a $\subseteq$-least element containing $D'$.
\end{lemma}
\begin{definition}
If $A$ is an approximation structure for $D$,
then for $D' \subseteq D$ we define $\alpha_A(D')$
to be the least element of $A$ containing $D'$.
\end{definition}
The set $\alpha_A(D')$
corresponds to the maximum amount of information about $D'$ that is
expressible within approximation structure $A$.
\emph{Example}
The intervals form an approximation structure
in the set $\ensuremath{\mathcal{R}}$ of real numbers,
where we define an interval as
$\set{x \in \ensuremath{\mathcal{R}}: a \leq x \leq b}
$,
where
$a \in \ensuremath{\mathcal{R}} \cup \set{-\infty}$
and
$b \in \ensuremath{\mathcal{R}} \cup \set{+\infty}$.
We write $[a,b]$ for this interval.
Note that with this definition,
e.g., $+\infty \not \in [0,+\infty]$.
\emph{Example}
Let $F$ be a subset of the set $\ensuremath{\mathcal{R}}$ of reals.
The $F$-intervals are an approximation structure in $\ensuremath{\mathcal{R}}$,
where an $F$-interval is
$\set{x \in \ensuremath{\mathcal{R}} : a \leq x \leq b}$
where
$a \in F \cup \set{-\infty}$
and
$b \in F \cup \set{+\infty}$.
An important case: $F$
is the set of finite double-length IEEE-standard
floating-point numbers.
The latter include $-\infty$ and $+\infty$,
so that pairs of these numbers
are a convenient representation
for the elements of this approximation structure.
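For the interval structures just described, the map $\alpha_A$ is easy
to compute: the least interval containing a bounded nonempty set of
reals is the interval from its minimum to its maximum. A sketch
(Python; our illustration, for finite sets of floats, so no further
outward rounding is needed):

```python
def alpha_intervals(points):
    """alpha_A(D') for the interval structure: the least interval
    containing the finite nonempty set of floats D'."""
    return min(points), max(points)

small, large = {0.9, 1.1}, {0.3, 0.9, 1.1, 1.7}
a_small, a_large = alpha_intervals(small), alpha_intervals(large)
assert a_small == (0.9, 1.1) and a_large == (0.3, 1.7)
# alpha is monotone w.r.t. inclusion of sets, hence the smaller set
# gets a smaller interval: more information in the order of Definition 1
assert a_large[0] <= a_small[0] and a_small[1] <= a_large[1]
```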
\subsection{Relations}
A relation is a set of tuples with the same type.
This type is the \emph{type} of the relation.
If $r$ is a relation with type $I \to T$,
then the \emph{projection} of $r$ on $I' \subseteq I$
is $\set{f' \in I' \to T :
\exists f \in r. f_{I'} = f'
}$
and denoted $\pi_{I'}r$.
\emph{Example}\\
$\ensuremath{\mbox{{\it sum}}} = \set{[x,y,z] \in (\ensuremath{\mathbf{3}} \to \ensuremath{\mathcal{R}}) : x+y=z}$
is a relation of type $\ensuremath{\mathbf{3}} \to \ensuremath{\mathcal{R}}$,
where $\ensuremath{\mathbf{3}} = \{0,1,2\}$.
Compare this relation to the relation
$\sigma = \set{s \in (\set{x,y,z} \to \ensuremath{\mathcal{R}}) : s_x+s_y=s_z}$.
As their types are different,
they are different relations;
$[2,2,4] \in \ensuremath{\mbox{{\it sum}}}$ is not the same tuple
as $s \in \sigma$ where
$s_x=2$,
$s_y=2$, and
$s_z=4$.
\emph{Example}\\
If $S$ has one element,
then a relation of type $S \to T$ is a \emph{unary} relation.
Such a relation is often identified with a subset of $T$.
For example, for $a$ in $\ensuremath{\mathcal{R}} \cup \set{-\infty}$
and $b$ in $\ensuremath{\mathcal{R}} \cup \set{+\infty}$,
$\set{f \in (\set{x} \to \ensuremath{\mathcal{R}}) : a \leq f_x \leq b}$
is a unary relation that
is often identified with the interval $[a,b]$.
Maintaining the distinction between the two is important in the
current setting (see Section \ref{sec:constraints}).
\begin{definition}
If $r_0$ and $r_1$ are relations
with types $I_0 \to T$ and $I_1 \to T$, respectively,
then the \emph{join} $r_0 \Join r_1$ of
$r_0$ and $r_1$ is
$$\set{f \in (I_0 \cup I_1) \to T :
f_{I_0} \in r_0 \mbox{ and }
f_{I_1} \in r_1
}.$$
The join of relations that have disjoint index sets
is called the \emph{product} of these relations.
\end{definition}
We avoid the term ``Cartesian product'' because
it is usually understood to consist of tuples with index
set $\{0,\ldots,n-1\}$ for some natural number $n$.
\begin{definition}
Let $r$ be a relation of type $I \to T$ and let $I \subseteq J$.
Then we write the \emph{cylinder} on $r$ with respect to $J$
as $\pi_J^{-1} r$ and define it as
the greatest relation $g \subseteq (J \to T)$
such that $\pi_I g = r$.
\end{definition}
Cylindrification is inverse to projection in the sense that
$\pi_I(\pi_J^{-1} r) = r$.
\begin{definition}
Let $I = \{i_0, \ldots, i_{n-1}\}$ be an index set.
A \emph{box} is a product of unary relations
$
r_0 \subseteq \{i_0\} \to D
,\ldots,
r_{n-1} \subseteq \{i_{n-1}\} \to D
$.
If $ r_0 ,\ldots, r_{n-1} $ are intervals,
one may refer to the box
as an \emph{interval box}.
\end{definition}
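For finite relations these definitions are directly executable.
In the sketch below (Python; our illustration) a tuple of type
$I \to T$ is a dictionary with key set $I$, a relation is a list of
such dictionaries, and projection and join follow the definitions
verbatim.

```python
def project(r, idx):
    """Projection pi_idx(r): restrict every tuple of r to the index set idx."""
    seen = []
    for f in r:
        g = {i: f[i] for i in idx}
        if g not in seen:
            seen.append(g)
    return seen

def join(r0, r1):
    """r0 join r1: the tuples on the union of the two index sets whose
    restrictions to the two index sets lie in r0 and r1 respectively."""
    out = []
    for f in r0:
        for g in r1:
            if all(f[i] == g[i] for i in f.keys() & g.keys()):
                h = {**f, **g}
                if h not in out:
                    out.append(h)
    return out

r0 = [{'x': 1, 'y': 2}, {'x': 2, 'y': 3}]    # a relation of type {x,y} -> int
r1 = [{'y': 2, 'z': 5}, {'y': 9, 'z': 0}]    # a relation of type {y,z} -> int
assert join(r0, r1) == [{'x': 1, 'y': 2, 'z': 5}]
assert project(join(r0, r1), {'x', 'z'}) == [{'x': 1, 'z': 5}]
```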
\subsection{Boxes as approximation domain}\label{sec:boxApprox}
\begin{lemma}
Let $I = \set{i_0,\ldots,i_{n-1}}$ be a finite index set
and let $B$ be the set of boxes of type $I \to D$.
Then $\angTup{I \to D, B}$ is an approximation domain.
\end{lemma}
\emph{Proof}
We need to show the three defining properties
(Definition~\ref{def:apprStruct}).
In this case one can show closure under arbitrary
finite or infinite intersection,
so that the first two properties can be established simultaneously.
Let \seT{r^j}{j \in J} be a possibly infinite family of boxes,
$r^j = r_0^j \Join \cdots \Join r_{n-1}^j$,
with $r^j_k \subseteq \set{i_k} \to D$ for all $k \in \ord{n}$.
Let
$$ r
= \bigcap_{j \in J} r^j
= \bigcap_{j \in J} (r_0^j \Join \cdots \Join r_{n-1}^j).
$$
Then
\begin{eqnarray*}
f \in r = \bigcap_{j \in J} r^j
& \Leftrightarrow & \forall j \in J. \; f \in r^j
\;\Leftrightarrow\; \forall j \in J. \; \forall k \in \ord{n}. \;
f_{i_k} \in r_k^j \\
& \Leftrightarrow & \forall k \in \ord{n}. \; \forall j \in J. \;
f_{i_k} \in r_k^j
\;\Leftrightarrow\; \forall k \in \ord{n}. \;
f_{i_k} \in \bigcap_{j \in J} r_k^j \\
& \Leftrightarrow &
f \in \Bigl(\bigcap_{j \in J} r_0^j\Bigr) \Join \cdots \Join
\Bigl(\bigcap_{j \in J} r_{n-1}^j\Bigr)
\end{eqnarray*}
Hence
$$
\bigcap_{j \in J} r^j
= \Bigl(\bigcap_{j \in J} r_0^j\Bigr) \Join \cdots \Join \Bigl(\bigcap_{j \in J} r_{n-1}^j\Bigr)
$$
is also a box,
so that the intersection of a possibly infinite family of boxes
is a box.
We finally need to show that the full relation $r = I \to D$ is a box.
Letting $r_k = \set{i_k} \to D$,
we have that
\begin{eqnarray*}
I \to D &=& \\
\set{i_0, \ldots, i_{n-1}} \to D & = & \\
(\set{i_0} \to D) \Join \cdots \Join (\set{i_{n-1}} \to D) &=& \\
r_0 \Join \cdots \Join r_{n-1}
\end{eqnarray*}
is a box.
\hfill $\Box$
\paragraph{}
Therefore, for every relation $r$
of type $\set{i_0,\ldots,i_{n-1}} \to D$
there is a least box containing $r$,
which justifies the following definition.
\begin{definition}
The \emph{box operator} applied to a relation
$r$ with type $\set{i_0,\ldots,i_{n-1}} \to D$
is the least box $\ensuremath{\Box} r$ that contains $r$.
\end{definition}
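For a finite relation the least enclosing box is the product of the relation's projections onto its indices. The Python sketch below is ours (it again represents a tuple as a frozen set of (index, value) pairs; the function name \texttt{box} is illustrative):

```python
from itertools import product

def box(r):
    """Least box containing a finite relation r: the product of the
    projections of r onto each of its indices."""
    tuples = [dict(f) for f in r]
    indices = sorted(set().union(*(t.keys() for t in tuples)))
    # projection of r onto each index, viewed as a unary relation
    proj = {i: {t[i] for t in tuples} for i in indices}
    return {frozenset(zip(indices, values))
            for values in product(*(proj[i] for i in indices))}

# the "diagonal" relation {(0,0), (1,1)} on indices x and y
r = {frozenset([('x', 0), ('y', 0)]),
     frozenset([('x', 1), ('y', 1)])}
boxed = box(r)   # also contains the mixed tuples (0, 1) and (1, 0)
```

The example shows the loss of information incurred by the box operator: the diagonal is approximated by the full square.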
\subsection{Constraints}\label{sec:constraints}
A constraint is a syntactic entity
that is used to denote a relation.
A constraint has the form of an atomic formula
in a theory of predicate logic without
function symbols.
The semantics of predicate logic assigns a relation $r$
to an atomic formula
$p(q_0, \ldots, q_{n-1})$ with set $V$ of variables.
The relation $r$ depends on the interpretation of $p$
and on the tuple $[q_0, \ldots, q_{n-1}]$ of arguments.
These arguments are variables,
not necessarily all different.
The first-order predicate logic interpretation of the
language of atomic formulas,
which identifies the argument occurrences by numerical indexes,
forces $\ord{n} = \set{0,\ldots, n-1}$
to be the index set of the relation $M(p)$,
the relation that is the meaning of the predicate symbol $p$
under the given interpretation.
In our setting, instead, the index set associated with
the constraint denoted by
$p(q_0, \ldots, q_{n-1})$ is the set $V$ of
distinct variables occurring in atomic formula
$p(q_0, \ldots, q_{n-1})$.
The interpretation $M$ that assigns a relation
of type $\set{0,\ldots, n-1} \to D$ to an $n$-ary predicate symbol $p$
needs to be extended to an interpretation $M$
that also assigns a relation of type $V \to D$
to a constraint.
\begin{definition}
Let $c = p(q_0, \ldots, q_{n-1})$
where $V$ is the set of variables in
$\{q_0, \ldots, q_{n-1}\}$.
We define
$$ M(c) =
\set{a \in V \to D : [a(q_0), \ldots, a(q_{n-1})] \in M(p)}.
$$
\end{definition}
As a result of this definition the meaning of a constraint $c$
with set $V$ of variables is a relation of type $V \to D$.
One can view the argument tuple of a constraint as an
operator that converts a relation $M(p)$ of type $\ord{n} \to D$
to relation $M(c)$ of type $V \to D$.
This is an extension of the usual semantics of predicate logic.
\paragraph{}
\emph{Example}\\
Let $\ensuremath{\mbox{{\it sq}}}$ be the binary relation over the reals
where the second argument is the square of the first.
That is,
$M(sq) = \set{f \in (\set{0,1} \to \ensuremath{\mathcal{R}}): f_1 = f_0^2}$.
The constraints
$\ensuremath{\mbox{{\it sq}}}(x,y)$,
$\ensuremath{\mbox{{\it sq}}}(y,x)$, and
$\ensuremath{\mbox{{\it sq}}}(x,x)$
denote different relations,
as we verify below.
Given that
$M(sq) = \set{f \in (\set{0,1} \to \ensuremath{\mathcal{R}}): f_1 = f_0^2}$
we have
\begin{eqnarray*}
M(sq(x,y)) &=&
\set{a \in (\{x,y\} \to \ensuremath{\mathcal{R}}) : [a(x),a(y)] \in M(sq)} \\
&=&
\set{a \in (\{x,y\} \to \ensuremath{\mathcal{R}}) : a(x)^2 = a(y)} \\
M(sq(y,x)) &=&
\set{a \in (\{x,y\} \to \ensuremath{\mathcal{R}}) : [a(y),a(x)] \in M(sq)} \\
&=&
\set{a \in (\{x,y\} \to \ensuremath{\mathcal{R}}) : a(y)^2 = a(x)} \\
M(sq(x,x)) &=&
\set{a \in (\{x\} \to \ensuremath{\mathcal{R}}) : [a(x),a(x)] \in M(sq)} \\
&=&
\set{a \in (\set{x} \to \ensuremath{\mathcal{R}}) : a(x) = 0 \vee a(x) = 1}
\end{eqnarray*}
\begin{definition}
A tuple $ f \in V \to D $ satisfies a constraint $c$ if
and only if the restriction of $f$ to the set of variables occurring
in $c$ belongs to $M(c)$.
\end{definition}
\subsection{Constraint-satisfaction problems}
\begin{definition}
A \emph{constraint-satisfaction problem} (CSP)
has the form $\angTup{C,V,D,M}$
and consists of
a set $C = \set{s_0,\ldots,s_{m-1}}$ of constraints,
a set $V$, which is the set of the variables
occurring in the constraints,
a set $D$, the \emph{domain} of the CSP, and
an interpretation $M$,
which maps every $n$-ary predicate symbol occurring in
any of the constraints to a relation of type $\ord{n} \to D$.
A \emph{solution} to $\angTup{C,V,D,M}$
is $a \in V \to D$ such that
$a_{V_i} \in M(s_i)$ for all $i \in \ord{m}$,
where $V_i$ is the set of variables in $s_i$.
\end{definition}
It follows that the set $\sigma$ of solutions of the CSP
is a relation of type $V \to D$.
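The solution condition can be read off directly. In the Python sketch below (ours; the predicate names and the tiny CSP are illustrative), a constraint is a pair of a predicate symbol and its tuple of argument variables, and $M$ maps each symbol to a Boolean function:

```python
def satisfies(assignment, constraints, M):
    """Check whether assignment (a dict V -> D) is a solution: for each
    constraint p(q0, ..., q_{n-1}), the argument tuple read off from
    the assignment must belong to M(p)."""
    return all(M[p](*(assignment[q] for q in args))
               for p, args in constraints)

# a small CSP over the integers
csp = [('sq', ('x', 'y')), ('sum', ('x', 'y', 'z'))]
M = {'sq': lambda a, b: b == a * a,
     'sum': lambda a, b, c: c == a + b}

good = {'x': 2, 'y': 4, 'z': 6}   # a solution
bad = {'x': 2, 'y': 4, 'z': 7}    # violates sum(x, y, z)
```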
\emph{Example}\\
In $\angTup{C,V,D,M}$,
let $C = \{\ensuremath{\mbox{{\it sq}}}(x,y), \ensuremath{\mbox{{\it sq}}}(y,z), \ensuremath{\mbox{{\it sum}}}(y,z,u), \ensuremath{\mbox{{\it one}}}(u)\}$
(Equation (\ref{eq:constraints})),
$V = \set{x,y,z,u}$,
$D = \ensuremath{\mathcal{R}}$,
$M(\ensuremath{\mbox{{\it sq}}}) = \{f \in (\{0,1\} \to \ensuremath{\mathcal{R}}) : f(1) = f(0)^2\}$,
$M(\ensuremath{\mbox{{\it sum}}}) = \{f \in (\{0,1,2\} \to \ensuremath{\mathcal{R}}) : f(2) = f(0)+f(1)\}$,
and $M(\ensuremath{\mbox{{\it one}}}) = \{f \in (\{0\} \to \ensuremath{\mathcal{R}}) : f(0) = 1\}$.
The set $\sigma$ of solutions
is a relation $\sigma \subseteq V \to \ensuremath{\mathcal{R}}$
such that
$\pi_{\set{x,y}} \sigma = \set{p_0,p_1}$
where
$p_0(x) = - \surd (\frac{1}{2}(\surd 5 - 1)) $,
$p_0(y) = \frac{1}{2} (\surd 5 -1 )$,
$p_1(x) = \surd (\frac{1}{2}(\surd 5 - 1)) $,
and
$p_1(y) = \frac{1}{2} (\surd 5 -1 )$.
This example shows a CSP with a finite and small solution set.
Sudoku puzzles are another such example.
It often happens that the solution set has an infinite number of
elements, or a finite number that is too large to list
or to process on a computer.
\begin{theorem}\label{thm:joinSigmas}
Let $\sigma$ be the solution set of a CSP
$C = \set{s_0,\ldots,s_{m-1}}$ with $M$ as interpretation
for its predicate symbols.
Then we have
$$
\sigma = M(s_0) \Join \cdots \Join M(s_{m-1}).
$$
\end{theorem}
\emph{Proof}
By induction on the size of set
$\set{s_0,\ldots,s_{m-1}}$.
The base case $ C = \set{s_0}$
is trivial.
Assume that the theorem holds for a constraint set
$C_k = \set{s_0,\ldots,s_{k-1}}$
of size $k \geq 1$, and let
$\sigma(C_k) = M(s_0) \Join \cdots \Join M(s_{k-1})$
denote the solution set of $C_k$. Consider constraint set
$C_{k+1} = C_k \cup \set{s_k}$.
Any tuple $t$ which is a solution of $C_{k+1}= C_k \cup \set{s_k}$
must be such that the restriction of $t$ to the set of
variables occurring in $C_k$ is a solution of $C_k$, and
the restriction of $t$ to the set of variables occurring in
$s_k$ is a solution of $s_k$.
Whence $\sigma(C_{k+1}) \subseteq \sigma(C_k) \Join M(s_k)$.
Conversely, if $ t \in \sigma(C_k) \Join M(s_k)$, then by
construction $t$ satisfies $C_k$ as well as $s_k$, whence $t$
satisfies $C_{k+1} = C_k \cup \set{s_k}$.
Therefore $\sigma(C_{k+1}) = \sigma(C_k) \Join M(s_k)$.
\hspace{\fill}$\Box$
\section{Solving constraint-satisfaction problems}
\label{sec:solving}
What does it mean to ``solve'' a CSP?
It is rare for the solution set $\sigma$
to have but few elements, as it does in Sudoku.
Though occupying only a small proportion of the type,
$\sigma$ may have a finite
and overwhelmingly large number of elements;
it may also be an infinite set.
Hence we can typically only hope to obtain
\emph{some} information about $\sigma$.
Useful information can come in the form of an \emph{approximation}.
If the approximation domain consists of computer-representable
sets, as it typically does,
then $\Box \sigma$ is computer-representable,
but will usually give too little information about $\sigma$.
But $\Box \sigma$ is useful in case one can show
that it is empty:
in that case $\sigma$ is empty;
i.e. the CSP has no solutions.
This is an advantage of treating
numerical problems as CSPs:
in conventional computation one can only conclude that no
solutions were found.
By formulating the problem
as a CSP with intervals as approximation structure
one may be able to prove that no solutions exist.
The possibility of proof of non-existence by means of standard
floating-point arithmetic (and all its rounding errors)
is a valuable complement to conventional numerical analysis.
In case it is not possible
to show that $\Box \sigma$ is empty,
one subdivides the box under consideration
and one may be able to show
that one of these subdivisions has no solutions.
Let box $P$ (``probe'') be such a subdivision.
We use it to reduce the partial solution of
the problem of determining $\sigma$
to that of determining any solutions that might occur in $P$,
or to find, also usefully, that no solutions occur in $P$.
Thus we proceed to obtain information about $\sigma \cap P$.
This intersection is in general not a box,
so is not necessarily computer-representable.
Hence it is an appropriate task for an algorithm
to determine $\Box (\sigma \cap P)$
for a given CSP and a suitable $P$,
or an approximation to $\Box (\sigma \cap P)$
(which is itself an approximation).
Subdivision of $P$ should result in subsets of $P$
whose union includes $P$.
These subsets are subject to the same consideration:
if absence of solutions cannot be shown
and if amenable to subdivision,
the process repeats for such a subset.
Any box $P$ defines a tree of subsets
to be processed in this way:
solving a CSP requires,
in addition to an attempt to show
the absence of solutions in a given box,
a search over the tree of subboxes of the initially given box.
The ``solution'' of a numerical CSP
is necessarily a list of boxes,
each of which is too small to subdivide
and for which the absence of solutions cannot be shown.
For a solution $x \in \ensuremath{\mathcal{R}}^n$,
the best one can typically do is fail to show
that $\Box(\{x\})$ contains no solutions of the CSP.
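The search over the tree of subboxes can be expressed as a small branch-and-prune loop. The following Python sketch is ours and deliberately simplified: the contractor only detects boxes that cannot contain a solution of $x \cdot x = 2$ with $x \geq 0$, and a real implementation would use outward rounding throughout.

```python
def split(b):
    """Bisect a box (a dict of variable -> interval) along its widest
    interval."""
    v = max(b, key=lambda k: b[k][1] - b[k][0])
    lo, hi = b[v]
    mid = (lo + hi) / 2
    return [{**b, v: (lo, mid)}, {**b, v: (mid, hi)}]

def branch_and_prune(contract, box, small_enough):
    """Search the tree of subboxes: contract each box, discard it when
    contraction proves it empty, split it when it is still too large,
    and report the boxes that are too small to subdivide further."""
    leaves, stack = [], [box]
    while stack:
        b = contract(stack.pop())
        if any(lo > hi for lo, hi in b.values()):
            continue                      # absence of solutions shown
        if small_enough(b):
            leaves.append(b)              # may contain a solution
        else:
            stack.extend(split(b))
    return leaves

def contract(b):
    """Toy contractor for x * x = 2, x >= 0: it only detects boxes
    that cannot contain a solution."""
    lo, hi = b['x']
    if hi * hi < 2 or lo * lo > 2:
        return {'x': (1.0, 0.0)}          # empty interval
    return b

leaves = branch_and_prune(contract, {'x': (0.0, 2.0)},
                          lambda b: b['x'][1] - b['x'][0] < 1e-6)
# every surviving leaf encloses the square root of 2
```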
\subsection{Contraction operators}
A contraction operator transforms a box $B$
into a box $B' \subseteq B$
such that there is no solution in $B \setminus B'$.
Two kinds of contraction operators on boxes are defined here:
operators defined by relations, and operators defined by constraints.
\subsubsection{Contraction operators defined by a relation}
\label{sec:Contraction_relation}
\begin{definition}\label{def:gamma}
Let $D$ be an approximation domain and $I$ an index set.
Any relation $r$ of type $I \to D$
determines the mapping $\gamma_r(P) = \Box(r \cap P)$,
the \emph{contraction operator} of $r$,
that maps boxes with type $I \to D$
to boxes with the same type.
\end{definition}
Benhamou and Older \cite{bnldr97} introduced this formula
for intervals of reals.
Here it is generalized to approximation systems in general.
\begin{lemma}\label{lem:gamma:properties}
The contraction operator $\gamma_r$
is idempotent, monotonic, inflationary and correct.
\end{lemma}
\emph{Proof}
We have that
$
\Box(\Box(r \cap P) \cap P)
=
\Box(r \cap P) \cap P
=
\Box(r \cap P)
$;
hence $\gamma_r$ is idempotent.
$\Box$ is monotonic and intersection is monotonic in both
arguments, so $\gamma_r$ is monotonic.
$\gamma_r(P) = \Box(r \cap P) \subseteq \Box P = P$,
so that
$P \ensuremath{\sqsubseteq} \gamma_r(P)$.
That is, $\gamma_r$ moves up in the
(information) partial order: $\gamma_r$ is inflationary.
We have that
$r \cap (P \setminus \gamma_r(P)) = \emptyset$,
meaning that $\gamma_r$ is correct in the sense that
it does not remove any part of $r$ from its argument.
\hfill $\Box$
\paragraph{An example of a contraction operator}
The contraction operator for the $\ensuremath{\mbox{{\it sum}}}$ constraint
acting on a box
$$
(\set{x} \to [a,b]
)\Join( \set{y} \to [c,d]
)\Join( \set{z} \to [e,f])
$$
where
$a,b,c,d,e,f$ are finite
IEEE-standard floating-point numbers is given by
\begin{multline*}
\gamma_{M(sum(x,y,z))}
((\set{x} \to [a,b]
)\Join( \set{y} \to [c,d]
)\Join( \set{z} \to [e,f])) = \\
(\set{x} \to [a',b']
)\Join( \set{y} \to [c',d']
)\Join( \set{z} \to [e',f']).
\end{multline*}
Here
\begin{eqnarray*}
\; [a',b'] &=& [a,b] \cap [(e-d)^-,(f-c)^+] \\
\; [c',d'] &=& [c,d] \cap [(e-b)^-,(f-a)^+] \\
\; [e',f'] &=& [e,f] \cap [(a+c)^-,(b+d)^+] \\
\end{eqnarray*}
where
superscript $^-$ means that the floating-point
operation is performed in round-toward-minus-infinity
mode
and
superscript $^+$ means that the floating-point
operation is performed in round-toward-plus-infinity
mode.
In this way correctness of $\gamma_{sum}$
is maintained in the presence of rounding errors.
In Equation~(\ref{eq:shrinc}) the contraction operator
is applied in the case where
$a = 0$,
$b = 2$,
$c = 0$,
$d = 2$,
$e = 3$,
and
$f = 5$.
Applying
$\gamma_{M(sum(x,y,z))}$
in this special case gives
\begin{center}
\begin{tabular}{lll}
$[a',b']$ & $=\;\; [0,2] \cap [1,5]$ & $=\;\; [1,2]$\\
$[c',d']$ & $=\;\; [0,2] \cap [1,5]$ & $=\;\; [1,2]$\\
$[e',f']$ & $=\;\; [3,5] \cap [0,4]$ & $=\;\; [3,4]$\\
\end{tabular}
\end{center}
This only gives the general idea.
A practical algorithm has to take care of the possibility
of overflow.
It also has to allow for the possibility
that $a,c$ or $e$ are $-\infty$
and
that $b,d$ or $f$ may be $+\infty$
so that the undefined cases
$(+\infty) - (+\infty)$,
$(-\infty) + (+\infty)$,
and
$(+\infty) + (-\infty)$
have to be circumvented.
For details about such algorithms see \cite{hckvnmdn01}.
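Under the assumption that widening each computed endpoint by one ulp is an acceptable stand-in for switching the IEEE rounding direction (sound here because each endpoint is produced by a single correctly rounded operation), the contraction operator for $\ensuremath{\mbox{{\it sum}}}$ can be sketched in Python for finite endpoints as follows; the function names are ours.

```python
import math

def down(x):
    """Stand-in for rounding toward minus infinity: widen the computed
    endpoint downward by one ulp."""
    return math.nextafter(x, -math.inf)

def up(x):
    """Stand-in for rounding toward plus infinity."""
    return math.nextafter(x, math.inf)

def meet(i, j):
    """Intersection of two intervals; empty when lo > hi."""
    return (max(i[0], j[0]), min(i[1], j[1]))

def contract_sum(x, y, z):
    """One application of the contraction operator for x + y = z on the
    box ({x} -> [a,b]) join ({y} -> [c,d]) join ({z} -> [e,f])."""
    (a, b), (c, d), (e, f) = x, y, z
    x2 = meet(x, (down(e - d), up(f - c)))
    y2 = meet(y, (down(e - b), up(f - a)))
    z2 = meet(z, (down(a + c), up(b + d)))
    return x2, y2, z2

x2, y2, z2 = contract_sum((0.0, 2.0), (0.0, 2.0), (3.0, 5.0))
# up to one-ulp widening, this reproduces the worked example:
# x2 and y2 are approximately [1, 2]; z2 is approximately [3, 4]
```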
\subsubsection{Contraction operators defined by a CSP}
\label{sec:Contraction_CSP}
In the CSP defined by the constraints $\set{s_0,\ldots,s_{m-1}}$,
let us write $\sigma_i$ for $M(s_i)$.
Then Theorem~\ref{thm:joinSigmas} says that
$$
\sigma = \sigma_0 \Join \cdots \Join \sigma_{m-1}.
$$
The $\gamma$ operator of Definition~\ref{def:gamma}
is not useful for $r = \sigma$,
but it can be useful for $r = \sigma_i$,
the solution sets of the individual constraints.
In fact, the constraints are chosen to be such
that one has an efficient algorithm for each $\gamma_{\sigma_i}$.
\begin{definition}\label{def:bigGamma}
Let
$\angTup{\set{s_0,\ldots,s_{m-1}},V,D,M}$
be a CSP.
Let $\sigma_i = M(s_i)$ and let
$V_i$
be the set of variables of $s_i$.
We define
$$\gamma_i(P) = \pi_V^{-1}(\gamma_{\sigma_i}(\pi_{V_i}P)),
\quad i=0,\ldots,m-1,
$$
for any box $P$ of type $V \to D$,
and call
$\gamma_i$ the contraction operator of $s_i$.
We define
$$\Gamma(P) = \gamma_0(P)
\cap \cdots \cap
\gamma_{m-1}(P),
$$
and call $\Gamma$ the contraction operator of the CSP.
\end{definition}
\begin{lemma}\label{lem:Gamma:properties}
$\Gamma$ is inflationary, monotonic, and correct.
\end{lemma}
\emph{Proof}
Since, by Lemma \ref{lem:gamma:properties},
each $\gamma_{\sigma_i}$ is inflationary,
one has
\begin{eqnarray*}
\Gamma(P) &=& \bigcap_{i=0}^{m-1} \gamma_i(P)
= \; \Join_i \gamma_i(P) \\
&=& \Join_i \pi_V^{-1}(\gamma_{\sigma_i}(\pi_{V_i}P)) \\
&=& \pi_V^{-1} ( \Join_i (\gamma_{\sigma_i}(\pi_{V_i}P))) \\
&\sqsupseteq& \pi_V^{-1} ( \Join_i \pi_{V_i}P) \\
&=& P
\end{eqnarray*}
Hence $\Gamma$ is \emph{inflationary.}
$\Gamma$ is monotone, as a composition of monotone operators,
since both projection $\pi_{V_i}$ and cylindrification $\pi_V^{-1}$
are monotone operators.
Finally $\Gamma$ is correct.
Indeed, since by Lemma \ref{lem:gamma:properties}
each $\gamma_{\sigma_i}$ is correct, i.e. satisfies
$\sigma_i \cap (\pi_{V_i} P \setminus \gamma_{\sigma_i} (\pi_{V_i} P) ) = \emptyset$,
one has, for any tuple $f$, that
$ f \in (P \setminus \Gamma(P)) \Leftrightarrow
f \in P \mbox{ and } \exists i\ f \not \in \gamma_i(P)$,
which implies $f_{V_i} \not \in \sigma_i$ and hence $ f \not \in \sigma$.
Hence
$ f \in (P \setminus \Gamma(P)) $
implies $ f \not \in \sigma$,
thus
$ \sigma \cap (P \setminus \Gamma(P)) = \emptyset $.
Therefore $\Gamma$ is correct.
\hspace{\fill}$\Box$
\paragraph{}
A counterexample to the idempotency of $\Gamma$
is given by the CSP example discussed earlier,
in Section \ref{sec:example}:
\begin{equation*}
\{\ensuremath{\mbox{{\it sq}}}(x,y), \ensuremath{\mbox{{\it sq}}}(y,z), \ensuremath{\mbox{{\it sum}}}(y,z,u), \ensuremath{\mbox{{\it one}}}(u)\}.
\end{equation*}
It is enough to take, \emph{e.g.},
the approximation domain of (real)
boxes included in $ \set{x,y,z,u} \to \ensuremath{\mathcal{R}}$,
the corresponding $\Gamma$ operator operating on that domain,
together with the box $P$ informally described in equation
(\ref{eq:box2}), namely
$ P = \seT{f : \set{x,y,z,u} \to \ensuremath{\mathcal{R}}}{f(x) \in [\frac{1}{2},1],
f(y) \in [0,1],
f(z) \in [0,1],
f(u) \in [1,1]
}
$.
The sequence $(\Gamma ^n(P))_{n\in\mathbb{N}}$
is strictly decreasing until it stabilizes at the smallest box,
in the approximation domain, containing the tuple
$f : \set{x,y,z,u} \to \ensuremath{\mathcal{R}}$,
such that $f(x) = \surd (\frac{1}{2}(\surd 5 - 1)),
f(y) = \frac{1}{2} (\surd 5 -1 ),
f(z) = \frac{1}{4} (\surd 5 -1 )^2$,
and
$f(u) = 1$.
\subsection{Algorithms}
Algorithms for solving CSPs proceed
by applying contraction operators.
Hence the algorithms only remove tuples from consideration
that are not part of the solution.
In the course of this process
absence of solutions of the CSP may be demonstrated,
but solutions are not, in general, constructed.
In the case of a discrete $D$ it may happen that
applying constraint contractors may result in a box
that contains a single tuple.
This tuple will then need to be substituted in the CSP
to check whether it is a solution.
However, in the type of CSP we are concerned with here
(reals with floating-point intervals as approximation domain),
finding a solution this way
is but a remote theoretical possibility
(the problem would have to have an exact solution
in terms of floating-point numbers, which, moreover,
upon substitution would miraculously avoid rounding errors).
Hence for numerical CSPs the best we can expect
is an algorithm that results in a small box.
This box can be small indeed:
in double-length IEEE-standard floating-point arithmetic
the box can have as projections intervals of relative width
around $10^{-17}$.
The result shows that, \emph{if} a solution exists,
it has to be in that box.
Among the algorithms that use contraction operators
to solve CSPs we distinguish two types of iteration
according to the order in which the operators are applied.
We distinguish \emph{rigid} order from \emph{flexible} order.
The latter type leaves more freedom
in the choice of the next operator to be applied.
Consider a CSP $\angTup{C,V,D,M}$ with contraction operators
$\gamma_0,\ldots,\gamma_{m-1}.$
The rigid-order algorithm applies the $m$ operators
in such an order that between two successive applications
of any particular operator all other operators are applied.
The rigid-order algorithm is susceptible to improvement.
In a typical CSP $m$ can be of the order of hundreds or thousands,
whereas each of the constraints typically has few arguments;
in numerical CSPs, for example, three or fewer.
Usually each constraint shares an argument with several others.
In such a situation
most of the contractor applications have no effect:
each application affects only few of many arguments
and it may well be that the next operator belongs to a constraint
that does not involve any of these few arguments,
so that its application has no effect.
This suggests a chaotic algorithm,
one that avoids
such ineffectual choices of operator applications\footnote{
The term ``chaotic'' has been adopted by the constraint processing
literature via a detour from a numerical algorithm \cite{chzn69}.
}.
There is considerable scope for such optimization,
as the only constraint on the sequence of operator applications
is that this sequence be \emph{fair} in the following sense.
\begin{definition}\label{def:fair}
Let $k \in (\mathbbm{N} \to A)$ be an infinite sequence
of which the elements are members of a finite set $A$.
$k$ is \emph{fair} iff each element of $A$ occurs
infinitely many times in $k$.
\end{definition}
Thus, in a fair sequence, it is possible,
but not necessary,
that between two occurrences of the same item
all other items have occurred.
A chaotic algorithm with $m$ operators applies the operators
in a fair sequence.
Such an algorithm can generate a fair sequence
while maintaining a record of the last index in the sequence
where a change was effected.
As soon as all the operators
have been applied without any resulting change,
then, by idempotence, the algorithm can be halted:
the rest of the infinitely long fair sequence
consists of operator applications that have no effect.
For details, see \cite{aptEssence}.
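A minimal version of such an algorithm, with the round-robin schedule as one particular fair sequence and the stop-on-no-change bookkeeping described above, might look as follows in Python; the sketch and the toy contractors for the constraint $y = x + 1$ on integer intervals are ours.

```python
def chaotic_iteration(operators, box):
    """Apply contraction operators round-robin (a fair sequence) and
    halt as soon as a full round of m applications changes nothing."""
    m, i, since_change = len(operators), 0, 0
    while since_change < m:
        new_box = operators[i](box)
        if new_box != box:
            box, since_change = new_box, 0
        else:
            since_change += 1
        i = (i + 1) % m
    return box

def c_x(b):   # x := x  meet  (y - 1)
    lo = max(b['x'][0], b['y'][0] - 1)
    hi = min(b['x'][1], b['y'][1] - 1)
    return {**b, 'x': (lo, hi)}

def c_y(b):   # y := y  meet  (x + 1)
    lo = max(b['y'][0], b['x'][0] + 1)
    hi = min(b['y'][1], b['x'][1] + 1)
    return {**b, 'y': (lo, hi)}

result = chaotic_iteration([c_x, c_y], {'x': (0, 10), 'y': (0, 4)})
# result is the fixpoint box x in [0, 3], y in [1, 4]
```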
\subsection{Maximization property of the chaotic algorithm}
The chaotic algorithm solves the following problem:
\begin{equation}\label{eq:maxT}
\left.
\begin{array}{ll}
\mbox{{\bf maximize}} & B \\
\mbox{{\bf subject to}} & B \ensuremath{\sqsubseteq} \Gamma(B)
\end{array}
\right\}
\end{equation}
\noindent
where $B$ ranges over the boxes in the approximation domain, and
$\Gamma$ is the $\Gamma$ operator associated with the CSP.
The problem is stated in a format borrowed from
``mathematical programming'' in the sense that this includes,
for example, linear programming.
In the above format the total order among real numbers
has been replaced by the partial order
which is the Scott information order
described in Section~\ref{sec:notTerm}.
The generalization from the total order of mathematical programming
to programming with partial orders is due to Parker,
who captures a wide variety of algorithms
in this framework \cite{prkr87}.
It is easily seen that chaotic iteration
solves the maximization problem if the sequence
generated by the algorithm converges
to the least fixpoint of $\Gamma$.
Note that $\ensuremath{\sqsubseteq}$ is the information order,
where $B_0 \ensuremath{\sqsubseteq} B_1$
iff each of the projections of $B_1$ is a subset
of the corresponding projection of $B_0$.
\paragraph{Fixpoints}
We review some basic facts about fixpoints.
Let \mbox{$\langle D,\ensuremath{\sqsubseteq},\bot \rangle$}
be a complete partially ordered set.
Completeness means here that every infinite ascending chain
$c_0 \ensuremath{\sqsubseteq} c_1 \ensuremath{\sqsubseteq} \ldots$ has a least upper bound
$\bigsqcup_{i=0}^\infty c_i$ that is an element of
the partially ordered set.
Let $\Gamma \in (D \to D)$ be monotonic and continuous.
Continuity of a function $f \in D \to D$ means
that for every infinite ascending chain
$c_0 \ensuremath{\sqsubseteq} c_1 \ensuremath{\sqsubseteq} \ldots$
we have
$f(\bigsqcup_{i=0}^\infty c_i) = \bigsqcup_{i=0}^\infty f(c_i)$.
In the case of a finite $D$,
such as the partially ordered set of floating-point intervals,
monotonicity implies continuity.
By the Knaster-Tarski theorem, $\Gamma$ has a least fixpoint
$\ensuremath{\mbox{{\it lfp}}}(\Gamma) \in D$.
This may be seen as follows.
By monotonicity of $\Gamma$,
$$
\bot \ensuremath{\sqsubseteq} \Gamma(\bot) \ensuremath{\sqsubseteq} \Gamma^2(\bot) \ensuremath{\sqsubseteq} \cdots
$$
By the completeness of the partially ordered set,
$\bigsqcup_{n=0}^\infty \Gamma^n(\bot) \in D$.
By the continuity of $\Gamma$,
$$
\Gamma(\bigsqcup_{n=0}^\infty \Gamma^n(\bot))
=
\bigsqcup_{n=0}^\infty \Gamma(\Gamma^n(\bot))
=
\bigsqcup_{n=0}^\infty \Gamma^n(\bot).
$$
Hence $\bigsqcup_{n=0}^\infty \Gamma^n(\bot)$
is a fixpoint of $\Gamma$.
We now turn to the Tarski fixpoint theorem.
Let $\Gamma \in (D \to D)$ be monotonic, but
assume now that partially ordered set
\mbox{$\langle D,\ensuremath{\sqsubseteq},\bot \rangle$}
is a complete lattice, a richer structure. Completeness means here
that \emph{any} subset of $D$ has a least upper bound and a greatest
lower bound. In particular $D$ possesses a largest element $\top$.
Then by the Tarski fixpoint theorem $\Gamma$ has a least fixpoint
$\ensuremath{\mbox{{\it lfp}}}(\Gamma) \in D$.
This may be seen as follows.
Consider the set
$S = \seT{a\in D}{\Gamma(a) \ensuremath{\sqsubseteq} a}$. $S$ is non-empty since it contains
top element $ \top \in D$. Let
$l = \sqcap S$ be the greatest lower bound of $S$.
Then for any element $a \in S$, one has
$$ a \in S \Rightarrow l \ensuremath{\sqsubseteq} a \Rightarrow \Gamma(l) \ensuremath{\sqsubseteq} \Gamma(a) \ensuremath{\sqsubseteq} a $$
by monotonicity of $\Gamma$.
Hence $\Gamma(l)$ is a lower bound for $S$: $ \Gamma(l) \ensuremath{\sqsubseteq} l = \sqcap S$.
Therefore $l \in S$.
One then has the chain of implications
$$ \Gamma(l)
\ensuremath{\sqsubseteq} l \Rightarrow \Gamma(\Gamma(l))
\ensuremath{\sqsubseteq} \Gamma(l) \Rightarrow \Gamma(l) \in S \Rightarrow l
\ensuremath{\sqsubseteq} \Gamma(l) \Rightarrow l = \Gamma(l).
$$
Hence $l$ is a fixpoint of $\Gamma$.
It is also the least fixpoint, since
$S$ contains every fixpoint, and $l = \sqcap S$.
Therefore $l= \sqcap S = \ensuremath{\mbox{{\it lfp}}}(\Gamma)$ is the least fixpoint of $\Gamma$.
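On a finite poset the iterative construction $\bot, \Gamma(\bot), \Gamma^2(\bot), \ldots$ is directly executable. A Python sketch (ours), using a small monotone operator on the powerset of $\{0,\ldots,4\}$ ordered by inclusion:

```python
def lfp_by_iteration(gamma, bottom):
    """Least fixpoint of a monotone map on a finite poset, reached as
    the limit of the chain bottom, gamma(bottom), gamma^2(bottom), ..."""
    x = bottom
    while True:
        nxt = gamma(x)
        if nxt == x:
            return x
        x = nxt

def gamma(s):
    # monotone: adds 0 and the successor of every present element < 5
    return s | {0} | {e + 1 for e in s if e + 1 < 5}

fix = lfp_by_iteration(gamma, frozenset())
# the chain {} -> {0} -> {0,1} -> ... stabilizes at {0,1,2,3,4}
```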
\paragraph{Application of fixpoint theory
to the chaotic algorithm}
\begin{theorem}\label{thm:probeAppr}
Let $\angTup{C,V,D,M}$ be a CSP
with contraction operator $\Gamma$ and solution set $\sigma$.
For any box $P$ of type $V \to D$ we have
$$
(\sigma \cap P)
\subseteq
\Box(\sigma \cap P)
\subseteq
\Gamma^n(P)
$$
for all $n = 0,1,2,\ldots$
\end{theorem}
\emph{Proof}
The first inclusion follows from the definition of the $\Box$
operator.
We consider the case of $m = 2$ constraints;
the argument extends easily to larger values of $m$.
We write $\sigma_i = M(s_i)$
and $V_i$ for the set of variables in $s_i$,
for $i = 0,1$.
We first consider the case $n = 1$.
\begin{eqnarray*}
\Box(\sigma \cap P) &=& \\
\Box\seT{a \in (V \to D)}
{a_{V_0} \in \sigma_0 \wedge
a_{V_1} \in \sigma_1 \wedge a \in P}
&=& \\
\Box\set{a \in (V \to D) :
a_{V_0} \in \sigma_0 \wedge
a_{V_1} \in \sigma_1 \wedge
a_{V_0} \in \pi_{V_0}P \wedge
a_{V_1} \in \pi_{V_1}P } &=& \\
\Box\set{a \in (V \to D) :
a_{V_0} \in (\sigma_0 \cap \pi_{V_0}P) \wedge
a_{V_1} \in (\sigma_1 \cap \pi_{V_1}P) } &=& \\
\Box(\pi_V^{-1} (\sigma_0 \cap \pi_{V_0}P) \cap
\pi_V^{-1} (\sigma_1 \cap \pi_{V_1}P)) &\subseteq&\\
\Box(\pi_V^{-1} \Box(\sigma_0 \cap \pi_{V_0}P) \cap
\pi_V^{-1} \Box(\sigma_1 \cap \pi_{V_1}P)) &=&\\
\pi_V^{-1} \Box(\sigma_0 \cap \pi_{V_0}P) \cap
\pi_V^{-1} \Box(\sigma_1 \cap \pi_{V_1}P) &=&\\
\gamma_0(P) \cap
\gamma_1(P) &=&\\
\Gamma(P). &&\\
\end{eqnarray*}
We have shown that
$ \Box(\sigma \cap P) \subseteq \Gamma(P). $
We also have $ \Box(\sigma \cap P) \subseteq \Gamma^2(P).$
This is because of the correctness of $\Gamma$:
it does not remove any solution tuples from its argument,
so that $\sigma \cap P \subseteq \sigma \cap \Gamma(P)$,
and applying the case $n = 1$ to the box $\Gamma(P)$ gives
$\Box(\sigma \cap P) \subseteq \Box(\sigma \cap \Gamma(P)) \subseteq \Gamma^2(P)$.
Continuing in this way, we have $ \Box(\sigma \cap P) \subseteq \Gamma^n(P)$
for any $n \geq 0$.
\hspace{\fill}$\Box$
By Definition~\ref{def:bigGamma},
$\Gamma$ is the intersection of contraction operators,
one for each constraint,
each of which can be efficiently computed.
The results of these operators are exact
in the sense that they are by definition approximations
and are therefore exactly representable.
Thus Theorem~\ref{thm:probeAppr} can serve as the basis for an algorithm
for approximating the set of solutions in $P$.
In terms of the information order $\ensuremath{\sqsubseteq}$ Theorem~\ref{thm:probeAppr}
states that
$\Gamma^n(P) \ensuremath{\sqsubseteq} \Box(\sigma \cap P) \ensuremath{\sqsubseteq} (\sigma \cap P)$.
\begin{theorem}
$\Gamma$ is monotonic on the partially ordered
set of subboxes of $P$ ordered by information order.
\end{theorem}
\emph{Proof}
Each contraction operator
$\gamma_i :
P \mapsto \pi_V^{-1}(\gamma_{\sigma_i}(\pi_{V_i}P))
$
is monotone, and the intersection of monotone operators is
monotone.
\hspace{\fill}$\Box$
\paragraph{}
Observe that the set of boxes contained in $P$ defines an approximation
structure for $P$.
The partially ordered set of subboxes of $P$,
ordered by the information order,
is a complete lattice with least element $P$.
Since $\Gamma$ is monotonic,
$\Gamma$, restricted to this approximation structure,
has a least fixpoint $\ensuremath{\mbox{{\it lfp}}}(\Gamma)$
by the Tarski fixpoint theorem.
Summarizing, we have
$\Gamma^n(P) \ensuremath{\sqsubseteq}
\ensuremath{\mbox{{\it lfp}}}(\Gamma) \ensuremath{\sqsubseteq}
\Box(\sigma \cap P) \ensuremath{\sqsubseteq}
(\sigma \cap P)
$
for all $n$.
If the box operator $\Box$ is continuous over the approximation domain
defined over $D$, then $\Gamma$ is also
continuous by compositionality of continuous functions, and
by the Knaster-Tarski theorem
$\bigsqcup_{i=0}^\infty \Gamma^i(P) $
is the least fixpoint of $\Gamma$ contained in $P$.
In particular, if $D$ is the set $F$
of finite double-length IEEE-standard floating-point numbers,
and the approximation domain is given by the set of $F$-intervals,
then domain $D$ is finite, hence both operators $\Box$ and $\Gamma$
are continuous.
The subboxes of $P$ form a complete partially ordered set,
trivially, because of the finiteness of the set of floating-point
numbers.
Therefore
$\bigsqcup_{i=0}^\infty \Gamma^i(P)
= \bigsqcup_{i=0}^n \Gamma^i(P)
$,
for some finite $n$,
is the least fixpoint of $\Gamma$, restricted to $P$.
\begin{theorem}
Let a CSP
$\angTup{\set{s_0,\ldots,s_{m-1}},V,D,M}$, with
contraction operator $\Gamma$, and contraction operators
$\gamma_i$ for each individual constraint $s_i$ be given.
If the approximation structure over $D$ is such that
the box operator $\Box$ is continuous, then,
for every box $P$,
every fair iteration of continuous operators $\gamma_i$ starting with
$P$ converges towards the least fixpoint
$ \sqcup_{j=0}^\infty \Gamma ^j(P) $
of $\Gamma$, restricted to $P$.
\end{theorem}
\emph{Proof}
Let
$k_0, k_1, k_2, \ldots$
be a fair iteration, where for each $n$,
$ k_n \in \set{0,\ldots,m-1}
$
is the index of the constraint $s \in \set{s_0,\ldots,s_{m-1}} $
selected at the $n$th iteration step.
The corresponding iteration starting from some box $P$ is given by
the sequence of boxes
\begin{eqnarray*}
P_0 &=& P \\
P_n &=& \gamma_{k_n} (P_{n-1}), \qquad n > 0
\end{eqnarray*}
We first show that
\begin{equation}\label{eq:bound1}
\forall j\ \exists q\ \Gamma ^j (P) \sqsubseteq P_q
\end{equation}
Indeed,
$k$ is a fair sequence, and since all operators $\gamma_i$ are
inflationary and monotone, for each $j$,
one can choose $q$ such that the initial iteration subsequence
$k_0, \ldots, k_{q-1}$
contains, for each constraint $s_l$ in $C$,
at least $j$ occurrences of its index $l$;
these occurrences
correspond to at least
$j$ applications of the contraction operator $\gamma_l$.
Next, we observe that
\begin{equation}\label{eq:bound2}
\forall q\ P_q \sqsubseteq \Gamma ^q (P),
\end{equation}
which follows by induction on $q$.
Whence
$\sqcup_{j=0}^\infty \Gamma ^j(P) \sqsubseteq \sqcup_{j=0}^\infty P_j$
by (\ref{eq:bound1}), and
$\sqcup_{j=0}^\infty P_j \sqsubseteq \sqcup_{j=0}^\infty \Gamma ^j(P) $
by (\ref{eq:bound2}).
The two limits are equal.
\hspace{\fill}$\Box$
\section{Further work}
\label{sec:furthWrk}
Concurrent constraint programming (CCP)
(\cite{srrnpn91} and further references there)
is a model of concurrent programming.
This model is based on an abstraction of a computer
store that is more abstract than the one
used in conventional programming languages.
Usually the store is modeled as a vector of storable values
(numbers, characters) indexed by the variables accessible to
the program.
Thus to every variable there corresponds a single value.
The conventional read operation on a variable yields this value.
The conventional write operation on a variable changes this value.
In CCP it is not assumed that the value of a variable
is precisely known:
the store is a \emph{constraint} on the values of variables.
The conventional read operation
is replaced by {\tt ask}, an operation in the form of a logic formula
that succeeds if and only if it is logically entailed by the store.
The conventional write operation
is replaced by {\tt tell}, an operation in the form of a logic formula
$T$
that has the effect of replacing the store $S$ by a logical
equivalent of $S \wedge T$, provided that this is consistent.
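As a rough illustration (our own toy code, not an implementation from the CCP literature), the ask/tell discipline can be sketched with a store that keeps one interval per variable: {\tt tell} conjoins new information with the store, refusing inconsistent updates, and {\tt ask} succeeds only when the query is entailed. A real CCP store is a logical theory, and a query may be neither entailed nor refuted; this two-valued sketch ignores that third case.

```python
# A toy ask/tell store: the "theory" is approximated by one interval per
# variable instead of a full logical theory.

class IntervalStore:
    def __init__(self):
        self.store = {}                       # variable -> (low, high)

    def tell(self, var, low, high):
        """Conjoin 'var in [low, high]' with the store; fail on inconsistency."""
        l, h = self.store.get(var, (float("-inf"), float("inf")))
        nl, nh = max(l, low), min(h, high)
        if nl > nh:
            return False                      # store would become inconsistent
        self.store[var] = (nl, nh)
        return True

    def ask(self, var, low, high):
        """Entailment check: is 'var in [low, high]' implied by the store?"""
        l, h = self.store.get(var, (float("-inf"), float("inf")))
        return low <= l and h <= high

s = IntervalStore()
s.tell("x", 0, 10)
s.tell("x", 2, 5)            # the store now entails x in [2, 5]
# s.ask("x", 0, 6) holds, while s.ask("x", 3, 4) does not (yet).
```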
The generalization of the conventional store to CCP
requires that the store becomes a logical theory $S$
that is \emph{satisfaction-complete}
in the sense that for every formula $C$
admissible as {\tt ask} or {\tt tell}
it is the case that either
$ S \models \exists C$ or
$ S \models \neg \exists C$
where $\exists$ denotes existential closure.
See \cite{clrk91} and further references there.
CCP seems to have a great deal of unexploited potential.
Its motivation and terminology are in the area of concurrent
programming, with the aim of generalizing the many
different approaches
(Hewitt's Actors, Hoare's CSP, Milner's CCS, various flavours
of concurrent logic programming).
CCP is linked to constraint solving
by its formulation in terms of predicate logic.
Thus CCP promises to be a framework for constraint solving
with parallelism built in, a promising feature given
the massive amount of computation that is typical of
constraint problems.
To realize this promise it is necessary to generalize
CCP beyond the restriction of the store
to a satisfaction-complete theory.
For example, in the case of interval constraints,
where the domain is the reals,
the theory of the store is not satisfaction-complete.
Consequently, the result of a converging iteration with
interval constraints means that \emph{if} a solution
exists, then it has to be in the remaining intervals.
Often one knows from other sources that a solution exists
(e.g. that the CSP arises from a polynomial of odd degree being
equated to zero) and the remaining intervals are close to
the resolution of the floating-point system.
In such a situation the weakness of the conclusion
does not stand in the way
of it being of great practical value.
We have not explored whether the valuable features
of CCP can be preserved when the store
is not necessarily a satisfaction-complete theory.
\section{Concluding remarks}
\label{sec:Conclusion}
We see the contributions of this paper as the following.
Although in the usual definition of CSP the constraints
look like atomic formulas of predicate logic,
the semantics of a CSP is given independently.
We use the standard semantics of first-order predicate logic
to define the solution set of a CSP
and we define approximation systems as a set-theoretic device
to interface our framework for CSPs
with the well-known chaotic iteration algorithm.
Parker's observation \cite{prkr87} was that the operations research
paradigm of maximizing a real-valued objective function
under constraints can be generalized to maximization
in partially ordered spaces.
Scott's contribution \cite{scott72} was
that computation can be viewed as information gain.
We combine these insights, so that many of Parker's examples
can be seen as iterations in which information is monotonically
gained.
Among these examples we concentrate on solving systems
where the constraints are nonlinear
equations or inequalities over the reals.
Constraint processing by
domain reduction can be viewed as the use of the computer
for monotonic gain of information.
This is more than a theoretical point of view.
What is lacking in the current practice of computing
is a quantitative treatment of the \emph{work} done by
the CPU per, say, gigacycle.
The domain reduction method can be used to compare how many
gigacycles were required to obtain the most recent domain reduction,
expressed, say, as the ratio of the cardinalities, or volumes,
of the box before and after this reduction.
One may conclude that a reduction of $x$ percent is not worth
the $y$ gigacycles it cost, that further diminishing returns
for computational effort are to be expected, and that therefore
it is time to terminate the iteration.
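As a hypothetical illustration of such bookkeeping (the helper names and the stopping policy are ours, not a proposal from the literature), the payoff of a reduction step can be measured as the fraction of box volume removed, to be weighed against the cycles it cost:

```python
# Compare the volume reduction achieved by the latest domain-reduction
# round with the computational effort it required.

def box_volume(box):
    # Volume of a box given as a list of (low, high) intervals.
    v = 1.0
    for low, high in box:
        v *= max(high - low, 0.0)
    return v

def reduction_ratio(box_before, box_after):
    """Fraction of the volume removed by the latest reduction step."""
    vb, va = box_volume(box_before), box_volume(box_after)
    return 1.0 - va / vb if vb > 0 else 0.0

# E.g. a step shrinking [0,10] x [0,10] to [1,6] x [1,6] removes 75% of the
# volume; one might stop iterating once this ratio, per unit of CPU work
# spent, drops below a chosen threshold.
ratio = reduction_ratio([(0.0, 10.0), (0.0, 10.0)], [(1.0, 6.0), (1.0, 6.0)])
```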
\section{Acknowledgments}
This research was supported by our universities,
by INRIA Rocquencourt, France,
and by the Natural Science
and Engineering Research Council of Canada.
\bibliographystyle{abbrvnat}
\section{Introduction}
Measurements of the cosmic microwave background (CMB) fluctuations by the {\it WMAP}~\cite{Hinshaw:2012aka} and {\it Planck}~\cite{Ade:2013zuv} Collaborations opened a new era in high-precision cosmology. These data are well described by the standard spatially-flat $\Lambda$CDM cosmology with a power-law spectrum of adiabatic scalar perturbations, and represent a great step towards the precise determination of cosmological parameters, in particular of the Hubble constant $H_0 = 100\, h~{\rm km~s}^{-1}~{\rm Mpc}^{-1}$, the density parameters $\Omega_b$ and $\Omega_{dm}$ for the baryon and dark matter fractions, and therefore of the whole matter density $\Omega_m = \Omega_b + \Omega_{dm}= 1-\Omega_\Lambda$. Recently the latest results of the {\it Planck} Collaboration were published \cite{Planck:2015xua}, based on the full-mission {\it Planck} data. They are in excellent agreement with the 2013 data \cite{Ade:2013zuv} but with improved precision.
The {\it Planck} data imply a rather low value for the Hubble constant, which is in tension with direct astronomical measurements of $h$. The {\it Planck} 2015 TT,TE,EE+lowP data, which we take in this Letter as a benchmark, determine the Hubble constant with 1 per cent precision, $h = 0.6727\pm 0.0066$ \cite{Planck:2015xua}.
Direct astronomical measurements of the Hubble constant indicate larger values. The analysis \cite{Riess:2011yx} of the Hubble Space Telescope (HST) data, based on over 600 cepheids in host galaxies and 8 samples of SNe Ia, yields $h = 0.738 \pm 0.024$, including both statistical and systematic errors. An independent analysis of the Carnegie Hubble program \cite{Freedman:2012ny}, using the Spitzer Space Telescope data for calibration purposes, leads to $h = 0.743 \pm 0.021$. Both these results are discordant with the Planck result at about the 2.5$\sigma$ level. Other astronomical estimates also typically imply high values of the Hubble constant. For example, the analysis of the gravitational lensing time delay measurements of the system RXJ1131-1231 implies $h = 0.787 \pm 0.045$ \cite{Suyu:2012aa}.
In addition to $h$, there is tension between other CMB-derived observables and their direct low-redshift measurements.
The {\it Planck} results show a tension between the cosmological constraints on $\sigma_8$ and $\Omega_m$ from the CMB \cite{Ade:2013lmv,Ade:2015fva} and from clusters as cosmological probes.
Cluster data prefer lower values of these observables, deviating at more than the $2\sigma$ level, see e.g. Refs.~\cite{Vikhlinin:2008ym,Bohringer:2014ooa}.
Recently Baryon Acoustic Oscillations (BAO) in the Ly$\alpha$ forest of BOSS DR11 quasars have been studied at redshift $z=2.34$~\cite{Font-Ribera:2014wya,Delubac:2014aqe}. The measured position of the BAO peak determines the angular distance, $D_A(z)$, and expansion rate, $H(z)$. The obtained constraints imply values of $D_A$ and $H$ that are, respectively, 7\% lower and 7\% higher than the predictions of a flat $\Lambda$CDM cosmological model with the best-fit Planck parameters. The significance of this discrepancy is approximately $2.5\sigma$~\cite{Delubac:2014aqe}.
The tension between the CMB-based determination of several observables by the {\it Planck} Collaboration and direct low-$z$ measurements is intriguing and deserves attention. The cause of the discrepancy may lie in some calibration errors. On the other hand, it may hint at a deficiency of the standard $\Lambda$CDM paradigm. In this paper we show that this discrepancy may be resolved if a certain fraction of dark matter is unstable. Decaying Dark Matter (DDM) models have been considered previously, see e.g. the most recent Refs.~\cite{Audren:2014bca,Blackadder:2014wpa}, which derive stringent constraints on the DDM decay width $\Gamma$. However, in these papers it was assumed that the whole of DM is susceptible to decay, with the conclusion that the decay time must be larger than 100 Gyr or so. We instead assume that dark matter consists of two fractions, the stable dark matter being dominant, while a subdominant unstable part decays between recombination and the present epoch.
\section{Decaying Dark Matter}
\paragraph{Planck constraints.}
To ensure that our model fits the Planck data, we accept the Planck-derived values for all cosmological parameters relevant at recombination. In particular, this means that the sum of the initial densities of the stable and decaying components of dark matter is fixed and, formally redshifted to the present moment, is given by the Planck value $\omega_{sdm} + \omega_{ddm} = 0.1198$.
In our model we vary the initial fraction of decaying component in the cosmological mass density
\begin{equation}
F \equiv \frac{ \omega_{ddm}}{\omega_{sdm} + \omega_{ddm}}.
\end{equation}
We assume that decay occurs into invisible massless particles and does not produce too many photons.
Alternatively, one can consider a scenario when dark matter consists of two particle species with masses
$M+\mu$ and $M-\mu$, and the heavier component decays into the lighter one with emission of invisible
massless particles. In this case the dark mass fraction disappearing due to decay is equivalent to $F=\mu/M$.
Throughout the paper we normalize the width of the decaying component $\Gamma$ to km/s/Mpc, i.e.
in the same units as $H_0$.
$\Gamma$ is another independent cosmological parameter in our model which we also vary for fitting the data.
It is bounded from above by the requirement that the unstable fraction does not decay substantially
before the last scattering, which would measurably affect the CMB. Hence, we take $\Gamma < 5000$, in which
range the observed CMB spectra are not altered by decays.
Furthermore, we require that the angular diameter distance to the last scattering surface is the same for all values of the parameters; namely, we fix the sound horizon angle $100\,\theta_s$ to the Planck value $1.04077$. This determines the Hubble parameter $h$ as a function of $F$ and $\Gamma$ and guarantees that the derived CMB spectra in our model are identical (at high $l$) to the best-fit Planck spectrum for all values of the parameters. The resulting $h$ as a function of $\Gamma$ is shown in Fig.~\ref{fig:G_F} for different values of $F$. Let us remark also that for the choice of parameters as in Fig.~\ref{fig:G_F},
the age of the Universe
$t_0 = \frac13 H_0^{-1} \Omega_\Lambda^{-1/2} \ln\big[(1+\Omega_\Lambda^{1/2})/(1-\Omega_\Lambda^{1/2}) \big]$
remains nearly the same as predicted by {\it Planck}, $t_0 \approx 13.8$~Gyr,
since increasing of $H_0$ is compensated by increasing of dark energy fraction $\Omega_\Lambda$.
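As a quick sanity check of this formula (our own snippet; the parameter values $h = 0.6727$ and $\Omega_\Lambda = 0.6844$ are representative Planck 2015 numbers, and $1/H_0 = 9.778\,h^{-1}$ Gyr is the standard unit conversion):

```python
# Numerical check of t0 = (1/3) H0^{-1} Omega_L^{-1/2}
#                         * ln[(1 + sqrt(Omega_L)) / (1 - sqrt(Omega_L))]
import math

h = 0.6727
omega_lambda = 0.6844
hubble_time_gyr = 9.778 / h                     # 1/H0 in Gyr

sqrt_ol = math.sqrt(omega_lambda)
t0 = (hubble_time_gyr / (3.0 * sqrt_ol)) * math.log((1 + sqrt_ol) / (1 - sqrt_ol))
# t0 comes out near 13.8 Gyr, consistent with the Planck value quoted above.
```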
\begin{figure}
\includegraphics[width=0.48\textwidth]{G_F.pdf}
\caption{
Hubble parameter $h$ as a function of DM decay width $\Gamma$ for different values of the DDM fraction $F$.
}
\label{fig:G_F}
\end{figure}
Relevant cosmological calculations have been carried out using the CLASS Boltzmann code~\cite{Lesgourgues:2011re,Blas:2011rf}. The parameter space is explored using the Markov Chain Monte-Carlo technique with the Monte Python package~\cite{Audren:2012wb}. We verified that all CMB spectra are identical at $l \agt 40$. At smaller $l$ the spectra somewhat deviate because the cosmological constant in our model is typically larger as compared to the standard $\Lambda$CDM (we consider spatially flat Universe only). However, corresponding changes are smaller than the cosmic variance. Therefore we do not constrain model parameters using low $l$ Planck data and we use supernova data instead.
\begin{figure}
\vspace{0.2cm}
\hspace{-0.5cm}\includegraphics[width=0.50\textwidth]{L4GF.pdf}
\caption{One and two sigma likelihood contours for our model parameters. Solid and dashed lines correspond to a dataset consisting of JLA sample of SN Ia and HST measurements of $h$, on top of the best fit Planck model parameters. Addition of Planck cluster data results in much narrower shaded area. }
\label{fig:L}
\end{figure}
\paragraph{Adding supernova and HST constraints.}
For fitting to supernovae observations we use the JLA~\cite{Betoule:2014frx} compilation composed of 740 SN Ia. This is the largest data set to date, containing samples from low redshift $z \approx 0.02$ to a large one, $z \approx 1.3$. The data were obtained from the joint analysis of SDSS II and SNLS, improving the analysis by means of a recalibration of the light curve fitter SALT2 and in turn reducing possible systematic errors. For ``standardization'' of the SN data the linear model for the distance modulus $\mu$ is employed, with four nuisance parameters in the distance estimates. All necessary data for the analysis were retrieved from~\cite{JLA}. The resulting best-fit values for all nuisance parameters in our cosmology do not differ notably from the values quoted in Ref.~\cite{Betoule:2014frx}, derived
for $\Lambda$CDM.
\begin{figure*}
\includegraphics[width=0.48\textwidth,angle=0]{H_z_BAO.pdf}
\includegraphics[width=0.48\textwidth,angle=0]{BAO_Da.pdf}
\caption{
Hubble parameter $h(z)$ (left panel) and angular diameter distance $D_A$ (right panel). Model curves are presented for fixed $\Gamma = 2000$ and several values of $F$. Points at non-zero redshift $z$ are the SDSS BAO data. HST measurement at $z=0$ is also shown with the symbol size comparable to the errorbars.
}
\label{fig:BAO}
\end{figure*}
We further constrain our model using the determination of the Hubble parameter with the HST~\cite{Riess:2011yx}. The resulting one and two sigma likelihood contours in the plane of $\Gamma$ and $F$ are shown in Fig.~\ref{fig:L} by solid and dashed lines. We see that the base $\Lambda$CDM with $\Gamma = F = 0$ is outside of the 2$\sigma$ contours in our model. The derived likelihood for the Hubble parameter corresponds to $h = 0.716 \pm 0.02$ at one $\sigma$. Therefore, with a fraction of decaying dark matter, the Planck data on CMB anisotropies, the supernova data, and the HST data can all be reconciled.
\paragraph{DDM and BAO.}
We now turn to the data on Baryon Acoustic Oscillations.
The measurement of the characteristic scale of BAO in the correlation function of different matter distribution tracers provides a powerful tool to probe the cosmic expansion and a convincing method for setting cosmological constraints. The BAO peak in the correlation function at a redshift $z$ appears at the angular separation $\Delta \theta = r_d/(1 + z)D_A(z)$, where $D_A$ is the angular diameter distance and $r_d = r_s(z_d)$ is the sound horizon at the drag redshift, i.e. at the epoch when baryons decoupled from photons. The BAO feature also appears at the redshift separation $\Delta z = r_d/D_H$, where $D_H \equiv c/H(z)$. Therefore, a measurement of the BAO peak position at some $z$ constrains the combinations of cosmological parameters that determine $D_H/r_d$ and $D_A/r_d$ at that redshift.
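The way these observables follow from a background cosmology can be sketched as follows (a minimal stand-alone illustration, not the CLASS/Monte Python pipeline used in our analysis; the parameter values, including the assumed sound horizon $r_d = 147.3$ Mpc, are only representative):

```python
# BAO observables for flat LambdaCDM: H(z), D_A(z), and the peak
# separations Delta theta = r_d / ((1+z) D_A) and Delta z = r_d / D_H.
import math

C_KM_S = 299792.458          # speed of light, km/s
H0 = 67.27                   # km/s/Mpc
OMEGA_M = 0.3156
OMEGA_L = 1.0 - OMEGA_M
R_D = 147.3                  # assumed sound horizon at the drag epoch, Mpc

def hubble(z):
    """H(z) in km/s/Mpc for flat LambdaCDM (matter + Lambda)."""
    return H0 * math.sqrt(OMEGA_M * (1 + z) ** 3 + OMEGA_L)

def angular_diameter_distance(z, steps=10000):
    """D_A(z) in Mpc: comoving distance (trapezoidal rule) over (1 + z)."""
    dz = z / steps
    comoving = sum(0.5 * dz * (C_KM_S / hubble(i * dz) + C_KM_S / hubble((i + 1) * dz))
                   for i in range(steps))
    return comoving / (1 + z)

def bao_scales(z):
    """Angular and redshift separations of the BAO peak at redshift z."""
    d_a = angular_diameter_distance(z)
    d_h = C_KM_S / hubble(z)
    return R_D / ((1 + z) * d_a), R_D / d_h     # (Delta theta, Delta z)

dtheta, dz = bao_scales(2.34)   # the Ly-alpha forest redshift discussed here
```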
Recently, independent constraints on $H\, r_d$ and $D_A/r_d$ were obtained using SDSS/BOSS data at $z = 0.35$ \cite{Chuang:2011fy,Xu:2012fw}, $z = 0.57$ \cite{Kazin:2013rxa,Anderson:2013zyy}, and $z = 2.34$ \cite{Delubac:2014aqe}. These data are plotted in Fig.~\ref{fig:BAO}. Note that the derived constraints for $H(z)$ and $D_A$ are not independent; the correlation coefficient is 0.5. To avoid cluttering in displaying results obtained by different authors at the same redshift, in the right panel we plot the results of Refs. \cite{Xu:2012fw,Kazin:2013rxa,Anderson:2013zyy,Delubac:2014aqe} for $D_A/r_d$, while in the left panel the results of Refs. \cite{Chuang:2011fy,Anderson:2013zyy,Delubac:2014aqe} for $h$ are presented. Again, the solid line corresponds to the $\Lambda$CDM model with the best-fit {\it Planck} parameters.
Two other models, $F=0.1$ and $F=0.2$ both with $\Gamma = 2000$, are also shown.
We see that the data systematically deviate from the base $\Lambda$CDM.
Though at $z<1$ each deviation is about 1$\sigma$ (and therefore is not considered as a problem),
they are all in the direction of the DDM models, except for the result of Ref.~\cite{Anderson:2013zyy} at $z=0.57$.
We repeated the likelihood analysis of the previous subsection with the BAO data added. We used the BOSS BAO likelihoods included in the Monte Python package~\cite{Audren:2012wb}, latest release 2.1. The result is similar to the one presented in Fig.~\ref{fig:L}, but with the one and two sigma contours shifted down by about a factor of two. However, the result clearly depends upon the chosen dataset. In Ref.~\cite{Aubourg:2014yra} the DDM model was analyzed with the inclusion of the latest BAO results only, at $z=0.57$ \cite{Anderson:2013zyy} and $z = 2.34$ \cite{Delubac:2014aqe}, with pessimistic conclusions. In Fig.~\ref{fig:BAO} we can see the origin of this conclusion as well. DDM helps to ease the tension at $z = 2.34$ both for $H(z)$ and
$D_A$, which are at the 2.5$\sigma$ level compared to the predictions of the base $\Lambda$CDM. However, the results of~\cite{Anderson:2013zyy} at $z=0.57$, which are also discrepant at the 1$\sigma$ level, behave differently. While DDM is a better fit for $H(z)$, it is not so for $D_A$; the latter is represented by the upper (red) datapoint at $z=0.57$ in the right panel of Fig.~\ref{fig:BAO}. Overall, DDM does not help much here. As Ref.~\cite{Planck:2015xua} describes, at present it is not clear whether the discrepancy at $z = 2.34$ is caused by systematics in the Ly$\alpha$ BAO measurements (which are more complex and less mature than galaxy BAO measurements) or is an indicator of new physics.
\paragraph{DDM and cluster counts.}
The Decaying Dark Matter model is capable of resolving the tension between the base $\Lambda$CDM model and the cluster data as well \cite{Aoyama:2014tga}. This is displayed in Fig.~\ref{fig:sigma8} in the $\sigma_8$--$\Omega_m$ parameter plane. The base $\Lambda$CDM corresponds to the error cross marked PLANCK CMB. Shaded areas correspond to the parameter regions allowed (at $2\sigma$) by the {\it Planck} cluster data \cite{Ade:2013lmv,Ade:2015fva} and by the extended ROSAT-ESO Flux Limited X-ray Galaxy Cluster Survey (REFLEX II) \cite{Bohringer:2014ooa}. We should also note that the earlier results obtained in \cite{Vikhlinin:2008ym}, while in agreement with \cite{Bohringer:2014ooa}, are even farther away from the Planck base $\Lambda$CDM model. In the DDM model, when $F$ and $\Gamma$ are varied, $\sigma_8$ and $\Omega_m$ closely follow the line marked DDM in Fig.~\ref{fig:sigma8} and cross the region allowed by the cluster data. The white circle on this line represents a model with $F=0.1$ and $\Gamma = 2000$. With smaller values of $F$ and/or $\Gamma$ the dot representing a model moves to the right, closer to the base $\Lambda$CDM model.
We have added the Planck constraints \cite{Ade:2013lmv} from Sunyaev-Zeldovich cluster counts, $\sigma_8\left(\Omega_m/0.27\right)^{0.3} = 0.78 \pm 0.01$, to our likelihood analysis (without BAO). The result is shown in Fig.~\ref{fig:L} as the shaded area. (The $1\sigma$ area actually continues up to $F \approx 0.25$ and $\Gamma \approx 100$, but this is unresolved at the scale of this figure.) Now the likelihood of the base $\Lambda$CDM is vanishingly small compared to the best-fit DDM models. However, as the Planck collaboration concluded on the cluster counts issue~\cite{Ade:2015fva}, it is unclear whether this tension arises from low-level systematics in the astrophysical studies, or represents the first glimpse of something more important.
We would say again that the hypothesis of decaying dark matter may help to resolve this tension as well.
In fact, from the joint fit shown in Fig. ~\ref{fig:L} one can see that the issues of $H_0$ and $\sigma_8$
can be resolved with the same parameter values of the DDM model.
\begin{figure}
\includegraphics[width=0.48\textwidth,angle=0]{sigma8.pdf}
\caption{
$\Omega_m$ and $\sigma_8$ derived from cluster counts and from CMB. Line marked DDM shows trend of these parameters when $F$ and $\Gamma$ are varied in our model. White circle represents a model with $F=0.1$ and $\Gamma = 2000$ as an example.
}
\label{fig:sigma8}
\end{figure}
\section{Conclusions}
Cosmological parameters deduced from the {\it Planck} measurements of the CMB anisotropies with unprecedented accuracy are in some tension with direct astronomical measurements of various parameters at low redshifts. We have shown that the {\it Planck}-inspired $\Lambda$CDM cosmology can be reconciled both with the HST measurements and with the cluster data within the hypothesis of Decaying Dark Matter. A joint fit to the {\it Planck}, supernova, HST and Planck cluster data shows that if the dark matter decayed between recombination and the present time, then the unstable fraction should be about 10 per cent at the recombination epoch. The situation with the BAO discrepancies is less clear at present and we should wait to see in which direction the intrigue will develop.\\[2mm]
\noindent
\paragraph*{Note Added.}
After our paper was submitted to arXiv, Ref.~\cite{Enqvist:2015ara} appeared,
in which DDM was also suggested as a resolution of the possible tension between the CMB and
weak lensing determinations of $\sigma_8$. \\[3mm]
{\bf Acknowledgements}
\vspace{2mm}
\noindent
The work of Z.B. was supported in part by the MIUR
grant PRIN No. 2012CPPYP7 ``Astroparticle Physics" and in part by
Rustaveli National Science Foundation grant No. DI/8/6-100/12.
A.D. and I.T. acknowledge support of the Russian Federation Government Grant No. 11.G34.31.0047.
Numerical part of the work has been done at the cluster of the Theoretical Division of INR RAS.
\section{Introduction}
The original motivation of this paper comes from a partnership of the authors with Eurotunnel, the company operating the tunnel under the Channel. Eurotunnel is currently facing increasing congestion due to the trucks waiting in the terminal before being loaded onto the shuttles. A way to address this issue consists in scheduling the shuttles so that the trucks do not wait too long in the terminal.
In railway transportation, a traditional point of view considers that the demand can be smoothed by offering sufficiently many departures over a day. Timetabling is then guided by other considerations, such as robustness, maintainability, or rolling stock. For instance, Swiss, Dutch, and German companies usually design periodic timetables, which present many advantages~\cite{cordone2011optimizing, kroon2009new}. The way to optimize this kind of timetable has been the topic of much research, initiated by Serafini and Ukovich \cite{serafini1989mathematical} and by Voorhoeve \cite{voorhoeve1993rail} explicitly in the railway context; see \cite{kroon2003variable, liebchen2003finding, liebchen2002case, nachtigall1996genetic} for further works. In the context of periodic timetables, a way to adapt the schedules to a demand with strong variations consists in inserting new departures at peak hours and deleting departures when the demand is low.
Since the trip of the trucks in the tunnel is a small part of their whole journey, it is a reasonable approximation to assume that they cannot choose their arrival time in the terminal. Moreover, increasing the size of the fleet is not always doable in practice (the shuttles are expensive and the tunnel is used by other vehicles, which limits the maximal number of shuttle trips over a day). We thus face a different problem from the one addressed in the aforementioned literature: the demand is assumed to be fixed and nonelastic to the departure times, and the number of shuttles cannot be adjusted to the demand. Given a fleet of shuttles and a demand of transportation known in advance, the problem consists in designing a schedule for the shuttles that minimizes the waiting time of the users. There are timetabling problems with similar features, see \cite{barrena2014exact, cacchiani2008column, cacchiani2010non, caprara2002modeling, cai1998greedy, ingolotti2006new} for instance, but these articles are at a more macroscopic level than what we require to solve our problem. Moreover, in the present work, the schedules have to be designed in an offline manner. In a transportation context, and especially for Eurotunnel, computing the schedule in advance is mandatory.
We study several versions of the problem, mainly according to two features. The first feature is whether the shuttles are allowed to come back at the terminal after having realized the trip. The second feature is the objective function. We consider in turn the following two quantities to be minimized: the maximum waiting time and the average waiting time. The first objective is perhaps a fairer one regarding the users, while the second one is relevant for the global efficiency.
It seems that the question we address in the present paper is new. Moreover, it may be relevant for any situation where a demand, known in advance, has to be processed by batches and for which we want to minimize the processing time. This kind of situation is often met in the chemical industry. An example whose motivation is very close to ours considers a test to be performed on samples, which arrive continuously~\cite{brauner2007scheduling}. The test takes a certain amount of time and can be performed on several samples simultaneously. The question is then to determine the test schedule that minimizes the processing time.
We propose efficient algorithms for the different versions. ``Efficient'' here means ``theoretically efficient'', according to their time complexity and the performance guarantee. It also means ``practically efficient'' for almost all cases, as shown by numerical experiments conducted on real-world instances. It might also be worth noting that, depending on the version considered, the proof techniques rely on various fields of optimization (convexity, Karush-Kuhn-Tucker conditions, binary search, domination results, shortest paths in finite graphs,...).
\section{Model}
\subsection{The problems}
We are given a fleet of $S$ shuttles, for which departure times have to be determined. All shuttles have a capacity $C\geq 0$ and are situated in the same loading terminal at the beginning of the day. The users are infinitesimal and arrive continuously in the terminal over a finite period of time, modeled as the interval $[0,T]$, following the cumulative nondecreasing function $D:[0,T]\rightarrow\mathbb{R}_+$, where $D(t)$ is the total number of users arrived in the terminal during the interval $[0,t]$. We assume throughout the paper that $D(T)>0$. The shuttles have to carry the users over a fixed trip.
When the users arrive in the terminal, they enter a queue. This queue closes when all the users who will leave with the next shuttle have arrived in the queue, and users can enter a new queue only once the previous one is closed (this is how it works at Eurotunnel). When a queue is closed, the users in that queue can start boarding the shuttle. The process is illustrated in Figure~\ref{fig:loading}. Loading a shuttle with a total of $x$ users takes a time $\nu x$. Note that setting $\nu$ to zero allows to model the case where the users do not have to wait for the last user before boarding. Even if no users arrive strictly after time $T$, loading and departures are allowed after that instant.
\begin{figure}
\begin{center}
\includegraphics[width=12cm]{terminal.png}
\caption{The process of arrival, loading, and departure in the terminal.}\label{fig:loading}
\end{center}
\end{figure}
Two possibilities are considered regarding the shuttles. Either return is not allowed: once the shuttles leave, they never come back to the terminal; or it is allowed: once the shuttles leave, they come back to the terminal after a time equal to $\pi\geq 0$. Two objective functions to be minimized are considered: the maximum waiting time and the average waiting time.
We have thus four problems:
\begin{itemize}
\item \textup{P$^{\max}_{\mbox{\textup{\tiny no return}}}$}, which consists of not allowing return and minimizing the maximum waiting time.
\item \textup{P$^{\ave}_{\mbox{\textup{\tiny no return}}}$}, which consists of not allowing return and minimizing the average waiting time.
\item \textup{P$^{\max}_{\mbox{\textup{\tiny return}}}$}, which consists of allowing return and minimizing the maximum waiting time.
\item \textup{P$^{\ave}_{\mbox{\textup{\tiny return}}}$}, which consists of allowing return and minimizing the average waiting time.
\end{itemize}
Practical constraints impose that overtaking is not possible, and thus, when return is allowed, the departure order of the shuttles remains the same over the whole period. It is nevertheless possible to have simultaneous trips. This is an approximation in the case of Eurotunnel (we neglect the security distance and the length of the shuttles). For other situations, such as the chemical application mentioned in the introduction, it may match what is met in practice.
\subsection{The demand}
Throughout the paper, we assume that $D(\makebox[1ex]\cdot)$ is upper semicontinuous. This allows us to model discontinuities in the arrival process (batches of users arriving simultaneously). Yet, a weaker requirement could lead to mathematical difficulties, e.g., nonexistence of optimal solutions even for very simple cases.
The {\em pseudo-inverses} of $D(\makebox[1ex]\cdot)$, defined by
$$\tau\colon y\in[0,D(T)]\longmapsto \left\{\begin{array}{ll}\inf\left\{t\in[0,T]\colon D(t)>y\right\}\in[0,T]&\mbox{if}\;y<D(T)\\ T&\mbox{otherwise}\end{array}\right.$$
and
$$\bar\tau\colon y\in[0,D(T)] \longmapsto \inf\left\{t\in[0,T]\colon D(t)\geq y\right\}\in[0,T],$$
play an important role in the paper. Note that they are nondecreasing functions and that $\tau(y)\geq\bar\tau(y)$ for all $y\in[0,D(T)]$. The times $\tau(y)$ and $\bar\tau(y)$ are respectively interpreted as the arrival time of the first user after the first $y$ users and the arrival time of the last of these first $y$ users. Since $D(\makebox[1ex]\cdot)$ is upper semicontinuous, we have the following properties, with proofs for the sake of completeness.
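For intuition, the following small Python sketch (ours, purely illustrative) evaluates both pseudo-inverses numerically for a step demand in which a batch of 5 users arrives at $t=1$ and a batch of 3 more at $t=2$; at the plateau value $y=5$ the two pseudo-inverses differ, with $\bar\tau(5)=1$ and $\tau(5)=2$:

```python
# Pseudo-inverses of an upper semicontinuous step demand on [0, T], T = 3,
# approximated on a uniform grid (exact here because the jump points fall
# on grid points).

T = 3.0

def D(t):
    # Cumulative arrivals; upper semicontinuous: D(1) = 5, D(2) = 8.
    if t < 1:
        return 0.0
    if t < 2:
        return 5.0
    return 8.0

def tau(y, grid=30000):
    """inf{t : D(t) > y}, and T when y = D(T): first user after the first y."""
    if y >= D(T):
        return T
    return min(i * T / grid for i in range(grid + 1) if D(i * T / grid) > y)

def tau_bar(y, grid=30000):
    """inf{t : D(t) >= y}: arrival of the last user among the first y."""
    return min(i * T / grid for i in range(grid + 1) if D(i * T / grid) >= y)

# At the plateau value y = 5: tau_bar(5) = 1 (the 5th user has arrived by
# t = 1), while tau(5) = 2 (the next user only comes with the second batch).
```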
\begin{lemma}\label{lem:pseudo}
We have $D(\tau(y))\geq D(\bar\tau(y))\geq y$ for every $y\in[0,D(T)]$.
\end{lemma}
\begin{proof}
Since $D(\makebox[1ex]\cdot)$ is nondecreasing, the first inequality is a direct consequence of the inequality $\tau(y)\geq\bar\tau(y)$, which is obvious from the definition. To prove the second inequality, consider $(t_n)$ a nonincreasing sequence converging toward $\bar\tau(y)$ such that $D(t_n)\geq y$ for all $n$. By the upper semicontinuity of $D(\makebox[1ex]\cdot)$, we get then $D(\bar\tau(y))\geq y$.
\end{proof}
\begin{lemma}\label{lem:semicont}
$\tau(\makebox[1ex]\cdot)$ is upper semicontinuous and $\bar\tau(\makebox[1ex]\cdot)$ is lower semicontinuous.
\end{lemma}
\begin{proof}
We first prove that $\tau(\makebox[1ex]\cdot)$ is upper semicontinuous. Let $\alpha$ be some real number such that $\{y\colon\tau(y)<\alpha\}$ is nonempty and take from it an arbitrary element $y_0$. We want to prove that $\{y\colon\tau(y)<\alpha\}$ is open for the induced topology on $[0,D(T)]$. If $y_0=D(T)$, then this set is $[0,D(T)]$ and thus open. Otherwise, by the definition of $\tau(\makebox[1ex]\cdot)$, we know that there exists $t_0<\alpha$ such that $D(t_0)>y_0$. For any element $y$ in $[0,D(t_0))$, we have $\tau(y)\leq t_0$, and thus $[0,D(t_0))$ is an open set containing $y_0$ and fully contained in $\{y\colon\tau(y)<\alpha\}$. The set $\{y\colon\tau(y)<\alpha\}$ is thus an open set of $[0,D(T)]$ for every real number $\alpha$, which precisely means that $\tau(\makebox[1ex]\cdot)$ is upper semicontinuous.
We now prove that $\bar\tau(\makebox[1ex]\cdot)$ is lower semicontinuous. Let $\alpha$ be some real number. Consider a converging sequence $(y_n)$ such that $\bar\tau(y_n)\leq\alpha$ for all $n$. For every fixed $n$, there thus exists a sequence $(t_{n,k})$ indexed by $k$ such that $D(t_{n,k})\geq y_n$ and $t_{n,k}\leq\alpha+\frac 1 k$ for all $k$. Now, consider the sequence $(t_{n,n})$. It is such that $D(t_{n,n})\geq y_n$ and $t_{n,n}\leq\alpha+\frac 1 n$ for all $n$. Since $[0,T]$ is compact, we can extract a nonincreasing converging subsequence $(t_n)$ from the sequence $(t_{n,n})$ such that $D(t_n)$ converges towards some real number no smaller than $\lim_{n\to\infty}y_n$ with $t_n\leq\alpha+\frac 1 n$ for all $n$. This implies that $\bar\tau(\lim_{n\to\infty}y_n)\leq \alpha$, which means that $\bar\tau(\makebox[1ex]\cdot)$ is lower semicontinuous.
\end{proof}
\begin{lemma}\label{lem:increas_semicont}
If $D(\makebox[1ex]\cdot)$ is increasing, then $\tau(y)=\bar\tau(y)$ for every $y\in[0,D(T)]$.
\end{lemma}
\begin{proof}
If $\bar\tau(y)=T$, then the equality is obvious. We can thus assume that $\bar\tau(y)<T$.
For every $t>\bar\tau(y)$, we have $D(t)>D(\bar\tau(y))$ since $D(\makebox[1ex]\cdot)$ is increasing, and Lemma~\ref{lem:pseudo} implies that $D(t)>y$. By definition of $\tau(\makebox[1ex]\cdot)$, we have $\tau(y)\leq \bar\tau(y)$. The reverse inequality being clear from the definitions, we get the result.
\end{proof}
\subsection{Mathematical model}
For the four problems \textup{P$^{\max}_{\mbox{\textup{\tiny no return}}}$}{}, \textup{P$^{\ave}_{\mbox{\textup{\tiny no return}}}$}{}, \textup{P$^{\max}_{\mbox{\textup{\tiny return}}}$}{}, and \textup{P$^{\ave}_{\mbox{\textup{\tiny return}}}$}{}, a feasible solution is characterized by two nondecreasing sequences of nonnegative real numbers $\boldsymbol{d}=d_1,d_2,\ldots$ and $\boldsymbol{y}=y_1,y_2,\ldots$. The $d_j$'s are the successive departure times of the shuttles, and the $y_j$'s are their successive cumulative loads: the $j$th departure occurs at time $d_j$ with a load of $y_j-y_{j-1}$ users, where we set $y_0=0$.
Denote by $g^{\max}(\boldsymbol{d},\boldsymbol{y})$ the value of the maximum waiting time and by $g^{\ave}(\boldsymbol{d},\boldsymbol{y})$ the value of the average waiting time. Note that $\tau(y_{j})$ can be interpreted as the arrival time of the first user leaving with the ``$(j+1)$th shuttle''. These objective functions admit the explicit expressions
\begin{eqnarray*}
g^{\max}(\boldsymbol{d},\boldsymbol{y})&=&\max_{j\colon y_j>y_{j-1}}\big(d_j-\tau(y_{j-1})\big),\\
g^{\ave}(\boldsymbol{d},\boldsymbol{y})&=&\frac 1 {D(T)} \sum_j\int_{y_{j-1}}^{y_j}(d_j-\bar\tau(y))dy,
\end{eqnarray*}
where the indices $j$ range over all departures.
Problems \textup{P$^{\max}_{\mbox{\textup{\tiny no return}}}$}{} and \textup{P$^{\ave}_{\mbox{\textup{\tiny no return}}}$}{} can be written under the following form,
\begin{equation}\label{Pnoreturn}\tag{P$_{\mbox{\textup{\tiny no return}}}$}
\begin{array}{rl@{\hspace{1cm}}rr}
\operatorname{Min} & g(\boldsymbol{d},\boldsymbol{y}) & \\
\mbox{s.t.} & y_j-y_{j-1}\leq C & j=1,\ldots,S & \textup{(i)}\\
& y_{j-1}\leq y_j & j=1,\ldots,S & \textup{(ii)} \\
& d_{j-1}\leq d_j & j=2,\ldots,S& \textup{(iii)}\\
& y_S=D(T) & & \textup{(iv)}\\
& \bar\tau(y_j)+\nu(y_j-y_{j-1})\leq d_j & j=1,\ldots,S & \textup{(v)}\\
& y_0=0, & &
\end{array}
\end{equation}
where $g(\makebox[1ex]\cdot)$ is either $g^{\max}(\makebox[1ex]\cdot)$ or $g^{\ave}(\makebox[1ex]\cdot)$. Constraint (i) ensures that the total amount of users in any shuttle does not exceed the shuttle capacity. Constraint (ii) ensures that the cumulative loads $y_j$ are nondecreasing. Constraint (iii) ensures that the shuttles do not overtake. Constraint (iv) ensures that every user eventually leaves the terminal in a shuttle. Constraint (v) ensures that a shuttle departs only once its last user has arrived and the loading is over.
Problems \textup{P$^{\max}_{\mbox{\textup{\tiny no return}}}$}{} and \textup{P$^{\ave}_{\mbox{\textup{\tiny no return}}}$}{} always admit optimal solutions when they are feasible, i.e., when $CS\geq D(T)$. Indeed, $\bar\tau(y_j)+\nu(y_j-y_{j-1})$ is upper-bounded by $T+\nu C$ and adding a constraint $d_j\leq T+\nu C$ for all $j$ does not change the optimal value; since $\bar\tau(\makebox[1ex]\cdot)$ is lower semicontinuous (Lemma~\ref{lem:semicont}), the set of feasible solutions of the optimization problem obtained with this new constraint is compact; its objective function is lower semicontinuous (and even continuous in the case of \textup{P$^{\ave}_{\mbox{\textup{\tiny no return}}}$}{}).
The following properties for \textup{P$^{\max}_{\mbox{\textup{\tiny no return}}}$}{} and \textup{P$^{\ave}_{\mbox{\textup{\tiny no return}}}$}{} will be useful in some proofs.
\begin{claim}\label{claim:change_obj}
Replacing $g^{\max}(\makebox[1ex]\cdot)$ by $\max_{j}\big(d_j-\tau(y_{j-1})\big)$ does not change the optimal value of \textup{P$^{\max}_{\mbox{\textup{\tiny no return}}}$}{}.
\end{claim}
\begin{proof}
Let $(\boldsymbol{d},\boldsymbol{y})$ be a feasible solution of \textup{P$^{\max}_{\mbox{\textup{\tiny no return}}}$}{}. We are going to build a feasible solution $(\boldsymbol{d}',\boldsymbol{y})$ (with the same $\boldsymbol{y}$) such that
\begin{equation}\label{eq:comp}
g^{\max}(\boldsymbol{d},\boldsymbol{y})\geq g^{\max}(\boldsymbol{d}',\boldsymbol{y})=\max_{j}\big(d'_j-\tau(y_{j-1})\big).
\end{equation}
We set $d_1'=\bar\tau(y_1)+\nu y_1$ and define inductively $d'_j=\max(d_{j-1}',\bar\tau(y_j)+\nu(y_j-y_{j-1}))$. We have $d_j'\leq d_j$ for all $j$ and it implies the inequality in \eqref{eq:comp}. Let us prove the equality of \eqref{eq:comp}:
if $\max_{j}\big(d'_j-\tau(y_{j-1})\big)$ is attained for a $\bar\jmath$ such that $y_{\bar\jmath-1}<D(T)$, then there exists $k\geq\bar\jmath$ such that $y_k>y_{k-1}=y_{\bar\jmath-1}$ and $d_k'\geq d_{\bar\jmath}'$, which means that the maximum is also attained for a $k$ such that $y_k>y_{k-1}$; and if $\max_{j}\big(d'_j-\tau(y_{j-1})\big)$ is attained for a $\bar\jmath$ such that $y_{\bar\jmath-1}=D(T)$, then there exists $\ell\leq \bar\jmath-1$ such that $y_{\ell-1}<y_{\ell}=y_{\bar\jmath-1}$ and by construction $d'_{\ell}=d'_{\ell+1}=\cdots=d'_S$ (since $y_{\ell}=y_{\ell+1}=\cdots=y_S=D(T)$), which means that the maximum is also attained for an $\ell$ such that $y_{\ell}>y_{\ell-1}$.
\end{proof}
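The inductive tightening of the departure times used in this proof, $d'_1=\bar\tau(y_1)+\nu y_1$ and $d'_j=\max\big(d'_{j-1},\bar\tau(y_j)+\nu(y_j-y_{j-1})\big)$, is easy to make explicit. The following sketch is only illustrative; the name \texttt{tight\_departures} is ours, and \texttt{tau\_bar} stands for any routine evaluating $\bar\tau$.

```python
def tight_departures(y, tau_bar, nu):
    # Earliest nondecreasing departure times for a given load profile
    # y_1 <= y_2 <= ... (with y_0 = 0):
    #   d'_j = max(d'_{j-1}, tau_bar(y_j) + nu*(y_j - y_{j-1})).
    d, prev_d, prev_y = [], 0.0, 0.0
    for yj in y:
        prev_d = max(prev_d, tau_bar(yj) + nu * (yj - prev_y))
        d.append(prev_d)
        prev_y = yj
    return d
```

With $\bar\tau$ the identity (demand arriving at unit rate) and $\nu=0.5$, the profile $y=(2,4)$ yields departures $(3,5)$, while $y=(2,2.5)$ yields $(3,3)$: the second departure is delayed so that the shuttles do not overtake.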
\begin{claim}\label{claim:remove_dj}
If $D(\makebox[1ex]\cdot)$ is increasing, then for either of \textup{P$^{\max}_{\mbox{\textup{\tiny no return}}}$}{} and \textup{P$^{\ave}_{\mbox{\textup{\tiny no return}}}$}{}, there is an optimal solution such that $d_j=\bar\tau(y_j)+\nu(y_j-y_{j-1})$ for all $j\in\{1,\ldots,S\}.$
\end{claim}
\begin{proof}
Let $(\boldsymbol{d},\boldsymbol{y})$ be an optimal solution (the argument does not yet depend on which objective function is used). Without loss of generality, we can assume that $d_1=\bar\tau(y_1)+\nu y_1$ and that for all $j\in\{2,\ldots,S\}$ we have \begin{equation}\label{eq:dd}d_j=\max\big(d_{j-1},\bar\tau(y_j)+\nu(y_j-y_{j-1})\big)\end{equation} (just redefine $d_j$ according to these equalities if necessary). When $\nu=0$, a straightforward induction on $j$ shows that we then always have $d_j=\bar\tau(y_j)$. We can thus assume that $\nu>0$.
Suppose for a contradiction that there is a $j$ such that $d_j>\bar\tau(y_j)+\nu(y_j-y_{j-1})$. Denote by $j_1$ the smallest index for which this inequality holds. We necessarily have $d_{j_1}=d_{j_1-1}$ (because of the equality~\eqref{eq:dd}). Denote by $j_0$ the smallest index $j< j_1$ such that $d_j=d_{j_1}$. Note that since $D(\makebox[1ex]\cdot)$ is increasing, the map $\bar\tau(\makebox[1ex]\cdot)$ is continuous: it is lower semicontinuous by Lemma~\ref{lem:semicont} and coincides with the upper semicontinuous map $\tau(\makebox[1ex]\cdot)$ by Lemma~\ref{lem:increas_semicont}.
For some small $\varepsilon>0$, we define $(\bar\boldsymbol{d},\bar\boldsymbol{y})$ as follows:
$$\bar y_j=\left\{\begin{array}{ll}
y_j-\varepsilon & \mbox{for $j=j_0,\ldots,j_1-1$} \\
y_j & \mbox{otherwise}
\end{array}\right.$$
and
$$\bar d_j=\left\{\begin{array}{ll}
\max\big(\bar d_{j-1},\bar\tau(\bar y_j)+\nu(\bar y_j-\bar y_{j-1})\big) & \mbox{for $j=j_0,\ldots,j_1$}\\
d_j & \mbox{otherwise,}
\end{array}\right.$$
where $\bar d_0=0$. We first check that $(\bar\boldsymbol{d},\bar\boldsymbol{y})$ is a feasible solution of \eqref{Pnoreturn}.
The definition of $j_1$ implies that $d_{j_0}>0$. Thus if $j_0=1$, we have $y_1>0$ and, for a small enough $\varepsilon$, the vector $\bar\boldsymbol{y}$ satisfies constraint (ii). Otherwise, we have $\bar\tau(y_{j_0-1})+\nu(y_{j_0-1}-y_{j_0-2})=d_{j_0-1}<d_{j_0}=\bar\tau(y_{j_0})+\nu(y_{j_0}-y_{j_0-1})$. It implies that $y_{j_0-1}<y_{j_0}$ (as otherwise the equality would imply that $y_{j_0-1}<y_{j_0-2}$). Thus, for a small enough $\varepsilon$, the vector $\bar\boldsymbol{y}$ satisfies constraint (ii). It also obviously satisfies constraint (iv).
For $j\in\{2,\ldots,j_1\}\cup\{j_1+2,\ldots,S\}$, checking $\bar d_{j-1}\leq \bar d_j$ is straightforward. The remaining case is $j=j_1+1$. A direct induction shows that $\bar d_j\leq d_j$ for $j\leq j_1-1$. Since $\bar\tau(y_{j_1})+\nu(y_{j_1}-y_{j_1-1})<\bar\tau(y_{j_1-1})+\nu(y_{j_1-1}-y_{j_1-2})$ (because $d_{j_1-1}=d_{j_1}$), for $\varepsilon$ small enough, we have $\bar d_{j_1-1}\geq \bar\tau(\bar y_{j_1})+\nu(\bar y_{j_1}-\bar y_{j_1-1})$. Here, we use the fact that $\bar\tau(\makebox[1ex]\cdot)$ is continuous. Thus $\bar d_{j_1}=\bar d_{j_1-1}$. Since we have $\bar d_{j_1-1}\leq d_{j_1-1}$ by the above induction, we finally obtain $\bar d_{j_1}\leq d_{j_1}\leq d_{j_1+1}=\bar d_{j_1+1}$. Therefore, $\bar\boldsymbol{d}$ satisfies constraint (iii).
Constraint (i) is satisfied for all $j$, except maybe for $j=j_1$. We have proved that $\bar d_{j_1}=\bar d_{j_1-1}$. Since $\bar d_{j_1-1}=\bar\tau(\bar y_{j'})+\nu(\bar y_{j'}-\bar y_{j'-1})$ for some $j'\leq j_1-1$, we have $\bar\tau(\bar y_{j_1})+\nu(\bar y_{j_1}-\bar y_{j_1-1})\leq\bar d_{j_1}=\bar\tau(\bar y_{j'})+\nu(\bar y_{j'}-\bar y_{j'-1})$, and thus $\nu(\bar y_{j_1}-\bar y_{j_1-1})\leq\nu(\bar y_{j'}-\bar y_{j'-1})\leq \nu C$. Therefore constraint (i) is also satisfied for $j=j_1$.
Since the constraint (v) is clearly satisfied, $(\bar\boldsymbol{d},\bar\boldsymbol{y})$ is a feasible solution of \eqref{Pnoreturn}.
A careful examination of the arguments used when we checked constraint (ii) shows that actually $\bar d_{j_0}< d_{j_0}$. The same induction as the one used when we checked constraint (iii) shows that $\bar d_{j_1-1}<d_{j_1-1}$. We have proved that $\bar d_{j_1-1}\geq\bar\tau(\bar y_{j_1})+\nu(\bar y_{j_1}-\bar y_{j_1-1})$. Thus $\bar d_{j_1}=\bar d_{j_1-1}$, and $\bar d_{j_1}<d_{j_1}$. We have
$$
\sum_{j=j_0}^{j_1}\int_{\bar y_{j-1}}^{\bar y_j}\big(\bar d_j-\bar\tau(u)\big)du\leq\int_{\bar y_{j_0-1}}^{\bar y_{j_1}}\big(\bar d_{j_1}-\bar\tau(u)\big)du<\int_{y_{j_0-1}}^{y_{j_1}}\big(d_{j_1}-\bar\tau(u)\big)du=\sum_{j=j_0}^{j_1}\int_{y_{j-1}}^{y_j}\big(d_j-\bar\tau(u)\big)du,$$ which is in contradiction with the optimality assumption. This settles the case of $g^{\ave}(\makebox[1ex]\cdot)$. The other case is dealt with similarly.
\end{proof}
Problems \textup{P$^{\max}_{\mbox{\textup{\tiny return}}}$}{} and \textup{P$^{\ave}_{\mbox{\textup{\tiny return}}}$}{} can be written almost identically under the following form. We use infinitely many variables since there is no \textit{a priori} reason to have a bounded number of departures, and there are indeed special cases for which there is no optimal solution with a finite number of departures. However, if $\pi>0$, we prove that any optimal solution of \textup{P$^{\max}_{\mbox{\textup{\tiny return}}}$}{} requires a finite number of departures, see Proposition~\ref{prop:finite_max}. The case of \textup{P$^{\ave}_{\mbox{\textup{\tiny return}}}$}{} remains open.
\begin{equation}\label{Preturn}\tag{P$_{\mbox{\textup{\tiny return}}}$}
\begin{array}{rl@{\hspace{1cm}}rr}
\operatorname{Min} & g(\boldsymbol{d},\boldsymbol{y}) & \\
\mbox{s.t.} & y_j-y_{j-1}\leq C & j=1,\ldots,+\infty & \textup{(i)}\\
& y_{j-1}\leq y_j & j=1,\ldots,+\infty & \textup{(ii)} \\
& d_{j-1}\leq d_j & j=2,\ldots,+\infty & \textup{(iii)}\\
& \displaystyle{\lim_{j\rightarrow+\infty}y_j=D(T)} & & \textup{(iv)}\\
& \bar\tau(y_j)+\nu(y_j-y_{j-1})\leq d_j & j=1,\ldots,+\infty & \textup{(v)}\\
& d_j+\pi+\nu(y_{j+S}-y_{j+S-1})\leq d_{j+S}& j=1,\ldots,+\infty & \textup{(vi)} \\
& y_0=0, & &
\end{array}
\end{equation}
where $g(\makebox[1ex]\cdot)$ is either $g^{\max}(\makebox[1ex]\cdot)$ or $g^{\ave}(\makebox[1ex]\cdot)$. Constraints (i), (ii), (iii), (iv), and (v) have the same meaning as for the previous problems. Constraint (vi) ensures that the time between two consecutive departures of a same shuttle is not smaller than the time required for a full trip plus the time needed to load the users.
In the model \eqref{Preturn}, the shuttles are not identified. Note however that their schedules can easily be recovered: the departure times of a shuttle $s$ are of the form
$$d_s,d_{s+S},d_{s+2S},\ldots$$ and the time at which the loading starts for a shuttle with departure time $d_j$ can be chosen to be $d_j-\nu(y_j-y_{j-1})$ (the loading starts as late as possible).
While it can be shown that problem \textup{P$^{\max}_{\mbox{\textup{\tiny return}}}$}{} always admits an optimal solution when it is feasible (see Proposition~\ref{prop:finite_max}), we were not able to settle the case of problem \textup{P$^{\ave}_{\mbox{\textup{\tiny return}}}$}{}.
\subsection{Computational model}
We assume that the following operations take constant time:
\begin{itemize}
\item Evaluation of $D(t)$ for any $t\in[0,T]$.
\item Integration of $D(\makebox[1ex]\cdot)$ between two values.
\item Evaluation of $\tau(y)$ and $\bar\tau(y)$ for any $y\in\mathbb{R}_+$.
\item Evaluation of $\sup\{y\colon\bar\tau(y)+\nu y\leq \alpha\}$ for any $\alpha\in\mathbb{R}_+$.
\end{itemize} Note that if $D(\makebox[1ex]\cdot)$ is piecewise affine with a natural description, as is usually the case in practice, these assumptions are easily satisfied. Moreover, we set as constants of the computational model the capacity $C$, the length of the period $T$, the cumulative demand $D(\makebox[1ex]\cdot)$, the loading rate $\nu$, and the return time $\pi$. The complexity functions will be expressed in terms of $S$ and the accuracy of the computed solution.
\section{Main results}\label{sec:mainresults}
This section presents our main findings. Many results state the existence of algorithms with a guarantee that the returned solution has a value close to the optimal value $OPT$ of the considered problem. Except for two easy results -- Corollary~\ref{cor:approx} and Proposition~\ref{prop:finite_max} -- all proofs are postponed to later sections.
We organize the results presented in that section in three subsections. The first subsection -- Section~\ref{subsec:one} -- deals with the special case where $D(\makebox[1ex]\cdot)$ is a constant function, i.e., when all users are in the loading terminal from the beginning of the period, and with returns allowed.
It seems to us that these results are also interesting in themselves, because they form a natural situation for which there is a very efficient algorithm. The second subsection -- Section~\ref{subsec:noret} -- deals with the general case where the shuttles are not allowed to come back, i.e., with the case covered by the problems \textup{P$^{\max}_{\mbox{\textup{\tiny no return}}}$}{} and \textup{P$^{\ave}_{\mbox{\textup{\tiny no return}}}$}{}. The case where the shuttles are allowed to come back, i.e., when we deal with the problems \textup{P$^{\max}_{\mbox{\textup{\tiny return}}}$}{} and \textup{P$^{\ave}_{\mbox{\textup{\tiny return}}}$}{}, is discussed in Section~\ref{subsec:ret}.
\subsection{All users in the terminal from the beginning}\label{subsec:one}
In this subsection, we present results regarding the four problems when $D(t)=D(T)$ for all $t\in[0,T]$ (all users are in the terminal from the beginning). For the problems for which return is not allowed (\textup{P$^{\max}_{\mbox{\textup{\tiny no return}}}$}{} and \textup{P$^{\ave}_{\mbox{\textup{\tiny no return}}}$}), an obvious optimal solution is given by $y_j^*=jD(T)/S$ and $d_j^*=\nu D(T)/S$ for $j\in\left\{1,\ldots,S\right\}$ and the optimal value is $\nu D(T)/S$ for both problems, provided that $D(T)\leq CS$ (otherwise, there is no feasible solution at all): the shuttles all take the same amount of users, start the loading process immediately, and have the same departure time.
The rest of the section is devoted to the results regarding the problems \textup{P$^{\max}_{\mbox{\textup{\tiny return}}}$}{} and \textup{P$^{\ave}_{\mbox{\textup{\tiny return}}}$}. For the first one, there are closed-form expressions for the optimal value and an optimal solution.
\begin{proposition}\label{prop:S1max}
When $D(t)=D(T)$ for all $t\in[0,T]$, the optimal value of \textup{P$^{\max}_{\mbox{\textup{\tiny return}}}$}{} is $$\frac{\nu D(T)}{S}+\left(\left\lceil \frac{D(T)}{CS}\right\rceil-1\right)\pi.$$
\end{proposition}
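As a sanity check, the closed-form value of Proposition~\ref{prop:S1max} can be evaluated directly; the sketch below does nothing more than that (the function name and parameter names are our own placeholders for $D(T)$, $S$, $C$, $\nu$, and $\pi$).

```python
import math

def opt_value_pmax_return_all_present(D, S, C, nu, pi):
    # Optimal value of P^max_return when all users are present at time 0:
    #   nu*D/S + (ceil(D/(C*S)) - 1)*pi.
    return nu * D / S + (math.ceil(D / (C * S)) - 1) * pi
```

For instance, with $D(T)=10$, $S=2$, $C=4$, $\nu=1$, and $\pi=3$, each shuttle must make $\lceil 10/8\rceil=2$ trips and the value is $5+3=8$.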
In the proof, we actually provide a closed-form expression for an optimal solution. For \textup{P$^{\ave}_{\mbox{\textup{\tiny return}}}$}{} however, there does not seem to be a closed-form expression for an optimal solution, and not even for the optimal value. There is nevertheless an efficient algorithm.
\begin{proposition}\label{prop:S1}
Suppose $\pi>0$. When $D(t)=D(T)$ for all $t\in[0,T]$, the optimal value of \textup{P$^{\ave}_{\mbox{\textup{\tiny return}}}$}{} can be computed in constant time and an optimal solution can be computed in $O(S)$.
\end{proposition}
If $\pi=0$, the optimal value is $\frac {\nu D(T)}{2S}$, and it is not too difficult to see that there is no optimal solution. In a transportation context, $\pi=0$ looks unrealistic. However, the chemical application mentioned in the introduction could be a situation where this equality could be met: as soon as the test is over for a batch, we can start the test for a new one.
\subsection{When return is not allowed}\label{subsec:noret}
There exists an efficient approximation algorithm for \textup{P$^{\max}_{\mbox{\textup{\tiny no return}}}$}{}. The algorithm is actually an easy binary search (Section~\ref{subsubsec:algo}).
\begin{theorem}\label{thm:pmax}
Let $\rho>0$. A feasible solution $(\boldsymbol{d},\boldsymbol{y})$ of \textup{P$^{\max}_{\mbox{\textup{\tiny no return}}}$}{} -- if the problem is feasible -- satisfying $g^{\max}(\boldsymbol{d},\boldsymbol{y})\leq OPT + \rho$ can be computed in $O\left(S\log{\frac 1 \rho}\right)$.
\end{theorem}
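The binary-search skeleton behind Theorem~\ref{thm:pmax} can be sketched generically: given a monotone feasibility oracle deciding whether some schedule achieves maximum waiting time at most $w$ (the oracle itself is the object of Section~\ref{subsubsec:algo}; here it is only an abstract placeholder), $O(\log\frac 1\rho)$ oracle calls suffice to locate the optimal value within an additive $\rho$.

```python
def binary_search_min(feasible, w_max, rho):
    """Smallest w (up to additive error rho) with feasible(w) True,
    assuming feasible is monotone in w and feasible(w_max) holds."""
    lo, hi = 0.0, w_max
    while hi - lo > rho:
        mid = (lo + hi) / 2
        if feasible(mid):
            hi = mid        # mid is achievable: shrink from above
        else:
            lo = mid        # mid is not achievable: raise the lower bound
    return hi
```

The returned value always corresponds to a feasible target, which is what yields a solution of value at most $OPT+\rho$ once each oracle call runs in $O(S)$.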
With an additional assumption on $D(\makebox[1ex]\cdot)$, this theorem actually provides an approximation scheme.
\begin{corollary}\label{cor:approx}
If $D(\makebox[1ex]\cdot)$ is increasing, the algorithm of Theorem~\ref{thm:pmax} computes in $O(S\log\frac S {\varepsilon})$ a $(1+\varepsilon)$-approximation for \textup{P$^{\max}_{\mbox{\textup{\tiny no return}}}$}.
\end{corollary}
A schedule for the shuttles requires the specification of $S$ real numbers. Taking an output-sensitive point of view, this corollary thus states the existence of a polynomial approximation scheme in this particular case.
\begin{proof}[Proof of Corollary~\ref{cor:approx}]
Suppose that $D(\makebox[1ex]\cdot)$ is increasing. Let $(\boldsymbol{d},\boldsymbol{y})$ be a feasible solution. According to Lemma~\ref{lem:increas_semicont}, we then have $\tau(y_{j-1})=\bar\tau(y_{j-1})$ for every $j$ and the maximum waiting time for shuttle $j$ is at least $\tau(y_j)+\nu(y_j-y_{j-1})-\tau(y_{j-1})$. Note that if $y_j=y_{j-1}$, this quantity is zero. Hence, summing over all nonempty shuttles, the waiting times telescope and the sum of the maximum waiting times is at least $\tau(D(T))-\tau(0)+\nu D(T)=T+\nu D(T)$, so the optimal value $OPT$ of \textup{P$^{\max}_{\mbox{\textup{\tiny no return}}}$}{} is at least $(T+\nu D(T))/S$. Setting $\rho$ to $\varepsilon(T+\nu D(T))/S$ leads to the result.
\end{proof}
For \textup{P$^{\ave}_{\mbox{\textup{\tiny no return}}}$}{}, there exists an efficient approximation algorithm too. The algorithm is also described later (Section~\ref{subsubsec:algo_ave}). We point out already that this algorithm is not a binary search as in the former case: it consists in building a rather simple weighted ``approximative'' graph, in which a shortest path is computed.
\begin{theorem}\label{thm:pave}
Suppose that $D(\makebox[1ex]\cdot)$ admits right derivatives everywhere (denoted $D'_+(t)$) and that $\inf_{t\in[0,T)}D'_+(t)$ is positive. Then, for any positive integer $M$, a feasible solution $(\boldsymbol{d},\boldsymbol{y})$ of \textup{P$^{\ave}_{\mbox{\textup{\tiny no return}}}$}{} -- if the problem is feasible -- satisfying $$g^{\ave}(\boldsymbol{d},\boldsymbol{y})\leq OPT+O\left(\frac {S^2} M\right)$$ can be computed in $O\left(SM^3\right)$.
\end{theorem}
As for Corollary~\ref{cor:approx} above, this theorem could be interpreted as a polynomial approximation scheme by using the fact that $D(\makebox[1ex]\cdot)$ is increasing.
\subsection{When return is allowed}\label{subsec:ret}
The following proposition implies that when $\pi$ is larger than $0$, any optimal solution of \textup{P$^{\max}_{\mbox{\textup{\tiny return}}}$}{} requires a finite number of nonempty departures.
\begin{proposition}\label{prop:finite_max}
If $\pi>0$, there exists an optimal solution of \textup{P$^{\max}_{\mbox{\textup{\tiny return}}}$}{} and the number of nonempty departures in any optimal solution is at most
$$\left(2\left\lceil\frac {T} \pi \right\rceil +1\right)S+\left(\frac{\nu}{\pi}+\frac 1 C\right)D(T).$$
\end{proposition}
\begin{proof}
A feasible solution with an infinite number of nonempty departures has an infinite objective value and is thus strictly dominated by any solution with a finite number of departures. Thus, the set of feasible solutions can be reduced to the solutions where the number of nonempty departures is finite. Since $\bar\tau(\makebox[1ex]\cdot)$ is lower semicontinuous (Lemma~\ref{lem:semicont}), the set of feasible solutions is compact and the objective function is lower semicontinuous, which then leads to the existence of an optimal solution.
The schedule consisting in making the shuttles wait until time $T$, loading them at full capacity (except maybe for the last departure), and making them leave as soon as the loading is completed provides a feasible solution of \textup{P$^{\max}_{\mbox{\textup{\tiny return}}}$}{} with a value $T+\nu D(T)/S+(\lceil D(T)/(CS)\rceil-1)\pi$.
Consider an optimal solution of \textup{P$^{\max}_{\mbox{\textup{\tiny return}}}$}{} and denote by $k$ the number of departures after time $T$. The users in the last shuttle to leave have waited at least $(k/S-1)\pi$. We have thus
$$\left(\frac k S-1\right)\pi\leq T+\frac{\nu D(T)}S+\frac{\pi D(T)}{CS},$$ which implies that $$k\leq\frac {TS} \pi+\frac{\nu D(T)}{\pi}+\frac{D(T)}{C}+S.$$
Before time $T$, the number of departures is at most $\lceil T/\pi\rceil S$.
\end{proof}
The next theorem states that there exists an algorithm computing arbitrarily good feasible solutions for \textup{P$^{\max}_{\mbox{\textup{\tiny return}}}$}{} within reasonable computational times when $S$ is small. As for Section~\ref{subsec:noret}, this algorithm is described later in the paper (Section~\ref{sec:ret}). It is based on the computation of a shortest path in an ``approximative'' graph, as for Theorem~\ref{thm:pave}. It also uses Proposition~\ref{prop:finite_max} in a crucial way (actually a slight variation of it: Lemma~\ref{lem:t+}).
\begin{theorem}\label{thm:pmaxret}
Suppose that $D(\makebox[1ex]\cdot)$ admits right derivatives everywhere, $\pi$ is positive, and $\inf_{t\in[0,T)}D'_+(t)$ is positive. Then, for any positive integer $M$, a feasible solution $(\boldsymbol{d},\boldsymbol{y})$ of \textup{P$^{\max}_{\mbox{\textup{\tiny return}}}$}{} satisfying $$g^{\max}_{\mbox{\textup{\tiny return}}}(\boldsymbol{d},\boldsymbol{y})\leq OPT+O\left(\frac {S^2} M\right)$$ can be computed in $O\left(\beta^{3S}M^{3S+2}\right)$, where $\beta$ depends only on the constants of the computational model.
\end{theorem}
As above, the theorem actually ensures that the algorithm is an approximation scheme, since $OPT$ can be bounded from below using only the input values. If $S$ is considered as a constant, this even becomes a polynomial approximation scheme.
We do not know whether there is a counterpart to Proposition~\ref{prop:finite_max} for problem \textup{P$^{\ave}_{\mbox{\textup{\tiny return}}}$}{}. If such a counterpart existed, then almost the same technique as the one used in Section~\ref{sec:ret} would lead to a theorem similar to Theorem~\ref{thm:pmaxret} for \textup{P$^{\ave}_{\mbox{\textup{\tiny return}}}$}{}. The existence of such a theorem thus remains open.
\section{All users in the terminal from the beginning}
Consider the case where all users are in the loading terminal from the beginning. To ease the reading, and for the present section only, we use $D$ to denote the quantity $D(T)$.
We treat first the case of problem \textup{P$^{\max}_{\mbox{\textup{\tiny return}}}$}{}.
\begin{proof}[Proof of Proposition~\ref{prop:S1max}]
For \textup{P$^{\max}_{\mbox{\textup{\tiny return}}}$}{}, when $S=1$, an optimal solution is obtained by loading the shuttle at full capacity for each departure (except maybe for the last departure, for which the shuttle load is $D-C\lfloor D/C\rfloor$) and by making the shuttle leave immediately after each loading process. The optimal value is then $\nu D+(\lceil D/C\rceil-1)\pi$. When $S>1$, consider the problem Q defined as the problem \textup{P$^{\max}_{\mbox{\textup{\tiny return}}}$}{} without the constraint that the shuttles do not overtake (constraint (iii) in \eqref{Preturn}). The optimal value of Q provides a lower bound on the optimal value of \textup{P$^{\max}_{\mbox{\textup{\tiny return}}}$}{}. Since there is no constraint linking the different shuttles, problem Q can be solved separately for each shuttle $s$ with a demand $D_s$ to carry, such that $\sum_s D_s=D$. The optimal solutions of Q are thus obtained from the optimal solutions of
$$\begin{array}{rll}
\operatorname{Min} & \displaystyle{\max_{s\in\left\{1,\ldots,S\right\}}\left(\nu D_s+\left(\left\lceil\frac{D_s}C\right\rceil-1\right)\pi\right)} \\
\mbox{s.t.} & \displaystyle{\sum_{s=1}^SD_s=D} & \\
& D_s\geq 0 & s=1,\ldots,S.
\end{array}$$
The solution given by $D_s=D/S$ for all $s$ is clearly optimal (and it is actually the unique optimal solution when $\nu>0$). Hence, there is an optimal solution for Q in which all shuttles have the same departure times and, for each travel, carry the same amount of users. Its value is $\nu D/S+(\lceil D/(CS)\rceil-1)\pi$. Since the shuttles do not overtake in this optimal solution of Q, it is actually a feasible solution for the problem \textup{P$^{\max}_{\mbox{\textup{\tiny return}}}$}{}, and thus an optimal solution for this latter problem (its value being equal to a lower bound).
\end{proof}
The rest of this section is devoted to the proof of Proposition~\ref{prop:S1}, which ensures the existence of an efficient algorithm solving problem \textup{P$^{\ave}_{\mbox{\textup{\tiny return}}}$}{} when $D(t)=D$ for all $t\in[0,T]$. We start by considering the special case of problem \textup{P$^{\ave}_{\mbox{\textup{\tiny return}}}$}{} when $S=1$. In such a case, it is always beneficial to define $d_j=(j-1)\pi+\nu y_j$. Assuming that the $y_j$'s are given, it provides a feasible solution since $\bar\tau(y)=0$ for all $y\in[0,D]$.
The objective function of \textup{P$^{\ave}_{\mbox{\textup{\tiny return}}}$}{} becomes thus
$$\frac 1 D\sum_{j=1}^{+\infty}\left((j-1)\pi+\nu \sum_{i=1}^{j}x_i\right)x_j=\frac 1 D\left(\sum_{j=1}^{+\infty}(j-1)\pi x_j+\frac 1 2\nu\sum_{j=1}^{+\infty}x_j^2\right)+\frac {\nu D} 2 ,$$ where $x_j=y_j-y_{j-1}$. Solving \textup{P$^{\ave}_{\mbox{\textup{\tiny return}}}$}{} when $S=1$ reduces thus to solving
\begin{equation}\label{S1averet}\tag*{$P(D)$}
\begin{array}{rlr}
\operatorname{Min} & \displaystyle{\sum_{j=1}^{+\infty}(j-1)\pi x_j+\frac 1 2\nu\sum_{j=1}^{+\infty}x_j^2} \\
\mbox{s.t.} & \displaystyle{\sum_{j=1}^{+\infty}x_j=D} \\
& 0\leq x_j\leq C & j=1,\ldots,+\infty,
\end{array}
\end{equation}
which is a convex program (with infinitely many variables). We will show that there is always an optimal solution of \ref{S1averet} with a finite support. Then, we will solve $P_0(D)$, defined as the program~\ref{S1averet} with the additional constraint $|\{j\colon x_j\neq 0\}|<+\infty$, with the help of the Karush-Kuhn-Tucker conditions (that do not apply otherwise).
\begin{lemma}\label{lem:y_p0}
Suppose that $\pi>0$. Then $P_0(D)$ has an optimal solution and it is necessarily of the form
$$\begin{array}{l}
x_0^*=0 \\
x_j^*=\left\{
\begin{array}{ll}
C & \quad\mbox{if $j\leq a$,} \smallskip\\
\displaystyle{\frac{D-aC}{\theta(a)-a}+\frac\pi\nu\left(\frac{a+\theta(a)+1} 2-j\right)} & \quad\mbox{if $a+1\leq j\leq \theta(a)$,} \smallskip
\\
0 & \quad\mbox{otherwise,}
\end{array}\right.
\end{array}$$
with $a\in\mathbb{Z}_+$ such that $a\leq \frac DC$ and where $$\theta(a)=a+\left\lceil\frac{-1+\sqrt{1+\frac{8\nu}\pi(D-aC)}}{2}\right\rceil.$$
\end{lemma}
\begin{proof}
Consider the following program
\begin{equation}\label{P0Dn}\tag*{$P_0^n(D)$}\begin{array}{rlr}
\operatorname{Min} & \displaystyle{\sum_{j=1}^{n}(j-1)\pi x_j+\frac 1 2\nu\sum_{j=1}^{n}x_j^2} \\
\mbox{s.t.} & \displaystyle{\sum_{j=1}^nx_j=D} \\
& 0\leq x_j\leq C & j=1,\ldots,n.
\end{array}\end{equation}
Note that \ref{P0Dn} is actually $P_0(D)$ with the additional constraint that $\sup\{j\colon x_j\neq 0\}\leq n$.
For $n<D/C$, \ref{P0Dn} has no feasible solution, and for $n\geq D/C$, the set of feasible solutions is nonempty. In this case, by compactness and continuity of the objective function, \ref{P0Dn} has an optimal solution $\boldsymbol{x}^*$. We necessarily have $x_j^*\geq x_{j+1}^*$ for every $j\in\left\{1,\ldots,n-1\right\}$, since otherwise exchanging the two values would strictly decrease the objective function. Let $a$ be the largest index $j$ such that $x_j^*=C$, with the convention that $a=0$ if there is no such index $j$, and let $b+1$ be the smallest index $j$ such that $x_j^*=0$, with the convention that $b=n$ if there is no such index $j$.
The constraints being all affine, the Karush-Kuhn-Tucker conditions apply. There is thus a real number $\lambda\in\mathbb{R}$ and two collections $\boldsymbol{\mu}, \boldsymbol{\omega}\in\mathbb{R}_+^n$ such that
for every $j\in\{1,\ldots,n\}$ we have
\begin{equation}\label{eq:kkt}
\nu x_j^*+(j-1)\pi+\lambda+\mu_j-\omega_j=0 \qquad \mbox{and} \qquad \omega_jx^*_j=\mu_j(x_j^*-C)=0.
\end{equation}
Summing this equality from $j=a+1$ to $j=b$ and noting that $\mu_j=\omega_j=0$ and $\sum_{j=a+1}^bx_j^*=D-aC$ by definition of $a$ and $b$ provides an expression of $\lambda$ in terms of $a$ and $b$. Replacing $\lambda$ by this expression in the same equality leads to
\begin{equation*}
x_j^*=\left\{
\begin{array}{ll}
C & \quad\mbox{if $j\leq a$,} \smallskip\\
\displaystyle{\frac{D-aC}{b-a}+\frac\pi\nu\left(\frac{a+b+1} 2-j\right)} & \quad\mbox{if $a+1\leq j\leq b$,} \smallskip
\\
0 & \quad\mbox{otherwise.}
\end{array}\right.
\end{equation*}
Using this expression for $j=b$, together with $x_b^*>0$, gives the following inequality:
$$(b-a)(b-a-1)<\frac{2\nu}\pi(D-aC).$$
Equation~\eqref{eq:kkt} specialized for $j=b+1$ gives $$(b-a)(b-a+1)\geq\frac{2\nu}\pi(D-aC).$$
These two inequalities together -- treated as conditions on a second order polynomial in $b-a$ -- imply the necessary condition
$$
-\frac{1}2+\frac{\sqrt{1+\frac{8\nu}\pi(D-aC)}}2\leq b-a<\frac 12+\frac{\sqrt{1+\frac{8\nu}\pi(D-aC)}}2
$$
which imposes a unique integer value for $b-a$; hence, for each value of $a$, the index $b$ takes a unique value $\theta(a)$.
We have proved that any optimal solution of \ref{P0Dn} is of this form. Now, note that by definition of $a$, we necessarily have $a\leq \lfloor D/C\rfloor$. It means that there are only finitely many optimal solutions of the \ref{P0Dn}'s when $n$ goes to infinity. Since the set of feasible solutions of the \ref{P0Dn}'s is nondecreasing when $n$ goes to infinity, this actually means that there exists an $n_0$ such that any optimal solution of \ref{P0Dn} for $n\geq n_0$ is an optimal solution of $P_0^{n_0}(D)$. Moreover, any feasible solution of $P_0(D)$ is a feasible solution of $P_0^n(D)$ for some $n\geq n_0$, and is thus dominated by the optimal solutions of $P_0^{n_0}(D)$. The latter are thus the optimal solutions of $P_0(D)$.
\end{proof}
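The enumeration above yields a direct procedure for $P_0(D)$. The sketch below is only an illustration under our own assumptions: the objective of $P_0(D)$ is taken to be $\sum_j\big(\frac\nu 2 x_j^2+(j-1)\pi x_j\big)$, which is consistent with the stationarity condition \eqref{eq:kkt}, and the function name and tolerances are ours.

```python
import math

def solve_P0(D, C, nu, pi_):
    """Try all candidate values of a (at most floor(D/C)+1 of them); for each,
    b - a is the unique integer in the half-open interval derived from the two
    inequalities above, and x* is given by the closed form."""
    best_x, best_val = None, math.inf

    def value(x):
        # assumed objective: sum_j (nu/2 x_j^2 + (j-1) pi x_j), with j 1-indexed
        return sum(nu / 2.0 * xj ** 2 + j * pi_ * xj for j, xj in enumerate(x))

    for a in range(int(D // C) + 1):
        rem = D - a * C                      # demand left for trips a+1, ..., b
        if rem <= 1e-12:                     # C divides D: the all-full solution
            if value([C] * a) < best_val:
                best_x, best_val = [C] * a, value([C] * a)
            break
        s = math.sqrt(1.0 + 8.0 * nu * rem / pi_)
        m = math.ceil(-0.5 + s / 2.0)        # unique integer value of b - a
        if m <= 0:
            continue
        b = a + m
        x = [C] * a + [rem / m + (pi_ / nu) * ((a + b + 1) / 2.0 - j)
                       for j in range(a + 1, b + 1)]
        if any(xj < -1e-9 or xj > C + 1e-9 for xj in x):
            continue                          # candidate violates 0 <= x_j <= C
        if value(x) < best_val:
            best_x, best_val = x, value(x)
    return best_x, best_val
```

For instance, with $D=5$, $C=2$, and $\nu=\pi=1$, the procedure returns the loads $(2,2,1)$ with value $8.5$.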
Let $v(D)$ and $v_0(D)$ be the optimal values of respectively \ref{S1averet} and $P_0(D)$. Note that $v(D)\leq v_0(D)$.
\begin{lemma}\label{lem:D2}
If $\pi>0$, we have
$$v_0(D-\varepsilon)\leq v(D)$$ for every $\varepsilon\in(0,D]$.
\end{lemma}
\begin{proof}
Let $\varepsilon\in(0,D]$. Consider a feasible solution $\boldsymbol{x}$ of $P(D)$. Let $N_\varepsilon\in\mathbb{Z}_+$ be such that $\sum_{j=N_\varepsilon+1}^{+\infty} x_j<\varepsilon$. Define inductively
$$x'_j=\left\{\begin{array}{ll}\min(x_j,D-\varepsilon-\sum_{i=1}^{j-1}x_i') & \mbox{for $j\leq N_\varepsilon$} \\ 0 & \mbox{for $j\geq N_\varepsilon+1$.}\end{array}\right.$$ This $\boldsymbol{x}'$ is a feasible solution of $P_0(D-\varepsilon)$. Since $x_j'\leq x_j$ for all $j$, the value given by $\boldsymbol{x}'$ to the objective function of $P_0(D-\varepsilon)$ is no larger than the value obtained by $\boldsymbol{x}$ for $P(D)$. The inequality follows.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:S1}]
Let us deal with the case $S=1$. Since $P_0(D)$ is a convex program, the map $v_0(\makebox[1ex]\cdot)$ is convex. It is thus continuous on $(0,+\infty)$, and since $v_0(0)=0$, it is continuous everywhere on $[0,+\infty)$. Letting $\varepsilon$ tend to $0$ in Lemma~\ref{lem:D2} and using the inequality $v(D)\leq v_0(D)$, we get $v_0(D)=v(D)$. Since any feasible solution of $P_0(D)$ is a feasible solution of $P(D)$ with the same value for the objective function, every optimal solution of $P_0(D)$ is an optimal solution of $P(D)$. An algorithm computing an optimal solution of $P(D)$ can then be derived from Lemma~\ref{lem:y_p0}: we just have to try the finitely many possible values for $a$. The proof for any value of $S$ is obtained by showing that an optimal solution in that case consists simply of replicating optimal solutions of the one-shuttle case.
When $S>1$, consider the problem Q defined as the problem \textup{P$^{\ave}_{\mbox{\textup{\tiny return}}}$}{} without the constraint that the shuttles do not overtake (constraint (iii) in \eqref{Preturn}). The optimal value of Q provides a lower bound of the optimal value of \textup{P$^{\ave}_{\mbox{\textup{\tiny return}}}$}{}. Since there is no constraint linking the different shuttles, problem Q can be solved separately for each shuttle $s$ with a demand $D_s$ to carry, such that $\sum_s D_s=D$. The optimal solutions of Q are thus obtained from the optimal solutions of
$$\begin{array}{rll}
\operatorname{Min} & \displaystyle{\sum_{s=1}^S\left(v(D_s)+\frac \nu 2 D_s^2\right)} \\
\mbox{s.t.} & \displaystyle{\sum_{s=1}^SD_s=D} & \\
& D_s\geq 0 & \forall s=1,\ldots,S.
\end{array}$$
The fact that $P(D)$ is a convex program implies that the map $v(\makebox[1ex]\cdot)$ is convex. As a consequence, the solution $D_s=D/S$ for all $s$ is an optimal solution of the previous program. Hence, there is an optimal solution of Q in which all shuttles have the same departure times and, for each travel, carry the same amount of users. Since the shuttles do not overtake in this optimal solution of Q, it is actually a feasible solution of the problem \textup{P$^{\ave}_{\mbox{\textup{\tiny return}}}$}{}, and thus an optimal solution of the latter (its value matches the lower bound).
\end{proof}
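The equal-split argument can be checked numerically. In the sketch below, $v$ is a hypothetical convex stand-in for the true value function (the choice $v(x)=x^2$, as well as the numerical values, are ours, purely for illustration):

```python
import random

def total_cost(split, v, nu):
    # objective of the splitting program: sum_s (v(D_s) + nu/2 D_s^2)
    return sum(v(Ds) + nu / 2.0 * Ds ** 2 for Ds in split)

v = lambda x: x ** 2          # hypothetical convex stand-in for v(.)
nu, D, S = 0.5, 12.0, 4
equal = [D / S] * S           # the equal split D_s = D/S

random.seed(0)
for _ in range(1000):
    # random nonnegative split of D into S parts
    cuts = sorted(random.uniform(0, D) for _ in range(S - 1))
    split = [b - a for a, b in zip([0.0] + cuts, cuts + [D])]
    # by convexity (Jensen), the equal split never does worse
    assert total_cost(equal, v, nu) <= total_cost(split, v, nu) + 1e-9
```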
\newpage
\section{When return is not allowed}\label{sec:noret}
\subsection{Minimizing the maximum waiting time}\label{subsec:pmax}
\subsubsection{The algorithm}\label{subsubsec:algo}
If $CS<D(T)$, there is no feasible solution. We can thus assume that $CS\geq D(T)$. The algorithm is a binary search starting with the values $h^+=T+\nu D(T)$ and $h^-=0$ which are respectively upper and lower bounds of the optimal value. While the gap $h^+-h^-$ is larger than $\rho$, we consider the tentative value $h=\frac{h^++h^-} 2$ and the system
\begin{equation}\label{Smaxnoreth}\tag{S$_h$}
\left\{
\begin{array}{l@{\hspace{1cm}}r}
y_j = \sup \mathcal{S}_j^h & j=1,\ldots,S\\
y_S=D(T) &\\
y_0 = 0\\
d_j = h+\tau(y_{j-1}) & j=1,\ldots,S,
\end{array}\right.
\end{equation}
where $\mathcal{S}_j^h=\left\{y\in\mathbb{R}_+ \colon y\leq C+y_{j-1}, \bar{\tau}(y)+\nu(y-y_{j-1})-\tau(y_{j-1})\leq h,y\leq D(T)\right\}$.
Each iteration of the binary search consists in deciding whether \eqref{Smaxnoreth} has a feasible solution or not, which can be done in $O(S)$ by computing the values of the $y_j$'s and the $d_j$'s iteratively (here we use in particular the computational assumptions on $D$). As we are going to prove, \eqref{Smaxnoreth} has a feasible solution if and only if the problem has a feasible solution with a value of the objective function at most $h$. If \eqref{Smaxnoreth} has a feasible solution, we thus update the value of $h^+$ with the current value of $h$; otherwise, we update $h^-$ with $h$. When $h^+-h^-\leq\rho$, the solution of program $(\text{S}_{h^+})$ is feasible for \textup{P$^{\max}_{\mbox{\textup{\tiny no return}}}$}{} and its value $h^+$ is within $\rho$ of the optimal value.
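As an illustration, here is a compact Python sketch of the binary search, under our own modeling assumptions: the maps $\tau(\makebox[1ex]\cdot)$ and $\bar\tau(\makebox[1ex]\cdot)$ are passed as callables, and $\sup\mathcal{S}_j^h$ is computed by an inner bisection (valid since $\bar\tau(\makebox[1ex]\cdot)$ is nondecreasing), whereas the paper assumes this supremum can be obtained directly from $D$.

```python
def feasible(h, S, C, nu, D_T, tau, bar_tau):
    """Decide whether the system (S_h) has a solution: compute y_j = sup S_j^h
    greedily; (S_h) is feasible iff y_S reaches D(T)."""
    y_prev = 0.0
    for _ in range(S):
        # sup{y : y <= y_prev + C, bar_tau(y) + nu*(y - y_prev) - tau(y_prev) <= h,
        #        y <= D(T)}, found here by bisection (y_prev itself is feasible)
        lo, hi = y_prev, min(y_prev + C, D_T)
        for _ in range(60):
            mid = (lo + hi) / 2.0
            if bar_tau(mid) + nu * (mid - y_prev) - tau(y_prev) <= h:
                lo = mid
            else:
                hi = mid
        y_prev = lo
    return y_prev >= D_T - 1e-6

def solve_pmax_noreturn(S, C, nu, D_T, T, tau, bar_tau, rho=1e-4):
    if C * S < D_T:
        return None                          # no feasible solution
    h_lo, h_hi = 0.0, T + nu * D_T           # lower / upper bounds on the optimum
    while h_hi - h_lo > rho:
        h = (h_lo + h_hi) / 2.0
        if feasible(h, S, C, nu, D_T, tau, bar_tau):
            h_hi = h
        else:
            h_lo = h
    return h_hi                              # within rho of the optimal value
```

With a uniform demand $D(t)=t$ on $[0,10]$ (so that $\tau=\bar\tau=\operatorname{id}$), $S=5$ shuttles of capacity $C=2$, and $\nu=0$, the returned value converges to $2$.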
\subsubsection{Proof of Theorem~\ref{thm:pmax}}
For any fixed $h$, \textup{P$^{\max}_{\mbox{\textup{\tiny no return}}}$}{} has a feasible solution with a value of the objective function at most $h$ if and only if the following system has a feasible solution.
\begin{equation}\label{Qmaxnoreth}\tag{Q$_h$}
\left\{
\begin{array}{l@{\hspace{1cm}}rr}
d_j-\tau(y_{j-1})\leq h & j=1,\ldots,S & \textup{(Qi)}\\
y_j-y_{j-1}\leq C & j=1,\ldots,S & \textup{(Qii)}\\
y_{j-1}\leq y_j & j=1,\ldots,S & \textup{(Qiii)}\\
d_{j-1}\leq d_j & j=2,\ldots,S& \textup{(Qiv)}\\
y_S=D(T) & &\textup{(Qv)}\\
\bar\tau(y_j)+\nu(y_j-y_{j-1})\leq d_j& j=1,\ldots,S & \textup{(Qvi)}\\
y_0=0. &
\end{array}\right.
\end{equation}
We claim that \eqref{Qmaxnoreth} has a feasible solution if and only if \eqref{Smaxnoreth} has one. Once this equivalence is established, the correctness of the binary search described above is almost immediate using Claim~\ref{claim:change_obj}.
Let $(\boldsymbol{d},\boldsymbol{y})$ be a feasible solution of \eqref{Smaxnoreth}; throughout, we use without further mention that $\mathcal{S}_j^h$ is closed. This solution satisfies constraints $\textup{(Qi)}, \textup{(Qii)}$, and $\textup{(Qv)}$. We have $y_{j-1}\leq C+y_{j-1}$ and $y_{j-1}\leq D(T)$. Since $\bar{\tau}(y)\leq \tau(y)$ for all $y$, we also have $\bar{\tau}(y_{j-1})+\nu(y_{j-1}-y_{j-1})-\tau(y_{j-1})\leq h$. It means that $y_{j-1}$ belongs to $\mathcal{S}_j^h$, and thus $y_{j-1}\leq y_j$. Hence, $(\boldsymbol{d},\boldsymbol{y})$ also satisfies constraint $\textup{(Qiii)}$. Since $\bar{\tau}(y_j)+\nu(y_j-y_{j-1})-\tau(y_{j-1})\leq h$, the solution also satisfies constraint $\textup{(Qvi)}$, and since $\tau(\makebox[1ex]\cdot)$ is nondecreasing, it satisfies constraint $\textup{(Qiv)}$. It is therefore a feasible solution of \eqref{Qmaxnoreth}, and the existence of a feasible solution of \eqref{Smaxnoreth} implies the existence of a feasible solution of \eqref{Qmaxnoreth}.
For the converse implication, suppose that \eqref{Qmaxnoreth} admits a feasible solution, and consider the optimization problem consisting in maximizing $\sum_{j=1}^Sy_j$ over its feasible solutions. These feasible solutions form a compact set of $\mathbb{R}_+^S$: it is obviously bounded, and the semicontinuities of $\tau(\makebox[1ex]\cdot)$ and $\bar\tau(\makebox[1ex]\cdot)$ imply that it is closed. There is thus an optimal solution $(\boldsymbol{d}^*,\boldsymbol{y}^*)$ to that optimization problem. Suppose for a contradiction that there is a $j$ such that $y_j^*< \sup \mathcal{S}_j^h$. Denote by $j_0$ the largest such index. Let us slightly increase $y_{j_0}^*$, while leaving the other $y_j^*$ untouched. Redefine $d_j^*$ to be $h+\tau(y_{j-1}^*)$ for all $j\geq j_0$. The pair $(\boldsymbol{d}^*,\boldsymbol{y}^*)$ remains feasible for \eqref{Qmaxnoreth} (we use here the fact that $\sup\mathcal{S}_j^h$ is nondecreasing with $j$), while the quantity $\sum_{j=1}^Sy_j^*$ increases, contradicting the optimality assumption. Thus, we have $y_j^*=\sup \mathcal{S}_j^h$ for all $j$, and setting $d_j^*:=h+\tau(y_{j-1}^*)$ for all $j$ provides a feasible solution of \eqref{Smaxnoreth}.
\qed
\subsection{Minimizing the average waiting time}\label{subsec:pave}
\subsubsection{The algorithm} \label{subsubsec:algo_ave}The following map will be useful in the description of the algorithm.
$$f^{\operatorname{ave}}:(d,y,y')\longmapsto\int_y^{y'}(d-\bar\tau(u))du.$$
Define the directed graph $\mathcal{G}=(\mathcal{V},\mathcal{A})$ by
$$\begin{array}{rcl}
\mathcal{V} & = & \{(0,0)\}\cup\{\eta,2\eta,\ldots,M\eta\}\times\{\eta,2\eta,\ldots,R\eta\}\smallskip\\
\mathcal{A} & = & \left\{\big((z,r),(z',r')\big)\in\mathcal{V}^2\colon r+z'=r'\;\mbox{and}\;\bar\tau(r')-\bar\tau(r)+\nu(z'-z)+\frac 1 2 \gamma\eta\geq 0\right\},
\end{array}$$
where we use the following notations:
$$\alpha=\inf_{t\in[0,T)}D'_+(t),\qquad R=\left\lfloor\frac{D(T)M}C\right\rfloor,\qquad\eta=\frac C M,\qquad\mbox{and}\qquad\gamma=2\left(\frac 1\alpha+2\nu\right).$$
Set for each arc $a=\big((z,r),(z',r')\big)$ a weight $w(a)=f^{\operatorname{ave}}\big(\bar\tau(r')+\nu(z'-\eta),r+\eta,r'\big)$. \\
If $CS<D(T)$, there is no feasible solution. We can thus assume that $CS\geq D(T)$. The algorithm consists first in computing a path $\tilde p$ minimizing $\sum_{a\in\mathcal{A}(p)}w(a)$, among all paths $p$ with at most $S$ arcs starting at $(0,0)\in\mathcal{V}$ and ending at a vertex $(z,r)$ with $r=R\eta$. Such paths exist; see Lemma~\ref{lem:kopt} below. The computation of $\tilde p$ can be done in $O(S|\mathcal{A}|)$ via dynamic programming. Let the vertex sequence of $\tilde p$ be $\big((z_0,r_0),(z_1,r_1),\ldots,(z_n,r_n)\big)$. The algorithm consists then in defining recursively
$$\tilde y_j=\left\{\begin{array}{ll}
0 & \mbox{for $j=0$} \\
\min\big(r_j+\eta,\tilde y_{j-1}+C,D(T)\big) & \mbox{for $j=1,\ldots,n$} \smallskip\\
D(T) & \mbox{for $j=n+1,\ldots,S$}
\end{array}\right.$$
and
$$\tilde d_j=\left\{\begin{array}{ll}
\bar\tau(\tilde y_j)+\nu(\tilde y_j-\tilde y_{j-1})+j\gamma\eta & \mbox{for $j=1,\ldots,n$} \smallskip\\
\max(\tilde d_n,T+\nu(\tilde y_{n+1}-\tilde y_n)) &\mbox{for $j=n+1,\ldots,S$}
\end{array}\right.$$
and outputting the pair $(\tilde\boldsymbol{d},\tilde\boldsymbol{y})$. The construction of the graph is sketched in Figure~\ref{fig:algo}.\\
\begin{figure}
\begin{center}
\includegraphics[width=12cm]{algo.pdf}
\caption{\label{fig:algo} A feasible path in the algorithm proposed for solving \textup{P$^{\ave}_{\mbox{\textup{\tiny no return}}}$}{}.}
\end{center}
\end{figure}
As it will be shown below, this $(\tilde\boldsymbol{d},\tilde\boldsymbol{y})$ is a feasible solution of \textup{P$^{\ave}_{\mbox{\textup{\tiny no return}}}$}{} providing a value to the objective function within a $O\left(\frac {S^2} M\right)$ gap to the optimal value.
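The dynamic program computing $\tilde p$ is a standard Bellman--Ford-type relaxation: after $k$ rounds, each vertex stores the minimum weight of a path from the source using at most $k$ arcs. The following generic sketch is ours (the graph is given explicitly as vertex and arc lists) and runs in the claimed $O(S|\mathcal{A}|)$:

```python
def shortest_path_bounded(vertices, arcs, w, source, targets, S):
    """Minimum total arc weight over paths from `source` to some vertex in
    `targets` using at most S arcs; w maps each arc (u, v) to its weight."""
    INF = float("inf")
    dist = {v: (0.0 if v == source else INF) for v in vertices}
    for _ in range(S):                     # round k: best paths with <= k arcs
        nxt = dict(dist)                   # carrying dist over keeps shorter paths
        for (u, v) in arcs:
            if dist[u] + w[(u, v)] < nxt[v]:
                nxt[v] = dist[u] + w[(u, v)]
        dist = nxt
    return min((dist[t] for t in targets), default=INF)
```

For instance, on the graph with arcs $(0,1)$, $(1,2)$, $(0,2)$, $(2,3)$ of respective weights $1,1,5,1$, the best $0$-$3$ path with at most $3$ arcs has weight $3$, while with at most $2$ arcs it has weight $6$.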
\subsubsection{Proof of Theorem~\ref{thm:pave}}
We provide three lemmas, which are proved in a separate section at the very end of the paper to ease the reading. Theorem~\ref{thm:pave} results immediately from their combination.
In the proofs of the lemmas, we assume that $M$ is large enough so that $\eta<D(T)$. Since $M$ appears in `big O' formulas in Theorem~\ref{thm:pave}, this is a valid assumption. It is in any case what is sought in practice: the larger $M$, the higher the accuracy of the solution; an $\eta$ of the same order of magnitude as $D(T)$ would be useless.
\begin{lemma}\label{lem:kopt} For every optimal solution $(\boldsymbol{d}^*,\boldsymbol{y}^*)$, there is a path $p$ with at most $S$ arcs starting at $(0,0)\in\mathcal{V}$ and ending at a vertex $(z,r)$ with $r=R\eta$ and such that
$$\frac 1 {D(T)}\sum_{a\in A(p)}w(a)\leq g^{\ave}(\boldsymbol{d}^*,\boldsymbol{y}^*).$$
\end{lemma}
\begin{lemma}\label{lem:feas}
The pair $(\tilde\boldsymbol{d},\tilde\boldsymbol{y})$ is a feasible solution of \textup{P$^{\ave}_{\mbox{\textup{\tiny no return}}}$}{}.
\end{lemma}
\begin{lemma}\label{lem:byy} The following inequality holds:
$$g^{\ave}(\tilde\boldsymbol{d},\tilde\boldsymbol{y})\leq\frac 1 {D(T)}\sum_{a\in A(\tilde p)}w(a)+O\left(\frac {S^2} M\right).$$
\end{lemma}
\subsection{When the demand function is a step function}
Better complexity results can be obtained when the demand is a step function. A {\em step function} is a function that can be written as a finite linear combination of indicator functions of intervals. The assumption of $D(\makebox[1ex]\cdot)$ being a step function means that the users arrive only at a finite number of instants. As has already been noted, the assumption $\nu=0$ is equivalent to the assumption that every user boards a shuttle as soon as he arrives at the terminal.
\begin{proposition}\label{prop:pmax}
Assume that $D(\makebox[1ex]\cdot)$ is a step function defined with $K$ discontinuities, supposed to be part of the input. Suppose moreover that $\nu=0$. Then for each of \textup{P$^{\max}_{\mbox{\textup{\tiny no return}}}$}{} and of \textup{P$^{\ave}_{\mbox{\textup{\tiny no return}}}$}{}, there is an algorithm computing an optimal solution in $O(K^2S)$.
\end{proposition}
It turns out that when $C$ and the values taken by $D(\makebox[1ex]\cdot)$ are integer, the loads of the shuttles in the optimal solution returned by the algorithm are also integer. We thus cover the case where the users are atoms.
\subsubsection*{The algorithm} We provide only the algorithm for \textup{P$^{\max}_{\mbox{\textup{\tiny no return}}}$}; the other case can be dealt with similarly. Let $t_1<\cdots<t_K$ be the $K$ discontinuities. Define the directed graph $\mathcal{G}=(\mathcal{V},\mathcal{A})$ by
$$\begin{array}{rcl}
\mathcal{V} & = & \{0\}\cup\left\{D(t_k)+Cq\colon k\in\left\{1,\ldots,K\right\}, q\in\{0,1,\ldots,Q\}\right\} \smallskip\\
\mathcal{A} & = & \{(y,y')\in\mathcal{V}^2\colon 0\leq y'-y\leq C\},
\end{array}$$
where $Q=\lfloor D(T)/C\rfloor$. Note that the vertex set is a finite subset of $\mathbb{R}_+$. Set for each arc $a=(y,y')$ a weight $w(a)=\bar\tau(y')-\tau(y)$. We consider the two vertices $0$ and $D(T)$ (obtained with $k=K$ and $q=0$). \\
If $CS<D(T)$, there is no feasible solution. We can thus assume that $CS\geq D(T)$. The algorithm consists first in computing a $0$-$D(T)$ path $\tilde p$ with $S$ arcs minimizing $\max_{a\in \mathcal{A}(\tilde p)}w(a)$. Within the proof of Proposition~\ref{prop:pmax} below, we show that from any feasible solution we can build a $0$-$D(T)$ path with $S$ arcs in $\mathcal{G}$. Thus, when the problem is feasible, such paths exist in $\mathcal{G}$. The computation of $\tilde p$ can be done in $O(S|\mathcal{A}|)$ via dynamic programming. Let the vertex sequence of $\tilde p$ be $(\tilde y_0,\tilde y_1,\ldots,\tilde y_S)$. The end of the algorithm consists in defining $\tilde d_j=\bar\tau(\tilde y_j)$ for all $j\in\{1,\ldots,S\}$ and outputting the pair $(\tilde\boldsymbol{d},\tilde\boldsymbol{y})$.
As it will be shown below, this $(\tilde\boldsymbol{d},\tilde\boldsymbol{y})$ is an optimal solution of \textup{P$^{\max}_{\mbox{\textup{\tiny no return}}}$}{}.
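The bottleneck variant used here (minimize the maximum arc weight over $0$-$D(T)$ paths with exactly $S$ arcs) admits the same kind of dynamic program. The sketch below is generic and ours; note that the self-loops $(y,y)\in\mathcal{A}$ make the requirement of exactly $S$ arcs harmless:

```python
def minimax_path_exact(vertices, arcs, w, source, target, S):
    """best[v] after k rounds = minimum, over source-v paths with exactly k
    arcs, of the maximum arc weight; predecessors allow path recovery."""
    INF = float("inf")
    best = {v: (0.0 if v == source else INF) for v in vertices}
    pred = [{} for _ in range(S)]
    for k in range(S):
        nxt = {v: INF for v in vertices}
        for (u, v) in arcs:
            cand = max(best[u], w[(u, v)])
            if cand < nxt[v]:
                nxt[v], pred[k][v] = cand, u
        best = nxt
    if best[target] == INF:
        return None, INF
    path, v = [target], target
    for k in range(S - 1, -1, -1):         # backtrack through the S levels
        v = pred[k][v]
        path.append(v)
    return path[::-1], best[target]
```

On the arcs $(0,1)$, $(1,2)$, $(0,2)$, $(2,2)$ with respective weights $5,1,4,0$, the best $0$-$2$ path with exactly $2$ arcs is $(0,2,2)$, with bottleneck value $4$.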
\subsubsection*{Proof of Proposition~\ref{prop:pmax}} According to Claim~\ref{claim:change_obj}, we replace the objective function of \textup{P$^{\max}_{\mbox{\textup{\tiny no return}}}$}{} by $\max_{j\in\left\{1,\ldots,S\right\}}(d_j-\tau(y_{j-1}))$. It can easily be checked that $(\tilde\boldsymbol{d},\tilde\boldsymbol{y})$ is feasible for \textup{P$^{\max}_{\mbox{\textup{\tiny no return}}}$}. It provides a value $\max_{j\in\left\{1,\ldots,S\right\}}(\bar\tau(\tilde y_j)-\tau(\tilde y_{j-1}))$ for \textup{P$^{\max}_{\mbox{\textup{\tiny no return}}}$}{} (with the alternative objective function), and this value coincides with $\max_{a\in \mathcal{A}(\tilde p)}w(a)$. The path $\tilde p$ describes therefore a solution of \textup{P$^{\max}_{\mbox{\textup{\tiny no return}}}$}{} with a value equal to $\max_{a\in \mathcal{A}(\tilde p)}w(a)$.
Conversely, let $(\boldsymbol{d},\boldsymbol{y})$ be any feasible solution of \textup{P$^{\max}_{\mbox{\textup{\tiny no return}}}$}. Let $\bar\boldsymbol{y}$ be the sequence defined by $\bar y_j=\min\{y\in\mathcal{V}\colon y\geq y_j\}$. On the one hand, we have $\bar y_{j-1}\leq \bar y_j$ because $y_{j-1}\leq y_j$. On the other hand, we have $\bar y_{j-1}+C\geq y_{j-1}+C\geq y_j$.
If $\bar y_{j-1}+C\in\mathcal{V}$, we have $\bar y_{j-1}+C\geq\bar y_j$ by definition of $\bar y_j$. If $\bar y_{j-1}+C\notin\mathcal{V}$, then $\bar y_{j-1}+C> D(T)\geq\bar y_j$ since $D(T)\in\mathcal{V}$. Thus, $(\bar y_{j-1},\bar y_j)\in\mathcal{A}$ for all $j\in\{1,\ldots,S\}$. We have $\bar y_0=0$ and $\bar y_S=D(T)$ and the sequence $\bar \boldsymbol{y}$ is a $0$-$D(T)$ path $p$ with $S$ arcs.
Next, we prove that $\bar\tau(\bar y_j)-\tau(\bar y_{j-1})\leq d_j-\tau( y_{j-1})$ as follows. There exists a unique $k$ such that $D(t_k)<y_j\leq D(t_{k+1})$. By definition of $D(\makebox[1ex]\cdot)$, we have $D(t)=D(t_k)$ for all $t\in[t_k,t_{k+1})$, and thus $\bar\tau(y_j)=t_{k+1}$. Since $D(t_{k+1})\in\mathcal{V}$, we have $\bar y_j\leq D(t_{k+1})$ by definition of $\bar y_j$, and hence $\bar\tau(\bar y_j)\leq t_{k+1}$ (directly by definition of $\bar\tau(\makebox[1ex]\cdot)$). Therefore, $\bar\tau(\bar y_j)-\tau(\bar y_{j-1})\leq\bar\tau(y_j)-\tau(y_{j-1})\leq d_j-\tau(y_{j-1})$ (where we use the fact that $\tau(\makebox[1ex]\cdot)$ is nondecreasing).
Finally, we have $\max_{a\in\mathcal{A}(p)}w(a)\leq \max_{j\in\{1,\ldots,S\}}\big(d_j-\tau( y_{j-1})\big)$. As the path $\tilde p$ is optimal, $\max_{a\in\mathcal{A}(\tilde p)}w(a)$ is a lower bound on the value taken by the (alternative) objective function on $(\boldsymbol{d},\boldsymbol{y})$. \qed
\section{When return is allowed}\label{sec:ret}
\subsubsection*{The algorithm} The following map will be useful:
$$f^{\max}\colon(\ell,y,y')\longmapsto\left\{\begin{array}{ll} \max(\ell,\bar\tau(y'))+\nu(y'-y)-\tau(y) & \mbox{if $y'> y$} \\ 0 & \mbox{if $y'=y$.}\end{array}\right.$$
We introduce the following two sets
$$\begin{array}{rcl}
\mathcal{Q} & = & \{0,\eta,\ldots,(\left\lfloor T^+/\eta\right\rfloor+1)\eta\}^S \smallskip\\
\mathcal{R} & = & \left\{\boldsymbol{r}\in\{0,\eta,\ldots,R\eta\}^S\colon 0\leq r_k-r_{k-1}\leq M\eta\mbox{ for $k=2,\ldots,S$}\right\},
\end{array}$$ where
$$\eta=\frac{C}{M},\qquad R=\left\lfloor\frac{D(T)M}C\right\rfloor,\qquad\mbox{and}\qquad T^+=T+\frac{\nu D(T)}S+\left(\left\lceil\frac{D(T)}{CS}\right\rceil-1\right)\pi.$$
Define the directed graph $\mathcal{G}=(\mathcal{V},\mathcal{A})$ by
$$\begin{array}{rcl}
\mathcal{V} & = &\left\{(z,\boldsymbol{q},\boldsymbol{r})\in\{0,\eta,\ldots,M\eta\}\times\mathcal{Q}\times\mathcal{R}\colon r_k\leq D(q_k)\;\mbox{for $k=1,\ldots,S$}\right\}\medskip\\
\mathcal{A} & = & \left\{\big((z,\boldsymbol{q},\boldsymbol{r}),(z',\boldsymbol{q}',\boldsymbol{r}')\big)\in \mathcal{V}^2\;\mbox{satisfying $(\star)$}\right\},
\end{array}
$$
where $$(\star)\quad r_S+z'=r_1'\quad\mbox{and}\quad q_k'-q_k-\nu(r_k-r_{k-1}) -\pi+(1+\nu)\eta\geq 0\mbox{ for $k=1,\ldots, S$}.$$
We adopt the convention $D(t)=D(T)$ when $t\geq T$ and we define $r_0=r_1-z$. Set for each arc $a=\big((z,\boldsymbol{q},\boldsymbol{r}),(z',\boldsymbol{q}',\boldsymbol{r}')\big)$ a weight
$w(a)=\max_{k\in\left\{1,\ldots,S\right\}}f^{\max}(q_k'-\eta,r_{k-1}'+\eta,r'_k)$
where $r'_0=r'_1-z'$.\\
The algorithm consists first in computing a path $\tilde p$ minimizing $\max_{a\in A(p)}w(a)$ among all paths $p$ starting at $(0,\boldsymbol{0},\boldsymbol{0})\in\mathcal{V}$ (the `all zero' vector) and ending at a vertex $(z,\boldsymbol{q},\boldsymbol{r})$ with $r_S=R\eta$. Such paths exist; see Lemma~\ref{lem:loptmax} below. It can be done in $O(|\mathcal{V}||\mathcal{A}|)$ via dynamic programming. Let the vertex sequence of $\tilde p$ be $\big((0,\boldsymbol{0},\boldsymbol{0}),(z_0,\boldsymbol{q}^{0},\boldsymbol{r}^{0}),\ldots,(z_n,\boldsymbol{q}^n,\boldsymbol{r}^{n})\big)$. The vector $\boldsymbol{r}^i$ models the cumulative loads of the $S$ shuttles when they perform their $i$th departure. The vector $\boldsymbol{q}^{i}$ models the times at which the loading of the $S$ shuttles starts when they perform their $i$th departure. These quantities are computed only approximately (with an accuracy $\eta$).
The algorithm consists then in defining recursively
for all $j=iS+k$ with $i=0,\ldots,n$ and $k=1,\ldots,S$
$$\begin{array}{rcl}
\tilde y_{j}&=&\left\{\begin{array}{ll}\min\big(r_{k}^i+\eta,y_{j-1}+C,D(T)\big)&\mbox{if } r^i_k>r^i_{k-1}\\\tilde y_{j-1}&\mbox{otherwise}\end{array}\right.\\
\tilde d_{j}&=&\max(q_{k}^i,\bar\tau(\tilde y_{j}))+j\tilde\gamma\eta+\nu(\tilde y_{j}-\tilde y_{j-1})
\end{array}$$
where $\tilde y_0=0$, $r^i_0=r^{i-1}_S$, $r^0_0=0$, and $\tilde\gamma=(1+2\nu+1/\alpha)$. For $j=(n+1)S+1,\ldots,N$
$$\begin{array}{rcl}
\tilde y_{j}&=&D(T)\\
\tilde d_{j}&=&\max\big(\tilde d_{j-S}+\pi,T\big)+\nu(\tilde y_j-\tilde y_{j-1})
\end{array}$$
and outputting the pair $(\tilde\boldsymbol{d},\tilde\boldsymbol{y})$.\\
As it will be stated below, this $(\tilde\boldsymbol{d},\tilde\boldsymbol{y})$ is a feasible solution of \textup{P$^{\max}_{\mbox{\textup{\tiny return}}}$}{} providing a value to the objective function $g^{\max}(\makebox[1ex]\cdot)$ within a $O\left(\frac {S^2} M\right)$ gap to the optimal value.
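The computation of $\tilde p$ in $O(|\mathcal{V}||\mathcal{A}|)$ is again a bottleneck shortest-path problem, this time without a bound on the number of arcs. One standard way to compute it, sketched below under an adjacency-list encoding of $\mathcal{G}$ (the encoding is our choice), is a Dijkstra-like search with the sum replaced by a maximum:

```python
import heapq

def bottleneck_path(adj, w, source, targets):
    """Minimum, over paths from source to a vertex in targets, of the maximum
    arc weight; Dijkstra's algorithm with + replaced by max."""
    best = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        b, u = heapq.heappop(heap)
        if b > best.get(u, float("inf")):
            continue                      # stale heap entry
        if u in targets:
            return b
        for v in adj.get(u, ()):
            nb = max(b, w[(u, v)])
            if nb < best.get(v, float("inf")):
                best[v] = nb
                heapq.heappush(heap, (nb, v))
    return float("inf")
```

For instance, with arcs $(0,1)$, $(1,3)$, $(0,2)$, $(2,3)$ of respective weights $5,1,2,3$, the best $0$-$3$ path is through vertex $2$, with bottleneck value $3$.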
\subsubsection*{Proof of Theorem~\ref{thm:pmaxret}}
We provide four lemmas. The proof of Lemma~\ref{lem:t+} is almost identical to that of Proposition~\ref{prop:finite_max}, and the proofs of the three others follow the same scheme as those of Lemmas~\ref{lem:kopt}, \ref{lem:feas}, and~\ref{lem:byy}. They are thus omitted. Theorem~\ref{thm:pmaxret} results immediately from their combination.
\begin{lemma}\label{lem:loptmax}
For every optimal solution $(\boldsymbol{d}^*,\boldsymbol{y}^*)$, there is a path $p$ starting at $(0,\boldsymbol{0},\boldsymbol{0})\in\mathcal{V}$ and ending at a vertex $(z,\boldsymbol{q},\boldsymbol{r})$ with $r_S=R\eta$ and such that
$$\max_{a\in A(p)} w(a)\leq g^{\max}(\boldsymbol{d}^*,\boldsymbol{y}^*).$$
\end{lemma}
\begin{lemma}\label{lem:t+}
There is an optimal solution of \textup{P$^{\max}_{\mbox{\textup{\tiny return}}}$}{} for which $T^+$ is an upper bound on the loading time of the last departure.
\end{lemma}
\begin{lemma}\label{lem:feasret}
The pair $(\tilde \boldsymbol{d}, \tilde \boldsymbol{y})$ is a feasible solution of \textup{P$^{\max}_{\mbox{\textup{\tiny return}}}$}.
\end{lemma}
\begin{lemma}\label{lem:bllyymax}
The following inequality holds:
$$g^{\max}(\tilde\boldsymbol{d},\tilde\boldsymbol{y})\leq\max_{a\in A(\tilde p)} w(a)+O\left(\frac {S^2}M\right).$$
\end{lemma}
\section{Experimental results}\label{sec:experiments}
In this section, we test the performance of the algorithms described in previous sections for problems \textup{P$^{\max}_{\mbox{\textup{\tiny no return}}}$}{}, \textup{P$^{\ave}_{\mbox{\textup{\tiny no return}}}$}{}, and \textup{P$^{\max}_{\mbox{\textup{\tiny return}}}$}. As explained in Section~\ref{sec:mainresults}, we do not have such an algorithm for problem \textup{P$^{\ave}_{\mbox{\textup{\tiny return}}}$}.
\subsection{Data}
Our experiments are based on real data provided by our partner Eurotunnel. They are related to the transportation of freight trucks between France and Great Britain. Some parameters are fixed as constants of the problems and do not vary from one instance to another. For the constants $C,T,\nu,\pi$ of our problem, we take the real values used in practice by the company:
$$
C=32,\quad T=1440\min \mbox{ (one day)}, \quad \nu=0.625\min,\quad \pi=34\min.
$$
(The value taken for $\pi$ is actually the duration of a trip going from France to Great Britain, and not of the round trip, which lasts approximately twice this quantity.)
Two functions $D(\makebox[1ex]\cdot)$ are used. The first one (``1P'') is a piecewise affine map obtained by averaging the real demand over several days. It turns out that this function has a peak period in the morning. The second function (``2P''), also piecewise affine, is obtained from the first by artificially adding a second peak period in the evening. In both cases, $D(\makebox[1ex]\cdot)$ is increasing and $D(T)=2016$. For problems \textup{P$^{\max}_{\mbox{\textup{\tiny no return}}}$}{} and \textup{P$^{\ave}_{\mbox{\textup{\tiny no return}}}$}{}, we consider $S\in[100,250]$ since the number of shuttle trips in every direction is within this range for a typical day.
The numerical experiments are performed on a 2014 MacBook Pro with four 2.2 GHz processors and 16 GB of RAM.
\subsection{Results}
The problems \textup{P$^{\max}_{\mbox{\textup{\tiny no return}}}$}{}, \textup{P$^{\ave}_{\mbox{\textup{\tiny no return}}}$}{}, and \textup{P$^{\max}_{\mbox{\textup{\tiny return}}}$}{} are solved with algorithms described in this article. The results are summarized in the following tables.
Table~\ref{tab:Pmaxnoret} gives the numerical results for problem \textup{P$^{\max}_{\mbox{\textup{\tiny no return}}}$}. The first column indicates which demand function is used, and the second column the number of shuttles $S$ in the fleet. The third column provides the parameter $\varepsilon$ of the algorithm, which is an \textit{a priori} upper bound on the optimality gap (Corollary~\ref{cor:approx}). The next two columns give respectively the lower bound and the upper bound (the value of the feasible solution returned by the algorithm), both expressed in minutes. The next column is the optimality gap. The last column reports the CPU time spent solving the problem.
\begin{table}[h]
\begin{tabular}{rr|r|rrr|r}
\multicolumn{1}{c}{$D$} & \multicolumn{1}{c|}{$S$} & \multicolumn{1}{c|}{$\varepsilon$} & \multicolumn{1}{c}{LB} & \multicolumn{1}{c}{UB} & \multicolumn{1}{c|}{gap} & \multicolumn{1}{c}{CPU}\\&&&&&\multicolumn{1}{c|}{(\%)}&\multicolumn{1}{c}{(\second)}\\
\hline
1P & 100 & $10^{-4}$ &27.2 & 27.2 & 0.0 & 0 \\
1P & 150 & $10^{-4}$ &18.1 & 18.1 & 0.0 & 0 \\
1P & 200 & $10^{-4}$ &13.6 & 13.6 & 0.0 & 0 \\
1P & 250 & $10^{-4}$ &11.0 & 11.0 & 0.0 & 0 \\
\hline
2P & 100 & $10^{-4}$ &27.0 & 27.0 & 0.0 & 0 \\
2P & 150 & $10^{-4}$ &18.0 & 18.0 & 0.0 & 0 \\
2P & 200 & $10^{-4}$ &13.5 & 13.5 & 0.0 & 0 \\
2P & 250 & $10^{-4}$ &10.8 & 10.8 & 0.0 & 0 \\
\end{tabular}
\bigskip
\caption{Numerical results for problem \textup{P$^{\max}_{\mbox{\textup{\tiny no return}}}$}\label{tab:Pmaxnoret}}
\end{table}
Table~\ref{tab:Pavenoret} gives the numerical results for problem \textup{P$^{\ave}_{\mbox{\textup{\tiny no return}}}$}. The columns are the same as for Table~\ref{tab:Pmaxnoret}, except for the third one, which here provides the parameter $M$ of the algorithm. We know from Theorem~\ref{thm:pave} that the gap between the upper bound and the lower bound converges to 0 when $M$ goes to infinity. We tried $M=32$ and $M=128$.
\begin{table}[h]
\begin{tabular}{rr|r|rrr|r}
\multicolumn{1}{c}{$D$} & \multicolumn{1}{c|}{$S$} & \multicolumn{1}{c|}{$M$} & \multicolumn{1}{c}{LB} & \multicolumn{1}{c}{UB} & \multicolumn{1}{c|}{gap} & \multicolumn{1}{c}{CPU}\\&&&&&\multicolumn{1}{c|}{(\%)}&\multicolumn{1}{c}{(\second)}\\
\hline
1P & 100 & 32 &17.3 & 19.2 & 10.0 & 34 \\
1P & 100 & 128 &18.7 & 19.2 & 2.5 & 1930 \\
\hline
1P & 200 & 32 &7.7 & 9.6 & 19.4 & 70 \\
1P & 200 & 128 &9.1 & 9.6 & 5.0 & 4035 \\
\hline
2P & 100 & 32 &17.5 & 19.4 & 9.9 & 38 \\
2P & 100 & 128 &18.9 & 19.4 & 2.5 & 2387 \\
\hline
2P & 200 & 32 &7.9 & 9.7 & 19.2 & 76 \\
2P & 200 & 128 &9.2 & 9.7 & 5.0 & 4463 \\
\end{tabular}
\bigskip
\caption{Numerical results for problem \textup{P$^{\ave}_{\mbox{\textup{\tiny no return}}}$}\label{tab:Pavenoret}}
\end{table}
Table~\ref{tab:Pmaxret} gives the numerical results for problem \textup{P$^{\max}_{\mbox{\textup{\tiny return}}}$}. The columns are the same as for Table~\ref{tab:Pavenoret}. Since the computation time was prohibitively long as soon as $S\geq 2$, we made experiments for $S=1$. To get realistic waiting times for the users, we divided the demand functions by 3.5, leading to the demand functions ``1P$^*$'' and ``2P$^*$''. Again, we know from Theorem~\ref{thm:pmaxret} that for large $M$ we will be close to the optimal solution; we tried $M=16$ and $M=32$.
\begin{table}[h]
\begin{tabular}{rr|r|rrr|r}
\multicolumn{1}{c}{$D$} & \multicolumn{1}{c|}{$S$} & \multicolumn{1}{c|}{$M$} & \multicolumn{1}{c}{LB} & \multicolumn{1}{c}{UB} & \multicolumn{1}{c|}{gap} & \multicolumn{1}{c}{CPU}\\&&&&&\multicolumn{1}{c|}{(\%)}&\multicolumn{1}{c}{(\second)}\\
\hline
1P$^*$ & 1 & 16 &168.6 & 214.2 & 21.3 & 104 \\
1P$^*$ & 1 & 32 &184.5 & 210.8 & 12.5 & 1654 \\
\hline
2P$^*$ & 1 & 16 &101.0 & 131.0 & 22.9 & 106\\
2P$^*$ & 1 & 32 &106.9 & 126.3 & 15.4 & 1848 \\
\end{tabular}
\bigskip
\caption{Numerical results for problem \textup{P$^{\max}_{\mbox{\textup{\tiny return}}}$}\label{tab:Pmaxret}}
\end{table}
\subsection{Comments}
In Table~\ref{tab:Pmaxnoret}, the results for problem \textup{P$^{\max}_{\mbox{\textup{\tiny no return}}}$}{} are extremely convincing: the optimal solutions were found almost immediately.
In Table~\ref{tab:Pavenoret}, the algorithm for problem \textup{P$^{\ave}_{\mbox{\textup{\tiny no return}}}$}{} was able to find provably good solutions within reasonable computation times. We may note that increasing $M$ beyond some threshold does not seem to improve the quality of the returned solution. This was confirmed by other experiments not shown here. It may indicate that the algorithm could be used efficiently in practice.
In Table~\ref{tab:Pmaxret}, the same holds for \textup{P$^{\max}_{\mbox{\textup{\tiny return}}}$}{} once we accept working with a single shuttle. Finding an efficient algorithm for at least two shuttles seems to remain a challenging task.
\section{Proofs of Lemmas of Section~\ref{subsec:pave}}
\begin{claim}\label{lem:low_y}
We have $r_j\leq \tilde y_j\leq r_j+\eta$ for $j=0,\ldots,n$.
\end{claim}
\begin{proof}
We have $\tilde y_j\leq r_j+\eta$ by definition. Using $r_j-r_{j-1}\leq M\eta$ in a feasible path, a direct induction shows that $\tilde y_j\geq r_j$ for $j=0,\ldots,n$.
\end{proof}
\begin{claim}\label{lem:infbarD}
Suppose that $\alpha>0$. Then for all $y\in[0,D(T)]$ and $\delta\in[0,D(T)-y]$, we have $\bar\tau(y+\delta)\leq\bar\tau(y)+\delta/\alpha$ and $\tau(y+\delta)\leq\tau(y)+\delta/\alpha$.
\end{claim}
\begin{proof}
Diewert~\cite{diewert1981alternative} extended the Mean Value Theorem to semicontinuous functions. According to his result,
for any $0\leq a\leq b\leq T$, there exists $c\in[a,b)$ such that
$$
\limsup_{t\to 0^+} \frac{D(c+t)-D(c)}{t}\leq \frac{D(b)-D(a)}{b-a}.
$$
Since $$\alpha=\inf_{t\in[0,T)}D'_+(t)\leq D'_+(c)=\limsup_{t\to 0^+} \frac{D(c+t)-D(c)}{t},$$ we have $D(a)+\alpha(b-a)\leq D(b)$ for any $0\leq a\leq b\leq T$. With $a=\bar\tau(y)$ and $b=\bar\tau(y)+\delta/\alpha$, we get $y+\delta\leq D(\bar\tau(y))+\delta\leq D(\bar\tau(y)+\delta/\alpha)$ (the first inequality is given by Lemma~\ref{lem:pseudo}). By definition of $\bar\tau$, we have $\bar\tau(y+\delta)\leq\bar\tau(y)+\delta/\alpha$.
The second inequality is proved along the same lines.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:kopt}]
Let $(\boldsymbol{d}^*,\boldsymbol{y}^*)$ be an optimal solution of \textup{P$^{\ave}_{\mbox{\textup{\tiny no return}}}$}{} such that $d_j^*=\bar\tau(y_j^*)+\nu(y_j^*-y_{j-1}^*)$ for all $j\in\{1,\ldots,S\}$ (Claim~\ref{claim:remove_dj}). Consider the sequence $\lfloor y^*_1/\eta\rfloor\eta,\ldots,\lfloor y^*_S/\eta\rfloor\eta$ and remove the repetitions. Since the sequence is nondecreasing, we obtain an increasing sequence $\boldsymbol{r}=r_1,\ldots,r_n$. We introduce $\sigma\colon\{1,\ldots,n\}\rightarrow\{1,\ldots,S\}$ with $\sigma(j)$ being the smallest index such that $r_j=\lfloor y_{\sigma(j)}^*/\eta\rfloor\eta$. We then define $z_j=r_j-r_{j-1}$ for $j\in\left\{1,\ldots,n\right\}$, with $r_0=0$. We prove that the sequence $(z_j,r_j)_{j\in\left\{1,\ldots,n\right\}}$ provides a feasible path from the vertex $(0,0)$ to $(z_n,r_n)$ in $\mathcal{G}$. First note that $r_n=R\eta$ since $y_S^*=D(T)$ and that $z_j>0$. For all $j\in\left\{1,\ldots,n\right\}$, we have $z_j=r_j-r_{j-1}=\left(\lfloor y_{\sigma(j)}^*/\eta\rfloor-\lfloor y_{\sigma(j)-1}^*/\eta\rfloor+\lfloor y_{\sigma(j)-1}^*/\eta\rfloor-\lfloor y_{\sigma(j-1)}^*/\eta\rfloor\right)\eta< M\eta+\eta$, since $\lfloor y_{\sigma(j)-1}^*/\eta\rfloor=\lfloor y_{\sigma(j-1)}^*/\eta\rfloor$ and $y^*_{\sigma(j)}-y^*_{\sigma(j)-1}\leq C$. Thus $z_j\leq M\eta$. Moreover by definition, $r_j\leq R\eta$. Therefore $(z_j,r_j)\in\mathcal{V}$ for all $j\in\left\{1,\ldots,n\right\}$. Let us now prove that $((z_{j-1},r_{j-1}),(z_j,r_j))\in\mathcal{A}$ for all $j\in\left\{2,\ldots,n\right\}$. By definition, $z_j+r_{j-1}=r_j$.
Note that because of the definition of $r_j$, we have $r_j\leq y_{\sigma(j)}^*\leq y_{\sigma(j+1)-1}^*<r_j+\eta$.
Combining these inequalities for all $j$ with Claim~\ref{lem:infbarD} leads to
\begin{eqnarray*}
\bar\tau(r_j)-\bar\tau(r_{j-1})+\nu(z_j-z_{j-1})&\geq & \bar\tau(y_{\sigma(j)}^*)-\eta/\alpha-\bar\tau(y_{\sigma(j-1)}^*) \\
& & \qquad+\nu(y_{\sigma(j)}^*-y_{\sigma(j)-1}^*-y_{\sigma(j-1)}^*+y_{\sigma(j-1)-1}^*-2\eta)\\
& = & d_{\sigma(j)}^*-d_{\sigma(j-1)}^*-(1/\alpha+2\nu)\eta \\
&\geq& -(1/\alpha+2\nu)\eta.
\end{eqnarray*}
The sequence $(z_j,r_j)_{j\in\left\{1,\ldots,n\right\}}$ is then a feasible path $p$ from the vertex $(0,0)$ to $(z_n,r_n)$ in $\mathcal{G}$, with at most $S$ arcs. The only thing that remains to be checked is that the claimed inequality holds.
We have $f^{\operatorname{ave}}( d_{\sigma(j)}^*, y_{\sigma(j)-1}^*, y_{\sigma(j)}^*)\geq f\big(\bar\tau(r_j)+\nu(z_j-\eta),r_{j-1}+\eta, r_j\big)$ for all $j\in\left\{1,\ldots,n\right\}$ since $f^{\operatorname{ave}}(\makebox[1ex]\cdot)$ is nonincreasing in the second term and nondecreasing in the first and third terms. Thus, $$\frac 1 {D(T)}\sum_{a\in A(p)}w(a)\leq\frac 1 {D(T)}\sum_{j=1}^nf^{\operatorname{ave}}( d_{\sigma(j)}^*, y_{\sigma(j)-1}^*, y_{\sigma(j)}^*)
\leq g^{\ave}(\boldsymbol{d}^*,\boldsymbol{y}^*).$$ Since this inequality holds for any optimal solution of \textup{P$^{\ave}_{\mbox{\textup{\tiny no return}}}$}{}, we get the conclusion.
\end{proof}
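The rounding construction at the start of the proof of Lemma~\ref{lem:kopt} — flooring each $y_j^*$ to the grid $\{0,\eta,2\eta,\ldots\}$ and removing repetitions to obtain the increasing sequence $r_1,\ldots,r_n$ — can be sketched in a few lines of Python. The input sequence and step $\eta$ below are illustrative, not from the paper.

```python
import math

def round_to_grid(y_star, eta):
    """Floor each y_j^* to the grid {0, eta, 2*eta, ...} and drop
    consecutive repetitions, as in the proof of Lemma kopt.
    Returns the (strictly) increasing sequence r_1, ..., r_n."""
    r = []
    for y in y_star:  # y_star is assumed nondecreasing
        v = math.floor(y / eta) * eta
        if not r or v > r[-1]:
            r.append(v)
    return r

# Illustrative nondecreasing sequence y* with eta = 0.5
print(round_to_grid([0.7, 1.2, 1.3, 2.6], 0.5))  # [0.5, 1.0, 2.5]
```

The resulting sequence is strictly increasing by construction, which is what makes $(z_j, r_j)$ with $z_j = r_j - r_{j-1} > 0$ a candidate path in $\mathcal{G}$.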
\begin{proof}[Proof of Lemma~\ref{lem:feas}]
We are going to check that $(\tilde\boldsymbol{d},\tilde\boldsymbol{y})$ is feasible for \textup{P$^{\ave}_{\mbox{\textup{\tiny no return}}}$}{}.
For $j=1,\ldots,n$, we have $\tilde y_j-\tilde y_{j-1}\leq C$ by definition of $\tilde \boldsymbol{y}$. For $j=n+2,\ldots,S$, we have $\tilde y_j-\tilde y_{j-1}=0$. Finally, we have $\tilde y_{n+1}-\tilde y_n\leq D(T)-r_n<\eta\leq C$ (where we use Claim~\ref{lem:low_y} to bound $\tilde y_n$). Thus, $\tilde \boldsymbol{y}$ satisfies constraint (i).
For $j=1,\ldots,n$, if $r_j>r_{j-1}$, we have $\tilde y_{j-1}\leq r_{j-1}+\eta\leq r_j\leq \tilde y_j$ (the last inequality being Claim~\ref{lem:low_y}) and if $r_j=r_{j-1}$, necessarily $r_j=r_{j-1}=0$ and $\tilde y_{j-1}=\tilde y_j=\eta$.
Thus, $\tilde \boldsymbol{y}$ satisfies constraint (ii).
Consider $j\in\{2,\ldots,n\}$. We have
\begin{eqnarray*}
\tilde d_j-\tilde d_{j-1} & = & \bar\tau(\tilde y_j)+\nu(\tilde y_j-\tilde y_{j-1})-\bar\tau(\tilde y_{j-1})-\nu(\tilde y_{j-1}-\tilde y_{j-2})+\gamma\eta \\
& \geq & \bar\tau(r_j)-\bar\tau(r_{j-1}+\eta)+\nu(r_j-2r_{j-1}+r_{j-2}-2\eta)+\gamma\eta \\
& \geq & \bar\tau(r_j)-\bar\tau(r_{j-1})-\eta/\alpha+\nu(z_j-z_{j-1}-2\eta)+\gamma\eta\\
& \geq & 0.
\end{eqnarray*} The first inequality is obtained with the help of Claim~\ref{lem:low_y}. For the second one, we use Claim~\ref{lem:infbarD} and also that $z_j=r_j-r_{j-1}$ and $z_{j-1}=r_{j-1}-r_{j-2}$ which hold because $\tilde p=\big((z_0,r_0),(z_1,r_1),\ldots,(z_n,r_n)\big)$ is a path in $\mathcal{G}$. For the last inequality, we use $\bar\tau(r_j)-\bar\tau(r_{j-1})+\nu(z_j-z_{j-1})+\frac 1 2 \gamma\eta\geq 0$, which holds again because $\tilde p$ is a path, and the definition of $\gamma$. For $j\geq n+1$, we have $\tilde d_j\geq \tilde d_{j-1}$ by definition. Constraint (iii) is thus satisfied for all $j$.
If $n<S$, then $\tilde y_S=D(T)$ by definition. From now on, we thus suppose that $n=S$. We also suppose that $S\geq 2$, the case $S=1$ being easy to check (and, from a complexity point of view, this case does not matter). If $\tilde y_{S-1}=r_{S-1}+\eta$, then $\tilde y_{S-1}+C=r_{S-1}+\eta+C\geq r_S+\eta>D(T)$ (here we use that $z_S\leq C$ and that $r_S=R\eta$) and thus $\tilde y_S=D(T)$. If $\tilde y_{S-1}=D(T)$, then $\tilde y_S=D(T)$ since $\tilde y_{S-1}\leq \tilde y_S\leq D(T)$. Hence, in all these cases, $\tilde \boldsymbol{y}$ satisfies constraint (iv). The only remaining case is when $\tilde y_{S-1}=\tilde y_{S-2}+C$. If $j$ is an index in $\left\{1,\ldots,S-2\right\}$ such that $\tilde y_j=r_j+\eta$, then we have $r_{j+1}+\eta\leq r_j+C+\eta=\tilde y_j+C$ and $r_{j+1}+\eta\leq D(T)$, and thus $\tilde y_{j+1}=r_{j+1}+\eta$. This implies that as soon as some $j_0\in\left\{1,\ldots,S-1\right\}$ is such that $\tilde y_{j_0}=r_{j_0}+\eta$, we have $\tilde y_{S-1}=r_{S-1}+\eta$, which is a case we have already dealt with. Since $r_j+\eta\leq r_S\leq D(T)$ for $j\in\{1,\ldots,S-1\}$, we are left with the case where $\tilde y_j=\tilde y_{j-1}+C$ for every $j\in\left\{1,\ldots,S-1\right\}$. In this situation, we have $\tilde y_{S-1}=(S-1)C$
and hence $\tilde y_{S-1}+C=CS\geq D(T)$. Since $r_S+\eta> D(T)$, we get that $\tilde y_S=D(T)$, and $\tilde \boldsymbol{y}$ satisfies constraint (iv) in every case.
For $j=1,\ldots,n$, we have $\tilde d_j\geq \bar\tau(\tilde y_j)+\nu(\tilde y_j-\tilde y_{j-1})$ by definition, and for $j\geq n+1$, we have $\tilde d_j\geq T+\nu(\tilde y_{n+1}-\tilde y_n)\geq\bar\tau(\tilde y_j)+\nu(\tilde y_j-\tilde y_{j-1})$. Thus $\tilde \boldsymbol{d}$ satisfies constraint (v) and $(\tilde \boldsymbol{d},\tilde \boldsymbol{y})$ is feasible for \textup{P$^{\ave}_{\mbox{\textup{\tiny no return}}}$}{}.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:byy}]
Our goal is to bound from above the following quantity \begin{equation}\label{eq:g}
g^{\ave}(\tilde \boldsymbol{d},\tilde \boldsymbol{y})=\frac 1 {D(T)}\sum_{j=1}^Sf^{\operatorname{ave}}(\tilde d_j,\tilde y_{j-1},\tilde y_j).
\end{equation}
We proceed by splitting the expression into two parts: the sum from $j=1$ to $j=n$, and the sum from $j=n+1$ to $j=S$.
Using Claims~\ref{lem:low_y} and~\ref{lem:infbarD}, we have
$\bar\tau(\tilde y_j)+\nu(\tilde y_j-\tilde y_{j-1})\leq q_j+\eta/\alpha+\nu\eta$, where $q_j=\bar\tau(r_j)+\nu(r_j-r_{j-1})$.
Thus, summing over $j\leq n$, we have
\begin{equation}\label{eq:B1}
\sum_{j=1}^nf^{\operatorname{ave}}(\tilde d_j,\tilde y_{j-1},\tilde y_j)\leq \sum_{j=1}^nf^{\operatorname{ave}}(q_j+\eta/\alpha+\nu\eta+j\gamma\eta,r_{j-1},r_j+\eta),
\end{equation}
since $f^{\operatorname{ave}}(\makebox[1ex]\cdot)$ is nonincreasing in the second term and nondecreasing in the first and third terms and where we extend the definition of $\bar\tau(\makebox[1ex]\cdot)$ by letting $\bar\tau(y)=T$ for all $y>D(T)$.
For the second part, we proceed as follows. Since $r_n+\eta=(R+1)\eta>D(T)$, Claim~\ref{lem:low_y} immediately implies $D(T)-\tilde y_n\leq\eta$. With Claim~\ref{lem:infbarD}, we get thus $T\leq\bar\tau(\tilde y_n)+\eta/\alpha$, where we used $T=\bar\tau\big(D(T)-\tilde y_n+\tilde y_n\big)$. This provides
$$\tilde d_{n+1}\leq\bar\tau(\tilde y_n)+\eta/\alpha +\nu(r_n-r_{n-1})+\nu\eta+n\gamma\eta=q_n+(1/\alpha +\nu+n\gamma)\eta.$$
Using again the fact that $f^{\operatorname{ave}}(\makebox[1ex]\cdot)$ is nonincreasing in the second term and nondecreasing in the first and third terms and with the help of Claim~\ref{lem:low_y}, we get
\begin{equation}\label{eq:B2}
\sum_{j=n+1}^Sf^{\operatorname{ave}}(\tilde d_{j},\tilde y_{j-1},\tilde y_{j})\leq f^{\operatorname{ave}}(q_n+\eta/\alpha+\nu\eta+n\gamma\eta,r_n,r_n+\eta),\end{equation}
since the terms indexed by $j=n+2,\ldots,S$ are all zero and since $D(T)<r_n+\eta$.
We aim at comparing the upper bounds in Equations~\eqref{eq:B1} and~\eqref{eq:B2} with
\begin{equation}\label{eq:w}
\sum_{a\in A(\tilde p)}w(a)=\sum_{j=1}^nf^{\operatorname{ave}}(q_j-\nu\eta,r_{j-1}+\eta,r_j).
\end{equation}
We first compare the $j$th term of the bound in~\eqref{eq:B1} with the $j$th term of the sum in~\eqref{eq:w}.
$$
f^{\operatorname{ave}}(q_j+\eta/\alpha+\nu\eta+j\gamma\eta,r_{j-1},r_j+\eta)-f^{\operatorname{ave}}(q_j-\nu\eta,r_{j-1}+\eta,r_j)=I_j^1+I_j^2+I_j^3
$$ with
\begin{eqnarray*}
I_j^1 &=& \int_{r_{j-1}}^{r_{j-1}+\eta}\big(q_j+\eta/\alpha+\nu\eta+j\gamma\eta-\bar\tau(u)\big)du\\
I_j^2&=&\int_{r_{j-1}+\eta}^{r_{j}}\big(j\gamma\eta+\eta/\alpha+2\nu\eta\big)du\\
I_j^3 &=& \int_{r_{j}}^{r_{j}+\eta}\big(q_j+\eta/\alpha+\nu\eta+j\gamma\eta-\bar\tau(u)\big)du.
\end{eqnarray*}
Since $\bar\tau(\makebox[1ex]\cdot)$ is nondecreasing, we get
\begin{eqnarray*}
I_j^1 &\leq& \big(\bar\tau(r_j)-\bar\tau(r_{j-1})+\nu(r_j-r_{j-1})\big)\eta+(1/\alpha+\nu+j\gamma)\eta^2\\
I_j^2&\leq& (r_j-r_{j-1})(j\gamma+1/\alpha+2\nu)\eta-(j\gamma+1/\alpha+2\nu)\eta^2\\
I_j^3&\leq& \nu(r_j-r_{j-1})\eta+(1/\alpha+\nu+j\gamma)\eta^2.
\end{eqnarray*}
Using $j\gamma\leq n\gamma$ and $\gamma=2(1/\alpha+2\nu)$, we obtain
$$I_j^1+I_j^2+I_j^3\leq \big(\bar\tau(r_j)-\bar\tau(r_{j-1})+2\nu(r_j-r_{j-1})\big)\eta+(n+1/2)\gamma\eta^2+(r_j-r_{j-1})(n+1/2)\gamma\eta.$$
We now bound the term in Equation~\eqref{eq:B2}. Let $I=f^{\operatorname{ave}}(q_n+\eta/\alpha+\nu\eta+n\gamma\eta,r_n,r_n+\eta)$. We have
\begin{eqnarray*}
I &=& \int_{r_n}^{r_n+\eta}\big(q_n+\eta/\alpha+\nu\eta+n\gamma\eta-\bar\tau(u)\big)du\\
&\leq & \nu(r_n-r_{n-1})\eta+(1/\alpha+\nu+n\gamma)\eta^2 .
\end{eqnarray*}
We have thus
\begin{eqnarray*}
g^{\ave}(\tilde \boldsymbol{d},\tilde \boldsymbol{y})-\frac 1 {D(T)}\sum_{a\in A(\tilde p)}w(a) & \leq & \frac 1 {D(T)}\left(\sum_{j=1}^n(I_j^1+I_j^2+I_j^3)+I\right) \\
& \leq & \frac 1 {D(T)}\left(\bar\tau(r_n)+2\nu r_n+r_n(n+1)\gamma+\nu C+(n+1)^2\gamma \eta\right)\eta.
\end{eqnarray*}
Using $r_n\leq D(T)$ and $\bar\tau(r_n)\leq T$ leads to
$$
g^{\ave}(\tilde \boldsymbol{d},\tilde \boldsymbol{y})\leq \frac 1{D(T)}\sum_{a\in A(\tilde p)}w(a)+\left(\frac{T+\nu C}{D(T)}+\gamma (S+1)\right)\eta+\frac {\gamma (S+1)^2}{D(T)}\eta^2.
$$
\end{proof}
\bibliographystyle{plain}
\section*{Introduction}
\input{sections/intro.tex}
\section*{Related Work}
\input{sections/related.tex}
\section*{Methods}
\subsection*{From electronic health records to medical diagnosis training data}
\input{sections/ehr.tex}
\label{sec:ehr}
\subsection*{Models}
\input{sections/methods.tex}
\subsection*{Evaluation dataset and metrics}
\input{sections/dataset.tex}
\section*{Results}
\input{sections/exp.tex}
\label{sec:experiments}
\section*{Discussion}
\input{sections/discussion.tex}
\label{sec:discussion}
\section*{Conclusion}
\input{sections/future_work.tex}
\section{Methods}
We formulate clinical differential diagnosis as a classification task. We assume access to a labeled data set of $N$ clinical cases, $S = \{(\mathbf{x}_1, y_1), \ldots , (\mathbf{x}_N, y_N)\}$, derived from electronic health records. Each $\mathbf{x}_n$ is an observed clinical case with $\mathbf{x}_n \subseteq \mathcal{F}$, where $\mathcal{F} = \{f_1, \ldots , f_K\}$ is a set of $K$ findings, and $y_n \in \{1, \ldots, L\}$ is the corresponding diagnosis.
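Since each case is a subset of the finding vocabulary $\mathcal{F}$, one natural input representation is a sparse multi-hot vector over the $K$ findings. The sketch below assumes this representation; the finding names are invented for illustration and are not from the paper's vocabulary.

```python
# Hypothetical finding vocabulary F = {f_1, ..., f_K}; names are illustrative.
findings = ["cough", "fever", "headache", "dysuria", "nasal congestion"]
index = {f: k for k, f in enumerate(findings)}

def encode(case):
    """Encode a clinical case x_n (a subset of F) as a K-dimensional
    multi-hot list -- the sparse input representation assumed here."""
    x = [0.0] * len(findings)
    for f in case:
        x[index[f]] = 1.0
    return x

print(encode({"cough", "fever"}))  # [1.0, 1.0, 0.0, 0.0, 0.0]
```

This high-dimensional, very sparse input space is also what makes linear models competitive in the experiments later in the paper.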
\subsection{Dataset construction for model training}
\label{sec:training_set}
Here, we describe general approaches for automatically creating data sets for model training for diagnosis; the data sets are constructed using both expert systems and electronic health records. We evaluate the trained models on independent data sets described in \S~\ref{sec:exp}.
\noindent\textbf{Training set from EHR}
We represent the entire EHR in temporal order: data is organized by patient and by time. Each patient may have different types of records, including encounters (visits with the doctor), medications, labs, etc. For the purpose of this work, we use only the encounters. For a patient $p$, let $\mathcal{T}_p = [e_{1,p}, \ldots, e_{t_p,p}]$ represent the timeline (chronological ordering) of all their $t_p$ encounters. \TODO{FIGURE} provides an overview of the timeline of a single patient.
An important consideration in constructing clinical cases specific to a disease is to design electronic phenotypes that can serve as markers for that disease. We direct readers to \cite{Banda18} for an in-depth discussion of electronic phenotyping and the various approaches available. For the purpose of this paper, assume that for a disease $d$, the set of phenotypes $\textrm{phenotype}(d)$ is available. We then consider encounters for a patient with that phenotype. We assign all the encounters within the temporal window to that diagnosis, assuming that the last visit corresponds to resolution of the condition. We define resolution to be when there have been at least $\tau$ days with no follow-up. We construct a set of (patient-facing) findings for that patient corresponding to that diagnosis from the clinical note of the first visit of that temporal window. In particular, we use the chief complaints and the HPI section.
Algo.~\ref{algo:ehr} provides an overview of the algorithm for generating clinical cases for one disease at a time.
\begin{algorithm}
\caption{Algorithm for constructing clinical cases for a single disease.}
\label{algo:ehr}
\begin{algorithmic}[1]
\State {\bf Input}: A disease $d$, its phenotypes $\textrm{phenotype}(d)$, the EHR timelines \{$\mathcal{T}_p$\}, and the resolution time $\tau$
\State {\bf Output}: A set of clinical cases $\mathcal{D}_d = \{(x_1, d), \ldots ,(x_T , d )\}$ corresponding to disease $d$
\State $\mathcal{D}_d \gets \emptyset$
\State $\mathcal{P} \gets$ IDENTIFY-PATIENTS(\{$\mathcal{T}_p$\}, $\textrm{phenotype}(d)$)
\Comment{Select patients that have diagnosis $d$.}
\For{$p \in \mathcal{P}$}
\State $(t_{s,p}, t_{e,p}) \gets$ RESOLVED-DISEASE-TIME-WINDOW($p$, $\mathcal{T}_p$, $\textrm{phenotype}(d)$, $\tau$)
\State $x \gets$ EXTRACT-FINDINGS($t_{s,p}$)
\State $\mathcal{D}_d \gets \mathcal{D}_d \cup \{(x,d)\}$
\EndFor
\end{algorithmic}
\end{algorithm}
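Algo.~\ref{algo:ehr} can be sketched in Python under simplifying assumptions about the record schema: each encounter is represented here as a dictionary with a visit time in days, a list of diagnosis codes, an outpatient flag, and a set of extracted findings. These field names, and the toy timeline, are illustrative rather than a real EHR schema.

```python
def build_cases(timelines, phenotype_codes, tau):
    """Sketch of Algo. 1 for one disease. An encounter matches if it is
    an outpatient visit carrying at least one phenotype code; a patient
    contributes a case only if no follow-up occurs within tau days of
    the last matching encounter (the 'resolved' condition)."""
    cases = []
    for patient, encs in timelines.items():
        encs = sorted(encs, key=lambda e: e["time"])
        # IDENTIFY-PATIENTS
        hits = [e for e in encs if e["outpatient"] and
                set(e["codes"]) & phenotype_codes]
        if not hits:
            continue
        # RESOLVED-DISEASE-TIME-WINDOW: any follow-up sooner than tau
        # days after the last matching encounter means "not resolved".
        later = [e for e in encs if e["time"] > hits[-1]["time"]]
        if later and later[0]["time"] - hits[-1]["time"] < tau:
            continue
        # EXTRACT-FINDINGS from the first visit of the window.
        cases.append((hits[0]["findings"], "d"))  # "d" is the disease label
    return cases

timelines = {
    "p1": [{"time": 0, "codes": ["K21.9"], "outpatient": True,
            "findings": {"heartburn"}},
           {"time": 60, "codes": ["J02.9"], "outpatient": True,
            "findings": {"sore throat"}}],
}
print(build_cases(timelines, {"K21.9"}, tau=30))  # [({'heartburn'}, 'd')]
```

With $\tau=30$ the GERD encounter at day 0 is kept because the next visit is 60 days later; with $\tau=90$ the same patient would be skipped as unresolved.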
The method IDENTIFY-PATIENTS identifies all the patients in the EHR who satisfy the following two criteria: (1) the patient $p$ has an encounter $e_{k,p}$ with at least one property in $\textrm{phenotype}(d)$ satisfied, and (2) $e_{k,p}$ is an outpatient visit. The second requirement ensures that we focus on patients presenting with the outpatient manifestation of disease $d$.
The method RESOLVED-DISEASE-TIME-WINDOW identifies the temporal window of encounters for patient $p$ that satisfy $\textrm{phenotype}(d)$ and after which the condition remains resolved for at least $\tau$ days.
Referring back to \TODO{FIGURE}, we also show an example of a clinical case constructed for a patient.
\noindent\textbf{Training set from expert system}
\label{sec:case_simulator}
\input{sections/simulator.tex}
Similar to the work of \cite{ravuri18}, we also used the expert system to generate clinical cases. Algo.~\ref{algo:generator} provides an overview of the algorithm used to generate clinical cases.
\TODO{TODO: may want to include about identifiability, when we incorporate it}
\section{Experiments}
\subsection{Electronic Phenotyping}
\footnotesize
\begin{table*}
\centering
\begin{tabular}{ l| l| l }
\toprule
Disorder & ICD9& ICD10 \\
\midrule
strep throat& 034.0, 462& J02.00, J02.9, J03.01, J03.00\\
allergic rhinitis& 477.0 477.1, 477.2, 477.8, 477.9& J30.1, J30.2, J30.5, J30.8, J30.81, J30.89, J30.9 \\
acute bronchitis& 466.0, 466.19 &J20.2, J20.3, J20.4, J20.5, J20.6, J20.7, J20.8, J20.9 \\
bacterial conjunctivitis& 372.03& H10.31, H10.32, H10.33, H10.021, H10.022, H10.023, H10.029, H10.30\\
viral conjunctivitis& 077.99, 077.8, 077.3& B30.0, B30.1, B30.2, B30.8, B30.9\\
infectious mononucleosis& 075& B27.0, B27.00, B27.01, B27.02, B27.09, B27.90\\
constipation& 564.00, 564.01, 564.09& K59.0, K59.00, K59.01, K59.02, K59.09\\
GERD &530.11, 530.81 & K21.0, K21.9\\
migraine& 346.0, 346.1, 346.2, 346.4, 346.9& G43, G43.1, G43.4, G43.5, G43.9\\
tension headache&307.81, 339.10 &G44.20, G44.201, G44.209, G44.21, G44.211, G44.219\\
cluster headache&339.00, 339.01& G44.00, G44.001, G44.009, G44.01, G44.011, G44.019\\
acute gastroenteritis& 008.61, 008.62, 008.69, 008.8, 009.0, 009.1, 009.2, 009.3& A09, A08.0, A08.1, A08.2, A08.3, A08.4, A08.8\\
UTI &599.0 &N39.0, N30.0, N30.9\\
Candidal vulvovaginitis & 112.1& B37.3\\
Influenza &487.1, 488.81, 488.82, 488.89, 497.1& J09, J09.X1, J09.X2, J09.X3, J09.X0\\
\bottomrule
\end{tabular}
\caption{ICD codes that are used in electronic phenotypes for identifying the diagnosis for a patient. If at an encounter a patient has been assigned a code (e.g. K21.9) and did not have any subsequent follow-up in the next 30 days, then that encounter is used as a positive example for the corresponding disease (in this example, GERD).}
\end{table*}
\normalsize
\subsection{Dataset}
\noindent\textbf{Training set:} We use the approaches described in \S~\ref{sec:training_set} to generate training sets for model training. We focus on \TODO{$K$} diseases that are common in outpatient clinics.
\TODO{describe the dataset -- years considered, number of data points}
{\bf Specific details for EHR:} We constrained phenotypes to be ICD-9 and ICD-10 codes and used $\tau=30$ days for disease resolution. We use an in-house medical entity extractor to identify the patient-facing symptoms/findings from the clinical notes. \TODO{TODO: describe entity extractor}
\TODO{TODO: validate: We also explored if account visits prior to the diagnosis of interest to gather chief complaints. For acute, out patient visits, we found those pre-visits to be rare and hence did not pursue this approach}
{\bf specific details for Expert systems:}
\\
\noindent\textbf{Evaluation set:}
We used two sources of data for evaluation. The first is the Semigran dataset \cite{Semigran}. This dataset consists of 45 standardized patient clinical vignettes. The vignettes also have simplified inputs that are amenable to online symptom checkers. We used these simplified inputs for evaluation. The data sets had three categories of triage urgency: emergent care required (for example, pulmonary embolism), non-emergent care reasonable (for example, otitis media), and self care reasonable (for example, viral upper respiratory tract infection).
As a second data set for evaluation, we constructed an in-house set of clinical cases authored by medical doctors. Note that the doctors were unaware of the exact strategy we used to construct the training set. For each of the $K$ diseases in our training set, the doctors provided $N$ distinct clinical vignettes (in the form of structured findings), resulting in a total of $K\times N$ cases. \TODO{TODO: details of this data set}
\subsection{Metrics}
We report recall@$k$ ($k \in \{5,10\}$):
\begin{equation}
\textrm{recall@}k = \frac{\sum_{t=1}^{T} \sum_{j=1}^{k} I[\hat{y}^{(t)}[j] = y^{(t)}] }{T}.
\end{equation}
This metric (also called \emph{sensitivity}) is valuable in deployment contexts that involve aiding doctors in diagnosis, as it ensures that the relevant disease condition is considered within a small number of false positives.
We also report the mean of per-class accuracy ($\textrm{mca}$). For a data set consisting of $C$ classes, with $T_c$ examples in class $c$, it is the average of the per-class accuracies:
\begin{equation}
\textrm{mca} = \frac{1}{C}\sum_{c=1}^{C}\frac{\sum_{t=1}^{T_c} I[\hat{y}^{(t)}[0] = y^{(t)}]}{T_c},
\end{equation}
where, for the $t^{\rm th}$ example, $\hat{y}^{(t)}[j]$ is the $j^{\rm th}$ top class predicted by the model, $y^{(t)}$ is its corresponding ground-truth label, and $I$ denotes the indicator function.
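The two metrics above translate directly into code. The toy predictions below are invented for illustration; each prediction is a ranked list of classes.

```python
def recall_at_k(preds, labels, k):
    """preds[t] is the ranked list of predicted classes for example t;
    labels[t] is the ground-truth class."""
    hits = sum(1 for p, y in zip(preds, labels) if y in p[:k])
    return hits / len(labels)

def mean_class_accuracy(preds, labels):
    """Average of per-class top-1 accuracy over the classes present."""
    per_class = {}
    for p, y in zip(preds, labels):
        n, c = per_class.get(y, (0, 0))
        per_class[y] = (n + 1, c + (p[0] == y))
    accs = [c / n for n, c in per_class.values()]
    return sum(accs) / len(accs)

preds = [["flu", "uti"], ["uti", "flu"], ["flu", "gerd"]]
labels = ["flu", "flu", "gerd"]
print(recall_at_k(preds, labels, 2))       # 1.0
print(mean_class_accuracy(preds, labels))  # 0.25
```

Note how mca penalizes the rare "gerd" class as heavily as the frequent "flu" class, which is the point of averaging per class rather than per example.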
\subsection*{Results}
\noindent\textbf{Learning diagnosis model from EHR}
The goal of this initial set of experiments is to establish the applicability of the diagnosis models learned from electronic health records to patient-facing situations.
Table~\ref{tbl:semigran_0} shows the accuracy of the models trained using $\mathcal{D}_0$ on the evaluation datasets. The results show that we can learn a model from EHR that generalizes to new datasets. Note that in this experiment, both the training set and the evaluation set have the same diseases as labels. Also, all three machine-learned models have similar performance, and this trend continues for larger numbers of diseases. In our setting, linear models are as effective as neural net models, which makes sense given that the input space is high dimensional and very sparse.
In the table we also report results from Fraser {\it et al.}\cite{Fraser18}, where twenty medical experts studied each case in its entirety (some cases include more information, such as labs, that is not available in patient-facing applications) and came to a consensus; even then, the top-3 accuracy is not at 100\%, showcasing the difficulty of agreeing on diagnoses even among human experts.
\begin{table}[h]
\centering
\begin{tabular}{c|c|c|c|c|c|c}
\toprule
\textbf{Dataset} & \textbf{Approach} & \textbf{top-1} & \textbf{top-3} & \textbf{top-5} & \textbf{top-10} & \textbf{top-20}\\
\hline
\multirow{1}{*} \textbf{EHR test} & LR & 54.19\% (.32)& 79.03\% (.25) & 88.16\%(.18) & 95.30\%(.10) & 98.9\%(.06) \\
& mlp & 56.11\% (.55)& 82.11\% (.38) & 90.79\% (.22) & 96.99\% (.11) & 99.38\% (.02) \\
& mlp-Embedding & 53.16\% (.94)& 79.03\% (.57) & 88.52\% (.42) & 96.00\% (.27) & 99.19\% (.06) \\
\hline
\multirow{1}{*}
\textbf{Semigran} & experts\cite{Fraser18} & 72.1\% & 84.3 \% & - & - & -\\
& LR & 50.67\% (1.86) & 75.11\% (1.86) & 82.22\% (1.56) & 88.45\% (.99) & 91.55\% (1.86) \\
& mlp &48.44\% (3.65) & 69.78\% (2.53) & 77.33\% (2.89) & 84.89\% (2.89) & 90.22\% (1.25) \\
& mlp-Embedding & 48.89\% (3.51) & 69.33\% (1.85) & 76.45\% (1.99) & 87.11\% (3.97) & 92.0\% (3.71) \\
\bottomrule
\end{tabular}
\caption{Model trained on $\mathcal{D}_0$: top-K accuracy on the evaluation sets. Standard deviation across 5 random initializations is given in parentheses.}
\label{tbl:semigran_0}
\end{table}
\begin{table}
\centering
\begin{tabular}{c|c|c}
\toprule
\textbf{Disorder} & \textbf{Top positively weighted symptoms} & \textbf{Top negatively weighted symptoms}\\
\hline
\multirow{1}{*} Lumbar strain & low back pain, back pain, & NOT low back pain, NOT numbness, \\
& lifting, NSAID use & antibiotics, NOT back pain \\
\hline
\multirow{1}{*} Meningitis & headache, neck stiffness, & NOT nystagmus, NOT neck stiffness, \\
& shunt, paralysis & alcohol intoxication, never smoker \\
\hline
\multirow{1}{*} Peptic ulcer disease & abdominal pain, epigastric pain, & rash, acute, \\
& black stool, anxiety & allergy, chills\\
\hline
\multirow{1}{*} URI & cough,throat pain, & NOT fever, NOT exudate, \\
& congestion, nasal congestion & abdominal pain, recent\\
\hline
\multirow{1}{*} Acute pharyngitis & throat pain, exudate, & NOT exudate, NOT cough,\\
& swollen glands, fever & NOT fever, NOT nasal congestion \\
\hline
\multirow{1}{*} UTI& dysuria, urinary frequency, & NOT blood in urine, NOT flank pain\\
& sexual intercourse, antibiotics & NOT flank tenderness, NOT back pain \\
\bottomrule
\end{tabular}
\caption{Symptoms with largest learned weights from the logistic regression model trained on $\mathcal{D}_0$}
\label{tab:model_wts}
\end{table}
We also call out the results from the study of Semigran {\it et al.}~\cite{Semigran}, which released this evaluation set: the average performance of the online symptom checkers is at 50\% in top-20 for \textbf{Semigran}. In a recent study \cite{razzaki18}, results were provided for only 30 clinical cases. When extrapolated, assuming the remaining 15 cases were wrongly diagnosed, their top-1 accuracy is at 46.6\% and top-3 at 64.67\%. None of these papers discuss the actual number of diseases and findings that are available to the model used for diagnosis, which makes it difficult for us to make a direct comparison.
It is also interesting to note that there is no significant improvement in performance as we increase the complexity of the model. We also refer readers to the supplementary section of Rajkomar {\it et al.}~\cite{Rajkomar18}, where a similar observation was made: a simple linear model gave most of the accuracy, and ensembling with non-linear deep nets provided only small additional gains. Since our goal is to understand the performance trade-off, we did not focus on ensembling strategies. In Table~\ref{tab:model_wts}, we present the four symptoms with the largest positive and negative weights learned by the logistic regression model. The model has learned to hone in on symptoms that are predictive of the underlying disease.
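The weight inspection behind Table~\ref{tab:model_wts} amounts to sorting one class's weight vector from the linear model. The helper below is generic; the finding names and weights are invented for illustration, not the learned values.

```python
def top_symptoms(weights, names, n=4):
    """Given one class's weight vector from a linear model and the
    corresponding finding names, return the n most positively and the
    n most negatively weighted findings."""
    order = sorted(range(len(weights)), key=lambda i: weights[i])
    pos = [names[i] for i in order[::-1][:n]]  # largest weights first
    neg = [names[i] for i in order[:n]]        # most negative first
    return pos, neg

# Invented weights for a hypothetical UTI class.
names = ["dysuria", "urinary frequency", "blood in urine", "flank pain"]
weights = [2.1, 1.7, -0.9, -1.4]
print(top_symptoms(weights, names, n=2))
# (['dysuria', 'urinary frequency'], ['flank pain', 'blood in urine'])
```

For a scikit-learn-style logistic regression, `weights` would be one row of the fitted coefficient matrix, one row per disease class.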
\noindent\textbf{Diagnosis accuracy vs. disease coverage}
Figure~\ref{fig:comparison} compares the performance for models trained on different training data splits from $\mathcal{D}_0 \cdots \mathcal{D}_{+100}$. Top-k accuracy decreases for all $k$ and for both test sets as the number of additional diseases is increased in the training set. This decrease in accuracy is largest for top-1 accuracy, which drops from 49.6\% to 33\% on average, and smallest for top-20 accuracy, which drops from 90.8\% to 83.6\% on average.
\begin{figure}[H]
\centering
\includegraphics[scale=0.22]{figs/results/performance_line_graph_EHR.png}
\caption{Comparing performance of different methods on training diagnosis models with labels restricted to diseases in \textbf{Semigran}. Error bars are computed over 5 different random samplings of the additional disorders being added, while keeping the random seed fixed. We found similar trends with the neural network model that uses embeddings, and omit those results due to space constraints.}
\label{fig:comparison}
\end{figure}
We can characterize the slope of the above lines according to $A_k = \beta_\mathcal{D} |\mathcal{D}_k| + \beta_{M}$, where $A_k$ is the top-$k$ accuracy, $|\mathcal{D}_k|$ is the number of additional diseases added to the disease space, $\beta_\mathcal{D}$ is a coefficient for the size of the disease space, and $\beta_M$ is a coefficient for the model architecture. We can see from Table~\ref{tbl:regression_coefficients} that all top-$k$ accuracies drop with an increasing number of diseases, as shown by the negative values of $\beta_\mathcal{D}$. This drop is most significant for top-1 accuracy, where every additional disease results in a drop of $0.156$ percentage points in accuracy, as represented by $\beta_\mathcal{D}$.
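The regression characterization above is ordinary least squares of accuracy against the number of added diseases. The sketch below uses synthetic accuracy values chosen to lie exactly on a line with the reported top-1 slope of $-0.156$; they are not the paper's measurements.

```python
def fit_line(xs, ys):
    """Ordinary least squares for A_k = beta_D * |D| + beta_M."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    beta_d = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
              / sum((x - mx) ** 2 for x in xs))
    beta_m = my - beta_d * mx
    return beta_d, beta_m

sizes = [0, 25, 50, 75, 100]            # additional diseases |D_k|
top1 = [49.6, 45.7, 41.8, 37.9, 34.0]   # top-1 accuracy (%), synthetic
beta_d, beta_m = fit_line(sizes, top1)
print(round(beta_d, 3), round(beta_m, 1))  # -0.156 49.6
```

In practice one would fit this per value of $k$ and report the standard error and p-value of $\beta_\mathcal{D}$, as in Table~\ref{tbl:regression_coefficients}.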
\begin{table}[ht]
\centering
\begin{tabular}{l|r|r|r|r}
\toprule
& $\beta_\mathcal{D}$ & \textbf{Std. Err.} & \textbf{t-value} & \textbf{p-value}\\\hline
Top-1 & -0.156 & 0.010 & -15.52 & $\leq$ 2e-16 \\
Top-3 & -0.104 & 0.0078 & -13.47 & $\leq$ 2e-16 \\
Top-5 & -0.102 & 0.0071 & -14.41 & $\leq$ 2e-16\\
Top-10 & -0.092 & 0.0074 & -12.45 & $\leq$ 2e-16 \\
Top-20 & -0.070 & 0.0056 & -12.40 & $\leq$ 2e-16 \\
\bottomrule
\end{tabular}
\caption{Regression coefficients demonstrating the change in top-k accuracy for each additional disease added to the disease space. We find decreasing accuracy, represented by a negative $\beta_\mathcal{D}$, across all models }
\label{tbl:regression_coefficients}
\end{table}
These results have an important consequence. The evaluation set is fixed and corresponds to a small set of diseases that are of interest. The models are trained with increasing coverage of diseases (using $\mathcal{D}_0$ to $\mathcal{D}_{+100}$), to reflect the need to diagnose more diseases when deployed. However, from Figure~\ref{fig:comparison}, we can see that the performance of the models on the fixed evaluation set drops as we increase the coverage of diseases, at a rate of 1\% for every 10 diseases for top-3 accuracy.
\section{Introduction \label{s:intro}}
Subluminous B stars (sdBs) are core helium-burning stars with very thin hydrogen envelopes and masses around $0.5M_{\rm \odot}$
(Heber \cite{heber86}, see Heber \cite{heber09} for a review). A large fraction of the sdB stars ($40\,\%$ to $80\,\%$) are members of short period binaries (Maxted et al. \cite{maxted01}; Napiwotzki et al. \cite{napiwotzki04a}). Several studies were undertaken to determine the orbital parameters of sub\-dwarf binaries, and found periods ranging from $0.07$ to more than $10\,{\rm d}$ with a peak at $0.5$ to $1.0\,{\rm d}$ (e.g. Edelmann et al. \cite{edelmann05}; Morales-Rueda et al. \cite{morales03}). For close binary sdBs, common envelope (CE) ejection is the most probable formation channel. In this scenario two main sequence stars of different masses evolve in a binary system. The heavier one will reach the red giant phase first and fill its Roche lobe. If the mass transfer to the com\-panion is dynamically unstable, a common envelope is formed. Due to friction the two stellar cores lose orbital energy, which is deposited within the envelope and leads to a shortening of the binary period. Eventually the common envelope is ejected and a close binary system is formed, which contains a core helium-burning sdB and a main sequence companion. If the companion has already evolved to a white dwarf (WD) when the red giant fills its Roche lobe, a close sdB+WD binary is formed (Han et al. \cite{han02,han03}). Under certain conditions, two consecutive CE phases are possible as well.
\begin{table}[t!]
\caption{Solved binary systems.}
\label{tab:solved}
\begin{center}
\begin{tabular}{lll}
SDSS name & short name & other names \\
\hline
\\[-3mm]
SDSS\,J002323.99$-$002953.2 & J0023$-$0029 & PB\,5916 \\
SDSS\,J113840.68$-$003531.7 & J1138$-$0035 & PG\,1136-003 \\
SDSS\,J150513.52+110836.6 & J1505+1108 & PG\,1502+113 \\
SDSS\,J165404.25+303701.7 & J1654+3037 & PG\,1652+307 \\
SDSS\,J172624.09+274419.3 & J1726+2744 & PG\,1724+278 \\
SDSS\,J204613.40$-$045418.7 & J2046$-$0454 & $-$ \\
SDSS\,J225638.34+065651.0 & J2256+0656 & PG\,2254+067 \\
\hline \\[-3mm]
\end{tabular}
\end{center}
\end{table}
In general it is difficult to put constraints on the nature of the close companions to sdB stars. Since most of the binaries are single-lined, only lower limits have been derived from the binary mass functions, which are in general compatible with main sequence stars of spectral type M or compact objects like white dwarfs. Only in special and hence rare cases can tighter constraints be put on the nature of the companions.
Subdwarf binaries with massive WD companions turned out to be candidates for supernova type Ia (SN Ia) progenitors because these systems lose angular momentum due to the emission of gravitational waves and start mass transfer. This mass transfer, either from accretion of He onto the WD during the sdB phase (e.g. Yoon \& Langer \cite{yoon04} and references therein), or the subsequent merger of the system after the sdB star itself has turned into a WD (Tutukov \& Yungelson \cite{tutukov81}; Webbink \cite{webbink84}) may cause the companion to approach the Chandrasekhar limit and explode as SN Ia.
SN~Ia play a key role in the study of cosmic evolution (e.g. Riess et al. \cite{riess98}; Leibundgut \cite{leibundgut01}; Perlmutter et al. \cite{perlmutter99}). One of the best known candidate systems for the double degenerate merger scenario is the sdB+WD binary KPD\,1930$+$2752 (Maxted et al. \cite{maxted00}; Geier et al. \cite{geier07}). Mereghetti et al. (\cite{mereghetti09}) showed that in the X-ray binary HD\,49798 a massive ($>1.2\,M_{\rm \odot}$) white dwarf accretes matter from a closely orbiting subdwarf O companion. The predicted amount of accreted material is sufficient for the WD to reach the Chandrasekhar limit. This makes HD\,49798 another candidate for SN\,Ia progenitor. Furthermore, Perets et al. (\cite{perets10}) showed that helium accretion onto a white dwarf may be responsible for a subclass of faint and calcium-rich SN Ib events.
Geier et al. (\cite{geier08}, \cite{geier10a}, \cite{geier10b}) analysed high resolution spectra of sdB stars in close binaries. Assuming synchronised rotation they constrained the masses and the nature of the unseen companions in 31 cases. While most of the derived companion masses were consistent with either late type main sequence stars or white dwarfs, the compact companions of some sdBs may be either massive white dwarfs, neutron stars (NS) or stellar mass black holes (BH). However, Geier et al. (\cite{geier10b}) also showed that the assumption of orbital synchronisation in close sdB binaries is not always justified and that their analysis suffers from huge selection effects.
The existence of sdB+NS/BH systems is predicted by binary evolution theory (Podsiadlowski et al. \cite{podsi02}; Pfahl et al. \cite{pfahl03}). The formation channel includes two phases of unstable mass transfer and one supernova explosion. The fraction of sdB+NS/BH systems is predicted to be about $2\%$ of the close sdB binaries (Geier et al. \cite{geier10b}). Yungelson \& Tutukov (\cite{yungelson05}) and Nelemans (\cite{nelemans10}) performed independent binary evolution calculations and confirm that sdB+NS/BH systems should exist. According to the results of Nelemans (\cite{nelemans10}) about $1\%$ of the subdwarfs in close binaries should have a neutron star companion, whereas only $0.01\%$ should be orbited by a black hole. Yungelson \& Tutukov (\cite{yungelson05}) predict the sdB+NS fraction to be of the order of $0.8\%$.
Since sdB stars eventually evolve to WDs there should also exist a population of white dwarfs with massive compact companions. Badenes et al. (\cite{badenes09}) reported the discovery of a close binary consisting of a massive white dwarf and an unseen neutron star or black hole companion, but Marsh et al. (\cite{marsh10}) most recently showed that the system is double-lined and consists of a massive white dwarf orbited by a low mass white dwarf. The system mass is below the Chandrasekhar limit. Their results were confirmed by Kulkarni \& van Kerkwijk (\cite{kulkarni10}). Common envelope ejection was proposed as the most likely formation channel for the binary PSR\,J1802$-$2124, which consists of a millisecond pulsar and a CO white dwarf in close orbit ($P=0.7\,{\rm d}$, Ferdman et al. \cite{ferdman10}). This peculiar system may have evolved through an earlier sdB+NS phase.
\section{The MUCHFUSS project}\label{s:much}
The discovery of sdB binary candidates with massive compact companions provides a first hint that a whole population of non-interacting binaries with such companions may be present in our Galaxy. The known candidate sdB+NS/BH binaries have low orbital inclinations ($15-30^{\rm \circ}$, Geier et al. \cite{geier10b}). High inclination systems must exist as well and should be more numerous. In this case a determination of the orbital parameters is sufficient to put a lower limit on the companion mass by calculating the binary mass function. If this lower limit exceeds the Chandrasekhar mass and no sign of a companion is visible in the spectra, the existence of a massive compact companion is proven without the need for any additional assumptions.
The project Massive Unseen Companions to Hot Faint Underluminous Stars from SDSS\footnote{Sloan Digital Sky Survey} (MUCHFUSS) aims at finding sdBs with compact companions like massive white dwarfs ($M>1.0\,{\rm M_{\odot}}$), neutron stars or black holes. About $80$ binaries have been selected for follow-up. Survey and target selection are described in detail in Geier et al. (\cite{geier10c}). The same selection criteria that we applied to find such binaries are also well suited to single out hot subdwarf stars with constant high radial velocities (RV) in the Galactic halo and search for hypervelocity stars. First results of this second part of the project (Hyper-MUCHFUSS) are presented in Tillich et al. (\cite{tillich10}).
Here we present the spectroscopic analysis of the first sdB binaries discovered in the course of the MUCHFUSS project (see Table~\ref{tab:solved}). In Sect.~\ref{s:data} the observations and the data reduction are described. Sects.~\ref{s:orbit} and \ref{s:atmo} deal with the determination of the orbital and atmospheric parameters of the sdB stars. Sect.~\ref{s:comp} explains the way the minimum masses of the unseen companions are constrained, while results are presented in Sect.~\ref{s:results}. The efficiency of our target selection is discussed in Sect.~\ref{s:efficient}; a short summary and an outlook are finally given in Sect.~\ref{s:summary}.
\begin{figure*}[t!]
\begin{center}
\resizebox{17cm}{!}{\includegraphics{15794fg1.eps}}
\end{center}
\caption{Medium resolution spectra of the programme stars taken with different instruments. Multiple observations of the same target
have been shifted to rest wavelength and coadded.}
\label{specexample}
\end{figure*}
\begin{figure*}[t!]
\begin{center}
\resizebox{8.5cm}{!}{\includegraphics{15794fg2_1.eps}}
\resizebox{8.5cm}{!}{\includegraphics{15794fg2_2.eps}}
\resizebox{8.5cm}{!}{\includegraphics{15794fg2_3.eps}}
\resizebox{8.5cm}{!}{\includegraphics{15794fg2_4.eps}}
\end{center}
\caption{Radial velocity plotted against orbital phase. The RV data were phase folded with the most likely orbital periods. The residuals are plotted below. The RVs were measured from spectra obtained with SDSS (rectangles), CAHA3.5m/TWIN (upward triangles), WHT/ISIS (diamonds), INT/IDS (downward triangles), ESO-VLT/FORS1 (triangles turned to the left), Gemini/GMOS (triangles turned to the right), ESO-NTT/EFOSC2 (circles), SOAR/Goodman (hexagons) and SAAO-1.9m/Grating (stars).}
\label{rv1}
\end{figure*}
\begin{figure*}[t!]
\begin{center}
\resizebox{8.5cm}{!}{\includegraphics{15794fg3_1.eps}}
\resizebox{8.5cm}{!}{\includegraphics{15794fg3_2.eps}}
\resizebox{8.5cm}{!}{\includegraphics{15794fg3_3.eps}}
\end{center}
\caption{Radial velocity curves (see Fig~\ref{rv1}).}
\label{rv2}
\end{figure*}
\begin{table}[t!]
\caption{Follow-up observations 2009/2010.
The first column lists the date of observation, the second
the telescope and instrumentation used, and the third the observers.}
\label{follow-up-runs}
\begin{center}
\begin{tabular}{llll} \hline
\noalign{\smallskip}
Date & Telescope\,\&\,Instrument & Observer\\ \hline
\noalign{\smallskip}
2009/06/05--2009/06/09 & ING-INT/IDS & R. \O., R. O., \\
& & T. O. \\
2009/07/22--2009/07/26 & CAHA-3.5m/TWIN & T. K. \\
2009/08/24--2009/08/27 & ING-WHT/ISIS & S. G. \\
2009/11/08--2009/11/12 & ESO-NTT/EFOSC2 & T. K. \\
April/August 2009 & Gemini-North/GMOS & Service \\
2010/02/12--2010/02/15 & SOAR/Goodman & B. B. \\ \hline
\end{tabular}
\end{center}
\end{table}
\section{Multi-site observations and data reduction}\label{s:data}
Follow-up medium resolution spectra were taken during de\-dicated follow-up runs (see Table~\ref{follow-up-runs}) with the EFOSC2 spectrograph ($R\simeq2200,\lambda=4450-5110\,{\rm \AA}$) mounted at the ESO\,NTT, the ISIS spectrograph ($R\simeq4000,\lambda=3440-5270\,{\rm \AA}$) mounted at the WHT, the TWIN spectrograph mounted at the CAHA-3.5m telescope ($R\simeq4000, \lambda=3460-5630\,{\rm \AA}$), the Goodman spectrograph mounted at the SOAR telescope ($R\simeq2500, \lambda=3500-6160\,{\rm \AA}$), the GMOS spectrograph ($R\simeq1200,\lambda=3770-4240\,{\rm \AA}$) mounted at the Gemini North telescope and the IDS spectrograph mounted at the Isaac Newton Telescope ($R\simeq1400,\lambda=3000-6800\,{\rm \AA}$). Information about the data taken in the course of our survey is provided in Geier et al. (\cite{geier10c}). Additional data could be gathered when our targets were observed with the IDS spectrograph (March 2007, observer: T. M., C. C.; $R\simeq4000, \lambda=3930-5100\,{\rm \AA}$) and the grating spectrograph (March 2003, April 2004, observer: T. M.; $R\simeq4600, \lambda=4170-5030\,{\rm \AA}$) mounted at the 1.9m Radcliffe Telescope. Example spectra are shown in Fig.~\ref{specexample}.
\begin{table*}[t!]
\caption{Derived orbital parameters.}
\label{tab:orbits}
\begin{center}
\begin{tabular}{lllrr}
Object & $T_{0}$ & P & $\gamma$ & K\\
& [$-$2\,450\,000] & [d] & [${\rm km\,s^{-1}}$] & [${\rm km\,s^{-1}}$]\\
\hline
\\[-3mm]
J0023$-$0029 & $5069.850\pm0.008$ & $1.4876\pm0.0001$ & $16.4\pm2.1$ & $81.8\pm2.9$ \\
J1138$-$0035 & $4991.388\pm0.001$ & $0.207536\pm0.000002$ & $23.3\pm3.7$ & $162.0\pm3.8$ \\
J1505+1108 & $4938.867\pm0.002$ & $0.74773\pm0.00005$ & $-77.1\pm1.2$ & $97.2\pm1.8$ \\
J1654+3037 & $4991.5322\pm0.0008$ & $0.25357\pm0.00001$ & $40.5\pm2.2$ & $126.1\pm2.6$ \\
J1726+2744 & $4981.667\pm0.005$ & $0.50198\pm0.00005$ & $-36.7\pm4.8$ & $118.9\pm3.7$ \\
J2046$-$0454 & $4693.352\pm0.002$ & $0.24311\pm0.00001$ & $87.6\pm5.7$ & $134.3\pm7.8$ \\
J2256+0656 & $5070.662\pm0.002$ & $0.7004\pm0.0001$ & $-7.3\pm2.1$ & $105.3\pm3.4$ \\
\hline \\[-3mm]
\end{tabular}
\end{center}
\end{table*}
In order to obtain a good wavelength calibration, arc lamp exposures have been taken before or after the single exposures. In addition, bright single sdB stars were observed as RV standards in most of the runs. In some cases the RVs of certain instruments (TWIN, GMOS) had to be corrected by a constant offset of up to $\simeq50\,{\rm km\,s^{-1}}$, which was derived from the RV measurements of the standard stars. The slit width was always chosen to be smaller than the size of the seeing discs to minimize systematic errors due to movement of the objects within the slit. Reduction was done either with the \texttt{MIDAS}, \texttt{IRAF} or \texttt{PAMELA}\footnote{http://www2.warwick.ac.uk/fac/sci/physics/research/astro/people\\/marsh/software} and \texttt{MOLLY}$^{2}$ packages.
\begin{table*}[t!]
\caption{Significance of the circular orbital solutions. The best solutions for the orbital periods are given together with their minimum $\chi^{2}$ and reduced $\chi^{2}$ values as well as the number $n$ of RVs. The second best aliases (further than $1\%$ away from the best solution) and the $\Delta \chi^{2}$-values with respect to the best solutions are given as well. The systematic error adopted to normalise the reduced $\chi^{2}$ ($e_{\rm norm}$) is given for each case. The probabilities for the orbital period to deviate from our best solution by more than $1\%$ ($p_{\rm false}[1\%]$) or $10\%$ ($p_{\rm false}[10\%]$) are given in the last columns.}
\label{tab:sig}
\begin{center}
\begin{tabular}{llllllllll}
Object & best solution & $\chi^{2}$ & $\chi^{2}_{\rm reduced}$ & 2nd best alias & $\Delta\,\chi^{2}$ & $n$ & $e_{\rm norm}$ & $\log{p_{\rm false}}[1\%]$ & $\log{p_{\rm false}}[10\%]$ \\
& [d] & & & [d] & & & [${\rm km\,s^{-1}}$] & & \\
\hline
\\[-3mm]
J0023$-$0029 & $1.4876$ & $157$ & $3.74$ & $0.5976$ & $130$ & $47$ & $8.0$ & $-3.0$ & $-3.4$ \\
J1138$-$0035 & $0.207536$ & $213$ & $5.33$ & $0.260192$ & $426$ & $45$ & $16.0$ & $-3.5$ & $-3.5$ \\
J1505+1108 & $0.74773$ & $155$ & $4.30$ & $0.75709$ & $679$ & $41$ & $7.0$ & $<-4.0$ & $<-4.0$ \\
J1654+3037 & $0.25357$ & $18$ & $0.54$ & $0.20397$ & $64$ & $38$ & $-$ & $<-4.0$ & \\
J1726+2744 & $0.50198$ & $82$ & $2.48$ & $1.00998$ & $77$ & $38$ & $12.0$ & $-1.2$ & $-1.9$ \\
J2046$-$0454 & $0.24311$ & $52$ & $3.05$ & $0.31971$ & $39$ & $22$ & $17.0$ & $-1.1$ & $-1.1$ \\
J2256+0656 & $0.7004$ & $276$ & $6.13$ & $2.1903$ & $976$ & $50$ & $13.0$ & $<-4.0$ & $<-4.0$ \\
\hline \\[-3mm]
\end{tabular}
\end{center}
\end{table*}
\section{Orbital parameters \label{s:orbit}}
The radial velocities were measured by fitting a set of mathematical functions (Gaussians, Lorentzians and polynomials) to the hydrogen Balmer lines as well as helium lines, if present, using the FITSB2 routine (Napiwotzki et al. \cite{napiwotzki04b}). The RVs of the GMOS spectra have been measured by fitting three Gaussians to the $H_{\rm \gamma}$ line. Three functions are used to match the continuum, the line and the line core, respectively, and mimic the typical Voigt profile of spectral lines. The profiles are fitted to all suitable lines simultaneously using $\chi^{2}$-minimization and the RV shift with respect to the rest wavelengths is measured. The RVs and formal $1\sigma$-errors are given in Appendix~\ref{app:RV}. Assuming circular orbits, sine curves were fitted to the RV data points in fine steps over a range of test periods. For each period the $\chi^{2}$ of the best fitting sine curve was determined. The result is similar to a power spectrum with the lowest $\chi^{2}$ indicating the most likely period (see Fig.~\ref{chi}).
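This period scan can be sketched in a few lines (an illustrative reimplementation with hypothetical array names, not the FITSB2 code). At fixed period the circular-orbit model $v(t)=\gamma+A\sin\omega t+B\cos\omega t$ is linear in its parameters, so each trial period reduces to a weighted linear least-squares fit:

```python
import numpy as np

def chi2_period_scan(t, rv, rv_err, periods):
    """Chi^2 of the best-fitting circular orbit (sine curve) for each
    trial period; the minimum marks the most likely period."""
    chi2 = np.empty(len(periods))
    for i, P in enumerate(periods):
        w = 2.0 * np.pi / P
        # at fixed P the model v = gamma + A sin(wt) + B cos(wt)
        # is linear in (gamma, A, B) -> weighted linear least squares
        X = np.column_stack([np.ones_like(t), np.sin(w * t), np.cos(w * t)])
        coef, *_ = np.linalg.lstsq(X / rv_err[:, None], rv / rv_err, rcond=None)
        chi2[i] = np.sum(((rv - X @ coef) / rv_err) ** 2)
    return chi2
```

The semi-amplitude then follows from the best-fit coefficients as $K=\sqrt{A^{2}+B^{2}}$.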
In order to estimate the significance of the orbital solutions and the contributions of systematic effects to the error budget, we normalised the $\chi^{2}$ of the most probable solution by adding systematic errors in quadrature until the reduced $\chi^{2}$ reached $\simeq1.0$. Using these modified uncertainties we performed Monte Carlo simulations for the most likely periods. For each simulation a randomised set of RVs was drawn from Gaussian distributions with central value and width corresponding to the RV measurements and the analysis repeated. From these simulations the probabilities for the orbital periods to deviate from our best solution by more than $1\%$ or $10\%$ were calculated.
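This Monte Carlo test can be sketched as follows (a stand-alone illustration with hypothetical array names; the sine-curve scan is simply repeated for each randomised data set):

```python
import numpy as np

def _best_period(t, rv, rv_err, periods):
    """Trial period minimising the chi^2 of a weighted sine-curve fit."""
    best_P, best_chi2 = periods[0], np.inf
    for P in periods:
        w = 2.0 * np.pi / P
        X = np.column_stack([np.ones_like(t), np.sin(w * t), np.cos(w * t)])
        coef, *_ = np.linalg.lstsq(X / rv_err[:, None], rv / rv_err, rcond=None)
        chi2 = np.sum(((rv - X @ coef) / rv_err) ** 2)
        if chi2 < best_chi2:
            best_P, best_chi2 = P, chi2
    return best_P

def false_alarm_probability(t, rv, rv_err, periods, P_best,
                            frac=0.01, n_sim=100, seed=1):
    """Fraction of simulated data sets (each RV redrawn from a Gaussian
    centred on the measurement with its adopted error) whose best period
    deviates from P_best by more than `frac` (e.g. 1% or 10%)."""
    rng = np.random.default_rng(seed)
    n_bad = 0
    for _ in range(n_sim):
        rv_sim = rng.normal(rv, rv_err)
        if abs(_best_period(t, rv_sim, rv_err, periods) - P_best) > frac * P_best:
            n_bad += 1
    return n_bad / n_sim
```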
In order to derive most conservative errors for the RV semi-amplitude $K$ and the system velocity $\gamma$ we fixed the most likely period and created new RV datasets with a bootstrapping algorithm. Ten thousand RV datasets were obtained by random sampling with replacement from the original dataset. In each case an orbital solution was calculated in the way described above. The standard deviation of these results was adopted as error estimate. The RV curves are given in Figs.~\ref{rv1} and \ref{rv2}. The residuals of the RV curves after subtracting the best orbital solution are of the same order in all cases (see Figs.~\ref{rv1}, \ref{rv2}). The accuracy is limited by the resolution of the spectra and their signal-to-noise. Combining data obtained with different instruments is also expected to contribute to the systematic error. Nevertheless, we found that all orbital solutions given here are significant (see Table~\ref{tab:sig}).
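A stand-alone sketch of this bootstrap, assuming the most likely period $P$ has already been fixed (variable names are illustrative, not the actual analysis code):

```python
import numpy as np

def fit_gamma_K(t, rv, rv_err, P):
    """Weighted least-squares circular-orbit fit at fixed period P.
    Returns the system velocity gamma and the semi-amplitude K."""
    w = 2.0 * np.pi / P
    X = np.column_stack([np.ones_like(t), np.sin(w * t), np.cos(w * t)])
    gamma, A, B = np.linalg.lstsq(X / rv_err[:, None], rv / rv_err, rcond=None)[0]
    return gamma, np.hypot(A, B)

def bootstrap_errors(t, rv, rv_err, P, n_boot=10000, seed=2):
    """Random sampling with replacement from the original RV data set;
    the standard deviation of the refitted parameters is adopted as
    the error estimate."""
    rng = np.random.default_rng(seed)
    gammas = np.empty(n_boot)
    Ks = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, t.size, t.size)   # resample epochs
        gammas[i], Ks[i] = fit_gamma_K(t[idx], rv[idx], rv_err[idx], P)
    return gammas.std(), Ks.std()
```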
Edelmann et al. (\cite{edelmann05}) reported the discovery of small eccentricities ($e<0.06$) in the orbital solutions of five close hot subdwarf binaries. All of these binaries are expected to have formed via common envelope ejection. Although the CE phase is very short, it should nevertheless be very efficient in circularising the binary orbits. That is why the discovery of Edelmann et al. (\cite{edelmann05}) came as a surprise. Napiwotzki et al. (in prep.) found more such systems with even shorter periods.
In order to investigate whether the orbital solutions of our programme binaries can be improved by allowing for eccentricity, we fitted eccentric orbits to our radial velocity data and performed statistical tests (F-test, see Pringle \cite{pringle75}, and the Bayesian information criterion BIC) to check whether eccentric solutions are significant or not. In all cases the circular solutions were preferred. However, the derived upper limits for the orbital eccentricities range from $0.15$ to $0.3$, which means that low eccentricities such as the ones reported by Edelmann et al. (\cite{edelmann05}) cannot be firmly excluded.
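Schematically, with invented $\chi^{2}$ values (not our measurements), the two criteria read as follows; an eccentric orbit adds two free parameters ($e$ and $\omega$) to the circular fit:

```python
import math

def f_statistic(chi2_simple, chi2_complex, dof_simple, dof_complex):
    """F-test for nested chi^2 fits: does the more complex model
    (here: the eccentric orbit) reduce chi^2 significantly?"""
    dnu = dof_simple - dof_complex
    return ((chi2_simple - chi2_complex) / dnu) / (chi2_complex / dof_complex)

def bic(chi2, n_params, n_data):
    """Bayesian information criterion; the lower value wins."""
    return chi2 + n_params * math.log(n_data)

# schematic example: 40 RVs, circular fit (4 parameters: T0, P, gamma, K)
# versus eccentric fit (6 parameters: + e, omega)
n = 40
chi2_circ, chi2_ecc = 100.0, 98.0
F = f_statistic(chi2_circ, chi2_ecc, n - 4, n - 6)
print(F)                                              # small -> not significant
print(bic(chi2_circ, 4, n) < bic(chi2_ecc, 6, n))     # True: circular preferred
```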
\begin{figure*}[t!]
\begin{center}
\resizebox{6.0cm}{!}{\includegraphics{15794fg4_1.eps}}
\resizebox{6.0cm}{!}{\includegraphics{15794fg4_2.eps}}
\resizebox{6.0cm}{!}{\includegraphics{15794fg4_3.eps}}
\resizebox{6.0cm}{!}{\includegraphics{15794fg4_4.eps}}
\resizebox{6.0cm}{!}{\includegraphics{15794fg4_5.eps}}
\resizebox{6.0cm}{!}{\includegraphics{15794fg4_6.eps}}
\resizebox{6.0cm}{!}{\includegraphics{15794fg4_7.eps}}
\end{center}
\caption{$\chi^{2}$ plotted against orbital period. The lowest peak corresponds to the most likely solution.}
\label{chi}
\end{figure*}
\section{Atmospheric parameters \label{s:atmo}}
Atmospheric parameters have been determined by fitting model spectra to the hydrogen Balmer and helium lines in the way described in Geier et al. (\cite{geier07}). The single spectra have been corrected for their orbital motion and coadded. Depending on the effective temperature of the stars, LTE models with solar metallicity ($T_{\rm eff}<30\,000\,{\rm K}$) or ten times solar metallicity ($T_{\rm eff}>30\,000\,{\rm K}$) have been used. The enhanced metallicity models account for the radiative levitation of heavy elements in the diffusion dominated atmospheres (for a detailed discussion see O'Toole \& Heber \cite{otoole06}).
In order to investigate systematic effects introduced by the individual instruments, especially the different resolutions and wavelength coverages, the parameters have been derived separately from spectra taken with different instruments. As can be seen in Table~\ref{tab:atm} no constant systematic shifts are present. The weighted means have been calculated and adopted as final solutions. Typical systematic errors introduced by different model grids are of the order of $\pm0.05$ in $\log{g}$ and $500\,{\rm K}$ in $T_{\rm eff}$ (e.g. Lisker et al. \cite{lisker05}; Geier et al. \cite{geier07}). These uncertainties were added in quadrature to the statistical errors.
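The combination step reduces to an inverse-variance weighted mean, with the model-grid systematics quoted above added in quadrature afterwards; a minimal sketch with invented numbers:

```python
import math

def weighted_mean(values, errors):
    """Inverse-variance weighted mean and its formal 1-sigma error."""
    weights = [1.0 / e**2 for e in errors]
    mean = sum(v * w for v, w in zip(values, weights)) / sum(weights)
    return mean, math.sqrt(1.0 / sum(weights))

def total_error(stat_err, sys_err):
    """Statistical and systematic uncertainties added in quadrature."""
    return math.hypot(stat_err, sys_err)

# invented example: log g from two instruments, combined and then
# broadened by the ~0.05 dex model-grid systematic quoted in the text
logg, stat = weighted_mean([5.72, 5.78], [0.05, 0.10])
print(round(logg, 3), round(total_error(stat, 0.05), 3))
```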
Three of our programme stars have been classified as hot subdwarfs by Eisenstein et al. (\cite{eisenstein06}), but the authors pointed out that the atmospheric parameters of the sdO/Bs given in their catalogue are not accurate.
All stars of our sample are situated on or near the Extreme Horizontal Branch (EHB) and are most likely core-helium burning stars (see Fig.~\ref{tefflogg}). Since the orbital periods of these binaries are short, they can only have formed via common envelope ejection. Population synthesis models (Han et al. \cite{han02}, \cite{han03}) predict a mass range of $M_{\rm sdB}=0.37-0.48\,M_{\rm \odot}$ for sdBs in binaries formed in this way. The mass distribution shows a sharp peak at a mass of about $0.47\,{\rm M_{\odot}}$. This theoretical mass distribution is consistent with analyses of close binary systems (e.g. Geier et al. \cite{geier07}; For et al. \cite{for10}) as well as asteroseismic analyses of pulsating sdBs (see Charpinet et al. \cite{charpinet08} and references therein). If the progenitor star was massive enough on the main sequence to ignite core helium-burning under non-degenerate conditions, the sdB mass may be as low as $0.3\,{\rm M_{\odot}}$. A small fraction of the sdB population is predicted to be formed in that way (Han et al. \cite{han02}, \cite{han03}). Especially for sdB binaries with massive companions this formation scenario may become important.
\section{Constraining the nature of the unseen companions \label{s:comp}}
Since the programme stars are single-lined spectroscopic binaries, only their mass functions can be calculated.
\begin{equation}
\label{equation-mass-function}
f_{\rm m} = \frac{M_{\rm comp}^3 \sin^3i}{(M_{\rm comp} +
M_{\rm sdB})^2} = \frac{P K^3}{2 \pi G}
\end{equation}
Although the RV semi-amplitude $K$ and the period $P$ can be derived from the RV curve, the sdB mass $M_{\rm sdB}$, the companion mass $M_{\rm comp}$ and the inclination angle $i$ remain free parameters. Adopting $M_{\rm sdB}=0.47\,{\rm M_{\odot}}$ and $i<90^{\rm \circ}$ we derive a lower limit for the companion mass (see Table\,\ref{rvmasses}).
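Setting $i=90^{\rm \circ}$, the minimum companion mass follows from a numerical solution of the mass function, e.g. by bisection. The sketch below (SI units; not the analysis code actually used) reproduces the tabulated values for J1138$-$0035:

```python
import math

G = 6.674e-11            # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30         # solar mass [kg]
DAY = 86400.0            # [s]

def mass_function(P_day, K_kms):
    """f(M) = P K^3 / (2 pi G), returned in solar masses."""
    P, K = P_day * DAY, K_kms * 1.0e3
    return P * K**3 / (2.0 * math.pi * G) / M_SUN

def min_companion_mass(P_day, K_kms, M_sdB=0.47):
    """Solve f(M) = M2^3 / (M_sdB + M2)^2 for M2 at i = 90 deg;
    the left-hand side grows monotonically with M2 -> bisection."""
    fm = mass_function(P_day, K_kms)
    lo, hi = 1.0e-6, 100.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mid**3 / (M_sdB + mid)**2 < fm:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# J1138-0035: P = 0.207536 d, K = 162.0 km/s
print(round(mass_function(0.207536, 162.0), 3))       # f(M) ~ 0.091 Msun
print(round(min_companion_mass(0.207536, 162.0), 2))  # M2min ~ 0.42 Msun
```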
For mini\-mum companion masses lower than $0.45\,M_{\rm \odot}$ the companion may be a late type main sequence star or a compact object like a WD. Main sequence stars in this mass range are outshone by the sdBs and not visible in optical spectra (Lisker et al. \cite{lisker05}). That is the reason why the companions' nature still remains unknown for most of the $\simeq$80 known sdB systems with low minimum companion masses (see Fig.~\ref{periodK}). If on the other hand the minimum companion mass exceeds $0.45\,M_{\rm \odot}$, spectral features of a main sequence companion become visible in the optical. The non-detection of such features therefore allows us to exclude a main sequence star. The companion must then be a compact object. More massive compact companions like massive WDs, neutron stars or black holes are more likely as soon as the minimum mass exceeds $1.00\,M_{\rm \odot}$ or even the Chandrasekhar limit $1.40\,M_{\rm \odot}$.
Due to the fact that we selected targets with high RV shifts, the distribution of orbital inclinations in our target sample is no longer random. Our selection strategy strongly favours high inclination angles, and therefore the companion masses are likely to be close to their minimum values. The probability of detecting eclipses, reflection effects or variations caused by ellipsoidal deformation in the light curves of systems with short orbital periods should therefore be significantly higher than in an unbiased sample.
\begin{figure}[t!]
\begin{center}
\resizebox{8.5cm}{!}{\includegraphics{15794fg5.eps}}
\end{center}
\caption{$T_{\rm eff}-\log{g}$-diagram. The helium main sequence (HeMS) and the EHB band (limited by the zero-age EHB, ZAEHB, and the terminal-age EHB, TAEHB) are superimposed with EHB evolutionary tracks from Dorman et al. (\cite{dorman93}).}
\label{tefflogg}
\end{figure}
\begin{table}[t!]
\caption{Derived minimum masses and most probable nature of the companions.}
\label{rvmasses}
\begin{center}
\begin{tabular}{llll}
Object & $f(M)$ & $M_{\rm 2min}$ & Companion\\
& [$M_{\rm \odot}$] & [$M_{\rm \odot}$] & \\
\hline
\\[-3mm]
J0023$-$0029 & $0.084$ & $0.40$ & MS/WD \\
J1138$-$0035 & $0.091$ & $0.42$ & WD \\
J1505+1108 & $0.071$ & $0.37$ & MS/WD \\
J1654+3037 & $0.053$ & $0.32$ & MS/WD \\
J1726+2744 & $0.087$ & $0.41$ & MS/WD \\
J2046$-$0454 & $0.061$ & $0.34$ & MS/WD \\
J2256+0656 & $0.085$ & $0.40$ & MS/WD \\
\hline \\[-3mm]
\end{tabular}
\end{center}
\end{table}
\section{Results}\label{s:results}
The spectra of all stars in our sample have been checked for spectral features of their companions. Hot subdwarfs with faint main sequence companions usually show spectral lines of the Mg\,{\sc i} triplet at $\simeq5170\,{\rm \AA}$ (Lisker et al. \cite{lisker05}) and the Ca\,{\sc ii} triplet at $\simeq8650\,{\rm \AA}$. No such features are visible in the spectra of our programme stars (see e.g. Fig.~\ref{specexample}). Stark \& Wade (\cite{stark03}) ana\-lysed optical and IR photometry (2MASS) and found no indication of an IR-excess caused by a cool companion in the case of J1654+3037. According to the catalogue of Reed \& Stiening (\cite{reed04}), who performed a similar analysis, J1505+1108 shows signs of an IR-excess in the H and K-bands, but the large errors of these measurements and the missing spectral signatures of a cool companion in the SDSS spectra are strong indications that no visible companion is present.
J1654+3037 and J2046$-$0454 have very similar orbital parameters. The periods are short ($0.25\,{\rm d}$) and the minimum companion masses are constrained to $0.32\,{\rm M_{\odot}}$ and $0.34\,{\rm M_{\odot}}$. Whether the companions are M dwarfs or WDs is therefore not yet clear. In the former case a reflection effect should be easily detectable in the light curves. Photometric follow-up will allow us to clarify the nature of the companions.
The companion of the short period ($0.2\,{\rm d}$) system J1138$-$0035 is most likely a white dwarf. The minimum companion mass is constrained to $0.42\,{\rm M_{\odot}}$ and no sign of a companion is seen in the spectra. A light curve taken by the SuperWASP project (Pollacco et al. \cite{pollacco06}) shows no variation exceeding $\simeq1\%$ (see Fig.~\ref{lc}). Due to the short period of this system a reflection effect would be visible if the companion were a cool main sequence star. The absence of such a variation leads to the conclusion that the companion is most likely a white dwarf.
The orbital periods of J1726+2744 ($0.5\,{\rm d}$), J2256+0656 ($0.7\,{\rm d}$) and J1505+1108 ($0.74\,{\rm d}$) are longer. Their minimum companion masses are similar ($0.37-0.41\,{\rm M_{\odot}}$) and close to the border between main sequence stars and white dwarfs. The companions of J1726+2744 and J2256+0656 are most likely WDs. Koen (\cite{koen09}) and Shimanskii et al. (\cite{shimanskii08}) recently showed that reflection effects can still be detected in the light curves of sdB binaries with similar orbital periods. A reflection effect in J0023$-$0029 on the other hand is most likely not detectable, because the orbital period is too long ($1.5\,{\rm d}$).
\begin{figure}[t!]
\resizebox{8.5cm}{!}{\includegraphics{15794fg6.eps}}
\caption{SuperWASP light curve of J1138$-$0035 folded to the orbital phase. The 11213 data points taken between 2006/07/05 and 2009/07/02 are binned to 100 phase bins. Relative flux is plotted against the orbital phase.}
\label{lc}
\end{figure}
\section{Efficiency of target selection}\label{s:efficient}
The goal of the MUCHFUSS project is to find sdB binaries with massive compact companions and study this population of close binaries. We tried to optimise our target selection to achieve this goal. Fig.~\ref{periodK} illustrates the efficiency of our target selection. The RV semiamplitudes of all known sdB binaries with spectroscopic solutions (open symbols) are plotted against their orbital periods (Geier et al. \cite{geier10c}). Binaries which have initially been discovered in photometric surveys due to indicative features in their light curves (eclipses, reflection effects, ellipsoidal variations) are marked with open circles. Binaries discovered by RV variations from time resolved spectroscopy are marked with open diamonds. The dashed, dotted and solid lines mark the regions to the right where the minimum companion masses derived from the binary mass function (assuming $0.47\,{\rm M_{\odot}}$ for the sdBs) exceed $0.45\,{\rm M_{\odot}}$, $1.00\,{\rm M_{\odot}}$ and $1.40\,{\rm M_{\odot}}$.
Most of the known sdB binaries are situated beneath the $0.45\,{\rm M_{\odot}}$ line, which means that the companion type cannot be constrained from the mass function alone. Photometry is necessary to clarify the companions' nature in these cases. The most massive sdB binary known to date is KPD\,1930+2752 with a WD companion of $0.9\,{\rm M_{\odot}}$. This short period system has been discovered based on indicative features in its light curve (upper left corner in Fig.~\ref{periodK}; Bill\`{e}res et al. \cite{billeres00}).
The seven binaries from the MUCHFUSS project are marked with filled diamonds. It can be clearly seen that they belong to the sdB binary population with the largest minimum masses close to $0.45\,{\rm M_{\odot}}$. We therefore conclude that our target selection is efficient and singles out sdB binaries with massive companions.
\begin{figure}[t!]
\resizebox{\hsize}{!}{\includegraphics{15794fg7.eps}}
\caption{The RV semiamplitudes of all known sdB binaries with spectroscopic solutions plotted against their orbital periods (Geier et al. \cite{geier10c}). Binaries which have initially been discovered in photometric surveys due to indicative features in their light curves (eclipses, reflection effects, ellipsoidal variations) are marked with open circles. Binaries discovered by detection of RV variations from time resolved spectroscopy are marked with open diamonds. The dashed, dotted and solid lines mark the regions to the right where the minimum companion masses derived from the binary mass function (assuming $0.47\,{\rm M_{\odot}}$ for the sdBs) exceed $0.45\,{\rm M_{\odot}}$, $1.00\,{\rm M_{\odot}}$ and $1.40\,{\rm M_{\odot}}$. The seven binaries from the MUCHFUSS project are marked with filled diamonds.}
\label{periodK}
\end{figure}
\section{Summary and Outlook}\label{s:summary}
A multi-site follow-up campaign is being conducted with medium resolution spectrographs mounted at several different telescopes of mostly $2\,{\rm m}$ to $4\,{\rm m}$-class. First results were presented for seven close binary sdBs with short orbital periods ranging from $\simeq0.21\,{\rm d}$ to $1.5\,{\rm d}$ and most likely compact companions. The atmospheric parameters of all objects are compatible with core helium-burning stars on the EHB. Comparing our small sample with the known population of close sdB binaries we are able to show that our target selection method is efficient. All binaries solved up to now have high minimum companion masses compared to the rest of the sdB binary population.
Up to now we have found significant orbital solutions for about $10\%$ of our target sample. Photometric follow-up observations will allow us to clarify the nature of the companions in most cases. A database of more than $700$ spectra has been built up and some binaries will be solvable with only a few additional RV points.
\begin{acknowledgements}
A.T., S.G. and H.H. are supported by the Deutsche Forschungsgemeinschaft (DFG) through grants HE1356/45-1, HE1356/49-1, and HE1356/44-1,
respectively. R.\O. acknowledges funding from the European Research Council under the European Community's Seventh Framework Programme (FP7/2007--2013)/ERC grant agreement N$^{\underline{\mathrm o}}$\,227224 ({\sc prosperity}), as well as from the Research Council of K.U.Leuven grant agreement GOA/2008/04. Travel to the DSAZ (Calar Alto, Spain) was supported by DFG under grants HE1356/48-1 and HE1356/50-1. Travel to La Palma for the observing run at the WHT was funded by DFG through grant He 1356/53-1.
Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is http://www.sdss.org/.
The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington.
\end{acknowledgements}
\section{Introduction}
The reconstruction of a quantum state is in general important to test potential quantum systems for properties such as entanglement, superposition and coherence, and for their applicability to emerging quantum technologies. Furthermore, massive and spatially extended quantum systems such as complex molecules and nanoparticles in quantum superposition are proposed test systems for universal boundaries of the validity of quantum theory, with strong relevance for future nanotechnology~\cite{nimmrichter2011testing, romero2011optically}. The wave nature of molecular motional states (for molecules as massive as ten C$_{60}$ molecules) has been demonstrated~\cite{Gerlich2011quan}, and as a next step we now want to characterize the quantum state of motion of molecules in Talbot-Lau interferometry.
The wave function cannot be observed directly, but the Wigner distribution function (WDF) offers an alternative perspective on quantum dynamics, since the WDF is equivalent to the density matrix~\cite{wigner1932quantum,schleich2001}. Quantum states have the unique property that they \emph{can} generate negative values of this quasi-probability function; the negativity of the Wigner function is therefore seen as a proof of the quantum nature of the state under consideration. A fully reconstructed Wigner function contains the complete information about the measured state. The process of evaluating the state is called phase-space tomography.
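In position representation the WDF of a pure state $\psi$ reads (this standard definition is added here for reference; the sign convention varies in the literature)
\begin{equation}
W(x,p)=\frac{1}{2\pi\hbar}\int_{-\infty}^{\infty}\mathrm{d}\xi\,
e^{ip\xi/\hbar}\,\psi^{*}\!\left(x+\frac{\xi}{2}\right)\psi\!\left(x-\frac{\xi}{2}\right),
\end{equation}
and its momentum marginal $\int W(x,p)\,\mathrm{d}p=|\psi(x)|^{2}$ reproduces exactly the spatial probability density.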
Phase-space tomography for the Wigner function was pointed out in a general context by Bertrand and Bertrand~\cite{bertrand1987tomographic} and independently by Vogel and Risken~\cite{Vogel1989}, and applied in photonics~\cite{leonhardt1997measuring}, where quantum state tomography is an established experimental tool to quantify the quantum state of light~\cite{lundeen2008tomography}. It has recently been used to demonstrate the state squeezing of atomic Bose condensates~\cite{schmied2011tomographic} and to quantify the motional quantum state of a trapped Be$^+$ ion~\cite{leibfried1996experimental}. An early experiment to characterize the quantum nature of atomic motion in de Broglie interference was the Wigner function reconstruction of meta-stable He atoms diffracted at a double slit~\cite{kurtsiefer1997measurement}, as theoretically proposed earlier~\cite{janicke1995tomography}. The Wigner function reconstruction has also been proposed as a means to prove the quantum nature of the superposition of very massive particles~\cite{romero2011} and even macroscopic opto-mechanical systems~\cite{Vanner2011}. Furthermore, quantum state tomography has been applied to prove entanglement in superconducting qubits~\cite{steffen2006measurement}.
In more technical terms, the Wigner function is a quasi-probability distribution of states in phase space. In the case of de Broglie interference of molecules the quantum state is projected onto the spatial coordinate, which gives the spatial number distribution of particles after the diffraction grating (see Fig.~\ref{fig:talbot}). This spatial distribution is needed for different rotation angles of the quantum state in phase space. The rotation in phase space arises naturally from the free-space propagation of a particle beam after diffraction and can be evaluated by measuring the spatial distribution at different distances after the grating. The inverse Radon transformation of the collected density distributions then yields the Wigner function. In the experiment the spatial coherence is prepared by another grating placed in front of the diffraction grating. Many coherent sources then constructively contribute to the same interference pattern due to the Lau effect. This multi-grating configuration is the so-called \textit{Talbot-Lau} interferometer (TLI) and is explained in detail elsewhere~\cite{nimmrichter2008theory,brezger2002}.
Here, we discuss the Wigner function reconstruction of matter-waves in the near-field Talbot regime with illumination of the grating by a single coherent source. If the spatial coherence of the matter wave is high, which means that plane waves reach the grating, then the Talbot simulations presented here are also valid for the Talbot-Lau scheme and the reconstruction yields the same Wigner function.
\begin{figure}
\centering
\includegraphics[scale=0.25]{fig1}
\caption{\label{fig:talbot}The setup as used for the simulations. The probability distribution over two Talbot lengths $z_T$, the quantum carpet, simulated with a finite grating and a grating opening fraction (slit width / period) of $f_0=0.3$, is illustrated. The reappearance of the grating self-images is the Talbot effect, which occurs if a periodic structure is coherently illuminated.}
\end{figure}
We theoretically perform phase-space tomography of the centre-of-mass motional quantum state of massive molecules in a near-field TLI~\cite{clauser1992}. Near-field means that the interference pattern is on the same size scale as the diffraction grating period. In contrast, far-field (Fraunhofer) interference patterns are much larger than the grating (for an illustration see Hornberger et al.~\cite{Hornberger2012}). Earlier investigations of de Broglie quantum states have dealt with far-field pattern reconstruction~\cite{kurtsiefer1997measurement,janicke1995tomography}. From light and matter-wave optics it is known that complex diffraction patterns can be expected in this near-field regime. Those structures are sometimes called quantum carpets~\cite{friesch2000,berry2001}. Recently, optical quantum carpets have been experimentally observed by Case et al.~\cite{case2009realization}. Talbot carpets have been applied to computations such as number factorization~\cite{clauser2008,schleich2008,gilowski2008gauss}.
\section{Theoretical Model}
\subsection{Phase Space Tomography}
The WDF $W(x,p)$ of a complex signal $\psi(x)$ is defined by \cite{wigner1932quantum}:
\begin{equation}
W(x,p)=\frac{1}{\pi}\int_{-\infty}^{\infty} \psi^*(x-x')\psi(x+x')e^{-i2 p x'}dx' ,
\label{eq:wdf}
\end{equation}
with momentum $p$ and position $x$. Throughout the paper we set $\hbar=1$. Experimentally, we measure the spatial intensity pattern, which corresponds to the projection of the WDF onto the space coordinate. This is formulated by integration of the WDF over the momentum variable $p$:
\begin{equation}
P(x)=\int_{-\infty}^{\infty} W(x,p) dp.
\label{eq:P}
\end{equation}
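As an illustration (a sketch of ours, not the code used for the figures in this paper), the WDF definition above can be discretized directly; the kernel $e^{-i2px'}$ fixes the momentum grid of the discrete Fourier transform to $p_k=\pi k/(n\,dx)$:

```python
import numpy as np

def wigner(psi, x):
    """Discretized WDF of the definition above, with hbar = 1 (illustrative).

    The kernel e^{-i 2 p x'} makes the FFT frequency grid p_k = pi*k/(n*dx),
    i.e. a momentum step dp = pi/(n*dx)."""
    n, dx = len(x), x[1] - x[0]
    shifts = np.arange(n) - n // 2
    W = np.zeros((n, n))
    for i in range(n):
        corr = np.zeros(n, dtype=complex)
        for j, s in enumerate(shifts):
            if 0 <= i - s < n and 0 <= i + s < n:
                corr[j] = np.conj(psi[i - s]) * psi[i + s]   # psi*(x-x') psi(x+x')
        # Fourier transform over x' (ifftshift puts x' = 0 at index 0)
        W[i] = np.real(np.fft.fftshift(np.fft.fft(np.fft.ifftshift(corr)))) * dx / np.pi
    return W  # rows: x, columns: p (centred grids)
```

Summing the columns with weight $dp=\pi/(n\,dx)$ recovers the marginal $|\psi(x)|^2$, and a two-peak superposition produces the negative interference structures discussed below.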
If the WDF is rotated by angle $\theta$ in phase space, it becomes
\begin{equation}
W_{\theta}(x,p)=W(x\cos\theta-p\sin\theta, x\sin\theta+p\cos\theta),
\end{equation}
and in analogy to Eq.~(\ref{eq:P}) the spatial intensity pattern $P_{\theta}(x)$ or the marginal probability can be obtained by
\begin{equation}
P_{\theta} (x)=\int_{-\infty}^{\infty} W_{\theta}(x,p) dp .
\label{eq:Ptheta}
\end{equation}
These intensity patterns at various rotation angles can be obtained from diffraction patterns such as the quantum carpet of a grating with an infinite number of slits. Phase space tomography is based on the inversion of Eq.~(\ref{eq:Ptheta}), resulting in the reconstructed WDF:
\begin{equation}
W(x,p)=\frac{1}{4\pi^2}\int_{-\infty}^{\infty} dx' \int_{0}^{\pi} d\theta P_{\theta}(x') \int_{-\infty}^{\infty} dr|r| e^{ir(x'-x\cos\theta-p \sin\theta)},
\label{eq:radon}
\end{equation}
which is the \emph{inverse Radon transformation}. Eq.~(\ref{eq:Ptheta}) can be written as
\begin{equation}
P_{\theta}(x)=\sum_n p_n|\psi_{n,\theta}(x)|^2,
\end{equation}
with
\begin{equation}
\psi_{n,\theta}(x)=\frac{1}{\sqrt{2\pi i \sin\theta}} \int_{-\infty}^{\infty} dx' e^{-i(\frac{x}{\sin\theta}x' -\frac{1}{2}\cot\theta x'^{2})}\psi_n(x'),
\label{eq:fresnelwave}
\end{equation}
which expresses the fact that the active rotation of the Wigner function can be implemented by a fractional Fourier transformation in Fresnel diffraction theory, as derived in~\cite{janicke1995tomography}.
\subsection{Wigner Function Reconstruction for Free Space Propagation}
A full reconstruction of the WDF requires spatial probability distributions $P_{\theta}(x)$ for every angle $\theta$ between 0 and $\pi$. The easiest way to rotate the WDF is the free space propagation of the particles. The rotation angle then depends on the distance $z$ between the diffraction grating and the detector. The free propagation of the diffracted wave function is according to Fresnel diffraction theory given by~\cite{Hornberger2012}:
\begin{equation}
\psi(x,z)=\frac{1}{\sqrt{i\lambda z }}\int_{-\infty}^{\infty} dx' e^{i\frac{k}{2z}(x-x')^2}\psi(x',0),
\label{eq:fdt}
\end{equation}
where $k=2\pi/\lambda$ is equivalent to the momentum, with $\hbar=1$, and $\lambda$ is the de Broglie wavelength. We rescale the $x$-axis and obtain the expression:
\begin{equation}
\psi(\frac{x}{s},z)=e^{-ik\frac{x^2}{2zs^2}} \int_{-\infty}^{\infty} dx' e^{-ik(\frac{xx'}{sz}-\frac{x'^2}{2z})}\psi(x',0).
\label{eq:fresnelevolution}
\end{equation}
Comparing Eqs.~(\ref{eq:fresnelwave}) and (\ref{eq:fresnelevolution}) gives $s=k\sin(\theta)/z$ for the rescaling of the $x$-axis, and $\cot(\theta)=k/z$ for the dependence of the rotation angle $\theta$ on the distance $z$. As a first result we find that free space propagation does not lead to a full rotation over $\pi$ for finite $z$. Sufficient rotation to fully reconstruct the WDF could be achieved by a lens. Such a lens has been implemented for atomic matter-waves by a Fresnel zone plate~\cite{carnal1991imaging, reisinger2009poisson} or a standing light wave~\cite{sleator1992imaging}. In~\cite{janicke1995tomography} it was shown theoretically that by using a lens a $\pi$/2-rotation, and therefore the Fourier transform, is accessible for finite values of $z$. Since the realization of such a lens has not been demonstrated for molecular matter-waves, we here consider the simplest means of rotation, which is free space propagation. However, it is possible to increase the accessible angle of rotation, and therefore the amount of information about the quantum state, by additional symmetry assumptions on the investigated WDF~\cite{pfau1997partial}, which we will use in Sec.~\ref{rotationangle}. We will investigate the \emph{partial} reconstruction of the WDF in the Talbot regime for $\theta$ between $0$ and $\pi/2$. As a figure of merit we use the appearance of negativity of the WDF.
Technically, we numerically reconstructed WDF using the filtered back-projection algorithm~\cite{Herman80} for the inverse Radon transformation:
\begin{equation}
W(x,p)=\int_{0}^{\pi}d\theta \int_{-x_m}^{x_m} dx' P_\theta(x') g(x'-x\cos\theta -p\sin\theta),
\end{equation}
with
\begin{equation}\label{eq:cutoff}
g(x)\approx 2(-\frac{1}{x^2}+\frac{\cos(r_c x)}{x^2}+\frac{r_c\sin(r_c x)}{x}),
\end{equation}
which is approximated from $g(x)=\int_{-\infty}^{\infty} dr\, |r| e^{irx}$, where $x_m$ is the real range of the transverse $x$-axis and $r_c$ is a cut-off frequency.
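For reference, the band-limited kernel can be checked directly against its defining integral; this Python sketch (ours, not the paper's code) implements the closed form above with the exact $x\to0$ limit $g(0)=r_c^2$:

```python
import numpy as np

def g_kernel(x, r_c):
    """Closed form of int_{-r_c}^{r_c} |r| e^{i r x} dr, the back-projection
    kernel above; the x -> 0 branch avoids catastrophic cancellation."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.empty_like(x)
    small = np.abs(x) * r_c < 1e-6
    out[small] = r_c**2                         # exact limit for x -> 0
    xs = x[~small]
    out[~small] = 2 * (-1.0 / xs**2
                       + np.cos(r_c * xs) / xs**2
                       + r_c * np.sin(r_c * xs) / xs)
    return out
```

Numerical quadrature of $\int_{-r_c}^{r_c}|r|\cos(rx)\,dr$ agrees with the closed form, confirming the expression used in the filtered back-projection.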
\subsection{WDF of Ideal Quantum Carpet and Talbot Effect}
The WDF for a quantum superposition state as in the classic double slit arrangement, a cat state, is well known. Here, we are interested in the WDF of an ideal quantum carpet, the near-field wave diffraction pattern after a grating with many slits. We start with a grating with an infinite number of slits, as it gives a nice analytical expression. The wave function $\psi(x)$ after the grating is:
\begin{equation}
\psi(x)= t_c(x)\otimes \sum_{n=-\infty}^{\infty} \delta(x-nd),
\label{eq:combf}
\end{equation}
where $d$ is the grating period, $t_c(x)$ is the grating transmission function for a single slit ($-d/2\leq x\leq d/2$), and $\otimes$ denotes convolution: $(f\otimes g)(x)=\int_{-\infty}^{\infty} f(x-x')g(x')dx'$. An infinite train of delta functions, a comb function, can be expressed as $\frac{1}{d} \sum_{n=-\infty}^{\infty} e^{i2\pi n \frac{x}{d}}$. With that we rewrite Eq.~(\ref{eq:combf}) as:
\begin{equation}
\psi(x)=\sum_{n=-\infty}^{\infty}A_n e^{i2\pi n \frac{x}{d}},
\end{equation}
where $A_n=d^{-1}\int_{-d/2}^{d/2} dx \,t_c(x) e^{-i2\pi n x/d}$.
The wave function along $z$ is then, following Eq.~(\ref{eq:fresnelevolution}), given by:
\begin{equation}
\psi(x,z)\sim \sum_{n=-\infty}^{\infty} A_n e^{i2\pi n\frac{x}{d}} e^{-i\pi n^2 \frac{z \lambda}{d^2}}.
\end{equation}
The second exponential term generates a periodic reappearance of the identical pattern at different positions in the $z$-direction. This is the Talbot effect. Self-images of the grating revive at multiples of the Talbot distance $z_T = 2\frac{d^2}{\lambda}$. The probability distribution of this Talbot effect is shown in Fig.~\ref{fig:talbot}. We note that self-images also revive at odd multiples of $z_T/2$, displaced by $d/2$ in the $x$-direction. It is this $d/2$ shift which carries non-classical information about the motional state and generates negative values of the WDF. The WDF reconstruction will therefore especially need to include such $z$-positions. In the following we define the diffracted wave function in units of the Talbot distance, since the self-image is repeated at multiples of $z_T$. We note that the number of near-field Talbot self-images typically depends on the number of contributing (coherently illuminated) grating slits. Experimentally, five Talbot lengths $z_T$ are easy to achieve~\cite{Hornberger2012}.
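This self-imaging is easy to reproduce numerically from the truncated Fourier series; the following Python sketch (ours, with $d=1$ and $z$ in units of $z_T$) is one way to generate the quantum carpet:

```python
import numpy as np

def quantum_carpet(f0=0.3, n_orders=50, nx=200, nz=101, zmax=2.0):
    """|psi(x,z)|^2 for an infinite grating: psi = sum_n A_n e^{2 pi i n x}
    e^{-2 pi i n^2 (z/z_T)}, since z*lambda/d^2 = 2 z/z_T."""
    n = np.arange(-n_orders, n_orders + 1)
    A = np.sin(np.pi * n * f0) / (np.pi * n + (n == 0))  # A_n = f0 sinc(n pi f0)
    A[n == 0] = f0
    x = np.arange(nx) / nx                 # one period, no duplicated endpoint
    z = np.linspace(0.0, zmax, nz)         # in units of z_T
    phase = np.exp(-2j * np.pi * np.outer(z, n**2))   # Talbot phases
    waves = np.exp(2j * np.pi * np.outer(n, x))       # grating orders
    return np.abs(phase @ (A[:, None] * waves))**2    # shape (nz, nx)
```

The resulting intensity reproduces the revivals described above: identical self-images at integer multiples of $z_T$ and images shifted by half a period at odd multiples of $z_T/2$.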
The exact WDF for the infinitely periodic input is obtained from Eq.~(\ref{eq:wdf}) as
\begin{equation}
W(x,p)=\sum_{n=-\infty}^{\infty}\sum_{n'=-\infty}^{\infty}A_nA^\ast_{n'} e^{i2\pi (n-n') \frac{x}{d}} \delta(p/2\pi-\frac{n+n'}{2d}).
\label{eq:WDFexact}
\end{equation}
The WDF of the $\delta$-comb function, which is the mathematically simplest guess, is obtained by substituting $m=n+n'$, and setting $A_n=1/d$:
\begin{eqnarray}
W(x,p)&=&\frac{1}{d^2} \sum_{n=-\infty}^{\infty} e^{i2\pi (2n) \frac{x}{d}}\sum_{m=-\infty}^{\infty}e^{-i2\pi (m) \frac{x}{d}} \delta(p/2\pi-\frac{m}{2d}) \nonumber \\
&=& \frac{1}{2d} \sum_{n=-\infty}^{\infty} \sum_{m=-\infty}^{\infty} (-1)^{nm} \delta(x-\frac{nd}{2}) \delta(p/2\pi-\frac{m}{2d}),
\label{eq:WDFcomb}
\end{eqnarray}
and plotted in phase space as shown in Fig.~\ref{fig:wdfcomb}(a). The WDF is a sum of $\delta$-functions in momentum, as seen in Eq.~(\ref{eq:WDFexact}). These delta-function peaks, however, do not appear in an experimental physical system where we have a finite number of slits of finite width. A grating with finite slit width is described by the Fourier coefficients $A_n=f_0\,\mathrm{sinc}(n\pi f_0)$ and is plotted in Fig.~\ref{fig:wdfcomb}(b). We note the negative peaks (bright spots) in the WDF, showing the non-classical character of the state. We observe that the peak width in momentum becomes narrower as the number of slits increases.
\begin{figure}[t]
(a)\hspace{7cm}(b)\\
\includegraphics[scale=0.65]{fig2a}
\includegraphics[scale=0.70]{fig2b}
\caption{Exact Wigner distribution function of (a) a $\delta$-function comb wave and (b) a rectangular wave. These correspond to the WDFs of infinite gratings. The symbols +, - in (a) indicate positive and negative delta peaks, and the bar in (b) indicates the gray scale of the WDF in the contour plot.}
\label{fig:wdfcomb}
\end{figure}
\section{Simulations: Limits for Finding Negativity in WDF}
Here we simulate the effects of experimental limitations on the quality of the reconstructed WDF, which include:
\begin{itemize}
\item Range of rotation angles,
\item Spatial detector resolution,
\item Incoherent source (in terms of collimation of molecule beam, spatial coherence),
\item Finite gratings,
\item Van der Waals interaction (interaction between molecules and material gratings),
\item Visibility.
\end{itemize}
We will conclude each section with the experimental feasibility of these limits. For the numerical simulations, the grating period $d$ is set as unit length and $1/d$ as unit momentum. All parameters ($x,z,\lambda$, $p$) become dimensionless by rescaling: $x \rightarrow x/d$, $z\rightarrow z/d$, $\lambda\rightarrow\lambda/d$, and $p\rightarrow pd$. The wavelength is chosen to be $\lambda=10^{-5}d$ and the open fraction of the gratings is set to $f_0$=0.3. This choice of parameters adapts the simulation to molecule interferometry experiments, which are at the centre of our interest~\cite{brezger2002}. However, our results are universal and can easily be tuned to represent the same diffraction effects in the Talbot regime for other electromagnetic or matter waves. The cut-off frequency $r_c$ as defined in Eq.~(\ref{eq:cutoff}) was optimized to $r_c=30$ to show the structure of the WDF without high-frequency computational noise. The WDF is reconstructed based on the normalized probability distribution and will be plotted as contour maps in all following figures, where the gray scale bar indicates the value of the WDF. The scale bar allows for comparison of the different effects, as the intensity scale has been normalized for each effect. For some WDF plots we show a cross section of the contour plot to visualize the negativity.
\subsection{Rotation Angle}\label{rotationangle}
\begin{figure}
(a)\hspace{8cm}(b)\\
\includegraphics[scale=0.65]{fig3a}
\includegraphics[scale=0.65]{fig3b} \\
(c)\hspace{8cm}(d)\\
\includegraphics[scale=0.65]{fig3c}
\includegraphics[scale=0.65]{fig3d}
\caption{(a) Propagation distance $z$ as a function of the rotation angle $\theta$, with the distance $z$ scaled by the Talbot distance $z_T$. (b) Full reconstruction of the WDF, and partial reconstructions of the WDF from data taken within $[0,4z_T]$ (c) and $[0, z_T]$ (d).}
\label{fig:ztheta}
\end{figure}
The WDF is rotated by free space propagation. In the experiment the rotation angle is described by the distance $z$ after the grating:
\begin{equation}
z = \frac{2\pi}{\lambda}\tan\theta.
\label{eq:z_theta}
\end{equation}
Since we use $z$ and $\lambda$ in units of $d$, Eq.~(\ref{eq:z_theta}) becomes $z/z_T=\pi \tan\theta$, which is illustrated in Fig.~\ref{fig:ztheta}(a). As a result, a rotation close to $\pi$/2 can be achieved within several Talbot distances $z_T$, although a rotation of exactly $\pi$/2 corresponds to an infinite distance. A full reconstruction of the WDF is therefore not feasible, and we now investigate whether \emph{partial} reconstruction with limited rotation angles still leads to negativity of the WDF. In Fig.~\ref{fig:ztheta}, we compare the full reconstruction (b) with partial reconstructions of the WDF from data taken within $[0,4z_T]$ (c) and within $[0,z_T]$ (d). We observe a tilting of the WDF for partial reconstruction, but the positions of the peaks and the negativity remain. When partially reconstructing the WDF, negative peaks appear once the displaced self-images, which are truly due to interference, are included. We therefore conclude that these negative peaks in the partially reconstructed WDF show non-classical behavior. In the following, we choose distances between 0 and $4z_T$, which corresponds to rotation angles between 0 and $\arctan(4/\pi)$ according to Eq.~(\ref{eq:z_theta}). We note that the rotation angle depends on the unit length and the corresponding WDF is also expressed as a function of the unit length~\cite{pfau1997partial}.
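In the rescaled units the mapping reads $z/z_T=\pi\tan\theta$, so the angles accessible for a given flight path are easily tabulated (a small Python sketch of ours):

```python
import numpy as np

# z/z_T = pi * tan(theta)  =>  theta = arctan(z / (pi * z_T))
z_over_zT = np.array([0.5, 1.0, 2.0, 4.0])
theta_deg = np.degrees(np.arctan(z_over_zT / np.pi))
# even four Talbot lengths reach only about 52 degrees, far short of the
# pi/2 rotation needed for a full tomographic reconstruction
```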
\subsection{Resolution in x and z}
In the last section we investigated the range of rotation angles necessary for observing negativity in the WDF. Now, our interest is the dependence on the resolution in the $x$- as well as in the $z$-direction. The resolution in the $x$-direction corresponds to the spatial resolution of the detector. Simulations are shown in Fig.~\ref{fig:prwdf_dx}, where we compare $dx=0.01d$, $dx=0.05d$, and $dx=0.1d$. We conclude that a resolution of ten measurement points per grating period $d$ (and possibly even fewer) is sufficient to reconstruct the negativity of the WDF. This resolution is typically achieved in state-of-the-art molecule interference experiments. However, the structure of the reconstructed WDF becomes more pronounced for higher $x$-resolution. We chose $dx=0.1d$ for the following simulations.
We find that an important parameter is the number of rotation angles $N_\theta$, which corresponds to the propagation distance $z$, as discussed in Sec.~\ref{rotationangle}. On the other hand, the resolution in the $z$-direction itself is not critical, and we show simulations in Fig.~\ref{fig:rwdf_dn} to prove this statement. We keep the total distance $z$ constant, but change the resolution, which is the number of scans within $z$. In Fig.~\ref{fig:rwdf_dn}(a) the reconstructed WDF for $N_{\theta}$=20 within 4$z_T$ is shown. The structure is very similar to Fig.~\ref{fig:ztheta}(c), which covers the same $z$ distance. We then vary the resolution further and show cross sections of the contour plot of the WDF at $(p/2\pi)d=0.5$ for $N_{\theta}=20, 50,100$ in Fig.~\ref{fig:rwdf_dn}(b). The periodic negative and positive peaks are almost independent of $N_{\theta}$. Therefore, data taken at the self-image planes are sufficient to reconstruct the WDF. That means, when measuring only the self-image planes, we need to measure up to 4$z_T$ to find negativity in the reconstructed WDF. We note that the displaced self-images, taken at odd multiples of $z_{T}/2$, are crucial to generate the negative peaks.
\begin{figure}
(a)\hspace{8cm} (b)\\
\includegraphics[scale=0.65]{fig4a}
\includegraphics[scale=0.65]{fig4b}
\caption{Partial reconstruction of the WDF between 0 and 4$z_T$, varying the resolution of the $x$-axis, which corresponds to the spatial resolution of the detector in the experiment. (a) Contour plot for $dx=0.01d$. (b) Cross section of the WDF at $(p/2\pi)d=0.5$ for $dx=0.01d$ (red solid line), $dx=0.05d$ (green dotted line), and $dx=0.1d$ (blue dash-dotted line), where $N_\theta=100$ and $f_0=0.3$.}
\label{fig:prwdf_dx}
\end{figure}
\begin{figure}
(a)\hspace{8cm} (b)\\
\includegraphics[scale=0.65]{fig5a}
\includegraphics[scale=0.65]{fig5b}
\caption{Partial reconstruction of the WDF from 0 to 4$z_T$, varying the number of rotation angles. (a) Contour plot for $N_{\theta}=20$. (b) Cross section of the WDF at $(p/2\pi)d=0.5$ for $N_\theta=100$ (red solid line), $N_\theta=50$ (green dotted line), and $N_\theta=20$ (blue dash-dotted line), where $f_0=0.3$ and $dx=0.1d$.}
\label{fig:rwdf_dn}
\end{figure}
\subsection{Incoherent Source}\label{sec:incoherentsource}
\begin{figure}
(a)\hspace{9cm} (b)\\
\includegraphics[scale=0.65]{fig6a}
\includegraphics[scale=0.65]{fig6b} \\
(c)\hspace{9cm} (d)\\
\includegraphics[scale=0.65]{fig6c}
\includegraphics[scale=0.65]{fig6d}
\caption{Probability distribution (a) and partial reconstruction of the WDF with an incoherent source based on data from $[0, 4z_T]$ with an incident angle range of $\alpha_{max}=\pi\times10^{-6}$ (b) and $\alpha_{max}=2.5\pi\times10^{-6}$ (c). (d) The WDF at $(p/2\pi)d=0.5$ for $\alpha_{max}=0$ (red solid line), $\alpha_{max}=\pi\times10^{-6}$ (green dotted line), and $5\pi\times10^{-6}$ (blue dash-dotted line), where $f_0=0.3, N_\theta=100, dx=0.1d$.}
\label{fig:prwdf_dk}
\end{figure}
Effects of temporal coherence, which correspond to the longitudinal $z$-velocity selection of the molecular beam, are taken into account by a reduction of the visibility for a given distance $z$ in Sec.~\ref{sec:visibility}. Here, we discuss the effect of an incoherent source with respect to the spatial coherence of the molecular matter waves. This corresponds to the transverse $x$-velocity selection by collimation of the molecular beam. Assuming an incident collimation angle $\alpha$ (see Fig.~\ref{fig:talbot}), the wave function is found by averaging over all incident angles $\alpha$:
\begin{equation}
\psi(x,z) = \left<\sum_{n=-\infty}^{\infty} A_n \exp\left[i\left(k_{\alpha}+n\frac{2\pi}{d}\right)x+i\left(k-\frac{{\left(k_{\alpha}+n\frac{2\pi}{d}\right)}^2}{2k}\right)z\right]\right>_{\alpha} ,
\end{equation}
where $k=({k_z}^2+{(k_{\alpha}+n\frac{2\pi}{d})}^2)^{1/2}$ and $k_{\alpha} = k \sin(\alpha)$. The variation of the beam collimation is implemented mathematically by a Gaussian distribution with standard deviation $\sigma=0.1\alpha_{max}$, where $\alpha_{max}$ is the total range of incident collimation angles. The wave function is averaged between
[$-\alpha_{max}/2$ , $\alpha_{max}/2$] with Gaussian weight. The Talbot effect can be observed in the limit $\alpha <d/z_T$, which means that diffraction dominates over collimation. The probability distribution is plotted in Fig.~\ref{fig:prwdf_dk}(a) for an incident angle range $\alpha_{max}=\pi\times10^{-6}$, which is easy to achieve in the experiment by collimation at the first grating in a Talbot or Talbot-Lau interferometer. We observe that the interference contrast is washed out rapidly with increasing $z$. We reconstruct the WDF as shown in Fig.~\ref{fig:prwdf_dk}(b), and it still shows negative peaks between the positive peaks. However, we observe a significant loss of contrast of the WDF for an increased collimation angle ${\alpha}_{max}$, as shown in Fig.~\ref{fig:prwdf_dk}(c) and (d). Since the interference patterns are very sensitive to the spatial coherence, the Talbot-Lau interferometer configuration is used in experiments. Here, an additional grating placed in front of the diffraction grating increases the spatial coherence of the matter waves, and Talbot images can be well revived even with a molecular beam source of relatively poor spatial coherence~\cite{clauser1992}.
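The washout can be mimicked with a simplified model (our sketch, not the full wave-function average above): in the small-angle limit each incidence angle $\alpha$ shifts the pattern transversely by $\alpha z$, and the shifted patterns are summed with the Gaussian weight $\sigma=0.1\,\alpha_{max}$:

```python
import numpy as np

def talbot_row(x, z_over_zT, f0=0.3, n_orders=40):
    """Single-plane intensity of the infinite-grating Talbot pattern (d = 1)."""
    n = np.arange(-n_orders, n_orders + 1)
    A = np.sin(np.pi * n * f0) / (np.pi * n + (n == 0)); A[n == 0] = f0
    psi = (A * np.exp(-2j*np.pi*n**2*z_over_zT)) @ np.exp(2j*np.pi*np.outer(n, x))
    return np.abs(psi)**2

def incoherent_row(x, z_over_zT, alpha_max, lam=1e-5, n_alpha=41):
    """Gaussian-weighted average over incidence angles, small-angle shift model."""
    z = 2.0 * z_over_zT / lam                     # z_T = 2 d^2/lambda with d = 1
    a = np.linspace(-alpha_max/2, alpha_max/2, n_alpha)
    w = np.exp(-a**2 / (2 * (0.1*alpha_max)**2)); w /= w.sum()
    return sum(wi * talbot_row(x - ai*z, z_over_zT) for wi, ai in zip(w, a))
```

With $\alpha_{max}=\pi\times10^{-6}$ the fringe visibility at $4z_T$ comes out markedly lower than at $z_T$, qualitatively reproducing the washout with distance described above.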
\subsection{Finite Gratings}
The grating was assumed to have an infinite number of slits until now. Experimentally, the effective number of involved slits is finite, while sufficiently large to observe the Talbot effect. Talbot self-images, however, are washed out after some multiples of the Talbot distance $z_T$ as an effect of the finite number of slits~\cite{Hornberger2012}. To study this effect quantitatively, we define the wave function for a finite number of slits as:
\begin{equation}
\psi(x)=t_c(x)\otimes \sum_{n=-N_s}^{n=N_s}\delta(x-nd),
\end{equation}
with $2N_s+1$ slits. The wave function after the grating in free space propagation is then given by:
\begin{equation}
\psi(x,z)=t_c(x)\otimes \sum_{n=-N_s}^{n=N_s} e^{i\frac{\pi}{\lambda z}(x-nd)^2}.
\end{equation}
We simulate the density distribution and the WDF for various numbers of slits. Contrast in the density distribution is still visible in Fig.~\ref{fig:finitepd}(a) for as few as $N_s=10$ for up to three Talbot distances, and we can still reconstruct a WDF with negativity in Fig.~\ref{fig:finitepd}(b). Both the visibility of the positive and negative peaks of the WDF and the distance (number of $z_T$) over which near-field patterns can be seen in the density distribution increase with the number of slits.
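The finite-slit sum above can be evaluated directly; in this Python sketch (ours) the slit convolution is approximated by sampling source points across each slit, with all overall prefactors dropped so that only relative intensities matter:

```python
import numpy as np

def finite_grating_intensity(x, z_over_zT, Ns=10, f0=0.3, lam=1e-5, pts=15):
    """|psi(x,z)|^2 for 2*Ns+1 slits by direct Fresnel summation (d = 1).

    Sums exp(i*pi*(x-x')^2/(lambda*z)) over 'pts' sample points per slit."""
    z = 2.0 * z_over_zT / lam                      # z_T = 2 d^2/lambda
    slit = np.linspace(-f0/2, f0/2, pts)
    xs = (np.arange(-Ns, Ns + 1)[:, None] + slit[None, :]).ravel()
    kernel = np.exp(1j * np.pi * (np.atleast_1d(x)[:, None] - xs[None, :])**2
                    / (lam * z))
    return np.abs(kernel.sum(axis=1))**2
```

At $z=z_T$ the self-image is bright at the slit centres and dark between slits, as expected from Fig.~\ref{fig:finitepd}(a).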
\begin{figure}
(a)\hspace{9cm}(b) \\
\includegraphics[scale=0.65]{fig7a}
\includegraphics[scale=0.65]{fig7b}
\caption{Probability distribution (a) and partial reconstruction of the WDF (b) for a finite number of slits, $N_s$=10, for $[0,4 z_T]$, where $N_\theta=100, f_0=0.3, dx=0.1d$.}
\label{fig:finitepd}
\end{figure}
\subsection{Van der Waals Interaction}
Here we investigate the effect of the interaction between the grating and the molecules on the WDF reconstruction. Dispersion forces, such as van der Waals (vdW) forces in the short-range limit (on the order of 100~nm between molecule and grating wall) and Casimir-Polder (CP) forces in the long-range limit (for distances larger than some 100~nm), are known to affect the molecule interference pattern~\cite{nimmrichter2008theory} if the gratings are realized as material structures made of metals (gold, Au) or semiconductors (silicon nitride, SiN$_x$). Dispersion forces will therefore have a large effect on the quantum carpet structure, and we have to test the respective limits of the WDF reconstruction. We implement the grating-wall--molecule interaction as a phase term $\phi(x)$ in the transmission function $t'_c(x)=t_c(x)\phi(x)$ in Eq.~(\ref{eq:combf}):
\begin{equation}
\phi(x)=e^{\frac{im}{\hbar p}\int^{\infty}_{-\infty}dz\,V(x,z)},
\end{equation}
with the mass $m$ of the molecule and the vdW potential $V=-C_3/x^3$, which we exclusively discuss here for simplicity. The $1/x^3$ scaling of the potential with distance $x$ is typical for a pointlike particle in front of a surface. The vdW interaction constant $C_3$ depends on the dielectric properties of the molecule and the grating over the full electromagnetic spectrum. For our simulations we use $C_3$=10~meV~nm$^3$ for a fullerene C$_{60}$ molecule close to a gold surface, in agreement with the literature~\cite{nimmrichter2008theory}. Fig.~\ref{fig:rwdfvdw} shows the reconstructed WDF (a) without and (b) with vdW interaction; the interaction causes a small change in visibility and structure. We note that the density distribution (quantum carpet) and the WDF with vdW are very similar to simulations with a smaller open fraction of $f_0=0.1$ without vdW. This is in agreement with the known effect that the always attractive vdW interaction effectively reduces the slit width of the diffraction grating~\cite{brezger2002}. The reduced visibility in the density distribution and the reduced negativity of the WDF in Fig.~\ref{fig:rwdfvdw}(b) are explained by dephasing.
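To illustrate how the attractive potential acts mainly near the slit walls, consider the eikonal phase for a slit with walls at $\pm w/2$ (a sketch of ours with a lumped, assumed prefactor; realistic values require the molecule's mass, velocity and the grating thickness):

```python
import numpy as np

def vdw_phase(x, w, c_eff):
    """Eikonal phase (m/(hbar p)) * integral(V dz) for V = -C3/r^3 towards each
    wall, with all thickness and velocity factors lumped into c_eff (assumed)."""
    r1, r2 = w/2.0 - x, w/2.0 + x        # distances to the two walls
    return c_eff * (1.0/r1**3 + 1.0/r2**3)

def modified_transmission(x, w, c_eff):
    """t'_c(x) = t_c(x) * exp(i*phi(x)) inside the slit, 0 outside."""
    inside = np.abs(x) < w/2.0
    t = np.zeros_like(x, dtype=complex)
    t[inside] = np.exp(1j * vdw_phase(x[inside], w, c_eff))
    return t
```

The rapid phase winding near the walls effectively removes the outer part of the slit from coherent transmission, consistent with the reduced effective open fraction noted above.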
\begin{figure}
(a)\hspace{9cm}(b) \\
\includegraphics[scale=0.65]{fig8a}
\includegraphics[scale=0.65]{fig8b} \\
\caption{The partial reconstructed WDF for $[0,4z_T]$ without van der Waals (vdW) interaction (a) and with vdW interaction (b) where open fraction $f_0=0.44$, $dx=0.1d, N_\theta=80$.}
\label{fig:rwdfvdw}
\end{figure}
\subsection{Visibility}\label{sec:visibility}
Another parameter to be investigated here is the visibility $\mathcal{V}=(I_{max}-I_{min})/(I_{max}+I_{min})$, where $I$ is the intensity of the quantum carpet structure illustrated by the gray scale in the plots. The visibility is the most sensitive parameter of the pattern and is easily influenced by different effects. For instance, a low velocity selection of the molecular beam, resulting in a low longitudinal or temporal coherence of the matter wave, reduces the visibility of the carpet. The negativity of the WDF is therefore also reduced. Furthermore, as already mentioned in Sec.~\ref{sec:incoherentsource}, the Talbot effect is experimentally usually observed in the Talbot-Lau interferometer. The main difference between Talbot and Talbot-Lau from the theoretical point of view is a reduction of the interference fringe visibility in the TLI. This means our discussion of the WDF reconstruction in the Talbot regime can also be extended to the Talbot-Lau regime if we investigate the effect of visibility reduction. To include the visibility in the probability distribution (quantum carpet) we modify the density distribution $P(x,z)$ to be $I_{min}+(1-I_{min})P(x,z)$, with $I_{min}=(1-\mathcal{V})/(1+\mathcal{V})$. We reconstruct the WDF, as plotted in Fig.~\ref{fig:rwdfvis}(a), vary the visibility, and find that with a visibility of $\mathcal{V}$=0.5 we can still identify negative parts of the WDF, although the contrast is reduced, as shown in Fig.~\ref{fig:rwdfvis}(b). The analysis of visibility by WDF reconstruction will become a useful tool to investigate decoherence effects and mechanisms in molecule quantum optics.
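The rescaling used here is easily checked: applied to a normalized pattern (minimum 0, maximum 1), it produces fringes of exactly the target visibility (a Python sketch of ours):

```python
import numpy as np

def reduce_visibility(P, V):
    """P -> I_min + (1 - I_min)*P with I_min = (1 - V)/(1 + V)."""
    I_min = (1.0 - V) / (1.0 + V)
    return I_min + (1.0 - I_min) * P

def visibility(I):
    """Fringe visibility (I_max - I_min)/(I_max + I_min)."""
    return (I.max() - I.min()) / (I.max() + I.min())
```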
\begin{figure}
(a)\hspace{9cm}(b) \\
\includegraphics[scale=0.65]{fig9a}
\includegraphics[scale=0.65]{fig9b} \\
\caption{(a) Contour plot of partially reconstructed WDF for $[0,4z_T]$ with visibility $\mathcal{V}=$0.5, and (b) cross section plot of the WDF at $(p/2\pi)d=0.5$ for $\mathcal{V}=1$ (red solid line), $\mathcal{V}=0.75$ (green dotted line), and $\mathcal{V}=0.5$ (blue dash-dotted line) where $N_\theta=100, f_0=0.3, dx=0.1d$.}
\label{fig:rwdfvis}
\end{figure}
\section{Conclusion}
We have numerically analyzed the partial Wigner distribution function (WDF) reconstruction of near-field quantum carpets for free space propagation of the wave. In our study, we considered all the major experimental inefficiencies. Most importantly, we find that negativity can be observed in the reconstructed Wigner distribution function in the Talbot and Talbot-Lau regime under such realistic conditions with today's technologies. All investigated parameters and effects are promising for the realization of molecular quantum optics tomography. Important for reconstructing the WDF is to collect quantum carpet data at half-integer multiples of the Talbot distance. The Talbot regime is important for molecule interferometry, as the conceptually simpler double slit and far-field interference are much harder to realize due to experimental difficulties. The tomography of the motional quantum state presented here will become an important analytic tool in molecule quantum optics, as it gives a handle to directly detect the superposition state of the centre-of-mass motion. Furthermore, WDF reconstruction may be used to investigate wave-function dephasing effects such as van der Waals or Casimir-Polder interactions of the particles, as well as to study decoherence effects which reduce the visibility~$\mathcal{V}$. Further work needs to conduct the experimental tomography of the discussed quantum state, where the main challenge remains to generate an intense beam of large particles while keeping sufficient coherence, which requires a high phase-space density. The tomographic analysis described here is also applicable to investigate the quantum superposition of the centre-of-mass motion of even larger nanoparticles and clusters, as well as for electromagnetic waves such as X-rays~\cite{Gaffney08062007}.
\section*{Acknowledgements}
This work has been financially supported by the University of Southampton and the Foundational Questions Institute (FQXi).
\section*{References}
\bibliographystyle{unsrt}
\section{Introduction}
Mutually non-orthogonal quantum states are important in quantum key
distribution (QKD) because such states cannot be completely distinguished
from each other and hence they are intentionally used to transmit classical
information while preventing eavesdropping.
For qubits, the qubit trine represents the smallest complete set of
non-orthogonal states.
Earlier trine-based protocols include the schemes by Bechmann-Pasquinucci and
Peres \cite{b.bpp2000} and by Phoenix, Barnett, and Chefles \cite{b.pbc2000}.
When QKD is performed, it is generally assumed that Alice and Bob share a
common reference frame, the precise nature of which depends on the specific
information carriers involved.
For instance, correct orientation of Alice and Bob's coordinates is necessary
for proper alignment of the preparation and measurement apparatus.
Practical implementations of cryptographic protocols require establishing a
rigid shared frame in advance or a frequent automatic realignment.
A lack of a shared reference frame is equivalent to the presence of
decoherence in the quantum channel \cite{b.brs2007}.
In this contribution, we describe a trine-based cryptographic protocol that uses
reference-frame-free qubits and a novel scheme for key generation.
We report the asymptotic noise threshold below which this QKD procedure
is secure.
The security analysis involves a very plausible, yet unproven, simplifying
symmetry assumption.
\section{Basics of trine schemes}
A qubit trine is represented by a symmetric set of three states lying in the
$XZ$-plane of the Bloch sphere, where adjacent vectors are separated by
$120^{\circ}$; see fig.~\ref{fig.trinebloch}.
Let us call the qubit trine ${T = \{ \ket{A}, \ket{B}, \ket{C} \}}$.
Alice prepares her qubits in any one of the trine states with equal
probability, and sends these qubits one at a time to Bob.
Bob measures the qubits he receives with a probability
operator measurement (POM) whose outcomes are not projectors to the trine $T$,
such as $\proj{A}$, but rather to states orthogonal to $T$, i.e., states
belonging to the set ${T' = \{ \ket{A'}, \ket{B'}, \ket{C'} \}}$, where
\begin{equation}
\label{eq.trineprop}
\bigl|\bracket{A}{A'}\bigr|^2 = 0\,,
\quad \bigl|\bracket{A}{B'}\bigr|^2
= \bigl|\bracket{A}{C'}\bigr|^2 =\frac{3}{4}\,,
\end{equation}
with analogous relations holding for $\ket{B}$ and $\ket{C}$.
Since Alice works only with the trine $T$ while Bob is concerned only with
states from the complementary trine $T'$, we can simplify matters by treating
corresponding states of $T$ and $T'$ as identical.
For example, if Alice sends $A$, we say Bob never measures $A$ but has equal
probability of obtaining either $B$ or $C$.
In this description, the joint probabilities of the quantum communication
channel are given by table~\ref{tab.trinescheme}, for which
\begin{equation}
\label{eq.trineMI}
I(A:B) = \log_{2}\frac{3}{2} = 0.585
\end{equation}
is the mutual information $I(A:B)$ between Alice and Bob.
With such a noiseless trine channel, then, they can generate
up to $0.585$ secret key bits per qubit sent.
\begin{figure}
\centerline{\includegraphics[scale=0.7]{RFFtrineQKD-Fig1.eps}}
\caption{\label{fig.trinebloch}%
Qubit trine $T$ and complementary trine $T'$ in the Bloch representation.
The projectors on the respective kets are all symbolized by vectors in the
$XZ$-plane.}
\end{figure}
\begin{table}
\caption{Joint probabilities for the noiseless trine channel.%
\label{tab.trinescheme}}
\centerline{\rule{0pt}{28pt}
\begin{tabular}[t]{r|ccc}
& A &
\raisebox{0pt}[0pt]{\begin{tabular}[b]{@{}c@{}}Bob \\ B\end{tabular}}
& C \\ [0.5ex] \hline
A & 0 & $\displaystyle\frac{1}{6}$ & $\displaystyle\frac{1}{6}$%
\rule{0pt}{18pt}\\
\makebox[0pt][r]{\begin{rotate}{90}\hspace*{-0.8em}Alice\end{rotate}}\quad
B & $\displaystyle\frac{1}{6}$ & 0 & $\displaystyle\frac{1}{6}$%
\rule{0pt}{18pt}\\
C & $\displaystyle\frac{1}{6}$ & $\displaystyle\frac{1}{6}$ &
0\rule{0pt}{18pt}\\
\end{tabular}
}
\end{table}
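As a quick numerical check (an illustrative NumPy sketch, not part of the protocol itself), this mutual information can be evaluated directly from the joint probabilities of table~\ref{tab.trinescheme}:

```python
import numpy as np

# Joint probabilities of the noiseless trine channel (table 1):
# zeros on the diagonal, 1/6 for each of the six off-diagonal entries.
p = np.full((3, 3), 1 / 6)
np.fill_diagonal(p, 0.0)

pa = p.sum(axis=1)                     # Alice's marginal (uniform, 1/3 each)
pb = p.sum(axis=0)                     # Bob's marginal (uniform, 1/3 each)

# I(A:B) = sum_jk p_jk log2[ p_jk / (pa_j pb_k) ], skipping zero entries
mask = p > 0
mi = np.sum(p[mask] * np.log2(p[mask] / np.outer(pa, pb)[mask]))
print(mi)                              # -> 0.585..., i.e. log2(3/2)
```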
\section{The double trine scheme}
If Alice and Bob use physical qubits for the logical qubits of the trine, they
must be sure to agree on the coordinates for preparing and measuring
their quantum signals.
However, they can skip this problem altogether by using reference-frame-free
(RFF) qubits.
In our scheme a logical qubit is constructed by coupling three physical
qubits.
For concreteness, we consider the physical qubits to be spin-$\frac{1}{2}$
particles (with $\ket{0}$ for spin-up and $\ket{1}$ for spin-down)
and combine their angular momenta in the appropriate manner.
Following the recipe of ref.~\cite{b.ste2008}, we consider two trines.
In the subspace ($j=\frac{1}{2}$, $m=\frac{1}{2}$) we have the trine
\begin{eqnarray}
\label{eq.pstates}
\ket{p_1} &=& \bigl(\ket{001} - \ket{010}\bigr)/\sqrt{2}\,, \nonumber\\
\ket{p_2} &=& \bigl(\ket{100} - \ket{001}\bigr)/\sqrt{2}\,, \nonumber\\
\ket{p_3} &=& \bigl(\ket{010} - \ket{100}\bigr) /\sqrt{2}\,,
\end{eqnarray}
and the states
\begin{eqnarray}
\label{eq.qstates}
\ket{q_1} &=& \bigl(\ket{101} - \ket{110}\bigr)/\sqrt{2}\,, \nonumber\\
\ket{q_2} &=& \bigl(\ket{110} - \ket{011}\bigr)/\sqrt{2}\,, \nonumber\\
\ket{q_3} &=& \bigl(\ket{011} - \ket{101}\bigr) /\sqrt{2}\,,
\end{eqnarray}
constitute the trine in the subspace ($j=\frac{1}{2}$, $m=-\frac{1}{2}$).
All relevant states are in the ${j=\frac{1}{2}}$ sector of the three
spin-$\frac{1}{2}$ atoms and, for the sake of simplifying the notation, we shall
consistently ignore the empty ${j=\frac{3}{2}}$ sector.
The sums of the projectors to corresponding $p$- and $q$-states are
\begin{equation}
\label{dts}
W_i=\proj{p_{i}} + \proj{q_{i}}=S_{jk} \,,
\end{equation}
where $S_{jk}$ projects on the singlet sector for atoms $j$ and $k$,
and the indices $ijk$ pertain to all cyclic permutations of $123$.
By construction, the $W_{i}$s are rotationally invariant and hence have
the same properties for all reference frames---they are RFF operators.
We note that the
$W_i$s have two eigenvalues $0$ and two eigenvalues $1$, so
that $W_i$ and ${1-W_i}$ project on orthogonal two-dimensional subspaces, and
\begin{equation}
\label{W-traces}
\sum_{i=1}^3W_i=\frac{3}{2}\,,\quad
\tr{W_i}=2\,,\quad\tr{W_iW_j}=\frac{3\delta_{ij}+1}{2}
\end{equation}
are identities that will be relevant in what follows.
Because our scheme has two independent sets of trines, we call it the
\emph{double trine scheme}.
It works as follows.
Alice sends a random sequence of the states $\rho_i=\frac{1}{2}W_i$ to Bob,
with the three states occurring with equal frequency, and Bob measures them
with a POM whose outcomes are $\Pi_j=\frac{2}{3}(1-W_j)$.
The resulting joint probabilities,
\begin{equation}
\label{jointprobs}
p_{ij}=\frac{1}{3}\tr{\rho_i\Pi_j}=\frac{1-\delta_{ij}}{6}
\end{equation}
are those of table~\ref{tab.trinescheme}.
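These operator identities and the resulting joint probabilities can be verified directly with explicit $8\times8$ matrices for the three physical qubits. The following NumPy sketch (an illustrative check, not part of the protocol) builds the singlet projectors $S_{jk}$ and tests (\ref{W-traces}) and (\ref{jointprobs}):

```python
import numpy as np

# Two-qubit singlet projector |s><s| with |s> = (|01> - |10>)/sqrt(2)
s = np.array([0, 1, -1, 0]) / np.sqrt(2)
S = np.outer(s, s)
I2 = np.eye(2)

def embed_02(op):
    """Embed a two-qubit operator on qubits 0 and 2 of a three-qubit register."""
    T = np.kron(op, I2).reshape((2,) * 6)      # axes ordered (q0, q2, q1 | ...)
    return T.transpose(0, 2, 1, 3, 5, 4).reshape(8, 8)

# W_1 = S_23, W_2 = S_31, W_3 = S_12 (atoms 1,2,3 <-> qubit indices 0,1,2)
W = [np.kron(I2, S), embed_02(S), np.kron(S, I2)]

# Trace identities: tr W_i = 2 and tr W_i W_j = (3 delta_ij + 1)/2
for i in range(3):
    assert np.isclose(np.trace(W[i]), 2)
    for j in range(3):
        assert np.isclose(np.trace(W[i] @ W[j]), (3 * (i == j) + 1) / 2)

# sum_i W_i = 3/2 on the j=1/2 sector: its eigenvalues are only 0 and 3/2
ev = np.sort(np.linalg.eigvalsh(sum(W)))
assert np.allclose(ev[:4], 0) and np.allclose(ev[4:], 1.5)

# Joint probabilities p_ij = (1/3) tr(rho_i Pi_j) with rho_i = W_i/2 and
# Pi_j = (2/3)(1 - W_j): zero on the diagonal, 1/6 off it (table 1)
p = np.array([[np.trace((W[i] / 2) @ ((2 / 3) * (np.eye(8) - W[j]))) / 3
               for j in range(3)] for i in range(3)])
assert np.allclose(p, (1 - np.eye(3)) / 6)
```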
\section{Signal and idler qubit}
The sum of the three $p$-kets of (\ref{eq.pstates}) vanishes---they are
linearly dependent because the ${j=m=\frac{1}{2}}$ sector is two-dimensional.
A pair ${\ket{++},\ket{-+}}$ of orthogonal kets is identified by
\begin{equation}
\label{eq.+kets}
\bigl(\ket{p_1},\ket{p_2},\ket{p_3}\bigr)
=\bigl(\ket{++},\ket{-+}\bigr)
\frac{1}{\sqrt{2}}\left(
\begin{array}{rrr}
1 & \omega^{\phantom{2}} & \omega^2 \\ 1 & \omega^2 & \omega^{\phantom{2}}
\end{array}\right),
\end{equation}
where $\omega=\exp(\mathrm{i}2\pi/3)$, and likewise we have
\begin{equation}
\label{eq.-kets}
\bigl(\ket{q_1},\ket{q_2},\ket{q_3}\bigr)
=\bigl(\ket{+-},\ket{--}\bigr)
\frac{1}{\sqrt{2}}\left(
\begin{array}{rrr}
1 & \omega^{\phantom{2}} & \omega^2 \\ 1 & \omega^2 & \omega^{\phantom{2}}
\end{array}\right)
\end{equation}
for the $q$-states.
We regard the four orthogonal states $\ket{\pm\pm}$ that span the
${j=\frac{1}{2}}$ sectors of the three spin-$\frac{1}{2}$ atoms as two-qubit
states~\cite{b.ste2008} whereby, for example, ket $\ket{+-}$ has the
\emph{signal qubit} in the `$+$' state and the \emph{idler qubit} in the `$-$'
state.
The signal states ${\ket{A},\ket{B},\ket{C}}$ that we identify as
\begin{equation}
\label{eq.sigABC}
\bigl(\ket{A},\ket{B},\ket{C}\bigr)
=\bigl(\ket{+},\ket{-}\bigr)
\frac{1}{\sqrt{2}}\left(
\begin{array}{rrr}
1 & \omega^{\phantom{2}} & \omega^2 \\ 1 & \omega^2 & \omega^{\phantom{2}}
\end{array}\right)
\end{equation}
form the single-qubit trine that matters.
Upon denoting the Pauli operators of the signal qubit by $X$, $Y$, and $Z$ and
identifying $\ket{\pm}$ with the eigenkets of $Y=\mathrm{i}XZ$, the
signal-qubit trine is in the $XZ$ plane as depicted in
fig.~\ref{fig.trinebloch}.
In view of
\begin{equation}
\label{eq.sigW}
W_1=\proj{A}\otimes 1\,,\quad
W_2=\proj{B}\otimes 1\,,\quad
W_3=\proj{C}\otimes 1\,,
\end{equation}
the idler sector is completely irrelevant: Alice encodes the information in
the signal qubit only, and Bob's POM does not probe the idler qubit at all.
The sole purpose of the idler qubit is to render possible the
construction of the rotationally invariant signal qubit.
We can, therefore, think of the double trine scheme as a generic scheme of the
kind described in the context of fig.~\ref{fig.trinebloch} with the signal
qubit carrying the quantum state from Alice to Bob.
\section{Common source scenario}
Rather than having Alice prepare qubits in the trine states and send
them to Bob, who then analyzes them with the trine POM, we can generate the
joint probabilities of table~\ref{tab.trinescheme} in a more symmetric and
largely equivalent way.
In this alternative scenario, a source distributes entangled two-qubit states
to Alice and Bob.
Ideally, the two signal qubits are in their singlet state that is described
by the statistical operator
\begin{equation}
\label{eq.sourcestate}
\rho_0=\proj{s}\quad\mbox{with}\enskip
\ket{s} = \frac{\ket{+-}-\ket{-+}}{\sqrt{2}}\,.
\end{equation}
On their respective qubits, Alice and Bob then both measure the same trine POM
with the outcomes
\begin{equation}
\label{eq.trinepovm}
\Pi_{i} = \proj[\frac{2}{3}]{i}\quad\mbox{for}\enskip i=A,B,C\,.
\end{equation}
Indeed, the resulting joint probabilities,
\begin{equation}
\label{eq.jointprob}
p_{jk}= \tr{\Pi_{j}\otimes \Pi_{k}\,\rho_0} \quad\mbox{for}\enskip
j,k = A,B,C\,,
\end{equation}
are those of table~\ref{tab.trinescheme}.
In the security analysis below, we shall assume that the source is controlled
by eavesdropper Eve.
Her activities will introduce noise into the quantum channel between Alice and
Bob, who only accept qubits from a source that \emph{looks like}
the singlet of (\ref{eq.sourcestate}) with an admixture of unbiased noise,
\begin{equation}
\label{eq.unbiasednoise}
\rho_\epsilon = \proj[(1-\epsilon)]{s} + \frac{\epsilon}{4}
\end{equation}
with $0 \leq \epsilon \leq 1$.
\begin{table}
\caption{Joint probabilities for the noisy trine channel.%
\label{tab.noisytrine}}
\centerline{\rule{0pt}{28pt}
\begin{tabular}[t]{r|ccc}
& A &
\raisebox{0pt}[0pt]{\begin{tabular}[b]{@{}c@{}}Bob \\ B\end{tabular}}
& C \\ [0.5ex] \hline
A & $\displaystyle\frac{\epsilon}{9}$
& $\displaystyle\frac{3-\epsilon}{18}$
& $\displaystyle\frac{3-\epsilon}{18}$%
\rule{0pt}{18pt}\\
\makebox[0pt][r]{\begin{rotate}{90}\hspace*{-0.8em}Alice\end{rotate}}\quad
B & $\displaystyle\frac{3-\epsilon}{18}$
& $\displaystyle\frac{\epsilon}{9}$
& $\displaystyle\frac{3-\epsilon}{18}$%
\rule{0pt}{18pt}\\
C & $\displaystyle\frac{3-\epsilon}{18}$
& $\displaystyle\frac{3-\epsilon}{18}$
& $\displaystyle\frac{\epsilon}{9}$\rule{0pt}{18pt}\\
\end{tabular}
}
\end{table}
As far as Alice and Bob are concerned, the noise parameter $\epsilon$
characterizes the channel.
In the presence of noise, they observe errors in the trine channel:
sometimes they get the same measurement outcome for a particular
qubit pair, which does not happen in the noise-free case.
Rather than the noise-free joint probabilities of table~\ref{tab.trinescheme},
they now have the probabilities of table~\ref{tab.noisytrine}.
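The entries of table~\ref{tab.noisytrine} can be reproduced numerically from (\ref{eq.trinepovm})--(\ref{eq.jointprob}). In the NumPy sketch below (an illustrative check; the trine is realized as real qubit states $120^{\circ}$ apart in the $XZ$ plane) we evaluate $p_{jk}$ for the noisy source state $\rho_\epsilon$:

```python
import numpy as np

eps = 0.1                                        # sample noise parameter
# Trine in the XZ plane: Bloch vectors 120 degrees apart, real amplitudes
theta = 2 * np.pi * np.arange(3) / 3
trine = np.array([np.cos(theta / 2), np.sin(theta / 2)]).T   # rows |A>, |B>, |C>
Pi = [(2 / 3) * np.outer(t, t) for t in trine]               # trine POM outcomes

s = np.array([0, 1, -1, 0]) / np.sqrt(2)                     # two-qubit singlet
rho = (1 - eps) * np.outer(s, s) + eps / 4 * np.eye(4)       # noisy source state

p = np.array([[np.trace(np.kron(Pi[j], Pi[k]) @ rho)
               for k in range(3)] for j in range(3)])
# table 2: eps/9 on the diagonal, (3-eps)/18 off the diagonal
assert np.allclose(np.diag(p), eps / 9)
assert np.isclose(p[0, 1], (3 - eps) / 18)
assert np.isclose(p.sum(), 1)
```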
But since their measurements yield only these nine joint probabilities,
Alice and Bob cannot determine all fifteen parameters that specify the
two-qubit state distributed by the source.
In this respect, the trine schemes are markedly different from tomographic
protocols~\cite{b.tomocrypt}, such as the six-state
protocol~\cite{b.sixstates} or the Singapore protocol~\cite{b.SingProt}, in
which full tomography of the source state is central.
If Alice and Bob do not see the symmetric probability
table~\ref{tab.noisytrine}, they enforce the symmetry by twirling.
For this purpose, they carry out random bilateral rotations on the qubits that
leave the singlet component intact while removing any bias from the noise.
\section{Efficient generation of the raw dual key}
Once Alice and Bob finish collecting and measuring their qubits, they get a
paired record of measurement results.
Next, they communicate over an authenticated public channel to discuss the raw
data and distill a cryptographic key.
Here we describe a new key generation method that yields mutual information
between Alice and Bob closer to the Shannon limit for a trine-based channel
\cite{b.chua2006}.
We illustrate the procedure with the sample results shown in
table~\ref{tab.sample}.
\begin{table}
\caption{Example of measurement records for Alice and Bob.%
\label{tab.sample}}
\centerline{\rule{0pt}{28pt}
\begin{tabular}{lccccccr}
& 1 & 2 & 3 & 4 & 5 & 6 & 7 \\
Alice & A & C & C & B & B & A & C \\
Bob & B & A & A & C & A & C & B
\end{tabular}
}
\end{table}
To begin, Alice chooses two time slots in a specified order where her outcomes
are different.
Suppose she selects columns 2 and 5 in table~\ref{tab.sample}.
Alice's pair of letters in these positions is CB.
She tells Bob to look at his record at those two particular time slots and he
finds he has A in both.
He declares he has the same letter in both positions.
Alice quickly determines this letter to be A since it is the only result
consistent with the expected outcomes for a trine protocol.
Both record A for the key.
Because there are three possibilities in this scenario, we call this the trit
key.
Alice and Bob discard the used time slots.
There is another situation to consider.
Say for the next round, Alice chooses columns 1 and 4.
Bob finds BC for these time slots and
announces the following: Record $0$ for BC and $1$ for CB.
Since Alice has AB, she infers that Bob must have BC, and
both of them record $0$ for the key.
In this case, there are two possibilities, so we call it the bit key.
Note that the order of the time slots selected matters: they would both record
$1$ if Alice reversed the order.
The two situations---the \emph{trit case} and the \emph{bit case}---are
mutually exclusive events so
the bit and trit keys are independently built up from the raw data.
In the noiseless case, the trit case happens $\frac{1}{4}$ of the time
while the bit case happens in the remaining $\frac{3}{4}$.
It follows that the number of key bits, per qubit exchanged,
that Alice and Bob share in the key sequences thus generated is given by
\begin{equation}
\label{eq.mutinfotrine}
I(A:B) = \frac{1}{2} \left( \frac{1}{4} \log_{2} 3
+ \frac{3}{4} \log_{2} 2\right)= 0.573\,,
\end{equation}
which is 98\% of the Shannon limit in (\ref{eq.trineMI}).
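The sieving statistics are easy to check by simulation. The sketch below (an illustrative Monte Carlo run of the noiseless case; seed and sample size are arbitrary) draws pairs of time slots in which Alice's letters differ and applies Bob's announcement rule:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000                                  # sieving rounds; 0,1,2 <-> A,B,C
# Each round: two time slots in which Alice's outcomes differ
x = rng.integers(0, 3, n)
y = (x + rng.integers(1, 3, n)) % 3          # y != x by construction
# Noiseless trine channel: Bob never obtains Alice's letter,
# the other two letters occur with equal probability
bx = (x + rng.integers(1, 3, n)) % 3
by = (y + rng.integers(1, 3, n)) % 3

trit = bx == by                              # Bob announces "same letter twice"
# Alice can infer Bob's letter: it is the one differing from both x and y
assert np.all(bx[trit] != x[trit]) and np.all(bx[trit] != y[trit])

frac = trit.mean()                           # close to 1/4
key_bits_per_qubit = (frac * np.log2(3) + (1 - frac)) / 2
print(frac, key_bits_per_qubit)              # ~0.25, ~0.573
```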
Noise in the channel leads to errors in the shared keys, since the unexpected
result that Bob obtains the very letter Alice sent will sometimes occur.
The probability that the next letter pair contributes an entry to the trit key
is now
${p_\mathrm{trit}=\frac{1}{12}(3-\epsilon)(1+\epsilon)}$, and the probability of
contributing to the bit key is
${p_\mathrm{bit}=\frac{1}{12}[(3-\epsilon)^2+4\epsilon]}$.
In the trit case, the correctly matched pairs in the key (that is, both Alice
and Bob write down the same letter, whether A, B, or C) each have probability
$(3-\epsilon)/(9+9\epsilon)$;
the other six outcomes where they disagree have probability
$2\epsilon/(9+9\epsilon)$ each.
Likewise in the bit case, the two instances when Alice and Bob agree both have
probability $\frac{1}{2}(3-\epsilon)^{2}/[(3-\epsilon)^2+4\epsilon]$,
while for the other two where they disagree the probability is
$2\epsilon/[(3-\epsilon)^2+4\epsilon]$ each.
These probabilities yield
\begin{eqnarray}
\label{eq.noisymutinfotrine}
\nonumber I_{\mathrm{trit}}(A:B)
&=&
\frac{3-\epsilon}{3+3\epsilon} \log_{2} \frac{3-\epsilon}{1+\epsilon}
\\ && +
\frac{4\epsilon}{3+3\epsilon} \log_{2} \frac{2\epsilon}{1+\epsilon}\,,
\nonumber\\
\nonumber I_{\mathrm{bit}}(A:B) &=&
\frac{(3-\epsilon)^{2}}{(3-\epsilon)^{2}+4\epsilon}
\log_{2} \frac{2(3-\epsilon)^{2}}{(3-\epsilon)^{2}+4\epsilon}
\\ & & +
\frac{4\epsilon}{(3-\epsilon)^{2}+4\epsilon}
\log_{2} \frac{8\epsilon}{(3-\epsilon)^{2}+4\epsilon}\qquad
\end{eqnarray}
for the resulting mutual information between Alice and Bob for the two key
sequences.
For $\epsilon=0.1$, their weighted sum
${\frac{1}{2}(p_\mathrm{trit}I_{\mathrm{trit}}%
+p_\mathrm{bit}I_{\mathrm{bit}})}$
equals 96.4\% of the Shannon limit, the
mutual information of the joint probabilities in table~\ref{tab.noisytrine}.
As functions of $\epsilon$, $I_{\mathrm{bit}}(A:B)$ and
$I_{\mathrm{trit}}(A:B)$ are the monotonically decreasing curves in
figs.~\ref{fig.twobit} and \ref{fig.twotrit} below, respectively.
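Both the mutual informations above and the quoted 96.4\% figure can be checked numerically. The sketch below (illustrative only) evaluates the weighted key rate and the Shannon limit of the joint probabilities in table~\ref{tab.noisytrine} at $\epsilon=0.1$:

```python
import numpy as np

def shannon_limit(eps):
    # Mutual information of the noisy-trine joint probabilities (table 2);
    # both marginals are uniform, so I = sum p log2(9 p)
    p = np.full((3, 3), (3 - eps) / 18)
    np.fill_diagonal(p, eps / 9)
    return np.sum(p * np.log2(9 * p))

def key_rate(eps):
    # Weighted key rate (1/2)(p_trit I_trit + p_bit I_bit) from the text
    d = (3 - eps) ** 2 + 4 * eps
    I_trit = ((3 - eps) / (3 + 3 * eps) * np.log2((3 - eps) / (1 + eps))
              + 4 * eps / (3 + 3 * eps) * np.log2(2 * eps / (1 + eps)))
    I_bit = ((3 - eps) ** 2 / d * np.log2(2 * (3 - eps) ** 2 / d)
             + 4 * eps / d * np.log2(8 * eps / d))
    p_trit = (3 - eps) * (1 + eps) / 12
    p_bit = d / 12
    return 0.5 * (p_trit * I_trit + p_bit * I_bit)

print(key_rate(0.1) / shannon_limit(0.1))    # -> 0.964...
```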
\section{Security analysis}
Eve is given full control of the source and is allowed to keep a quantum
record, encoded in ancilla states, of what is sent.
We write the source state in the form
\begin{equation}
\label{eq.sourceanc}
\ket{S}=\ket{++\;E_1}+\ket{+-\;E_2}+\ket{-+\;E_3}+\ket{--\;E_4}\,,
\end{equation}
where, for example, $\ket{+-\;E_2}$ is the `$+$' state of (\ref{eq.sigABC})
for Alice's signal qubit, the `$-$' state for Bob's, and Eve's ancilla in
state $\ket{E_2}$.
When Alice's POM gives the $j$th outcome, and Bob's the $k$th, the reduced
ancilla state is described by $\ket{E_{jk}}$ where $j$ and $k$ independently
take on values of $A$, $B$, or $C$.
After accounting for the coefficients in (\ref{eq.sigABC}) and
(\ref{eq.trinepovm}), we have
\begin{equation}
\label{eq.anc-jk}
\ket{E_{jk}}=\bigl(\ket{E_1},\ket{E_2},\ket{E_3},\ket{E_4}\bigr)
\,\frac{1}{3}{\left(\begin{array}{c}
\omega^{-j-k}\\ \omega^{-j+k} \\ \omega^{j-k} \\ \omega^{j+k}
\end{array}\right)}
\end{equation}
with $ABC\widehat{=}012$ for the $j$ and $k$ values in the exponents.
The joint probabilities of table~\ref{tab.noisytrine} impose the constraints
\begin{equation}
\label{eq.Eab-constr}
p_{jk}=\bracket{E_{jk}}{E_{jk}}=\frac{\epsilon}{9}\delta_{jk}
+\frac{3-\epsilon}{18}(1-\delta_{jk})\,,
\end{equation}
which in turn imply
\begin{eqnarray}
\label{eq.Eiproplist}
&&
\bracket{E_1}{E_1}+\bracket{E_2}{E_2}+\bracket{E_3}{E_3}+\bracket{E_4}{E_4}=1\,,
\nonumber\\
&&\bracket{E_{1}}{E_{2}} + \bracket{E_{3}}{E_{4}} = 0
=\bracket{E_{1}}{E_{3}} + \bracket{E_{2}}{E_{4}}\,, \nonumber\\
&&\bracket{E_{1}}{E_{4}} = 0\,,\qquad
\bracket{E_{2}}{E_{3}} =-(1-\epsilon)/2\,.
\end{eqnarray}
These determine nine of the 16 real parameters that specify the positive
${4\times4}$ matrix of the $\bracket{E_j}{E_k}$ amplitudes.
A convenient choice of the remaining seven real parameters is given by
representing the kets $\ket{E_1}$, \dots, $\ket{E_4}$ by the columns of a
matrix of the form~\cite{b.tabia2009}
\begin{equation}
\label{eq.4x4-V}
V = \left(
\begin{array}{c@{\quad}c@{\quad}c@{\quad}c}
a^{\ }_1 & \lambda a^{\ }_2 & -\mu a^{\ }_2 & 0 \\
0 & r^{\ }_1\cos\theta & -r^{\ }_2\mathrm{e}^{\mathrm{i}\phi}\sin\theta & 0 \\
0 & r^{\ }_1\mathrm{e}^{-\mathrm{i}\phi}\sin\theta & -r^{\ }_2\cos\theta & 0 \\
0 & \mu^{*} a^{\ }_1 & -\lambda^{*} a^{\ }_1 & a^{\ }_2
\end{array}
\right),
\end{equation}
where $a^{\ }_1,a^{\ }_2,r^{\ }_1,r^{\ }_2,\phi,\theta$ are real and
$\lambda,\mu$ are complex, and their values are subject to
\begin{eqnarray}
\label{eq.v-constr}
\bigl(1+|\lambda|^2+|\mu|^2\bigr)\bigl(a_1^2+a_2^2\bigr)
+r_1^2+r_2^2&=&1\,,\nonumber\\
\lambda^*\mu\bigl(a_1^2+a_2^2\bigr)
+r^{\ }_1r^{\ }_2\mathrm{e}^{\mathrm{i}\phi}\sin(2\theta)
&=&\frac{1-\epsilon}{2}\,.
\end{eqnarray}
As demonstrated by
\begin{eqnarray}
\label{eq.rho-eps}
&&a^{\ }_1=a^{\ }_2=\frac{1}{2}\sqrt{\epsilon}\,,\quad
r^{\ }_1=r^{\ }_2=\frac{1}{2}\sqrt{2-\epsilon}\,,\quad
\lambda=\mu=0\,,\nonumber\\
&&\phi=0\,,\quad\sin(2\theta)=\frac{2-2\epsilon}{2-\epsilon}\,,
\end{eqnarray}
for which Alice and Bob's reduced two-qubit state is $\rho_{\epsilon}$ of
(\ref{eq.unbiasednoise}), there surely are permissible values, but it is
not obvious which set of parameters is optimal for Eve.
With the aid of (\ref{eq.anc-jk}), each permissible $V$ matrix gives us valid
column representations for the $\ket{E_{jk}}$s.
In fact, Eve is not interested in distinguishing the $\ket{E_{jk}}$
states themselves but rather the two-ancilla states that are associated with
symbols in the key sequences, whereby the bit and trit cases need to be
considered separately.
In the \emph{bit case}, the two-ancilla state conditioned on Alice concluding
that Bob has the letter sequence `$jk$' is given by
\begin{eqnarray}
\rho _{jk}^{(A)} &\propto& \proj{E_{kj}E_{lk}} + \proj{E_{lj}E_{jk}}
\nonumber\\&& + \proj{E_{kj}E_{jk}} + \proj{E_{kk}E_{lj}}
\nonumber\\ && + \proj{E_{lk}E_{jj}} + \proj{E_{kk}E_{jj}}\,,
\end{eqnarray}
where $jkl$ can be any permutation of $ABC$.
The first three terms account for the cases in which Alice and Bob record the
same bit value, and the bit errors are covered by the last three terms.
For example, the first term is for the situation when Alice has `$kl$' and Bob
has `$jk$' while both have `$kj$' for the last term.
Eve has to tell $\rho _{jk}^{(A)}$ and $\rho _{kj}^{(A)}$ apart when Bob
announces that his letters are `$j$' and `$k$'.
Analogously, in the \emph{trit case}, we have the conditioned two-ancilla state
\begin{eqnarray}
\rho_{j}^{(A)} &\propto&\proj{E_{kj}E_{lj}} + \proj{E_{lj}E_{kj}}
\nonumber\\ &&+ \proj{E_{kk}E_{lk}} + \proj{E_{lk}E_{kk}}
\nonumber\\ &&+ \proj{E_{kl}E_{ll}} + \proj{E_{ll}E_{kl}}
\end{eqnarray}
when Alice concludes that Bob has letter `$j$' twice,
with the first two terms accounting for a correct assignment and the remaining
four terms for errors.
Here, too, $jkl$ is a permutation of $ABC$, and Eve needs to distinguish the
three states $\rho_A^{(A)}$, $\rho_B^{(A)}$, and $\rho_C^{(A)}$.
The six $\rho_{jk}^{(A)}$s and three $\rho_{j}^{(A)}$s account for 54 of the
81 two-ancilla kets $\ket{E_{jk}E_{j'k'}}$.
This is as it should be because the remaining 27 kets are those for which
Alice has the same letter twice, and this situation does not occur.
If Eve eavesdrops on Bob, the conditioned two-ancilla states are different.
In the bit case we have
\begin{eqnarray}
\rho _{jk}^{(B)} &\propto& \proj{E_{kj}E_{lk}} + \proj{E_{lj}E_{jk}}
\nonumber\\ &&+ \proj{E_{kj}E_{jk}} + \proj{E_{lj}E_{kk}}
\nonumber\\ &&+ \proj{E_{jj}E_{lk}} + \proj{E_{jj}E_{kk}}\,,
\end{eqnarray}
and the states
\begin{eqnarray}
\rho _{j}^{(B)} &\propto& \proj{E_{kj}E_{lj}} + \proj{E_{lj}E_{kj}}
\nonumber\\ &&+ \proj{E_{jj}E_{lj}} + \proj{E_{lj}E_{jj}}
\nonumber\\ &&+ \proj{E_{jj}E_{kj}} + \proj{E_{kj}E_{jj}}
\end{eqnarray}
apply in the trit case.
They differ from their respective counterparts by the error terms.
Therefore, we explore both sets of ancilla states to see whether Eve gains any
advantage by eavesdropping on either Alice or Bob, or if it does not make any
difference to the optimal amount of information she can obtain.
With the assignment of signal-qubit Pauli operators $X$, $Y$, $Z$ discussed
above in the context of (\ref{eq.sigABC}), the two-qubit state $\rho_{AB}^{\ }$
that the source distributes to Alice and Bob is specified by the eight fixed
expectation values
\begin{eqnarray}
\label{eq.ABexpect1}
&&\expect{X_A}=\expect{Z_A}=\expect{X_B}=\expect{Z_B}=0\,,\nonumber\\
&&\expect{X_AX_B}=\expect{Z_AZ_B}=-(1-\epsilon)\,,\nonumber\\
&&\expect{X_AZ_B}=\expect{Z_AX_B}=0
\end{eqnarray}
together with the seven adjustable expectation values
\begin{eqnarray}
\label{eq.ABexpect2}
&&\frac{1}{2}\bigl(\expect{Y_A}\pm\expect{Y_B}\bigr)=\left\{
\begin{array}{l}
a_1^2-a_2^2\,,\\[1ex]
r_1^2-r_2^2-(|\lambda|^2-|\mu|^2)(a_1^2-a_2^2)\,,
\end{array}\right.\nonumber\\
&&\expect{Y_AZ_B}+\mathrm{i}\expect{Y_AX_B}=4\lambda a_1a_2\,,\nonumber\\
&&\expect{Z_AY_B}+\mathrm{i}\expect{X_AY_B}=-4\mu a_1a_2\,,\nonumber\\
&&\expect{Y_AY_B}=2(a_1^2+a_2^2)-1\,,
\end{eqnarray}
which reveal the physical significance of the seven free parameters in
(\ref{eq.4x4-V}).
Alice and Bob cannot distinguish between $\rho_{AB}^{\ }$,
$X_AX_B\rho_{AB}^{\ }X_AX_B$, $Y_AY_B\rho_{AB}^{\ }Y_AY_B$, and
$Z_AZ_B\rho_{AB}^{\ }Z_AZ_B$, and Eve gets the same amount of information
from the corresponding four sets of conditioned ancilla states.
It follows that Eve can just as well choose the parameters in (\ref{eq.4x4-V})
such that
$\rho_{AB}^{\ }={X_AX_B\rho_{AB}^{\ }X_AX_B}={Z_AZ_B\rho_{AB}^{\ }Z_AZ_B}$.
Then, the six expectation values in (\ref{eq.ABexpect2}) that involve a
single $Y$ vanish, which happens for
\begin{equation}\label{eq.noYs}
a^{\ }_{1}= a^{\ }_{2}\,, \quad r^{\ }_{1} = r^{\ }_{2}\,,\quad
\lambda =\mu =0\,.
\end{equation}
Indeed, it is plausible, and supported by much numerical evidence, that a
parameter choice that yields such a particularly noisy $\rho_{AB}^{\ }$ is
advantageous for Eve because then the entanglement between her ancilla and the
qubits for Alice and Bob is particularly strong.
With (\ref{eq.noYs}), matrix $V$ takes on the simple one-parameter form
\begin{equation}
\label{eq.c-V}
V = \frac{1}{2}\left(
\begin{array}{crrc}
\sqrt{c} & 0 & 0 & 0 \\
0 & \phantom{-}x & -y & 0 \\
0 & y & -x & 0 \\
0 & 0 & 0 & \sqrt{c}
\end{array}
\right)
\end{equation}
with ${0\leq c\leq 2\epsilon}$ and ${x\pm y =\sqrt{2-c\pm2(1-\epsilon)}}$.
We return to (\ref{eq.rho-eps}) for ${c=\epsilon}$, while ${c=2\epsilon}$ and
${c=2\epsilon-\epsilon^2}$ give the $\rho_{AB}^{\ }$s with minimal concurrence
and maximal entropy, respectively; the $\rho_{AB}^{\ }$s for
${2\epsilon+c\geq2}$ are separable \cite{b.ase2006}.
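As a consistency check, the Gram matrix of the columns of this $V$ can be tested against the constraints (\ref{eq.Eiproplist}). The sketch below (illustrative; the values of $\epsilon$ and $c$ are arbitrary within the allowed range ${0\leq c\leq 2\epsilon}$) does so numerically:

```python
import numpy as np

def V_matrix(eps, c):
    # Columns are |E_1>, ..., |E_4>, with x +- y = sqrt(2 - c +- 2(1 - eps))
    xpy = np.sqrt(2 - c + 2 * (1 - eps))
    xmy = np.sqrt(2 - c - 2 * (1 - eps))        # real only for c <= 2 eps
    x, y = (xpy + xmy) / 2, (xpy - xmy) / 2
    rc = np.sqrt(c)
    return 0.5 * np.array([[rc, 0,  0,  0],
                           [0,  x, -y,  0],
                           [0,  y, -x,  0],
                           [0,  0,  0, rc]])

eps, c = 0.1, 0.05
G = V_matrix(eps, c).T @ V_matrix(eps, c)       # Gram matrix <E_j|E_k>
assert abs(np.trace(G) - 1) < 1e-12             # normalization
assert abs(G[1, 2] + (1 - eps) / 2) < 1e-12     # <E_2|E_3> = -(1 - eps)/2
assert abs(G[0, 3]) < 1e-12                     # <E_1|E_4> = 0
```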
The following observation lends additional support to (\ref{eq.noYs}) and
(\ref{eq.c-V}):
The resulting conditioned ancilla states are such that it does not matter
which letter pairs `$jk$' and `$kj$' are to be distinguished in the bit case,
or which letter `$j$' is the actual one in the trit case.
Eve does not acquire better knowledge about a subset of key entries at the
price of knowing less about other subsets.
By contrast, such an asymmetry in her knowledge is typically the case if some
of the single-$Y$ expectation values in (\ref{eq.ABexpect2}) are nonzero.
Accepting thus the hypothesis that it suffices to consider
matrices $V$ of the single-parameter form (\ref{eq.c-V}), we take the
resulting two-ancilla kets $\ket{E_{jk}}$ and calculate the
Holevo-Schumacher-Westmoreland (HSW) bounds \cite{b.hol1973,b.schumwest1997}
on ${I(A:E)}$ and ${I(B:E)}$ as a function of $c$.
After optimizing the value of $c$ for the given value of the noise parameter
$\epsilon$, we obtain the monotonically increasing
curves in figs.~\ref{fig.twobit} and
\ref{fig.twotrit} for the bit key and the trit key, respectively.
\begin{figure}
\centerline{\includegraphics{RFFtrineQKD-Fig2.eps}}
\caption{Optimizing the one-parameter source state:
Wiretapper bound for Eve eavesdropping on the bit key.
For Alice, the noise threshold is $\epsilon = 0.197$, where
$I_\mathrm{bit} = 0.560$.
The corresponding numbers for Bob are $\epsilon = 0.170$,
$I_\mathrm{bit} = 0.603$.}
\label{fig.twobit}
\end{figure}
\begin{figure}
\centerline{\includegraphics{RFFtrineQKD-Fig3.eps}}
\caption{Optimizing the one-parameter source state:
Wiretapper bound for Eve eavesdropping on the trit key.
For Alice, the noise threshold is $\epsilon = 0.193$, where
$I_\mathrm{trit} = 0.618$.
The corresponding numbers for Bob are $\epsilon =0.150$,
$I_\mathrm{trit} = 0.744$.}
\label{fig.twotrit}
\end{figure}
The $\epsilon$ values for which these curves intersect the curves representing
the corresponding $I(A:B)$ of (\ref{eq.noisymutinfotrine}) determine the noise
thresholds below which Alice and Bob can generate a secret key from the raw
key by the usual procedures of error correction and privacy amplification
\cite{b.renner2008}.
Both in the bit case and in the trit case, the thresholds are higher when Eve
is eavesdropping on Alice than on Bob.
We could not find lower thresholds with any parameter values not restricted by
the symmetry requirements (\ref{eq.noYs}).
\section{Summary and discussion}
We described a basis-independent trine protocol for QKD that uses RFF signal
qubits encoded in mixed states of three physical qubits.
The protocol exploits a novel efficient key generation scheme that yields a
dual alphabet key.
We analyzed the security with a plausible symmetry assumption that simplifies
the task to the optimization of a single parameter.
As a consequence of the asymmetric roles played by Alice and Bob during the
key generation, there are different noise thresholds for
eavesdropping on Alice and Bob.
The raw keys need to be processed before Alice and Bob share a secret key.
For the error correction and the privacy amplification one of the raw keys
serves as the error-free reference, and we choose Alice's key for this purpose
because then the higher thresholds apply.
We conclude that a secret key can be generated for ${\epsilon<0.197}$, and one
should stay well below this threshold to have a good key bit rate.
Regarding practical implementations of the scheme, we note that the production
of entangled states is no routine matter, with the difficulty
increasing rapidly with size.
A practical system with a common source for Alice and Bob requires six
entangled physical qubits for each transmission.
It is easier to use the variant where Alice prepares the states and
sends them to Bob as this requires only three qubits.
As with other QKD protocols, photon polarization is the most likely
candidate for the physical qubits.
Alice could prepare three-photon trine states by first preparing two of the
three photons in a Bell state and the third photon with random
polarization; it is possible to achieve this by beginning with an entangled
four-photon state and measuring the polarization of the fourth photon with a
suitable POM.
Bob's POM would then test if one of the three orthogonal Bell states is
present for every trio of photons received from Alice.
Given the limited efficiency of typical photodetectors, efficient detection of
all three photons is a challenge, though.
\acknowledgments
We are grateful for useful discussions with Jun Suzuki, Syed~M. Assad,
and Valerio Scarani.
BGE thanks Hans Briegel for the kind hospitality in Innsbruck where part of
this work was done.
Centre for Quantum Technologies is a Research Centre for Excellence funded by
the Ministry of Education and National Research Foundation of Singapore.
\section{Introduction}
On one side, the presence of higher-order curvature terms in the Lagrangian was inspired by string theories. In 1974 Scherk and Schwarz~\cite{sch-sch} showed that the Lagrangian of the Virasoro-Shapiro model~\cite{VSh1, VSh2} contains $R^2$ and $R_{\alpha\beta}R^{\alpha\beta}$ terms. Later the presence of a curvature-squared term of the form $R^{\alpha\beta\gamma\delta}R_{\alpha\beta\gamma\delta}$ was demonstrated~\cite{Candelas_etal} in the low-energy limit of the heterotic superstring theory~\cite{Gross_etal}. Next, Zwiebach~\cite{Zwie} proved that the only combination of quadratic terms that gives ghost-free solutions is the Gauss-Bonnet term $R^{\alpha\beta\gamma\delta}R_{\alpha\beta\gamma\delta}-4R_{\alpha\beta}R^{\alpha\beta}+R^2$. Zumino~\cite{zumino} extended Zwiebach's result to higher-order curvature terms and put forward the idea that the low-energy-limit Lagrangian of the unified theory should include a sum of different powers of curvature.
On the other hand, in 1970 Lovelock~\cite{Lovelock} discovered that, as the number of space-time dimensions increases, in every new odd dimension it is possible to add a specific higher curvature power term to the classic Einstein-Hilbert Lagrangian such that the correction to the equations of motion contains only second-order derivatives of the metric coefficients. This makes Lovelock's model a natural generalization of General Relativity (GR) to higher dimensions.
The simplest example of the large family of gravity theories known as Lovelock gravities is Einstein-Gauss-Bonnet (EGB) gravity. Recent studies show that the dynamics of an anisotropic Universe in Einstein-Gauss-Bonnet gravity can be much richer than is possible in GR. In particular, the presence of a cosmological constant $\Lambda$ does not ultimately lead to an isotropic multidimensional de Sitter solution. Another non-singular possibility is an anisotropic solution where the Hubble parameters $H_i$ are constant but can differ for different $i$. On the other hand, the $H_i$ cannot all be different: it was shown that such solutions exist only if the space has isotropic subspaces (three at most), so that the set of $H_i$ is divided into two or three groups with equal $H_i$ within a group~\cite{ChPavTop-ModPhysLet-2014,I,ChPavTop-GenRelGrav-2015}. It is noteworthy that this splitting is not imposed by the hand of an investigator, but appears in a natural way from the equations of motion as a condition for such solutions to exist.
Explicit forms of the solutions in question show that the most typical situation (apart from the de Sitter solution) is a splitting of space into two isotropic subspaces with different Hubble parameters, which we denote here as $H$ and $h$. Stability analysis indicates that for such a solution to be stable the overall volume of space should increase~\cite{Iva-EurPhys-2016}. This leads to the condition $dh+(n-d)H>0$ for an $n$-dimensional space split into $d$- and $(n-d)$-dimensional isotropic subspaces. Apart from this condition there is another one, which is shown to be violated if $d=1$~\cite{ChTop-GravCosmol-2017}; other possibilities are allowed except for some special combinations of coupling constants. In~\cite{split} it was shown numerically that such a decomposition is indeed a typical outcome of an anisotropic cosmological evolution if a non-zero cosmological constant is present in the action, with other possibilities being a non-standard singularity or an oscillatory regime.
The outlined results thus justify considering a metric split into a product of two isotropic subspaces. The most interesting case is $H>0$ in the 3-dimensional subspace and $h<0$ in the $(n-3)$-dimensional subspace, since such a combination would be useful for a compactification scenario describing "our" big 3-dimensional world and an $(n-3)$-dimensional "inner" subspace. What is still needed, however, is a stabilization of the "inner" dimensions. In a set of papers~\cite{CanGiaPav-PhysRev-2013,CanGiaPav-GenRelGrav-2014,CanGiaPav-GravCosmol-2018} it was shown that such a stabilization can be achieved by introducing a negative spatial curvature for the "inner" space. As for the large subspace, a possible spatial curvature does not change much, since this subspace is expanding and the dynamical role of the spatial curvature decreases. So an initially anisotropic Universe naturally evolves into a product of a large isotropic "our" sub-Universe and a stabilized isotropic "inner" space -- in what follows we refer to such solutions as compactification solutions. Since $H>0$ for the large subspace, an inside observer would detect the presence of an effective cosmological constant $\Lambda_{eff}$ in "our" world. By choosing coupling constants of the theory satisfying an additional relation, it is possible to set this $\Lambda_{eff}$ to an arbitrarily small value and obtain a standard Friedmann equation for the evolution of the large subspace~\cite{CanGiaTroWil-PhysRev-2009}.
Stability of compactified solutions has been considered in~\cite{Pav-PhysRev-2015,Iva-EurPhys-2016,Iva-GravCosmol-2016,ErnIva-EurPhys-2017,ChTop-GravCosmol-2017} for both negative and positive curvature of the "inner" space. It was found that for negative curvature the solution with constant volume of the inner space is stable whenever it exists; for positive curvature, however, stability requires rather tough restrictions on the possible coupling constants, and these restrictions become more severe as the number of extra dimensions grows. What is even worse, the case of zero effective cosmological constant in the large dimensions is always unstable~\cite{ChPav-MosPhysLet-2021}. In Lovelock gravity, starting from $n=6$ the third-order term is added to the Gauss-Bonnet term. In~\cite{ChGiaTop-GenRelGrav-2018} the stabilization of extra dimensions was checked numerically for negative spatial curvature in the presence of the third-order Lovelock term. The goal of the present paper is to study the stability of stabilization solutions with the cubic term for both possible signs of the spatial curvature of the extra dimensions.
The structure of the paper is as follows: section~\ref{EoM} presents the action, the Lagrangian and the equations of motion; section~\ref{num-analysis} is devoted to the numerical analysis of stability; conclusions are given in the last section.
\section{Action and equations of motion\label{EoM}}
We consider a $(D+4)$-dimensional spacetime $\mathcal{M}=\m_4\times \m_D$, where $\m_4$ is a flat Friedmann-Robertson-Walker manifold with scale factor $a(t)$ and $\m_D$ is a $D$-dimensional Euclidean compact constant-curvature manifold with scale factor $b(t)$ and curvature $\gamma_{D}$.
The Lovelock action under consideration reads
\eq{S=\int_{\mathcal{M}}d^{D+4}x\sqrt{|g|}\left\{R+\alpha L_{(2)}+\beta L_{(3)}-2\Lambda\right\},\label{action}}
where $|g|$ is the determinant of the metric tensor; $\Lambda$ is the cosmological term; $\alpha$ and $\beta$ are the coupling constants; $L_{(2)}$ is the quadratic Lovelock term\footnote{Hereafter Greek indices run from 0 to D, while Latin ones run from
1 to D unless otherwise stated}:
\eq{L_{(2)}=R^2-4R_{\beta}^{\phantom{\beta}\alpha}R_{\alpha}^{\phantom{\alpha}\beta}
+R_{\gamma\delta}^{\phantom{\gamma\delta}\alpha\beta}R_{\alpha\beta}^{\phantom{\alpha\beta}\gamma\delta}}
and $L_{(3)}$ is the cubic Lovelock term:
\eq{\begin{split}
L_{(3)}=&-R^3+12RR_{\beta}^{\phantom{\beta}\alpha}R_{\alpha}^{\phantom{\alpha}\beta}
-3RR_{\gamma\delta}^{\phantom{\gamma\delta}\alpha\beta}R_{\alpha\beta}^{\phantom{\alpha\beta}\gamma\delta}
-16R_{\beta}^{\phantom{\beta}\alpha}R_{\gamma}^{\phantom{\gamma}\beta}R_{\alpha}^{\phantom{\alpha}\gamma}
+24R_{\gamma}^{\phantom{\gamma}\alpha}R_{\delta}^{\phantom{\delta}\beta}R_{\alpha\beta}^{\phantom{\alpha\beta}\gamma\delta}+\\
&+24R_{\beta}^{\phantom{\beta}\alpha}R_{\delta\varepsilon}^{\phantom{\delta\varepsilon}\beta\gamma}R_{\alpha\gamma}^{\phantom{\alpha\gamma}\delta\varepsilon}
+2R_{\gamma\delta}^{\phantom{\gamma\delta}\alpha\beta}R_{\varepsilon\zeta}^{\phantom{\varepsilon\zeta}\gamma\delta}
R_{\alpha\beta}^{\phantom{\alpha\beta}\varepsilon\zeta}
-8R_{\gamma\varepsilon}^{\phantom{\gamma\varepsilon}\alpha\beta}R_{\alpha\zeta}^{\phantom{\alpha\zeta}\gamma\delta}
R_{\beta\delta}^{\phantom{\beta\delta}\varepsilon\zeta}
\end{split}}
$R,R_{\beta}^{\phantom{\beta}\alpha},R_{\alpha\beta}^{\phantom{\alpha\beta}\gamma\delta}$ are the $(D+4)$-dimensional scalar curvature, Ricci tensor and Riemann tensor, respectively.
We choose the ansatz for the metric as follows
\eq{ds^2=-dt^2+a(t)^2d\Sigma^2_{3}+b(t)^2d\Sigma^2_{D}\label{metric}}
where $d\Sigma^2_{3}$ stands for the metric of the 3-dimensional plane and $d\Sigma^2_{D}$ stands for the metric of the $D$-dimensional manifold of constant curvature.
Substituting the metric into the action~(\ref{action}), varying it with respect to $a(t)$ and $b(t)$ and introducing the Hubble parameter $H\equiv\frac{\dot{a}}{a}$, we obtain the constraint~(\ref{constraint}) and the equations of motion~(\ref{Eq1})-(\ref{Eq2}):
\eq{\begin{split}
&\frac{3}{D+1}\left(\frac{H b'(D+1)!}{b (D-1)!}+\frac{H^2(D+1)!}{D!}+\frac{(\gamma_{D}+b'^2)(D+1)!}{6b^2(D-2)!}\right)+ \\
& +3D\alpha\Biggl(\frac{(\gamma_{D}+b'^2)^2(D-1)!}{6b^4(D-4)!}+\frac{2H^2(\gamma_{D}+b'^2)(D-1)!}{b^2(D-2)!}+
\frac{4H^3b'}{b}+ \\
& \hspace{4.8cm}+\frac{4H^2b'^2(D-1)!}{b^2(D-2)!}+\frac{2Hb'(\gamma_{D}+b'^2)(D-1)!}{b^3(D-3)!}\Biggr)+ \\
& +3D(D-1)(D-2)\beta\Biggl(\frac{(\gamma_{D}+b'^2)^3(D-3)!}{6b^6(D-6)!}+\frac{3H^2(\gamma_{D}+b'^2)^2(D-3)!}{b^4(D-4)!}+
\frac{3Hb'(\gamma_{D}+b'^2)^2(D-3)!}{b^5(D-5)!}+ \\
& \hspace{4cm}+\frac{8H^3b'^3}{b^3}+\frac{12H^2b'^2(\gamma_{D}+b'^2)(D-3)!}{b^4(D-4)!}+\frac{12H^3b'(\gamma_{D}+b'^2)}{b^3}\Biggr)=\Lambda
\end{split}
\label{constraint}}
\eq{\begin{split}
&\frac{1}{D+1}\left(\frac{2H b'(D+1)!}{b (D-1)!}+\frac{H^2(D+1)!}{D!}+\frac{(\gamma_{D}+b'^2)(D+1)!}{2b^2(D-2)!}+\frac{b''(D+1)!}{b(D-1)!}+\frac{2(H'+H^2)(D+1)!}{D!}\right)+ \\
&+D\alpha\Biggl(\frac{(\gamma_{D}+b'^2)^2(D-1)!}{2b^4(D-4)!}+
\frac{8b''b'H(D-1)!}{b^2(D-2)!}+\frac{4(\gamma_{D}+b'^2)(H'+H^2)(D-1)!}{b^2(D-2)!}+\\
&\hspace{1.3cm} +\frac{4H^2b''}{b}+\frac{4Hb'(\gamma_{D}+b'^2)(D-1)!}{b^3(D-3)!}+\frac{4H^2b'^2(D-1)!}{b^2(D-2)!}+\\
&\hspace{1.3cm} +\frac{2H^2(\gamma_{D}+b'^2)(D-1)!}{b^2(D-2)!}+\frac{8(H'+H^2)Hb'}{b}+\frac{2b''(\gamma_{D}+b'^2)(D-1)!}{b^3(D-3)!}\Biggr)+ \\
& +D(D-1)(D-2)\beta\Biggl(\frac{(\gamma_{D}+b'^2)^3(D-3)!}{2b^6(D-6)!}+\frac{3H^2(\gamma_{D}+b'^2)^2(D-3)!}{b^4(D-4)!}+
\frac{6Hb'(\gamma_{D}+b'^2)^2(D-3)!}{b^5(D-5)!}+ \\
&\hspace{1.1cm}+\frac{24b''H^2b'^2}{b^3}+\frac{12H^2b'^2(\gamma_{D}+b'^2)(D-3)!}{b^4(D-4)!}+\frac{24(H'+H^2)Hb'(\gamma_{D}+b'^2)}{b^3}+\frac{12b''H^2(\gamma_{D}+b'^2)}{b^3}+ \\ &\hspace{1.1cm}+\frac{3b''(\gamma_{D}+b'^2)^2(D-3)!}{b^5(D-5)!}+\frac{6(H'+H^2)(\gamma_{D}+b'^2)^2(D-3)!}{b^4(D-4)!}+
\frac{24b''b'H(\gamma_{D}+b'^2)(D-3)!}{b^4(D-4)!}\Biggr)=\Lambda
\end{split}
\label{Eq1}}
\eq{\begin{split}
&\frac{3}{D+1}\left(\frac{H b'(D+1)!}{b (D-2)!}+\frac{H^2(D+1)!}{(D-1)!}+\frac{(\gamma_{D}+b'^2)(D+1)!}{6b^2(D-3)!}+\frac{b''(D+1)!}{3b(D-2)!}+\frac{(H'+H^2)(D+1)!}{(D-1)!}\right)+ \\
&+3D\alpha\Biggl(\frac{(\gamma_{D}+b'^2)^2(D-1)!}{6b^4(D-5)!}+\frac{2H^2(\gamma_{D}+b'^2)(D-1)!}{b^2(D-3)!}+
\frac{4b''b'H(D-1)!}{b^2(D-3)!}+\frac{4H^3b'(D-1)!}{b(D-2)!}+\\
& \hspace{1.1cm}\frac{4H^2b''(D-1)!}{b(D-2)!}+\frac{2Hb'(\gamma_{D}+b'^2)(D-1)!}{b^3(D-4)!}+\frac{4H^2b'^2(D-1)!}{b^2(D-3)!}+
\frac{8(H'+H^2)Hb'(D-1)!}{b(D-2)!}+ \\
&\hspace{1.1cm}\frac{2b''(\gamma_{D}+b'^2)(D-1)!}{3b^3(D-4)!}+\frac{2(H'+H^2)(\gamma_{D}+b'^2)(D-1)!}{b^2(D-3)!}+4H^2(H'+H^2)\Biggr)\\
& +3D(D-1)(D-2)\beta\Biggl(\frac{(\gamma_{D}+b'^2)^3(D-3)!}{6b^6(D-7)!}+\frac{3H^2(\gamma_{D}+b'^2)^2(D-3)!}{b^4(D-5)!}+
\frac{3Hb'(\gamma_{D}+b'^2)^2(D-3)!}{b^5(D-6)!}+ \\
&\hspace{1.1cm}\frac{12H^2b'^2(\gamma_{D}+b'^2)(D-3)!}{b^4(D-5)!}+\frac{24(H'+H^2)Hb'(\gamma_{D}+b'^2)(D-3)!}{b^3(D-4)!}+
\frac{24b''H^2b'^2(D-3)!}{b^3(D-4)!}+ \\
& \hspace{1.1cm}\frac{12b''H^2(\gamma_{D}+b'^2)(D-3)!}{b^3(D-4)!}+\frac{12H^2(H'+H^2)(\gamma_{D}+b'^2)}{b^2}+\frac{24H^2b'^2(H'+H^2)}{b^2}+\\
&\hspace{1.1cm}\frac{12H^3b'(\gamma_{D}+b'^2)(D-3)!}{b^3(D-4)!}+\frac{8H^3b'^3(D-3)!}{b^3(D-4)!}+\frac{24b''b'H^3}{b^2}\\
& \hspace{1.1cm}\frac{b''(\gamma_{D}+b'^2)^2(D-3)!}{b^5(D-6)!}+\frac{3(H'+H^2)(\gamma_{D}+b'^2)^2(D-3)!}{b^4(D-5)!}+
\frac{12b''b'H(\gamma_{D}+b'^2)(D-3)!}{b^4(D-5)!}\Biggr)=D\Lambda
\end{split}
\label{Eq2}}
\section{Stability analysis for the case $D=7$\label{num-analysis}}
Henceforward we assume that $D=7$. Substituting $D=7$ into the equations (\ref{Eq1})-(\ref{Eq2}) and solving them with respect to the highest derivatives, we obtain an autonomous system of ordinary differential equations
\eq{
\left\{
\begin{array}{l}
\dot{b}=u \\
\dot{u}=F_1(b,u,H) \\
\dot{H}=F_2(b,u,H)
\end{array}
\right.
\label{system-buH}}
We do not write these equations down explicitly because of their cumbersome form.
The compactification scenario suggests that at late times the 3 dimensions describing "our real world" expand at an accelerating rate whereas the extra dimensions tend to a constant size. This means that $H'(t),b'(t),b''(t)\rightarrow0$; $b(t)\underset{t\rightarrow\infty}{\longrightarrow}\nolinebreak b_0$, $H(t)\underset{t\rightarrow\infty}{\longrightarrow}\nolinebreak H_0$, where $b_0,H_0=\const$. Since the value of the observed cosmological constant is $H\sim10^{-132}$ in fundamental units, a physically realistic regime implies $H_0\approx0$.
Substituting $b''=b'=H'=H=0,\;b=b_0,\;D=7$ into constraint~(\ref{constraint}) and equations of motion~(\ref{Eq1})-(\ref{Eq2}), we get equations which we call \emph{asymptotic equations} in what follows:
\eq{\Lambda\,b_0^{6}-42\,\gamma_D\,b_0^{4}-840\,\alpha\,\gamma_D^{2}b_0^{2}-5040\,\beta\,\gamma_D^{3}=0}
\eq{\Lambda\,b_0^{6}-30\,\gamma_D\,b_0^{4}-360\,\alpha\,\gamma_D^{2}b_0^{2}-720\,\beta\,\gamma_D^{3}=0}
One of the asymptotic equations of motion coincides with the constraint, so we have only two independent equations. Assuming $b_0>0$, we solve these equations with respect to $\Lambda$ and $b_0$:
\eq{\Lambda={\frac {30\,\gamma_D\,({{b_0}}^{4}+12\,\alpha\,\gamma_D\,{{b_0}}^{2}+24\,\beta\gamma_D^{2})}{{{b_0}}^{6}}}\label{Lambda}}
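The solution quoted here can be cross-checked symbolically. The sketch below is an illustration only (it simply re-enters the $D=7$ asymptotic equations given above); it uses SymPy to verify that the expression for $\Lambda$ and both branches of $b_0$ listed in Table~\ref{table.solutions.5+1} solve the asymptotic system.

```python
import sympy as sp

b0, alpha, beta, gamma = sp.symbols('b0 alpha beta gamma')
Lam = sp.symbols('Lambda')

# The two independent asymptotic equations for D = 7
eq1 = Lam*b0**6 - 42*gamma*b0**4 - 840*alpha*gamma**2*b0**2 - 5040*beta*gamma**3
eq2 = Lam*b0**6 - 30*gamma*b0**4 - 360*alpha*gamma**2*b0**2 - 720*beta*gamma**3

# Lambda expressed from the asymptotic system, as quoted in the text
Lam_expr = 30*gamma*(b0**4 + 12*alpha*gamma*b0**2 + 24*beta*gamma**2) / b0**6
assert sp.expand(eq2.subs(Lam, Lam_expr)) == 0

# Eliminating Lambda leaves a quadratic in b0**2
x = sp.symbols('x')
quad = x**2 + 40*alpha*gamma*x + 360*beta*gamma**2
assert sp.expand((eq1 - eq2)/(-12*gamma) - quad.subs(x, b0**2)) == 0

# Both branches of b0**2 from the table are roots of that quadratic
for sign in (1, -1):
    root = -20*alpha*gamma + 2*sign*gamma*sp.sqrt(100*alpha**2 - 90*beta)
    assert sp.expand(quad.subs(x, root)) == 0
```

All assertions pass, confirming that the quoted $\Lambda$ and the two $b_0$ branches are exact solutions of the asymptotic equations.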
\begin{table}[!h]
\begin{center}
\caption{Conditions for the existence of $b_0$}
\label{table.solutions.5+1}
\begin{tabular}{|c|c|c|}
\hline
& $\gamma_D>0$ & $\gamma_D<0$ \\
\hline
$b_0=\sqrt{-20\alpha\gamma_D+2\gamma_D\sqrt{100\alpha^2-90\beta}}$ &
$\begin{array}{c}
\beta<0,\;\;\alpha\in\mathbb{R} \\
0<\beta\leqslant\frac{10}{9}\alpha^2,\;\;\alpha<0
\end{array}$ &
$0<\beta\leqslant\frac{10}{9}\alpha^2,\;\;\alpha>0$ \\
\hline
$b_0=\sqrt{-20\alpha\gamma_D-2\gamma_D\sqrt{100\alpha^2-90\beta}}$ &
$0<\beta\leqslant\frac{10}{9}\alpha^2,\;\;\alpha<0$ &
$\begin{array}{c}
\beta<0,\;\;\alpha\in\mathbb{R} \\
0<\beta\leqslant\frac{10}{9}\alpha^2,\;\;\alpha>0
\end{array}$ \\
\hline
\end{tabular}
\end{center}
\end{table}
The compactified solution $\bigl\{b(t)\equiv b_0,\; u(t)\equiv 0,\; H(t)\equiv 0\bigr\}$ is a fixed point of the system~(\ref{system-buH}). Stability of a fixed point of a system of ODEs is determined by the signs of the real parts of the eigenvalues of the Jacobian matrix evaluated at this point; a fixed point is asymptotically stable if all eigenvalues have negative real parts.
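The numerical procedure used below can be illustrated on a toy system (the actual right-hand sides $F_1$, $F_2$ of~(\ref{system-buH}) are too long to reproduce here, so the dynamics in this sketch is a stand-in, not the equations of this paper): evaluate the Jacobian at the fixed point and classify stability by the largest real part of its eigenvalues.

```python
import numpy as np

def jacobian(f, p, eps=1e-6):
    """Numerical Jacobian of f: R^n -> R^n at point p (central differences)."""
    p = np.asarray(p, dtype=float)
    n = p.size
    J = np.empty((n, n))
    for j in range(n):
        dp = np.zeros(n); dp[j] = eps
        J[:, j] = (f(p + dp) - f(p - dp)) / (2 * eps)
    return J

def is_stable(f, p):
    """Asymptotically stable iff all eigenvalues have negative real part."""
    return np.max(np.linalg.eigvals(jacobian(f, p)).real) < 0.0

# Toy stand-in for the (b, u) dynamics: b' = u, u' = -(b - b0) - 2u,
# a damped approach to the "compactified" value b0 = 1.
b0 = 1.0
f = lambda s: np.array([s[1], -(s[0] - b0) - 2.0*s[1]])
print(is_stable(f, [b0, 0.0]))   # True: both eigenvalues equal -1
```

In the actual analysis the same eigenvalue test is applied at each point of a mesh in the coupling constants $(\alpha,\beta)$.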
Analytical expressions for the corresponding eigenvalues can be written down, though they are too cumbersome to use in further studies. Therefore,
we check the stability condition by numerical evaluation. We also note that the substitution $H=0$ leads either to all eigenvalues being zero or to one zero eigenvalue and two eigenvalues equal in absolute value but opposite in sign (the same is true for the case of negative curvature). To guard against small numerical errors, instead of exactly $H=0$
we substitute $u=0,\; H\sim 10^{-16}$, as well as the expressions for $\Lambda$ and $b_0$, into the Jacobian matrix of the system~(\ref{system-buH}), so that each element of this matrix becomes a function of the coupling constants $\alpha$ and $\beta$. After that we make a mesh in the coupling constants and evaluate the eigenvalues for each pair $(\alpha,\beta)$. It should also be noted that for positive curvature we see stable solutions for $b_0=\sqrt{-20\alpha+2\sqrt{100\alpha^2-90\beta}}$, with the "plus" sign before the square root, and we do not see any stable solution for the branch of $b_0$ with the "minus" sign before the square root ($b_0=\nolinebreak\sqrt{-20\alpha-2\sqrt{100\alpha^2-90\beta}}$), whereas for negative curvature we get stable solutions for both branches of $b_0$. Figure~\ref{stable-compactification} illustrates the distribution of stable compactified solutions over the coupling constants $\alpha$ and $\beta$ for $H_0=10^{-15}$ for positive and negative curvature. Green areas of the figures represent stable solutions; grey areas represent unstable solutions; in white areas solutions do not exist at all. The result we see for negative curvature confirms the result obtained in our previous paper~\cite{ChGiaPavTop-EurPhys-2021}: all existing solutions are always stable.
One can see that for $\beta=0$ (the Einstein-Gauss-Bonnet model) there are no stable compactified solutions with positive curvature, though they exist with negative curvature. These results are in agreement with the results obtained earlier (see~\cite{ChPav-MosPhysLet-2021} for details). For non-zero $\beta$ stable solutions exist for either sign of the spatial curvature.
Note that as the number of extra dimensions increases, the region of stable compactified solutions with positive curvature shrinks (see Fig.~\ref{changing-region-positive-curvature}); in the case of negative curvature we do not observe this, and the regions of stability do not change. In the Einstein-Gauss-Bonnet model a similar situation occurs in the general case, when we abandon the $H \sim 0$ condition and compactification for positive curvature becomes possible~\cite{ChGiaPavTop-EurPhys-2021}.
\section{Conclusions}
We have studied the stability of the compactification scenario in third-order Lovelock gravity with a curved extra-dimensional subspace.
We considered both negative and positive spatial curvature of the "inner" space. In the present paper
we restricted ourselves to the case when the effective cosmological constant in the "big" dimensions subspace is small.
Our results indicate that some restrictions known for Einstein-Gauss-Bonnet gravity (i.e. without the third-order Lovelock term)
can be lifted; in particular, the scenario with positive spatial curvature of the "inner" space, which is impossible without
the third-order term, becomes possible for a non-zero third-order term. However, if we plot the ranges of coupling constants needed for such a scenario
to be realized, they form a rather narrow band; moreover, the width of this zone decreases as the number of extra dimensions increases.
This is similar to the compactification conditions in EGB gravity in the general case (when the effective cosmological constant in the
"big" dimensions can be large; as we mentioned, a zero cosmological constant is incompatible with positive spatial curvature
of the extra-dimensional space): the compactification scenario with positive spatial curvature is possible, but requires some fine-tuning
of the coupling constants, and the bigger the number of extra dimensions, the more severe this fine-tuning becomes.
As for compactification with negative spatial curvature of the "inner" space, the results are qualitatively the
same as for EGB gravity: if a compactification solution exists, it is always stable; the zone of coupling constants needed for
a compactification solution to exist is large and does not depend on the number of extra dimensions.
We also note that when for a stable point we choose $\Lambda$ such that $H$ vanishes, all three eigenvalues also vanish.
This means that an initial perturbation would not disappear and instead would oscillate near the exact solution. Such oscillations
have been found in EGB gravity with $H=0$; moreover, numerical integration has shown that in the presence of ordinary matter
the oscillations decay in time \cite{CanGiaPav-GravCosmol-2018}. We can hope that this still holds for third-order
Lovelock gravity, though the case of exactly $H=0$ needs special treatment, and we leave it for future work.
\begin{figure}[!t]
\begin{minipage}[h]{.32\linewidth}
\center{\includegraphics[width=\linewidth]{Regions_of_stability_D_7_new_2.eps} \\ a)}
\end{minipage}
\begin{minipage}[h]{.32\linewidth}
\center{\includegraphics[width=\linewidth]{negative_curvature_D_7_plus_branch_stable_points_new_2.eps} \\ b)}
\end{minipage}
\begin{minipage}[h]{.32\linewidth}
\center{\includegraphics[width=\linewidth]{negative_curvature_D_7_minus_branch_stable_points_new_2.eps} \\ c)}
\end{minipage}
\caption{\footnotesize The distribution of stable compactified solutions over the coupling constants $\alpha$ and $\beta$ for $H_0=10^{-15}$ for a) positive curvature; b) negative curvature, the branch $b_0=\sqrt{20\alpha+2\sqrt{100\alpha^2-90\beta}}$; c) negative curvature, the branch $b_0=\sqrt{20\alpha-2\sqrt{100\alpha^2-90\beta}}$. Green areas: stable solutions; grey areas: unstable solutions; white areas: solutions do not exist.}
\label{stable-compactification}
\end{figure}
\begin{figure}[!t]
\begin{minipage}[h]{.49\linewidth}
\center{\includegraphics[width=0.6\linewidth]{positive_curvature_D_13_stable_points_new.eps}}
\end{minipage}
\caption{\footnotesize Region of stability for $D=13$ extra dimensions (positive curvature)}
\label{changing-region-positive-curvature}
\end{figure}
\section*{Acknowledgements}
The work of AT has been supported by RFBR grant 20-02-00411. The authors are grateful to Alex
Giacomini for discussions.
\section{Introduction}
Dynamic epistemic logic describes the way knowledge can change in multi-agent
systems subject to informative actions taking place. For example, if Tim were
to announce ``I like cats'', then everyone in the room would know the
proposition {\it Tim likes cats} is true, and furthermore, everybody would
know that this fact is common knowledge among the people in the room. This
simple informative action is what is referred to as a public announcement
\cite{plaza1989}, and such actions have been extensively studied in
epistemic logics. More complex actions can include private announcements
(where some agents are oblivious to the informative action occurring), or a
group announcement (where members of a group simultaneously make a truthful
announcement to every other member of the group \cite{agotnes2010}). These
complex actions may be modelled and reasoned about using action models
\cite{baltag1998} which are effectively a semantic model of the change caused
by an informative action. Consequently they are very useful for reasoning
about the consequences of an informative action, but less well suited to
reasoning about the action itself.
We present a language for describing epistemic actions syntactically.
Complex actions may be built as expressions over simpler primitive actions.
This approach is a generalisation of the relational actions introduced by van
Ditmarsch \cite{vanditmarsch2001}. We show in several settings that this
language is sufficient to represent any informative action represented by an
action model (up to a given model depth), we present a synthesis result,
and give a sound and complete axiomatisation for some of these settings.
The synthesis result is an important application of this work: given a
desired state of knowledge among a group of agents, we are able to compute a
complex informative action that will achieve that particular knowledge state
(given it is consistent with the current knowledge of agents). We provide
these results in a variety of modal logics suited to epistemic reasoning:
\classK{}, \classKFF{} and \classS{}.
\begin{example}\label{grant-example}
James, Ed and Tim submit a research grant proposal, and eagerly await the
outcome. Is there a series of actions that will result in:
\begin{enumerate}
\item Ed knowing the grant application was successful;
\item James not knowing whether the grant application was successful, but knowing that
either Ed or Tim does know;
\item Tim not knowing whether the grant application was successful, but knowing that if
the grant application was unsuccessful, then James knows that it was unsuccessful.
\end{enumerate}
Such an epistemic state may be achieved by a series of messages: Ed is sent a
message congratulating him on a successful application, James is sent a
message informing him that at least one applicant on each grant has been
informed of the outcome, and Tim is sent a message informing him that the
first investigator of all unsuccessful grants has been notified.
\end{example}
After establishing some technical preliminaries (Section~\ref{technical-preliminaries}) we
present a syntactic approach for describing informative actions (Sections~\ref{syntax}~and~\ref{semantics}),
provide a sound and complete axiomatisation of the language (Section~\ref{axiomatisation}),
establish a correspondence result between this language and action models (Section~\ref{correspondence}),
and give a computational method for synthesising actions to achieve an epistemic goal (Section~\ref{synthesis}).
\section{Technical Preliminaries}\label{technical-preliminaries}
We recall definitions from modal logic, the action model logic of Baltag,
Moss and Solecki~\cite{baltag1998,baltag2005}, the refinement modal logic of
van Ditmarsch, French and Pinchinat~\cite{vanditmarsch2009,vanditmarsch2010},
and the arbitrary action model logic of Hales~\cite{hales2013}.
Let \atoms{} be a non-empty, countable set of propositional atoms, and let
\agents{} be a non-empty, finite set of agents.
\begin{definition}[Kripke model]\label{kripke-model}
A {\em Kripke model} $\model = \modelTuple$ consists of
a {\em domain} \states{}, which is a non-empty set of states
(or possible worlds),
an {\em accessibility} function
$\accessibility : \agents \to \powerset(\states \times \states)$,
which is a function from agents to accessibility relations on \states{},
and a {\em valuation} function $\valuation : \atoms \to \powerset(\states)$,
which is a function from propositional atoms to sets of states.
The {\em class of all Kripke models} is called \classK{}.
A {\em multi-pointed Kripke model} $\pointedModel{\statesT} = (\model,
\statesT)$ consists of a Kripke model \model{} along with a designated set of
states $\statesT \subseteq \states$.
\end{definition}
We write $\accessibilityAgent{\agentA}$ to denote $\accessibility(\agentA)$.
Given two states $\stateS, \stateT \in \states$,
we write $\stateS \accessibilityAgent{\agentA} \stateT$ to denote that
$(\stateS, \stateT) \in \accessibilityAgent{\agentA}$.
We write $\statesT \accessibilityAgent{\agentA}$ to denote the set of states
$\{\stateS \in \states \mid \stateT \in \statesT, \stateT \accessibilityAgent{\agentA} \stateS \}$
and write $\accessibilityAgent{\agentA} \statesT$ to denote the set of states
$\{\stateS \in \states \mid \stateT \in \statesT, \stateS \accessibilityAgent{\agentA} \stateT \}$.
We write $\pointedModel{\stateS}$ as an abbreviation for
$\pointedModel{\{\stateS\}}$, and write $\stateT \accessibilityAgent{\agentA}$
and $\accessibilityAgent{\agentA} \stateT$ as abbreviations for
$\{\stateT\} \accessibilityAgent{\agentA}$ and
$\accessibilityAgent{\agentA} \{\stateT\}$ respectively.
As we will often be required to discuss several models at once, we will use
the convention that
$\pointedModel{\statesT} = \pointedModelTuple{\statesT}$,
$\pointedModel[\prime]{\statesT[\prime]} = \pointedModelTuple[\prime]{\statesT[\prime]}$,
$\pointedModel[\gamma]{\statesT[\gamma]} = \pointedModelTuple[\gamma]{\statesT[\gamma]}$,
etc.
\begin{definition}[Action model]\label{action-model}
Let \lang{} be a logical language.
An {\em action model} $\actionModel = \actionModelTuple$ with preconditions defined
on \lang{} consists of a {\em domain} \actionStates,
which is a non-empty, finite set of action points,
an {\em accessibility} function $\actionAccessibility : \agents \to \powerset(\actionStates \times \actionStates)$,
which is a function from agents to accessibility relations on \actionStates,
and a {\em precondition} function $\actionPrecondition : \actionStates \to \lang$,
which is a function from action points to formulae from \lang{}.
The {\em class of all action models} is called \classAM{}.
A {\em multi-pointed action model}
$\pointedActionModel{\actionStatesT} = (\actionModel, \actionStatesT)$
consists of an action model $\actionModel$ along with a designated set of
action points $\actionStatesT \subseteq \actionStates$.
\end{definition}
We use the same abbreviations and conventions for action models as are used
for Kripke models. We use the convention of using sans-serif fonts for action
models, as in \pointedActionModel{\actionStatesT} and italic fonts for Kripke
models, as in \pointedModel{\statesT}.
In addition to the class \classK{} of all Kripke models,
and the class \classAM{} of all action models
we will be referring to several other classes of Kripke models and action models.
\begin{definition}[Classes of Kripke models and action models]
The class of all Kripke models / action models with transitive and Euclidean accessibility relations is called \classKFF{} / $\classAM_\classKFF{}$.
The class of all Kripke models / action models with serial, transitive and Euclidean accessibility relations is called \classKD{} / $\classAM_\classKD{}$.
The class of all Kripke models / action models with reflexive, transitive and Euclidean accessibility relations is called \classS{} / $\classAM_\classS{}$.
\end{definition}
\begin{definition}[Language of arbitrary action model logic]\label{aml-syntax}
The language \langAaml{} of arbitrary action model logic is inductively defined as:
\begin{eqnarray*}
\phi &::=& \atomP \mid
\neg \phi \mid
(\phi \land \phi) \mid
\necessary[\agentA] \phi \mid
\allacts{\pointedActionModel{\actionStatesT}} \phi \mid
\allrefs \phi
\end{eqnarray*}
where $\atomP \in \atoms$, $\agentA \in \agents$, and
$\pointedActionModel{\actionStatesT} \in \classAM$ is a multi-pointed action
model with preconditions defined on the language \langAaml{}.
\end{definition}
We use all of the standard abbreviations for propositional logic, in addition
to the abbreviations
$\possible[\agentA] \phi ::= \neg \necessary[\agentA] \neg \phi$,
$\someacts{\pointedActionModel{\actionStatesT}} \phi ::= \neg \allacts{\pointedActionModel{\actionStatesT}} \neg \phi$,
and $\somerefs \phi ::= \neg \allrefs \neg \phi$.
We also use the cover operator of Janin and Walukiewicz~\cite{janin1995},
following the definitions given by B{\'i}lkov{\'a}, Palmigiano and
Venema~\cite{bilkova2008}. The cover operator, $\covers_\agentA \Gamma$ is
an abbreviation defined by $\covers_\agentA \Gamma ::= \necessary[\agentA] \bigvee_{\gamma \in \Gamma} \gamma \land \bigwedge_{\gamma \in \Gamma} \possible[\agentA] \gamma$,
where $\Gamma \subseteq \langAaml$ is a finite set of formulae.
We note that the modal operators $\necessary[\agentA]$, $\possible[\agentA]$ and $\covers_\agentA$ are interdefinable
as $\necessary[\agentA] \phi \iff \covers_\agentA \{\phi\} \lor \covers_\agentA \emptyset$ and $\possible[\agentA] \phi \iff \covers_\agentA \{\phi, \top\}$.
This is the basis for the axiomatisations of refinement modal logic and
arbitrary action model logic, and plays an important part in our correspondence
and synthesis results.
This was previously used as the basis of several axiomatisations
of refinement modal logics~\cite{vanditmarsch2010,hales2011a,hales2011b,hales2012,bozzelli2012a,hales2013}.
We refer to the language \langAml{} of action model logic, which is \langAaml{} without the $\allrefs$ operator,
the language \langRml{} of refinement modal logic, which is \langAaml{} without the $\allacts{\pointedActionModel{\actionStatesT}}$ operator,
the language \lang{} of modal logic, which is \langAml{} without the $\allacts{\pointedActionModel{\actionStatesT}}$ operator,
and the language \langP{} of propositional logic, which is \lang{} without the $\necessary[\agentA]$ operator.
\begin{definition}[Semantics of modal logic]\label{ml-semantics}
Let \classC{} be a class of Kripke models and let $\model = \modelTuple \in
\classC$ be a Kripke model.
The interpretation of $\phi \in \lang$ in the logic \logicC{} is defined
inductively as:
\begin{eqnarray*}
\pointedModel{\stateS} \entails \atomP &\text{ iff }& \stateS \in \valuation(\atomP)\\
\pointedModel{\stateS} \entails \neg \phi &\text{ iff }& \pointedModel{\stateS} \nentails \phi\\
\pointedModel{\stateS} \entails \phi \land \psi &\text{ iff }& \pointedModel{\stateS} \entails \phi \text{ and } \pointedModel{\stateS} \entails \psi\\
\pointedModel{\stateS} \entails \necessary[\agentA] \phi &\text{ iff }& \text{for every } \stateT \in \stateS \accessibilityAgent{\agentA} : \pointedModel{\stateT} \entails \phi\\
\pointedModel{\statesT} \entails \phi &\text{ iff }& \text{for every } \stateT \in \statesT : \pointedModel{\stateT} \entails \phi
\end{eqnarray*}
\end{definition}
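The clauses above translate directly into a model checker for finite Kripke models. The sketch below is our own illustration; the tuple encoding of formulae (('atom', p), ('not', f), ('and', f, g), ('box', a, f)) and the dictionary-based model representation are assumptions made for the example, not notation from this paper.

```python
# A model M is a pair (R, V): R maps agents to successor dictionaries,
# V maps each atom to the set of states where it holds.
def sat(M, s, phi):
    R, V = M
    op = phi[0]
    if op == 'atom':
        return s in V[phi[1]]
    if op == 'not':
        return not sat(M, s, phi[1])
    if op == 'and':
        return sat(M, s, phi[1]) and sat(M, s, phi[2])
    if op == 'box':  # necessary_a phi: phi holds at every a-successor
        a, f = phi[1], phi[2]
        return all(sat(M, t, f) for t in R[a].get(s, ()))
    raise ValueError(op)

def diamond(a, f):  # possible_a phi ::= not necessary_a not phi
    return ('not', ('box', a, ('not', f)))

# Agent a considers states 0 and 1 possible from state 0; p holds only at 0.
M = ({'a': {0: {0, 1}}}, {'p': {0}})
print(sat(M, 0, ('box', 'a', ('atom', 'p'))))   # False
print(sat(M, 0, diamond('a', ('atom', 'p'))))   # True
```

The final clause of the definition (satisfaction at a set of designated states) is then just a conjunction of sat over that set.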
\begin{definition}[Bisimilarity of Kripke models]
Let $\model = \modelTuple \in \classK$
and $\model[\prime] = \modelTuple[\prime] \in \classK$
be Kripke models.
A non-empty relation $\bisimulation \subseteq \states \times \states[\prime]$
is a {\em bisimulation} if and only if for every $\agentA \in \agents$
and $(\stateS, \stateS[\prime]) \in \bisimulation$ the following conditions hold:
\paragraph{atoms}
For every $\atomP \in \atoms$: $\stateS \in \valuation(\atomP)$ if and only if $\stateS[\prime] \in \valuation[\prime](\atomP)$.
\paragraph{forth-$\agentA$}
For every $\stateT \in \stateS \accessibilityAgent{\agentA}$
there exists $\stateT[\prime] \in \stateS[\prime] \accessibilityAgent[\prime]{\agentA}$
such that $(\stateT, \stateT[\prime]) \in \bisimulation$.
\paragraph{back-$\agentA$}
For every $\stateT[\prime] \in \stateS[\prime] \accessibilityAgent[\prime]{\agentA}$
there exists $\stateT \in \stateS \accessibilityAgent{\agentA}$
such that $(\stateT, \stateT[\prime]) \in \bisimulation$.
If $(\stateS, \stateS[\prime]) \in \bisimulation$ then we call
$\pointedModel{\stateS}$ and $\pointedModel[\prime]{\stateS[\prime]}$
{\em bisimilar} and write
$\pointedModel{\stateS} \bisimilar \pointedModel[\prime]{\stateS[\prime]}$.
\end{definition}
\begin{proposition}
The relation $\bisimilar$ is an equivalence relation on Kripke models.
\end{proposition}
\begin{proposition}
Let $\pointedModel{\stateS}, \pointedModel[\prime]{\stateS[\prime]} \in \classK$ be Kripke models such that
$\pointedModel{\stateS} \bisimilar \pointedModel[\prime]{\stateS[\prime]}$.
Then for every $\phi \in \lang$:
$\pointedModel{\stateS} \entails \phi$ if and only if $\pointedModel[\prime]{\stateS[\prime]} \entails \phi$.
\end{proposition}
These are well-known results.
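For finite models the {\bf atoms}, {\bf forth} and {\bf back} conditions can be checked mechanically as a greatest fixpoint: start from all pairs of states that agree on atoms, then delete pairs violating forth or back until the relation stabilises. The sketch below is an illustration under our own model encoding (it is not an algorithm taken from this paper).

```python
def bisimilar(M1, s1, M2, s2):
    """Greatest-fixpoint bisimilarity test for finite Kripke models.
    A model is (S, R, V): S a set of states, R maps agents to successor
    dictionaries, V maps each state to its frozenset of true atoms."""
    (S1, R1, V1), (S2, R2, V2) = M1, M2
    # atoms: start from all pairs agreeing on propositional atoms
    rel = {(u, v) for u in S1 for v in S2 if V1[u] == V2[v]}
    agents = set(R1) | set(R2)
    changed = True
    while changed:
        changed = False
        for (u, v) in list(rel):
            for a in agents:
                succ1 = R1.get(a, {}).get(u, set())
                succ2 = R2.get(a, {}).get(v, set())
                # forth-a: every u-successor is matched by some v-successor
                forth = all(any((x, y) in rel for y in succ2) for x in succ1)
                # back-a: every v-successor is matched by some u-successor
                back = all(any((x, y) in rel for x in succ1) for y in succ2)
                if not (forth and back):
                    rel.discard((u, v))
                    changed = True
                    break
    return (s1, s2) in rel

# A one-state a-loop is bisimilar to a two-state a-cycle with the same atoms.
loop = ({0}, {'a': {0: {0}}}, {0: frozenset({'p'})})
cycle = ({0, 1}, {'a': {0: {1}, 1: {0}}},
         {0: frozenset({'p'}), 1: frozenset({'p'})})
print(bisimilar(loop, 0, cycle, 0))   # True
```

The surviving relation is the largest bisimulation between the two models, so membership of $(s_1, s_2)$ decides bisimilarity of the pointed models.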
\begin{definition}[$n$-bisimilarity of Kripke models]
Let $n \in \mathbb{N}$,
and let $\pointedModel{\stateS} = \pointedModelTuple{\stateS} \in \classK$
and $\pointedModel[\prime]{\stateS[\prime]} = \pointedModelTuple[\prime]{\stateS[\prime]} \in \classK$ be Kripke models.
We say that $\pointedModel{\stateS}$ is {\em $n$-bisimilar}
to $\pointedModel[\prime]{\stateS[\prime]}$,
and write $\pointedModel{\stateS} \bisimilar_n \pointedModel[\prime]{\stateS[\prime]}$,
if and only if for every $\agentA \in \agents$ the following conditions hold:
\paragraph{atoms}
For every $\atomP \in \atoms$: $\stateS \in \valuation(\atomP)$ if and only if $\stateS[\prime] \in \valuation[\prime](\atomP)$.
\paragraph{forth-$n$-$\agentA$}
If $n > 0$ then
for every $\stateT \in \stateS \accessibilityAgent{\agentA}$
there exists $\stateT[\prime] \in \stateS[\prime] \accessibilityAgent[\prime]{\agentA}$
such that $\pointedModel{\stateT} \bisimilar_{(n - 1)} \pointedModel[\prime]{\stateT[\prime]}$.
\paragraph{back-$n$-$\agentA$}
If $n > 0$ then
for every $\stateT[\prime] \in \stateS[\prime] \accessibilityAgent[\prime]{\agentA}$
there exists $\stateT \in \stateS \accessibilityAgent{\agentA}$
such that $\pointedModel{\stateT} \bisimilar_{(n - 1)} \pointedModel[\prime]{\stateT[\prime]}$.
\end{definition}
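For finite models the clauses {\bf atoms}, {\bf forth-$n$-$\agentA$} and {\bf back-$n$-$\agentA$} translate directly into a recursive decision procedure. The following sketch checks $n$-bisimilarity of two finite pointed Kripke models; the concrete model encoding (a valuation map and per-agent sets of accessibility pairs) is our own illustrative choice, and it assumes both models share the same agent set.

```python
def n_bisimilar(M1, s1, M2, s2, n):
    """Check n-bisimilarity of pointed models (M1, s1) and (M2, s2).

    A model is a dict with:
      'val': state -> frozenset of atoms true at that state
      'acc': agent -> set of (state, state) accessibility pairs
    Both models are assumed to use the same agent set.
    """
    # atoms: the two points satisfy the same propositional atoms
    if M1['val'][s1] != M2['val'][s2]:
        return False
    if n == 0:
        return True
    for agent in M1['acc']:
        succ1 = [t for (u, t) in M1['acc'][agent] if u == s1]
        succ2 = [t for (u, t) in M2['acc'][agent] if u == s2]
        # forth-n-agent: every successor of s1 is matched by a successor of s2
        if not all(any(n_bisimilar(M1, t1, M2, t2, n - 1) for t2 in succ2)
                   for t1 in succ1):
            return False
        # back-n-agent: the symmetric condition
        if not all(any(n_bisimilar(M1, t1, M2, t2, n - 1) for t1 in succ1)
                   for t2 in succ2):
            return False
    return True
```

The recursion bottoms out at $n = 0$, where only the {\bf atoms} clause applies, mirroring the definition.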
\begin{definition}[Modal depth]
Let $\phi \in \lang$. The {\em modal depth of $\phi$}, written as $d(\phi)$, is defined recursively as follows:
\begin{eqnarray*}
d(\atomP) &=& 0 \text{ for } \atomP \in \atoms\\
d(\neg \psi) &=& d(\psi)\\
d(\psi \land \chi) &=& \max(d(\psi), d(\chi))\\
d(\necessary[\agentA] \psi) &=& 1 + d(\psi)
\end{eqnarray*}
\end{definition}
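The recursive clauses for $d$ compute directly over the structure of a formula. A minimal sketch, using a nested-tuple encoding of formulae that is our own illustrative choice:

```python
def modal_depth(phi):
    """Modal depth d(phi), following the recursive clauses above.

    Formulae are encoded as nested tuples (an illustrative encoding):
      ('atom', p), ('not', psi), ('and', psi, chi), ('box', agent, psi).
    """
    op = phi[0]
    if op == 'atom':
        return 0
    if op == 'not':
        return modal_depth(phi[1])
    if op == 'and':
        return max(modal_depth(phi[1]), modal_depth(phi[2]))
    if op == 'box':
        return 1 + modal_depth(phi[2])
    raise ValueError(f'unknown connective: {op!r}')
```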
\begin{proposition}
The relation $\bisimilar_n$ is an equivalence relation on Kripke models.
\end{proposition}
\begin{proposition}
Let $m, n \in \mathbb{N}$ and
let $\pointedModel{\stateS}, \pointedModel[\prime]{\stateS[\prime]} \in \classK$ be Kripke models
such that $\pointedModel{\stateS} \bisimilar_n \pointedModel[\prime]{\stateS[\prime]}$.
If $m < n$ then $\pointedModel{\stateS} \bisimilar_m \pointedModel[\prime]{\stateS[\prime]}$.
\end{proposition}
\begin{proposition}
Let $n \in \mathbb{N}$ and
let $\pointedModel{\stateS}, \pointedModel[\prime]{\stateS[\prime]} \in \classK$ be Kripke models such that
$\pointedModel{\stateS} \bisimilar_n \pointedModel[\prime]{\stateS[\prime]}$.
Then for every $\phi \in \lang$ such that $d(\phi) \leq n$:
$\pointedModel{\stateS} \entails \phi$ if and only if $\pointedModel[\prime]{\stateS[\prime]} \entails \phi$.
\end{proposition}
\begin{proposition}
Let $\pointedModel{\stateS}, \pointedModel[\prime]{\stateS[\prime]} \in \classK$ be Kripke models.
Then $\pointedModel{\stateS} \bisimilar \pointedModel[\prime]{\stateS[\prime]}$
if and only if for every $n \in \mathbb{N}$:
$\pointedModel{\stateS} \bisimilar_n \pointedModel[\prime]{\stateS[\prime]}$.
\end{proposition}
These are well-known results.
\begin{definition}[$\agentsB$-bisimilarity of Kripke models]
Let $\pointedModel{\stateS} = \pointedModelTuple{\stateS} \in \classK$
and $\pointedModel[\prime]{\stateS[\prime]} = \pointedModelTuple[\prime]{\stateS[\prime]} \in \classK$
be Kripke models.
We say that $\pointedModel{\stateS}$ is {\em $\agentsB$-bisimilar}
to $\pointedModel[\prime]{\stateS[\prime]}$,
and write $\pointedModel{\stateS} \bisimilar_\agentsB \pointedModel[\prime]{\stateS[\prime]}$,
if and only if for every $\agentB \in \agentsB$ the following conditions hold:
\paragraph{atoms}
For every $\atomP \in \atoms$: $\stateS \in \valuation(\atomP)$ if and only if $\stateS[\prime] \in \valuation[\prime](\atomP)$.
\paragraph{forth-$\agentB$}
For every $\stateT \in \stateS \accessibilityAgent{\agentB}$
there exists $\stateT[\prime] \in \stateS[\prime] \accessibilityAgent[\prime]{\agentB}$
such that $\pointedModel{\stateT} \bisimilar \pointedModel[\prime]{\stateT[\prime]}$.
\paragraph{back-$\agentB$}
For every $\stateT[\prime] \in \stateS[\prime] \accessibilityAgent[\prime]{\agentB}$
there exists $\stateT \in \stateS \accessibilityAgent{\agentB}$
such that $\pointedModel{\stateT} \bisimilar \pointedModel[\prime]{\stateT[\prime]}$.
\end{definition}
\begin{definition}[$\agentsB$-restricted formulae]\label{b-restricted-formulae}
Let $\agentsB \subseteq \agents$. A {\em $\agentsB$-restricted formula} is defined by the following abstract syntax:
$$
\phi ::= \atomP \mid
\neg \phi \mid
(\phi \land \phi) \mid
\necessary[\agentB] \psi
$$
where $\atomP \in \atoms$, $\agentB \in \agentsB$, $\psi \in \lang$.
\end{definition}
\begin{proposition}
Let $\agentsB \subseteq \agents$,
and $\pointedModel{\stateS}, \pointedModel[\prime]{\stateS[\prime]} \in \classK$ be Kripke models such that
$\pointedModel{\stateS} \bisimilar_\agentsB \pointedModel[\prime]{\stateS[\prime]}$.
Then for every $\phi \in \lang$ such that $\phi$ is a $\agentsB$-restricted formula:
$\pointedModel{\stateS} \entails \phi$ if and only if $\pointedModel[\prime]{\stateS[\prime]} \entails \phi$.
\end{proposition}
This result follows by a straightforward induction on the structure of $\agentsB$-restricted formulae.
We recall the semantics of action model logic of Baltag, Moss and Solecki~\cite{baltag1998,baltag2005}.
\begin{definition}[Semantics of action model logic]\label{aml-semantics}
Let \classC{} be a class of Kripke models, let $\model = \modelTuple \in
\classC$ be a Kripke model and let $\actionModel \in \classAM$ be an action
model.
We first define {\em action model execution}.
We denote the result of executing the action model $\actionModel$
on the Kripke model $\model$ as $\model \exec \actionModel$,
and we define the result as
$\model \exec \actionModel = \model[\prime] = \modelTuple[\prime]$ where:
\begin{eqnarray*}
\states[\prime] &=& \{(\stateS, \actionStateS) \mid \stateS \in \states, \actionStateS \in \actionStates, \pointedModel{\stateS} \entails \actionPrecondition(\actionStateS)\}\\
(\stateS, \actionStateS) \accessibilityAgent[\prime]{\agentA} (\stateT, \actionStateT) &\text{ iff }& \stateS \accessibilityAgent{\agentA} \stateT \text{ and } \actionStateS \actionAccessibilityAgent{\agentA} \actionStateT\\
(\stateS, \actionStateS) \in \valuation[\prime](\atomP) &\text{ iff }& \stateS \in \valuation(\atomP)
\end{eqnarray*}
We also define {\em multi-pointed action model execution} as
$\pointedModel{\statesT} \exec \pointedActionModel{\actionStatesT} = \pointedModel[\prime]{\statesT[\prime]} = \pointedModelTuple[\prime]{\statesT[\prime]} = ((\model \exec \actionModel), (\statesT \times \actionStatesT) \cap \states[\prime])$.
Then the interpretation of $\phi \in \langAml$ in the logic \logicAmlC{} is
the same as its interpretation in the modal logic \logicC{} given in
Definition~\ref{ml-semantics}, with the additional inductive case:
\begin{eqnarray*}
\pointedModel{\stateS} \entails \allacts{\pointedActionModel{\actionStatesT}} \phi &\text{ iff }& \pointedModel{\stateS} \exec \pointedActionModel{\actionStatesT} \in \classC \text{ implies } \pointedModel{\stateS} \exec \pointedActionModel{\actionStatesT} \entails \phi
\end{eqnarray*}
\end{definition}
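For finite models, action model execution is a restricted product construction and can be sketched directly. In the sketch below, preconditions are represented as Python predicates on a pointed model, standing in for the satisfaction relation $\entails$ (an illustrative simplification); the dictionary encodings of Kripke models and action models are likewise our own.

```python
def product_update(M, A):
    """Execute action model A on Kripke model M (the construction above).

    M: {'states': set, 'val': state -> frozenset of atoms,
        'acc': agent -> set of (state, state) pairs}
    A: {'states': set, 'pre': action-state -> predicate taking (M, state),
        'acc': agent -> set of (action-state, action-state) pairs}
    """
    # keep only pairs whose precondition holds at the underlying state
    states = {(s, a) for s in M['states'] for a in A['states']
              if A['pre'][a](M, s)}
    # accessibility: both components must be related
    acc = {}
    for agent in M['acc']:
        acc[agent] = {(x, y) for x in states for y in states
                      if (x[0], y[0]) in M['acc'][agent]
                      and (x[1], y[1]) in A['acc'][agent]}
    # the valuation of (s, a) is inherited from s
    val = {x: M['val'][x[0]] for x in states}
    return {'states': states, 'acc': acc, 'val': val}
```

For instance, a single-point action model whose precondition tests an atom behaves like a public announcement of that atom: states falsifying it are deleted.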
\begin{definition}[Sequential execution of action models]
Let $\actionModel, \actionModel[\prime] \in \classAM$.
We define the {\em sequential execution of $\actionModel$ and $\actionModel[\prime]$} as
$\actionModel \exec \actionModel[\prime] = \actionModel[\prime\prime] = \actionModelTuple[\prime\prime]$ where:
\begin{eqnarray*}
\actionStates[\prime\prime] &=& \actionStates \times \actionStates[\prime]\\
(\actionStateS, \actionStateS[\prime]) \accessibilityAgent[\prime\prime]{\agentA} (\actionStateT, \actionStateT[\prime]) &\text{ iff }& \actionStateS \actionAccessibilityAgent{\agentA} \actionStateT \text{ and } \actionStateS[\prime] \actionAccessibilityAgent[\prime]{\agentA} \actionStateT[\prime]\\
\actionPrecondition[\prime\prime]((\actionStateS, \actionStateS[\prime])) &=& \someacts{\pointedActionModel{\actionStateS}} \actionPrecondition[\prime](\actionStateS[\prime])
\end{eqnarray*}
We also define {\em sequential execution of $\pointedActionModel{\actionStatesT}$ and $\pointedActionModel[\prime]{\actionStatesT[\prime]}$} as
$\pointedActionModel{\actionStatesT} \exec \pointedActionModel[\prime]{\actionStatesT[\prime]} = \pointedActionModel[\prime\prime]{\actionStatesT[\prime\prime]} = \pointedActionModelTuple[\prime\prime]{\actionStatesT \times \actionStatesT[\prime]}$.
\end{definition}
\begin{definition}[Bisimilarity of action models]
Let $\actionModel = \actionModelTuple \in \classAM$
and $\actionModel[\prime] = \actionModelTuple[\prime] \in \classAM$
be action models.
A non-empty relation $\bisimulation \subseteq \actionStates \times \actionStates[\prime]$
is a {\em bisimulation} if and only if for every $\agentA \in \agents$
and $(\actionStateS, \actionStateS[\prime]) \in \bisimulation$ the following conditions hold:
\paragraph{atoms}
$\proves \actionPrecondition(\actionStateS) \iff \actionPrecondition[\prime](\actionStateS[\prime])$
\paragraph{forth-$\agentA$}
For every $\actionStateT \in \actionStateS \actionAccessibilityAgent{\agentA}$
there exists $\actionStateT[\prime] \in \actionStateS[\prime] \actionAccessibilityAgent[\prime]{\agentA}$
such that $(\actionStateT, \actionStateT[\prime]) \in \bisimulation$.
\paragraph{back-$\agentA$}
For every $\actionStateT[\prime] \in \actionStateS[\prime] \actionAccessibilityAgent[\prime]{\agentA}$
there exists $\actionStateT \in \actionStateS \actionAccessibilityAgent{\agentA}$
such that $(\actionStateT, \actionStateT[\prime]) \in \bisimulation$.
If $(\actionStateS, \actionStateS[\prime]) \in \bisimulation$ then we call
$\pointedActionModel{\actionStateS}$ and $\pointedActionModel[\prime]{\actionStateS[\prime]}$
{\em bisimilar} and write
$\pointedActionModel{\actionStateS} \bisimilar \pointedActionModel[\prime]{\actionStateS[\prime]}$.
\end{definition}
\begin{proposition}
The relation $\bisimilar$ is an equivalence relation on action models.
\end{proposition}
\begin{proposition}
Let $\pointedModel{\stateS}, \pointedModel[\prime]{\stateS[\prime]} \in \classK$ be Kripke models such that
$\pointedModel{\stateS} \bisimilar \pointedModel[\prime]{\stateS[\prime]}$,
and let $\pointedActionModel{\actionStateS}, \pointedActionModel[\prime]{\actionStateS[\prime]} \in \classAM$ be action models such that
$\pointedActionModel{\actionStateS} \bisimilar \pointedActionModel[\prime]{\actionStateS[\prime]}$.
Then
$(\pointedModel{\stateS} \exec \pointedActionModel{\actionStateS}) \bisimilar (\pointedModel[\prime]{\stateS[\prime]} \exec \pointedActionModel[\prime]{\actionStateS[\prime]})$.
\end{proposition}
\begin{proposition}
Let $\pointedModel{\stateS} \in \classK$ be a Kripke model
and let $\pointedActionModel{\actionStateS}, \pointedActionModel[\prime]{\actionStateS[\prime]} \in \classAM$ be action models.
Then
$((\pointedModel{\stateS} \exec \pointedActionModel{\actionStateS}) \exec \pointedActionModel[\prime]{\actionStateS[\prime]}) \bisimilar (\pointedModel{\stateS} \exec (\pointedActionModel{\actionStateS} \exec \pointedActionModel[\prime]{\actionStateS[\prime]}))$.
\end{proposition}
These results are shown by Baltag, Moss and Solecki~\cite{baltag1998,baltag2005}.
\begin{definition}[$n$-bisimilarity of action models]
Let $n \in \mathbb{N}$,
and let $\pointedActionModel{\actionStateS} = \pointedActionModelTuple{\actionStateS} \in \classAM$
and $\pointedActionModel[\prime]{\actionStateS[\prime]} = \pointedActionModelTuple[\prime]{\actionStateS[\prime]} \in \classAM$
be action models.
We say that $\pointedActionModel{\actionStateS}$ is {\em $n$-bisimilar}
to $\pointedActionModel[\prime]{\actionStateS[\prime]}$,
and write $\pointedActionModel{\actionStateS} \bisimilar_n \pointedActionModel[\prime]{\actionStateS[\prime]}$,
if and only if for every $\agentA \in \agents$ the following conditions hold:
\paragraph{atoms}
$\proves \actionPrecondition(\actionStateS) \iff \actionPrecondition[\prime](\actionStateS[\prime])$
\paragraph{forth-$n$-$\agentA$}
If $n > 0$ then
for every $\actionStateT \in \actionStateS \actionAccessibilityAgent{\agentA}$
there exists $\actionStateT[\prime] \in \actionStateS[\prime] \actionAccessibilityAgent[\prime]{\agentA}$
such that $\pointedActionModel{\actionStateT} \bisimilar_{(n - 1)} \pointedActionModel[\prime]{\actionStateT[\prime]}$.
\paragraph{back-$n$-$\agentA$}
If $n > 0$ then
for every $\actionStateT[\prime] \in \actionStateS[\prime] \actionAccessibilityAgent[\prime]{\agentA}$
there exists $\actionStateT \in \actionStateS \actionAccessibilityAgent{\agentA}$
such that $\pointedActionModel{\actionStateT} \bisimilar_{(n - 1)} \pointedActionModel[\prime]{\actionStateT[\prime]}$.
\end{definition}
\begin{proposition}
The relation $\bisimilar_n$ is an equivalence relation on action models.
\end{proposition}
\begin{proposition}
Let $m, n \in \mathbb{N}$ and
let $\pointedActionModel{\actionStateS}, \pointedActionModel[\prime]{\actionStateS[\prime]} \in \classAM$ be action models
such that $\pointedActionModel{\actionStateS} \bisimilar_n \pointedActionModel[\prime]{\actionStateS[\prime]}$.
If $m < n$ then $\pointedActionModel{\actionStateS} \bisimilar_m \pointedActionModel[\prime]{\actionStateS[\prime]}$.
\end{proposition}
\begin{proposition}
Let $n \in \mathbb{N}$,
let $\pointedModel{\stateS}, \pointedModel[\prime]{\stateS[\prime]} \in \classK$ be Kripke models such that
$\pointedModel{\stateS} \bisimilar_n \pointedModel[\prime]{\stateS[\prime]}$,
and let $\pointedActionModel{\actionStateS}, \pointedActionModel[\prime]{\actionStateS[\prime]} \in \classAM$ be action models such that
$\pointedActionModel{\actionStateS} \bisimilar_n \pointedActionModel[\prime]{\actionStateS[\prime]}$.
Then
$(\pointedModel{\stateS} \exec \pointedActionModel{\actionStateS}) \bisimilar_n (\pointedModel[\prime]{\stateS[\prime]} \exec \pointedActionModel[\prime]{\actionStateS[\prime]})$.
\end{proposition}
\begin{proposition}
Let $\pointedActionModel{\actionStateS}, \pointedActionModel[\prime]{\actionStateS[\prime]} \in \classAM$ be action models.
Then $\pointedActionModel{\actionStateS} \bisimilar \pointedActionModel[\prime]{\actionStateS[\prime]}$
if and only if for every $n \in \mathbb{N}$:
$\pointedActionModel{\actionStateS} \bisimilar_n \pointedActionModel[\prime]{\actionStateS[\prime]}$.
\end{proposition}
These results follow from similar reasoning to the results for $n$-bisimilarity of Kripke models.
\begin{definition}[$\agentsB$-bisimilarity of action models]
Let $\pointedActionModel{\actionStateS} = \pointedActionModelTuple{\actionStateS} \in \classAM$
and $\pointedActionModel[\prime]{\actionStateS[\prime]} = \pointedActionModelTuple[\prime]{\actionStateS[\prime]} \in \classAM$
be action models.
We say that $\pointedActionModel{\actionStateS}$ is {\em $\agentsB$-bisimilar}
to $\pointedActionModel[\prime]{\actionStateS[\prime]}$,
and write $\pointedActionModel{\actionStateS} \bisimilar_\agentsB \pointedActionModel[\prime]{\actionStateS[\prime]}$,
if and only if
for every $\agentB \in \agentsB$ the following conditions hold:
\paragraph{atoms}
$\proves \actionPrecondition(\actionStateS) \iff \actionPrecondition[\prime](\actionStateS[\prime])$
\paragraph{forth-$\agentB$}
For every $\actionStateT \in \actionStateS \actionAccessibilityAgent{\agentB}$
there exists $\actionStateT[\prime] \in \actionStateS[\prime] \actionAccessibilityAgent[\prime]{\agentB}$
such that $\pointedActionModel{\actionStateT} \bisimilar \pointedActionModel[\prime]{\actionStateT[\prime]}$.
\paragraph{back-$\agentB$}
For every $\actionStateT[\prime] \in \actionStateS[\prime] \actionAccessibilityAgent[\prime]{\agentB}$
there exists $\actionStateT \in \actionStateS \actionAccessibilityAgent{\agentB}$
such that $\pointedActionModel{\actionStateT} \bisimilar \pointedActionModel[\prime]{\actionStateT[\prime]}$.
\end{definition}
\begin{proposition}
Let $\pointedModel{\stateS}, \pointedModel[\prime]{\stateS[\prime]} \in \classK$ be Kripke models such that
$\pointedModel{\stateS} \bisimilar_\agentsB \pointedModel[\prime]{\stateS[\prime]}$,
and let $\pointedActionModel{\actionStateS}, \pointedActionModel[\prime]{\actionStateS[\prime]} \in \classAM$ be action models such that
$\pointedActionModel{\actionStateS} \bisimilar_\agentsB \pointedActionModel[\prime]{\actionStateS[\prime]}$.
Then
$(\pointedModel{\stateS} \exec \pointedActionModel{\actionStateS}) \bisimilar_\agentsB (\pointedModel[\prime]{\stateS[\prime]} \exec \pointedActionModel[\prime]{\actionStateS[\prime]})$.
\end{proposition}
This result follows from similar reasoning to the results for $\agentsB$-bisimilarity of Kripke models.
\begin{definition}[Axiomatisation \axiomAmlK]\label{aml-k-axioms}
The axiomatisation \axiomAmlK{} is a substitution schema consisting of the
rules and axioms of \axiomK{} along with the axioms:
$$
\begin{array}{rl}
{\bf AS} & \proves \allacts{\pointedActionModel{\actionStatesT} \exec \pointedActionModel[\prime]{\actionStatesT[\prime]}} \phi \iff \allacts{\pointedActionModel[\prime]{\actionStatesT[\prime]}} \allacts{\pointedActionModel{\actionStatesT}} \phi\\
{\bf AU} & \proves \allacts{\pointedActionModel{\actionStatesT}} \phi \iff \bigwedge_{\actionStateT \in \actionStatesT} \allacts{\pointedActionModel{\actionStateT}} \phi\\
{\bf AP} & \proves \allacts{\pointedActionModel{\actionStateT}} \atomP \iff (\actionPrecondition(\actionStateT) \implies \atomP) \text{ for $\atomP \in \atoms$}\\
{\bf AN} & \proves \allacts{\pointedActionModel{\actionStateT}} \neg \phi \iff (\actionPrecondition(\actionStateT) \implies \neg \allacts{\pointedActionModel{\actionStateT}} \phi)\\
{\bf AC} & \proves \allacts{\pointedActionModel{\actionStateT}} (\phi \land \psi) \iff (\allacts{\pointedActionModel{\actionStateT}} \phi \land \allacts{\pointedActionModel{\actionStateT}} \psi)\\
{\bf AK} & \proves \allacts{\pointedActionModel{\actionStateT}} \necessary[\agentA] \phi \iff (\actionPrecondition(\actionStateT) \implies \necessary[\agentA] \allacts{\pointedActionModel{\actionStateT \actionAccessibilityAgent{\agentA}}} \phi)\\
\end{array}
$$
and the rule:
$$
\begin{array}{rl}
{\bf NecA} & \text{From $\proves \phi$ infer $\proves \allacts{\pointedActionModel{\actionStatesT}} \phi$}
\end{array}
$$
\end{definition}
\begin{proposition}\label{aml-k-soundness-completeness}
The axiomatisation \axiomAmlK{} is sound and complete for the logic \logicAmlK{}.
\end{proposition}
\begin{proposition}\label{aml-k-expressive-equivalence}
The logic \logicAmlK{} is expressively equivalent to the logic \logicK{}.
\end{proposition}
These results are shown by Baltag, Moss and Solecki~\cite{baltag1998,baltag2005}.
We note that the completeness and expressive equivalence results follow from
the fact that \axiomAmlK{} forms a set of reduction axioms which give a
provably correct translation from \langAml{} to \lang{}.
We note that the same results hold for the logics \logicAmlKFF{} and
\logicAmlS{} if we extend \axiomAmlK{} with the additional axioms of
\axiomKFF{} and \axiomS{} and restrict the language to only include
$\classAM_\classKFF$ and $\classAM_\classS$ action models respectively, given
the following results.
\begin{proposition}\label{aml-kff-domain}
$\model \in \classK$ and $\actionModel \in \classAM_\classKFF$ if and only if $\model \exec \actionModel \in \classKFF$.
\end{proposition}
\begin{proposition}\label{aml-s-domain}
$\model \in \classK$ and $\actionModel \in \classAM_\classS$ if and only if $\model \exec \actionModel \in \classS$.
\end{proposition}
\begin{definition}[Simulation and refinement]\label{refinements}
Let $\model, \model[\prime] \in \classK$ be Kripke models.
A non-empty relation $\refinementRel \subseteq \states \times \states[\prime]$
is a {\em simulation} if and only if it satisfies {\bf atoms} and {\bf forth-$\agentA$} for every $\agentA \in \agents$.
If $(\stateS, \stateS[\prime]) \in \refinementRel$ then we call $\pointedModel[\prime]{\stateS[\prime]}$ a {\em simulation} of $\pointedModel{\stateS}$
and call $\pointedModel{\stateS}$ a {\em refinement} of $\pointedModel[\prime]{\stateS[\prime]}$.
We write $\pointedModel[\prime]{\stateS[\prime]} \simulation \pointedModel{\stateS}$
or equivalently $\pointedModel{\stateS} \refinement \pointedModel[\prime]{\stateS[\prime]}$.
\end{definition}
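On finite models the largest relation satisfying {\bf atoms} and {\bf forth-$\agentA$} can be computed by iteratively discarding violating pairs, in the style of the standard fixpoint check for bisimulation. The model encoding below is our own illustrative choice and assumes both models share the same agent set.

```python
def refines(M1, s1, M2, s2):
    """Check whether (M1, s1) is a refinement of (M2, s2),
    i.e. whether (M2, s2) simulates (M1, s1).

    A model is a dict with 'states', 'val' (state -> frozenset of atoms)
    and 'acc' (agent -> set of (state, state) pairs); finite models only.
    """
    # start from all pairs satisfying the atoms clause
    R = {(s, t) for s in M1['states'] for t in M2['states']
         if M1['val'][s] == M2['val'][t]}
    changed = True
    while changed:
        changed = False
        for (s, t) in set(R):
            for agent in M1['acc']:
                succ_s = [v for (u, v) in M1['acc'][agent] if u == s]
                succ_t = [v for (u, v) in M2['acc'][agent] if u == t]
                # forth: every agent-successor of s must be matched from t
                if not all(any((v, w) in R for w in succ_t) for v in succ_s):
                    R.discard((s, t))
                    changed = True
                    break
    return (s1, s2) in R
```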
\begin{proposition}
The relation $\refinement$ is a preorder on Kripke models.
\end{proposition}
\begin{proposition}
Let $\pointedModel{\stateS} \in \classK$ and $\pointedActionModel{\actionStateS} \in \classAM$.
Then $\pointedModel{\stateS} \exec \pointedActionModel{\actionStateS} \refinement \pointedModel{\stateS}$.
\end{proposition}
These results are shown by van Ditmarsch and French~\cite{vanditmarsch2009}.
\begin{definition}[Semantics of arbitrary action model logic]
Let \classC{} be a class of Kripke models
and let $\model \in \classC$ be a Kripke model.
The interpretation of $\phi \in \langAaml$ in the logic \logicAamlC{}
is the same as its interpretation in the action model logic \logicAmlC{} given in
Definition~\ref{aml-semantics} with the additional inductive case:
\begin{eqnarray*}
\pointedModel{\stateS} \entails \allrefs \phi &\text{ iff }& \text{for every } \pointedModel[\prime]{\stateS[\prime]} \in \classC \text{ such that } \pointedModel[\prime]{\stateS[\prime]} \refinement \pointedModel{\stateS}: \pointedModel[\prime]{\stateS[\prime]} \entails \phi
\end{eqnarray*}
\end{definition}
The semantics of arbitrary action model logic are given by Hales~\cite{hales2013},
which are a combination of the semantics of action model logic of Baltag, Moss and Solecki~\cite{baltag1998,baltag2005}
and the semantics of refinement modal logic of van Ditmarsch and French~\cite{vanditmarsch2009}.
As noted earlier, the action model logics \logicAmlK{}, \logicAmlKFF{} and
\logicAmlS{} are expressively equivalent to their underlying modal logics via
a provably correct translation. Similarly it was shown by Bozzelli, et
al.~\cite{bozzelli2012a} and Hales, French and Davies~\cite{hales2012} that
the refinement modal logics \logicRmlK{}, \logicRmlKD{} and \logicRmlS{} are
expressively equivalent to their underlying modal logics, also via a provably
correct translation. We note that the same result for \logicRmlKFF{} can be
shown similarly to the result for \logicRmlKD{}. In axiomatising
\logicAamlK{}, Hales~\cite{hales2013} simply noted that the rules and axioms
of \axiomAmlK{} and \axiomRmlK{} are sound in \logicAamlK{} and that the
provably correct translations for \logicAmlK{} and \logicRmlK{} can be
simply combined to form a provably correct translation for \logicAamlK{}.
We reproduce the axiomatisation for \logicAamlK{} here, and note that
reasoning similar to that of~\cite{hales2013} gives sound and complete
axiomatisations and provably correct translations for \logicAamlKFF{} and
\logicAamlS{}, which we also list here.
\begin{definition}[Disjunctive normal form]\label{dnf}
A formula in {\em disjunctive normal form} is defined by the following abstract syntax:
$$
\phi ::= \pi \land \bigwedge_{\agentB \in \agentsB} \covers_\agentB \Gamma_\agentB \mid \phi \lor \phi
$$
where $\pi \in \langP$, $\agentsB \subseteq \agents$ and for every $\agentB \in \agentsB$,
$\Gamma_\agentB$ is a finite set of formulae in disjunctive normal form.
\end{definition}
\begin{proposition}
Every formula of \lang{} is equivalent to a formula in disjunctive normal
form under the semantics of \logicK{}.
\end{proposition}
This is shown by van Ditmarsch, French and Pinchinat~\cite{vanditmarsch2010}.
\begin{definition}[Axiomatisation \axiomAamlK{}]
The axiomatisation \axiomAamlK{} is a substitution schema consisting of
the rules and axioms of \axiomAmlK{} along with the axioms:
$$
\begin{array}{rl}
{\bf R} & \allrefs (\phi \implies \psi) \implies (\allrefs \phi \implies \allrefs \psi)\\
{\bf RP} & \allrefs \pi \iff \pi \text{ where } \pi \in \langP\\
{\bf RK} & \somerefs \covers_\agentA \Gamma_\agentA \iff \bigwedge_{\gamma \in \Gamma_\agentA} \possible[\agentA] \somerefs \gamma\\
{\bf RDist} & \somerefs \bigwedge_{\agentA \in \agents} \covers_\agentA \Gamma_\agentA \iff \bigwedge_{\agentA \in \agents} \somerefs \covers_\agentA \Gamma_\agentA
\end{array}
$$
and the rule:
$$
\begin{array}{rl}
{\bf NecR} & \text{From $\proves \phi$ infer $\proves \allrefs \phi$}
\end{array}
$$
\end{definition}
The additional axioms for \axiomAamlK{} are the additional axioms from \axiomRmlK{} for refinement modal logic, given by Bozzelli, et al.~\cite{bozzelli2012a}.
\begin{proposition}
The axiomatisation \axiomAamlK{} is sound and complete for the logic \logicAamlK{}.
\end{proposition}
\begin{proposition}
The logic \logicAamlK{} is expressively equivalent to the logic \logicK{}.
\end{proposition}
These results are shown by Hales~\cite{hales2013}.
\begin{definition}[Alternating disjunctive normal form]\label{adnf}
A formula in {\em $\agentA$-alternating disjunctive normal form} is defined by the following abstract syntax:
$$
\phi ::= \pi \land \bigwedge_{\agentB \in \agentsB} \covers_\agentB \Gamma_\agentB \mid \phi \lor \phi
$$
where $\pi \in \langP$, $\agentsB \subseteq \agents \setminus \{\agentA\}$ and for every $\agentB \in \agentsB$,
$\Gamma_\agentB$ is a finite set of formulae in $\agentB$-alternating disjunctive normal form.
A formula in {\em alternating disjunctive normal form} is defined by the following abstract syntax:
$$
\phi ::= \pi \land \bigwedge_{\agentB \in \agentsB} \covers_\agentB \Gamma_\agentB \mid \phi \lor \phi
$$
where $\pi \in \langP$, $\agentsB \subseteq \agents$ and for every $\agentB \in \agentsB$,
$\Gamma_\agentB$ is a finite set of formulae in $\agentB$-alternating disjunctive normal form.
\end{definition}
\begin{proposition}
Every formula of \lang{} is equivalent to a formula in alternating
disjunctive normal form under the semantics of \logicKFF{}.
\end{proposition}
This is shown by Hales, French and Davies~\cite{hales2012} for \logicKD{},
however the same reasoning applies to \logicKFF{}.
\begin{definition}[Axiomatisation \axiomAamlKFF{}]
The axiomatisation \axiomAamlKFF{} is a substitution schema consisting of
the rules and axioms of \axiomAmlKFF{} along with the rules and axioms
{\bf R}, {\bf RP} and {\bf NecR} of \axiomAamlK{} and the axioms:
$$
\begin{array}{rl}
{\bf RK45} & \somerefs \covers_\agentA \Gamma_\agentA \iff \bigwedge_{\gamma \in \Gamma_\agentA} \possible[\agentA] \somerefs \gamma\\
{\bf RDist} & \somerefs \bigwedge_{\agentA \in \agents} \covers_\agentA \Gamma_\agentA \iff \bigwedge_{\agentA \in \agents} \somerefs \covers_\agentA \Gamma_\agentA
\end{array}
$$
where for every $\agentA \in \agents$, $\Gamma_\agentA$ is a finite set
of $\agentA$-alternating disjunctive normal formulae.
\end{definition}
The additional axioms for \axiomAamlKFF{} are adapted from the additional
axioms from \axiomRmlKD{} for refinement doxastic logic, given by Hales, French
and Davies~\cite{hales2012}. The axioms do not require that each
$\Gamma_\agentA$ be non-empty, which is due to the lack of seriality in the
setting of \classKFF{}.
\begin{proposition}
The axiomatisation \axiomAamlKFF{} is sound and complete for the logic \logicAamlKFF{}.
\end{proposition}
\begin{proposition}
The logic \logicAamlKFF{} is expressively equivalent to the logic \logicKFF{}.
\end{proposition}
These results follow from similar reasoning to the same results in \logicAamlK{}.
\begin{definition}[Explicit formulae]\label{explicit}
Let $\pi \in \langP$ be a propositional formula,
let $\gamma^0 \in \lang$ be a modal formula
and for every $\agentA \in \agents$
let $\Gamma_\agentA \subseteq \lang$
be a finite set of formulae such that $\gamma^0 \in \Gamma_\agentA$.
Let $\Psi = \{\psi \mid \agentA \in \agents, \gamma \in \Gamma_\agentA, \psi \leq \gamma\}$
be the set of subformulae of the formulae in each set $\Gamma_\agentA$.
Finally let $\phi$ be a formula of the form
$$
\phi = \pi \land \gamma^0 \land \bigwedge_{\agentA \in \agents} \covers_\agentA \Gamma_\agentA
$$
Then $\phi$ is an {\em explicit formula} if and only if the following conditions hold:
\begin{enumerate}
\item For every $\agentA \in \agents$, $\gamma \in \Gamma_\agentA$, $\psi \in \Psi$:
either $\proves_\axiomS \gamma \implies \psi$ or $\proves_\axiomS \gamma \implies \neg \psi$.
\item For every $\agentA \in \agents$, $\gamma \in \Gamma_\agentA$, $\necessary[\agentA] \psi \in \Psi$:
$\proves_\axiomS \gamma \implies \necessary[\agentA] \psi$ if and only if
for every $\gamma' \in \Gamma_\agentA$: $\proves_\axiomS \gamma' \implies \psi$.
\end{enumerate}
\end{definition}
\begin{proposition}
Every formula of \lang{} is equivalent to a disjunction of explicit
formulae under the semantics of \logicS{}.
\end{proposition}
This is shown by Hales, French and Davies~\cite{hales2012}.
\begin{definition}[Axiomatisation \axiomAamlS{}]
The axiomatisation \axiomAamlS{} is a substitution schema consisting of
the rules and axioms of \axiomAmlS{} along with the rules and axioms
{\bf R}, {\bf RP} and {\bf NecR} of \axiomAamlK{} and the axioms:
$$
\begin{array}{rl}
{\bf RS5} & \somerefs (\gamma^0 \land \covers_\agentA \Gamma_\agentA) \iff \somerefs \gamma^0 \land \bigwedge_{\gamma \in \Gamma_\agentA} \possible[\agentA] \somerefs \gamma\\
{\bf RDist} & \somerefs (\gamma^0 \land \bigwedge_{\agentA \in \agents} \covers_\agentA \Gamma_\agentA) \iff \bigwedge_{\agentA \in \agents} \somerefs (\gamma^0 \land \covers_\agentA \Gamma_\agentA)
\end{array}
$$
where $\gamma^0 \land \bigwedge_{\agentA \in \agents} \covers_\agentA \Gamma_\agentA$ is an explicit formula
and for every $\agentA \in \agents$, $\gamma^0 \land \covers_\agentA \Gamma_\agentA$ is an explicit formula.
\end{definition}
\begin{proposition}
The axiomatisation \axiomAamlS{} is sound and complete for the logic \logicAamlS{}.
\end{proposition}
\begin{proposition}
The logic \logicAamlS{} is expressively equivalent to the logic \logicS{}.
\end{proposition}
These results follow from similar reasoning to the same results in \logicAamlK{}.
\section{Syntax}\label{syntax}
\begin{definition}[Language of arbitrary action formula logic]
The language \langAafl{} of arbitrary action formula logic is inductively defined as:
$$
\phi ::= \atomP \mid
\neg \phi \mid
(\phi \land \phi) \mid
\necessary[\agentA] \phi \mid
\allacts{\alpha} \phi \mid
\allrefs \phi
$$
where $\atomP \in \atoms$, $\agentA \in \agents$ and
$\alpha \in \langAaflAct{}$, and where the language \langAaflAct{} of
arbitrary action formulae is inductively defined as:
$$
\alpha ::= \test{\phi} \mid
\alpha \choice \alpha \mid
\alpha \compose \alpha \mid
\learns_\agentsB (\alpha, \alpha)
$$
where $\phi \in \langAafl{}$ and $\emptyset \subset \agentsB \subseteq \agents$.
\end{definition}
We use all of the standard abbreviations for arbitrary action model logic, in
addition to the abbreviations
$\learns_\agentsB \alpha ::= \learns_\agentsB (\alpha, \alpha)$ and
$\learns_\agentA (\alpha, \beta) ::= \learns_{\{\agentA\}} (\alpha, \beta)$.
We denote non-deterministic choice ($\choice$) over a finite set of action
formulae $\Delta \subseteq \langAaflAct$ by $\bigchoice \Delta$
and we denote sequential execution ($\compose$) of a finite, non-empty
sequence of action formulae $(\alpha_i)_{i=0}^{n} \in \langAaflAct^{n+1}$
by $\bigcompose (\alpha_i)_{i=0}^{n}$ and define them in the obvious way.
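The `obvious' definitions of $\bigchoice$ and $\bigcompose$ amount to folding the binary operators over the collection. A minimal sketch, using a nested-tuple encoding of action formulae that is our own illustrative choice:

```python
from functools import reduce

def big_choice(actions):
    """Fold non-deterministic choice over a non-empty finite collection.

    Action formulae are encoded as nested tuples, e.g. ('test', phi),
    ('choice', a, b), ('compose', a, b) -- an illustrative encoding,
    not notation from the text.
    """
    return reduce(lambda a, b: ('choice', a, b), actions)

def big_compose(actions):
    """Fold sequential execution over a non-empty finite sequence,
    preserving the order of the sequence."""
    return reduce(lambda a, b: ('compose', a, b), actions)
```

Since $\choice$ is semantically commutative and associative, the association order chosen by the fold does not matter for $\bigchoice \Delta$; for $\bigcompose$ the left fold preserves the execution order of the sequence.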
We refer to the languages \langAfl{} of action formula logic and \langAflAct{} of action formulae, which are \langAafl{} and \langAaflAct{} respectively, both without the $\allrefs$ operator.
As in the action model logic~\cite{baltag2005}, the intended meaning of the
operator $\allacts{\alpha} \phi$ is that ``$\phi$ is true in the result of
any successful execution of the action $\alpha$''. In the following section
we define the semantics of the action formula logic in terms of action model
execution. For each setting of \classK{}, \classKFF{} and \classS{} we
provide a function $\tau_\classC : \langAflAct \to \classAM$ of translating
action formulae from \langAflAct{} into action models. The result of
executing an action $\alpha \in \langAflAct{}$ is determined by translating
$\alpha$ into an action model $\tau_\classC(\alpha) \in \classAM_\classC$, and then executing
the action model in the usual way.
In each setting we have attempted to define the translation from action
formulae into action models in such a way that the action formulae carry an
intuitive description of the action that is performed by the corresponding
action model. We call the $\test{}$ operator the test operator, and describe
the action $\test{\phi}$ as a test for $\phi$. A test is intended to restrict
the states in which an action can successfully execute to states where the
condition $\phi$ is true initially, but otherwise leaves the state unchanged.
We call the $\choice$ operator the non-deterministic choice operator, and
describe the action $\alpha \choice \beta$ as a non-deterministic choice
between $\alpha$ and $\beta$. We call the $\compose$ operator the sequential
execution operator, and describe the action $\alpha \compose \beta$ as an
execution of $\alpha$ followed by $\beta$. Finally we call $\learns_\agentsB$
the learning operator, and describe the action $\learns_\agentsB (\alpha,
\beta)$ as the agents in $\agentsB$ learning that the actions $\alpha$ or
$\beta$ occurred.
This action is intended to result in the agents $\agentsB$
knowing or believing what would be true if $\alpha$ or $\beta$ were executed.
For example, if a consequence of executing $\alpha$ is that $\phi$ is true in
the result, then the intention is that a consequence of executing
$\learns_\agentA (\alpha, \alpha)$ is that $\knows_\agentA \phi$ is true in
the result. As we will see, this property is generally true in \logicAflK{},
however due to the extra frame conditions of \classKFF{} and \classS{} it is
only true for some formulae $\phi$ in \logicAflKFF{} and \logicAflS{}.
\begin{example}\label{grant-example-formula}
If $\atomP$ stands for the proposition ``the grant application was
successful'' then the action described in Example~\ref{grant-example}
might be written in the form of an action formula as:
\begin{align*}
\alpha = &\learns_{Ed} (\test{\atomP}) \compose\\
&\learns_{James} (\learns_{Ed} \test{\atomP} \choice \learns_{Ed} \test{\neg \atomP} \choice \learns_{Tim} \test{\atomP} \choice \learns_{Tim} \test{\neg \atomP}) \compose\\
&\learns_{Tim} ((\test{\neg \atomP} \compose \learns_{James} \test{\neg \atomP}) \choice \test{\top})
\end{align*}
\end{example}
\section{Semantics}\label{semantics}
We now define the semantics of arbitrary action formula logic. As mentioned
earlier, the semantics are defined by translating action formulae into action
models. The translation used varies in each class of \classK{}, \classKFF{}
and \classS{} that we work in, according to the frame conditions in each
class. Therefore our semantics are parameterised by a function
$\tau_\classC: \langAflAct \to \classAM$ that will vary according to the
class of Kripke models.
\begin{definition}[Semantics of arbitrary action formula logic]
Let \classC{} be a class of Kripke models, let $\tau_\classC :
\langAflAct \to \classAM$ be a function from action formulae to
multi-pointed action models, and let $\model = \modelTuple \in \classC$
be a Kripke model.
Then the interpretation of $\phi \in \langAafl$ in the logic
$\logicAaflC$ is the same as its interpretation in modal logic given in
Definition~\ref{ml-semantics}, with the additional inductive cases:
\begin{eqnarray*}
\pointedModel{\stateS} \entails \allacts{\alpha} \phi &\text{ iff }& \pointedModel{\stateS} \exec \tau_\classC(\alpha) \in \classC \text{ implies } \pointedModel{\stateS} \exec \tau_\classC(\alpha) \entails \phi\\
\pointedModel{\stateS} \entails \allrefs \phi &\text{ iff }& \text{for every } \pointedModel[\prime]{\stateS[\prime]} \in \classC \text{ such that } \pointedModel[\prime]{\stateS[\prime]} \refinement \pointedModel{\stateS}: \pointedModel[\prime]{\stateS[\prime]} \entails \phi
\end{eqnarray*}
where action model execution $\exec$ is as defined in Definition~\ref{aml-semantics}
and the refinement relation is defined in Definition~\ref{refinements}.
\end{definition}
We note that the semantics of arbitrary action formula logic \logicAaflC{} are very similar
to the semantics of arbitrary action model logic \logicAamlC{}~\cite{hales2013}.
We generalise the semantics to the classes of \classK{}, \classKFF{} and
\classS{} by introducing the parameterised class \classC{} and restricting
successful updates to those that result in \classC{} models as in the
approach of Balbiani et al.~\cite{balbiani2012}.
The difference is that, as actions are specified in \langAafl{} formulae as
action formulae, the semantics must first translate the action formulae
into action models before performing action model execution. As such there is
a semantically correct translation from \langAafl{} formulae to \langAaml{}
formulae (by replacing occurrences of $\alpha$ with $\tau_\classC(\alpha)$),
and any validities, axioms or results from arbitrary action model logic also
apply in this setting if the language is restricted to action models that are
definable by action formulae. Therefore for the current section and the
following sections concerning the axiomatisations
(Section~\ref{axiomatisation}) and correspondence results
(Section~\ref{correspondence}), we will deal only with the action formula
logic, rather than the full arbitrary action formula logic, focussing on the
differences and correspondences between action formulae and action models,
rather than getting distracted by the refinement quantifiers which behave
identically between each logic. We return to the full arbitrary action
formula logic in Section~\ref{synthesis} for the synthesis results.
We give the following general result.
\begin{proposition}
Let $\classC$ be a class of Kripke models. For every $\phi \in \langAafl$
there exists $\phi' \in \langAaml$
such that for every $\pointedModel{\statesT} \in \classC$:
$\pointedModel{\statesT} \entails_\logicAaflC \phi$ if and only if
$\pointedModel{\statesT} \entails_\logicAamlC \phi'$.
\end{proposition}
In the following subsections we will give definitions for $\tau_\classK$,
$\tau_\classKFF$ and $\tau_\classS$. These functions vary according to the
class of Kripke models being used. When the class is clear from context, then
we will simply write $\tau$ instead of $\tau_\classC$.
We begin by giving a definition of $\tau$ for translating actions involving
non-deterministic choice and sequential execution. These definitions are
common to all of the settings we are working in.
\begin{definition}[Non-deterministic choice]\label{afl-choice}
Let $\classC \in \{\classK, \classKFF, \classS\}$
and let $\alpha, \beta \in \langAflAct$
where $\tau_\classC(\alpha) = \pointedActionModel[\alpha]{\actionStatesT[\alpha]} = \pointedActionModelTuple[\alpha]{\actionStatesT[\alpha]}$
and $\tau_\classC(\beta) = \pointedActionModel[\beta]{\actionStatesT[\beta]} = \pointedActionModelTuple[\beta]{\actionStatesT[\beta]}$
such that $\actionStates[\alpha]$ and $\actionStates[\beta]$ are disjoint.
We define $\tau_\classC(\alpha \choice \beta) = \pointedActionModel{\actionStatesT} = \pointedActionModelTuple{\actionStatesT}$ where:
\begin{eqnarray*}
\actionStates &=& \actionStates[\alpha] \cup \actionStates[\beta]\\
\actionAccessibilityAgent{\agentA} &=& \actionAccessibilityAgent[\alpha]{\agentA} \cup \actionAccessibilityAgent[\beta]{\agentA} \text{ for } \agentA \in \agents\\
\actionPrecondition &=& \actionPrecondition[\alpha] \cup \actionPrecondition[\beta]\\
\actionStatesT &=& \actionStatesT[\alpha] \cup \actionStatesT[\beta]
\end{eqnarray*}
\end{definition}
\begin{definition}[Sequential execution]\label{afl-sequential}
Let $\classC \in \{\classK, \classKFF, \classS\}$,
and let $\alpha, \beta \in \langAflAct$ where
$\tau_\classC(\alpha) = \pointedActionModel[\alpha]{\actionStatesT[\alpha]} = \pointedActionModelTuple[\alpha]{\actionStatesT[\alpha]}$ and
$\tau_\classC(\beta) = \pointedActionModel[\beta]{\actionStatesT[\beta]} = \pointedActionModelTuple[\beta]{\actionStatesT[\beta]}$.
We define $\tau_\classC(\alpha \compose \beta) = \pointedActionModel[\alpha]{\actionStatesT[\alpha]} \exec \pointedActionModel[\beta]{\actionStatesT[\beta]}$.
\end{definition}
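As an illustrative sketch only (the dictionary encoding of action models, with states, per-agent relations, precondition formulas as strings, and designated states, is our own assumption and not part of the formal development), the component-wise union of Definition~\ref{afl-choice} can be rendered in Python, renaming states first so that the two models are disjoint:

```python
def tag(am, label):
    """Rename every state of an action model with a label, so that two
    models can be made disjoint before taking their union."""
    f = lambda s: (label, s)
    return {
        "states": {f(s) for s in am["states"]},
        "rel": {a: {(f(s), f(t)) for (s, t) in r} for a, r in am["rel"].items()},
        "pre": {f(s): p for s, p in am["pre"].items()},
        "designated": {f(s) for s in am["designated"]},
    }

def choice(am_a, am_b):
    """Non-deterministic choice: the component-wise union of
    Definition afl-choice, applied to disjoint (tagged) copies."""
    a, b = tag(am_a, "l"), tag(am_b, "r")
    agents = set(a["rel"]) | set(b["rel"])
    return {
        "states": a["states"] | b["states"],
        "rel": {ag: a["rel"].get(ag, set()) | b["rel"].get(ag, set())
                for ag in agents},
        "pre": {**a["pre"], **b["pre"]},
        "designated": a["designated"] | b["designated"],
    }
```

Sequential execution $\alpha \compose \beta$ would additionally require implementing action model execution itself, which this sketch omits.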
We give some properties of non-deterministic choice and sequential execution
of action formulae.
\begin{proposition}\label{afl-choice-sequential-validities}
Let $\alpha, \beta, \gamma \in \langAflAct$ and $\phi \in \langAfl$. Then the following are valid in \logicAflK{}, \logicAflKFF{} and \logicAflS{}:
\begin{eqnarray*}
&&\entails \allacts{\alpha \choice \beta} \phi \iff (\allacts{\alpha} \phi \land \allacts{\beta} \phi) \label{afl-axiom-choice}\\
&&\entails \allacts{\alpha \compose \beta} \phi \iff \allacts{\alpha} \allacts{\beta} \phi \label{afl-axiom-sequential}\\
&&\entails \allacts{\alpha \choice \alpha} \phi \iff \allacts{\alpha} \phi\\
&&\entails \allacts{\alpha \choice \beta} \phi \iff \allacts{\beta \choice \alpha} \phi\\
&&\entails \allacts{(\alpha \choice \beta) \choice \gamma} \phi \iff \allacts{\alpha \choice (\beta \choice \gamma)} \phi\\
&&\entails \allacts{(\alpha \compose \beta) \compose \gamma} \phi \iff \allacts{\alpha \compose (\beta \compose \gamma)} \phi\\
&&\entails \allacts{(\alpha \choice \beta) \compose \gamma} \phi \iff \allacts{(\alpha \compose \gamma) \choice (\beta \compose \gamma)} \phi
\end{eqnarray*}
\end{proposition}
These validities follow trivially from the semantics of \logicAaflC{} and Definitions~\ref{afl-choice} and~\ref{afl-sequential}.
In the following subsections we give definitions of $\tau_\classC$ for
translating action formulae involving tests and learning in the settings of
\classK{}, \classKFF{} and \classS{}. We note that in each subsection the
constructions of action models used to define tests and learning closely
resemble the constructions of refinements used to show the soundness of
axioms in refinement modal logic~\cite{bozzelli2012a,hales2012}.
\subsection{\classK{}}
\begin{definition}[Test]\label{afl-k-test}
Let $\phi \in \langAfl$.
We define $\tau(\test{\phi}) = \pointedActionModel{\actionStatesT} = \pointedActionModelTuple{\actionStatesT}$ where:
\begin{eqnarray*}
\actionStates &=& \{\actionStateTest, \actionStateSkip\}\\
\actionAccessibilityAgent{\agentA} &=& \{(\actionStateTest, \actionStateSkip), (\actionStateSkip, \actionStateSkip)\} \text{ for } \agentA \in \agents\\
\actionPrecondition &=& \{(\actionStateTest, \phi), (\actionStateSkip, \top)\}\\
\actionStatesT &=& \{\actionStateTest\}
\end{eqnarray*}
\end{definition}
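Using an illustrative dictionary encoding of action models (an assumption of ours, not the paper's notation), the test construction of Definition~\ref{afl-k-test} can be sketched as:

```python
def k_test(phi, agents):
    """tau(?phi) in K: a designated 'test' state carrying the precondition
    phi, whose only successors lead to a reflexive, vacuous 'skip' state,
    so that no agent can tell that the test has taken place."""
    return {
        "states": {"test", "skip"},
        "rel": {a: {("test", "skip"), ("skip", "skip")} for a in agents},
        "pre": {"test": phi, "skip": "True"},
        "designated": {"test"},
    }
```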
\begin{definition}[Learning]\label{afl-k-learning}
Let $\alpha \in \langAflAct$ where
$\tau(\alpha) = \pointedActionModel[\alpha]{\actionStatesT[\alpha]} = \pointedActionModelTuple[\alpha]{\actionStatesT[\alpha]}$.
Let $\actionStateTest$ and $\actionStateSkip$ be new states not appearing in $\actionStates[\alpha]$.
We define $\tau(\learns_\agentsB (\alpha, \alpha)) = \pointedActionModel{\actionStatesT} = \pointedActionModelTuple{\actionStatesT}$ where:
\begin{eqnarray*}
\actionStates &=& \actionStates[\alpha] \cup \{\actionStateTest, \actionStateSkip\}\\
\actionAccessibilityAgent{\agentA} &=& \actionAccessibilityAgent[\alpha]{\agentA} \cup \{(\actionStateSkip, \actionStateSkip)\} \cup \{(\actionStateTest, \actionStateT[\alpha]) \mid \actionStateT[\alpha] \in \actionStatesT[\alpha]\} \text{ for } \agentA \in \agentsB\\
\actionAccessibilityAgent{\agentA} &=& \actionAccessibilityAgent[\alpha]{\agentA} \cup \{(\actionStateTest, \actionStateSkip), (\actionStateSkip, \actionStateSkip)\} \text{ for } \agentA \notin \agentsB\\
\actionPrecondition &=& \actionPrecondition[\alpha] \cup \{(\actionStateTest, \top), (\actionStateSkip, \top)\}\\
\actionStatesT &=& \{\actionStateTest\}
\end{eqnarray*}
We define $\tau(\learns_\agentsB (\alpha, \beta)) = \tau(\learns_\agentsB (\alpha \choice \beta, \alpha \choice \beta))$.
\end{definition}
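In the same illustrative dictionary encoding (the encoding and the fresh state names "test" and "skip" are assumptions of ours), the learning construction of Definition~\ref{afl-k-learning} can be sketched as:

```python
def k_learn(group, am, agents):
    """tau(L_B(alpha, alpha)) in K: a fresh designated 'test' state from
    which agents in B see the designated states of tau(alpha), while agents
    outside B see only the vacuous 'skip' state and so notice nothing.
    Assumes 'test' and 'skip' do not occur among the states of am."""
    rel = {}
    for a in agents:
        base = set(am["rel"].get(a, set())) | {("skip", "skip")}
        if a in group:
            rel[a] = base | {("test", t) for t in am["designated"]}
        else:
            rel[a] = base | {("test", "skip")}
    return {
        "states": set(am["states"]) | {"test", "skip"},
        "rel": rel,
        "pre": {**am["pre"], "test": "True", "skip": "True"},
        "designated": {"test"},
    }
```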
We note that the syntax of action formula logic defines the learning operator
as a binary operator that can be applied to two different action formulae,
however in the setting of \classK{} and \classKFF{} we only give a direct
definition of $\tau$ for actions of the form $\learns_\agentsB (\alpha, \alpha)$
and define the more general case in terms of this. Intuitively
$\learns_\agentsB (\alpha, \beta)$ is intended to represent an action where
the agents in $\agentsB$ learn that $\alpha$ or $\beta$ have occurred (i.e.
that $\alpha \choice \beta$ has occurred). The setting of \classS{}
corresponds to a notion of {\em knowledge}, where anything that an agent {\em
knows} must be true, and therefore anything that an agent {\em learns} must
also be true. So in an action where agents learn that $\alpha$ or $\beta$
have occurred, one of those actions must have actually occurred. Therefore in
\classS{} we describe the action $\learns_\agentsB (\alpha, \beta)$ as the
agents in $\agentsB$ learning that $\alpha$ or $\beta$ have occurred, when in
reality $\alpha$ has actually occurred. On the other hand, the settings of
\classK{} and \classKFF{} correspond more closely to a notion of {\em
belief}, where there is no requirement that what an agent {\em believes} is
true. So in an action where agents learn that $\alpha$ or $\beta$ have
occurred, neither of these actions must actually have occurred. Therefore in
the settings of \classK{} and \classKFF{} we make no distinction between
$\alpha$ and $\beta$ in a description of the action $\learns_\agentsB
(\alpha, \beta)$, hence the definition of $\tau$ given in these settings.
\subsection{\classKFF{}}
\begin{definition}[Test]\label{afl-kff-test}
Let $\phi \in \langAfl$. We define $\tau(\test{\phi})$ as in
Definition~\ref{afl-k-test} for \classK{}.
\end{definition}
\begin{definition}[Learning]\label{afl-kff-learning}
Let $\alpha \in \langAflAct$ where
$\tau(\alpha) = \pointedActionModel[\alpha]{\actionStatesT[\alpha]} = \pointedActionModelTuple[\alpha]{\actionStatesT[\alpha]}$.
Let $\actionStateTest$ and $\actionStateSkip$ be new states not appearing in $\actionStates[\alpha]$.
For every $\actionStateT[\alpha] \in \actionStatesT[\alpha]$ let $\proxyStateT[\alpha]$ be a new state not appearing in $\actionStates[\alpha]$.
We call each $\proxyStateT[\alpha]$ a {\em proxy state} for $\actionStateT[\alpha]$.
We define $\tau(\learns_\agentsB (\alpha, \alpha)) = \pointedActionModel{\actionStatesT} = \pointedActionModelTuple{\actionStatesT}$ where:
\begin{eqnarray*}
\actionStates &=& \actionStates[\alpha] \cup \{\actionStateTest, \actionStateSkip\} \cup \{\proxyStateT[\alpha] \mid \actionStateT[\alpha] \in \actionStatesT[\alpha]\}\\
\actionAccessibilityAgent{\agentA} &=& \actionAccessibilityAgent[\alpha]{\agentA} \cup \{(\actionStateSkip, \actionStateSkip)\} \cup \{(\actionStateTest, \proxyStateT[\alpha]) \mid \actionStateT[\alpha] \in \actionStatesT[\alpha]\} \cup \\&& \quad \{(\proxyStateT[\alpha], \proxyStateU[\alpha]) \mid \actionStateT[\alpha], \actionStateU[\alpha] \in \actionStatesT[\alpha]\} \text{ for } \agentA \in \agentsB\\
\actionAccessibilityAgent{\agentA} &=& \actionAccessibilityAgent[\alpha]{\agentA} \cup \{(\actionStateTest, \actionStateSkip), (\actionStateSkip, \actionStateSkip)\} \cup\\&& \quad \{(\proxyStateT[\alpha], \actionStateU[\alpha]) \mid \actionStateT[\alpha] \in \actionStatesT[\alpha], \actionStateU[\alpha] \in \actionStateT[\alpha] \actionAccessibilityAgent[\alpha]{\agentA} \} \text{ for } \agentA \notin \agentsB\\
\actionPrecondition &=& \actionPrecondition[\alpha] \cup \{(\actionStateTest, \top), (\actionStateSkip, \top)\} \cup \{(\proxyStateT[\alpha], \actionPrecondition[\alpha](\actionStateT[\alpha])) \mid \actionStateT[\alpha] \in \actionStatesT[\alpha]\}\\
\actionStatesT &=& \{\actionStateTest\}
\end{eqnarray*}
As in Definition~\ref{afl-k-learning}, we define $\tau(\learns_\agentsB (\alpha, \beta)) = \tau(\learns_\agentsB (\alpha \choice \beta, \alpha \choice \beta))$.
\end{definition}
\begin{lemma}\label{afl-kff-structure}
Let $\alpha \in \langAflAct$. Then $\tau(\alpha) \in \classAM_\classKFF$.
\end{lemma}
\begin{lemma}\label{afl-kff-exec}
Let $\alpha \in \langAflAct$ and
let $\pointedModel{\statesT} \in \classKFF$.
Then $\pointedModel{\statesT} \exec \tau(\alpha) \in \classKFF$.
\end{lemma}
We note that the definition for $\tau$ given here varies considerably from
the definition given in the setting of \classK{} due to the presence of the
proxy states. The proxy states are introduced due to the additional frame
constraints in \classKFF{} and the desire that the action models constructed
by $\tau$ be $\classAM_\classKFF$ action models. In constructing
$\tau(\learns_\agentsB \alpha)$ we wish to construct an action model with a
root state whose $\agentsB$-successors are the root states of $\tau(\alpha)$,
so that the result of executing the action $\learns_\agentsB \alpha$ is that
the agents $\agentsB$ believe that the action $\alpha$ has occurred. However
in order for this construction to result in a $\classAM_\classKFF$ action
model, we must take the transitive, Euclidean closure of the
$\agentsB$-successors of the root state. If we were to perform a construction
similar to that used in the setting of \classK{} where proxy states are not
used, then this would mean that for every $\agentB \in \agentsB$, the
$\agentB$-successors of the root state would include all of the
$\agentB$-successors of the root states, and not just the root states
themselves. To show why this is not desirable, consider the simple example
of the action $\learns_\agentA \test{\phi}$. The intention is that this
action represents a private announcement to $\agentA$ that $\phi$ is true, as
it is in the setting of \classK{}. Without using proxy states, if we wanted to
include the state $\actionStateTest$ in the $\agentA$-successors of the root
state of $\tau(\alpha)$ then in order to construct a $\classAM_\classKFF$
action model we would need to take the transitive, Euclidean closure of the
$\agentA$-successors of $\actionStateTest$. As $\actionStateSkip$ is an
$\agentA$-successor of $\actionStateTest$ in the action $\test{\phi}$, then
this would mean that $\agentA$ would not be able to distinguish between the
actions states $\actionStateTest$ and $\actionStateSkip$ and so the result of
executing $\tau(\alpha)$ would be that $\agentA$ learns nothing. With the
construction provided, the action $\learns_\agentA \test{\phi}$ gives the
desired result that $\agentA$ learns that $\phi$ is true.
We also note that the results presented in this paper for \classKFF{} can be
extended to \classKD{} by modifying Definition~\ref{afl-kff-learning} so that
$\actionPrecondition(\actionStateTest) = \bigwedge_{\agentA \in \agentsB}
\bigvee_{\actionStateT[\alpha] \in \actionStatesT[\alpha]} \possible[\agentA]
\actionPrecondition[\alpha](\actionStateT[\alpha])$, which guarantees that
the result of successfully executing an action formula has the seriality
property of \classKD{}.
\subsection{\classS{}}
\begin{definition}[Test]\label{afl-s-test}
Let $\phi \in \langAfl$.
We define $\tau(\test{\phi}) = \pointedActionModel{\actionStatesT} = \pointedActionModelTuple{\actionStatesT}$ where:
\begin{eqnarray*}
\actionStates &=& \{\actionStateTest, \actionStateSkip\}\\
\actionAccessibilityAgent{\agentA} &=& \actionStates^2 \text{ for } \agentA \in \agents\\
\actionPrecondition &=& \{(\actionStateTest, \phi), (\actionStateSkip, \top)\}\\
\actionStatesT &=& \{\actionStateTest\}
\end{eqnarray*}
\end{definition}
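In the same illustrative dictionary encoding (an assumption of ours), the \classS{} test differs from the \classK{} test only in taking the total relation on the two states, so that each $\actionAccessibilityAgent{\agentA}$ is an equivalence relation:

```python
def s_test(phi, agents):
    """tau(?phi) in S5: as in K, but with the total relation on
    {'test', 'skip'}, which is reflexive, transitive and symmetric."""
    states = {"test", "skip"}
    return {
        "states": states,
        "rel": {a: {(s, t) for s in states for t in states} for a in agents},
        "pre": {"test": phi, "skip": "True"},
        "designated": {"test"},
    }
```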
\begin{definition}[Learning]\label{afl-s-learning}
Let $\alpha, \beta \in \langAflAct$ where
$\tau(\alpha) = \pointedActionModel[\alpha]{\actionStatesT[\alpha]} = \pointedActionModelTuple[\alpha]{\actionStatesT[\alpha]}$ and
$\tau(\beta) = \pointedActionModel[\beta]{\actionStatesT[\beta]} = \pointedActionModelTuple[\beta]{\actionStatesT[\beta]}$.
For every $\actionStateT \in \actionStatesT[\alpha] \cup \actionStatesT[\beta]$ let $\proxyStateT$ be a new state not appearing in $\actionStates[\alpha] \cup \actionStates[\beta]$.
We define $\tau(\learns_\agentsB (\alpha, \beta)) = \pointedActionModel{\actionStatesT} = \pointedActionModelTuple{\actionStatesT}$ where:
\begin{eqnarray*}
\actionStates &=& \actionStates[\alpha] \cup \actionStates[\beta] \cup \{\proxyStateT \mid \actionStateT \in \actionStatesT[\alpha] \cup \actionStatesT[\beta]\}\\
\actionAccessibilityAgent{\agentA} &=& \actionAccessibilityAgent[\alpha]{\agentA} \cup \actionAccessibilityAgent[\beta]{\agentA} \cup \{(\proxyStateT, \proxyStateU) \mid \actionStateT, \actionStateU \in \actionStatesT[\alpha] \cup \actionStatesT[\beta]\} \text{ for } \agentA \in \agentsB\\
\actionAccessibilityAgent{\agentA} &=& \actionAccessibilityAgent[\alpha]{\agentA} \cup \actionAccessibilityAgent[\beta]{\agentA} \cup \bigcup_{\actionStateT \in \actionStatesT[\alpha] \cup \actionStatesT[\beta]} (\{\proxyStateT\} \cup \actionStateT (\actionAccessibilityAgent[\alpha]{\agentA} \cup \actionAccessibilityAgent[\beta]{\agentA}))^2 \text{ for } \agentA \notin \agentsB\\
\actionPrecondition &=& \actionPrecondition[\alpha] \cup \actionPrecondition[\beta] \cup \{(\proxyStateT, (\actionPrecondition[\alpha] \cup \actionPrecondition[\beta])(\actionStateT)) \mid \actionStateT \in \actionStatesT[\alpha] \cup \actionStatesT[\beta]\}\\
\actionStatesT &=& \{\proxyStateT \mid \actionStateT \in \actionStatesT[\alpha]\}
\end{eqnarray*}
\end{definition}
\begin{lemma}\label{afl-s-structure}
Let $\alpha \in \langAflAct$. Then $\tau(\alpha) \in \classAM_\classS$.
\end{lemma}
\begin{lemma}\label{afl-s-exec}
Let $\alpha \in \langAflAct$ and
let $\pointedModel{\statesT} \in \classS$.
Then $\pointedModel{\statesT} \exec \tau(\alpha) \in \classS$.
\end{lemma}
We note that as in the setting of \classKFF{} the definition of $\tau$ uses
proxy states to construct action models from learning operators. However
unlike in the settings of \classK{} and \classKFF{} this construction does
not introduce the new states $\actionStateTest$ and $\actionStateSkip$. As
discussed earlier this is because in the setting of \classS{}, in an action
where agents learn that $\alpha$ or $\beta$ have occurred, one of those
actions must have actually occurred. Unlike in the settings of \classK{} and
\classKFF{} we have distinguished between the actions $\alpha$ and $\beta$,
designating that $\alpha$ is the action that has actually occurred. We also
note that the definition of $\tau$ for test operators is different from that
used in \classK{} and \classKFF{}, simply to account for the additional frame
constraints of \classS{}.
\section{Axiomatisation}\label{axiomatisation}
In the following subsections we give sound and complete axiomatisations for
the action formula logic in the settings of \classK{} and \classKFF{}. In
the setting of \classS{} we provide a sound but not complete axiomatisation,
and comment on the difficulty of giving a complete axiomatisation and the
possible alternatives.
We note that axiomatisations for arbitrary action formula logic in these
settings can be derived trivially from these axiomatisations by adding the
additional axioms and rules from refinement modal logic.
\subsection{\classK{}}
\begin{definition}[Axiomatisation \axiomAflK{}]\label{afl-k-axioms}
The axiomatisation \axiomAflK{} is a substitution schema consisting of the
rules and axioms of \axiomK{} along with the axioms:
$$
\begin{array}{rl}
{\bf LT} & \proves \allacts{\test{\phi}} \psi \iff (\phi \implies \psi) \text{ for } \psi \in \lang\\
{\bf LU} & \proves \allacts{\alpha \choice \beta} \phi \iff (\allacts{\alpha} \phi \land \allacts{\beta} \phi)\\
{\bf LS} & \proves \allacts{\alpha \compose \beta} \phi \iff \allacts{\alpha} \allacts{\beta} \phi\\
{\bf LP} & \proves \allacts{\learns_\agentsB (\alpha, \beta)} \atomP \iff \atomP\\
{\bf LN} & \proves \allacts{\learns_\agentsB (\alpha, \beta)} \neg \phi \iff \neg \allacts{\learns_\agentsB (\alpha, \beta)} \phi\\
{\bf LC} & \proves \allacts{\learns_\agentsB (\alpha, \beta)} (\phi \land \psi) \iff (\allacts{\learns_\agentsB (\alpha, \beta)} \phi \land \allacts{\learns_\agentsB (\alpha, \beta)} \psi)\\
{\bf LK1} & \proves \allacts{\learns_\agentsB (\alpha, \beta)} \necessary[\agentA] \phi \iff \necessary[\agentA] \allacts{\alpha \choice \beta} \phi \text{ for } \agentA \in \agentsB\\
{\bf LK2} & \proves \allacts{\learns_\agentsB (\alpha, \beta)} \necessary[\agentA] \phi \iff \necessary[\agentA] \phi \text{ for } \agentA \notin \agentsB
\end{array}
$$
and the rule:
$$
\begin{array}{rl}
{\bf NecL} & \text{From $\proves \phi$ infer $\proves \allacts{\alpha} \phi$}
\end{array}
$$
\end{definition}
\begin{proposition}\label{afl-k-axioms-soundness}
The axiomatisation \axiomAflK{} is sound in the logic \logicAflK{}.
\end{proposition}
\begin{proof}
{\bf LT} follows from applying the reduction axioms of \axiomAmlK{}
inductively to $\allacts{\test{\phi}} \psi$.
{\bf LU} and {\bf LS} follow from Proposition~\ref{afl-choice-sequential-validities}.
Let $\tau(\learns_\agentsB (\alpha, \beta)) = \pointedActionModel{\actionStateS} = \pointedActionModelTuple{\actionStateS}$.
{\bf LP}, {\bf LN} and {\bf LC} follow trivially from the \axiomAmlK{}
axioms {\bf AP}, {\bf AN} and {\bf AC} respectively, noting from
Definition~\ref{afl-k-learning} that $\actionPrecondition(\actionStateS) = \top$.
{\bf LK1} follows trivially from the \axiomAmlK{} axiom {\bf AK},
noting from Definition~\ref{afl-k-learning} that as $\agentA \in \agentsB$ then
$\pointedActionModel{\actionStateS \actionAccessibilityAgent{\agentA}} \bisimilar \tau(\alpha \choice \beta)$.
{\bf LK2} follows trivially from the \axiomAmlK{} axiom {\bf AK},
noting from Definition~\ref{afl-k-learning} that as $\agentA \notin \agentsB$ then
$\pointedActionModel{\actionStateS \actionAccessibilityAgent{\agentA}} \bisimilar \tau(\test{\top})$.
{\bf NecL} follows trivially from the \axiomAmlK{} rule {\bf NecA}.
\end{proof}
\begin{proposition}\label{afl-k-axioms-completeness}
The axiomatisation \axiomAflK{} is complete for the logic \logicAflK{}.
\end{proposition}
We note that the axiomatisation \axiomAflK{} forms a set of reduction axioms
that gives a provably correct translation from \langAfl{} to \lang{}.
\begin{example}\label{grant-example-derivation}
We give an example derivation that the action formula $\alpha$ given in
Example~\ref{grant-example-formula} does indeed satisfy (part of) the
epistemic goal stated in Example~\ref{grant-example}.
\begin{eqnarray}
&\proves& \allacts{\test{\atomP}} \atomP \iff (\atomP \implies \atomP)\label{grant-example-derivation-1}\\
&\proves& \allacts{\test{\atomP}} \atomP\label{grant-example-derivation-2}\\
&\proves& \necessary[Ed] \allacts{\test{\atomP}} \atomP\label{grant-example-derivation-3}\\
&\proves& \allacts{\learns_{Ed} \test{\atomP}} \necessary[Ed] \atomP\label{grant-example-derivation-4}
\end{eqnarray}
(\ref{grant-example-derivation-1}) follows from {\bf LT},
(\ref{grant-example-derivation-2}) follows from (\ref{grant-example-derivation-1}) by propositional reasoning,
(\ref{grant-example-derivation-3}) follows from {\bf NecK} and
(\ref{grant-example-derivation-4}) follows from {\bf LK1}.
Similarly we have
\begin{eqnarray*}
&\proves& \allacts{\learns_{Ed} \test{\neg \atomP}} \necessary[Ed] \neg \atomP\\
&\proves& \allacts{\learns_{Tim} \test{\atomP}} \necessary[Tim] \atomP\\
&\proves& \allacts{\learns_{Tim} \test{\neg \atomP}} \necessary[Tim] \neg \atomP
\end{eqnarray*}
Let $\phi = \necessary[Ed] \atomP \lor \necessary[Ed] \neg \atomP \lor \necessary[Tim] \atomP \lor \necessary[Tim] \neg \atomP$. Then:
\begin{eqnarray}
&\proves& \allacts{\learns_{Ed} \test{\atomP} \choice \learns_{Ed} \test{\neg \atomP} \choice \learns_{Tim} \test{\atomP} \choice \learns_{Tim} \test{\neg \atomP}} \phi\label{grant-example-derivation-5}\\
&\proves& \necessary[James] \allacts{\learns_{Ed} \test{\atomP} \choice \learns_{Ed} \test{\neg \atomP} \choice \learns_{Tim} \test{\atomP} \choice \learns_{Tim} \test{\neg \atomP}} \phi\label{grant-example-derivation-6}\\
&\proves& \allacts{\learns_{James} (\learns_{Ed} \test{\atomP} \choice \learns_{Ed} \test{\neg \atomP} \choice \learns_{Tim} \test{\atomP} \choice \learns_{Tim} \test{\neg \atomP})} \necessary[James] \phi\label{grant-example-derivation-7}\\
&\proves& \allacts{\alpha} \necessary[James] \phi\label{grant-example-derivation-8}
\end{eqnarray}
(\ref{grant-example-derivation-5}) follows from the four preceding derivations and {\bf LU},
(\ref{grant-example-derivation-6}) follows from {\bf NecK},
(\ref{grant-example-derivation-7}) follows from {\bf LK1}, and
(\ref{grant-example-derivation-8}) follows from {\bf LS} and {\bf LK2}.
Therefore a consequence of successfully executing $\alpha$ is that James
learns that Ed or Tim knows whether the grant application was successful.
\end{example}
\subsection{\classKFF{}}
\begin{definition}[Axiomatisation \axiomAflKFF{}]\label{afl-kff-axioms}
The axiomatisation \axiomAflKFF{} is a substitution schema consisting of the
rules and axioms of \axiomKFF{}
along with the rules and axioms of \axiomAflK{},
but substituting the \axiomAflK{} axiom {\bf LK1} for the axiom:
$$
\begin{array}{rl}
{\bf LK1} & \proves \allacts{\learns_\agentsB (\alpha, \beta)} \necessary[\agentA] \chi \iff \necessary[\agentA] \allacts{\alpha \choice \beta} \chi \text{ for } \agentA \in \agentsB
\end{array}
$$
where $\chi$ is a $(\agents \setminus \{\agentA\})$-restricted formula.
\end{definition}
\begin{proposition}\label{afl-kff-axioms-soundness}
The axiomatisation \axiomAflKFF{} is sound in the logic \logicAflKFF{}.
\end{proposition}
\begin{proof}
Soundness of {\bf LT}, {\bf LU}, {\bf LS}, {\bf LP}, {\bf LN}, {\bf LC},
{\bf LK2} and {\bf NecL} follow from the same reasoning as in the proof
of Proposition~\ref{afl-k-axioms-soundness}.
{\bf LK1} follows from the \axiomAmlKFF{} axiom {\bf AK}.
We note that as $\agentA \in \agentsB$, from Definition~\ref{afl-kff-learning}
we have $\pointedActionModel{\actionStateS \actionAccessibilityAgent{\agentA}} \bisimilar_{(\agents \setminus \{\agentA\})} \tau(\alpha \choice \beta)$,
and as $\chi$ is a $(\agents \setminus \{\agentA\})$-restricted formula then
$\entails \allacts{\pointedActionModel{\actionStateS \actionAccessibilityAgent{\agentA}}} \chi \iff \allacts{\tau(\alpha \choice \beta)} \chi$.
\end{proof}
\begin{proposition}\label{afl-kff-axioms-completeness}
The axiomatisation \axiomAflKFF{} is complete for the logic \logicAflKFF{}.
\end{proposition}
We note that the axiomatisation \axiomAflKFF{} forms a set of reduction
axioms that gives a provably correct translation from \langAfl{} to \lang{}.
To translate a subformula $\allacts{\alpha} \phi$, where $\phi \in \lang$, we
must first translate $\phi$ to the alternating disjunctive normal form of
\cite{hales2012}, which gives the property that for every subformula
$\necessary[\agentA] \psi$, the formula $\psi$ is $(\agents \setminus
\{\agentA\})$-restricted, and therefore {\bf LK1} is applicable.
\subsection{\classS{}}
\begin{definition}[Axiomatisation \axiomAflS{}]\label{afl-s-axioms}
The axiomatisation \axiomAflS{} is a substitution schema consisting of the
rules and axioms of \axiomS{} along with the axioms:
$$
\begin{array}{rl}
{\bf LT} & \proves \allacts{\test{\phi}} \psi \iff (\phi \implies \psi) \text{ for } \psi \in \lang\\
{\bf LU} & \proves \allacts{\alpha \choice \beta} \phi \iff (\allacts{\alpha} \phi \land \allacts{\beta} \phi)\\
{\bf LS} & \proves \allacts{\alpha \compose \beta} \phi \iff \allacts{\alpha} \allacts{\beta} \phi\\
{\bf LP} & \proves \allacts{\learns_\agentsB (\alpha, \beta)} \atomP \iff \atomP\\
{\bf LN} & \proves \allacts{\learns_\agentsB (\alpha, \beta)} \neg \phi \iff \neg \allacts{\learns_\agentsB (\alpha, \beta)} \phi\\
{\bf LC} & \proves \allacts{\learns_\agentsB (\alpha, \beta)} (\phi \land \psi) \iff (\allacts{\learns_\agentsB (\alpha, \beta)} \phi \land \allacts{\learns_\agentsB (\alpha, \beta)} \psi)\\
\end{array}
$$
and the rule:
$$
\begin{array}{rl}
{\bf NecL} & \text{From $\proves \phi$ infer $\proves \allacts{\alpha} \phi$}
\end{array}
$$
\end{definition}
\begin{proposition}\label{afl-s-axioms-soundness}
The axiomatisation \axiomAflS{} is sound in the logic \logicAflS{}.
\end{proposition}
\begin{proof}
Soundness of {\bf LT}, {\bf LU}, {\bf LS}, {\bf LP}, {\bf LN}, {\bf LC}
and {\bf NecL} follow from the same reasoning as in the proof of
Proposition~\ref{afl-k-axioms-soundness}.
\end{proof}
We note that we do not have axioms in \axiomAflS{} corresponding to the
axioms {\bf LK1} and {\bf LK2} from \axiomAflK{} and \axiomAflKFF{}.
{\bf LK1} works in the setting of \classK{} because the $\agentsB$-successors
of the root state of $\tau(\learns_\agentsB \alpha)$ are bisimilar to the
root states of $\tau(\alpha)$, and so the consequences of executing
$\tau(\alpha)$ are the same as the consequences of executing the
$\agentsB$-successors of $\tau(\learns_\agentsB \alpha)$. In the setting of
\classKFF{} this is not the case; however, we do have the restricted property
of $\agentsB$-bisimilarity, which gives us that the $\agentsB$-restricted
consequences are the same. In the setting of \classS{} we do not know of such
a property to relate the consequences of the $\agentsB$-successors of
$\tau(\learns_\agentsB (\alpha, \beta))$ to the consequences of $\tau(\alpha
\choice \beta)$. Given the correspondence results of the following section, it
should be possible to construct an action formula that is $n$-bisimilar to
the $\agentsB$-successors of $\tau(\learns_\agentsB (\alpha, \beta))$, where
$d(\phi) = n$, and define axioms for {\bf LK1} and {\bf LK2} in terms of
this action formula and not $\alpha \choice \beta$. However, translating
$\langAfl$ formulae into $\langAml$ formulae and then using the
axiomatisation \axiomAmlS{} would certainly be simpler.
\section{Correspondence}\label{correspondence}
In the following subsections we show the correspondence between action
formulae and action models in the settings of \classK{}, \classKFF{} and
\classS{}. In each setting we show that action formulae are capable of
representing any action model up to $n$-bisimilarity.
\subsection{\classK{}}
To begin we give two lemmas to simplify the construction that we will use
for our correspondence result in \classK{}.
\begin{lemma}\label{afl-k-construction-test}
Let $\phi \in \langAfl$ and
$\pointedActionModel{\actionStateS} = \pointedActionModelTuple{\actionStateS} \in \classAM$.
Then let $\pointedActionModel[\prime]{\actionStateS[\prime]} = \pointedActionModelTuple[\prime]{\actionStateS[\prime]} \in \classAM$ where:
\begin{eqnarray*}
\actionStates[\prime] &=& \actionStates \cup \{\actionStateS[\prime]\}\\
\actionAccessibilityAgent[\prime]{\agentA} &=& \actionAccessibilityAgent{\agentA} \cup \{(\actionStateS[\prime], \actionStateT) \mid \actionStateT \in \actionStateS \actionAccessibilityAgent{\agentA}\} \text{ for } \agentA \in \agents\\
\actionPrecondition[\prime] &=& \actionPrecondition \cup \{(\actionStateS[\prime], \phi \land \actionPrecondition(\actionStateS))\}
\end{eqnarray*}
Then $\tau(\test{\phi}) \exec \pointedActionModel{\actionStateS} \bisimilar \pointedActionModel[\prime]{\actionStateS[\prime]}$.
\end{lemma}
\begin{lemma}\label{afl-k-construction-learning}
Let $\alpha \in \langAflAct$ where $\tau(\alpha) = \pointedActionModel[\alpha]{\actionStatesT[\alpha]} = \pointedActionModelTuple[\alpha]{\actionStatesT[\alpha]}$,
$\agentA \in \agents$
and $\pointedActionModel{\actionStateS} = \pointedActionModelTuple{\actionStateS} \in \classAM$
such that $\actionStateS \actionAccessibilityAgent{\agentA} = \{\actionStateT\}$
for some $\actionStateT \in \actionStates$
and $\actionStateT \actionAccessibilityAgent{\agentA} = \{\actionStateT\}$.
Then let $\pointedActionModel[\prime]{\actionStateS[\prime]} = \pointedActionModelTuple[\prime]{\actionStateS[\prime]} \in \classAM$ where:
\begin{eqnarray*}
\actionStates[\prime] &=& \actionStates \cup \actionStates[\alpha] \cup \{\actionStateS[\prime]\}\\
\actionAccessibilityAgent[\prime]{\agentA} &=& \actionAccessibilityAgent{\agentA} \cup \actionAccessibilityAgent[\alpha]{\agentA} \cup \{(\actionStateS[\prime], \actionStateT[\alpha]) \mid \actionStateT[\alpha] \in \actionStatesT[\alpha]\}\\
\actionAccessibilityAgent[\prime]{\agentB} &=& \actionAccessibilityAgent{\agentB} \cup \actionAccessibilityAgent[\alpha]{\agentB} \cup \{(\actionStateS[\prime], \actionStateT) \mid \actionStateT \in \actionStateS \actionAccessibilityAgent{\agentB}\} \text{ for } \agentB \in \agents \setminus \{\agentA\}\\
\actionPrecondition[\prime] &=& \actionPrecondition \cup \{(\actionStateS[\prime], \actionPrecondition(\actionStateS))\}
\end{eqnarray*}
Then $\tau(\learns_\agentA \alpha) \exec \pointedActionModel{\actionStateS} \bisimilar \pointedActionModel[\prime]{\actionStateS[\prime]}$.
\end{lemma}
\begin{proposition}\label{afl-k-correspondence}
Let $\pointedActionModel{\actionStateS} \in \classAM$ and
let $n \in \mathbb{N}$.
Then there exists $\alpha \in \langAflAct$ such that
$\pointedActionModel{\actionStateS} \bisimilar_n \tau(\alpha)$.
\end{proposition}
\begin{proof}
By induction on $n$.
Suppose that $n = 0$.
Let $\alpha = \test{\actionPrecondition(\actionStateS)}$ and
$\tau(\alpha) = \pointedActionModel[\prime]{\actionStateS[\prime]} = \pointedActionModelTuple[\prime]{\actionStateS[\prime]}$.
From Definition~\ref{afl-k-test} we have that
$\actionPrecondition(\actionStateS) = \actionPrecondition[\prime](\actionStateS[\prime])$, so
$(\pointedActionModel{\actionStateS}, \pointedActionModel[\prime]{\actionStateS[\prime]})$ satisfies
{\bf atoms} and therefore
$\pointedActionModel{\actionStateS} \bisimilar_0 \pointedActionModel[\prime]{\actionStateS[\prime]}$.
Suppose that $n > 0$.
By the induction hypothesis, for every $\agentA \in \agents$,
$\actionStateT \in \actionStateS \actionAccessibilityAgent{\agentA}$
there exists $\alpha^{\agentA,\actionStateT} \in \langAflAct$ such that
$\pointedActionModel{\actionStateT} \bisimilar_{(n - 1)} \tau(\alpha^{\agentA,\actionStateT})$,
where $\tau(\alpha^{\agentA,\actionStateT}) = \pointedActionModel[\agentA,\actionStateT]{\actionStateS[\agentA,\actionStateT]} = \pointedActionModelTuple[\agentA,\actionStateT]{\actionStateS[\agentA,\actionStateT]}$.
Let $\alpha = \test{\actionPrecondition(\actionStateS)} \compose \bigcompose_{\agentA \in \agents} \learns_\agentA (\bigchoice_{\actionStateT \in \actionStateS \actionAccessibilityAgent{\agentA}} \alpha^{\agentA,\actionStateT})$.
Then from Lemmas~\ref{afl-k-construction-test} and~\ref{afl-k-construction-learning}: $\tau(\alpha) \bisimilar \pointedActionModel[\prime]{\actionStateS[\prime]} = \pointedActionModelTuple[\prime]{\actionStateS[\prime]}$ where:
\begin{eqnarray*}
\actionStates[\prime] &=& \bigcup_{\agentA \in \agents, \actionStateT \in \actionStateS \actionAccessibilityAgent{\agentA}} (\actionStates[\agentA,\actionStateT]) \cup \{\actionStateS[\prime]\}\\
\actionAccessibilityAgent[\prime]{\agentA} &=& \bigcup_{\agentB \in \agents, \actionStateT \in \actionStateS \actionAccessibilityAgent{\agentB}} (\actionAccessibilityAgent[\agentB,\actionStateT]{\agentA}) \cup \{(\actionStateS[\prime], \actionStateS[\agentA,\actionStateT]) \mid \actionStateT \in \actionStateS \actionAccessibilityAgent{\agentA}\} \text{ for } \agentA \in \agents\\
\actionPrecondition[\prime] &=& \bigcup_{\agentA \in \agents, \actionStateT \in \actionStateS \actionAccessibilityAgent{\agentA}} (\actionPrecondition[\agentA,\actionStateT]) \cup \{(\actionStateS[\prime], \actionPrecondition(\actionStateS))\}
\end{eqnarray*}
We note for every $\agentA \in \agents$,
$\actionStateT \in \actionStateS \actionAccessibilityAgent{\agentA}$ that
$\pointedActionModel[\prime]{\actionStateS[\agentA,\actionStateT]} \bisimilar \pointedActionModel[\agentA,\actionStateT]{\actionStateS[\agentA,\actionStateT]}$
as for every $\agentA \in \agents$,
$\actionStateU \in \actionStates[\agentA,\actionStateT]$ we have
$\actionStateU \actionAccessibilityAgent[\prime]{\agentA} = \actionStateU \actionAccessibilityAgent[\agentA,\actionStateT]{\agentA}$.
We show that $(\pointedActionModel{\actionStateS}, \pointedActionModel[\prime]{\actionStateS[\prime]})$
satisfies {\bf atoms}, {\bf forth-$n$-$\agentA$} and {\bf back-$n$-$\agentA$}
for every $\agentA \in \agents$.
\paragraph{atoms} By construction
$\actionPrecondition[\prime](\actionStateS[\prime]) =
\actionPrecondition(\actionStateS)$.
\paragraph{forth-$n$-$\agentA$}
Let $\actionStateT \in \actionStateS \actionAccessibilityAgent{\agentA}$.
By construction $\actionStateS[\agentA,\actionStateT] \in \actionStateS[\prime] \actionAccessibilityAgent[\prime]{\agentA}$,
by the induction hypothesis $\pointedActionModel{\actionStateT} \bisimilar_{(n - 1)} \pointedActionModel[\agentA,\actionStateT]{\actionStateS[\agentA,\actionStateT]}$
and from above $\pointedActionModel[\agentA,\actionStateT]{\actionStateS[\agentA,\actionStateT]} \bisimilar \pointedActionModel[\prime]{\actionStateS[\agentA,\actionStateT]}$.
Therefore by transitivity $\pointedActionModel{\actionStateT} \bisimilar_{(n - 1)} \pointedActionModel[\prime]{\actionStateS[\agentA,\actionStateT]}$.
\paragraph{back-$n$-$\agentA$} Follows from similar reasoning to {\bf forth-$n$-$\agentA$}.
Therefore $\pointedActionModel{\actionStateS} \bisimilar_n \tau(\alpha)$.
\end{proof}
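To see the action formula concretely, unfold the construction at depth $n = 1$: the base case gives $\alpha^{\agentA,\actionStateT} = \test{\actionPrecondition(\actionStateT)}$ for each $\agentA \in \agents$ and $\actionStateT \in \actionStateS \actionAccessibilityAgent{\agentA}$, so the inductive step yields
$$
\alpha = \test{\actionPrecondition(\actionStateS)} \compose \bigcompose_{\agentA \in \agents} \learns_\agentA \left(\bigchoice_{\actionStateT \in \actionStateS \actionAccessibilityAgent{\agentA}} \test{\actionPrecondition(\actionStateT)}\right)
$$
which tests the precondition of $\actionStateS$ and then, for each agent, has that agent learn the choice between the preconditions of its successors.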
\begin{corollary}\label{afl-k-correspondence-aml-allacts}
Let $\pointedActionModel{\actionStateS} \in \classAM$.
Then for every $\phi \in \langAml$
there exists $\alpha \in \langAflAct$
such that $\entails_\logicAmlK{} \allacts{\pointedActionModel{\actionStateS}} \phi \iff \allacts{\tau(\alpha)} \phi$.
\end{corollary}
\begin{proof}
Suppose that $d(\phi) = n$.
From Proposition~\ref{afl-k-correspondence}
there exists $\alpha \in \langAflAct$
such that $\pointedActionModel{\actionStateS} \bisimilar_n \tau(\alpha)$.
Therefore for every $\pointedModel{\stateS} \in \classK$
we have $\pointedModel{\stateS} \exec \pointedActionModel{\actionStateS} \bisimilar_n \pointedModel{\stateS} \exec \tau(\alpha)$
and so $\pointedModel{\stateS} \exec \pointedActionModel{\actionStateS} \entails_\logicAmlK{} \phi$
if and only if $\pointedModel{\stateS} \exec \tau(\alpha) \entails_\logicAmlK{} \phi$.
Therefore $\pointedModel{\stateS} \entails_\logicAmlK{} \allacts{\pointedActionModel{\actionStateS}} \phi$
if and only if $\pointedModel{\stateS} \entails_\logicAmlK{} \allacts{\tau(\alpha)} \phi$.
\end{proof}
\begin{corollary}\label{afl-k-correspondence-afl-aml}
Let $\phi \in \langAml$.
Then there exists $\phi' \in \langAfl$
such that for every $\pointedModel{\stateS} \in \classK$:
$\pointedModel{\stateS} \entails_\logicAmlK{} \phi$ if and only if
$\pointedModel{\stateS} \entails_\logicAflK{} \phi'$.
\end{corollary}
\begin{proof}[Sketch]
Given Corollary~\ref{afl-k-correspondence-aml-allacts} we can replace all
occurrences of $\allacts{\pointedActionModel{\actionStateS}} \psi$
within $\phi$ with an equivalent $\allacts{\alpha} \psi$
where $\alpha \in \langAflAct$.
\end{proof}
\subsection{\classKFF{}}
As in the previous subsection we give a lemma to simplify the construction
that we will use, although as the definition of $\tau(\test{\phi})$ is the
same between \classK{} and \classKFF{} we simply reuse
Lemma~\ref{afl-k-construction-test} from the previous subsection.
\begin{lemma}\label{afl-kff-construction-learning}
Let $\agentA \in \agents$,
$\alpha \in \langAflAct$ where $\tau(\alpha) = \pointedActionModel[\alpha]{\actionStatesT[\alpha]} = \pointedActionModelTuple[\alpha]{\actionStatesT[\alpha]}$,
and $\pointedActionModel{\actionStateS} = \pointedActionModelTuple{\actionStateS} \in \classAM$
such that $\actionStateS \actionAccessibilityAgent{\agentA} = \{\actionStateT\}$
for some $\actionStateT \in \actionStates$
and $\actionStateT \actionAccessibilityAgent{\agentA} = \{\actionStateT\}$.
Then let $\pointedActionModel[\prime]{\actionStateS[\prime]} = \pointedActionModelTuple[\prime]{\actionStateS[\prime]} \in \classAM$ where:
\begin{eqnarray*}
\actionStates[\prime] &=& \actionStates \cup \actionStates[\alpha] \cup \{\proxyStateT[\alpha] \mid \actionStateT[\alpha] \in \actionStatesT[\alpha]\} \cup \{\actionStateS[\prime]\}\\
\actionAccessibilityAgent[\prime]{\agentA} &=& \actionAccessibilityAgent{\agentA} \cup \actionAccessibilityAgent[\alpha]{\agentA} \cup \{(\actionStateS[\prime], \proxyStateT[\alpha]) \mid \actionStateT[\alpha] \in \actionStatesT[\alpha]\} \cup \{(\proxyStateT[\alpha], \proxyStateU[\alpha]) \mid \actionStateT[\alpha], \actionStateU[\alpha] \in \actionStatesT[\alpha]\}\\
\actionAccessibilityAgent[\prime]{\agentB} &=& \actionAccessibilityAgent{\agentB} \cup \actionAccessibilityAgent[\alpha]{\agentB} \cup \{(\actionStateS[\prime], \actionStateT) \mid \actionStateT \in \actionStateS \actionAccessibilityAgent{\agentB}\} \text{ for } \agentB \in \agents \setminus \{\agentA\}\\
\actionPrecondition[\prime] &=& \actionPrecondition \cup \{(\proxyStateT[\alpha], \actionPrecondition[\alpha](\actionStateT[\alpha])) \mid \actionStateT[\alpha] \in \actionStatesT[\alpha]\} \cup \{(\actionStateS[\prime], \actionPrecondition(\actionStateS))\}
\end{eqnarray*}
Then $\tau(\learns_\agentA \alpha) \exec \pointedActionModel{\actionStateS} \bisimilar \pointedActionModel[\prime]{\actionStateS[\prime]}$.
\end{lemma}
\begin{proposition}\label{afl-kff-correspondence}
Let $\pointedActionModel{\actionStateS} \in \classAM_\classKFF$ and
let $n \in \mathbb{N}$.
Then there exists $\alpha \in \langAflAct$ such that
$\pointedActionModel{\actionStateS} \bisimilar_n \tau(\alpha)$.
\end{proposition}
\begin{proof}
By induction on $n$.
Suppose that $n = 0$.
Let $\alpha = \test{\actionPrecondition(\actionStateS)}$ and
$\tau(\alpha) = \pointedActionModel[\prime]{\actionStateS[\prime]} = \pointedActionModelTuple[\prime]{\actionStateS[\prime]}$.
From Definition~\ref{afl-kff-test} we have that
$\actionPrecondition(\actionStateS) = \actionPrecondition[\prime](\actionStateS[\prime])$, so
$(\pointedActionModel{\actionStateS}, \pointedActionModel[\prime]{\actionStateS[\prime]})$ satisfies
{\bf atoms} and therefore
$\pointedActionModel{\actionStateS} \bisimilar_0 \pointedActionModel[\prime]{\actionStateS[\prime]}$.
Suppose that $n > 0$.
By the induction hypothesis, for every $\agentA \in \agents$,
$\actionStateT \in \actionStateS \actionAccessibilityAgent{\agentA}$
there exists $\alpha^{\agentA,\actionStateT} \in \langAflAct$ such that
$\pointedActionModel{\actionStateT} \bisimilar_{(n - 1)} \tau(\alpha^{\agentA,\actionStateT})$.
For every $\agentA \in \agents$,
$\actionStateT \in \actionStateS \actionAccessibilityAgent{\agentA}$
let $\tau(\alpha^{\agentA,\actionStateT}) = \pointedActionModel[\agentA,\actionStateT]{\actionStateS[\agentA,\actionStateT]} = \pointedActionModelTuple[\agentA,\actionStateT]{\actionStateS[\agentA,\actionStateT]}$.
Let $\alpha = \test{\actionPrecondition(\actionStateS)} \compose \bigcompose_{\agentA \in \agents} \learns_\agentA (\bigchoice_{\actionStateT \in \actionStateS \actionAccessibilityAgent{\agentA}} \alpha^{\agentA,\actionStateT})$.
Then from Lemmas~\ref{afl-k-construction-test} and~\ref{afl-kff-construction-learning}: $\tau(\alpha) \bisimilar \pointedActionModel[\prime]{\actionStateS[\prime]} = \pointedActionModelTuple[\prime]{\actionStateS[\prime]}$ where:
\begin{eqnarray*}
\actionStates[\prime] &=& \bigcup_{\agentA \in \agents, \actionStateT \in \actionStateS \actionAccessibilityAgent{\agentA}} (\actionStates[\agentA,\actionStateT]) \cup \{\proxyStateS[\agentA,\actionStateT] \mid \agentA \in \agents, \actionStateT \in \actionStateS \actionAccessibilityAgent{\agentA}\} \cup \{\actionStateS[\prime]\}\\
\actionAccessibilityAgent[\prime]{\agentA} &=& \bigcup_{\agentB \in \agents, \actionStateT \in \actionStateS \actionAccessibilityAgent{\agentB}} (\actionAccessibilityAgent[\agentB,\actionStateT]{\agentA}) \cup \{(\actionStateS[\prime], \proxyStateS[\agentA, \actionStateT]) \mid \actionStateT \in \actionStateS \actionAccessibilityAgent{\agentA}\} \cup \{(\proxyStateS[\agentA, \actionStateT], \proxyStateS[\agentA, \actionStateU]) \mid \actionStateT, \actionStateU \in \actionStateS \actionAccessibilityAgent{\agentA}\} \cup\\
&&\hspace{45pt}\{(\proxyStateS[\agentB, \actionStateT], \actionStateU) \mid \agentB \in \agents \setminus \{\agentA\}, \actionStateT \in \actionStateS \actionAccessibilityAgent{\agentB}, \actionStateU \in \actionStateS[\agentB,\actionStateT] \actionAccessibilityAgent[\agentB,\actionStateT]{\agentA}\} \text{ for } \agentA \in \agents\\
\actionPrecondition[\prime] &=& \bigcup_{\agentA \in \agents, \actionStateT \in \actionStateS \actionAccessibilityAgent{\agentA}} (\actionPrecondition[\agentA,\actionStateT]) \cup \{(\proxyStateS[\agentA, \actionStateT], \actionPrecondition[\agentA,\actionStateT](\actionStateS[\agentA,\actionStateT])) \mid \agentA \in \agents, \actionStateT \in \actionStateS \actionAccessibilityAgent{\agentA}\} \cup \{(\actionStateS[\prime], \actionPrecondition(\actionStateS))\}
\end{eqnarray*}
As in the proof of Proposition~\ref{afl-k-correspondence},
we note for every $\agentA \in \agents$,
$\actionStateT \in \actionStateS \actionAccessibilityAgent{\agentA}$ that
$\pointedActionModel[\prime]{\actionStateS[\agentA,\actionStateT]} \bisimilar \pointedActionModel[\agentA,\actionStateT]{\actionStateS[\agentA,\actionStateT]}$.
We need to show that $(\pointedActionModel{\actionStateS}, \pointedActionModel[\prime]{\actionStateS[\prime]})$
satisfies {\bf atoms}, {\bf forth-$n$-$\agentA$} and {\bf back-$n$-$\agentA$}
for every $\agentA \in \agents$.
We use reasoning similar to the proof of Proposition~\ref{afl-k-correspondence},
however noting that the successors of $\actionStateS[\prime]$ in
$\actionModel[\prime]$ are not the same as in the construction used
previously.
We claim that each $\proxyStateS[\agentA,\actionStateT]$
state is $(n-1)$-bisimilar to the corresponding $\actionStateS[\agentA,\actionStateT]$ state.
We do so by showing, for every $0 \leq i \leq n - 1$,
$\agentA \in \agents$,
$\actionStateT \in \actionStateS \actionAccessibilityAgent{\agentA}$ that
$\pointedActionModel[\prime]{\proxyStateS[\agentA,\actionStateT]} \bisimilar_i \pointedActionModel[\prime]{\actionStateS[\agentA,\actionStateT]}$.
We proceed by induction on $i$.
\paragraph{atoms} By construction $\actionPrecondition[\prime](\proxyStateS[\agentA, \actionStateT]) = \actionPrecondition[\prime](\actionStateS[\agentA,\actionStateT])$.
\paragraph{forth-$i$-$\agentB$} Suppose that $0 < i \leq n - 1$. Let $\actionStateU \in \proxyStateS[\agentA,\actionStateT] \actionAccessibilityAgent[\prime]{\agentB}$.
Suppose that $\agentB = \agentA$.
By construction there exists $\actionStateV \in \actionStateS \actionAccessibilityAgent{\agentA}$
such that $\actionStateU = \proxyStateS[\agentA,\actionStateV]$.
From above $\pointedActionModel[\prime]{\actionStateS[\agentA,\actionStateT]} \bisimilar \pointedActionModel[\agentA,\actionStateT]{\actionStateS[\agentA,\actionStateT]}$
and $\pointedActionModel[\prime]{\actionStateS[\agentA,\actionStateV]} \bisimilar \pointedActionModel[\agentA,\actionStateV]{\actionStateS[\agentA,\actionStateV]}$.
By the outer induction hypothesis $\pointedActionModel[\agentA,\actionStateT]{\actionStateS[\agentA,\actionStateT]} \bisimilar_{(n - 1)} \pointedActionModel{\actionStateT}$
and $\pointedActionModel[\agentA,\actionStateV]{\actionStateS[\agentA,\actionStateV]} \bisimilar_{(n - 1)} \pointedActionModel{\actionStateV}$.
By transitivity $\pointedActionModel[\prime]{\actionStateS[\agentA,\actionStateT]} \bisimilar_{(n - 1)} \pointedActionModel{\actionStateT}$
and $\pointedActionModel[\prime]{\actionStateS[\agentA,\actionStateV]} \bisimilar_{(n - 1)} \pointedActionModel{\actionStateV}$.
As $\actionStateV \in \actionStateT \actionAccessibilityAgent{\agentA}$ from {\bf back-$(n - 1)$-$\agentA$}
there exists $\actionStateW \in \actionStateS[\agentA,\actionStateT] \actionAccessibilityAgent[\prime]{\agentA}$
such that $\pointedActionModel[\prime]{\actionStateW} \bisimilar_{(n - 2)} \pointedActionModel{\actionStateV}$.
By transitivity $\pointedActionModel[\prime]{\actionStateW} \bisimilar_{(n - 2)} \pointedActionModel[\prime]{\actionStateS[\agentA,\actionStateV]}$.
By the induction hypothesis $\pointedActionModel[\prime]{\proxyStateS[\agentA,\actionStateV]} \bisimilar_{(i - 1)} \pointedActionModel[\prime]{\actionStateS[\agentA,\actionStateV]}$.
Therefore by transitivity $\pointedActionModel[\prime]{\proxyStateS[\agentA,\actionStateV]} \bisimilar_{(i - 1)} \pointedActionModel[\prime]{\actionStateW}$.
Suppose that $\agentB \neq \agentA$. By construction
$\proxyStateS[\agentA,\actionStateT] \actionAccessibilityAgent[\prime]{\agentB} = \actionStateS[\agentA,\actionStateT] \actionAccessibilityAgent[\prime]{\agentB}$,
so $\actionStateU \in \actionStateS[\agentA,\actionStateT] \actionAccessibilityAgent[\prime]{\agentB}$
and we trivially have that $\pointedActionModel[\prime]{\actionStateU} \bisimilar \pointedActionModel[\prime]{\actionStateU}$.
\paragraph{back-$i$-$\agentB$} Follows similar reasoning to {\bf forth-$i$-$\agentB$}.
Therefore for every $\agentA \in \agents$,
$\actionStateT \in \actionStateS \actionAccessibilityAgent{\agentA}$ we have that
$\pointedActionModel[\prime]{\proxyStateS[\agentA,\actionStateT]} \bisimilar_{(n - 1)} \pointedActionModel[\prime]{\actionStateS[\agentA,\actionStateT]}$.
We can now show that $\pointedActionModel{\actionStateS} \bisimilar_n \pointedActionModel[\prime]{\actionStateS[\prime]}$
by using the same reasoning as the proof for Proposition~\ref{afl-k-correspondence}, using the $(n - 1)$-bisimilar
$\pointedActionModel[\prime]{\proxyStateS[\agentA,\actionStateT]}$ states
in place of corresponding $\pointedActionModel[\prime]{\actionStateS[\agentA,\actionStateT]}$ states.
Therefore $\pointedActionModel{\actionStateS} \bisimilar_n \tau(\alpha)$.
\end{proof}
\begin{corollary}
Let $\pointedActionModel{\actionStateS} \in \classAM_\classKFF$.
Then for every $\phi \in \langAml$
there exists $\alpha \in \langAflAct$
such that $\entails_\logicAmlKFF{} \allacts{\pointedActionModel{\actionStateS}} \phi \iff \allacts{\tau(\alpha)} \phi$.
\end{corollary}
\begin{corollary}
Let $\phi \in \langAml$.
Then there exists $\phi' \in \langAfl$
such that for every $\pointedModel{\stateS} \in \classKFF$:
$\pointedModel{\stateS} \entails_\logicAmlKFF{} \phi$ if and only if
$\pointedModel{\stateS} \entails_\logicAflKFF{} \phi'$.
\end{corollary}
\subsection{\classS{}}
Once more we give two lemmas to simplify the construction that we will use.
\begin{lemma}\label{afl-s-construction-test}
Let $\phi \in \langAfl$ and
$\pointedActionModel{\actionStateS} = \pointedActionModelTuple{\actionStateS} \in \classAM$.
Then let $\pointedActionModel[\prime]{\actionStateS} = \pointedActionModelTuple[\prime]{\actionStateS} \in \classAM$ where:
\begin{eqnarray*}
\actionStates[\prime] &=& \actionStates\\
\actionAccessibilityAgent[\prime]{\agentA} &=& \actionAccessibilityAgent{\agentA} \text{ for } \agentA \in \agents\\
\actionPrecondition[\prime] &=& \actionPrecondition \setminus \{(\actionStateS, \actionPrecondition(\actionStateS))\} \cup \{(\actionStateS, \phi \land \actionPrecondition(\actionStateS))\}
\end{eqnarray*}
Then $\tau(\test{\phi}) \exec \pointedActionModel{\actionStateS} \bisimilar \pointedActionModel[\prime]{\actionStateS}$.
\end{lemma}
\begin{lemma}\label{afl-s-construction-learning}
Let $\agentA \in \agents$,
$\alpha \in \langAflAct$ where $\tau(\alpha) = \pointedActionModel[\alpha]{\actionStatesT[\alpha]} = \pointedActionModelTuple[\alpha]{\actionStatesT[\alpha]}$,
and $\pointedActionModel{\actionStateS} = \pointedActionModelTuple{\actionStateS} \in \classAM$
such that $\actionStateS \actionAccessibilityAgent{\agentA} = \{\actionStateS\}$
and $\actionPrecondition(\actionStateS) = \top$.
Then let $\pointedActionModel[\prime]{\actionStateS} = \pointedActionModelTuple[\prime]{\actionStateS} \in \classAM$ where:
\begin{eqnarray*}
\actionStates[\prime] &=& \actionStates \cup \actionStates[\alpha] \cup \{\proxyStateT[\alpha] \mid \actionStateT[\alpha] \in \actionStatesT[\alpha]\}\\
\actionAccessibilityAgent[\prime]{\agentA} &=& \actionAccessibilityAgent{\agentA} \cup \actionAccessibilityAgent[\alpha]{\agentA} \cup (\{\actionStateS\} \cup \{\proxyStateT[\alpha] \mid \actionStateT[\alpha] \in \actionStatesT[\alpha]\})^2\\
\actionAccessibilityAgent[\prime]{\agentB} &=& \actionAccessibilityAgent{\agentB} \cup \actionAccessibilityAgent[\alpha]{\agentB} \cup (\{\proxyStateT[\alpha]\} \cup \actionStateT[\alpha] \actionAccessibilityAgent[\alpha]{\agentB})^2 \text{ for } \agentB \in \agents \setminus \{\agentA\}\\
\actionPrecondition[\prime] &=& \actionPrecondition \cup \{(\proxyStateT[\alpha], \actionPrecondition[\alpha](\actionStateT[\alpha])) \mid \actionStateT[\alpha] \in \actionStatesT[\alpha]\}
\end{eqnarray*}
Then $\tau(\learns_\agentA (\test{\top}, \alpha)) \exec \pointedActionModel{\actionStateS} \bisimilar \pointedActionModel[\prime]{\actionStateS}$.
\end{lemma}
\begin{proposition}\label{afl-s-correspondence}
Let $\pointedActionModel{\actionStateS} \in \classAM_\classS$ and
let $n \in \mathbb{N}$.
Then there exists $\alpha \in \langAflAct$ such that
$\pointedActionModel{\actionStateS} \bisimilar_n \tau(\alpha)$.
\end{proposition}
\begin{proof}
By induction on $n$.
Suppose that $n = 0$.
Let $\alpha = \test{\actionPrecondition(\actionStateS)}$ and
$\tau(\alpha) = \pointedActionModel[\prime]{\actionStateS[\prime]} = \pointedActionModelTuple[\prime]{\actionStateS[\prime]}$.
From Definition~\ref{afl-s-test} we have that
$\actionPrecondition(\actionStateS) = \actionPrecondition[\prime](\actionStateS[\prime])$, so
$(\pointedActionModel{\actionStateS}, \pointedActionModel[\prime]{\actionStateS[\prime]})$ satisfies
{\bf atoms} and therefore
$\pointedActionModel{\actionStateS} \bisimilar_0 \pointedActionModel[\prime]{\actionStateS[\prime]}$.
Suppose that $n > 0$.
By the induction hypothesis, for every $\agentA \in \agents$,
$\actionStateT \in \actionStateS \actionAccessibilityAgent{\agentA}$
there exists $\alpha^{\agentA,\actionStateT} \in \langAflAct$ such that
$\pointedActionModel{\actionStateT} \bisimilar_{(n - 1)} \tau(\alpha^{\agentA,\actionStateT})$.
For every $\agentA \in \agents$,
$\actionStateT \in \actionStateS \actionAccessibilityAgent{\agentA}$
let $\tau(\alpha^{\agentA,\actionStateT}) = \pointedActionModel[\agentA,\actionStateT]{\actionStateS[\agentA,\actionStateT]} = \pointedActionModelTuple[\agentA,\actionStateT]{\actionStateS[\agentA,\actionStateT]}$.
Let $\alpha = \test{\actionPrecondition(\actionStateS)} \compose \bigcompose_{\agentA \in \agents} \learns_\agentA (\test{\top}, \bigchoice_{\actionStateT \in \actionStateS \actionAccessibilityAgent{\agentA}} \alpha^{\agentA,\actionStateT})$.
Then from Lemmas~\ref{afl-s-construction-test} and~\ref{afl-s-construction-learning}: $\tau(\alpha) \bisimilar \pointedActionModel[\prime]{\actionStateS[\prime]} = \pointedActionModelTuple[\prime]{\actionStateS[\prime]}$ where:
\begin{eqnarray*}
\actionStates[\prime] &=& \bigcup_{\agentA \in \agents, \actionStateT \in \actionStateS \actionAccessibilityAgent{\agentA}} (\actionStates[\agentA,\actionStateT]) \cup \{\proxyStateS[\agentA,\actionStateT] \mid \agentA \in \agents, \actionStateT \in \actionStateS \actionAccessibilityAgent{\agentA}\} \cup \{\actionStateS[\prime]\}\\
\actionAccessibilityAgent[\prime]{\agentA} &=& \bigcup_{\agentB \in \agents, \actionStateT \in \actionStateS \actionAccessibilityAgent{\agentB}} (\actionAccessibilityAgent[\agentB,\actionStateT]{\agentA}) \cup (\{\actionStateS[\prime]\} \cup \{\proxyStateS[\agentA,\actionStateT] \mid \actionStateT \in \actionStateS \actionAccessibilityAgent{\agentA}\})^2 \cup\\&&\quad\bigcup_{\agentB \in \agents \setminus \{\agentA\}, \actionStateT \in \actionStateS \actionAccessibilityAgent{\agentB}} (\{\proxyStateS[\agentB,\actionStateT]\} \cup \actionStateS[\agentB,\actionStateT] \actionAccessibilityAgent[\agentB,\actionStateT]{\agentA})^2 \text{ for } \agentA \in \agents\\
\actionPrecondition[\prime] &=& \bigcup_{\agentA \in \agents, \actionStateT \in \actionStateS \actionAccessibilityAgent{\agentA}} (\actionPrecondition[\agentA,\actionStateT]) \cup \{(\proxyStateS[\agentA,\actionStateT], \actionPrecondition[\agentA,\actionStateT](\actionStateS[\agentA,\actionStateT])) \mid \agentA \in \agents, \actionStateT \in \actionStateS \actionAccessibilityAgent{\agentA}\} \cup \{(\actionStateS[\prime], \actionPrecondition(\actionStateS))\}
\end{eqnarray*}
We note that unlike the constructions used for
Proposition~\ref{afl-k-correspondence} and
Proposition~\ref{afl-kff-correspondence}, this construction does not have
$\pointedActionModel[\prime]{\actionStateU} \bisimilar \pointedActionModel[\agentA,\actionStateT]{\actionStateU}$,
as we do not have that
$\actionStateS[\agentA,\actionStateT] \actionAccessibilityAgent[\prime]{\agentA} = \actionStateS[\agentA,\actionStateT] \actionAccessibilityAgent[\agentA,\actionStateT]{\agentA}$.
Similar to the proof of Proposition~\ref{afl-kff-correspondence} we claim
that each $\proxyStateS[\agentA,\actionStateT]$ state is
$(n-1)$-bisimilar to the corresponding $\actionStateS[\agentA,\actionStateT]$ state.
However, in lieu of bisimilarity of the $\actionStates[\agentA,\actionStateT]$
states, we need another result for these states.
We also need to consider the additional state $\actionStateS[\prime]$,
which due to reflexivity is also a successor of itself.
We need to show for every $0 \leq i \leq n - 1$:
\begin{enumerate}
\item For every $\agentA \in \agents$: $\pointedActionModel[\prime]{\actionStateS[\prime]} \bisimilar_i \pointedActionModel[\prime]{\proxyStateS[\agentA,\actionStateS]}$.
\item For every $\agentA \in \agents$, $\actionStateT \in \actionStateS \actionAccessibilityAgent{\agentA}$: $\pointedActionModel[\prime]{\proxyStateS[\agentA,\actionStateT]} \bisimilar_i \pointedActionModel[\prime]{\actionStateS[\agentA,\actionStateT]}$.
\item For every $\agentA \in \agents$, $\actionStateT \in \actionStateS \actionAccessibilityAgent{\agentA}$, $\actionStateU \in \actionStates[\agentA,\actionStateT]$, $\actionStateV \in \actionStates$: if $\pointedActionModel[\agentA,\actionStateT]{\actionStateU} \bisimilar_i \pointedActionModel{\actionStateV}$ then $\pointedActionModel[\prime]{\actionStateU} \bisimilar_i \pointedActionModel{\actionStateV}$.
\end{enumerate}
We proceed by induction on $i$.
\begin{enumerate}
\item
For every $\agentA \in \agents$: $\pointedActionModel[\prime]{\actionStateS[\prime]} \bisimilar_i \pointedActionModel[\prime]{\proxyStateS[\agentA,\actionStateS]}$.
\paragraph{atoms} By the outer induction hypothesis $\pointedActionModel[\agentA,\actionStateS]{\actionStateS[\agentA,\actionStateS]} \bisimilar_{(n - 1)} \pointedActionModel{\actionStateS}$
and so $\proves \actionPrecondition[\agentA,\actionStateS](\actionStateS[\agentA,\actionStateS]) \iff \actionPrecondition(\actionStateS)$.
By construction $\actionPrecondition[\prime](\actionStateS[\prime]) = \actionPrecondition(\actionStateS)$
and $\actionPrecondition[\prime](\proxyStateS[\agentA,\actionStateS]) = \actionPrecondition[\agentA,\actionStateS](\actionStateS[\agentA,\actionStateS])$
and therefore $\proves \actionPrecondition[\prime](\actionStateS[\prime]) \iff \actionPrecondition[\prime](\proxyStateS[\agentA,\actionStateS])$.
\paragraph{forth-$i$-$\agentB$} Suppose that $0 < i \leq n - 1$. Let $\actionStateU \in \actionStateS[\prime] \actionAccessibilityAgent[\prime]{\agentB}$.
Suppose that $\agentB = \agentA$.
By construction $\actionStateS[\prime] \actionAccessibilityAgent[\prime]{\agentA} = \proxyStateS[\agentA,\actionStateS] \actionAccessibilityAgent[\prime]{\agentA}$
and we trivially have that $\pointedActionModel[\prime]{\actionStateU} \bisimilar \pointedActionModel[\prime]{\actionStateU}$.
Suppose that $\agentB \neq \agentA$.
By construction $\actionStateS[\prime] \actionAccessibilityAgent[\prime]{\agentB} = \{\proxyStateS[\agentB,\actionStateT] \mid \actionStateT \in \actionStateS \actionAccessibilityAgent{\agentB}\} \cup \{\actionStateS[\prime]\}$ and $\proxyStateS[\agentA,\actionStateS] \actionAccessibilityAgent[\prime]{\agentB} = \actionStateS[\agentA,\actionStateS] \actionAccessibilityAgent[\agentA,\actionStateS]{\agentB} \cup \{\proxyStateS[\agentA,\actionStateS]\}$.
Suppose that $\actionStateU = \actionStateS[\prime]$.
Then by the induction hypothesis $\pointedActionModel[\prime]{\actionStateS[\prime]} \bisimilar_{(i-1)} \pointedActionModel[\prime]{\proxyStateS[\agentA,\actionStateS]}$.
Suppose that $\actionStateU \in \{\proxyStateS[\agentB,\actionStateT] \mid \actionStateT \in \actionStateS \actionAccessibilityAgent{\agentB}\}$.
Then there exists $\actionStateT \in \actionStateS \actionAccessibilityAgent{\agentB}$
such that $\actionStateU = \proxyStateS[\agentB,\actionStateT]$.
By the outer induction hypothesis $\pointedActionModel[\agentA,\actionStateS]{\actionStateS[\agentA,\actionStateS]} \bisimilar_{(n - 1)} \pointedActionModel{\actionStateS}$.
As $\actionStateT \in \actionStateS \actionAccessibilityAgent{\agentB}$ then by {\bf back-$(n-1)$-$\agentB$}
there exists $\actionStateV \in \actionStateS[\agentA,\actionStateS] \actionAccessibilityAgent[\agentA,\actionStateS]{\agentB} \subseteq \proxyStateS[\agentA,\actionStateS] \actionAccessibilityAgent[\prime]{\agentB}$
such that $\pointedActionModel[\agentA,\actionStateS]{\actionStateV} \bisimilar_{(n - 2)} \pointedActionModel{\actionStateT}$.
Then by the inner induction hypothesis this implies $\pointedActionModel[\prime]{\actionStateV} \bisimilar_{(i - 1)} \pointedActionModel{\actionStateT}$.
By the inner induction hypothesis $\pointedActionModel[\prime]{\proxyStateS[\agentB,\actionStateT]} \bisimilar_{(i - 1)} \pointedActionModel[\prime]{\actionStateS[\agentB,\actionStateT]} \bisimilar_{(i - 1)} \pointedActionModel[\agentB,\actionStateT]{\actionStateS[\agentB,\actionStateT]}$
and by the outer induction hypothesis $\pointedActionModel[\agentB,\actionStateT]{\actionStateS[\agentB,\actionStateT]} \bisimilar_{(n - 1)} \pointedActionModel{\actionStateT}$
so by transitivity $\pointedActionModel[\prime]{\proxyStateS[\agentB,\actionStateT]} \bisimilar_{(i - 1)} \pointedActionModel{\actionStateT}$.
Therefore by transitivity we have that $\pointedActionModel[\prime]{\proxyStateS[\agentB,\actionStateT]} \bisimilar_{(i - 1)} \pointedActionModel[\prime]{\actionStateV}$.
\paragraph{back-$i$-$\agentB$} Follows similar reasoning to {\bf forth-$i$-$\agentB$}.
\item
For every $\agentA \in \agents$, $\actionStateT \in \actionStateS \actionAccessibilityAgent{\agentA}$: $\pointedActionModel[\prime]{\proxyStateS[\agentA,\actionStateT]} \bisimilar_i \pointedActionModel[\prime]{\actionStateS[\agentA,\actionStateT]}$.
\paragraph{atoms} By construction $\actionPrecondition[\prime](\proxyStateS[\agentA,\actionStateT]) = \actionPrecondition[\prime](\actionStateS[\agentA,\actionStateT])$.
\paragraph{forth-$i$-$\agentB$} Suppose that $0 < i \leq n - 1$. Let $\actionStateU \in \proxyStateS[\agentA,\actionStateT] \actionAccessibilityAgent[\prime]{\agentB}$.
Suppose that $\agentB = \agentA$.
By construction $\proxyStateS[\agentA,\actionStateT] \actionAccessibilityAgent[\prime]{\agentA} = \{\proxyStateS[\agentA,\actionStateV] \mid \actionStateV \in \actionStateT \actionAccessibilityAgent{\agentA}\} \cup \{\actionStateS[\prime]\}$.
Suppose that $\actionStateU \in \{\proxyStateS[\agentA,\actionStateV] \mid \actionStateV \in \actionStateT \actionAccessibilityAgent{\agentA}\}$.
Then there exists $\actionStateV \in \actionStateT \actionAccessibilityAgent{\agentA}$ such that $\actionStateU = \proxyStateS[\agentA,\actionStateV]$.
By the outer induction hypothesis $\pointedActionModel[\agentA,\actionStateT]{\actionStateS[\agentA,\actionStateT]} \bisimilar_{(n - 1)} \pointedActionModel{\actionStateT}$.
As $\actionStateV \in \actionStateT \actionAccessibilityAgent{\agentA}$ then by {\bf back-$(n-1)$-$\agentA$}
there exists $\actionStateW \in \actionStateS[\agentA,\actionStateT] \actionAccessibilityAgent[\agentA,\actionStateT]{\agentA} \subseteq \actionStateS[\agentA,\actionStateT] \actionAccessibilityAgent[\prime]{\agentA}$
such that $\pointedActionModel[\agentA,\actionStateT]{\actionStateW} \bisimilar_{(n - 2)} \pointedActionModel{\actionStateV}$.
Then by the inner induction hypothesis this implies $\pointedActionModel[\prime]{\actionStateW} \bisimilar_{(i - 1)} \pointedActionModel{\actionStateV}$.
By the inner and outer induction hypothesis $\pointedActionModel[\prime]{\proxyStateS[\agentA,\actionStateV]} \bisimilar_{(i - 1)} \pointedActionModel{\actionStateV}$.
Therefore by transitivity we have that $\pointedActionModel[\prime]{\proxyStateS[\agentA,\actionStateV]} \bisimilar_{(i - 1)} \pointedActionModel[\prime]{\actionStateW}$.
Suppose that $\actionStateU = \actionStateS[\prime]$.
Then from the inner induction hypothesis $\pointedActionModel[\prime]{\actionStateS[\prime]} \bisimilar_{(i - 1)} \pointedActionModel[\prime]{\proxyStateS[\agentA,\actionStateS]}$
and we can proceed using the same reasoning as in the case where $\actionStateU = \proxyStateS[\agentA,\actionStateS] \in \{\proxyStateS[\agentA,\actionStateV] \mid \actionStateV \in \actionStateT \actionAccessibilityAgent{\agentA}\}$.
Suppose that $\agentB \neq \agentA$.
By construction $\proxyStateS[\agentA,\actionStateT] \actionAccessibilityAgent[\prime]{\agentB} = \actionStateS[\agentA,\actionStateT] \actionAccessibilityAgent[\agentA,\actionStateT]{\agentB} \cup \{\proxyStateS[\agentA,\actionStateT]\}$.
Suppose that $\actionStateU = \proxyStateS[\agentA,\actionStateT]$.
By construction $\actionStateS[\agentA,\actionStateT] \in \actionStateS[\agentA,\actionStateT] \actionAccessibilityAgent[\prime]{\agentB}$
and by the induction hypothesis $\pointedActionModel[\prime]{\proxyStateS[\agentA,\actionStateT]} \bisimilar_{(i - 1)} \pointedActionModel[\prime]{\actionStateS[\agentA,\actionStateT]}$.
Suppose that $\actionStateU \in \actionStateS[\agentA,\actionStateT] \actionAccessibilityAgent[\agentA,\actionStateT]{\agentB} \subseteq \actionStateS[\agentA,\actionStateT] \actionAccessibilityAgent[\prime]{\agentB}$.
Then we trivially have that $\pointedActionModel[\prime]{\actionStateU} \bisimilar \pointedActionModel[\prime]{\actionStateU}$.
\paragraph{back-$i$-$\agentB$} Follows similar reasoning to {\bf forth-$i$-$\agentB$}.
\item
For every $\agentA \in \agents$, $\actionStateT \in \actionStateS \actionAccessibilityAgent{\agentA}$, $\actionStateU \in \actionStates[\agentA,\actionStateT]$, $\actionStateV \in \actionStates$: if $\pointedActionModel[\agentA,\actionStateT]{\actionStateU} \bisimilar_i \pointedActionModel{\actionStateV}$ then $\pointedActionModel[\prime]{\actionStateU} \bisimilar_i \pointedActionModel{\actionStateV}$.
Suppose that $\pointedActionModel[\agentA,\actionStateT]{\actionStateU} \bisimilar_i \pointedActionModel{\actionStateV}$.
\paragraph{atoms} As $\pointedActionModel[\agentA,\actionStateT]{\actionStateU} \bisimilar_i \pointedActionModel{\actionStateV}$
then $\proves \actionPrecondition[\agentA,\actionStateT](\actionStateU) \iff \actionPrecondition(\actionStateV)$.
By construction $\actionPrecondition[\prime](\actionStateU) = \actionPrecondition[\agentA,\actionStateT](\actionStateU)$
and therefore $\proves \actionPrecondition[\prime](\actionStateU) \iff \actionPrecondition(\actionStateV)$.
\paragraph{forth-$i$-$\agentB$} Suppose that $0 < i \leq n - 1$.
Let $\actionStateW \in \actionStateU \actionAccessibilityAgent[\prime]{\agentB}$.
Suppose that $\actionStateU \neq \actionStateS[\agentA,\actionStateT]$ or $\agentB = \agentA$.
By construction $\actionStateU \actionAccessibilityAgent[\prime]{\agentB} = \actionStateU \actionAccessibilityAgent[\agentA,\actionStateT]{\agentB}$
and so $\actionStateW \in \actionStateU \actionAccessibilityAgent[\agentA,\actionStateT]{\agentB}$.
As $\actionStateW \in \actionStateU \actionAccessibilityAgent[\agentA,\actionStateT]{\agentB}$
then by {\bf forth-$i$-$\agentB$} there exists $\actionStateX \in \actionStateV \actionAccessibilityAgent{\agentB}$ such that
$\pointedActionModel[\agentA,\actionStateT]{\actionStateW} \bisimilar_{(i - 1)} \pointedActionModel{\actionStateX}$.
By the induction hypothesis $\pointedActionModel[\prime]{\actionStateW} \bisimilar_{(i - 1)} \pointedActionModel{\actionStateX}$.
Suppose that $\actionStateU = \actionStateS[\agentA,\actionStateT]$ and $\agentB \neq \agentA$.
By construction $\actionStateS[\agentA,\actionStateT] \actionAccessibilityAgent[\prime]{\agentB} = \actionStateS[\agentA,\actionStateT] \actionAccessibilityAgent[\agentA,\actionStateT]{\agentB} \cup \{\proxyStateS[\agentA,\actionStateT]\}$.
Suppose that $\actionStateW \in \actionStateS[\agentA,\actionStateT] \actionAccessibilityAgent[\agentA,\actionStateT]{\agentB}$.
We proceed using the same reasoning as above, where $\actionStateW \in \actionStateU \actionAccessibilityAgent[\agentA,\actionStateT]{\agentB}$.
Suppose that $\actionStateW = \proxyStateS[\agentA,\actionStateT]$.
By the induction hypothesis $\pointedActionModel[\prime]{\proxyStateS[\agentA,\actionStateT]} \bisimilar_{(i - 1)} \pointedActionModel[\prime]{\actionStateS[\agentA,\actionStateT]}$
and we proceed using the same reasoning as above, where $\actionStateW = \actionStateS[\agentA,\actionStateT] \in \actionStateS[\agentA,\actionStateT] \actionAccessibilityAgent[\agentA,\actionStateT]{\agentB}$.
\paragraph{back-$i$-$\agentB$} Follows similar reasoning to {\bf forth-$i$-$\agentB$}.
\end{enumerate}
Therefore for every $\agentA \in \agents$,
$\actionStateT \in \actionStateS \actionAccessibilityAgent{\agentA}$
we have that $\pointedActionModel[\prime]{\actionStateS[\prime]} \bisimilar_{(n - 1)} \pointedActionModel{\actionStateS}$
and $\pointedActionModel[\prime]{\proxyStateS[\agentA,\actionStateT]} \bisimilar_{(n - 1)} \pointedActionModel{\actionStateT}$.
We can now show that $\pointedActionModel[\prime]{\actionStateS[\prime]} \bisimilar_n \pointedActionModel{\actionStateS}$
by using the same reasoning as the proof for Proposition~\ref{afl-k-correspondence},
using the $(n-1)$-bisimilar $\pointedActionModel[\prime]{\proxyStateS[\agentA,\actionStateT]}$
in place of the corresponding $\pointedActionModel[\prime]{\actionStateS[\actionStateT]}$ states.
\end{proof}
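The {\bf atoms}, {\bf forth-$i$-$\agentB$} and {\bf back-$i$-$\agentB$} clauses used throughout the preceding proof are the standard conditions of bounded bisimilarity. As an informal point of reference only, the following sketch decides $n$-bisimilarity between finite pointed models; all names are illustrative, and syntactic equality of precondition labels stands in for the provable equivalence of preconditions used in the proof.

```python
# Illustrative checker for bounded (n-)bisimilarity between finite pointed models.
# A model is a pair (pre, rel): pre maps states to precondition labels, and
# rel maps each agent to a dict from a state to its set of successor states.
# Syntactic equality of labels stands in for provable equivalence of preconditions.

def n_bisimilar(m1, s1, m2, s2, n):
    pre1, rel1 = m1
    pre2, rel2 = m2
    # atoms: the preconditions of the pointed states must agree
    if pre1[s1] != pre2[s2]:
        return False
    if n == 0:
        return True
    for agent in rel1:
        succ1 = rel1[agent].get(s1, set())
        succ2 = rel2.get(agent, {}).get(s2, set())
        # forth-n: every successor of s1 has an (n-1)-bisimilar successor of s2
        for t1 in succ1:
            if not any(n_bisimilar(m1, t1, m2, t2, n - 1) for t2 in succ2):
                return False
        # back-n: the symmetric condition for successors of s2
        for t2 in succ2:
            if not any(n_bisimilar(m1, t1, m2, t2, n - 1) for t1 in succ1):
                return False
    return True
```

Note that at depth $0$ only the {\bf atoms} condition is checked, matching the base case of the induction above.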
\begin{corollary}
Let $\pointedActionModel{\actionStateS} \in \classAM_\classS$.
Then for every $\phi \in \langAml$
there exists $\alpha \in \langAflAct$
such that $\entails_\logicAmlS{} \allacts{\pointedActionModel{\actionStateS}} \phi \iff \allacts{\tau(\alpha)} \phi$.
\end{corollary}
\begin{corollary}
Let $\phi \in \langAml$.
Then there exists $\phi' \in \langAfl$
such that for every $\pointedModel{\stateS} \in \classS$:
$\pointedModel{\stateS} \entails_\logicAmlS{} \phi$ if and only if
$\pointedModel{\stateS} \entails_\logicAflS{} \phi'$.
\end{corollary}
\section{Synthesis}\label{synthesis}
In the following subsections we give a computational method for synthesising
action formulae that achieve epistemic goals, whenever those goals are
achievable. The notion of when an epistemic goal is achievable
is captured by the refinement quantifiers of refinement modal
logic~\cite{vanditmarsch2009,bozzelli2012a}, which are also included in the
arbitrary action formula logic, and so in this section we will refer to the full
arbitrary action formula logic, keeping in mind the correspondence with
arbitrary action model logic mentioned in Section~\ref{semantics}.
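For reference, the action models that the synthesised action formulae denote act on epistemic models by the standard product update. A minimal sketch of that operation follows; all names are illustrative, and preconditions are given semantically as predicates on states rather than as formulae.

```python
# Sketch of the standard product update (execution of an action model).
# Kripke model: (states, rel, val); action model: (acts, act_rel, pre),
# where pre[a] is a predicate on states standing in for the precondition formula.

def product_update(states, rel, val, acts, act_rel, pre):
    # surviving worlds: pairs (s, a) where the precondition of a holds at s
    new_states = [(s, a) for s in states for a in acts if pre[a](s)]
    # (s, a) and (t, b) are related for an agent iff both components are
    new_rel = {
        agent: {(w1, w2)
                for w1 in new_states for w2 in new_states
                if (w1[0], w2[0]) in rel[agent]
                and (w1[1], w2[1]) in act_rel.get(agent, set())}
        for agent in rel
    }
    # atomic valuations are inherited from the static model
    new_val = {w: val[w[0]] for w in new_states}
    return new_states, new_rel, new_val
```

For example, a public announcement of $\atomP$ is the single-action model whose one action has precondition $\atomP$ and is reflexively accessible to every agent; the update then deletes exactly the worlds where $\atomP$ fails.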
\subsection{\classK{}}
\begin{proposition}\label{afl-k-synthesis}
For every $\phi \in \langAfl$ there exists $\alpha \in \langAflAct$ such that $\proves \allacts{\alpha} \phi$ and $\proves \somerefs \phi \implies \someacts{\alpha} \phi$.
\end{proposition}
\begin{proof}
Without loss of generality we assume that $\phi$ is in disjunctive normal
form. We proceed by induction on the structure of $\phi$.
Suppose that $\phi = \psi \lor \chi$. By the induction hypothesis
there exists $\alpha^\psi, \alpha^\chi \in \langAflAct$
such that $\proves \allacts{\alpha^\psi} \psi$,
$\proves \somerefs \psi \implies \someacts{\alpha^\psi} \psi$,
$\proves \allacts{\alpha^\chi} \chi$ and
$\proves \somerefs \chi \implies \someacts{\alpha^\chi} \chi$.
Let $\alpha = \alpha^\psi \choice \alpha^\chi$.
Then:
\begin{eqnarray}
&\proves& \allacts{\alpha^\psi} (\psi \lor \chi) \land \allacts{\alpha^\chi} (\psi \lor \chi)\label{afl-k-synthesis-or-1}\\
&\proves& \allacts{\alpha^\psi \choice \alpha^\chi} (\psi \lor \chi)\label{afl-k-synthesis-or-2}
\end{eqnarray}
(\ref{afl-k-synthesis-or-1}) follows from the induction hypothesis and
(\ref{afl-k-synthesis-or-2}) follows from {\bf LU}.
Further:
\begin{eqnarray}
&\proves& (\somerefs \psi \lor \somerefs \chi) \implies (\someacts{\alpha^\psi} (\psi \lor \chi) \lor \someacts{\alpha^\chi} (\psi \lor \chi))\label{afl-k-synthesis-or-3}\\
&\proves& (\somerefs \psi \lor \somerefs \chi) \implies \someacts{\alpha^\psi \choice \alpha^\chi} (\psi \lor \chi)\label{afl-k-synthesis-or-4}\\
&\proves& \somerefs (\psi \lor \chi) \implies \someacts{\alpha^\psi \choice \alpha^\chi} (\psi \lor \chi)\label{afl-k-synthesis-or-5}
\end{eqnarray}
(\ref{afl-k-synthesis-or-3}) follows from the induction hypothesis,
(\ref{afl-k-synthesis-or-4}) follows from {\bf LU} and
(\ref{afl-k-synthesis-or-5}) follows from {\bf R}.
Suppose that $\phi = \pi \land \bigwedge_{\agentB \in \agentsB \subseteq \agents} \covers_\agentB \Gamma_\agentB$.
By the induction hypothesis for every $\agentB \in \agentsB$, $\gamma \in \Gamma_\agentB$
there exists $\alpha^\gamma \in \langAflAct$ such that
$\proves \allacts{\alpha^\gamma} \gamma$ and
$\proves \somerefs \gamma \implies \someacts{\alpha^\gamma} \gamma$.
Let $\alpha = \test{\somerefs \phi} \compose \bigcompose_{\agentB \in \agentsB} \learns_\agentB (\bigchoice_{\gamma \in \Gamma_\agentB} \alpha^\gamma)$.
Then for every $\agentB \in \agentsB$:
\begin{eqnarray}
&\proves& \allacts{\bigchoice_{\gamma \in \Gamma_\agentB} \alpha^\gamma} \bigvee_{\gamma \in \Gamma_\agentB} \gamma\label{afl-k-synthesis-covers-1}\\
&\proves& \necessary_\agentB \allacts{\bigchoice_{\gamma \in \Gamma_\agentB} \alpha^\gamma} \bigvee_{\gamma \in \Gamma_\agentB} \gamma\label{afl-k-synthesis-covers-2}\\
&\proves& \allacts{\learns_\agentB (\bigchoice_{\gamma \in \Gamma_\agentB} \alpha^\gamma)} \necessary_\agentB \bigvee_{\gamma \in \Gamma_\agentB} \gamma\label{afl-k-synthesis-covers-3}\\
&\proves& \allacts{\bigcompose_{\agentC \in \agentsB} \learns_\agentC (\bigchoice_{\gamma \in \Gamma_\agentC} \alpha^\gamma)} \necessary_\agentB \bigvee_{\gamma \in \Gamma_\agentB} \gamma\label{afl-k-synthesis-covers-4}\\
&\proves& \allacts{\test{\somerefs \phi}} \allacts{\bigcompose_{\agentC \in \agentsB} \learns_\agentC (\bigchoice_{\gamma \in \Gamma_\agentC} \alpha^\gamma)} \necessary_\agentB \bigvee_{\gamma \in \Gamma_\agentB} \gamma\label{afl-k-synthesis-covers-5}\\
&\proves& \allacts{\test{\somerefs \phi} \compose \bigcompose_{\agentC \in \agentsB} \learns_\agentC (\bigchoice_{\gamma \in \Gamma_\agentC} \alpha^\gamma)} \necessary_\agentB \bigvee_{\gamma \in \Gamma_\agentB} \gamma\label{afl-k-synthesis-covers-6}
\end{eqnarray}
(\ref{afl-k-synthesis-covers-1}) follows from the induction hypothesis and {\bf LU},
(\ref{afl-k-synthesis-covers-2}) follows from {\bf NecK},
(\ref{afl-k-synthesis-covers-3}) follows from {\bf LK1},
(\ref{afl-k-synthesis-covers-4}) follows from {\bf LK2} and {\bf LS},
(\ref{afl-k-synthesis-covers-5}) follows from {\bf NecL} and
(\ref{afl-k-synthesis-covers-6}) follows from {\bf LS}.
Further:
\begin{eqnarray}
&\proves& \somerefs \phi \implies \bigwedge_{\agentB \in \agentsB, \gamma \in \Gamma_\agentB} \possible_\agentB \somerefs \gamma\label{afl-k-synthesis-covers-7}\\
&\proves& \somerefs \phi \implies \bigwedge_{\agentB \in \agentsB, \gamma \in \Gamma_\agentB} \possible_\agentB \someacts{\alpha^{\gamma}} \gamma\label{afl-k-synthesis-covers-8}\\
&\proves& \somerefs \phi \implies \bigwedge_{\agentB \in \agentsB, \gamma \in \Gamma_\agentB} \possible_\agentB \someacts{\bigchoice_{\gamma' \in \Gamma_\agentB} \alpha^{\gamma'}} \gamma\label{afl-k-synthesis-covers-9}\\
&\proves& \somerefs \phi \implies \someacts{\bigcompose_{\agentC \in \agentsB} \learns_\agentC (\bigchoice_{\gamma \in \Gamma_\agentC} \alpha^\gamma)} \bigwedge_{\agentB \in \agentsB, \gamma \in \Gamma_\agentB} \possible_\agentB \gamma\label{afl-k-synthesis-covers-10}\\
&\proves& \somerefs \phi \implies \someacts{\test{\somerefs \phi} \compose \bigcompose_{\agentC \in \agentsB} \learns_\agentC (\bigchoice_{\gamma \in \Gamma_\agentC} \alpha^\gamma)} \bigwedge_{\agentB \in \agentsB, \gamma \in \Gamma_\agentB} \possible_\agentB \gamma\label{afl-k-synthesis-covers-11}\\
&\proves& \allacts{\test{\somerefs \phi} \compose \bigcompose_{\agentC \in \agentsB} \learns_\agentC (\bigchoice_{\gamma \in \Gamma_\agentC} \alpha^\gamma)} \bigwedge_{\agentB \in \agentsB, \gamma \in \Gamma_\agentB} \possible_\agentB \gamma\label{afl-k-synthesis-covers-12}\\
&\proves& \allacts{\test{\somerefs \phi} \compose \bigcompose_{\agentC \in \agentsB} \learns_\agentC (\bigchoice_{\gamma \in \Gamma_\agentC} \alpha^\gamma)} (\pi \land \bigwedge_{\agentB \in \agentsB} \covers_\agentB \Gamma_\agentB)\label{afl-k-synthesis-covers-13}
\end{eqnarray}
(\ref{afl-k-synthesis-covers-7}) follows from {\bf RK},
(\ref{afl-k-synthesis-covers-8}) follows from the induction hypothesis,
(\ref{afl-k-synthesis-covers-9}) follows from {\bf LU},
(\ref{afl-k-synthesis-covers-10}) follows from {\bf LK1}, {\bf LK2} and {\bf LS},
(\ref{afl-k-synthesis-covers-11}) and (\ref{afl-k-synthesis-covers-12}) follow from {\bf LT}, and
(\ref{afl-k-synthesis-covers-13}) follows from (\ref{afl-k-synthesis-covers-6}), {\bf RP}, {\bf LC} and the definition of the cover operator.
Therefore $\proves \allacts{\alpha} \phi$.
Finally:
\begin{eqnarray}
&\proves& \someacts{\bigcompose_{\agentC \in \agentsB} \learns_\agentC (\bigchoice_{\gamma \in \Gamma_\agentC} \alpha^\gamma)} \top \iff \top\label{afl-k-synthesis-covers-14}\\
&\proves& \someacts{\test{\somerefs \phi} \compose \bigcompose_{\agentC \in \agentsB} \learns_\agentC (\bigchoice_{\gamma \in \Gamma_\agentC} \alpha^\gamma)} \top \iff \somerefs \phi\label{afl-k-synthesis-covers-15}\\
&\proves& \somerefs \phi \implies \someacts{\alpha} \top\label{afl-k-synthesis-covers-16}\\
&\proves& \somerefs \phi \implies \someacts{\alpha} \phi\label{afl-k-synthesis-covers-17}
\end{eqnarray}
(\ref{afl-k-synthesis-covers-14}) follows from {\bf LS} and {\bf LP},
(\ref{afl-k-synthesis-covers-15}) follows from {\bf LS} and {\bf LT},
(\ref{afl-k-synthesis-covers-16}) follows from (\ref{afl-k-synthesis-covers-15}),
(\ref{afl-k-synthesis-covers-17}) follows from (\ref{afl-k-synthesis-covers-13}) and (\ref{afl-k-synthesis-covers-16}).
Therefore $\proves \somerefs \phi \implies \someacts{\alpha} \phi$.
\end{proof}
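The proof is constructive: a disjunction becomes a nondeterministic choice between the actions synthesised for its disjuncts, and a cover conjunction becomes a test of $\somerefs \phi$ composed with one learning action per agent, each learning a choice over the actions synthesised for that agent's cover set. A rough sketch of this recursion over formulae in disjunctive normal form follows; all constructor names are illustrative, and each $\Gamma_\agentB$ is assumed nonempty.

```python
# Illustrative sketch of the synthesis recursion from Proposition afl-k-synthesis.
# Formulae in disjunctive normal form are nested tuples:
#   ("or", psi, chi)                      -- disjunction
#   ("cover", pi, {agent: [gamma, ...]})  -- pi /\ /\_b cover_b Gamma_b
# Action formulae are built from the constructors test/learn/choice/compose.

def synthesise(phi):
    kind = phi[0]
    if kind == "or":
        # alpha = alpha^psi + alpha^chi
        _, psi, chi = phi
        return ("choice", synthesise(psi), synthesise(chi))
    if kind == "cover":
        # alpha = ?(<> phi) ; ;_b  L_b (+_{gamma in Gamma_b} alpha^gamma)
        _, pi, covers = phi
        alpha = ("test", ("somerefs", phi))
        for agent, gammas in covers.items():
            branch = synthesise(gammas[0])  # assumes Gamma_b is nonempty
            for gamma in gammas[1:]:
                branch = ("choice", branch, synthesise(gamma))
            alpha = ("compose", alpha, ("learn", agent, branch))
        return alpha
    raise ValueError("expected disjunctive normal form")
```

A cover conjunction with no cover sets is the base case of the recursion: it is synthesised as the bare test of its own realisability.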
\begin{corollary}
For every $\pointedModel{\stateS} \in \classK$ and $\phi \in \langAaml$:
$\pointedModel{\stateS} \entails \somerefs \phi$ if and only if
there exists $\pointedActionModel{\actionStateS} \in \classAM$
such that $\pointedModel{\stateS} \entails \someacts{\pointedActionModel{\actionStateS}} \phi$.
\end{corollary}
\subsection{\classKFF{}}
\begin{proposition}\label{afl-kff-synthesis}
For every $\phi \in \langAfl$ there exists $\alpha \in \langAflAct$ such that $\proves \allacts{\alpha} \phi$ and $\proves \somerefs \phi \implies \someacts{\alpha} \phi$.
\end{proposition}
\begin{proof}
Without loss of generality we assume that $\phi$ is in alternating
disjunctive normal form. We use the same reasoning as in the proof of
Proposition~\ref{afl-k-synthesis}, substituting \axiomAflKFF{} axioms for
the corresponding \axiomAflK{} axioms, noting that the alternating
disjunctive normal form gives the $(\agents \setminus
\{\agentA\})$-restricted properties required for {\bf LK1} and the
\axiomRmlKFF{} axioms {\bf RK45}, {\bf RComm} and {\bf RDist} to be
applicable.
\end{proof}
\begin{corollary}
For every $\pointedModel{\stateS} \in \classKFF$ and $\phi \in \langAaml$:
$\pointedModel{\stateS} \entails \somerefs \phi$ if and only if
there exists $\pointedActionModel{\actionStateS} \in \classAM_\classKFF$
such that $\pointedModel{\stateS} \entails \someacts{\pointedActionModel{\actionStateS}} \phi$.
\end{corollary}
\subsection{\classS{}}
\begin{proposition}\label{afl-s-synthesis}
For every $\phi \in \langAfl$ there exists $\alpha \in \langAflAct$ such that $\proves \allacts{\alpha} \phi$ and $\proves \somerefs \phi \implies \someacts{\alpha} \phi$.
\end{proposition}
\begin{proof}
Without loss of generality, assume that $\phi$ is a disjunction of
explicit formulae. We proceed by induction on the structure of $\phi$.
Suppose that $\phi = \psi \lor \chi$. We use the same reasoning as in the
proof of Proposition~\ref{afl-k-synthesis}.
Suppose that $\phi = \pi \land \gamma^0 \land \bigwedge_{\agentA \in \agents} \covers_\agentA \Gamma_\agentA$ is an explicit formula.
By the induction hypothesis for every $\agentA \in \agents$, $\gamma \in \Gamma_\agentA$
there exists $\alpha^{\agentA,\gamma} \in \langAflAct$
such that $\proves \allacts{\alpha^{\agentA,\gamma}} \gamma$ and $\proves \somerefs \gamma \implies \someacts{\alpha^{\agentA,\gamma}} \gamma$,
where $\tau(\alpha^{\agentA,\gamma}) = \pointedActionModel[\agentA,\gamma]{\actionStateS[\agentA,\gamma]} = \pointedActionModelTuple[\agentA,\gamma]{\actionStateS[\agentA,\gamma]}$.
Let $\alpha = \test{\somerefs \gamma^0} \compose \bigcompose_{\agentA \in \agents} \learns_\agentA (\test{\top}, \bigchoice_{\gamma \in \Gamma_\agentA} \alpha^{\agentA,\gamma})$.
Then from Lemmas~\ref{afl-s-construction-test} and~\ref{afl-s-construction-learning}: $\tau(\alpha) \bisimilar \pointedActionModel{\actionStateS} = \pointedActionModelTuple{\actionStateS}$ where:
\begin{eqnarray*}
\actionStates &=& \bigcup_{\agentA \in \agents, \gamma \in \Gamma_\agentA} \actionStates[\agentA,\gamma] \cup \{\proxyStateS[\agentA,\gamma] \mid \agentA \in \agents, \gamma \in \Gamma_\agentA\} \cup \{\actionStateS\}\\
\actionAccessibilityAgent{\agentA} &=& \bigcup_{\agentB \in \agents, \gamma \in \Gamma_\agentB} \actionAccessibilityAgent[\agentB,\gamma]{\agentA} \cup (\{\actionStateS\} \cup \{\proxyStateS[\agentA,\gamma] \mid \gamma \in \Gamma_\agentA\})^2 \cup \bigcup_{\agentB \in \agents \setminus \{\agentA\}, \gamma \in \Gamma_\agentB} (\{\proxyStateS[\agentB,\gamma]\} \cup \actionStateS[\agentB,\gamma] \actionAccessibilityAgent[\agentB,\gamma]{\agentA})^2 \text{ for } \agentA \in \agents\\
\actionPrecondition &=& \bigcup_{\agentA \in \agents, \gamma \in \Gamma_\agentA} \actionPrecondition[\agentA,\gamma] \cup \{(\proxyStateS[\agentA,\gamma], \actionPrecondition[\agentA,\gamma](\actionStateS[\agentA,\gamma])) \mid \agentA \in \agents, \gamma \in \Gamma_\agentA\} \cup \{(\actionStateS, \somerefs \gamma^0)\}
\end{eqnarray*}
Let $\Psi = \{\psi \leq \gamma \mid \agentA \in \agents, \gamma \in \Gamma_\agentA\}$. We need to show for every $\psi \in \Psi$:
\begin{enumerate}
\item For every $\agentA \in \agents$: $\proves \allacts{\pointedActionModel{\actionStateS}} \psi \iff \allacts{\pointedActionModel{\proxyStateS[\agentA,\gamma^0]}} \psi$.
\item For every $\agentA \in \agents$, $\gamma \in \Gamma_\agentA$: $\proves \allacts{\pointedActionModel{\proxyStateS[\agentA,\gamma]}} \psi \iff \allacts{\pointedActionModel{\actionStateS[\agentA,\gamma]}} \psi$.
\item For every $\agentA \in \agents$, $\gamma \in \Gamma_\agentA$, $\actionStateU \in \actionStates[\agentA,\gamma]$: $\proves \allacts{\pointedActionModel{\actionStateU}} \psi \iff \allacts{\pointedActionModel[\agentA,\gamma]{\actionStateU}} \psi$.
\end{enumerate}
We proceed by induction on $\psi$.
\begin{enumerate}
\item For every $\agentA \in \agents$: $\proves \allacts{\pointedActionModel{\actionStateS}} \psi \iff \allacts{\pointedActionModel{\proxyStateS[\agentA,\gamma^0]}} \psi$.
Suppose that $\psi = \atomP$ where $\atomP \in \atoms$.
This follows trivially from {\bf AP}.
Suppose that $\psi = \neg \chi$ or that $\psi = \chi_1 \land \chi_2$. These cases follow trivially from the induction hypothesis.
Suppose that $\psi = \necessary[\agentA] \chi$.
By construction $\actionStateS \actionAccessibilityAgent{\agentA} = \proxyStateS[\agentA,\gamma^0] \actionAccessibilityAgent{\agentA}$
and $\actionPrecondition(\actionStateS) = \actionPrecondition(\proxyStateS[\agentA,\gamma^0])$
and so $\proves \allacts{\pointedActionModel{\actionStateS}} \necessary[\agentA] \chi \iff \allacts{\pointedActionModel{\proxyStateS[\agentA,\gamma^0]}} \necessary[\agentA] \chi$
follows from {\bf AK} trivially.
Suppose that $\psi = \necessary[\agentB] \chi$ where $\agentB \neq \agentA$.
By construction $\actionStateS \actionAccessibilityAgent{\agentB} = \{\actionStateS\} \cup \{\proxyStateS[\agentB,\gamma] \mid \gamma \in \Gamma_\agentB\}$
and $\proxyStateS[\agentA,\gamma^0] \actionAccessibilityAgent{\agentB} = \{\proxyStateS[\agentA,\gamma^0]\} \cup \actionStateS[\agentA,\gamma^0] \actionAccessibilityAgent[\agentA,\gamma^0]{\agentB}$.
As $\phi$ is an explicit formula and $\necessary[\agentB] \chi \in \Psi$
then either $\proves \gamma^0 \implies \necessary[\agentB] \chi$
or $\proves \gamma^0 \implies \neg \necessary[\agentB] \chi$.
Suppose that $\proves \gamma^0 \implies \necessary[\agentB] \chi$.
Then for every $\gamma \in \Gamma_\agentB$ we have $\proves \gamma \implies \necessary[\agentB] \chi$.
By the outer induction hypothesis $\proves \allacts{\pointedActionModel[\agentB,\gamma]{\actionStateS[\agentB,\gamma]}} \gamma$
and so $\proves \allacts{\pointedActionModel[\agentB,\gamma]{\actionStateS[\agentB,\gamma]}} \chi$.
By the inner induction hypothesis $\proves \allacts{\pointedActionModel{\actionStateS[\agentB,\gamma]}} \chi$ and $\proves \allacts{\pointedActionModel{\proxyStateS[\agentB,\gamma]}} \chi$.
As $\gamma^0 \in \Gamma_\agentB$ then $\proves \allacts{\pointedActionModel{\actionStateS[\agentB,\gamma^0]}} \chi$
and so by the inner induction hypothesis $\proves \allacts{\pointedActionModel{\actionStateS}} \chi$.
So $\proves \allacts{\pointedActionModel{\actionStateS \actionAccessibilityAgent{\agentB}}} \chi$
and therefore $\proves \allacts{\pointedActionModel{\actionStateS}} \necessary[\agentB] \chi$ follows from {\bf AK}.
By the outer induction hypothesis $\proves \allacts{\pointedActionModel[\agentA,\gamma^0]{\actionStateS[\agentA,\gamma^0]}} \gamma^0$
and so $\proves \allacts{\pointedActionModel[\agentA,\gamma^0]{\actionStateS[\agentA,\gamma^0]}} \necessary[\agentB] \chi$.
From {\bf AK} we have $\proves \somerefs{\gamma^0} \implies \necessary[\agentB] \allacts{\pointedActionModel[\agentA,\gamma^0]{\actionStateS[\agentA,\gamma^0] \actionAccessibilityAgent[\agentA,\gamma^0]{\agentB}}} \chi$.
By the inner induction hypothesis $\proves \allacts{\pointedActionModel[\agentA,\gamma^0]{\actionStateS[\agentA,\gamma^0] \actionAccessibilityAgent[\agentA,\gamma^0]{\agentB}}} \chi \iff \allacts{\pointedActionModel{\actionStateS[\agentA,\gamma^0] \actionAccessibilityAgent[\agentA,\gamma^0]{\agentB}}} \chi$.
and as $\proves \allacts{\pointedActionModel{\actionStateS}} \chi$
then $\proves \allacts{\pointedActionModel{\proxyStateS[\agentA,\gamma^0]}} \chi$.
So we have $\proves \allacts{\pointedActionModel[\agentA,\gamma^0]{\actionStateS[\agentA,\gamma^0] \actionAccessibilityAgent[\agentA,\gamma^0]{\agentB}}} \chi \iff \allacts{\pointedActionModel{\proxyStateS[\agentA,\gamma^0] \actionAccessibilityAgent{\agentB}}} \chi$
and $\proves \somerefs{\gamma^0} \implies \necessary[\agentB] \allacts{\pointedActionModel{\proxyStateS[\agentA,\gamma^0] \actionAccessibilityAgent{\agentB}}} \chi$
and so $\proves \allacts{\pointedActionModel{\proxyStateS[\agentA,\gamma^0]}} \necessary[\agentB] \chi$ follows from {\bf AK}.
Therefore $\proves \allacts{\pointedActionModel{\actionStateS}} \necessary[\agentB] \chi \iff \allacts{\pointedActionModel{\proxyStateS[\agentA,\gamma^0]}} \necessary[\agentB] \chi$.
Suppose that $\proves \gamma^0 \implies \neg \necessary[\agentB] \chi$.
A dual argument can be used to show that $\proves \neg \allacts{\pointedActionModel{\actionStateS}} \necessary[\agentB] \chi$
and $\proves \neg \allacts{\pointedActionModel{\proxyStateS[\agentA,\gamma^0]}} \necessary[\agentB] \chi$
and therefore $\proves \allacts{\pointedActionModel{\actionStateS}} \necessary[\agentB] \chi \iff \allacts{\pointedActionModel{\proxyStateS[\agentA,\gamma^0]}} \necessary[\agentB] \chi$.
\item For every $\agentA \in \agents$, $\gamma \in \Gamma_\agentA$: $\proves \allacts{\pointedActionModel{\proxyStateS[\agentA,\gamma]}} \psi \iff \allacts{\pointedActionModel{\actionStateS[\agentA,\gamma]}} \psi$.
Suppose that $\psi = \atomP$ where $\atomP \in \atoms$.
This follows trivially from {\bf AP}.
Suppose that $\psi = \neg \chi$ or that $\psi = \chi_1 \land \chi_2$. These cases follow trivially from the induction hypothesis.
Suppose that $\psi = \necessary[\agentA] \chi$.
By construction $\proxyStateS[\agentA,\gamma] \actionAccessibilityAgent{\agentA} = \{\actionStateS\} \cup \{\proxyStateS[\agentA,\delta] \mid \delta \in \Gamma_\agentA\}$
and $\actionStateS[\agentA,\gamma] \actionAccessibilityAgent{\agentA} = \actionStateS[\agentA,\gamma] \actionAccessibilityAgent[\agentA,\gamma]{\agentA}$.
As $\phi$ is an explicit formula and $\necessary[\agentA] \chi \in \Psi$
then either $\proves \gamma \implies \necessary[\agentA] \chi$
or $\proves \gamma \implies \neg \necessary[\agentA] \chi$.
Suppose that $\proves \gamma \implies \necessary[\agentA] \chi$.
Then for every $\delta \in \Gamma_\agentA$
we have $\proves \delta \implies \necessary[\agentA] \chi$.
By the outer induction hypothesis $\proves \allacts{\pointedActionModel[\agentA,\delta]{\actionStateS[\agentA,\delta]}} \delta$
and so $\proves \allacts{\pointedActionModel[\agentA,\delta]{\actionStateS[\agentA,\delta]}} \chi$.
By the inner induction hypothesis $\proves \allacts{\pointedActionModel{\actionStateS[\agentA,\delta]}} \chi$
and $\proves \allacts{\pointedActionModel{\proxyStateS[\agentA,\delta]}} \chi$.
As $\gamma^0 \in \Gamma_\agentA$ then $\proves \allacts{\pointedActionModel{\proxyStateS[\agentA,\gamma^0]}} \chi$
and by the inner induction hypothesis $\proves \allacts{\pointedActionModel{\actionStateS}} \chi$.
So $\proves \allacts{\pointedActionModel{\proxyStateS[\agentA,\gamma] \actionAccessibilityAgent{\agentA}}} \chi$
and therefore $\proves \allacts{\pointedActionModel{\proxyStateS[\agentA,\gamma]}} \necessary[\agentA] \chi$
follows from {\bf AK}.
By the outer induction hypothesis $\proves \allacts{\pointedActionModel[\agentA,\gamma]{\actionStateS[\agentA,\gamma]}} \gamma$
and so $\proves \allacts{\pointedActionModel[\agentA,\gamma]{\actionStateS[\agentA,\gamma]}} \necessary[\agentA] \chi$.
From {\bf AK} we have $\proves \somerefs \gamma \implies \necessary[\agentA] \allacts{\pointedActionModel[\agentA,\gamma]{\actionStateS[\agentA,\gamma] \actionAccessibilityAgent[\agentA,\gamma]{\agentA}}} \chi$.
By the inner induction hypothesis $\proves \allacts{\pointedActionModel[\agentA,\gamma]{\actionStateS[\agentA,\gamma] \actionAccessibilityAgent[\agentA,\gamma]{\agentA}}} \chi \iff \allacts{\pointedActionModel{\actionStateS[\agentA,\gamma] \actionAccessibilityAgent[\agentA,\gamma]{\agentA}}} \chi$
so $\proves \somerefs \gamma \implies \necessary[\agentA] \allacts{\pointedActionModel{\actionStateS[\agentA,\gamma] \actionAccessibilityAgent{\agentA}}} \chi$
and so $\proves \allacts{\pointedActionModel{\actionStateS[\agentA,\gamma]}} \necessary[\agentA] \chi$
follows from {\bf AK}.
Therefore $\proves \allacts{\pointedActionModel{\proxyStateS[\agentA,\gamma]}} \necessary[\agentA] \chi \iff \allacts{\pointedActionModel{\actionStateS[\agentA,\gamma]}} \necessary[\agentA] \chi$.
Suppose that $\proves \gamma \implies \neg \necessary[\agentA] \chi$.
A dual argument can be used to show that $\proves \neg \allacts{\pointedActionModel{\proxyStateS[\agentA,\gamma]}} \necessary[\agentA] \chi$
and $\proves \neg \allacts{\pointedActionModel{\actionStateS[\agentA,\gamma]}} \necessary[\agentA] \chi$
and therefore $\proves \allacts{\pointedActionModel{\proxyStateS[\agentA,\gamma]}} \necessary[\agentA] \chi \iff \allacts{\pointedActionModel{\actionStateS[\agentA,\gamma]}} \necessary[\agentA] \chi$.
Suppose that $\psi = \necessary[\agentB] \chi$ where $\agentB \neq \agentA$.
By construction $\proxyStateS[\agentA,\gamma] \actionAccessibilityAgent{\agentB} = \actionStateS[\agentA,\gamma] \actionAccessibilityAgent{\agentB}$
and $\actionPrecondition(\proxyStateS[\agentA,\gamma]) = \actionPrecondition(\actionStateS[\agentA,\gamma])$
and so $\proves \allacts{\pointedActionModel{\proxyStateS[\agentA,\gamma]}} \necessary[\agentB] \chi \iff \allacts{\pointedActionModel{\actionStateS[\agentA,\gamma]}} \necessary[\agentB] \chi$
follows from {\bf AK} trivially.
\item For every $\agentA \in \agents$, $\gamma \in \Gamma_\agentA$, $\actionStateU \in \actionStates[\agentA,\gamma]$: $\proves \allacts{\pointedActionModel{\actionStateU}} \psi \iff \allacts{\pointedActionModel[\agentA,\gamma]{\actionStateU}} \psi$.
Suppose that $\psi = \atomP$ where $\atomP \in \atoms$.
This follows trivially from {\bf AP}.
Suppose that $\psi = \neg \chi$ or that $\psi = \chi_1 \land \chi_2$. These cases follow trivially from the induction hypothesis.
Suppose that $\psi = \necessary[\agentA] \chi$.
By construction $\actionStateU \actionAccessibilityAgent{\agentA} = \actionStateU \actionAccessibilityAgent[\agentA,\gamma]{\agentA}$
and $\actionPrecondition(\actionStateU) = \actionPrecondition[\agentA,\gamma](\actionStateU)$
and so $\proves \allacts{\pointedActionModel{\actionStateU}} \necessary[\agentA] \chi \iff \allacts{\pointedActionModel[\agentA,\gamma]{\actionStateU}} \necessary[\agentA] \chi$
follows from {\bf AK} and the induction hypothesis trivially.
Suppose that $\psi = \necessary[\agentB] \chi$ where $\agentB \neq \agentA$.
By construction $\actionStateU \actionAccessibilityAgent{\agentA} = \actionStateU \actionAccessibilityAgent[\agentA,\gamma]{\agentA}$
or $\actionStateU \actionAccessibilityAgent{\agentA} = \{\proxyStateS[\agentA,\gamma]\} \cup \actionStateU \actionAccessibilityAgent[\agentA,\gamma]{\agentA}$
and $\actionPrecondition(\actionStateU) = \actionPrecondition[\agentA,\gamma](\actionStateU)$
and so $\proves \allacts{\pointedActionModel{\actionStateU}} \necessary[\agentB] \chi \iff \allacts{\pointedActionModel[\agentA,\gamma]{\actionStateU}} \necessary[\agentB] \chi$
follows from {\bf AK} and the induction hypothesis trivially.
\end{enumerate}
Therefore for every $\agentA \in \agents$, $\gamma \in \Gamma_\agentA$
we have that $\proves \allacts{\pointedActionModel{\actionStateS[\agentA,\gamma]}} \gamma$
and $\proves \allacts{\pointedActionModel{\actionStateS}} \gamma^0$.
Therefore for every $\agentA \in \agents$
we have $\proves \allacts{\pointedActionModel{\actionStateS \actionAccessibilityAgent{\agentA}}} \bigvee_{\gamma \in \Gamma_\agentA} \gamma$
and so from {\bf AK} we have that $\proves \allacts{\pointedActionModel{\actionStateS}} \necessary[\agentA] \bigvee_{\gamma \in \Gamma_\agentA} \gamma$.
As $\phi$ is an explicit formula, from {\bf RDist}, {\bf RS5} and {\bf RComm} we have that
$\somerefs \phi \implies \pi \land \bigwedge_{\agentA \in \agents, \gamma \in \Gamma_\agentA} \possible[\agentA] \somerefs \gamma$.
By construction for every $\agentA \in \agents$, $\gamma \in \Gamma_\agentA$
we have $\actionPrecondition(\proxyStateS[\agentA,\gamma]) = \somerefs \gamma$
and from above we have $\proves \allacts{\pointedActionModel{\actionStateS[\agentA,\gamma]}} \gamma$
therefore $\proves \somerefs \phi \implies \pi \land \bigwedge_{\agentA \in \agents, \gamma \in \Gamma_\agentA} \possible[\agentA] \someacts{\pointedActionModel{\proxyStateS[\agentA,\gamma]}} \gamma$.
Therefore by {\bf AK} we have $\proves \somerefs \phi \implies \someacts{\pointedActionModel{\actionStateS}} (\pi \land \bigwedge_{\agentA \in \agents, \gamma \in \Gamma_\agentA} \possible[\agentA] \gamma)$.
From above we have $\proves \allacts{\pointedActionModel{\actionStateS}} \necessary[\agentA] \bigvee_{\gamma \in \Gamma_\agentA} \gamma$
and therefore $\proves \somerefs \phi \implies \allacts{\pointedActionModel{\actionStateS}} \phi$.
As $\proves \phi \implies \gamma^0$ then $\proves \somerefs \phi \implies \somerefs \gamma^0$
and so $\proves \somerefs \phi \implies \someacts{\pointedActionModel{\actionStateS}} \phi$.
Let $\alpha' = \test{\somerefs \phi} \compose \alpha$.
By {\bf LS} we have $\proves \allacts{\alpha'} \phi \iff \allacts{\test{\somerefs \phi}} \allacts{\alpha} \phi$.
By {\bf LT} we have $\proves \allacts{\alpha'} \phi \iff (\somerefs \phi \implies \allacts{\alpha} \phi)$.
From above we have $\proves \somerefs \phi \implies \allacts{\alpha} \phi$
and therefore $\proves \allacts{\alpha'} \phi$.
By {\bf LS} we have $\proves \someacts{\alpha'} \phi \iff \someacts{\test{\somerefs \phi}} \someacts{\alpha} \phi$.
By {\bf LT} we have $\proves \someacts{\alpha'} \phi \iff (\somerefs \phi \land \someacts{\alpha} \phi)$.
From above we have $\proves \somerefs \phi \implies \someacts{\alpha} \phi$
and therefore $\proves \somerefs \phi \implies \someacts{\alpha'} \phi$.
\end{proof}
\begin{corollary}
For every $\pointedModel{\stateS} \in \classS$ and $\phi \in \langAaml$:
$\pointedModel{\stateS} \entails \somerefs \phi$ if and only if
there exists $\pointedActionModel{\actionStateS} \in \classAM_\classS$
such that $\pointedModel{\stateS} \entails \someacts{\pointedActionModel{\actionStateS}} \phi$.
\end{corollary}
\section{Related work}\label{related-work}
Several other papers have addressed the problem of describing and reasoning
about epistemic actions. One of the most important works in this area is the
work of Baltag, Moss and Solecki~\cite{baltag1998} which introduced the
notion of action model logic, building on the earlier work of Gerbrandy and
Groeneveld~\cite{gerbrandy1997}. In later work Baltag and Moss extended
action model logic to consider epistemic programs~\cite{baltag2005} which are
expressions built from action models using such operators as sequential
composition, non-deterministic choice and iteration. The atoms of these
programs are action models, so the approach is still inherently semantic in
nature. The logic is unable to decompose the program beyond the level of the
atoms, which themselves may be complex semantic objects.
The relational actions of van Ditmarsch~\cite{vanditmarsch2001} provides a
syntactic mechanism for describing an epistemic action, and provides the
foundation for a lot of the work presented in this paper. The relational
actions are constructed using essentially the same operators as in the
language of action formulae. While the language is very similar, the
semantics given are quite different~\cite{vanditmarsch2007}. In the logic of
epistemic actions the semantics are given in such a way that worlds in a
model are specified with respect to subsets of agents, so that the model is
restricted to agents for whom the epistemic action was applied. The semantics
were also specific to \classS{}, and non-trivial to generalise to other
epistemic logics. A version of relational actions with concurrency is able to
describe any \classS{} action model, although it is unknown whether the
expressivity of concurrent relational actions is greater than that of action
models~\cite{baltag2006}. Here we have generalised the approach and
provided a correspondence theorem for action model logic. This has allowed us
to retain the more familiar semantics of epistemic logic, generalise the
logic to \classK{} and \classKFF{} as well as access existing
synthesis results for dynamic epistemic logic~\cite{hales2013}.
The synthesis result presented here is built on the work of
Hales~\cite{hales2013} which gave a method to build an action model to
satisfy a given epistemic goal. This construction inspired the syntactic
description of epistemic actions and approach that we have used in this
paper.
Related synthesis results have been given by Aucher, et
al.~\cite{aucher2011,aucher2012,aucher2013} which presents an event model
language and uses it to give a thorough exploration of the relationship
between epistemic models, action models and epistemic goals. Aucher defines a
logic for action models and provides calculi to describe epistemic
progression (what is true after executing a given action in a given model)
epistemic regression (what is the most general precondition for an epistemic
action given an epistemic goal) and epistemic planning (what action is
sufficient to achieve an epistemic goal given some precondition). In future
work we hope to extend the correspondence between action formula logic and
action models to include Aucher's event model language.
\bibliographystyle{aiml14}
\section{Introduction}
\indent
The study of plasma flows for regulating the turbulence and anomalous transport has in recent years attracted strong interest~\cite{a15}. The regulation of drift wave turbulence by sheared flows is due to shearing of the turbulent eddies and thereby reducing the spatial scales of the eddies. Of particular interest among the plasma flows are the so called zonal flows~\cite{a11} which are self generated from the background turbulence via Reynolds stress.
Unlike the sheared mean flows, which have equilibrium scale size, the zonal flows are random $E\times B$ flows, mainly in the poloidal direction, with low frequency, and thus are almost stationary compared to the time scale of the background turbulence. Since the zonal flows are quasistationary compared to the background turbulence, they may keep decorrelating turbulent eddies for a relatively long time and thereby effectively suppress the turbulent transport.
The drift waves that are driven by gradients in the plasma density, temperature, magnetic field etc.\ are responsible for causing turbulence and anomalous transport. Candidates for such drift waves are the Ion-Temperature-Gradient Mode (ITG) and the collisionless Trapped Electron Mode (TEM).
In the present paper the direct effects of a sheared mean flow on the zonal flow instability are investigated analytically; the effects on the background turbulence itself are, however, not considered in this work. Moreover, the effect of the parallel ion motion on the zonal flow growth rate and frequency is studied.
The zonal flow generation from nonlinear interactions among drift waves has been extensively investigated both analytically~\cite{a51}-~\cite{a50} and in computer simulations using gyrokinetic~\cite{a22}-~\cite{a24} and advanced fluid models~\cite{a25}-~\cite{a27}. Furthermore, the effect of zonal flows driven by micro-scale electron temperature gradient (ETG) turbulence on semi-macro scale ITG turbulence was studied by a fluid simulation~\cite{a277}. However, the direct interaction of zonal flows with a sheared mean flow and the interaction with the GAM have mostly been overlooked. There have been some previous studies on the interaction of zonal flows and mean flows using simple drift wave models~\cite{a48} and using the coherent mode coupling method~\cite{a61}. The present work extends the drift wave model to an advanced fluid model for the toroidal ITG mode turbulence while using the wave kinetic approach. The purpose of this study is to obtain a qualitative estimate of the zonal flow growth rate and real frequency, and of their parametric dependence, under the influence of a sheared mean flow and parallel ion motion. It is not well known how sheared mean flows and parallel ion motion affect the zonal flows, how they influence the turbulence and, in the end, how these effects may change the overall transport.
However, the full non-linear effects of the Geodesic Acoustic Mode (GAM)~\cite{a12} are out of the scope of the present study. The GAM is a perturbation where the electrostatic potential ($m=n=0$ mode) is coupled to the sideband density perturbations ($m=1,n=0$ mode) by toroidal effects. The GAM interacts with the zonal flow and acts as a source or sink for the poloidal flow, leading to an oscillatory behavior of the zonal flow. The complete role of GAMs is still not well understood~\cite{a13}-~\cite{a151}. For these reasons it is of great importance to study the four component system of drift waves, sheared mean flow, GAM and zonal flow.
The methodology of the analytical model is based on the wave kinetic approach~\cite{a16},~\cite{a48}-~\cite{a50}. An algebraic equation which describes the zonal flow growth rate and real frequency, including the effects of a sheared mean flow in the presence of the background ITG turbulence and the effects of parallel ion motion, is derived and solved numerically. An advanced fluid model including the ion continuity, ion temperature and parallel ion momentum equations is used for the background ITG turbulence~\cite{a28}. The generation of zonal flows is described by the vorticity equation, and the time evolution of the ITG turbulence in the presence of the slowly growing zonal flow is described by a wave kinetic equation.
It was found that the generation of zonal flows is in general suppressed by a sheared mean flow. By introducing collisional damping, a modest suppression of the zonal flow growth rate is found. In addition, it is found that the interaction with parallel ion momentum may reduce the zonal flow generation significantly.
The paper is organized as follows. In Section II the analytical model for the zonal flows generated from toroidal ITG modes, including the effects of a sheared mean flow, is reviewed. The analytical model is extended to include parallel ion motion in Section III. Section IV is dedicated to the results and a discussion thereof. Finally, a summary is given in Section V.
\section{Analytical model for interaction of mean flows and zonal flows}
In this section the model excluding parallel ion motion is introduced. The description used for toroidal ITG driven modes consists of the ion continuity and ion temperature equations. For simplicity, effects of electron trapping and finite beta are neglected in this work. Magnetic shear can, however, modify the non-linear upshift as found in Ref.~\cite{a21}-~\cite{a211} and is accordingly incorporated in the model including parallel ion motion. In this section the model of how to construct the interaction of a sheared flow and the zonal flows generated from ITG driven turbulence and the derivation of the dispersion relation for zonal flows are summarized. The method has been described in detail in Refs.~\cite{a16},~\cite{a48}-~\cite{a50} (and References therein) and only a brief summary is given here. The analysis of Ref.~\cite{a16} is closely followed and in the present work extended to be valid for an advanced fluid model. An alternative statistical approach, resulting in a modified wave kinetic equation, is presented in Ref.~\cite{a47} which also contains an extensive discussion of and comparison with the approach used here. In describing the large scale plasma flow dynamics it is assumed that there is a sufficient spectral gap between the small scale fluctuations and the large scale flow. The electrostatic potential ($\phi = e \varphi / T_e$) is represented as a sum of fluctuating and mean quantities
\begin{eqnarray}
\phi(\vec{X},\vec{x},T,t) = \Phi(\vec{X},T) + \tilde{\phi}(\vec{x},t)
\end{eqnarray}
where $\tilde{\phi}(\vec{x},t)$ is the fluctuating potential varying on the turbulent scales $x,y,t$ and $\Phi(\vec{X},T)$ is the zonal flow potential varying on the slow scale $\vec{X},T$ (the zonal flow potential is independent on $Y$). The coordinates $\left( \vec{X}, T\right)$, $\left( \vec{x},t \right)$ are the spatial and time coordinates for the mean flows and small scale fluctuations, respectively.
The wave kinetic equation (see Refs.~\cite{a16},~\cite{a2}-~\cite{a8}) for the generalized wave action $N_k = \frac{4 \gamma_k^2}{\Delta_k^2 + \gamma_k^2}|\tilde{\phi}_k|^2 $, in the presence of a sheared mean plasma flow that perturbs the other mean flow (the zonal flow in this case) through the interaction between the mean flow and the small scale fluctuations, is
\begin{eqnarray}
\frac{\partial }{\partial t} N_k(x,t) & + & \frac{\partial }{\partial k_x} \left( \omega_k + \vec{k} \cdot \vec{v}_0 \right)\frac{\partial N_k(x,t)}{\partial x} - \frac{\partial }{\partial x} \left( \vec{k} \cdot\vec{v}_0\right) \frac{\partial N_k(x,t)}{\partial k_x} \nonumber \\
& = & \gamma_k N_k(x,t) - \Delta\omega N_k(x,t)^2
\end{eqnarray}
Here the spectral difference ($q_y \ll k_y$, where $q_y$ and $k_y$ are the zonal flow and drift wave wave numbers, respectively) is used in solving the wave kinetic equation, and only the $x$ direction is considered, where a similar spectrum of the background turbulence and the zonal flow is expected. Here $\vec{v}_0$ is the zonal flow part of the $E \times B$ velocity, and in the relation between the small scale turbulence and the generalized wave action density we have
\begin{eqnarray}
\Delta_k & = & \frac{k_y}{2}\left( 1 - \epsilon_n g + \frac{4 \tau}{3} \epsilon_n g\right) \\
\gamma_k & = & k_y \sqrt{\epsilon_n g \left(\eta_i - \eta_{i th}\right)}.
\end{eqnarray}
In the expression for the $\eta_{ith}$ the FLR effects are neglected,
\begin{eqnarray}
\eta_{i th} \approx \frac{2}{3} - \frac{1}{2 \tau} + \frac{1}{4 \tau \epsilon_n g} + \epsilon_n g\left( \frac{1}{4 \tau} + \frac{10}{9 \tau}\right).
\end{eqnarray}
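As a quick numerical check, the threshold in Eq. 5 can be evaluated directly; for the reference values $\tau = 1$ and $\epsilon_n g = 1$ used in the figures below it reduces to $\eta_{i\,th} = 16/9 \approx 1.78$. A minimal sketch (parameter values are illustrative):

```python
# Linear ITG stability threshold eta_i_th of Eq. (5), FLR effects neglected.
def eta_i_threshold(tau, eps_n_g):
    # tau = T_i/T_e, eps_n_g = eps_n * g(theta)
    return (2.0 / 3.0 - 1.0 / (2.0 * tau)
            + 1.0 / (4.0 * tau * eps_n_g)
            + eps_n_g * (1.0 / (4.0 * tau) + 10.0 / (9.0 * tau)))

# For tau = 1 and eps_n * g = 1 the threshold reduces to 16/9.
eta_th = eta_i_threshold(1.0, 1.0)
```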
Here, and in the forthcoming equations, $\tau = T_i/T_e$, $\vec{v}_{\star} = \rho_s c_s \vec{y}/L_n $, $\rho_s = c_s/\Omega_{ci}$ where $c_s=\sqrt{T_e/m_i}$ and $\Omega_{ci} = eB/m_i c$. We also define $L_f = - \left( d \ln f / dr\right)^{-1}$, $\eta_i = L_n / L_{T_i}$, $\epsilon_n = 2 L_n / R$ where $R$ is the major radius, and $\alpha_i = \tau \left( 1 + \eta_i\right)$. The perturbed variables are normalized with the additional definitions $\tilde{n} = (L_n/\rho_s) \delta n / n_0$, $\tilde{\phi} = (L_n/\rho_s)\, e \delta \phi /T_e$, $\tilde{T}_i = (L_n/\rho_s) \delta T_i / T_{i0}$ for the normalized ion particle density, electrostatic potential and ion temperature, respectively. The perpendicular length scale and time are normalized to $\rho_s$ and $L_n/c_s$, respectively. The geometrical quantities are calculated in the strong ballooning limit, $\theta = 0$ with $g\left(\theta = 0, \kappa \right) = 1/\kappa$~\cite{a29}, where $g\left( \theta \right)$ is defined through $\omega_D \left( \theta \right) = \omega_{\star} \epsilon_n g\left(\theta \right)$. In this analysis it is assumed that the RHS of the wave kinetic equation is approximately zero (stationary turbulence): the role of the non-linear interactions among the ITG fluctuations (here represented by the non-linear frequency shift $\Delta\omega$) is to balance the linear growth rate. In the case when $ \gamma_k N_k(x,t) - \Delta\omega N_k(x,t)^2 = 0$, the wave kinetic equation is expanded under the assumption of small deviations from the equilibrium spectrum function, $N_k = N_k^0 + \tilde{N}_k$, where $\tilde{N}_k$ evolves on the zonal flow time and space scales $\left( \Omega, q_x, q_y = 0\right)$, as
\begin{eqnarray}
\frac{\partial \tilde{N}_k}{\partial t} + iq_x v_{gx} \tilde{N}_k - k_y \left<V_E^{\prime}\right> \frac{\partial \tilde{N}_k}{\partial k_x}+ \gamma_k \tilde{N}_k = i q_x k_y \tilde{V}_E \frac{\partial N_k^0}{\partial k_x}
\end{eqnarray}
In this last expression the third term on the left hand side shows the explicit interaction between the mean shear flow $\left< V_E^{\prime} \right>$ and $\tilde{N}_k$. Here $\tilde{V}_E = iq_x \Phi$. This equation is solved for $\tilde{N}_k$, assuming that $\left< V_E^{\prime} \right>^2$ is small, by integrating by parts to obtain an expansion in $\left< V_E^{\prime} \right>^2$ and introducing the total time derivative $D_t = \frac{\partial }{\partial t} - k_y \left< V_E^{\prime}\right> \frac{\partial}{\partial k_x}$. It is interesting to note that on the total time scale the shearing effect is explicit, as $D_t k_x = -k_y \left< V_E^{\prime}\right>$. The solution can be written as
\begin{eqnarray}
\tilde{N}_k & = & \int^{t}_{t_0} dt^{\prime} e^{-\gamma (t-t^{\prime}) - i q_x \int_{t^{\prime}}^{t} dt^{\prime \prime}v_{gx}}i q_x k_y \tilde{V}_E \frac{\partial N_k^0}{\partial k_x} \nonumber \\
& = & - i q_x^2 k_y \Phi R_0(k_y, q_x, \Omega, \left< V_E^{\prime}\right>) \frac{\partial N_k^0}{\partial k_x}.
\end{eqnarray}
The evolution equation for the zonal flows is obtained by averaging the ion continuity equation over the magnetic flux surface and over the fast scales, employing quasineutrality and including a damping term~\cite{a4}:
\begin{eqnarray}
\frac{\partial}{\partial t} \nabla_x^2 \Phi -\mu \nabla_x^4 \Phi = \left(1 + \tau \right) \nabla_x^2 \left<\frac{\partial}{\partial x} \tilde{\phi}_k \frac{\partial}{\partial y}\tilde{\phi}_k \right> + \tau \nabla_x^2 \left<\frac{\partial}{\partial x} \tilde{\phi}_k \frac{\partial}{\partial y}\tilde{T}_{ik} \right>
\end{eqnarray}
where it is assumed that only the small scale self interactions are the important interactions in the RHS~\cite{a31}. For typical tokamak parameters ($T_i = T_e = 10\,\mathrm{keV}$, $n_i = n_e = 10^{20}\,\mathrm{m}^{-3}$, $r = 1\,\mathrm{m}$, $R = 3\,\mathrm{m}$) the damping is $\mu = 0.78\, \nu_{ii} \sqrt{r/R}$, where $\nu_{ii} = 10^{-12}\, n_i/T_i^{3/2}$ is the ion-ion collision frequency and $T_i$ is the ion temperature in electron volts; this gives $\mu \approx 50$.
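The damping estimate can be reproduced directly from the quoted expressions (a sketch using the typical tokamak parameters given above):

```python
from math import sqrt

# Typical tokamak parameters quoted in the text (T_i in eV, n_i in m^-3).
T_i = 10.0e3                        # 10 keV
n_i = 1.0e20
r, R = 1.0, 3.0

nu_ii = 1.0e-12 * n_i / T_i**1.5    # ion-ion collision frequency -> 100
mu = 0.78 * nu_ii * sqrt(r / R)     # damping rate, ~45, i.e. mu ~ 50
```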
Expressing the Reynolds stress terms in Eq. 8 in terms of $N_k$ we obtain
\begin{eqnarray}
\left(-i \Omega - \mu q_x^2 \right) \Phi = \left(1 + \tau + \tau \delta \right) \int d^2 k k_x k_y |\tilde{\phi}_k|^2
\end{eqnarray}
where $\delta$ is a $k$ independent factor
\begin{eqnarray}
\delta = \frac{\Delta_k k_y}{\Delta_k^2 + \gamma_k^2} \left(\eta_i - \frac{2}{3} \left(1 + \tau \right) \epsilon_n g\right).
\end{eqnarray}
Utilizing Eqs. 7, 9 and 10 gives
\begin{eqnarray}
\left(-i \Omega - \mu q_x^2 \right) = - q_x^2 \left(1 + \tau + \tau \delta \right) \frac{\Delta_k^2 + \gamma_k^2}{4 \gamma_k^2} \int d^2 k k_y^2 k_x \frac{\partial N_k^0}{\partial k_x} R_0.
\end{eqnarray}
Here the response function ($R_0$) is, considering only the first two even terms in the expansion
\begin{eqnarray}
\bar{R}_0 & = & \frac{1}{\gamma_k - i(\Omega-v_{gx} q_x)} \\
R_0 & = & \bar{R}_0 + \bar{R}_0^2 D_t \bar{R}_0 D_t.
\end{eqnarray}
In contrast with Ref.~\cite{a49}, it is not assumed that the short scale turbulence is close to the marginal (or stationary) state, i.e. that $\gamma_k$ is small. Integrating by parts in $k_x$ and assuming a monochromatic wave packet $N_k^0 = N_0 \delta\left(k - k_0\right)$ gives
\begin{eqnarray}
\left(\Omega + i \mu q_x^2 \right) = - i q_x^2 \left(1 + \tau + \tau \delta \right) \frac{\Delta_k^2 + \gamma_k^2}{4 \gamma_k^2} k_y^2 R_0(k_y, q_x, \Omega, \left< V_E^{\prime} \right> ) N_0
\end{eqnarray}
Here, $\Omega$ contains the zonal flow growth rate and real frequency, $q_x$ is the zonal flow wave number and $k_y$ is the wave number of the ITG mode. The real part of the sheared mean flow dependent term can now be written in the $k_{\perp} \ll 1$ limit, as found in Ref.~\cite{a48},
\begin{eqnarray}
Re(R_0) & \approx & \frac{\gamma_k}{\gamma_k^2+(\Omega-q_xv_{gx})^2} \nonumber \\
& - & 12 q_x^2 (q_f^2 \Phi_f)^2 k_y^2\left( \frac{\gamma_k}{\gamma_k^2+(\Omega - q_x v_{gx})^2}\right)^5.
\end{eqnarray}
This result is valid in the weak shear limit $\gamma_{ZF} > \left< V_E^{\prime}\right>$; in the later numerical calculations the full expression is retained. In expressing the zonal flow growth rate in dimensional form, making use of the relation $\left(\Delta^2_k + \gamma_k^2\right)/\left( 4 \gamma_k^2 \right) N_0 = |\tilde{\phi}|^2$, it is assumed that the mode coupling saturation level is reached~\cite{a40},
\begin{eqnarray}
\tilde{\phi} = \frac{\gamma}{\omega_{\star}}\frac{1}{k_y L_n}
\end{eqnarray}
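The shear-induced suppression contained in Eq. 15 can be made explicit numerically: the correction term is strictly negative, so $Re(R_0)$ decreases monotonically with the shearing amplitude $q_f^2 |\Phi_f|$. A minimal sketch, with illustrative parameter values and $\Omega$ taken real:

```python
# Real part of the response function, Eq. (15), in the weak-shear limit.
def re_R0(gamma_k, Omega, qx, vgx, ky, shear):
    # shear = q_f**2 * |Phi_f|, the mean-flow shearing amplitude
    L = gamma_k / (gamma_k**2 + (Omega - qx * vgx)**2)
    return L - 12.0 * qx**2 * shear**2 * ky**2 * L**5

# Illustrative values in the style of the figures below (Omega taken real).
gamma_k, Omega, qx, vgx, ky = 0.45, 0.0, 0.3, -0.3, 0.3
R0_free = re_R0(gamma_k, Omega, qx, vgx, ky, shear=0.0)
R0_sheared = re_R0(gamma_k, Omega, qx, vgx, ky, shear=0.5)
```

For the shear-free case the response is positive; the finite shearing amplitude reduces it, consistent with the suppression discussed in Section IV.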
When calculating the group velocities, the FLR effects in the linear ITG mode physics are important; the real frequency and growth rate are given as follows, from Ref.~\cite{a49},
\begin{eqnarray}
\omega_r & = & \frac{k_y}{2\left( 1 + k_{\perp}^2\right)} \left( 1 - \left(1 + \frac{10\tau}{3} \right) \epsilon_n g - k_{\perp}^2 \left( \alpha_i + \frac{5}{3} \tau \epsilon_n g \right)\right) \\
\gamma & = & \frac{k_y}{1 + k_{\perp}^2} \sqrt{\tau \epsilon_n g\left( \eta_i - \eta_{i th}\right)}
\end{eqnarray}
where $\omega = \omega_r + i \gamma$. The group velocities ($v_{gj} = \partial \omega_r/\partial k_j$) are, in the long wavelength limit ($k^2_{\perp} \ll 1$), given by
\begin{eqnarray}
v_{gx} & = & - k_x k_y \left(1 + \left( 1 + \eta_i\right) \tau - \left(1 + \frac{5 \tau}{3} \right) \epsilon_n g \right) \\
v_{gy} & = & \frac{1}{2} \left( 1 - \left( 1 + \frac{10 \tau}{3}\right) \epsilon_n g\right).
\end{eqnarray}
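As a consistency check, the long wavelength group velocity of Eq. 19 can be compared with a finite-difference derivative of the full real frequency of Eq. 17; the two agree up to $O(k_{\perp}^2)$ corrections. A sketch with illustrative parameter values:

```python
# Real frequency of Eq. (17) and the analytic long-wavelength radial
# group velocity of Eq. (19); a finite-difference derivative of Eq. (17)
# should agree with Eq. (19) up to O(k_perp^2) corrections.
def omega_r(kx, ky, tau, eps_n_g, eta_i):
    kp2 = kx**2 + ky**2
    alpha_i = tau * (1.0 + eta_i)
    return (ky / (2.0 * (1.0 + kp2))
            * (1.0 - (1.0 + 10.0 * tau / 3.0) * eps_n_g
               - kp2 * (alpha_i + 5.0 * tau / 3.0 * eps_n_g)))

def v_gx(kx, ky, tau, eps_n_g, eta_i):
    return -kx * ky * (1.0 + (1.0 + eta_i) * tau
                       - (1.0 + 5.0 * tau / 3.0) * eps_n_g)

tau, eps_n_g, eta_i = 1.0, 1.0, 3.0
kx = ky = 0.05                       # long-wavelength regime, k_perp^2 << 1
h = 1.0e-5                           # central-difference step
num = (omega_r(kx + h, ky, tau, eps_n_g, eta_i)
       - omega_r(kx - h, ky, tau, eps_n_g, eta_i)) / (2.0 * h)
ana = v_gx(kx, ky, tau, eps_n_g, eta_i)
```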
\section{Analytical model including parallel ion motion}
It is known that the background turbulence growth rate is only slightly modified by parallel ion motion, whereas there is often a significant effect on the real frequency (a significant increase in $|\omega_r|$). Since the zonal flow dispersion equation depends explicitly on the background real frequency, group velocity and growth rate, a significant effect of the parallel ion motion on the zonal flow generation is to be expected. Accordingly, the previous model for the generation of zonal flows from the ITG background turbulence is extended to include the equation of motion for the ions. The model for the drift waves consists of the following equations:
\newline
ion continuity equation
\begin{eqnarray}
\frac{\partial \tilde{n}}{\partial t} - \left(\frac{\partial}{\partial t} - \alpha_i \frac{\partial}{\partial y}\right)\nabla^2_{\perp} \tilde{\phi} + \frac{\partial \tilde{\phi}}{\partial y} - \epsilon_n g \frac{\partial}{\partial y} \left(\tilde{\phi} + \tau \left(\tilde{n} + \tilde{T}_i \right) \right) + \frac{\partial \tilde{v}_{i||}}{\partial z} = \nonumber \\
- \left[\phi,n \right] + \left[\phi, \nabla^2_{\perp} \phi \right] + \tau \left[\phi, \nabla^2_{\perp} \left( n + T_i\right) \right]
\end{eqnarray}
ion energy equation
\begin{eqnarray}
\frac{\partial \tilde{T}_i}{\partial t} - \frac{5}{3} \tau \epsilon_n g \frac{\partial \tilde{T}_i}{\partial y} + \left( \eta_i - \frac{2}{3}\right)\frac{\partial \tilde{\phi}}{\partial y} - \frac{2}{3} \frac{\partial \tilde{n}}{\partial t} = - \left[\phi,T_i \right] + \frac{2}{3} \left[\phi,n \right]
\end{eqnarray}
parallel ion momentum equation
\begin{eqnarray}
\frac{\partial \tilde{v}_{i||}}{\partial t} & = & - \left(\frac{\partial \phi}{\partial z} + \tau \frac{\partial}{\partial z}\left( \tilde{n} + \tilde{T}_i\right)\right) - \left[\phi,\tilde{v}_{i||} \right].
\end{eqnarray}
Here $\left[ A ,B \right] = \partial A/\partial x \,\partial B/\partial y - \partial A/\partial y \,\partial B/\partial x$ is the Poisson bracket. The quantities are normalized in the same fashion as above, with $\tilde{v}_{i||} = (L_n/\rho_s) v_{i||}/c_s$. The electrons are assumed to be Boltzmann distributed. Note that for the zonal flows $k_{\parallel} = 0$, so that Eq. 23 is identically zero and Boltzmann distributed electrons cannot be used; instead the same model as before is employed, cf. Eq. 8. The dispersion relation for the ITG mode resulting from Eqs. 21--23 is then
\begin{eqnarray}
\left[1 + k_{\perp}^2 \left(1 + \frac{5 \tau}{3} \right) \right] \omega^2 - \left[ 1 - \epsilon_n \left( 1 + \frac{5 \tau}{3} + \alpha_r\right) - k_{\perp}^2 \tau \Gamma \right. \nonumber \\
\left. -i \left( \frac{\epsilon_n s}{2 q} \left(1 + \frac{5 \tau}{3} \right) \right) \right]\omega k_y \nonumber \\
+ \left[ \epsilon_n \left( \Gamma - \alpha_r + \frac{5 \tau^2}{3}k_{\perp}^2 \left( 1 + \eta_i\right)\right) + i\left( \frac{\epsilon_n s}{2 q} \Gamma \right) \right] k_y^2 = 0
\end{eqnarray}
Here $\alpha_r = \frac{5 \tau}{3}$, $\tau = T_i / T_e$ and
\begin{eqnarray}
\Gamma = \tau \left( \eta_i - \frac{2}{3}\right) + \frac{5 \tau}{3} \epsilon_n \left( 1 + \tau \right)
\end{eqnarray}
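For fixed mode-structure parameters, Eq. 24 is a quadratic in $\omega$ with complex coefficients and can be solved directly; a minimal numerical sketch (all parameter values are illustrative) is:

```python
import numpy as np

# Quadratic ITG dispersion relation, Eq. (24), solved for the complex
# frequency omega = omega_r + i*gamma.  Parameter values are illustrative.
def itg_dispersion(ky, kp2, tau, eps_n, eta_i, s, q):
    alpha_r = 5.0 * tau / 3.0
    Gamma = tau * (eta_i - 2.0 / 3.0) + 5.0 * tau / 3.0 * eps_n * (1.0 + tau)
    a2 = 1.0 + kp2 * (1.0 + 5.0 * tau / 3.0)
    a1 = -(1.0 - eps_n * (1.0 + 5.0 * tau / 3.0 + alpha_r)
           - kp2 * tau * Gamma
           - 1j * eps_n * s / (2.0 * q) * (1.0 + 5.0 * tau / 3.0)) * ky
    a0 = (eps_n * (Gamma - alpha_r
                   + 5.0 * tau**2 / 3.0 * kp2 * (1.0 + eta_i))
          + 1j * eps_n * s / (2.0 * q) * Gamma) * ky**2
    return np.array([a2, a1, a0])

coeffs = itg_dispersion(ky=0.3, kp2=0.18, tau=1.0, eps_n=1.0,
                        eta_i=3.0, s=1.0, q=2.0)
roots = np.roots(coeffs)
residuals = [abs(np.polyval(coeffs, w)) for w in roots]
```

The root with positive imaginary part, when present, gives the unstable ITG branch used as input to the zonal flow dispersion relation below.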
Here, the solution has been found using an approximate eigenmode function in the form of the lowest order Hermite polynomial ($n=0$),
\begin{eqnarray}
\delta \phi \propto e^{-z^2/2\sigma^2}
\end{eqnarray}
where
\begin{eqnarray}
\sigma = \frac{i \epsilon_n}{k_{\perp} |s| q \omega}
\end{eqnarray}
Here, $Re(\sigma^{-2}) > 0$ has been imposed as a causality requirement. We now proceed as in Ref.~\cite{a49} and obtain a modified third order dispersion relation for the zonal flow growth rate and real frequency,
\begin{eqnarray}
\left(\Omega + i \mu q_x^2 \right)\left(\Omega - q_x \frac{\partial \omega_r}{\partial k_x}\right)^2 = - q_x^2 \left(1 + \tau + \tau \delta \right) k_y^2 R_0 |\tilde{\phi}_k|^2 \Omega
\end{eqnarray}
where $\omega_r$, $\gamma_k$ and the numerical derivative $\frac{\partial \omega_r}{\partial k_x}$ are found from the dispersion relation (Eq. 24), $R_0$ is the same factor as before and where the factor $\delta$ now is modified to,
\begin{eqnarray}
\delta = k_y \frac{\omega_r - \frac{7}{3}\tau \epsilon_n g k_y}{(\omega_r - \frac{7}{3}\tau \epsilon_n g k_y)^2 + \gamma_k^2} \left(\eta_i - \frac{2}{3} \left(1 + \tau \right) \epsilon_n g\right).
\end{eqnarray}
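With $\omega_r$, $\gamma_k$ and $\partial \omega_r / \partial k_x$ obtained from Eq. 24, Eq. 28 is a cubic in $\Omega$ once the right hand side is frozen. A minimal sketch, treating $R_0$ and the saturation amplitude $|\tilde{\phi}_k|^2$ as given constants, i.e. neglecting the $\Omega$-dependence of the response function (an assumption made only for this illustration):

```python
import numpy as np

# Cubic zonal-flow dispersion relation, Eq. (28).  R0 and the saturated
# amplitude |phi_k|^2 are treated as fixed input constants here, so the
# Omega-dependence of the response function is neglected in this sketch.
def zf_polynomial(qx, ky, mu, tau, delta, domega_dkx, R0, phi2):
    K = qx**2 * (1.0 + tau + tau * delta) * ky**2 * R0 * phi2
    # (Omega + i*mu*qx^2) * (Omega - qx * d(omega_r)/dkx)^2 + K*Omega = 0
    lhs = np.polymul([1.0, 1j * mu * qx**2],
                     np.polymul([1.0, -qx * domega_dkx],
                                [1.0, -qx * domega_dkx]))
    return np.polyadd(lhs, [K, 0.0])

# All parameter values below are illustrative only.
poly = zf_polynomial(qx=0.3, ky=0.3, mu=1.0, tau=1.0, delta=0.5,
                     domega_dkx=-0.3, R0=1.0, phi2=0.05)
roots = np.roots(poly)
residuals = [abs(np.polyval(poly, w)) for w in roots]
```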
The present model, extended to include parallel ion momentum and the mode structure, has some limitations: only the linear drift wave physics is changed, the possible non-linear coupling term in Eq. 28 is not included, and, analogously, the generalized action density derived in Ref.~\cite{a49} for the background turbulence is only approximately correct in the long wavelength limit.
\section{Results and discussion}
The model in Ref.~\cite{a49} is extended by including the effects of the interaction with mean flows and of parallel ion motion on the zonal flow generation. An analytical dispersion relation is derived using the wave kinetic equation approach, taking into account the effect of a shear flow interacting with the generation of zonal flows. The dependence of the zonal flow growth rate and real frequency on the plasma parameters is explored, with special attention given to the explicit dependence on the mean flow shear ($\left< V_E^{\prime}\right> \propto q_f^2 \Phi_f$). Here $q_f$ is the wavenumber and $\Phi_f$ is the electrostatic potential of the sheared mean flow. First, the results from the analytical model derived from the wave kinetic approach, including the effect of a sheared mean flow, are presented (solution to the dispersion relation, Eq. 14).
In Figure 1 the zonal flow growth rate and real frequency are displayed as functions of $\eta_i$ with the collisional damping as a parameter. The other parameters are $q_x = 0.3 = k_x = k_y$, $\tau = 1$ and $\epsilon_n = 1$, with $q_f = 0$ and $|\Phi_f| = 0$. The results are shown for $\mu = 0$ (plus), $\mu = 10$ (boxes) and $\mu = 50$ (rings). In the figure the growth rates are positive while the real frequencies are negative for the zonal flow. In the case of no damping, $\mu = 0$, almost no explicit $\eta_i$ dependence of the zonal flow growth rate and real frequency appears, whereas for increasing damping ($\mu$) modest effects on the growth rate and real frequency $\Omega$ of the zonal flow are exhibited.
\begin{figure}
\includegraphics[height=.3\textheight]{Figure1.eps}
\caption{The zonal flow growth rate and frequency as a function of $\eta_i$ with collisional damping ($\mu$) as a parameter. The other parameters are $q_x = 0.3 = k_x = k_y$, $\epsilon_n = 1.0$, $\tau = 1$, $q_f = 0$ and $|\Phi_f| = 0$. The results are shown for $\mu = 0$ (plus), $\mu = 10$ (boxes) and $\mu = 50$ (rings).}
\end{figure}
Next, in Figures 2 and 3, the zonal flow growth rate (Figure 2) and real frequency (Figure 3) are shown as functions of $\eta_i$ with $|\Phi_f|$ as a parameter. The other parameters are as in Figure 1 with $q_f = 0.1$. The results are shown for $|\Phi_f| = 0$ (plus), $|\Phi_f| = 100$ (boxes) and $|\Phi_f| = 500$ (asterisk). The effect of a mean flow on zonal flow generation is qualitatively similar to that of damping; however, in the mean flow case the growth rates and real frequencies approach the case with no flow, $|\Phi_f| = 0$, for large values of $\eta_i$. In addition, unlike the damping, the sheared mean flow is also responsible for a significant suppression of the zonal flow growth rate for small $\eta_i$-values, down to a certain minimum level determined by the background turbulence. If the background is considered to be close to the marginal state, a total suppression of the zonal flow growth rate is found. Note that the effect of the mean flow enters through the product of $q_f^2$ and $|\Phi_f|$, namely the shearing rate. The resulting zonal flow growth rate reflects that the solution to the dispersion relation consists of two branches, where one branch does not depend on the mean flow damping and the other is significantly damped close to marginal stability. This behavior of the zonal flow growth rate is indicated in Eq. 15, where the mean flow damping appears explicitly and is reduced with increasing $\eta_i$-values (i.e. increasing ITG growth rate). If the sheared mean flow is allowed to interact with the background and act as $E\times B$ shear, the linear ITG growth rate will be reduced, resulting in a reduction of the zonal flow growth rate. Moreover, Eq. 15 indicates that a reduction in the linear growth rate will also result in a stronger direct effect of the sheared mean flow damping on the zonal flows.
\begin{figure}
\includegraphics[height=.3\textheight]{Figure2a.eps}
\caption{(2 and 3) The zonal flow growth rate (Figure 2) and frequency (Figure 3) as a function of $\eta_i$ with $q_f |\Phi_f|$ as a parameter. The other parameters are $q_x = 0.3 = k_x = k_y$, $\epsilon_n = 1.0$ and $\tau = 1$. The results are shown for $q_f |\Phi_f| = 0$ (plus), $q_f |\Phi_f| = 100$ (boxes) and $q_f |\Phi_f| = 500$ (asterisk).}
\end{figure}
\begin{figure}
\includegraphics[height=.3\textheight]{Figure2b.eps}
\caption{(2 and 3) The zonal flow growth rate (Figure 2) and frequency (Figure 3) as a function of $\eta_i$ with $q_f |\Phi_f|$ as a parameter. The other parameters are $q_x = 0.3 = k_x = k_y$, $\epsilon_n = 1.0$ and $\tau = 1$. The results are shown for $q_f |\Phi_f| = 0$ (plus), $q_f |\Phi_f| = 100$ (boxes) and $q_f |\Phi_f| = 500$ (asterisk).}
\end{figure}
In Figures 4 and 5, the zonal flow growth rate (Figure 4) and real frequency (Figure 5) (normalized to the linear ITG growth rate) are displayed as functions of $\epsilon_n$ with the mean flow shear as a parameter. The other parameters are as in Figure 1 with $\eta_i = 4$ and $q_f = 0.1$. The results are shown for $|\Phi_f| = 0$ (asterisk), $|\Phi_f| = 100$ (diamonds) and $|\Phi_f| = 500$ (stars). Note that in the present normalization the expansion term ($k_y q_f^2 |\Phi_f|$) is still small compared to the growth rate in the weak flow case, while it is comparable to the growth rate in the strong flow case. The results show a suppression of zonal flow growth with $|\Phi_f|$. The reason for this is the variation of the radial group velocity ($v_{gx}$) in the total time frame, as in the decorrelation of drift wave propagation by a sheared flow. This weakens the modulational response of the drift waves. The stabilization of zonal flow growth by the sheared flow results in a significant increase of the real frequency ($|\Omega_r|$). The effects resulting from this approach are in qualitative agreement with results reported earlier using other drift wave models~\cite{a48} and using the coherent mode coupling method in ETG background turbulence for generating the zonal flow~\cite{a61}.
\begin{figure}
\includegraphics[height=.3\textheight]{Figure3a.eps}
\caption{(4 and 5) The zonal flow growth rate (Figure 4) and frequency (Figure 5) (normalized to the linear ITG growth rate) as a function of $\epsilon_n$ is shown with mean flow shear as a parameter. The other parameters are $q_x = 0.3 = k_x = k_y$, $\tau = 1$ and $\eta_i = 3$. The results are displayed for $q_f |\Phi_f| = 0$ (asterisk), $q_f |\Phi_f| = 100$ (diamonds) and $q_f |\Phi_f| = 500$ (stars).}
\end{figure}
\begin{figure}
\includegraphics[height=.3\textheight]{Figure3b.eps}
\caption{(4 and 5) The zonal flow growth rate (Figure 4) and frequency (Figure 5) (normalized to the linear ITG growth rate) as a function of $\epsilon_n$ is shown with mean flow shear as a parameter. The other parameters are $q_x = 0.3 = k_x = k_y$, $\tau = 1$ and $\eta_i = 3$. The results are displayed for $q_f |\Phi_f| = 0$ (asterisk), $q_f |\Phi_f| = 100$ (diamonds) and $q_f |\Phi_f| = 500$ (stars).}
\end{figure}
Next, the results using the extended model including parallel ion momentum are treated (solutions to the dispersion relation Eq. 28).
In Figure 6, the ratio of the growth rate to the real frequency is shown as a function of $\eta_i$ with the safety factor ($q$) as a parameter. In the present case the safety factor is varied from $q=2$ (asterisk curve) to $q=8$ (plus curve). The other parameters are as in Figure 1. It is found that the ratio of the growth rate to the frequency decreases with increasing $\eta_i$ (recall that $L_n$ is considered to be fixed). This is suggestive of a transition from a state of stable zonal flows to a state with oscillating zonal flows. It is indicated that in the region close to the linear ITG threshold the zonal flows are stationary and may have a significant stabilizing effect on the background turbulence, whereas at higher $\eta_i$ the zonal flows become oscillatory. In general, in a comparison of the previous model and the present model, the change in zonal flow generation inherently comes from the fact that $|\omega_r|$ is increased in the system including parallel ion momentum, whereas the ITG growth rates are rather similar in both systems.
\begin{figure}
\includegraphics[height=.3\textheight]{Figure4.eps}
\caption{The ratio of the growth rate and frequency $\gamma_{ZF}/\omega_{ZF}$ as a function of $\eta_i$ with safety factor ($q$) as a parameter. The other parameters are $q_x = 0.3 = k_x = k_y$, $\epsilon_n = 1.0$, $\tau = 1$ and $q_f |\Phi_f| = 0$. The results are displayed for $q=2$ (asterisk curve) to $q=8$ (plus curve).}
\end{figure}
\section{Summary}
This work focuses mainly on analytical estimates of the zonal flow growth rate and real frequency under the influence of sheared mean flows and parallel ion motion. An analytical model for the generation of zonal flows by ion-temperature-gradient background turbulence in the presence of a sheared mean flow and parallel ion motion is derived using the wave kinetic approach. The model consists of the ion continuity and ion temperature equations; in addition, the parallel momentum equation for the ions is included. The zonal flow evolution is described by the vorticity equation including a collisional damping of the zonal flow generation. The zonal flow growth rates and real frequencies are scanned for a wide range of plasma parameters.
It was found that the general level of zonal flow growth was suppressed by a sheared background flow, and also that retaining collisions in the analytical model seems to be very important for obtaining realistic results.
The results show a suppression of zonal flow growth with $|\Phi_f|$. This is in agreement with previous research on zonal flow generation using another drift wave model~\cite{a48} and the coherent mode coupling method in ETG drift wave turbulence~\cite{a61}. The reason for this is the variation of the radial group velocity ($v_{gx}$) in the total time frame, as in the decorrelation of drift wave propagation by a sheared flow. This weakens the modulational response of the drift waves. The stabilization of zonal flow growth by the sheared flow results in a significant increase of the real frequency ($|\Omega_r|$).
By introducing collisional damping, a suppression of the zonal flow growth rate is found. In the case of no damping, $\mu = 0$, no explicit $\eta_i$ dependence of the zonal flow growth rate and real frequency appears, whereas with damping a modest effect is exhibited.
In addition, it is found that the parallel ion motion may reduce the ZF generation significantly. This is suggestive of a transition from a state of stable zonal flows to a state with oscillating zonal flows. It is indicated that in the region close to the linear ITG threshold the zonal flows are stable and may have a significant stabilizing effect on the background turbulence, whereas at higher $\eta_i$ the zonal flows become oscillatory. In general, in a comparison of the previous model and the present model, the change in zonal flow generation inherently comes from the fact that $|\omega_r|$ is increased in the system including parallel ion momentum, whereas the ITG growth rates are rather similar in both systems.
\section{Acknowledgment}
This research was supported by the Japan Society for the Promotion of Science (JSPS). The sponsors do not bear any responsibility for the contents of this work. The authors are indebted to Professor J. Li for fruitful discussions.
\newpage
\section{Introduction}
The R\'enyi entropy of the probability density $\rho(\vec{r}), \vec{r} = (x_1 , \ldots , x_D),$ which characterizes the quantum state of a $D$-dimensional physical system is defined \cite{renyi1,renyi2} as
\begin{equation}
\label{eq:renentrop}
R_{q}[\rho] = \frac{1}{1-q}\log W_{q}[\rho], \quad 0<q<\infty, \,\, q \neq 1,
\end{equation}
where the symbol $W_{q}[\rho]$ denotes the frequency or entropic moment of order $q$ of the density given by
\begin{equation}
\label{eq:entropmom2}
W_{q}[\rho] = \int_{\mathbb{R}^D} [\rho(\vec{r})]^{q}\, d\vec{r}.
\end{equation}
These quantities completely characterize the density $\rho(\vec{r})$ \cite{romera_01,jizba2016} under certain conditions. They quantify numerous facets of the spreading of the quantum probability density $\rho(\vec{r})$, which include the intrinsic randomness (uncertainty) and the geometrical profile of the quantum system. The R\'enyi entropies are closely related to the Tsallis entropies \cite{tsallis} $T_{p}[\rho] = \frac{1}{p-1}(1-W_{p}[\rho]), 0<p<\infty,\, p\neq1$ by $T_{p}[\rho] = \frac{1}{1-p}[e^{(1-p)R_{p}[\rho]}-1]$. Moreover for the special cases $q=0,1,2,$ and $\infty $, the Rényi entropic power, $N_{q}[\rho]=e^{R_{q}[\rho]}$, is equal to $\text{the length of the support}, e^{-\langle \ln \rho \rangle}, \langle \rho \rangle^{-1}, \rho_{max}^{-1}$, respectively.
Therefore, these $q$-entropies include the Shannon entropy \cite{shannon}, $S[\rho] = \lim_{p\rightarrow 1} R_{p}[\rho] = \lim_{p\rightarrow 1} T_{p}[\rho]$, and the disequilibrium, $\langle\rho\rangle =\exp(- R_{2}[\rho])$, as two important particular cases.
The use of R\'enyi, Shannon and Tsallis entropies as measures of uncertainty allow a wider quantitative range of applicability than the Heisenberg-like measures which are based on the moments around the origin (so, including the standard or root-square-mean deviation). This permits, for example, a quantitative discussion of quantum uncertainty relations further beyond the conventional Heisenberg-like uncertainty relations \cite{hall,dehesa_sen12,bialynicki2,vignat,zozor2008,guerrero11,puertas2017}. The properties of the Rényi entropies and their applications have been widely analyzed; see e.g. \cite{aczel,leonenko,bialynicki3} and the reviews \cite{dehesa_sen12,jizba,jizba_2004b}.
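As a quick numerical illustration of these definitions (a sketch assuming a one-dimensional standard Gaussian density, chosen purely for illustration; the function names are ours), one can evaluate $W_q$ by quadrature and check both the disequilibrium identity $R_2[\rho]=-\log\langle\rho\rangle$ and the limit $R_q\to S[\rho]$ as $q\to 1$:

```python
import math

def renyi_entropy(rho, q, half=12.0, n=48000):
    """R_q[rho] = log(W_q)/(1 - q), with the entropic moment
    W_q = integral of rho(x)**q evaluated by a trapezoidal rule."""
    h = 2.0 * half / n
    vals = [rho(-half + i * h) ** q for i in range(n + 1)]
    w_q = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    return math.log(w_q) / (1.0 - q)

gauss = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

# R_2 = -log<rho> (disequilibrium), and R_q -> Shannon entropy as q -> 1
shannon = 0.5 * math.log(2.0 * math.pi * math.e)   # exact S for this Gaussian
print(renyi_entropy(gauss, 2.0))                   # ~ 0.5*log(2*pi) + 0.5*log(2)
print(renyi_entropy(gauss, 1.000001), shannon)     # nearly equal
```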
In general, the R\'enyi entropies of quantum systems cannot be determined in an exact way, basically because the associated wave equation is generally not solvable in an analytical way. Even when the time-independent Schr\"{o}dinger equation is solvable, which happens only for a small set of elementary potentials (zero-range, harmonic, Coulomb) \cite{albeverio,dong2011}, the exact determination of the R\'enyi entropies is a formidable task, mainly because they are integral functionals of some special functions of applied mathematics \cite{nikiforov} (e.g., orthogonal polynomials, hypergeometric functions, Bessel functions,...) which control the wavefunctions of the stationary states of the quantum system. These integral functionals have not yet been solved for harmonic (i.e., oscillator-like) systems except for a few lowest-lying states (where the calculation is trivial) and, most recently, for the extreme Rydberg (i.e., highest-lying) \cite{aptekarev2016,dehesa2017,tor2016b} and pseudoclassical (i.e., the highest dimensional) \cite{tor2017b,puertas2017,temme2017} states of harmonic and Coulomb systems by means of sophisticated asymptotical techniques of orthogonal polynomials. This gap is striking because harmonicity is the most frequent and useful approximation to study quantum many-body systems, and the other two basic classes of uncertainty measures, the Heisenberg-like measures \cite{ray,drake,hey,assche,andrae,tarasov,zozor,suslov} and the Fisher information \cite{romera2005}, have already been calculated for all stationary states of the multidimensional harmonic system.\\
In this work we determine the exact values of the R\'enyi uncertainty measures of the $D$-dimensional harmonic system (i.e., a particle moving under the action of a quadratic potential) for all ground and excited quantum states directly in terms of $D$, the potential strength and the hyperquantum numbers which characterize the states.
This is a far more difficult problem than the Heisenberg-like and Fisher information cases, both analytically and numerically. The latter is basically because a naive numerical evaluation using quadratures is not convenient due to the increasing number of integrable singularities when the principal hyperquantum number is increasing, which spoils any attempt to achieve reasonable accuracy even for rather small hyperquantum numbers \cite{buyarov}.
The structure of the manuscript is the following. In section \ref{sec:basics} the wavefunctions and the probability densities of the stationary states of the $D$-dimensional harmonic (oscillator-like) system are briefly described in both position and momentum spaces. In section \ref{sec:renyi} the R\'enyi entropies for all the ground and excited states of this system are determined in an analytical way by use of a recently developed methodology \cite{amc2013}. Finally some conclusions and open problems are given.
\section{The $D$-dimensional harmonic problem}
\label{sec:basics}
In this section we summarize the quantum-mechanical $D$-dimensional problem corresponding to the harmonic oscillator potential
\begin{equation}
V(r) = \frac{1}{2}k(x_{1}^{2}+\ldots + x_{D}^{2}) = \frac{1}{2}kr^{2},
\end{equation}
and we give the probability densities of the stationary quantum states of the system in both position and momentum spaces. The stationary bound states of the system, which are the physical solutions of the Schr\"{o}dinger equation
\begin{equation}\label{schrodinger}
\left( -\frac{1}{2} \vec{\nabla}^{2}_{D} + V(r)\right) \Psi \left( \vec{r} \right) = E \Psi \left(\vec{r} \right),
\end{equation}
(we use atomic units throughout the paper) where $\vec{\nabla}_{D}$ denotes the $D$-dimensional gradient operator, are well known \cite{gallup,louck60a,yanez1994} to be characterized by the energies
\begin{equation}
\label{HOEL}
E_{N} = \left(N + \frac{D}{2}\right) \omega
\end{equation}
where
\[
\omega = \sqrt{k}, \quad N = \sum_{i=1}^{D}n_{i} \quad \text{with} \quad n_{i}=0,1,2,\ldots
\]
The corresponding eigenfunctions can be expressed as
\begin{equation}
\label{HOEF}
\psi_{N}(\vec{r}) = \mathcal{N} e^{-\frac{1}{2}\alpha(x_{1}^{2}+\ldots+x_{D}^{2})}H_{n_{1}}(\sqrt{\alpha}\, x_{1})\cdots H_{n_{D}}(\sqrt{\alpha}\, x_{D}), \quad \alpha = \sqrt{k} = \omega
\end{equation}
where $\vec r\in\mathbb R^D$ and $\mathcal{N}$ stands for the normalization constant
\[
\mathcal{N} = \frac{1}{\sqrt{2^{N}n_{1}!n_{2}!\cdots n_{D}! }}\left(\frac{\alpha}{\pi}\right)^{D/4},
\]
and $H_{n}(x)$ denotes the Hermite polynomials of degree $n$ orthogonal with respect the weight function $\omega(x) = e^{-x^{2}}$ in $(-\infty, \infty)$.\\
Then, the associated quantum probability density in position space is given by
\begin{equation}
\label{HOPD}
\rho_{N}(\vec{r}) = |\psi_{N}(\vec{r})|^{2} = \mathcal{N}^{2} e^{-\alpha(x_{1}^{2}+\ldots+x_{D}^{2})}H_{n_{1}}^{2}(\sqrt{\alpha}\, x_{1})\cdots H_{n_{D}}^{2}(\sqrt{\alpha}\, x_{D}),
\end{equation}
and the density function in momentum space is obtained by squaring the Fourier transform of the position wavefunction, obtaining
\begin{align}
\label{HOMPD}
\gamma_{N}(\vec{p}) & = \mathcal{\tilde{N}}^{2} e^{-\frac{1}{\alpha}(p_{1}^{2}+\ldots+p_{D}^{2})}H_{n_{1}}^{2}\left(\frac{ p_{1}}{\sqrt{\alpha}}\right)\cdots H_{n_{D}}^{2}\left(\frac{ p_{D}}{\sqrt{\alpha}}\right)
= \alpha^{-D}\rho_{N}\left(\frac{\vec{p}}{\alpha}\right)
\end{align}
where $\vec p\in\mathbb R^D$ and the normalization constant is
\[
\mathcal{\tilde{N}} = \frac{1}{\sqrt{2^{N}n_{1}!\cdots n_{D}! }}\left(\frac{1}{\pi\alpha}\right)^{D/4}.
\]
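Before turning to the entropies, a quick sanity check of these densities (a sketch assuming atomic units, $D=1$ and $\alpha=1$; the recurrence-based Hermite evaluator is an illustrative helper, not part of the paper):

```python
import math

def hermite(n, x):
    """Physicists' Hermite polynomial H_n(x) from the recurrence
    H_{n+1}(x) = 2x H_n(x) - 2n H_{n-1}(x)."""
    h0, h1 = 1.0, 2.0 * x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * k * h0
    return h1

def rho(n, x, alpha=1.0):
    """One-dimensional harmonic-oscillator density |psi_n(x)|^2."""
    norm2 = math.sqrt(alpha / math.pi) / (2.0 ** n * math.factorial(n))
    return norm2 * math.exp(-alpha * x * x) * hermite(n, math.sqrt(alpha) * x) ** 2

# normalization check for the n = 3 state
h = 1e-3
total = h * sum(rho(3, i * h) for i in range(-12000, 12001))
print(total)   # ~ 1.0
```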
\section{R\'enyi entropies of the harmonic system}
\label{sec:renyi}
Let us now determine the R\'enyi entropy of the $D$-dimensional harmonic system according to Eqs. \eqref{eq:renentrop}-\eqref{eq:entropmom2} by
\begin{align}
\label{HORE}
R_{q}[\rho_{N}] &= \frac{1}{1-q}\log \int_{-\infty}^{\infty} dx_{1}\ldots \int_{-\infty}^{\infty} dx_{D} \, [\rho_{N}(\vec{r})]^{q} \nonumber \\
& = \frac{1}{1-q}\log\left( \mathcal{N}^{2q}\int_{-\infty}^{\infty} e^{-\alpha q x_{1}^{2}}|H_{n_{1}}(\sqrt{\alpha}\, x_{1})|^{2q} \, dx_{1} \cdots \int_{-\infty}^{\infty} e^{-\alpha q x_{D}^{2}}|H_{n_{D}}(\sqrt{\alpha}\, x_{D})|^{2q}\, dx_{D} \right)
\end{align}
where we have used Eq. \eqref{HOPD}. To calculate these $D$ integral functionals of Hermite polynomials we will follow the 2013-dated technique (only valid for $q\in\mathbb{N}$ other than unity) \cite{srivastava,niukkanen,amc2013} to evaluate similar integral functionals of hypergeometric orthogonal polynomials by means of multivariate special functions. To do so, first we express the Hermite polynomials in terms of the Laguerre polynomials (see e.g., \cite{olver}) as
\begin{eqnarray}
\label{HfL}
H_{2n}(x) &=& (-1)^{n} 2^{2n}n!L_{n}^{-\frac{1}{2}}(x^{2}), \nonumber \\
H_{2n+1}(x) &=& (-1)^{n} 2^{2n+1}n!xL_{n}^{\frac{1}{2}}(x^{2}),
\end{eqnarray}
which allows us to write
\begin{equation}
\label{HpfL}
\left[H_{n}(\sqrt{\alpha}x)\right]^{2q} = A_{n,q}(\nu)\, \alpha^{q\nu}x^{2q\nu} \left[L_{\frac{n-\nu}{2}}^{(\nu-\frac{1}{2})}(\alpha x^{2})\right]^{2q},
\end{equation}
with the constant
\[
A_{n,q} (\nu) = 2^{2qn}\left[\Gamma\left(\frac{n-\nu}{2}+1\right) \right]^{2q}
\]
and the parameter $\nu=0(1)$ for even(odd) $n$; that is, $\nu=\frac12\left(1-(-1)^{n}\right).$
\\
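The Hermite--Laguerre connection formulas above are easy to verify numerically (a sketch; both polynomial families are generated from their standard three-term recurrences, and the helper names are ours):

```python
import math

def hermite(n, x):
    h0, h1 = 1.0, 2.0 * x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * k * h0
    return h1

def laguerre(n, a, x):
    """Generalized Laguerre L_n^{(a)}(x) from the recurrence
    (k+1) L_{k+1} = (2k+1+a-x) L_k - (k+a) L_{k-1}."""
    l0, l1 = 1.0, 1.0 + a - x
    if n == 0:
        return l0
    for k in range(1, n):
        l0, l1 = l1, ((2.0 * k + 1.0 + a - x) * l1 - (k + a) * l0) / (k + 1.0)
    return l1

# H_{2n}(x)   = (-1)^n 2^{2n}   n! L_n^{(-1/2)}(x^2)
# H_{2n+1}(x) = (-1)^n 2^{2n+1} n! x L_n^{(1/2)}(x^2)
for n in range(5):
    c = (-1.0) ** n * 4.0 ** n * math.factorial(n)
    for x in (0.3, 1.1, 2.5):
        even = c * laguerre(n, -0.5, x * x)
        odd = 2.0 * c * x * laguerre(n, 0.5, x * x)
        assert abs(hermite(2 * n, x) - even) <= 1e-9 * max(1.0, abs(even))
        assert abs(hermite(2 * n + 1, x) - odd) <= 1e-9 * max(1.0, abs(odd))
print("Hermite-Laguerre connection verified for n = 0..4")
```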
Following the same steps as in \cite{amc2013}, after the change of variable $t_{i}=\alpha q x_{i}^{2}$ in \eqref{HORE}, one obtains the following linearization relation for the $(2q)$-th power of the Hermite polynomials
\begin{equation}
\label{linforH2}
\left[H_{n}\left(\sqrt{\alpha}\, x\right)\right]^{2q} = A_{n,q}(\nu)q^{-q\nu}\sum_{j=0}^{\infty}\frac{(-1)^{j}}{2^{2j } j!}c_{j}\left(q\nu,2q,\frac{1}{q},\frac{n-\nu}{2},\nu-\frac{1}{2},-\frac{1}{2} \right) H_{2j}(\sqrt{\alpha q}x),
\end{equation}
with
\begin{align}
\hspace{-3cm}c_{j}&\left( q\nu,2q,\frac{1}{q},\frac{n-\nu}{2},\nu-\frac{1}{2},-\frac{1}{2} \right)= \nonumber \\
& = \left(\frac{1}{2}\right)_{q\nu} \binom{\frac{n+\nu-1 }{2}}{\frac{n-\nu}{2}}^{2q} F_{A}^{(2q+1)}\left( \begin{array}{cc}
q\nu+\frac{1}{2} ; \overbrace{\frac{\nu-n }{2}, \ldots, \frac{\nu - n}{2}}^{2q}, -j & \\[-3.5em]
&; \underbrace{\frac{1}{q}, \ldots, \frac{1}{q}}_{2q},1\\[-3.5em]
\underbrace{\nu + \frac{1}{2}, \ldots, \nu+\frac{1}{2}}_{2q},\frac{1}{2} & \\
\end{array}\right),\nonumber\\
\end{align}
where $(z)_a = \frac{\Gamma(z+a)}{\Gamma(z)}$ is the Pochhammer symbol and $F_{A}^{(2q+1)}(\frac{1}{q}, \ldots, \frac{1}{q},1)$ is the Lauricella function of type A of $2q+1$ variables, given by
\begin{align}
\label{HOLF}
F_{A}^{(2q+1)}\left( \begin{array}{cc}
q\nu+\frac{1}{2} ; \frac{\nu-n }{2}, \ldots, \frac{\nu - n}{2},-j & \\[-3.5em]
&; \frac{1}{q}, \ldots, \frac{1}{q},1\\[-3.5em]
\nu+ \frac{1}{2}, \ldots, \nu+\frac{1}{2},\frac{1}{2} & \\
\end{array}\right) &=\nonumber\\
&\hspace{-7cm}= \sum_{k_{1}, \ldots, k_{2q}, k_{2q+1}=0 }^{\infty} \frac{\left(q\nu+\frac{1}{2}\right)_{k_{1}+\ldots+k_{2q}+ k_{2q+1}} (\frac{\nu-n }{2})_{k_{1}} \cdots (\frac{\nu-n }{2})_{k_{2q}}(-j)_{ k_{2q+1}} }{(\nu+ \frac{1}{2})_{k_{1}} \cdots (\nu + \frac{1}{2})_{k_{2q}}\left(\frac{1}{2}\right)_{ k_{2q+1}} } \frac{\left(\frac{1}{q}\right)^{k_{1}} \cdots \left(\frac{1}{q}\right)^{k_{2q}}}{k_{1}!\cdots k_{2q}! k_{2q+1}!} .
\end{align}
Now, the combination of Eqs. \eqref{HORE} and \eqref{linforH2} together with the orthogonality condition of the Hermite polynomials $H_{n}(x)$ (with which one realizes that all the summation
terms vanish except the one with $j=0$), allows one to write the exact Rényi entropy of the harmonic system as
\begin{align}
\label{HORE1}
R_{q}[\rho_{N}] &=\frac{1}{1-q}\log \left[\mathcal{N}^{2q} \left(\frac{\pi}{\alpha}\right)^{\frac{D}{2}}q^{-\frac{D}{2}}\prod_{i=1}^{D}q^{-q\nu_{i}}A_{n_{i},q}(\nu_{i})\, c_{0}\left( q\nu_{i},2q,\frac{1}{q},\frac{n_{i}-\nu_{i}}{2},\nu_{i}-\frac{1}{2},-\frac{1}{2} \right)\right] \nonumber \\
&\hspace{-1cm}= \frac D2 \log\left[\frac{\pi}{\alpha}\right]+\frac{1}{q-1}\log \left[2^{qN} q^\frac D2 \right] +\frac{1}{1-q}\sum_{i=1}^{D}\log\left[\frac{A_{n_{i},q}(\nu_{i})}{q^{q\nu_{i}}\Gamma(n_i+1)^q}\, c_{0}\left( q\nu_{i},2q,\frac{1}{q},\frac{n_{i}-\nu_{i}}{2},\nu_{i}-\frac{1}{2},-\frac{1}{2} \right) \right]
\end{align}
with
\begin{align}
c_{0}&\left( q\nu,2q,\frac{1}{q},\frac{n-\nu}{2},\nu-\frac{1}{2},-\frac{1}{2} \right) = \left(\frac{1}{2}\right)_{q\nu} \binom{\frac{n+\nu-1 }{2}}{\frac{n-\nu}{2}}^{2q}\, \mathfrak F_q(n),\nonumber\\
\end{align}
where the symbol $\mathfrak F_q(n)$ denotes the following Lauricella function of $2q$ variables
\begin{align}
\label{HOLF2}
\mathfrak F_q(n)&\equiv
F_{A}^{(2q+1)}\left( \begin{array}{cc}
q\nu+\frac{1}{2} ; \frac{\nu-n }{2}, \ldots, \frac{\nu - n}{2},0 & \\[-3.5em]
&; \frac{1}{q}, \ldots, \frac{1}{q},1\\[-3.5em]
\nu+ \frac{1}{2}, \ldots, \nu+\frac{1}{2},\frac{1}{2} & \\
\end{array}\right) =
F_{A}^{(2q)}\left( \begin{array}{cc}
q\nu+\frac{1}{2} ; \frac{\nu-n }{2}, \ldots, \frac{\nu - n}{2} & \\[-3.5em]
&; \frac{1}{q}, \ldots, \frac{1}{q}\\[-3.5em]
\nu + \frac{1}{2}, \ldots, \nu +\frac{1}{2} & \\
\end{array}\right) &\nonumber\\
&= \sum_{j_{1}, \ldots, j_{2q}=0 }^{\infty} \frac{\left(q\nu+\frac{1}{2}\right)_{j_{1}+\ldots+j_{2q}} (\frac{\nu-n }{2})_{j_{1}} \cdots (\frac{\nu-n }{2})_{j_{2q}} }{(\nu + \frac{1}{2})_{j_{1}} \cdots (\nu+ \frac{1}{2})_{j_{2q}} } \frac{\left(\frac{1}{q}\right)^{j_{1}} \cdots \left(\frac{1}{q}\right)^{j_{2q}}}{j_{1}!\cdots j_{2q}! }&
\nonumber\\
&= \sum_{j_{1}, \ldots, j_{2q}=0 }^{\frac{n-\nu}2} \frac{\left(q\nu+\frac{1}{2}\right)_{j_{1}+\ldots+j_{2q}} (\frac{\nu-n }{2})_{j_{1}} \cdots (\frac{\nu-n }{2})_{j_{2q}} }{(\nu + \frac{1}{2})_{j_{1}} \cdots (\nu + \frac{1}{2})_{j_{2q}} } \frac{\left(\frac{1}{q}\right)^{j_{1}} \cdots \left(\frac{1}{q}\right)^{j_{2q}}}{j_{1}!\cdots j_{2q}! }.
\end{align}
Note that, as $\frac{\nu-n}{2}$ is always a non-positive integer, the Lauricella function simplifies to a finite sum. In the following, for convenience, we use the notation $N_O=\sum_{i=1}^D\nu_i$, which is the number of odd hyperquantum numbers $n_i$; thus, $N_E=D-N_O$ gives the number of even ones. Then simple algebraic manipulations allow us to rewrite Eq. \eqref{HORE1} as
\begin{eqnarray}
\label{HORE3}\nonumber
R_{q}[\rho_{N}]
&=& -\frac D2 \log\left[\alpha\right]+\mathcal K_q\,D+\overline{\mathcal K}_q\,N_O+\frac {q}{q-1}\sum_{i=1}^D(-1)^{n_i}\log\left[\left(\frac{n_i+1}{2}\right)_{\frac12} \right]+\frac{1}{1-q}\sum_{i=1}^D\log\left[\mathfrak F_q(n_i)\right], \nonumber\\
\end{eqnarray}
where $\mathcal K_q=\frac{\log[\pi^{q-\frac12}\,q^\frac12]}{q-1}$ and $\overline{\mathcal K}_q=\frac{1 }{1-q}\log \left[\frac{4^{q}\,\Gamma\left(\frac12+q\right)}{\pi^{\frac{1}{2}}\,q^{q}}\right]$.
This expression allows for the analytical determination of the Rényi entropies (with positive integer values of $q$) for any arbitrary state of the multidimensional harmonic systems.
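As a consistency check of this closed expression, the following sketch (assuming $D=1$, even $n$, atomic units and $\alpha=1$; all function names are ours) implements the finite Lauricella sum and compares the closed form against a direct quadrature of the defining integral for $n=2$, $q=2$:

```python
import math
from itertools import product

def poch(z, a):
    """Pochhammer symbol (z)_a for non-negative integer a."""
    out = 1.0
    for k in range(a):
        out *= z + k
    return out

def lauricella_F(q, n):
    """Finite Lauricella sum F_q(n) for an even hyperquantum number n
    (nu = 0): 2q nested indices, each running from 0 to n/2."""
    top = n // 2
    total = 0.0
    for js in product(range(top + 1), repeat=2 * q):
        term = poch(0.5, sum(js)) * q ** (-sum(js))
        for j in js:
            term *= poch(-n / 2.0, j) / (poch(0.5, j) * math.factorial(j))
        total += term
    return total

def renyi_formula(q, n, alpha=1.0):
    """Closed-form R_q for a 1D state with even n (so N_O = 0)."""
    kq = math.log(math.pi ** (q - 0.5) * math.sqrt(q)) / (q - 1.0)
    half_poch = math.gamma(0.5 * (n + 1) + 0.5) / math.gamma(0.5 * (n + 1))
    return (-0.5 * math.log(alpha) + kq
            + q / (q - 1.0) * math.log(half_poch)     # (-1)^n = +1 for even n
            + math.log(lauricella_F(q, n)) / (1.0 - q))

def hermite(n, x):
    h0, h1 = 1.0, 2.0 * x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * k * h0
    return h1

def renyi_direct(q, n, alpha=1.0, h=1e-3):
    """R_q from direct quadrature of the defining integral."""
    norm2 = math.sqrt(alpha / math.pi) / (2.0 ** n * math.factorial(n))
    w = h * sum((norm2 * math.exp(-alpha * x * x)
                 * hermite(n, math.sqrt(alpha) * x) ** 2) ** q
                for x in (i * h for i in range(-15000, 15001)))
    return math.log(w) / (1.0 - q)

print(renyi_formula(2, 2), renyi_direct(2, 2))   # the two values agree
```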
Finally, for the ground state (i.e., $n_i=0,\,i=1,\cdots, D$; so, $N=0$) the general Eq. \eqref{HORE3} boils down to
\begin{equation}
R_q[\rho_N]=\frac D2\log\left[\frac {\pi\, q^{\frac1{q-1}}}{\alpha}\right].
\end{equation}
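This closed form is easy to confirm by direct quadrature, also at non-integer $q$ (a sketch for $D=1$ and, here, $\alpha=1$; the function name is ours):

```python
import math

def renyi_ground(q, alpha=1.0, h=5e-4, half=12.0):
    """R_q of the 1D ground-state density (alpha/pi)^{1/2} exp(-alpha x^2),
    by direct quadrature; works for any real q > 0, q != 1."""
    m = int(half / h)
    w = h * sum((math.sqrt(alpha / math.pi) * math.exp(-alpha * x * x)) ** q
                for x in (i * h for i in range(-m, m + 1)))
    return math.log(w) / (1.0 - q)

for q in (0.37, 2.0, 5.5):
    closed = 0.5 * math.log(math.pi * q ** (1.0 / (q - 1.0)))   # D = 1, alpha = 1
    print(q, renyi_ground(q), closed)    # the two numbers agree
```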
In fact, this ground state R\'enyi entropy holds for any $q>0$ as one can directly derive from Eq. \eqref{HORE}. Taking into account that the momentum density is a re-scaled form of the position density, we have the following expression for the associated momentum R\'enyi entropy,
\begin{eqnarray}
\label{Remsp}
R_{\tilde q}[\gamma_N] &=& \frac D2 \log\left[\alpha\right]+\mathcal K_{\tilde q}\,D+\overline{\mathcal K}_{\tilde q}\,N_O+\frac {\tilde q}{\tilde q-1}\sum_{i=1}^D(-1)^{n_i}\log\left[\left(\frac{n_i+1}{2}\right)_{\frac12} \right]+\frac{1}{1-\tilde q}\sum_{i=1}^D\log\left[\mathfrak F_{\tilde q}(n_i)\right],\nonumber\\
\end{eqnarray}
($\tilde q\in \mathbb{N}$). Although Eqs. \eqref{HORE3} and \eqref{Remsp} rigorously hold for $q\not=1$ and $q\in\mathbb N$ only, it seems reasonable to conjecture their general validity for any $q>0, \,q\not=1$ provided the formal existence of a generalized function $\mathfrak F_q(n)$. If so, we obtain the general expression for the position-momentum uncertainty Rényi entropic sum as
\begin{eqnarray}\nonumber
R_{q}[\rho_N]+R_{\tilde q}[\gamma_N] &=& (\mathcal K_{q}+\mathcal K_{\tilde q})\,D+(\overline{\mathcal K}_{q}+\overline{\mathcal K}_{\tilde q})\,N_O+\left(\frac {q}{ q-1}+\frac {\tilde q}{\tilde q-1}\right)\sum_{i=1}^D(-1)^{n_i}\log\left[\left(\frac{n_i+1}{2}\right)_{\frac12} \right]
\\
&+&\frac{1}{1- q}\sum_{i=1}^D\log\left[\mathfrak F_{q}(n_i)\right]+\frac{1}{1-\tilde q}\sum_{i=1}^D\log\left[\mathfrak F_{\tilde q}(n_i)\right]
\end{eqnarray}
which verifies the R\'enyi-entropy-based uncertainty relation of Zozor-Portesi-Vignat \cite{zozor2008} when $\frac1q+\frac1{\tilde q}\ge2$ for arbitrary quantum systems.
In the conjugated case $\tilde q=q^*$ such that $\frac1q+\frac1{q^*}=2$, one obtains
\begin{eqnarray}\nonumber
R_{q}[\rho_N]+R_{q^*}[\gamma_N] &=& D\log\left(\pi q^{\frac1{2q-2}}{q^*}^{\frac{1}{2q^*-2}}\right)+(\overline{\mathcal K}_{q}+\overline{\mathcal K}_{ q^*})\,N_O\\
&+&\frac{1}{1- q}\sum_{i=1}^D\log\left[\mathfrak F_{q}(n_i)\right]+\frac{1}{1- q^*}\sum_{i=1}^D\log\left[\mathfrak F_{ q^*}(n_i)\right].
\end{eqnarray}
Let us finally remark that the first term corresponds to the sharp bound for the general Rényi entropy uncertainty relation with conjugated parameters
\begin{equation}\nonumber
R_{q}[\rho_N]+R_{q^*}[\gamma_N] \ge D\log\left(\pi q^{\frac1{2q-2}}{q^*}^{\frac{1}{2q^*-2}}\right)
\end{equation}
of Bialynicki-Birula \cite{bialynicki2} and Zozor-Vignat \cite{vignat}.
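For the ground state one has $N_O=0$ and $\mathfrak F_q(0)=1$, so the entropic sum reduces exactly to this first term, i.e., the bound is saturated. A short numerical sketch (assuming $D=1$ and an arbitrary $\alpha$, which cancels in the sum; the function names are ours):

```python
import math

def r_ground_pos(q, alpha):   # R_q[rho_0] = (1/2) log(pi q^{1/(q-1)} / alpha), D = 1
    return 0.5 * math.log(math.pi * q ** (1.0 / (q - 1.0)) / alpha)

def r_ground_mom(q, alpha):   # momentum density is the alpha -> 1/alpha rescaling
    return 0.5 * math.log(math.pi * q ** (1.0 / (q - 1.0)) * alpha)

q = 3.0
qs = q / (2.0 * q - 1.0)      # conjugate index, 1/q + 1/q* = 2
alpha = 0.7                   # cancels in the sum
lhs = r_ground_pos(q, alpha) + r_ground_mom(qs, alpha)
bound = math.log(math.pi * q ** (0.5 / (q - 1.0)) * qs ** (0.5 / (qs - 1.0)))
print(lhs, bound)             # equal: the ground state saturates the bound
```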
\section{Conclusions}
In this work we have explicitly calculated the R\'enyi entropies, $R_q [\rho_N]$ ($q\in \mathbb{N}$), for all the
quantum-mechanically allowed harmonic states in terms of the Rényi index $q$, the spatial dimension $D$, the oscillator strength $\alpha$, as well as
the hyperquantum numbers, $\{n_{i}\}_{i=1}^{D}$, which characterize the corresponding state's wavefunction. To do that we have used the harmonic wavefunctions in Cartesian coordinates, which can be expressed in terms of a product of $D$ Hermite polynomials and exponentials. So, the R\'enyi entropies of the quantum states boil down to $D$ entropy-like functionals of Hermite polynomials. Then we have determined these integral functionals by taking into account the close connection between the Hermite and Laguerre polynomials and the Srivastava-Niukkanen linearization method for powers of Laguerre polynomials. The final analytical expression of the Rényi entropies with positive integer index $q$ in both position and momentum spaces is given in a compact way by use of a Lauricella function of type A. The extension of this result to R\'enyi entropies with any real value of the parameter $q$ remains an open problem; it requires a completely different approach, still unknown to the best of our knowledge.
\section*{Acknowledgments}
This work has been partially supported by the Project FQM-207 of the Junta de Andaluc\'ia and the MINECO-FEDER grants FIS2014-54497P and FIS2014-59311P. I. V. Toranzo acknowledges the support of ME under the program FPU.
\\
Author contribution statement:
all authors have contributed equally to the paper.
\section*{APPENDIX}
\begin{small}
\vspace{0.4cm}
\center{
\renewcommand\arraystretch{0.6}
\tabcolsep 0in
\begin{table}[h]
\caption{Coefficients of spin-flavor-color operators}
\vspace{0.5cm}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline
\multicolumn{2}{|c|}{$~$} & & & &
\multicolumn{2}{|c|}{$~$} & & & \\
\multicolumn{2}{|c|}{$~$}
&$~ \Delta \Delta~ $& $~\Delta \Delta~ $ & $~$CC$~$&
\multicolumn{2}{|c|}{$~$} & $~ \Delta \Delta ~$ & $ ~\Delta \Delta ~$
&$~$CC$~$ \\
\multicolumn{2}{|c|}{$\hat{O}_{ij}$} & & & &
\multicolumn{2}{|c|}{$\hat{O}_{ij}$} & & & \\
\multicolumn{2}{|c|}{$~$} & $ \Delta \Delta $& CC & CC&
\multicolumn{2}{|c|}{$~$} & $ \Delta \Delta $ & CC& CC
\\
\multicolumn{2}{|c|}{$~$} & & & &
\multicolumn{2}{|c|}{$~$} & & & \\ \hline
& 1 &27 &0 &27 & & $\hat{O}_{ij}$ &9 &0 &9 \\
& $P_{36}$ &-3 &-12 &-21 &$\stackrel{\rightarrow}{\sigma_{i}}\cdot\stackrel{\rightarrow}{\sigma_{j}}$
& $\hat{O}_{ij}P_{36}$ &-1 &-4 &-7
\\\hline
& $\hat{O}_{12}$ & -72 & 0 & -18 & & $\hat{O}_{12}$ & 9 &0 & -9 \\
& $\hat{O}_{36}$ & 0 & 0 & -36
&
&$\hat{O}_{36}$ & -15 &0 & -3 \\
$\lambda_{i}^c\cdot\lambda_{j}^c$& $\hat{O}_{12}P_{36}$ & 8 & 32 & 2 &
$\sum_{k=1}^{3}\lambda^{F}_{i}(k)\lambda^{F}_{j}(k)$
& $\hat{O}_{12}P_{36}$
& -1 &-4 & 11\\
and& $\hat{O}_{13}P_{36}$ & 8 & 32 & 20
&and &$\hat{O}_{13}P_{36}$ &-1 &-4 &5
\\
$~(\stackrel{\rightarrow}{\sigma_{i}}\cdot\stackrel{\rightarrow}{\sigma_{j}})
(\lambda_{i}^c\cdot\lambda_{j}^c)$~ & $\hat{O}_{16}P_{36}$ & 8 & -4 & 20 &
~$(\stackrel{\rightarrow}{\sigma_{i}}\cdot\stackrel{\rightarrow}{\sigma_{j}})
(\sum_{k=1}^{3}\lambda^{F}_{i}(k)\lambda^{F}_{j}(k))
$ & $\hat{O}_{16}P_{36}$ & -1 &8 & 5\\
& $\hat{O}_{14}P_{36}$ & -4 & 2 & 35 & & $\hat{O}_{14}P_{36}$ & 3 &6 & 0 \\
& $\hat{O}_{36}P_{36}$ & -16 & 8 & 32 & & $\hat{O}_{36}P_{36}$ & 7 & 4 & 1
\\\hline
\multicolumn{2}{|c|}{factor} &~$ \frac{1}{27}$ &~$\frac{1}{27}$ &~$ \frac{1}{27}$ &
\multicolumn{2}{|c|}{factor} &~$\frac{1}{9}$ &~$\frac{1}{9}$
&~$\frac{1}{9}$~ \\ \hline
\end{tabular}
\end{table}
\newpage
\vspace{0.4cm}
\renewcommand\arraystretch{0.6}
\tabcolsep 0in
\begin{table}
\caption{Coefficients of spin-flavor-color operators (tensor part)}
\vspace{0.4cm}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline
\multicolumn{2}{|c|}{$~$} & & & &
\multicolumn{2}{|c|}{$~$} & & & \\
\multicolumn{2}{|c|}{$~$}
&$ \Delta \Delta $& $\Delta \Delta $ & CC&
\multicolumn{2}{|c|}{$~$} & $ \Delta \Delta $ & $ \Delta \Delta $& CC \\
\multicolumn{2}{|c|}{$\hat{O}_{ij}$} & & & &
\multicolumn{2}{|c|}{$\hat{O}_{ij}$} & & & \\
\multicolumn{2}{|c|}{$~$} & $ \Delta \Delta $& CC & CC&
\multicolumn{2}{|c|}{$~$} & $ \Delta \Delta $ & CC& CC
\\
\multicolumn{2}{|c|}{$~$} & & & &
\multicolumn{2}{|c|}{$~$} & & & \\ \hline
$ (\stackrel{\rightarrow}{\sigma_{i}}\stackrel{\rightarrow}{\sigma_{j}})_{2}$& $\hat{O}_{ij}$& 54 &0 & 54 & & & & & \\
&$\hat{O}_{ij} P_{36}$ &-6 &-24 &-42 & & & & & \\\hline
& $\hat{O}_{12}$ & -144 & 0 & -36 & & $\hat{O}_{12}$ & 18 &0 & -18 \\
& $\hat{O}_{36}$ & 0 & 0 & -72 & & $\hat{O}_{36}$ & -30 &0 & -6 \\
& $\hat{O}_{12}P_{36}$ & 16 & 64 & 4 & & $\hat{O}_{12}P_{36}$ & -2 &-8 & 22\\
$(\stackrel{\rightarrow}{\sigma_{i}}\stackrel{\rightarrow}{\sigma_{j}})_{2}
(\lambda_{i}^c\cdot\lambda_{j}^c)$ & $\hat{O}_{13}P_{36}$ & 16 & 64 & 40
&$(\stackrel{\rightarrow}{\sigma_{i}}\stackrel{\rightarrow}{\sigma_{j}})_{2}
(\sum_{k=1}^{3}\lambda^{F}_{i}(k)\lambda^{F}_{j}(k))
$ &$\hat{O}_{13}P_{36}$ &-2 &-8 &10 \\
& $\hat{O}_{16}P_{36}$ & 16 & -8 & 40 & & $\hat{O}_{16}P_{36}$ & -2 &16 & 10\\
& $\hat{O}_{14}P_{36}$ & -8 & 4 & 70 & & $\hat{O}_{14}P_{36}$ & 6 &12 & 0 \\
& $\hat{O}_{36}P_{36}$ & -32 & 16 & 64 & & $\hat{O}_{36}P_{36}$ & 14 & 8
& 2 \\ \hline
\multicolumn{2}{|c|}{factor} &$ \frac{1}{27} \sqrt{\frac{14}{5}}$ &$\frac{1}{27}
\sqrt{\frac{14}{5}}$ &$ \frac{1}{27} \sqrt{\frac{14}{5}}$
&\multicolumn{2}{|c|}{factor} &$\frac{1}{9} \sqrt{\frac{14}{5}}$
&$\frac{1}{9} \sqrt{\frac{14}{5}}$ &$\frac{1}{9} \sqrt{\frac{14}{5}}$ \\ \hline
\end{tabular}
\end{table}
\center}
\end{small}
\noindent
and
\begin{eqnarray*}
\langle
\lambda^{F}_{i}(8)\lambda^{F}_{j}(8)
\rangle=\frac{1}{3}\langle 1 \rangle \\
\langle (\stackrel{\rightarrow}{\sigma_{i}}\cdot \stackrel{\rightarrow}
{\sigma_{j}})(
\lambda^{F}_{i}(8)\lambda^{F}_{j}(8)
) \rangle
=\frac{1}{3}\langle \stackrel{\rightarrow}{\sigma_{i}}\cdot
\stackrel{\rightarrow}{\sigma_{j}}\rangle \\
\langle (\stackrel{\rightarrow}{\sigma_{i}}\stackrel{\rightarrow}
{\sigma_{j}})_{2}(
\lambda^{F}_{i}(8)\lambda^{F}_{j}(8)
) \rangle=
\frac{1}{3}\langle( \stackrel{\rightarrow}{\sigma_{i}}
\stackrel{\rightarrow}{\sigma_{j}})_{2}\rangle
\end{eqnarray*}
\end{document}
\section{Introduction}
{\bf Introduction}--Supersymmetry (SUSY) provides a natural solution to the gauge
hierarchy problem in the Standard Model (SM). In the Supersymmetric SMs (SSMs)
with $R$ parity, gauge coupling unification can be achieved, and the Lightest
Supersymmetric Particle (LSP) is a dark matter candidate. However, after the first run
of the Large Hadron Collider (LHC), the former top candidate for physics
beyond the SM, the Minimal SSM (MSSM), has lost a lot of its attraction. One reason is
the discovery of the SM-like Higgs boson with a mass of $125$~GeV~\cite{Chatrchyan:2012ufa,Aad:2012tfa}.
In order to obtain the correct Higgs mass, there are two possibilities in
the MSSM: either there
must be a very large mixing among the supersymmetric partners of top quarks, or
the SUSY breaking soft masses must be much heavier than naively expected. The first possibility
is often disfavoured by charge and colour breaking minima~\cite{Camargo-Molina:2013sta, Blinov:2013fta,
Chowdhury:2013dka, Camargo-Molina:2014pwa, Chattopadhyay:2014gfa}, while the second one raises
the question of whether the MSSM really is a natural solution to the gauge hierarchy problem. This has
caused an increasing interest in non-minimal SUSY models. The focus was mainly on models
which enhance the Higgs at tree level to reduce the fine-tuning (FT)~\cite{BasteroGil:2000bw,
Dermisek:2005gg,Zhang:2008jm,Ellwanger:2011mu,Ross:2011xv,%
Hirsch:2011hg,Ross:2012nr,Gherghetta:2012gb,Perelstein:2012qg,Kim:2013uxa,Kaminska:2014wia,Binjonaid:2014oga}.
In addition, other ideas like $R$-symmetric SSMs with Dirac instead of Majorana gauginos
became much more popular in the last few years~\cite{Fox:2002bu,Chacko:2004mi,Carpenter:2005tz,Antoniadis:2006eb,
Kribs:2007ac,Amigo:2008rc,Benakli:2008pg,Benakli:2009mk,Benakli:2010gi,
Benakli:2011vb,Choi:2010an,Abel:2011dc,Benakli:2011kz,Heikinheimo:2011fk,
Kribs:2012gx,Kalinowski:2011zzc,Davies:2012vu,Goodsell:2012fm,Benakli:2012cy,
Abel:2013kha,Kribs:2013oda,Csaki:2013fla,Busbridge:2014sha,Benakli:2014cia,
Chakraborty:2014sda,Ding:2015wma,Nelson:2015cea,Alves:2015bba,Chakraborty:2015wga,Goodsell:2015ura,Martin:2015eca}.
On the one hand, such models are known to be supersoft since they only give finite contributions
to scalar masses \cite{Fox:2002bu, Kribs:2012gx}. On the other hand, they can reduce existing mass limits
from SUSY searches and weaken bounds from flavour physics \cite{Kribs:2007ac}. It is somewhat surprising
that the electroweak FT (EWFT) question in the SSMs with Dirac gauginos and a specific SUSY breaking mechanism
has not been addressed so far. We shall close this gap here.
The single scale SUSY provides an elegant solution to
the SUSY EWFT problem~\cite{Leggett:2014mza, Leggett:2014hha, Du:2015una}.
In particular, the original conditions for string inspired SSMs are mainly~\cite{Du:2015una}:
(1) The K\"ahler potential and superpotential can be calculated in principle or at least inspired
from a fundamental theory such as string theory with suitable compactifications;
(2) There is one and only one chiral superfield which breaks supersymmetry;
(3) All the mass parameters in the SSMs must arise from supersymmetry breaking. With these conditions,
one can show that the SUSY EWFT measure is automatically of order one. The above conditions seem to be too strong;
we therefore point out that the essential condition is the following: there is one and only one
fundamental mass parameter, and the coefficients that set the different mass scales are calculable. Or, simply speaking,
all dimensionful parameters in the SSMs are correlated. In particular, the requirement on all dimensionful parameters
can be further relaxed to cover only the dimensionful parameters with large EWFT measures,
and we shall call this the effective single scale condition.
In the minimal $R$-symmetric SSM (MRSSM) with Dirac gauginos, we present for the first time
the SUSY breaking soft terms from gauge mediated SUSY breaking (GMSB)~\cite{Benakli:2010gi}
with conformal sequestering~\cite{Luty:2001jh, Luty:2001zv, Murayama:2007ge}, and find
that the naive EWFT measure turns out to be similar to that in the other SSMs,
apart from minor improvements due to the supersoft property
and additional loop contributions to the Higgs boson mass. With our above updated condition
for single scale SUSY, we show a perfect cancellation analytically and numerically between the
dominant terms that contribute to the EWFT. The resulting EWFT measure
can be of order one even for the supersymmetric particle (sparticle) masses in the TeV range.
In particular, it is not
necessary for the dimensionful parameters in the superpotential to be tuned small,
as is usually the case. In a wide range
of the parameter space we find a precise cancellation among different contributions to the
EWFT measures.
{\bf The SUSY Breaking Soft Terms}--The generic new soft terms in the MRSSM are
\begin{align}
\mathcal{L}=(m_D\lambda_i\psi_{A_i}+b_A A^2+h.c.)+m_A^2 \left|A\right|^2~,~
\label{eqn:dirac}
\end{align}
where $\lambda$ is a gaugino, $\psi$ and $A$ are the fermionic and scalar components of
a chiral adjoint superfield, $m_D$ is the Dirac gaugino mass, and
$b_A$ and $m_A^2$ are the holomorphic and non-holomorphic masses, respectively.
In the simplest ansatz, in which the Dirac mass term originates from the operator involving
the gauge field strengths $W_{\alpha}^{\prime}$ and $W_{j}^{\alpha}$,
\begin{align}
W_{ssoft}=\frac{W_{\alpha}^{\prime}W_{j}^{\alpha}A_j}{\Lambda}~,~
\end{align}
a massless scalar in the adjoint representation is predicted~\cite{Csaki:2013fla}. This observation has triggered efforts in constructing
phenomenologically reliable models with Dirac gauginos \cite{Carpenter:2015mna,Alves:2015bba,Alves:2015kia}. In general,
the aim is to get $m_D^2\sim m_A^2\sim b_A$.
However, if $b_A$ and $m_D$ are generated at one loop,
$b_A$ is naturally larger than $m_D^2$ by a loop factor of $16\pi^2$.
To address this $m_D-b_A$ problem and generate
the proper Dirac gaugino and scalar masses, we introduce two pairs of messenger fields
for the GMSB~\cite{Benakli:2010gi}
and consider the
conformal sequestering~\cite{Luty:2001jh, Luty:2001zv, Murayama:2007ge}.
Supposing the hidden sector interactions are strong below
the messenger scale $M_{\text{mess}}$ down to some scale where conformality is broken, we obtain
\begin{align}
m_{D_i} =& \frac{g_i}{16\pi^2} \frac{C_{D_i}\lambda_i}{6{\sqrt 2}} \frac{\Lambda^{\prime 2}_F}{M_{\text{mess}}}~,~
b_{A_i} = -\frac{1}{16\pi^2} \frac{C_{b_i}\lambda^2_i}{2^{\delta_i}} \Lambda^{\prime 2}_F ~,~ \nonumber \\
m^2_{A_i} = & \left(\frac{1}{32\pi^2} \frac{\lambda^2_i}{2^{\delta_i}}+
\frac{1}{128 \pi^4} \sum_i C_i(A_i) g_i^4 \right) C_{A_i} \Lambda^{\prime 2}_F ~,~ \nonumber \\
m^2_{\phi} = & \frac{1}{128 \pi^4} \sum_i C_i(\phi) g_i^4 C_{\phi} \Lambda^{\prime 2}_F ~,~
\end{align}
where $(\delta_1,\delta_2, \delta_3)=(0, 1, 1)$,
$g_i$ and $\lambda_i$ are gauge and Yukawa couplings,
$\phi$ represents scalars not appearing in the adjoint representation,
$C_{D_i/b_i/A_i/\phi}$ is the conformal sequestering suppression factor, and
$C_i(A_i/\phi)$ is the quadratic Casimir index. For simplicity, we assume
$C_{b_i}=C_{A_i} =C_{\phi} \equiv C_{XX}$, and define
\begin{eqnarray}
y_i ~\equiv~ \frac{C_{D_i}\lambda_i}{6{\sqrt 2}} \frac{\Lambda^{\prime 2}_F}{M_{\text{mess}} \Lambda_D}~,~
\Lambda^2_F ~\equiv~ C_{XX} \Lambda^{\prime 2}_F~,~
\end{eqnarray}
where $\Lambda_D$ and $\Lambda_F$ are roughly the same mass scales.
Assuming that $C_{XX} \ll C_{D_i}$ and $10 \lambda_i \leq g^2_1/2{\sqrt 2}\pi$, we approximately have
\begin{align}
\label{eq:Boundary1}
m_{D_i} =& \frac{g_i y_i}{16\pi^2} \Lambda_D ~,~~~ b_{A_i} \simeq 0 ~,~\\
m^2_{A_i/\phi} \simeq & \frac{1}{128 \pi^4} \sum_i C_i(A_i/\phi) g_i^4 \Lambda^{ 2}_F ~.~
\label{eq:Boundary2}
\end{align}
{\bf The MRSSM}--The particle content of the MRSSM is the MSSM extended by adjoint superfields for
all gauge groups necessary to construct Dirac gaugino masses as well as by
two chiral iso-doublets $R_u$ and $R_d$ with $R$ charge $2$ to build $\mu$ like terms.
Thus, the superpotential is
\begin{align}
\nonumber W = & - Y_d \,\hat{d}\,\hat{q}\hat{H}_d\,- Y_e \,\hat{e}\,\hat{l}\hat{H}_d\,
+Y_u\,\hat{u}\,\hat{q}\hat{H}_u + \mu_D\,\hat{R}_d \hat{H}_d\,
\, \\ \nonumber &
+\mu_U\,\hat{R}_u\hat{H}_u\,+\hat{S}(\lambda_d\,\hat{R}_d\hat{H}_d\,+\lambda_u\,\,\hat{R}_u\hat{H}_u) \\
&+ \lambda^T_d\,\hat{R}_d \hat{T}\,\hat{H}_d\,+\lambda^T_u\,\hat{R}_u\hat{T}\,\hat{H}_u \,~.~\,
\label{eq:superpot}
\end{align}
All the other terms are forbidden by the $R$-symmetry, as are the Majorana gaugino masses and trilinear soft-breaking couplings. However, a soft-breaking term
$B_\mu$, which is necessary to give mass to the pseudoscalar Higgs, is allowed by this symmetry.
The tree-level Higgs mass is even smaller than in the MSSM because of negative
contributions from the new $D$-terms proportional to the Dirac gaugino masses. Moreover, the stops cannot be used to push up this mass significantly since all $A$-terms are forbidden by
$R$-symmetry. Nevertheless, it has been shown that the large loop corrections stemming from the new superpotential terms $\lambda_i$ and $\lambda^T_i$ ($i=u,d$)
increase the Higgs mass to the demanded level \cite{Diessner:2014ksa,Diessner:2015yna}. Moreover, this model is consistent with gauge coupling unification \cite{Goodsell:2015ura}. Thus, it
is natural to embed it in a constrained SUSY breaking scenario.
We use the boundary conditions defined in Eqs.~(\ref{eq:Boundary1})--(\ref{eq:Boundary2})
in the limit $\Lambda_D/M \to 0$ to calculate most
soft masses at the conformal scale $M$. Only the soft-mass for the singlet $m_s^2$ and $B_\mu$,
which can also be generated via Yukawa mediations, are derived from the
minimization conditions at the vacuum.
The other two minimization conditions are used to calculate $\mu_D$ and $\mu_U$. In short, we have
the following input parameters
\begin{eqnarray}
\label{eq:input}
& \Lambda_F, \, \Lambda_D, \, M,\, y_i ,\, \lambda_u,\, \lambda_d,\, \lambda^T_u, \, \lambda^T_d, \,
\tan\beta, \, v_s,\, v_T ~,~
\end{eqnarray}
where $\tan\beta \equiv \langle H_u^0 \rangle/\langle H_d^0 \rangle$, and $v_s$ and $v_T$ are the Vacuum
Expectation Values (VEVs) of the singlet and neutral triplet.
Also, we assume $\mu_D$ and $\mu_U$ are positive.
{\bf Naturalness}--To quantify the size of the EWFT, we adopt the measure introduced in Refs.~\cite{Ellis:1986yg, Barbieri:1987fn}
\begin{equation}
\label{eq:measure}
\Delta_{FT} \equiv {\text{Max}}\{\Delta _{\alpha}\},\qquad \Delta _{\alpha}\equiv \left|\frac{\partial \ln
M_Z^{2}}{\partial \ln \alpha} \right| \;,
\end{equation}
where $\alpha$ is a set of independent parameters, and $\Delta_\alpha^{-1}$ gives an estimate of the accuracy to which
the parameter $\alpha$ must be tuned to get the correct electroweak symmetry breaking (EWSB)
scale \cite{Ghilencea:2012qk}. The smaller $\Delta_{FT}$, the more natural the model under consideration is. We use the conformal scale $M$ as a reference scale and calculate the FT with respect to $\{\Lambda_F, \Lambda_D, y_i, \lambda_{d,u}, \lambda^T_{d,u}, \mu_{D,U}, m_s^2, B_\mu\}$.
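In practice, the derivatives in Eq.~(\ref{eq:measure}) are evaluated numerically. A minimal sketch of such an evaluation via a logarithmic finite difference, with a toy power law standing in for the actual $M_Z^2(\alpha)$ obtained from the RGE running:

```python
import math

def ft_measure(mz2, alpha, eps=1e-6):
    """Delta_alpha = |d ln MZ^2 / d ln alpha| by central difference in log space."""
    up = math.log(mz2(alpha * math.exp(eps)))
    dn = math.log(mz2(alpha * math.exp(-eps)))
    return abs((up - dn) / (2.0 * eps))

# Sanity check: for MZ^2 proportional to alpha^k the measure is exactly k.
k = 3.0
delta = ft_measure(lambda a: a**k, alpha=2.0)
print(delta)  # ~ 3.0
```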
For large regions in parameter space, the main EWFT sources are $\mu_U$ and
the scale $\Lambda_F$ because of the impact on the running soft mass $m_{H_u}^2$ responsible for EWSB.
If we only include the terms proportional to top Yukawa coupling in the running, we can estimate the EWFT measures
for these two parameters to be
\begin{align}
\label{eq:FT1}
\Delta_{FT}(\Lambda_F) \approx& \left|\frac{ \sqrt{2} \Lambda^2_F}{384 \pi^4 v^2} \left(32 (-1 + R) + 9 (1 + R) g_2^4 \right)\right|~,~ \\
\label{eq:FT2}
\Delta_{FT}(\mu_U) \approx& \left|R \cdot 4 \sqrt{2} \frac{ \mu^2_U(M)}{v^2} \right|~,~
\end{align}
where $R = e^{3 Y_t^2 \log(M_{SUSY}/M)/(16 \pi^2)}$, and $\mu_U(M)$ is the running value of $\mu_U$ at the conformal scale. For simplicity, we assumed that $Y_t$ does not change significantly between the SUSY breaking and conformal scales,
but our conclusion is independent of this approximation. As usual, one finds that the FT measure increases quickly with
increasing values for the SUSY breaking scale and/or the scale of the dimensionful parameters in the superpotential. For $\mu_U$ in the TeV range, it seems not to be possible to find a FT measure below 100 unless the conformal scale is very low.
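Plugging representative numbers into Eq.~(\ref{eq:FT2}) makes this concrete; in the estimate below, $v \simeq 246$~GeV and the approximation $R \simeq 1$ are assumptions made for this back-of-the-envelope check only:

```python
import math

v = 246.0          # GeV, electroweak VEV (assumed here)
mu_U = 1850.0      # GeV, representative of benchmark BP1 below
R = 1.0            # running factor, approximated as 1 for this estimate

# Eq. (9): Delta_FT(mu_U) ~ |R * 4*sqrt(2) * mu_U^2 / v^2|
delta_mu = R * 4.0 * math.sqrt(2.0) * mu_U**2 / v**2
print(round(delta_mu))
```

The result is a few hundred, the same order as the $\Delta_{FT}(\mu_U)$ reported for BP1 in Table~\ref{tab:BP}, confirming that a TeV-scale $\mu_U$ alone drives the naive measure well above 100.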
Assuming that all the parameters with dimension mass are correlated at the conformal scale,
which is defined as single scale supersymmetry, we have
\begin{eqnarray*}
\Lambda_D \sim \Lambda_F \sim \mu_D \sim \mu_U \sim m_s \sim \sqrt{B_\mu}~.~\,
\end{eqnarray*}
The underlying assumption is: there is one and only one fundamental parameter with dimension mass and
the coefficients to set the different scales are calculable. However, a concrete construction of such a model
is beyond the scope of this letter. As we will show, the single scale SUSY condition can be relaxed further
to the effective conformal sequestering single scale SUSY condition, and
for the following discussion only $\Lambda_F \sim \mu_U$ is necessary since their corresponding EWFT measures
are relatively large while all the rest are small and negligible.
To study the effect of this correlation, we first determine $\mu_U(M)$ from the tadpole equations
in the limit $v_T \to 0$ and $\lambda_u\to0$. We obtain
\begin{align}
\label{eq:FT3}
& \mu_U(M) = \frac{1}{96 \pi^2 (\lambda^{T,2}_d-\lambda^{T,2}_u \tan^2\beta)} \Big(
6 g_2^2 \lambda^T_u \tilde \Lambda_D \tan^2\beta \nonumber \\
& + \sqrt{3 \lambda^{T,2}_d \tan^2\beta \left(12 g_2^4 \tilde \Lambda_D^2 +\lambda^{T,2}_u \Lambda_F^2 R\right) -3 \lambda^{T,4}_d \Lambda_F^2 R} \Big), \nonumber
\end{align}
where $\tilde \Lambda_D \equiv y_2 \Lambda_D$. If we combine Eqs.~(\ref{eq:FT1}) and (\ref{eq:FT2}),
we get the correlated FT measure
\begin{align}
& \Delta^C_{FT} = \frac{\sqrt{2} \lambda^{T,2}_u \Lambda_F^2 \tan^2\beta \, R}{384 \pi^4 v^2 \left(\lambda^{T,2}_d-\lambda^{T,2}_u \tan^2\beta\right)} + \frac{\tilde \Lambda_D}{\Lambda_F} F_1 + \frac{\tilde \Lambda^2_D}{\Lambda^2_F} F_2 \,, \nonumber
\end{align}
where $F_1$ and $F_2$ are functions of $g_2$, $\lambda^T_i$ and $\tan\beta$ which we skip for brevity.
The last two terms can be suppressed in the limit $ \tilde \Lambda_D \ll \Lambda_F$. This is also the preferred limit,
because large $\tilde \Lambda_D$ would cause large wino masses which reduce the tree-level Higgs mass.
The first term becomes very small for $\lambda^T_u \to 0$. We have checked numerically that these estimates
reproduce the correct behaviour to a large extent even if we include the correlation to all other
dimensionful parameters. For this purpose, we implement the model in the \Mathematica package
\SARAH \cite{Staub:2008uz,Staub:2009bi,Staub:2010jh,Staub:2012pb,Staub:2013tta}
and generate Fortran code for \SPheno \cite{Porod:2003um,Porod:2011nf} to calculate the FT measures
using the full two-loop renormalization group equations (RGEs) based on Ref.~\cite{Goodsell:2012fm}. The calculated values
for $\Delta_{FT}(\Lambda_F)$,
$\Delta_{FT}(\mu_U)$ and $\Delta^C_{FT}$ as function of $\Lambda_D$, $\lambda^T_d$,
and $\lambda^T_u$ are shown in Fig.~\ref{fig:FT}.
\begin{figure}[hbt]
\includegraphics[width=0.9\linewidth]{LD_vs_Delta} \\[5mm]
\includegraphics[width=0.9\linewidth]{LamTD_vs_Delta} \\[5mm]
\includegraphics[width=0.9\linewidth]{LamTU_vs_Delta}
\caption{The calculated FT measures $\Delta_{FT}(\mu_U)$ (dashed red), $\Delta_{FT}(\Lambda_F)$ (dashed black), $\Delta^C_{FT}$ (full black line) as function of $\Lambda_D$ (first row), $\lambda^T_d$ (second row) and $\lambda^T_u$ (third row). The other parameters were set to $\Lambda_F = 2.7\times 10^5$~GeV, $\Lambda_D = 2.0\times 10^5$~GeV, $M=10^{12}$~GeV, $\tan\beta=20$, $y_i=(0.6,-0.15,-0.75)$, $\lambda_{d,u} = (-0.78, -0.01)$, $\lambda^T_{d,u} = (-1,0.037)$, $v_s = -5$~GeV, $v_T=-0.25$~GeV.}
\label{fig:FT}
\end{figure}
We find that $\Delta^C_{FT}$ tends to be very small for small $\Lambda_F$ and $\lambda^T_u$ together with large $\lambda^T_d$, while $\Delta_{FT}(\Lambda_F)$ and
$\Delta_{FT}(\mu_U)$ are several orders larger.
Our proposal is completely different from the focus point SUSY often considered
in the MSSM~\cite{Feng:1999mn, Feng:1999zg}: in focus point SUSY, $m_{H_u}^2$ is rather insensitive to the UV parameters because of specific hierarchies in the corresponding $\beta$-function. While this suppresses the FT with respect to $m_{H_u}^2$, one still has to tune $\mu$ to be small in order to obtain a low overall FT. In our proposal, neither a small FT with respect to the $\mu$-term nor cancellations in the running of $m_{H_u}^2$ are needed, because there is
a precise cancellation between these two sources.
We have checked whether this mechanism can be applied to the MSSM with minimal GMSB, and indeed we
have found a good cancellation there for large $\tan\beta$
if we relate the SUSY breaking scale $\Lambda$ and $\mu$ at the messenger scale.
However, this cancellation in the MSSM is not as good as in the MRSSM considered here. The point is that the contributions
from the Majorana gaugino masses to the running of $m_{H_u}^2$ are absent in the MRSSM.
The $\Delta^{C}_{FT}$ in the MSSM is always bigger than in the MRSSM, but it is still well below 100.
We test it numerically by removing the gaugino contribution terms ``by hand'' from the $\beta$-functions of scalars,
and indeed
we can recover a similar cancellation as described here and $\Delta^{C}_{FT}$ drops to very small values.
Thus, the supersoft character of Dirac gauginos together with an underlying correlation among
dimensionful parameters results in a very natural model. The detailed study
for the MSSM will be given elsewhere.
{\bf Benchmark Scenarios}--In the EWFT discussion so far,
we have neglected all other current experimental constraints that must be fulfilled.
In particular, the mass limits on the SUSY particles from direct and indirect searches as well as the measurement
of the SM-like Higgs mass exclude large parameter regions in SUSY models today. We can use the generated \SPheno version
to check all these constraints. It is especially worth pointing out that the Higgs mass is also calculated
at the two-loop level including all model specific contributions in the gaugeless limit \cite{Goodsell:2014bna,Goodsell:2015ira}. Therefore, the theoretical uncertainty is of the same level as in the MSSM and can be estimated to be $O(3~\text{GeV})$.
We show the input and the most important output parameters
for two benchmark scenarios in Table~\ref{tab:BP}.
\begin{table}[h]
\begin{tabular}{|c|c|c|}
\hline
& BP1 & BP2 \\
\hline
\multicolumn{3}{|c|}{Input} \\
\hline
$\Lambda_F~[10^5~{\ensuremath{{\mathrm{GeV}}}}]$ & 2.7 & 2.0 \\
$\Lambda_D~[10^5~{\ensuremath{{\mathrm{GeV}}}}]$ & 2.0 & 2.2 \\
$M [10^7~{\ensuremath{{\mathrm{GeV}}}}]$ & 1.0 & 1.0 \\
$y_i$ & -(-0.63,0.15,0.75) & -(0.45,0.16,1.1) \\
$\lambda_{d,u}$ & -(0.78,0.01) & (-1.45,0.09) \\
$\lambda^T_{d,u}$ & (-1.0, 0.037) & (-1.60,0.07) \\
$\tan\beta$ & 20 & 20 \\
$v_{s,T}~[{\ensuremath{{\mathrm{GeV}}}}]$ & -(5.0,0.25) & -(4.0,0.56) \\
\hline
\multicolumn{3}{|c|}{Output} \\
\hline
$\mu_U~[{\ensuremath{{\mathrm{GeV}}}}]$ & 1850 & \\
\hline
\multicolumn{3}{|c|}{Masses} \\
\hline
$m_h~[{\ensuremath{{\mathrm{GeV}}}}]$ & 123.5 & 122.4 \\
$m_{\tilde g}~[{\ensuremath{{\mathrm{GeV}}}}]$ & 1620.5 &2316.9 \\
$m_{\tilde q}~[{\ensuremath{{\mathrm{GeV}}}}]$ &$\sim$ 3000 & $\sim$ 2300 \\
$m_{\tilde l_R}~[{\ensuremath{{\mathrm{GeV}}}}]$ &$\sim$ 500 & $\sim$400 \\
$m_{\tilde l_L}~[{\ensuremath{{\mathrm{GeV}}}}]$ &$\sim$1000 & $\sim$1000 \\
$m_{\tilde \chi_1^0}~[{\ensuremath{{\mathrm{GeV}}}}]$ &151.2 & 159.0 \\
\hline
\multicolumn{3}{|c|}{$\Delta_{FT}$} \\
\hline
Max($\Delta_{FT}(\lambda)$) & 0.7 & 1.8 \\
$\Delta_{FT}(\Lambda_F)$ & 342.5 & 180.0 \\
$\Delta_{FT}(\Lambda_D)$ & 0.2 & 0.1 \\
$\Delta_{FT}(\mu_U) $ & 342.8 & 186.7 \\
$\Delta_{FT}(\mu_D) $ & 4.2 & 9.2 \\
$\Delta_{FT}(B_\mu) $ & 4.3 & 9.1 \\
\hline
\multicolumn{3}{|c|}{$\Delta^C_{FT}$} \\
\hline
$\Delta^C_{FT}$ & 0.2 & 6.8 \\
\hline
\end{tabular}
\caption{The input parameters, important particle spectra, and EWFT measures
for two benchmark scenarios. Max($\Delta_{FT}(\lambda)$) is the maximal EWFT measure
for $\lambda_{d,u}$ and $\lambda^T_{d,u}$.}
\label{tab:BP}
\end{table}
One sees that the uncorrelated EWFT measures in our model are already smaller than
in the usual MSSM with GMSB.
The reason is the additional loop corrections,
which significantly weaken the need for very heavy stops. For example,
we have very large $\lambda_d$ couplings for BP2. This is similar to
the MSSM extensions with vector-like (s)tops, where the additional loop corrections
cause a significant improvement in the EWFT measure~\cite{Nickel:2015dna}. Also
the two-loop corrections are enhanced due to the presence of scalar octets.
Moreover, the correlated EWFT becomes
much smaller due to the precise cancellation between the contributions from $\Lambda_F$ and $\mu_U$.
For BP1 the resulting EWFT is even smaller than that of the dimensionless parameters.
Because of the slightly larger value of $\lambda^T_{u}$, as expected,
the cancellation for BP2 does not work
as well as for BP1, although $\Delta^C_{FT}$ is still very small.
{\bf Conclusion}--We considered the GMSB with conformal sequestering,
and found that the naive EWFT measures in the MRSSM are similar to those in the other SSMs,
apart from minor improvements due to the supersoft property
and additional loop contributions to the Higgs boson mass.
With the effective single scale SUSY condition that all dimensionful
parameters with large EWFT measures are correlated, we showed explicitly
an excellent cancellation between the
dominant terms that contribute to the EWFT. As we expected,
the correlated EWFT measure is of unit order even
for the TeV-scale supersymmetric particle masses.
{\bf Acknowledgements}--We thank Jessica Goodman for very useful discussions.
This research was supported in part by the Natural Science Foundation of China
under grant numbers 11135003, 11275246, and 11475238 (TL).
\section{Introduction}
\subsection{Motivation and results} \label{sec:motivation}
Predictive scores are increasingly used to guide decisions. Banks use credit scores to set the terms of loans; judges use defendant risk scores to set bail; and online platforms score sellers, businesses, and job-seekers. We now live in a ``scored society" \citep{CitronPasquale2014}. These scores have a common structure: An intermediary gathers data about an agent from different sources and then converts the agent’s features into a score that predicts some latent characteristic. For FICO credit scores, which are used in the majority of consumer lending decisions in the United States, the characteristic is creditworthiness, and the features include credit utilization rate and credit mix.%
\footnote{FICO claims that its scores are used in 90\% of U.S. lending decisions (\texttt{https://ficoscore.com}).}
If, however, a strategic agent understands that she is being scored, she may distort her features to improve her score, without changing her latent characteristic. For example, a consumer can split her spending between two different credit cards to lower her credit utilization rate, without reducing her risk of default. ``The scoring models may not be telling us the same thing that they have historically," according to Mark Zandi, chief economist at Moody's, ``because people are so focused on their scores and working hard to get them up."%
\footnote{``How More Americans Are Getting a Perfect Credit Score," \textit{Bloomberg}, August 14, 2017.}
As scores are introduced to guide high-stakes decisions in new domains, people learn to manipulate them. In the presence of such strategic behavior, what scoring rule induces the most accurate decisions?
To answer this question, I build a model of scoring. The agent being scored---the sender---has multiple manipulable \emph{features}. An intermediary commits to a rule that maps the sender's features into a score. A receiver sees this score and takes a decision. The receiver wants his decision to match the sender’s latent \emph{characteristic}. The sender, however, wants the most favorable decision, and she can distort each of her features at a cost. For each feature, the sender has two dimensions of private information: her \emph{intrinsic level} is the value of the feature if she does not distort it; her \emph{distortion ability} determines her cost of distorting her feature away from the intrinsic level. The sender's latent characteristic is correlated with her intrinsic level on each feature.
The intermediary's problem is to design the scoring rule to make the receiver's decision most accurate. If the sender's features were exogenous, this would be the familiar statistical problem of predicting a latent parameter from observable covariates. But the sender's features are endogenous. The intermediary must consider how the scoring rule motivates the sender to distort her features. Formally, each scoring rule induces a different game between the sender and receiver.
To understand the interaction between the sender and receiver, I first drop the intermediary and suppose that the receiver observes the sender's features. Thus, the sender and receiver play a signaling game, with the sender's features as signals. This \emph{signaling} setting provides a lower bound on the intermediary's payoff from optimal scoring. Since the score set is not restricted, the intermediary can induce this signaling game by using a scoring rule that fully discloses the sender's features.
In the signaling game, the sender's distortion interferes with the receiver's inference about the sender's latent characteristic. I show that there is exactly one equilibrium in linear strategies (\cref{res:existence_uniqueness}). In this equilibrium, each of the sender's features confounds her intrinsic level with her distortion ability. The receiver loses information not because of distortion itself, but because distortion is heterogeneous. Indeed, in the special case of homogeneous distortion ability, the equilibrium is fully separating. In that case, no information is lost, so the optimal scoring rule is full disclosure.
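This mechanism can be illustrated with a stylized one-feature Monte Carlo. The linear-normal specification, the parameter values, and the fixed distortion slope $b$ below are illustrative assumptions, not the equilibrium objects derived in the paper: when ability is homogeneous, distortion is a known constant shift that the receiver can invert exactly, while heterogeneous ability leaves residual noise in the feature.

```python
import random
random.seed(0)

n = 50_000
theta = [random.gauss(0.0, 1.0) for _ in range(n)]   # intrinsic level
g = [random.gauss(1.0, 0.5) for _ in range(n)]       # distortion ability
b = 0.8                                              # illustrative distortion slope

# Homogeneous ability: every sender distorts by the same amount b*1,
# so the feature is a deterministic shift of theta -> fully invertible.
x_hom = [t + b * 1.0 for t in theta]
resid_hom = max(abs(x - b - t) for x, t in zip(x_hom, theta))

# Heterogeneous ability: the distortion b*g confounds theta with g.
x_het = [t + b * gi for t, gi in zip(theta, g)]
# Subtracting the mean distortion is the best the receiver can do;
# the dispersion in ability remains as noise of variance ~ (b * sd_g)^2.
resid_het = sum((x - b - t) ** 2 for x, t in zip(x_het, theta)) / n

print(resid_hom, resid_het)
```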
Despite intuition from decision problems suggesting that more information produces better decisions, I show that aggregating the sender's features into a score transmits better information about the sender's latent characteristic. The key intuition is that, by limiting the receiver's information, the intermediary provides the receiver with partial commitment power. Without commitment, the receiver chooses the decision that is optimal, taking the distribution of the sender's features as given. Commitment allows the receiver to internalize the effect of his strategy on the distribution of the sender's features.
The receiver freely chooses his decision after observing the intermediary's score, so the intermediary cannot give the receiver full commitment power. Nevertheless, I show that for generic covariance parameters, the optimal scoring rule strictly outperforms full disclosure (\cref{res:scoring_equals_screening}). As long as the features are not symmetric, the intermediary improves information transmission by adjusting the relative weights on different features. The optimal scoring rule underweights features on which the sender's distortion ability is most heterogeneous. It overweights other features to ensure that the score is not biased. If the receiver could observe the sender's features ex post, he would change his decision, but from the score alone he cannot disentangle the contribution of each feature. Since the receiver has fewer feasible deviations, his commitment problem is mitigated.
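The underweighting logic has a simple statistical analogue, sketched below as a pure prediction problem that abstracts from the equilibrium feedback central to the paper: if heterogeneous distortion ability enters each feature like independent noise, the best linear predictor attenuates the feature whose "ability noise" has larger variance and compensates on the cleaner one.

```python
# Population best linear predictor of theta from x_i = theta + e_i,
# with Var(theta) = 1 and independent "distortion" noise Var(e_i) = s_i.
def blp_weights(s1, s2):
    # Solve Sigma w = c with Sigma = [[1+s1, 1], [1, 1+s2]] and c = (1, 1):
    # w1 = s2/det, w2 = s1/det, det = (1+s1)(1+s2) - 1.
    det = (1.0 + s1) * (1.0 + s2) - 1.0
    return s2 / det, s1 / det

w1, w2 = blp_weights(s1=0.2, s2=1.0)   # feature 2 has more ability dispersion
print(w1, w2)  # w1 > w2: the noisier feature is underweighted
```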
Finally I consider a \emph{screening} setting in which the receiver commits to a decision as a function of the sender's features. Commitment power allows the receiver to under-react to every feature. As commitment increases---from signaling to scoring to screening---the receiver's decision becomes less sensitive to the sender's features (\cref{res:reduced_distortion}). The receiver benefits from additional commitment, but the welfare comparison for the sender is ambiguous. The receiver underweights features on which distortion ability is most heterogeneous, but these are not necessarily the features on which the sender's distortion is most costly.
\paragraph{Outline}
\cref{sec:literature} reviews related literature. \cref{sec:model} presents the model of scoring. \cref{sec:signaling} analyzes the signaling setting without the intermediary. In \cref{sec:scoring}, I study the intermediary's scoring problem. I characterize what decisions the intermediary can induce and then I optimize over this set. In \cref{sec:comparing_commitment}, I introduce screening and then I compare the three settings---signaling, scoring, and screening. \cref{sec:extensions} extends the baseline model to allow for stochastic scores and for a more general social welfare objective. The conclusion is in \cref{sec:conclusion}. Proofs are in \cref{sec:proofs}. Additional results are in \cref{sec:additional_results}.
\subsection{Related literature} \label{sec:literature}
My model combines \emph{signaling} and \emph{information design}: The sender adjusts her features to signal her private type, and the intermediary designs the receiver's information about the sender's features. In this section I relate my model to these two literatures. In the main text, I connect my results to multi-dimensional cheap talk and to the multitask principal--agent problem.
\paragraph{Signaling}
The foundation of my model is a signaling environment that is doubly multi-dimensional. There are multiple signals, termed features, and for each signal the sender has two dimensions of private information. Earlier papers have studied these two forms of multi-dimensionality in isolation. In \cite{FrankelKartik2019}, \cite{FischerVerrecchia2000}, and \cite{BenabouTirole2006}, the sender controls one signal, and her cost depends on two dimensions of private information. The signal confounds these two dimensions, so there is no separating equilibrium. In \cite{Engers1987} and \cite{QuinziiRochet1985}, there are multiple signals, and for each signal, the sender has a single-dimensional type. They give conditions under which a fully separating equilibrium exists.
In my model, the intermediary's scoring rule blends the sender's features. Two papers study the effect of garbling costly signals. \cite{Rick2013WP} gives conditions under which transmitting signals over a noisy channel can improve social welfare. \cite{Whitmeyer2019WP} shows that in a binary signaling game, the receiver-optimal garbling coincides with the receiver's full commitment solution. These two papers focus on a single signal transmitted over a noisy channel, whereas I study the optimal way to aggregate different signals.%
\footnote{\cite{Perez-RichetSkreta2018WP} study persuasion in a different setting in which the sender's distortion is costless but distortion rates are observable to the receiver.}
I show that the three settings---signaling, scoring, and screening---generally yield different decisions. \cite{FrankelKartik2019WP} compare signaling and screening in their single-feature setting \citep{FrankelKartik2019}. They show that a receiver with commitment power benefits from under-reacting to the sender's single feature.%
\footnote{Computer scientists have studied a similar screening problem of strategic classification \citep{MeirProcacciaRosenschein2012, Dalvi_etal2004,Hardt_etal2016}.}
When there are multiple features, I show that even if the receiver cannot commit, an informational intermediary can provide partial commitment power by aggregating the sender's features into a score.
The three settings that I compare give the same result in the canonical signaling environment of \cite{Spence1973} because the signaling equilibrium is fully separating. But a classical literature compares these settings in the cheap-talk environment of \cite{CrawfordSobel1982}, where the partitional equilibria are not fully separating. There, information transmission can be improved through mediation \citep{Myerson1982,Forges1986} or decision commitment \citep{Holmstrom1977,Holmstrom1984}. I compare these settings in a costly signaling environment that does not admit a fully separating equilibrium.
\paragraph{Information design}
In the growing literature on information design \citep[for surveys, see][]{Kamenica2019,BergemannMorris2019}, the designer controls information about an exogenous state. The designer's policy influences the beliefs of a single agent \citep{KamenicaGentzkow2011} or multiple agents playing a simultaneous-move game \citep{BergemannMorris2013,BergemannMorris2016}, but the information policy does not change the distribution of the state that the designer observes. In my model, the intermediary's scoring rule changes the distribution of the sender's features. This feedback loop is crucial. The intermediary has the same preferences as the receiver, so if the sender's features were exogenous, full disclosure would be trivially optimal.
The few papers studying this feedback loop focus on moral hazard, without private information. In \cite{BoleslavskyKim2018WP}, \cite{RodinaFarragut2016WP}, and \cite{Zapechelnyuk2019WP}, the designer must motivate the sender to exert effort in addition to persuading the receiver. In a dynamic setting, \cite{Rodina2016WP} and \cite{HornerLambertFC} analyze information design in \citeapos{Holmstrom1999} career concerns model. Closer to my paper, \cite{BonattiCisternasFC} study the design of scores about a consumer who buys from a sequence of short-lived monopolists. Ratings that overweight the past, relative to the Bayes-optimal weights, deter consumers from strategically under-consuming. Our settings are different, but we reach the common conclusion that ex post suboptimal weights maximize learning from endogenous signals.
\section{Model} \label{sec:model}
There are three players. The agent being scored is called the sender (she). An intermediary (it) commits to a rule that maps the sender’s features into a score. A receiver (he) observes this score and takes a decision.
The receiver wants to predict the sender's latent \emph{characteristic} $\theta \in \mathbf{R}$. The receiver takes a decision $y \in \mathbf{R}$, and his utility $u_R$ is given by
\[
u_R = - (y - \theta)^2.
\]
Hence the receiver matches his decision with his posterior expectation of $\theta$.
The sender has $k$ manipulable \emph{features} $1, \ldots,k$, where $k > 1$.
For each feature $i$, the sender has an \emph{intrinsic level} $\eta_i \in \mathbf{R}$, and she chooses distortion $d_i \in \mathbf{R}$. Feature $i$ takes the value
\[
x_i = \eta_i + d_i.
\]
For each feature $i$, the sender also has a \emph{distortion ability} $\gamma_i \in \mathbf{R}_+$. Her utility $u_S$ is given by%
\[
u_S
=
y - (1/2) \sum_{i = 1}^{k} d_i^2/ \gamma_i.
\]
The sender wants the decision $y$ to be high, and she experiences a quadratic cost from distorting each feature. Her marginal cost of distorting feature $i$ decreases in her distortion ability $\gamma_i$. In the extreme case $\gamma_i = 0$, she cannot distort feature $i$, so $x_i = \eta_i$.%
\footnote{For $\gamma_i = 0$, the sender's utility is defined by the limit as $\gamma_i$ converges to $0$ from above.}
\begin{figure}
\begin{center}
\begin{tikzpicture}
\tikzstyle{block} = [rectangle,anchor = north, draw, text width=8 em, text centered, rounded corners, minimum height=1.5em]
\node (N) {};
\node (S) [block, below = 1cm of N] {\textbf{Sender} \\ distortion $d \in \mathbf{R}^k$};
\node (R) [block, base right = 4 cm of S] {\textbf{Receiver} \\ decision $y \in \mathbf{R}$};
\node (I) [block, below right = 3 cm of S.center] {\textbf{Intermediary} \\ commits to\\ $f \colon \mathbf{R}^k \to \Delta (S)$};
\path (N) edge [thick,-{Latex[length=2mm]}] node [anchor = east] {$(\eta,\gamma)$} (S.north);
\path (S.south) edge [bend right, thick,-{Latex[length=2mm]}] node [below left] {$x = \eta + d$} (I.west);
\path (I.east) edge [bend right, thick,-{Latex[length=2mm]}] node [below right] {$f(x)$} (R.south);
\end{tikzpicture}
\end{center}
\caption{Flow of information}
\label{fig:flow_of_information}
\end{figure}
The sender chooses her distortion vector $d = (d_1, \ldots, d_k)$ after privately observing her type, which consists of the vectors
\[
\eta = (\eta_1, \ldots, \eta_k)
\quad
\text{and}
\quad
\gamma = (\gamma_1, \ldots, \gamma_k),
\]
called her intrinsic type and distortion type. The sender does not observe her latent characteristic $\theta$, though her type $(\eta,\gamma)$ may reveal it.
The random vector $(\theta, \eta, \gamma)$ has an elliptical distribution with finite second moments. The elliptical distribution generalizes the multivariate Gaussian. Elliptical distributions are flexible enough to accommodate the sign restriction on $\gamma$, yet they retain the property of the Gaussian that conditional expectations are linear. \cref{sec:LCE} formally states this property and makes nondegeneracy assumptions on the covariance. \cref{sec:existence_uniqueness} makes substantive assumptions on the covariance.
Unlike the sender and receiver, the intermediary has commitment power. Before observing the sender's features, the intermediary commits to a score set $S$ and a scoring rule
\[
f \colon \mathbf{R}^k \to \Delta(S),
\]
which assigns to each realized feature vector $x$ a (stochastic) score $f(x)$ in $S$. Here $\Delta (S)$ is the set of probability measures on $S$, but I use $f(x)$ to denote the random score itself.%
\footnote{If $S$ is uncountable, measurability conditions are needed. I state them formally in \cref{sec:measurability}. I will not comment further on measurability in the main text.} The score set $S$ is not restricted. I will show, however, that there is no loss in taking $S$ equal to $\mathbf{R}$.
The intermediary has the same utility function as the receiver, so the intermediary's problem is to maximize the receiver's information about the sender's latent characteristic. An interpretation is that the intermediary is a monopolist who sells a scoring service to the receiver, but I do not explicitly model this.
The intermediary's scoring rule induces a game between the sender and receiver. \cref{fig:flow_of_information} illustrates the flow of information in this game. The sender observes her private type $(\eta, \gamma)$ and then chooses how much to distort each feature. Her distorted feature vector $x$ is realized and observed by the intermediary. The intermediary assigns the score $f(x)$ and passes it to the receiver. The receiver updates his beliefs about the sender's latent characteristic and takes a decision $y$. Finally, payoffs are realized.
\section{Signaling without the intermediary} \label{sec:signaling}
I first drop the intermediary and suppose that the receiver observes the sender's features. Thus, the sender and receiver play a signaling game, with features as signals. \cref{fig:full_disclosure} shows the flow of information in this game. This signaling setting gives a lower bound on the intermediary's payoff from optimal scoring. Since the score set is unrestricted, the intermediary can replicate the signaling game by using a scoring rule that fully discloses the sender's features.
In this section, I first discuss the properties of elliptical distributions, and then I use these properties to construct equilibria.
\begin{figure}
\begin{center}
\begin{tikzpicture}
\tikzstyle{block} = [rectangle,anchor = north, draw, text width=8 em, text centered, rounded corners, minimum height=1.5em]
\node (N) {};
\node (S) [block, below = 1cm of N] {\textbf{Sender} \\ distortion $d \in \mathbf{R}^k$};
\node (R) [block, base right = 4 cm of S] {\textbf{Receiver} \\ decision $y \in \mathbf{R}$};
\path (N) edge [thick,-{Latex[length=2mm]}] node [anchor = east] {$(\eta,\gamma)$} (S.north);
\path (S.east) edge [thick,-{Latex[length=2mm]}] node [above] {$x = \eta + d $} (R.west);
\end{tikzpicture}
\end{center}
\caption{No intermediary}
\label{fig:full_disclosure}
\end{figure}
\subsection{Elliptical distributions} \label{sec:LCE}
The elliptical distribution generalizes the multivariate Gaussian. For a Gaussian distribution with mean $\mu$ and variance matrix $\Sigma$, each isodensity curve is an ellipse centered at $\mu$ whose shape is determined by $\Sigma$. The density on each curve decays exponentially in the square of the radius. Elliptical distributions have the same isodensity curves, but the radial density function is unrestricted. In particular, if the radial density function vanishes beyond a certain value, then the distribution is supported inside some ellipsoid. One example is a uniform distribution on the interior of an ellipsoid. Other examples are the Student distribution and the Laplace distribution.%
\footnote{These distributions have full support, but the mean and covariance parameters can be chosen so that, with high probability, $\gamma$ is nonnegative. If we truncate the distribution by setting the density to zero whenever it is below a fixed tolerance, we get an elliptical distribution for which $\gamma$ is nonnegative.}
Denote the mean and variance of $(\theta, \eta, \gamma)$ by
\[
\mu =
\begin{bmatrix}
\mu_\theta \\
\mu_\eta \\
\mu_\gamma
\end{bmatrix}
\quad
\text{and}
\quad
\Sigma =
\begin{bmatrix}
\sigma_\theta^2 & \Sigma_{\theta \eta} & \Sigma_{\theta \gamma} \\
\Sigma_{\eta \theta} & \Sigma_{\eta \eta} & \Sigma_{\eta \gamma } \\
\Sigma_{\gamma \theta} & \Sigma_{\gamma \eta} & \Sigma_{\gamma \gamma}
\end{bmatrix}.
\]
I do not specify the radial density function, as it will not play a role in my analysis. It is implicitly restricted, however, by the assumption that $\gamma$ is nonnegative. I make the following standing nondegeneracy assumptions. Denote the Moore--Penrose inverse of $\Sigma_{\gamma \gamma}$ by $\Sigma_{\gamma \gamma}^{\dagger}$. Assume that the matrix $\Sigma_{\eta \eta} - \Sigma_{\eta \gamma} \Sigma_{\gamma\g}^{\dagger} \Sigma_{\gamma \eta}$ has full rank. This ensures that the conditional variance of $\eta$ given $\gamma$ has full rank. Assume also that at least one of the $2k$ components of $(\Sigma_{\eta \theta}, \Sigma_{\gamma \theta})$ is nonzero. Thus, the sender's type provides some information about her latent characteristic.
Next I introduce notation for linear regression. Given a random variable $Y$ and a random $p$-vector $X$, the (population) regression problem is to choose an intercept $b_0$ and a coefficient vector $b$ in $\mathbf{R}^p$ to minimize the expected square error
\[
\E\bigl[ (Y - b_0 - b^T X)^2 \bigr].
\]
As long as $X$ and $Y$ are square integrable and $\var(X)$ has full rank, the minimizers $b_0^\star$ and $b^\star$ are unique and given by
\[
b^\star = \var^{-1}(X) \cov(X,Y),
\qquad
b_0^\star = \E [Y] - (b^\star)^T \E [X].
\]
I denote the $p$-vector of regression coefficients by
\[
\reg (Y | X) = \var^{-1}(X) \cov(X,Y).
\]
This regression vector is a column vector, and I denote its transpose by $\reg ^T (Y | X)$. Next I state the key property of elliptical distributions.
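To make the regression notation concrete, here is a minimal numerical sketch (hypothetical covariance numbers; numpy assumed): it computes $\reg(Y \mid X)$ and the intercept from a joint covariance matrix and checks that, by construction, the population residual is uncorrelated with $X$.

```python
import numpy as np

# Joint covariance of (Y, X1, X2) and means; hypothetical numbers.
Sigma = np.array([
    [9.0, 4.0, 1.0],   # var(Y),  cov(Y, X)
    [4.0, 2.0, 1.0],
    [1.0, 1.0, 2.0],
])
mu = np.array([0.5, 1.0, -1.0])

var_X = Sigma[1:, 1:]
cov_XY = Sigma[1:, 0]

# reg(Y | X) = Var(X)^{-1} Cov(X, Y), and the matching intercept
b = np.linalg.solve(var_X, cov_XY)
b0 = mu[0] - b @ mu[1:]

# First-order condition: the population residual is uncorrelated with X
residual_cov = cov_XY - var_X @ b
print(b, b0, residual_cov)
```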
\begin{lem} [Linear conditional expectations] \label{res:LCE}
Let $X = A \eta + B \gamma$ for some matrices $A$ and $B$. If\/ $\var(X)$ has full rank, then
\[
\E [ \theta | X] = \mu_\theta + \reg^T(\theta | X) (X - \E [X]).
\]
\end{lem}
The right side is the regression of $\theta$ on the vector $X$, so it is the best \emph{linear} prediction of $\theta$ from $X$. The left side is the conditional expectation of $\theta$ given $X$, so it is the best prediction of $\theta$ from $X$, without any linearity constraint. The elliptical family is precisely the class of distributions for which this conditional expectation is linear; for a formal statement of this characterization, see \cref{sec:elliptical_distributions}.
Returning to the model, let $\beta$ and $\beta_0$ denote the coefficients from regressing $\theta$ on $\eta$:
\begin{equation} \label{eq:beta_def}
\beta = \Sigma_{\eta \eta}^{-1} \Sigma_{\eta \theta},
\qquad
\beta_0 = \mu_\theta - \beta^T \mu_\eta.
\end{equation}
By \cref{res:LCE}, we have $\E [ \theta | \eta] = \beta_0 + \beta^T \eta$. These coefficients are useful benchmarks against which to compare the linear equilibria constructed below.
\subsection{Linear equilibrium characterization}
In the signaling game, (pure) strategies are defined as follows. Let $T$ denote the support of $(\eta, \gamma)$. This set $T$ is the sender's type space. A distortion strategy for the sender is a map
\[
d \colon T \to \mathbf{R}^k,
\]
which assigns a distortion vector to each sender type. A decision strategy for the receiver is a map
\[
y \colon \mathbf{R}^k \to \mathbf{R},
\]
which assigns a decision to each feature vector of the sender.
I focus on Bayesian Nash equilibria in linear strategies, which I call \emph{linear equilibria}.%
\footnote{Technically, these functions are affine, but I use the more common term linear throughout. To be sure, nonlinear deviations are feasible, but in equilibrium they are not optimal.}
Restricting to linear equilibria disciplines the receiver's off-path decisions. In particular, linearity rules out discontinuous jumps in the receiver's decisions off-path, which are permitted, for example, by perfect Bayesian equilibrium.%
\footnote{With Gaussian uncertainty, linear strategies have full support so nothing is off-path. To avoid negative cost functions, I follow \cite{FrankelKartik2019} in using elliptical distributions with compact support. One consequence is that linear strategies do not have full support. With this modeling choice, imposing weak Bayesian equilibrium means that some type knows she is the lowest type and hence would never choose costly distortion. This low probability event makes belief-updating intractable.}
Suppose that the receiver uses the linear strategy
\[
y(x) = b_0 + b^T x,
\]
for some intercept $b_0$ and some coefficient vector $b = (b_1, \ldots, b_k)$. Plugging this strategy into the sender's utility gives
\[
b_0 + b^T (\eta + d) - (1/2) \sum_{i =1}^{k} d_i^2 /\gamma_i.
\]
Taking the receiver's linear strategy as given, the sender's utility is additively separable in $d_1, \ldots, d_k$. The marginal benefit of distorting feature $i$ is the sensitivity $b_i$ of the receiver's decision to feature $i$. The marginal cost of distorting feature $i$ is $d_i/\gamma_i$. Equating these expressions gives
\[
d_i (\eta, \gamma) = b_i \gamma_i.
\]
On each feature $i$, the sender's best response is increasing in her distortion ability and in the sensitivity of the receiver's decision to feature $i$. Because the receiver uses a linear strategy, the return to distortion is constant. Therefore, the sender's distortion best response does not depend on her intrinsic level.
Denoting the componentwise (Hadamard) product of vectors with the symbol $\circ$, the full best response is
\[
d(\eta, \gamma) = b \circ \gamma.
\]
The sender's best response induces the feature vector $\eta + b \circ \gamma$. The equilibrium condition is
\begin{equation} \label{eq:signaling_CE}
b_0 + b^T ( \eta + b \circ \gamma) = \E [ \theta | \eta + b \circ \gamma ].
\end{equation}
The left side is the receiver's linear strategy, evaluated at the feature vector $\eta + b \circ \gamma$. The right side is the receiver's posterior expectation of $\theta$ upon seeing the sender's feature vector. This equality between random variables must hold almost surely. By the linear conditional expectation property (\cref{res:LCE}), the conditional expectation on the right side equals the population regression of $\theta$ on the random feature vector $\eta + b \circ \gamma$. Therefore, this equality holds if and only if these regression coefficients coincide with the corresponding coefficients on the left side.%
\footnote{The \emph{only if} direction holds because, for all vectors $b$, the variance of $\eta + b \circ \gamma$ has full rank.}
Taking expectations of each side of \eqref{eq:signaling_CE} gives
\[
b_0 + b^T (\mu_\eta + b \circ \mu_\gamma) = \mu_\theta.
\]
The intercept $b_0$ is pinned down by the vector $b$. Hereafter I refer to linear equilibria by the coefficient vector $b$ alone, with the understanding that $b_0$ is chosen to satisfy this equation. The equilibrium condition for the vector $b$ is
\begin{equation*}
b = \reg ( \theta | \eta + b \circ \gamma),
\end{equation*}
which can be expressed equivalently in terms of the covariance as
\begin{equation} \label{eq:signaling_cov}
\var (\eta + b \circ \gamma) b = \cov ( \eta + b \circ \gamma, \theta).
\end{equation}
Both sides of this equation are $k$-vectors. This is a system of $k$ equations in the $k$ unknowns $b_1, \ldots, b_k$. Each equation is cubic in the unknowns, with coefficients determined by the covariance matrix $\Sigma$.%
\footnote{Let $\diag (b)$ denote the diagonal matrix with the vector $b$ along the diagonal. After rewriting the Hadamard product $b \circ \gamma$ as $\diag (b) \gamma$, the full equation becomes
\[
\bigl[ \Sigma_{\eta \eta} + \Sigma_{\eta \gamma} \diag (b) + \diag (b) \Sigma_{\gamma \eta} + \diag(b) \Sigma_{\gamma\g} \diag (b) \bigr] b = \Sigma_{\eta \theta} + b \circ \Sigma_{\gamma \theta}.
\]
}
The cubic degree of this system highlights the challenge of analyzing this sequential-move game. In a simultaneous-move game with quadratic utility functions, best responses are linear and the coefficients of a linear equilibrium are characterized by a linear system \citep{MorrisShin2002,LambertMartiniOstrovsky2018WP}. In my model, the receiver makes an inference from the sender's endogenous features, not an exogenous signal. The feedback from the receiver's strategy to the distribution of this signal raises the degree of the equilibrium conditions.
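To see the cubic system in action, here is a small numerical sketch (hypothetical covariance blocks, with the cross blocks $\Sigma_{\eta \gamma}$ and $\Sigma_{\gamma \theta}$ set to zero for simplicity; numpy assumed): a damped fixed-point iteration on the best-response map converges to a coefficient vector satisfying the equilibrium condition.

```python
import numpy as np

# Hypothetical covariance blocks with k = 2 features; the cross blocks
# Sigma_{eta gamma} and Sigma_{gamma theta} are set to zero for simplicity.
S_ee = np.array([[2.0, 0.5], [0.5, 1.0]])   # Sigma_{eta eta}
S_gg = np.array([[1.0, 0.3], [0.3, 1.0]])   # Sigma_{gamma gamma}
S_et = np.array([1.0, 0.5])                  # Sigma_{eta theta}

def best_response(b):
    # reg(theta | eta + b o gamma): Var(x) = S_ee + diag(b) S_gg diag(b)
    D = np.diag(b)
    return np.linalg.solve(S_ee + D @ S_gg @ D, S_et)

# Damped fixed-point iteration on b = BR(b)
b = np.zeros(2)
for _ in range(2000):
    b = 0.5 * b + 0.5 * best_response(b)

# The limit solves the cubic system Var(x) b = Cov(x, theta)
D = np.diag(b)
residual = (S_ee + D @ S_gg @ D) @ b - S_et
print(b, residual)
```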
\subsection{Homogeneous distortion} \label{sec:homogeneous_distortion}
I first analyze the special case in which the distortion vector $\gamma$ is nonrandom, that is, $\gamma = \mu_\gamma$. In this case, the sender's private information is only her intrinsic type $\eta$.
\begin{prop}[Separating equilibrium] \label{res:separating}
If the distortion vector $\gamma$ is nonrandom, then the signaling game has exactly one linear equilibrium. This equilibrium is fully separating.
\end{prop}
This proposition follows directly from the linear equilibrium condition \eqref{eq:signaling_cov}. With $\gamma$ nonrandom, \eqref{eq:signaling_cov} reduces to the linear system
\[
\var(\eta) b = \cov(\eta, \theta).
\]
In terms of the regression coefficients $\beta$ and $\beta_0$ defined in \eqref{eq:beta_def}, it follows that
\[
b = \beta,
\qquad
b_0 = \beta_0 - \beta^T ( \beta \circ \mu_\gamma).
\]
The receiver uses the same coefficient vector on the sender's features as he would if he could directly observe the sender's intrinsic level. The sender chooses the distortion vector $\beta \circ \mu_\gamma$, so the receiver subtracts this from the feature vector, and no information is lost. The receiver's decision coincides with the conditional expectation $\E [ \theta | \eta]$, which in this case equals $\E [ \theta | \eta, \gamma]$.
With the intrinsic level as the only private information, the signaling equilibrium perfectly reveals the sender's intrinsic levels to the receiver, so there is no role for the intermediary. Returning to the FICO credit scoring example, if every consumer distorts her features by the same amount, this could be easily corrected by subtracting a constant from everyone's credit score. But in reality, different consumers experience different costs and benefits from distortion. I turn to this general case next.
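The algebra behind \cref{res:separating} can be checked numerically (hypothetical parameter values; numpy assumed): with $\gamma$ nonrandom, the receiver's linear decision reproduces $\E[\theta \mid \eta]$ exactly, for every realization of the intrinsic type.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters with k = 2 features
mu_theta = 0.0
mu_eta = np.array([1.0, -1.0])
S_ee = np.array([[2.0, 1.0], [1.0, 2.0]])   # Sigma_{eta eta}
S_et = np.array([4.0, 1.0])                  # Sigma_{eta theta}
mu_gamma = np.array([0.5, 2.0])              # nonrandom distortion abilities

beta = np.linalg.solve(S_ee, S_et)           # reg(theta | eta)
beta0 = mu_theta - beta @ mu_eta

# Equilibrium intercept subtracts the known distortion beta o mu_gamma
b0 = beta0 - beta @ (beta * mu_gamma)

# For every intrinsic type, the decision equals E[theta | eta]
eta = mu_eta + rng.standard_normal((1000, 2)) @ np.linalg.cholesky(S_ee).T
x = eta + beta * mu_gamma                    # sender plays d = beta o mu_gamma
decision = b0 + x @ beta
posterior = beta0 + eta @ beta
print(np.max(np.abs(decision - posterior)))
```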
\subsection{Equilibrium existence and uniqueness} \label{sec:existence_uniqueness}
With the distortion vector $\gamma$ nonconstant, the equilibrium condition \eqref{eq:signaling_cov} is cubic, not linear. Specifically, the linear equilibria are characterized by a cubic polynomial system of $k$ equations in $k$ unknowns. Generically, such a system has at most $3^k$ real solutions, but in general these systems are difficult to analyze.%
\footnote{For a system of $k$ polynomial equations in $k$ unknowns with generic coefficients, there are finitely many complex solutions. Whenever there are finitely many complex solutions, Bezout's theorem states that the number of complex solutions is at most the product of the degrees of the equations \cite[Theorem 5.5, p.~115]{CoxLittleOshea2005}. Of course, every real solution is a complex solution, so if there are finitely many linear equilibria, then there are at most $3^k$ equilibria. This upper bound is achieved if each equation is a univariate cubic polynomial with three real roots.} Instead of analyzing this system algebraically, I take an analytic approach.
Unless specified otherwise, I make the following covariance assumptions for the rest of the paper.
\begin{ass*}[Covariance] \hfill
\vspace{-\baselineskip}
\begin{enumerate}[label = \Alph*., ref = \Alph*]
\item \label{it:uncorrelated} $\gamma$ and $(\theta, \eta)$ are uncorrelated.\footnote{Two random variables $W$ and $Z$ are uncorrelated if $\E [ W Z^T] = \E[W] \E [Z]^T$, or equivalently, the componentwise covariances $\cov(W_i, Z_j)$ are zero for all $i$ and $j$.}
\item \label{it:covariance} $\cov(\gamma_i, \gamma_j) \geq 0$ for all features $i,j$.
\end{enumerate}
\end{ass*}
Assumption~\ref{it:uncorrelated} says that the distortion ability is not informative about the characteristic of interest or the intrinsic levels. This stylized assumption substantially simplifies the analysis because all the cross covariance terms vanish. But the qualitative conclusions go through more generally; see \cref{sec:correlated_distortion}.
Assumption~\ref{it:covariance} says that the sender's distortion ability on different features is nonnegatively correlated. In the sender's utility function, the distortion ability $\gamma_i$ parameterizes the sender's cost of distortion. Through a change of variables, this ability can also be interpreted as measuring the intensity of the sender's preferences for a high decision. Suppose that
\[
u_S = \gamma_0 y - (1/2) \sum_{i=1}^{k} d_i^2 / \bar{\gamma}_i,
\]
where each $\bar{\gamma}_{i}$ is a fixed constant and $\gamma_0$ is a nonnegative random variable. Dividing by $\gamma_0$ does not change the sender's preferences, and yields a utility function that fits the baseline model with $\gamma_i = \gamma_0 \bar{\gamma}_i$. Immediately,
\[
\cov ( \gamma_i, \gamma_j) = \var(\gamma_0) \bar{\gamma}_i \bar{\gamma}_j \geq 0,
\]
so Assumption~\ref{it:covariance} is automatically satisfied. More generally, the distortion ability can reflect heterogeneity in the sender's cost of distortion and the sender's preference for high decisions.
With these standing assumptions, I obtain the main result for the signaling game.
\begin{thm}[Existence and uniqueness] \label{res:existence_uniqueness}
The signaling game has exactly one linear equilibrium.
\end{thm}
For existence, I use only the nondegeneracy assumption on the conditional variance $\var(\eta|\gamma)$. Assumptions \ref{it:uncorrelated} and \ref{it:covariance} are not required. Define the best-response function $\BR \colon \mathbf{R}^k \to \mathbf{R}^k$ by
\[
\BR (b) = \reg ( \theta | \eta + b \circ \gamma).
\]
This function is continuous, but the strategy sets are not bounded. To apply a fixed-point theorem, I show that the function $\BR$ has bounded image. The key observation is that for every $b$, the variance matrix $\var(\eta + b \circ \gamma)$ is uniformly bounded away from $0$ in the positive semidefinite matrix order.%
\footnote{For two symmetric matrices $A$ and $B$, we have $A \succeq B$ if and only if the difference $A - B$ is positive semidefinite.}
This gives a uniform bound on the receiver's best-response vector: because the variance of the feature vector is bounded below, the regression coefficients cannot be too large. With this bound, I apply Brouwer's fixed point theorem.
Uniqueness is more subtle. In the single-dimensional case, the linear equilibria are the roots of a univariate cubic polynomial. In general, there can be up to three real roots, and without the covariance assumptions there can indeed be three equilibria. With $k$ features, the equilibrium system can in general have up to $3^k$ roots. But under the covariance assumptions, the equilibrium is unique. The idea is to express the equilibrium condition as the stationarity condition of a strictly convex function $\Phi$. Define the function $\Phi$ by
\[
\Phi(z) = \var( z^T \eta - \theta) + (1/2) \var( (z \circ z)^T \gamma).
\]
There is an equilibrium at $b$ if and only if $\nabla \Phi(b) = 0$. The convexity of this function expresses a simple property. Consider a strategy profile in which the sender best responds to a receiver coefficient vector $b$. As $b$ moves away from the equilibrium $b^\star$ in any direction, the receiver's marginal benefit from adjusting his strategy back toward $b^\star$ increases. I illustrate this function in the following single-feature example.
\begin{exmp}[Single feature] \label{ex:single_feature}
Suppose $k = 1$. Now $\eta$, $\gamma$, and $b$ are scalars instead of vectors. If the receiver uses a linear decision strategy with slope $b$, the sender's distortion best response is $d(\eta, \gamma) = b \gamma$. The sender's feature equals $\eta + b \gamma$, so
\[
\BR(b) = \frac{ \sigma_{\theta \eta}}{\sigma_{\eta}^2 + b^2 \sigma_\gamma^2}.
\]
For simplicity, suppose $\theta = \eta$ and all variances equal $1$. Then $\BR(b) = 1 / (1 + b^2)$. \cref{fig:potential_function} plots this best-response function against the 45-degree line. The best response achieves its maximum of $1$ at $b = 0$. As $b$ increases, the magnitude of the sender's distortion increases, so the sender's features become a noisier signal of her intrinsic level, and thus the receiver's best response is less sensitive to the sender's feature. Equilibrium is given by the condition $b = \BR(b)$. There is exactly one equilibrium, namely the unique real root of the cubic polynomial $b^3 + b - 1$. Thus, $b^\star \approx 0.68$.
\end{exmp}
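The example's computations can be verified in a few lines (numpy assumed): the equilibrium is the unique real root of $b^3 + b - 1$, and it is also the stationary point of $\Phi(b) = (b-1)^2 + b^4/2$, which is the potential function specialized to these normalizations.

```python
import numpy as np

def BR(b):
    # Best response with theta = eta and unit variances
    return 1.0 / (1.0 + b ** 2)

# Equilibrium: the unique real root of b^3 + b - 1 = 0
roots = np.roots([1.0, 0.0, 1.0, -1.0])
b_star = float(min(roots, key=lambda r: abs(r.imag)).real)

def dPhi(b):
    # Derivative of Phi(b) = (b - 1)^2 + b^4 / 2
    return 2.0 * (b - 1.0) + 2.0 * b ** 3

print(b_star, BR(b_star), dPhi(b_star))
```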
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[
clip = false,
axis lines = center,
scale = 1.1,
xmin = -0.2,
xmax = 1.5,
ymin = -0.2,
ymax = 1.5,
xtick = {0.68233, 1},
xticklabels = {$b^\star$, 1},
ytick = {1},
]
\addplot [Blue, thick, domain = -0.2:1.5, samples = 200] {1/(1 + x^2)} node [pos = 1, anchor = west] {$\operatorname{BR} (b)$};
\addplot [smooth, samples = 200] coordinates {(-0.2,-0.2) (1.5,1.5)} node [pos = 1, anchor = south west] {$b$};
\addplot [Orange, thick, dashed, domain = -0.2:1.3, smooth] {0.5*x^4 + x^2 - 2*x + 1} node [pos = 1, anchor = north west] {$\Phi(b)$};
\addplot [dotted, thick, gray] coordinates {(0.68233,0) (0.68233, 0.68233)};
\fill [Blue] (0.68233, 0.68233) circle (2pt);
\fill [Orange] (0.68233, 0.5*0.68233^4 + 0.68233^2 - 2*0.68233 + 1) circle (2pt);
\end{axis}
\end{tikzpicture}
\caption{Equilibrium and convex representation}
\label{fig:potential_function}
\end{figure}
How could this equilibrium arise? The intuition is that the receiver starts from some linear decision strategy. Then the sender adjusts her strategy in response, the receiver adjusts his strategy in response, and so on. As long as one player is playing a linear strategy, the best response of the other player is also linear. In \cref{sec:equilibrium_stability}, I show that continuous best-response dynamics converge to the unique signaling equilibrium.
\subsection{Information loss from distortion}
Now I study the comparative statics as the sender cares more about the receiver's decision. Suppose that the sender's utility equals
\[
s y - (1/2) \sum_{i = 1}^{k} d_i^2/\gamma_i,
\]
where $s > 0$. The parameter $s$ controls the weight that the sender attaches to the receiver's decision. Without changing the sender's preferences, we can divide by $s$ to get
\[
y - (1/2) \sum_{i =1}^{k} d_i^2 / ( s \gamma_i).
\]
This is the utility function from the baseline model, except that the sender's distortion type is $s \gamma$. In the variance matrix $\Sigma$, the submatrix $\Sigma_{\gamma\g}$ is scaled up by $s^2$ and all other components are unchanged. By Assumption~\ref{it:uncorrelated}, the covariances between $\gamma$ and $(\theta, \eta)$ are already zero, so they do not change. Therefore, these comparative statics can be captured equivalently by scaling up the variance matrix $\Sigma_{\gamma\g}$ by $s^2$. I study how this stakes parameter $s$ affects the receiver's equilibrium utility---his utility in the unique linear equilibrium.
\begin{prop}[Information loss] \label{res:info_loss}
Fix a positive definite matrix\/ $\bar{\Sigma}_{\gamma\g}$, and let $\Sigma_{\gamma \gamma}(s) = s^2 \bar{\Sigma}_{\gamma\g}$ for $s > 0$.
The receiver's equilibrium utility is strictly decreasing in $s$.
\end{prop}
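A one-feature illustration of \cref{res:info_loss} (with $\theta = \eta$ and unit base variances, a hypothetical normalization; numpy assumed): the equilibrium coefficient solves $s^2 b^3 + b = 1$, and the receiver's equilibrium utility $b - 1$ strictly decreases in $s$.

```python
import numpy as np

def equilibrium_b(s):
    # Unique real root of s^2 b^3 + b - 1 = 0
    roots = np.roots([s ** 2, 0.0, 1.0, -1.0])
    return float(min(roots, key=lambda r: abs(r.imag)).real)

def receiver_utility(s):
    # -E(y - theta)^2 = -(var(theta) - b cov(x, theta)) = b - 1
    return equilibrium_b(s) - 1.0

# Utility falls as the stakes parameter s grows
utils = [receiver_utility(s) for s in (0.5, 1.0, 2.0, 4.0)]
print(utils)
```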
As $s$ increases, the sender's equilibrium feature vector becomes less informative about the sender's latent characteristic $\theta$. For a given decision strategy of the receiver, it is clear that as $s$ increases, the sender distorts her features by more and hence they become less informative. But the receiver's equilibrium strategy also changes with $s$, so this conclusion is more subtle. A natural conjecture is that the equilibrium utility decreases whenever the variance matrix $\Sigma_{\gamma\g}$ strictly increases in the positive semidefinite cone order, but this does not hold, as illustrated by the following counterexample.
\begin{exmp}[Equilibrium payoff not monotone in $\Sigma_{\gamma\g}$]
Suppose $k = 2$. Consider the covariance matrices
\[
\Sigma_{\theta \eta}
=
\begin{bmatrix}
4 \\ 1
\end{bmatrix},
\qquad
\Sigma_{\eta\h}
=
\begin{bmatrix}
2 & 1 \\
1 & 2
\end{bmatrix},
\qquad
\Sigma_{\gamma\g}
=
\begin{bmatrix}
1 & 0 \\
0 & 1
\end{bmatrix}.
\]
Set $\sigma_{\theta}^2 = 9$. This variance does not affect the analysis, as long as the induced variance matrix is positive semidefinite. With these parameters, the regression coefficient $\beta$ equals $(7/3, -2/3)$. Even though $\eta_1$ and $\eta_2$ are both positively correlated with the characteristic $\theta$, the component $\eta_1$ is much more strongly correlated, so the regression coefficient on $\eta_2$ is negative. The signaling equilibrium is $b^\star = (1.20,-0.10)$. As $\var(\gamma_2)$ increases, both components of $b^\star$ shrink in magnitude, and the receiver's payoff increases, though the effect is very small.\footnote{Increasing $\var(\gamma_2)$ from $1$ to $2$ shifts $b^\star$ from $(1.1951, -0.0971)$ to $(1.1950, -0.0966)$. The receiver's utility increases from $-4.3167$ to $-4.3165$.}
\end{exmp}
This result extends the single-dimensional result in \cite{FrankelKartik2019}. With a single feature, the scaling and the positive semidefinite cone order coincide. In multiple dimensions this is no longer the case, and my result clarifies that it is scaling the objective, not just increasing the variance matrix, that reduces equilibrium information transmission.
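The numbers in the example above can be reproduced with a short numerical check (an illustrative sketch, assuming NumPy; the damped fixed-point iteration is one convenient way to solve the equilibrium system $(\Sigma_{\eta\eta} + \diag(b) \Sigma_{\gamma\g} \diag(b)) b = \Sigma_{\eta\theta}$, not a method from the paper):

```python
import numpy as np

Sig_ee = np.array([[2.0, 1.0], [1.0, 2.0]])   # var(eta)
Sig_et = np.array([4.0, 1.0])                  # cov(eta, theta)
sig_th2 = 9.0                                  # var(theta)

beta = np.linalg.solve(Sig_ee, Sig_et)         # regression coefficient (7/3, -2/3)

def signaling_eq(Sig_gg, iters=500):
    # damped fixed-point iteration on (Sig_ee + diag(b) Sig_gg diag(b)) b = Sig_et
    b = beta.copy()
    for _ in range(iters):
        M = Sig_ee + np.diag(b) @ Sig_gg @ np.diag(b)
        b = 0.5 * b + 0.5 * np.linalg.solve(M, Sig_et)
    return b

def receiver_utility(b, Sig_gg):
    # -var(b'(eta + b o gamma) - theta)
    q = b * b
    return -(b @ Sig_ee @ b - 2 * b @ Sig_et + sig_th2 + q @ Sig_gg @ q)

b1 = signaling_eq(np.diag([1.0, 1.0]))         # var(gamma_2) = 1
b2 = signaling_eq(np.diag([1.0, 2.0]))         # var(gamma_2) = 2
u1 = receiver_utility(b1, np.diag([1.0, 1.0]))
u2 = receiver_utility(b2, np.diag([1.0, 2.0]))
```

The check confirms $\beta = (7/3, -2/3)$ and the footnote's values, including the surprising direction: the receiver's utility rises slightly as $\var(\gamma_2)$ increases from $1$ to $2$.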
\begin{comment}
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[
group style = {group size = 2 by 1, horizontal sep = 1.5cm},
axis lines = center,
axis equal image,
xtick = \empty,
ytick = \empty,
xmin = -1,
xmax = 1,
ymin = -1,
ymax = 1,
]
\draw [thick, rotate around={-45: (0.25, 0.25)}] (0.25,0.25) ellipse (0.15 and 0.3);
\draw [thick, rotate around={-45: (0.25, 0.25)}] (0.25,0.25) circle (0.3 and 0.6);
\fill (0.25,0.25) circle [radius = 1pt];
\draw [Blue, thick, ->, > = latex] (0.25,0.25) -- (0.75, -0.1);
\end{axis}
\end{tikzpicture}
\caption{Elliptical distribution}
\label{fig:elliptical}
\end{figure}
\begin{nm}[Covariance] $\cov (\eta_j, \theta) \geq 0$ for each feature $j$.
\end{nm}
This covariance inequality is called a normalization, rather than an assumption, because it entails no loss of generality. If there is some feature $i$ for which $\cov ( \eta_i, \theta) < 0$, then redefine feature $i$ as its negative. This means that we negate $x_i$, $\eta_i$, $d_i$, and $b_i$. After this change, decisions, scores, and payoffs are unchanged.
\end{comment}
\section{Scoring} \label{sec:scoring}
Now I return to the main model with the intermediary. To simplify the intermediary's problem, I first establish a revelation principle: There is no loss in restricting to direct recommendation scores. With this result, I characterize the linear outcome rules that the intermediary can induce. Then I maximize the intermediary's objective over this set.
\subsection{Revelation principle and obedience}
Each scoring rule $f \colon \mathbf{R}^k \to \Delta(S)$ induces a different game between the sender and the receiver. In order to state the revelation principle in full generality, I allow for mixed strategies in this game. A distortion strategy for the sender is a map $d \colon T \to \Delta(\mathbf{R}^k)$. A decision strategy for the receiver is a map $y \colon S \to \Delta(\mathbf{R})$, which assigns to each score a distribution over decisions. The solution concept is Bayesian Nash equilibrium, just as in the signaling setting without the intermediary. But the revelation principle would hold for other solution concepts as well.
Together a scoring rule $f$ and a strategy profile $(d,y)$ in the resulting game induce an \emph{outcome rule} $\sigma = (\sigma_S, \sigma_R)$, with $\sigma_S = d$ and $\sigma_R = y \circ f$, where the composition of functions is extended in the natural way to the composition of mixed strategies. An outcome rule $\sigma$ is \emph{implementable} if there exists a scoring rule $f$ and a Bayesian Nash equilibrium $(d,y)$ in the associated game that induces $\sigma$. An outcome rule is \emph{directly implementable} if it can be implemented by a scoring rule with $S = \mathbf{R}$ and a Bayesian Nash equilibrium with $y = \id$. In words, direct implementation means that the intermediary makes a decision recommendation, and the receiver follows this recommendation.
\begin{prop}[Revelation principle] \label{res:revelation_principle}
Every implementable outcome rule is directly implementable.
\end{prop}
The intuition is that the intermediary pools all the scores inducing the same decision into a single score, which is then renamed to be the decision that it induces. This way, the intermediary gives the receiver the minimal information needed to take his decision. The intuition resembles the revelation principle from \cite{Myerson1986}, but the formal details are different because the intermediary in my model observes the sender's features (which are payoff-relevant actions) rather than cheap-talk messages.%
\footnote{With cheap-talk messages, the mediator can without loss restrict the sender to a fixed set of messages. The mediator distinguishes one default message from the message set and commits to treat any message outside the message space as if the sender had sent that default message. By sending a message outside the message space, the sender cannot do better than sending the default message. The same approach does not work in my model because different distortion vectors give the sender different payoffs, even if they result in the same decision by the receiver. \cite{DovalEly2016wp} and \cite{MakrisRenou2018wp} establish a revelation principle for dynamic games in which the designer can observe past actions.}
With the revelation principle, I can focus my analysis directly on the outcome rule $\sigma$. An outcome rule $\sigma$ is directly implementable if it satisfies the following obedience conditions. The sender's obedience condition is simply her equilibrium condition for the strategy profile $(\sigma_S, \sigma_R)$. For the receiver, the obedience condition is
\[
\sigma_R(\eta + \sigma_S (\eta, \gamma)) = \E \bigl[ \theta | \sigma_R(\eta + \sigma_S(\eta, \gamma)) \bigr].
\]
Here, both sides are viewed as random variables. The randomness comes from the random variable $(\theta, \eta, \gamma)$ and also from the mixed decision rules.
\subsection{Linear outcome rules} \label{sec:linear_outcome_rules}
\begin{comment}
In practice, linear scoring rules are common.
\item robustness/overfitting: not detail-dependent; if distribution is within this class, higher moments do not matter
\item strategic simplicity: running regressions
\item interpretation: equilibrium depends on moments, so possible to do comparative statics in terms of low-dimensional parametrization
\item explainability of algorithms
footnote{There are other reasons to consider consider lieear scores. This also ensures that the importance of differene features is independent of an agents' features, so this is a fairness benefit. It also gives a low dimensional representation, so it is less detail-dependent. This also makes comparisons across scoring rules more transparent.}
\end{comment}
For the rest of the analysis, I restrict to linear decision rules. This allows me to isolate the effect of commitment when I compare with the signaling setting. Thus,
\[
\sigma_R (x) = b_0 + b^T x,
\]
for some intercept $b_0 \in \mathbf{R}$ and a coefficient vector $b \in \mathbf{R}^k$. As in the signaling game, the sender's best response is $\sigma_S (\eta, \gamma) = b \circ \gamma$. The receiver's obedience condition, however, is much more permissive than the receiver's best response condition in the signaling game. The receiver's decision must match his conditional expectation of $\theta$, given the intermediary's decision recommendation:
\begin{equation} \label{eq:scoring_CE}
b_0 + b^T ( \eta + b \circ \gamma) = \E [ \theta | b_0 + b^T ( \eta + b \circ \gamma)].
\end{equation}
By the linear conditional expectations property (\cref{res:LCE}), this reduces to the regression system. The intercept $b_0$ on the right side does not affect the conditional expectation, so $b_0$ is pinned down by $b$:
\begin{equation*}
b_0 = \mu_\theta - b^T ( \mu_\eta + b \circ \mu_\gamma).
\end{equation*}
The coefficient vector $b$ must satisfy a single regression equation:
\begin{equation*}
1 = \reg \bigl( \theta | b^T ( \eta + b \circ \gamma) \bigr),
\end{equation*}
or, in terms of covariances,
\begin{equation} \label{eq:scoring_cov}
\var \bigl( b^T (\eta + b \circ \gamma) \bigr) = \cov \bigl( b^T (\eta + b \circ \gamma), \theta \bigr).
\end{equation}
This condition says that if the receiver regresses $\theta$ on the score, the regression coefficient is $1$. This is of course a necessary condition; otherwise the receiver would have a profitable linear deviation. By the linear conditional expectations property (\cref{res:LCE}), the receiver's best response is linear, so if he has no profitable linear deviation, he has no profitable deviation at all.
This is a single quartic polynomial equation in the coefficients $b_1 , \ldots, b_k$. Compare this scalar equality with the vector equality from the signaling equilibrium
\[
\var( \eta + b \circ \gamma ) b = \cov ( \eta + b \circ \gamma, \theta).
\]
If $b$ satisfies the equilibrium condition, then it automatically satisfies the scoring condition. As long as there are multiple features ($k \geq 2$), the scoring condition is more permissive. If there is a single feature ($k = 1$), then the signaling and scoring conditions are the same, except that scoring always permits $b = 0$. In essence, the intermediary can always provide no information about the sender's features, in which case the receiver matches his decision with his prior expectation $\mu_\theta$. Thus, multiple features are crucial, and this is what distinguishes my setting from the single-dimensional setting of \cite{FrankelKartik2019,FrankelKartik2019WP}.
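The single-feature comparison can be made concrete with a small check (an illustrative sketch, assuming NumPy, with the hypothetical parameters $\var(\eta) = 2$, $\var(\gamma) = 1$, $\cov(\eta, \theta) = 1$). The scoring condition factors as $b \bigl( b(2 + b^2) - 1 \bigr) = 0$, so its solutions are exactly $b = 0$ together with the signaling root of $b^3 + 2b = 1$:

```python
import numpy as np

# Hypothetical single-feature parameters: var(eta) = 2, var(gamma) = 1,
# cov(eta, theta) = 1 (e.g., theta and eps both with unit variance).
def scoring_residual(b):
    # var(b (eta + b gamma)) - cov(b (eta + b gamma), theta) = b^2 (2 + b^2) - b
    return b**2 * (2.0 + b**2) - b

# signaling root: b (2 + b^2) = 1, i.e., the unique real root of b^3 + 2 b = 1
roots = np.roots([1.0, 0.0, 2.0, -1.0])
b_signal = roots[np.isreal(roots)].real
b_signal = b_signal[b_signal > 0][0]
```

Both $b = 0$ and the signaling root make the scoring residual vanish, illustrating that scoring nests signaling and also permits full pooling.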
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[
axis lines = center,
axis equal image,
scale = 1.1,
xtick = \empty,
ytick = \empty,
xmin = -0.1,
xmax = 1.1,
ymin = -0.1,
ymax = 1.1,
xlabel = {$x_1$},
ylabel = {$x_2$},
x label style={at={(current axis.right of origin)},anchor=west},
y label style={at={(current axis.above origin)},anchor=south},
]
\addplot [gray] {-2*x + 0.63};
\addplot [gray] {-2*x + 0.63 + 0.348};
\addplot [gray] {-2*x + 0.63 + 2*0.348};
\addplot [gray] {-2*x + 0.63 + 3*0.348};
\addplot [gray] {-2*x + 0.63 + 4*0.348};
\addplot [gray] {-2*x + 0.63 + 5*0.348};
\draw [Blue, very thick, ->, > = latex] (0.5 + 0.348/2, -1 + 0.63 + 3*0.348) -- (0.5 + 0.348/2 + 0.3, -1 + 0.63 + 3*0.348 + 0.15);
\draw (0.5 + 0.348/2 + 0.3, -1 + 0.63 + 3*0.348 + 0.15) node [below = 0.1cm, Blue] {$b$};
\end{axis}
\end{tikzpicture}
\caption{Level curves of scoring function}
\label{fig:coarsening}
\end{figure}
\cref{fig:coarsening} plots the level curves in feature space of the intermediary's scoring rule. For illustration, I take $k = 2$. The contour lines are equally spaced parallel lines, orthogonal to the coefficient vector $b$. In general, for $k > 2$, the level sets are parallel hyperplanes orthogonal to the vector $b$. Upon seeing the intermediary's score, the receiver learns which level set the sender's feature vector lies in, but he cannot pinpoint the sender's exact feature vector. Based on this information, the receiver updates his belief about the sender's characteristic and takes a decision.
\subsection{A simple setting} \label{sec:simple_setting}
Consider the following simple setting, which I refer to throughout the rest of the paper. The characteristic $\theta$ has mean zero and unit variance. For each $i$,
\[
\eta_i = \theta + \varepsilon_i,
\]
and $\theta, \varepsilon_1, \ldots, \varepsilon_k, \gamma_1, \ldots, \gamma_k$ are uncorrelated. Denote the variances by $\sigma_{\varepsilon,i}^2 = \var(\varepsilon_i)$ and $\sigma_{\gamma,i}^2 = \var(\gamma_i)$ for each $i$.
\begin{comment}
As a benchmark, first suppose that there is no gaming, so that $X = \eta$. In the best scoring system, the coefficient vector is given by the projection
\[
\beta = \Sigma_{\eta \eta}^{-1} \Sigma_{\eta \theta},
\]
where
Simple algebra yields
\[
b_1 = \sigma_{\xi,2}^2 / \det \Sigma_{\eta \eta},
\qquad
b_2 = \sigma_{\xi,1}^2 / \det \Sigma_{\eta \eta},
\]
where
\[
\det \Sigma_{\eta \eta} = \sigma_{\xi,1}^2 +\sigma_{\xi,2}^2 + \sigma_{\xi,1}^2 \sigma_{\xi,2}^2.
\]
Naturally, more weight is placed on the more accurate feature. The sum of the weights is strictly less than $1$.
\end{comment}
\cref{fig:obedient_b} plots the set of obedient decision rules for $k = 2$, with $\sigma_{\varepsilon,1}^2 = \sigma_{\varepsilon,2}^2 = 1$ and $\sigma_{\gamma,1}^2 = 1.5$. The solid blue curve shows $\sigma_{\gamma,2}^2 = 1.5$ and the dashed orange curve shows $\sigma_{\gamma,2}^2 = 6$. If the intermediary recommends a decision using a rule strictly inside this curve, the receiver's best response is linear in the recommendation but with a coefficient strictly greater than $1$.%
\footnote{To see that these regions are nested, observe that the region inside the curve is given by the inequality
\[
b^T \Sigma_{\eta \eta} b + (b \circ b )^T \Sigma_{\gamma\g} (b \circ b) \leq b^T \Sigma_{\eta \theta}.
\]
As $\Sigma_{\gamma\g}$ decreases with respect to the positive semidefinite order, this inequality becomes more permissive.}
Conversely, if the intermediary recommends a decision using a rule strictly outside the curve, then the receiver's best response is linear in the recommendation, but with a coefficient strictly less than $1$ (and possibly negative). As the variance of the distortion ability $\gamma_2$ increases, the obedience constraint requires that for a given value of $b_1$, the coefficient $b_2$ shrinks towards $0$. Both curves pass through the origin, which corresponds to the intermediary providing no information to the receiver.
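In the symmetric case, these claims can be verified directly (a numerical sketch, assuming NumPy). Along the diagonal $b = t(1,1)$ with the solid curve's parameters, the obedience constraint reads $6 t^2 + 3 t^4 = 2t$ and the signaling equilibrium solves $1.5 t^3 + 3t = 1$; the equilibrium lies on the curve, points strictly inside give a negative residual (best-response coefficient above $1$), and points strictly outside give a positive residual:

```python
import numpy as np

Sig_ee = np.array([[2.0, 1.0], [1.0, 2.0]])   # var(eta): simple setting, var(eps_i) = 1
Sig_et = np.array([1.0, 1.0])                  # cov(eta, theta)
Sig_gg = np.diag([1.5, 1.5])                   # var(gamma): solid curve

def obedience_residual(b):
    # var(b'(eta + b o gamma)) - cov(b'(eta + b o gamma), theta)
    q = b * b
    return b @ Sig_ee @ b + q @ Sig_gg @ q - b @ Sig_et

# symmetric signaling equilibrium: b = t (1,1) with 1.5 t^3 + 3 t = 1
roots = np.roots([1.5, 0.0, 3.0, -1.0])
t = roots[np.isreal(roots)].real
t = t[t > 0][0]
b_star = np.array([t, t])
```

The residual's sign matches the inside/outside description in the text, and $t \approx 0.3174$ matches the point marked in \cref{fig:optimal_scoring}.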
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[
axis lines = center,
scale = 1.1,
axis equal image,
xtick = \empty,
ytick = \empty,
xmin = -0.2,
xmax = 0.55,
ymin = -0.2,
ymax = 0.55,
xlabel = {$b_1$},
ylabel = {$b_2$},
x label style={at={(current axis.right of origin)},anchor=west},
y label style={at={(current axis.above origin)},anchor=south},
]
\addplot [Blue, thick, smooth] table [col sep = comma, x index = 0, y index = 1] {scoring_A.tex};
\addplot [Orange, thick, dashed, smooth] table [col sep = comma, x index = 0, y index = 1] {scoring_B.tex};
\end{axis}
\end{tikzpicture}
\caption{Obedient coefficient vectors for two different parameter values}
\label{fig:obedient_b}
\end{figure}
\begin{comment}
\begin{itemize}
\item less information for the receiver means fewer opportunities to deviate, i.e., set of strategies available to the receiver is smaller
\item effectively constrained to selecting a coefficient vector that is aligned with the vector $b$, just chooses coefficient, but this is not quite true if there is noise
\item run regression on one coefficient instead of $k$
\end{itemize}
\end{comment}
\subsection{Optimal scoring} \label{sec:optimal_scoring}
Having characterized the set of feasible decision rules, I now optimize over this set with respect to the intermediary's objective. The scoring rules are parameterized by the coefficients $b_0$ and $b$, but the intercept $b_0$ is pinned down by $b$, so the set of obedient decision rules is parameterized by $b$ alone. The intermediary minimizes her expected loss
\[
\E \Bigl[ \bigl(b_0 + b^T (\eta + b \circ \gamma) - \theta \bigr)^2 \Bigr].
\]
The standard bias--variance decomposition gives
\[
\Bigl( \E \bigl[ b_0 + b^T (\eta + b \circ \gamma) - \theta \bigr] \Bigr)^2 + \var \bigl( b_0 + b^T (\eta + b \circ \gamma) - \theta \bigr).
\]
For obedient decision rules, the first term vanishes. In the second term, $b_0$ does not affect the variance, so it can be dropped, leaving an optimization over coefficient vectors $b$ in $\mathbf{R}^k$. The intermediary's problem is
\begin{equation} \label{eq:scoring_problem}
\begin{aligned}
& \text{minimize}
&& \var \bigl( b^T (\eta + b \circ \gamma) - \theta \bigr) \\
& \text{subject to }
&& \var \bigl( b^T(\eta + b \circ \gamma) \bigr)
=
\cov \bigl( b^T (\eta + b \circ \gamma), \theta \bigr).
\end{aligned}
\end{equation}
\begin{comment}
By Assumption \ref{it:uncorrelated}, this reduces to
\begin{equation} \label{eq:scoring_problem}
\begin{aligned}
& \text{minimize}
&& b^T \Sigma_{\eta \eta} b - 2 b^T \Sigma_{\eta \theta} + \sigma_{\theta}^2 \\
& \text{subject to }
&& b^T \Sigma_{\eta\h} b + (b \circ b)^T \Sigma_{\gamma \gamma} (b \circ b)
=
b^T \Sigma_{\eta \theta}.
\cov \bigl( b^T (\eta + b \circ \gamma), \theta \bigr).
\end{aligned}
\end{equation}
\end{comment}
The objective is convex, but the feasible set is not, so this is not a convex optimization problem.
\begin{prop}[Scoring uniqueness] \label{res:scoring_uniqueness}
The scoring problem has a unique solution.
\end{prop}
This allows for an unambiguous comparison between the signaling equilibrium vector, which I denote by $b_{\signal}$, and the scoring solution, denoted $b_{\score}$.
\begin{thm}[Scoring and signaling] \label{res:scoring_equals_screening}
For almost every value of $(\Sigma_{\eta \theta}, \Sigma_{\eta \eta}, \Sigma_{\gamma\g})$, we have $b_{\signal} \neq b_{\score}$.
\end{thm}
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[
scale = 1.1,
xtick = \empty,
ytick = \empty,
xmin = 0.25,
xmax = 0.36,
ymin = 0.25,
ymax = 0.36,
xlabel = {$b_1$},
ylabel = {$b_2$},
y label style={rotate = -90}
]
\addplot [Blue, thick, smooth] table [col sep = comma, x index = 0, y index = 1] {scoring_A.tex};
\addplot [Orange, thick, dashed, smooth] table [col sep = comma, x index = 0, y index = 1] {scoring_B.tex};
\fill (1/3,1/3) circle (2pt);
\addplot [dotted, gray, thick] coordinates {(0.2,0.2) (1/3,1/3)} node [anchor = west] {$\beta$};
\fill [Blue] (0.317353, 0.317353) circle (2pt) node [anchor = west, xshift = 5pt, yshift = -1pt] {$b_{\mathrm{signal}} = b_{\mathrm{score}}$};
\fill [Orange] (0.335693, 0.271869) circle (2pt) node [anchor = east, xshift = -2pt] {$b_{\mathrm{signal}}$};
\fill [Orange] (0.347478, 0.260708) circle (2pt) node [anchor = east, xshift = -2pt] {$b_{\mathrm{score}}$};
\end{axis}
\end{tikzpicture}
\caption{Optimal scoring rule and signaling equilibrium}
\label{fig:optimal_scoring}
\end{figure}
\cref{fig:optimal_scoring} compares the signaling equilibrium and the scoring solution for the example shown in \cref{fig:obedient_b}, with the same parameter choices as before---the solid blue curve is $(\sigma_{\gamma,1}^2, \sigma_{\gamma,2}^2) = (1.5, 1.5)$ and the dashed orange curve is $(\sigma_{\gamma,1}^2, \sigma_{\gamma,2}^2) = (1.5, 6)$. In the symmetric case, the signaling equilibrium vector is a scalar multiple of the regression coefficient $\beta$. In this case, the signaling equilibrium maximizes the receiver's utility over all obedient decision rules, so the scoring solution coincides with the signaling equilibrium.
The orange curve illustrates the generic case. Here $\sigma_{\gamma,2}^2 > \sigma_{\gamma,1}^2$, so in the signaling equilibrium the receiver's decision is less sensitive to feature $2$ than to feature $1$. In this case, the signaling equilibrium does not maximize the receiver's utility over all obedient decision rules, and the intermediary can improve the accuracy of the receiver's decision by sliding further away from the $45$-degree line, in order to put more weight on feature $1$ and less weight on feature $2$. In general, there is a local improvement away from the signaling equilibrium, along the curve of obedient decision rules, as long as the gradient of the receiver's objective is not orthogonal to the curve.
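The two marked points in \cref{fig:optimal_scoring} can be recomputed numerically (an illustrative sketch, assuming NumPy; the solution method here is the editor's, not the paper's). Substituting the obedience constraint into the objective reduces the intermediary's loss to $\sigma_\theta^2 - b^T \Sigma_{\eta\theta}$, so the scoring problem amounts to maximizing $b^T \Sigma_{\eta\theta}$ over the obedience surface. The code solves the signaling fixed point by damped iteration and the scoring problem by a polar-coordinate grid search:

```python
import numpy as np

Sig_ee = np.array([[2.0, 1.0], [1.0, 2.0]])   # var(eta), simple setting
Sig_et = np.array([1.0, 1.0])                  # cov(eta, theta)
Sig_gg = np.diag([1.5, 6.0])                   # var(gamma), dashed curve

# signaling equilibrium: (Sig_ee + diag(b) Sig_gg diag(b)) b = Sig_et
b_sig = np.linalg.solve(Sig_ee, Sig_et)
for _ in range(500):
    M = Sig_ee + np.diag(b_sig) @ Sig_gg @ np.diag(b_sig)
    b_sig = 0.5 * b_sig + 0.5 * np.linalg.solve(M, Sig_et)

# scoring: maximize b . Sig_et on the obedience surface, in polar coordinates
phi = np.linspace(1e-3, np.pi / 2 - 1e-3, 200001)
A = np.stack([np.cos(phi), np.sin(phi)], axis=1)   # unit directions
A2 = A * A
c1 = np.einsum('ij,jk,ik->i', A, Sig_ee, A)        # quadratic coefficient
c3 = np.einsum('ij,jk,ik->i', A2, Sig_gg, A2)      # quartic coefficient
c0 = A @ Sig_et
lo, hi = np.zeros_like(c0), np.full_like(c0, 2.0)  # bracket the radius
for _ in range(80):                                 # vectorized bisection
    mid = 0.5 * (lo + hi)
    pos = c3 * mid**3 + c1 * mid - c0 > 0
    hi = np.where(pos, mid, hi)
    lo = np.where(pos, lo, mid)
r = 0.5 * (lo + hi)
i = np.argmax(r * c0)                              # best obedient direction
b_score = r[i] * A[i]
```

The computed points match the coordinates used in the figure, and scoring shifts weight from the more heterogeneous feature $2$ toward feature $1$.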
I now present this perturbation argument more generally for arbitrary $k$. Start at the signaling equilibrium vector, which I denote here by $b^\star$ to simplify the notation. Write the receiver's utility as a function of both the sender's strategy and the receiver's strategy:
\[
U_R(a,b) = - \var \bigl( b^T ( \eta + a \circ \gamma) - \theta \bigr).
\]
When perturbing the vector $b$ away from the signaling equilibrium profile in the direction $v$, the first-order effect is
\[
\nabla_1 U_R (b^\star,b^\star) \cdot v + \nabla_2 U_R (b^\star, b^\star) \cdot v.
\]
But in equilibrium, the receiver is playing a best response, so the first-order condition implies that $\nabla_2 U_R (b^\star, b^\star) = 0$. Therefore, the first-order effect of a local change operates only through the sender's feature vector. Writing out the variance, we get
\[
\nabla_1 U_R (b^\star, b^\star) = - 2 \diag (b^\star) \Sigma_{\gamma \gamma} (b^\star \circ b^\star),
\]
so
\[
\nabla_1 U_R (b^\star, b^\star) \cdot v = - 2 (b^\star)^T \diag (b^\star) \Sigma_{\gamma \gamma} \diag(b^\star) v.
\]
Therefore there is a local improvement in moving away from the more heterogeneous features. Of course, this direction must remain within the manifold of obedient coefficient vectors. The intermediary cannot systematically shrink the coefficient vector because doing so would violate the obedience constraint. But as long as the features are not symmetric, there is a direction that preserves the obedience constraint and locally improves the receiver's information.
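This envelope step can be checked by finite differences (a numerical sketch, assuming NumPy, reusing the dashed-curve parameters from \cref{fig:obedient_b}): at the signaling fixed point, the gradient of $U_R$ in its second argument vanishes, while the gradient in its first argument matches the closed form $-2 \diag(b^\star) \Sigma_{\gamma\gamma} (b^\star \circ b^\star)$:

```python
import numpy as np

Sig_ee = np.array([[2.0, 1.0], [1.0, 2.0]])
Sig_et = np.array([1.0, 1.0])
Sig_gg = np.diag([1.5, 6.0])
sig_th2 = 1.0

def U_R(a, b):
    # receiver's utility -var(b'(eta + a o gamma) - theta)
    ab = a * b
    return -(b @ Sig_ee @ b - 2 * b @ Sig_et + sig_th2 + ab @ Sig_gg @ ab)

# signaling fixed point (damped iteration, a convenient numerical method)
b = np.linalg.solve(Sig_ee, Sig_et)
for _ in range(500):
    M = Sig_ee + np.diag(b) @ Sig_gg @ np.diag(b)
    b = 0.5 * b + 0.5 * np.linalg.solve(M, Sig_et)

# central finite differences in each argument
h = 1e-6
eye = np.eye(2)
grad2 = np.array([(U_R(b, b + h * e) - U_R(b, b - h * e)) / (2 * h) for e in eye])
grad1 = np.array([(U_R(b + h * e, b) - U_R(b - h * e, b)) / (2 * h) for e in eye])
formula = -2.0 * b * (Sig_gg @ (b * b))   # -2 diag(b) Sig_gg (b o b)
```

The vanishing of `grad2` is the envelope condition invoked in the text; `grad1` is the residual first-order effect through the sender's strategy.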
\begin{prop} [Asymmetry condition] \label{res:scaling}
If $b_{\signal}$ is not a scalar multiple of $\beta$, then $b_{\signal} \neq b_{\score}$.
\end{prop}
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[
clip = false,
name = plot,
legend to name = leg,
legend style={/tikz/every even column/.append style={column sep=0.3 cm}},
axis y line=middle,
axis x line=bottom,
clip = false,
scale = 1.1,
xtick = {1.5,6},
ytick = {0.25, 0.3, 0.333},
yticklabels = {0.27, 0.3, $\beta_1 = \beta_2$},
xmin = 1.5,
xmax = 6.25,
ymin = 0.25,
ymax = 0.35,
xlabel = {$\sigma_{\gamma,2}^2$},
x label style={at={(current axis.right of origin)},anchor=west},
]
\addplot [Blue, thick, smooth] table [col sep = comma, x index = 0, y index = 1] {signaling_CS1.tex};
\addplot [Blue, thick, smooth,forget plot] table [col sep = comma, x index = 0, y index = 1] {signaling_CS2.tex};
\addlegendentry{signaling}
\addplot [Orange, thick, dashed, smooth] table [col sep = comma, x index = 0, y index = 1] {scoring_CS1.tex} node [anchor = south] (topa) {};
\addplot [Orange, thick, dashed, smooth, forget plot] table [col sep = comma, x index = 0, y index = 1] {scoring_CS2.tex} node [anchor = north] (bottomb) {};
\addlegendentry{scoring}
\addplot [dotted, Green, thick, smooth] table [col sep = comma, x index = 0, y index = 1] {scoring_BR1.tex} node [anchor = north] (topb) {};
\addplot [dotted, Green, thick, smooth, forget plot] table [col sep = comma, x index = 0, y index = 1] {scoring_BR2.tex} node [anchor = south] (bottoma) {};
\addlegendentry{BR to scoring}
\end{axis}
\draw[decorate,decoration={brace, raise = 5pt}, thick, gray] (topa) -- (topb) node [midway, anchor = west, xshift = 10pt] {$b_1$};
\draw[decorate,decoration={brace, raise = 5pt}, thick, gray] (bottoma) -- (bottomb) node [midway, anchor = west, xshift = 10pt] {$b_2$};
\node[at=(plot.east), anchor=west, xshift = 0.5cm] {\pgfplotslegendfromname{leg}};
\end{tikzpicture}
\caption{Ex post best response to scoring}
\label{fig:scoring_BR}
\end{figure}
\begin{comment}
As the vector moves towards the less heterogeneous components, the receiver
If the coefficient vector in the signaling game is a scalar multiple of the regression vector, then the sender's distortion in the signaling equilibrium has the same proportional effect on each of the features. Through scoring, the intermediary can essentially rearrange the weights placed on the different features, but the intermediary cannot change the average weight. Therefore, in this case, the marginal effects of changing the sensisitivies all coincide, so there is nothing to be gained.
\end{comment}
For scoring, the comparative statics result is stronger than for signaling. A function $g$ is increasing with respect to a partial order if $x \succeq y$ implies $g(x) \geq g(y)$ and $x \succ y$ implies $g(x) > g(y)$. Note, however, that the relations $A \succeq B$ and $A \neq B$ do not imply $A \succ B$.
\begin{prop}[Scoring comparative statics] \label{res:scoring_CS}
The receiver's scoring utility is decreasing in the variance $\Sigma_{\gamma \gamma}$.
\end{prop}
As the variance $\Sigma_{\gamma \gamma}$ increases, so does the variance of the decision, which strictly tightens the obedience constraint. This result is stronger than the comparative statics in the signaling case. In fact, the proof shows that the receiver's payoff is decreasing with respect to the even weaker copositive matrix order.
\section{Comparing commitments} \label{sec:comparing_commitment}
The intermediary partially resolves the receiver's commitment problem. Now I consider a \emph{screening} setting, in which the receiver commits to a decision as a function of the sender's features. In this section, I compare the three settings---signaling, scoring, and screening.
\subsection{Decision commitment}
The final commitment regime is full commitment for the receiver, termed \emph{screening}. The intermediary and the receiver have the same preferences, so the intermediary can fully reveal the sender's features to the receiver and let the receiver commit to a decision rule. Alternatively, the intermediary can make a decision recommendation, without any obedience constraints on the receiver. To isolate the effect of commitment, I again restrict to linear decision rules. The problem is to minimize the expected loss
\[
\E \Bigl[ \bigl( b_0 + b^T (\eta + b \circ \gamma) - \theta\bigr)^2 \Bigr].
\]
This objective incorporates the sender's best response to this decision rule. Without the obedience constraint, it is feasible to commit to a decision rule that is systematically biased, but this is never optimal. Adjusting the intercept $b_0$ does not affect the sender's incentives, so it is always optimal to choose $b_0$ so that the expected decision equals the expected characteristic. Applying the same bias--variance decomposition used above in the intermediary's problem, it follows that the receiver faces an unconstrained optimization over coefficient vectors $b$ in $\mathbf{R}^k$, with loss function
\[
\var \bigl( b^T \eta + (b \circ b)^T \gamma - \theta \bigr) = \var (b^T \eta - \theta) + (b \circ b)^T \Sigma_{\gamma\g} (b \circ b).
\]
This is the third and final commitment regime. Here it is immediately clear that the receiver's payoff is decreasing in $\Sigma_{\gamma\g}$, with respect to the positive semidefinite order. The objective is the same in all three cases. Without the intermediary, the signaling equilibrium is pinned down by the system of $k$ equations in \eqref{eq:signaling_cov}. By \cref{res:existence_uniqueness}, there is exactly one coefficient vector satisfying these conditions, so the objective function plays no role. Next, with the intermediary, scoring imposes a single obedience condition \eqref{eq:scoring_cov}, so the feasible set is a $(k-1)$-dimensional hypersurface in $\mathbf{R}^k$. Finally, under screening, there are no constraints, so the optimization is over all of $\mathbf{R}^k$.
\subsection{Commitment reduces distortion}
Now I can formally quantify the loss from distortion. The receiver's loss separates and can be written as
\[
\var( b^T \eta - \theta) + \var \bigl( (b \circ b)^T \gamma \bigr)
=
\var(b^T \eta - \theta) + (b \circ b)^T \Sigma_{\gamma\g} (b \circ b).
\]
The second term reflects the uncertainty created by the sender's distortion. Define the norm $\| \cdot \|_{4,\gamma}$ by
\[
\| b \|_{4, \gamma} = \Bigl[ (b \circ b)^T \Sigma_{\gamma\g} (b \circ b) \Bigr]^{1/4}.
\]
See \cref{sec:convexity} for the proof that this is in fact a norm. This norm measures the sensitivity of the receiver's decisions to the sender's features. It is with respect to this measure of sensitivity that the receiver's decision becomes less sensitive to the sender's features as commitment increases.
\begin{thm}[Commitment reduces distortion] \label{res:reduced_distortion}
If $\Sigma_{\gamma\g}$ is positive definite, then
\begin{align*}
\| \beta \|_{4,\gamma}
&> \| b_{\mathrm{signal}} \|_{4,\gamma}
\geq \| b_{\mathrm{score}} \|_{4,\gamma}
> \| b_{\mathrm{screen}} \|_{4,\gamma}.
\end{align*}
\end{thm}
As the receiver's commitment power increases, the receiver makes his decision less sensitive to the sender's features in order to reduce the sender's distortion.
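In the symmetric simple setting ($k = 2$, $\sigma_{\varepsilon,i}^2 = 1$, $\sigma_{\gamma,i}^2 = 1.5$), this chain can be checked numerically (an illustrative sketch, assuming NumPy). Restricting to $b = t(1,1)$, the regression coefficient has $t = 1/3$, signaling solves $1.5 t^3 + 3t = 1$ (scoring coincides with signaling here, so the middle inequality holds with equality), and screening solves $3 t^3 + 3t = 1$; along this ray, $\|b\|_{4,\gamma} = (3 t^4)^{1/4}$ is increasing in $t$:

```python
import numpy as np

# Symmetric simple setting: Sig_ee = [[2,1],[1,2]], Sig_et = (1,1),
# Sig_gg = diag(1.5, 1.5).  Along b = t (1,1):
#   regression:  3 t = 1
#   signaling (= scoring here):  1.5 t^3 + 3 t = 1
#   screening (symmetric minimizer of a convex objective):  3 t^3 + 3 t = 1
def positive_root(coeffs):
    r = np.roots(coeffs)
    r = r[np.isreal(r)].real
    return r[r > 0][0]

t_beta = 1.0 / 3.0
t_signal = positive_root([1.5, 0.0, 3.0, -1.0])
t_screen = positive_root([3.0, 0.0, 3.0, -1.0])

def norm4(t):
    # ||b||_{4, gamma} for b = t (1,1) and Sig_gg = diag(1.5, 1.5)
    return (3.0 * t**4) ** 0.25
```

The computed values give $\|\beta\|_{4,\gamma} > \|b_{\mathrm{signal}}\|_{4,\gamma} = \|b_{\mathrm{score}}\|_{4,\gamma} > \|b_{\mathrm{screen}}\|_{4,\gamma}$, consistent with the theorem.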
\subsection{Comparing feature weights}
I compare all three commitment regimes in the simple setting of \cref{sec:simple_setting}. The parameter $\sigma_{\gamma,1}^2$ is fixed at $1.5$, and I vary $\sigma_{\gamma,2}^2$. \cref{fig:comparison} plots the feature weights as $\sigma_{\gamma,2}^2$ varies from $1.5$ to $6$.
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[
clip = false,
name = plot,
legend to name = legg,
legend style={/tikz/every even column/.append style={column sep=0.3 cm}},
axis y line=middle,
axis x line=bottom,
clip = false,
scale = 1.1,
xtick = {1.5,6},
ytick = {0.25, 0.3, 0.333},
yticklabels = {0.25, 0.3, $\beta_1 = \beta_2$},
xmin = 1.5,
xmax = 6.25,
ymin = 0.24,
ymax = 0.35,
xlabel = {$\sigma_{\gamma,2}^2$},
x label style={at={(current axis.right of origin)},anchor=west},
]
\addplot [Blue, thick, smooth] table [col sep = comma, x index = 0, y index = 1] {signaling_CS1.tex};
\addplot [Blue, thick, smooth, forget plot] table [col sep = comma, x index = 0, y index = 1] {signaling_CS2.tex} node [anchor = south] (bottoma) {};
\addlegendentry{signaling}
\addplot [Orange, thick, dashed, smooth] table [col sep = comma, x index = 0, y index = 1] {scoring_CS1.tex} node [anchor = south] (topa) {};
\addplot [Orange, thick, dashed, smooth, forget plot] table [col sep = comma, x index = 0, y index = 1] {scoring_CS2.tex};
\addlegendentry{scoring}
\addplot [Green, thick, dotted, smooth] table [col sep = comma, x index = 0, y index = 1] {screening_CS1.tex} node [anchor =north] (topb) {};
\addplot [Green, thick, dotted, smooth, forget plot] table [col sep = comma, x index = 0, y index = 1] {screening_CS2.tex} node [anchor = north] (bottomb) {};
\addlegendentry{screening}
\end{axis}
\draw[decorate,decoration={brace, raise = 5pt}, thick, gray] (topa) -- (topb) node [midway, anchor = west, xshift = 10pt] {$b_1$};
\draw[decorate,decoration={brace, raise = 5pt}, thick, gray] (bottoma) -- (bottomb) node [midway, anchor = west, xshift = 10pt] {$b_2$};
\node[at=(plot.east), anchor=west, xshift = 0.5cm] {\pgfplotslegendfromname{legg}};
\end{tikzpicture}
\caption{Comparing commitments}
\label{fig:comparison}
\end{figure}
First, look at the left axis, where $\sigma_{\gamma,1}^2 = \sigma_{\gamma,2}^2 = 1.5$. Here the signaling equilibrium and the scoring solution coincide, as was plotted in \cref{fig:optimal_scoring}. The screening solution, however, puts strictly smaller weights on both features. This highlights the fact that scoring allows the intermediary to rearrange the weights on the different features, but it does not allow the intermediary to systematically decrease the weight on every feature. When the features are symmetric, nothing can be gained from re-weighting, but commitment strictly increases the receiver's payoff.
Moving to the right, the variance $\sigma_{\gamma,2}^2$ increases, meaning that distortion ability on the second feature becomes more heterogeneous. For all three regimes, the weight placed on the second feature decreases. If the features were uncorrelated, this would have no effect on the first feature, but because $\eta_1$ and $\eta_2$ are positively correlated, the weight on the first feature increases. Relative to the signaling equilibrium, the scoring solution places even more unequal weights on the two features because the intermediary, unlike the receiver's equilibrium strategy, internalizes the effect of the greater decision sensitivity on the sender's choice of distortion. The scoring solution is not a signaling equilibrium. If, ex post, the receiver could observe the features, he would want to use different weights, as illustrated in \cref{fig:scoring_BR}.
\begin{comment}
\begin{thm}[Comparing feature weights] \label{res:comparing_feature_weights}
Suppose $\cov (\eta,\gamma) = 0$.
\begin{enumerate}
\item For screening, we have
\[
b_{\mathrm{screen}} ( \Sigma_{\gamma\g}, \Sigma_{\eta \eta}, \Sigma_{\gamma \theta}, \Sigma_{\eta \theta})
\in
E( 2 \Sigma_{\gamma\g}, \Sigma_{\eta \eta}, 2 \Sigma_{\gamma \theta}, \Sigma_{\eta \theta}).
\]
\item For scoring, there exists $\lambda$ such that
\[
b_{\mathrm{score}} ( \Sigma_{\gamma\g}, \Sigma_{\eta \eta}, \Sigma_{\gamma \theta}, \Sigma_{\eta \theta})
\in
E( 2 \Sigma_{\gamma\g}, \Sigma_{\eta \eta}, 2 \lambda \Sigma_{\gamma \theta}, \lambda \Sigma_{\eta \theta}).
\]
\end{enumerate}
\end{thm}
\end{comment}
\begin{comment}
These observations about the two-feature example hold more generally, as formalized in the next result. Recall that the vectors $b_{\mathrm{signal}}$ and $b_{\mathrm{score}}$ respectively minimize the functions
\begin{align*}
v_{\mathrm{signal}} (b) &= \var ( b^T \eta - \theta) + (1/2) (b \circ b)^T \var(\gamma) (b \circ b), \\
v_{\mathrm{screen}} (b) &= \var ( b^T \eta - \theta) + (b \circ b)^T \var(\gamma) (b \circ b).
\end{align*}
Now I present an analogous representation for scoring.
\end{comment}
\begin{comment}
In both cases, the terms involving $\gamma$ are doubled because a committed party anticipates how the receiver's decision rule influences the sender's distortion policy. Under scoring, the covariance parameters with $\theta$ are scaled by $\lambda$ to preserve the obedience constraint. This term $\lambda$ is the Lagrange multiplier on the covariance constraint.
\end{comment}
The observations in this example generalize to the simple setting introduced in \cref{sec:simple_setting}.
\begin{prop}[Feature weights] \label{res:feature_weights}
In the simple setting, the following hold.
\begin{enumerate}[label = (\roman*)]
\item $b_{\signal}$, $b_{\score}$, and $b_{\screen}$ are all nonnegative.
\item $b_{\screen} \leq b_{\score}$.
\item If $(\sigma_{\varepsilon,i}^2, \sigma_{\gamma,i}^2) > (\sigma_{\varepsilon,j}^2, \sigma_{\gamma,j}^2)$, then $b_i < b_j$ for all $b$ in $\{ b_{\signal}, b_{\score}, b_{\screen} \}$.
\end{enumerate}
\end{prop}
First, I compare the feature weights across commitment levels. Relative to scoring, the screening solution puts less weight on every feature. Next, I compare the weights on different features within the same commitment setting. Less weight is placed on a feature if its intrinsic level is a noisier signal of the latent characteristic and its distortion ability is more heterogeneous.
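To see the ordering between screening and scoring concretely, here is a minimal numerical sketch (my own construction, not from the paper) of a one-feature case with $\eta$ and $\gamma$ uncorrelated with unit variance and $\theta = \eta$. With a single feature, scoring has no free coefficients and coincides with signaling, so the comparison isolates the effect of decision commitment; the one-dimensional objectives below are assumptions, read off from the text's variance decomposition.

```python
# Hypothetical single-feature illustration (k = 1), not from the paper:
# eta and gamma uncorrelated with unit variance, theta = eta.
#
# Screening: minimize var(b*eta + b^2*gamma - theta) = (b - 1)^2 + b^4.
# Scoring: with one feature, scoring coincides with signaling; the
# obedience condition var(score) = cov(score, theta) gives b + b^3 = 1.

def screening_weight(step=1e-5):
    # brute-force minimization of (b - 1)^2 + b^4 on [0, 1]
    best_b, best_v = 0.0, float("inf")
    b = 0.0
    while b <= 1.0:
        v = (b - 1.0) ** 2 + b ** 4
        if v < best_v:
            best_b, best_v = b, v
        b += step
    return best_b

def scoring_weight():
    # solve b + b^3 = 1 by bisection on [0, 1]
    lo, hi = 0.0, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if mid + mid ** 3 < 1.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

b_screen = screening_weight()
b_score = scoring_weight()
print(b_screen, b_score)  # roughly 0.590 and 0.682
```

The screening weight solves $2b^3 + b = 1$ ($b \approx 0.59$) while the scoring weight solves $b^3 + b = 1$ ($b \approx 0.68$), consistent with screening shrinking the weight on the feature.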
\begin{comment}
\subsection{Decomposing the receiver's losses}
Using these optimization conditions, I can cleanly decompose the different sources of the receiver's losses. This is a formal sense in which scoring provides an intermediate level of commitment.
The total losses are ordered as:
\begin{align*}
\| \beta \|_{4,\gamma}^4
&>
\| b_{\mathrm{signal}} - \beta \|_{4, \eta}^4 + \| b_{\mathrm{signal}} \|_{4,\gamma}^4 \\
&\geq
\| b_{\mathrm{score}} - \beta \|_{4, \eta}^4 + \| b_{\mathrm{score}} \|_{4,\gamma}^4 \\
&>
\| b_{\mathrm{screen}} - \beta \|_{4, \eta}^4 + \| b_{\mathrm{screen}} \|_{4,\gamma}^4.
\end{align*}
Performing comparative statics in the sender's welfare is much more challenging.
\end{comment}
\begin{comment}
\begin{thm}[Screening] \label{res:scoring_screening}
Suppose $\var(\gamma)$ has full rank. Relative to scoring, screening strictly reduces both $\var(Y)$ and $\cov (Y, \theta)$.
\end{thm}
\subsection{Special case: Uniform correlation}
\subsection{Discussion of commitment}
If the receiver takes a binary action, then signaling and screening coincide. Why? In the binary case, suppose the intermediary recommends this action. Clearly, we have an ordering, the only feasible deviation is to completely uninformative.
If the receiver takes only two actions, and has preferences over the state only, then there is no difference between informational commitment and decision commitment, provided that he is restricted to playing a best response.
What explains this difference. In the binary case, recommending an action does not provide much information at all.
\begin{figure}
\centering
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{@{} l p{3.1cm} p{3.5cm} p{3.1cm} @{} }
\toprule
& \textbf{Signaling} & \textbf{Scoring} & \textbf{Screening} \\
\midrule
Commitment & none & information & decisions \\
Constraints & $\E [ \theta | X] = f(X)$ & $\E [ \theta | f(X) ] = f(X)$ & none \\
Free coefficients & $0$ & $k - 1$ & $k$ \\
\bottomrule
\end{tabular}
\caption{Comparing commitment}
\label{fig:commitment}
\end{figure}
\end{comment}
\section{Extensions} \label{sec:extensions}
\subsection{Random scoring}
In the main model, I restrict attention to deterministic linear scores. Suppose now that the intermediary can use stochastic scoring rules, with the linearity restriction imposed only on the conditional expectation: there exist $b_0$ in $\mathbf{R}$ and $b$ in $\mathbf{R}^k$ such that, for all $x \in \mathbf{R}^k$,
\[
\E [ \sigma_R (x) ] = b_0 + b^T x
\]
Scoring is noise-free if $\sigma_R(x) = \E [ \sigma_R(x)]$ for all $x$.
\begin{prop}[Noise-free scoring] \label{res:noiseless_scoring}
Optimal scoring is noise-free.
\end{prop}
In general, this result relies on the fact that $\Sigma_{\gamma \theta} \geq 0$. Next I provide a single-feature example, violating Assumption \ref{it:uncorrelated}, in which noisy scoring is optimal.
\begin{exmp}[Noisy scoring] Take $k = 1$. Suppose $\eta$ and $\gamma$ are uncorrelated with unit variance and $\theta = \eta - 2 \gamma$. In this case the optimal scoring rule uses $b = 0.25$ and $t^2 \approx 0.059$. The noise dampens $b$ in order to reduce the information loss from distortion.
\end{exmp}
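The reported values can be checked numerically. The reduction below is my own algebra for this one-feature specification and should be read as a sketch: obedience pins down the noise variance as $t^2 = b - 3b^2 - b^4$, and the receiver's loss is $(b-1)^2 + (b^2+2)^2 + t^2$.

```python
# Numerical check of the noisy-scoring example (k = 1, unit variances,
# eta and gamma uncorrelated, theta = eta - 2*gamma).
# Score: f(X) = b*(eta + b*gamma) + t*zeta with var(zeta) = 1.
# Obedience: var(score) = cov(score, theta), i.e.
#   b^2 + b^4 + t2 = b - 2*b^2   =>   t2 = b - 3*b^2 - b^4.
# Receiver loss: var(b*(eta + b*gamma) - theta) + t2
#   = (b - 1)^2 + (b^2 + 2)^2 + t2.

best = None
b = 0.0
while b <= 1.0:
    t2 = b - 3 * b**2 - b**4           # noise variance implied by obedience
    if t2 >= 0:                        # feasibility: nonnegative noise variance
        loss = (b - 1.0) ** 2 + (b**2 + 2.0) ** 2 + t2
        if best is None or loss < best[0]:
            best = (loss, b, t2)
    b += 1e-5

loss, b_opt, t2_opt = best
print(b_opt, t2_opt)  # approximately 0.25 and 0.059, matching the example
```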
\begin{comment}
uncorrelated noise to the linear scoring rule? Suppose the scoring rule takes the form
\[
f(x) = b_0 + b^T x + t \zeta,
\]
where $\zeta$ is uncorrelated elliptical noise with unit variance.%
\footnote{It is possible to choose $\zeta$ so that (i) $X$ and $\zeta$ are uncorrelated, and (ii) $(X, f(X))$ is jointly elliptical. I suppose $\zeta$ is chosen in this way. For a formal statement of existence, see .}
\end{comment}
\begin{comment}
In some settings, institutional restrictions make noise infeasible. People would be unlikely to accept an algorithm that assigned different scores to people with identical characteristics. On the other hand, there are subtle ways in which noise can be introduced. For instance, differences between human decisionmakers naturally introduce noise into the process.
Indeed, if $t = 0$, then we have a nice projection interpretation of the regression coefficients, but this interpretation does not hold in general.
In the statement of the intermediary's problem, I have allowed the intermediary to incorporate uncorrelated noise into her scoring rule. The scoring rule has \emph{no noise} if $t = 0$.
Expanding the noise term, the intermediary's problem becomes
\begin{equation*} \label{eq:intermediary_problem}
\begin{aligned}
& \text{minimize}
&& \var \bigl( b^T ( \eta + b \circ \gamma) - \theta \bigr) + t^2 \\
& \text{subject to }
&& \var \bigl( b^T \eta + (b \circ b)^T \gamma \bigr) + t^2
=
\cov \bigl( b^T \eta + (b \circ b)^T \gamma, \theta \bigr).
\end{aligned}
\end{equation*}
The covariance is $\sigma_{\eta \theta} b + \sigma_{\gamma \theta} b^2$, so we need negative and foc gives $b =- \sigma_{\eta \theta} /2 \sigma_{\gamma \theta}$. Then we need a particular condition to be satisfied.
On the one hand, it would seem that noise is useless because it increases the variance of the receiver's action. On the other hand, one might suspect that it changes the incentives of the sender to exert effort, but this is not directly true. The incentives are determined by $b$ only and hence noise has no effect.\footnote{A similar phenomenon is considered in a cheap talk setting by \cite{BlumeBoardKawamura2007}. There as well, the noise has no direct effect on the incentives of the sender, taking the response as given, but the noise changes which responses are feasible.}
I focused on linear decision rules, which could be achieved by deterministic scoring rules. What if additional randomness is allowed. I can generalize to the class of decision rules with linear conditional expectation.
This allows me to consider two types of noise.
We have an exact condition for the inclusion of noise. For the following result, drop Assumptions \ref{it:uncorrelated} and \ref{it:covariance}.
\end{comment}
\begin{comment}
To focus on the weighting scheme, I restriction attention to attainable outcome rules in which the decision rule has a linear conditional expectation. In particular, this of course includes linear scoring rules like those often seen in practice. But it also allows the scorer to add noise to the score or use random weights (independent of $X$).
This is a substantive decision.
This requirement means that
\[
y(x) = b_0 + b^T x + s \varepsilon,
\]
where $\varepsilon$ is also elliptical and is uncorrelated with $x$.\footnote{There is some subtley there because uncorrelated implies mean-independent but not indpendent.}
The obedience condition becomes
\[
b_0 + b^T (\eta + b \circ \gamma) + s \varepsilon
=
\E [\theta | b_0 + b^T (\eta + b \circ \gamma) + s \varepsilon].
\]
But by the elliptical distribution, the condition on the coefficient vector $b$ reduces to a single constraint on the covariance:
\[
\var \bigl( b^T (\eta + b \circ \gamma) + s \varepsilon \bigr)
=
\cov \bigl( \theta, b^T (\eta + b \circ \gamma + s \varepsilon \bigr),
\]
which immediately simplifies to
\[
\var( b^T \eta + (b \circ b)^T \gamma ) + s^2
=
\cov ( \theta, b^T \eta + (b \circ b)^T \gamma).
\]
\end{comment}
\subsection{Efficient scoring}
\begin{comment}
More generally, there is not a clean decomposition of the different terms. For instance, gaming can be correlated with $(\theta, \eta)$ and also appear in the receiver's bliss point. Note that the terms are quadratic for different reasons. The first two because we use quadratic loss for the receiver; the second because we use quadratic distortion cost for the sender.
Performing comparative statics is challenging because the problem is not convex
If the intermediary provides no information to the receiver, then the sender has no incentive to exert effort, and hence distortion cost will be zero.
\end{comment}
In the main model, the intermediary maximizes the receiver's utility $u_R$. More generally, suppose that the intermediary maximizes the expectation of the social welfare function
\[
\pi u_S + (1 - \pi) u_R,
\]
where $\pi$ in $[0,1]$ is the Pareto weight on the sender. If $\pi = 0$, this reduces to the baseline model. If $\pi = 1$, then the solution is to provide no information. The sender is risk-neutral and the scoring policy cannot change the receiver's expected decision. Therefore, the scoring policy affects the sender only through her cost of distortion
\[
(1/2) \sum_{i=1}^{k} (b_i \gamma_i)^2/\gamma_i
=
(1/2) (b \circ b)^T \gamma.
\]
For a linear decision rule, the intermediary minimizes
\[
\pi(1/2) (b \circ b)^T \mu_\gamma
+
(1 - \pi) \Brac{ \var(b^T \eta - \theta) + (b \circ b)^T \Sigma_{\gamma\g} (b \circ b)}.
\]
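As a rough illustration of how the Pareto weight shifts the solution, the sketch below minimizes the displayed objective in a hypothetical one-feature case ($\mu_\gamma = \sigma_\gamma^2 = \sigma_\eta^2 = 1$, $\theta = \eta$), restricting attention to noise-free rules; the parameterization is my own assumption, not the paper's calibration.

```python
# Minimize  pi*(1/2)*mu_gamma*b^2 + (1 - pi)*[(b - 1)^2 + b^4]
# over b in [0, 1] for several Pareto weights pi.
# One-feature case with mu_gamma = sigma_gamma^2 = sigma_eta^2 = 1 and
# theta = eta; noise-free rules only (an illustrative assumption).

def efficient_weight(pi, step=1e-5):
    best_b, best_v = 0.0, float("inf")
    b = 0.0
    while b <= 1.0:
        v = pi * 0.5 * b**2 + (1.0 - pi) * ((b - 1.0) ** 2 + b**4)
        if v < best_v:
            best_b, best_v = b, v
        b += step
    return best_b

weights = [efficient_weight(pi) for pi in (0.0, 0.5, 0.9)]
print(weights)  # decreasing in pi: roughly [0.59, 0.50, 0.18]
```

As the Pareto weight on the sender grows, the optimal rule damps every feature weight, trading statistical accuracy for lower distortion cost.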
The noisiness can be captured by the \emph{noise ratio}
\[
\frac{ \E [ \var( f(X) | X)]}{ \var ( f(X))}.
\]
The denominator is the variance of the score. The numerator is the expected variance of the noise that is added to the score.
\begin{comment}
If the intermediary commits to provide no information about the sender's features, then there is no return from distortion, and the sender experiences zero cost.
As $\pi$ increases from $0$ to $1$, less weight is placed on information transmission and more weight is placed on the sender's wasteful distortion activity.
some Pareto weight $\pi$ in $[0,] \in [0,1]$.
If the intermediary only considered the the sender's preferences, the problem becomes trivial. Since the sender's utility is linear in the receiver's belief, and the receiver's decision cannot be systematically biased,
Thee expected cost is therefore
\[
(1/2) (b \circ b)^T \mu_\gamma.
\]
No matter what information policy the intermediary uses, the receiver's decision will be correct on average. Therefore, the only effect on the sender's utility is through the distortion cost. Suppose the intermediary puts weight $\pi \in [0,1]$ on the receiver and weight and $1 - \pi$ on the sender. The intermediary then minimizes
The first expression is the statistical objective. It is minimized by regressing $\theta$ on $\eta$. In addition, there are three penalties. First, there is a quartic penalty in $b$ with coefficient proportional to the heterogeneity of gaming ability that reflects the \emph{endogenous noise} introduced by the receiver's gaming ability. The is a quadratic penalty in $s$ that reflects the \emph{exogenous noise}. Finally, there is a quadratic penalty in $b$ that is proportional to the mean distortion ability. If a dimension is difficult to game anyway, then the sensitivity $b$ has a small effect on the sender's cost.
\end{comment}
\begin{prop}[Efficient scoring] \label{res:efficient_scoring}
There exists a cutoff $\bar{\pi} \in [0,1)$ such that optimal scoring is noise-free if and only if $\pi \leq \bar{\pi}$. The noise ratio is strictly increasing in $\pi$ for $\pi > \bar{\pi}$.
\end{prop}
\begin{comment}
conditional variance matrix $\Sigma_{\eta\h} - \Sigma_{\eta \gamma} \Sigma_{\gamma \gamma}^\dagger \Sigma_{\gamma \eta}$ has full rank.
\end{comment}
\subsection{Productive distortion}
In my model, distortion takes a particularly simple form: it additively adjusts each feature. In practice, distortion represents some activity that may affect multiple features simultaneously; the baseline model implicitly assumes that a convenient change of basis has already been performed. The substantive assumption is that the feature vector depends linearly on the natural vector $\eta$ and the distortion vector $d$. Suppose instead that the realized feature vector $x$ is given by
\[
x = x_0 + A \eta + B d,
\]
where $x_0$ is a $k$-vector and $A$ and $B$ are full-rank $k \times k$ matrices. This setting can be reduced to the baseline by redefining the feature vector to be $B^{-1} x$, which contains exactly the same information as the original feature vector $x$. Since
\[
B^{-1} x = B^{-1} (x_0 + A\eta) + d,
\]
this new feature vector takes the assumed form with natural vector $B^{-1} ( x_0 + A \eta)$. Because $B$ has full rank, this transformation preserves Assumptions \ref{it:uncorrelated} and \ref{it:covariance}.
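The change of basis can be verified mechanically; the $2 \times 2$ matrices below are arbitrary full-rank choices for illustration.

```python
# Check of the change of basis x -> B^{-1} x in a 2x2 instance.
# x = x0 + A*eta + B*d  implies  B^{-1} x = B^{-1}(x0 + A*eta) + d.

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def inv2(M):
    # explicit inverse of a 2x2 matrix
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[ M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det,  M[0][0] / det]]

A = [[2.0, 1.0], [0.5, 3.0]]
B = [[1.0, 0.4], [0.2, 2.0]]   # full rank: det = 1.92
x0, eta, d = [1.0, -2.0], [0.3, 0.7], [-1.5, 0.25]

# realized features: x = x0 + A*eta + B*d
x = [x0[i] + matvec(A, eta)[i] + matvec(B, d)[i] for i in range(2)]

# transformed features: B^{-1} x should equal B^{-1}(x0 + A*eta) + d
Binv = inv2(B)
lhs = matvec(Binv, x)
base = [x0[i] + matvec(A, eta)[i] for i in range(2)]
rhs = [matvec(Binv, base)[i] + d[i] for i in range(2)]
print(lhs, rhs)  # the two vectors agree up to rounding
```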
\subsection{Observable features}
In the main model, the receiver observes the sender's score, but he does not directly observe any of the sender's features. What if instead the receiver can observe some of the sender's features directly?
Suppose a subset of features is observable. Partition the index set as
\[
\{1,\ldots, k\} = I \cup J,
\]
where features $i$ in $I$ are directly observable, but features $j$ in $J$ are not. Therefore, the decision rule is parameterized by $(b_0, b) = (b_0, b_I, b_J)$. The obedience requirement is now
\begin{align*}
b_I &= \reg ( \theta | \eta_I + b_I \circ \gamma_I), \\
1 &= \reg (\theta | b^T ( \eta + b \circ \gamma)).
\end{align*}
Thus, this general problem interpolates between signaling, in which $I = \{1, \ldots, k\}$, and scoring, in which $J = \{1, \ldots, k\}$.
\begin{comment}
If the dimensions were uncorrelated, then again the problem would be completely separable, and we would observe the signaling solution on each of the $b_2$, and then the scoring solution up top. But more generally, we see that this imposes further constraints on the scoring solution, and we will tend to see less movement away from the signaling equilibrium.
\end{comment}
\begin{comment}
\subsection{Nonlinear scoring}
The restriction to linear rules by the receiver helps to control the endogenous distribution of the sender's features. If the receiver uses a nonlinear score, then the sender's best response is nonlinear, and hence the distribution of the sender's feature vectors may not be well-behaved. To construct equilibria, the receiver must update his beliefs about $\theta$ form his observation of $X$. In general, computing this conditional expectation is very challenging, and even if it can be computed, in must in general be checked at a continuum of points. With elliptical distributions, this conditional expectation can be calculated, and moreover has a simple form so it suffices to check finitely many equalities.
The elliptical distributions ensure that given a linear strategy by the sender, the receiver's inference problem is linear. \cite{FrankelKartik2019WP} conjecture that in the full commitment regime, linear scoring rules are optimal. In \cref{sec:nonlinear_decisions}, I provide a counterexample to this conjecture. For a carefully chosen elliptical distribution, I can construct nonlinear equilibria that outperform the linear equilibria. These examples, however, are quite fragile. Once we move away from linearity, the radial density function defining the elliptical distribution enters the analysis.
\end{comment}
\subsection{Correlated distortion ability} \label{sec:correlated_distortion}
The covariance assumptions capture the intuition that distortion impedes the informativeness of the sender's features. Without the covariance assumptions, the variance cannot be decomposed into the uncertainty introduced by the distortion ability and the information from the sender's intrinsic levels. In the scoring benchmark, the equilibrium still exists, but uniqueness is not guaranteed. Existence only relies on the feature vector having nonsingular variance. I can prove that generically scoring strictly improves upon signaling, but there is no easy expression for the generic condition.
\subsection{Further extensions}
I discuss additional extensions in \cref{sec:further_extensions}. In particular, I allow for the sender's cost function to be nonseparable across the different dimensions of distortion and for multi-dimensional decisions. I also allow the sender's distortion activity to be productive in the sense that it shifts the receiver's bliss point. The common thread of these extensions is that as long as the linear structure is preserved, the equilibria can be characterized by a large cubic system. This system, however, is challenging to analyze.
\section{Conclusion} \label{sec:conclusion}
I show that in the presence of strategic distortion, the receiver-optimal scoring rule outperforms full disclosure. There are natural directions for future work. To focus on the weighting of different features, I study a static model. In a dynamic model, I could analyze the relative weights on different features at different times. I also assume that the intermediary knows the distribution of the sender's type and latent characteristic. It would be interesting to try to estimate the moments of the distribution from observed behavior. This would be a first step towards applying the theory to design more accurate scoring systems.
\begin{comment}
There are three main suggestions for designing scores:
\begin{enumerate}
\item Using ``ungamed" data.
\item parameter constraints on algorithms.
\item Ex post regressions.
\end{enumerate}
\end{comment}
\begin{comment}
\section{Discussion of the model - to be distributed throughout the paper}
\paragraph{Features}
\paragraph{distortion}
In the model, distortion differs from its usual meaning in a few ways. In contrast to standard models of moral hazard, distortion is entirely unproductive: the receiver's payoff does not depend directly on the sender's distortion choice. While distortion is usually viewed as a productive activity to be maximized, in my model distortion interferes with the receiver's learning process.
Additionally, distortion can take positive or negative values. distortion by the sender adjusts the value of a feature away from its intrinsic level, in either direction. The cost of distortion depends on the square of the distortion so it is symmetric in the direction of adjustment. For some features, distortion in one dimension is costless. But this can be accommodated easily into my analysis because in the equilibria I study, every type chooses distortion with the same sign. Thus, the value of the distortion ability represents the value for distortion in the direction used in equilibrium. In the appendix, I discuss how to extend the model to allow for costless distortion in one direction, nor more generally to allow for different cost coefficients in the two directions. With costless distortion in one direction, the feature either entires positively or is entirely ignored.
\paragraph{Preferences}
The receiver's quadratic payoff is a standard way of quantifying the idea that the agent forms beliefs and has utility that increases in precision of beliefs. If an alternative convex loss function were used, the receiver's best response would not change. The quadratic has the advantage that variance is the suitable measure of welfare.
The sender has separable strict preferences. Separability means that cheap talk has no bite, so I am focused on the effect of screening. The cheap talk literature has assessed partial misalignment of preferences, but in many contexts the preferences misalignment is more extreme. The quadratic cost is standard, but the key feature is that
\end{comment}
\newpage
\section{Introduction}
\label{intro}
\setcounter{equation}{0}
In its original format,
the Ehrenfest classification scheme identifies the order of a phase transition as
that of the lowest derivative of the Helmholtz free energy which displays a
discontinuity there \cite{Eh33}. Typical transitions which fit to this scheme are
first-order solid-liquid-vapour transitions and second-order superconducting
transitions.
There are, however, many transitions characterised by divergent
rather than discontinuous behaviour.
Examples include ferromagnetic transitions in metals and the spontaneous
symmetry breaking of the Higgs field in particle physics, which display
power-law or logarithmic divergent behaviour as the transition is approached.
The classification scheme has, in practice, been extended to encompass these
scenarios and the order of a transition is commonly given by the
order of the lowest derivative in which any type of non-analytic behaviour
is manifest.
It has long been suspected that transitions of Ehrenfest order greater than two
(with a discontinuity at the transition point) do not exist in nature.
However there is no obvious physical reason why this should be the case.
In fact, recent experimental observations
of the magnetic properties of a
cubic superconductor have been ascribed to its
possessing a fourth-order discontinuous transition
\cite{KuDo99}
(see also \cite{WoWr99} where the existence of
well defined anomalies in the specific heat at the transition point was
claimed). A theory for higher-order transitions was developed in
\cite{Ku97,KuSa02,FaYu04}
and found to be consistent with
experimental work.
Higher-order phase transitions (with either a discontinuity or a divergence
in an appropriate free-energy derivative) certainly exist in a number of theoretical models.
There are third-order temperature-driven transitions in
various ferromagnetic and antiferromagnetic spin models \cite{BeKa52,spin},
as well as spin models coupled to quantum gravity \cite{Ka86,fat}.
Recent theoretical studies also indicate the presence of third-order transitions in
various superconductors \cite{CrNo01}, DNA under mechanical strain \cite{RuBr02},
spin glasses \cite{CrRi03},
lattice and continuum gauge theories \cite{QCD} and matrix models linked to supersymmetry \cite{FuMi04}.
A fourth-order transition in a model of a branched polymer was studied in \cite{BiBu96}
and the Berezinskii-Kosterlitz-Thouless transition is of infinite-order \cite{KT}.
In this paper, we analyse higher-order transitions through the medium of
partition function zeros.
To set the notation, let $t$ represent a generic reduced even variable and $h$ be the
odd equivalent so that $t=T/T_c-1$ and $h=H/k_BT$ in the notation of the Ising model
(i.e., $T$ is the temperature, which is critical at $T_c$, and $H$ is
the external magnetic field). The critical point is given by $(t,h)=(0,0)$.
This may be the end-point of a line of first-order transitions, as is the case in the
Ising or Potts models. In the Potts-like case where the locus of transitions
is curved, we may instead assume that $t$ and $h$ are suitable mixed variables, so that
$h$ is orthogonal to $t$, which parameterizes arc length along the transition line \cite{KaSt00}.
The free energy in the thermodynamic limit is denoted by $f(t,h)$
and its $n^{\rm{th}}$-order even and odd derivatives are
$f^{(n)}_t(t,h)$ and $f^{(n)}_h(t,h)$ so that
the internal energy, specific heat, magnetization and susceptibility are
given (up to some inert factors) as
$
e(t,h) = f^{(1)}_t (t,h)
$,
$
C(t,h) = f^{(2)}_t (t,h)
$,
$
m(t,h) = f^{(1)}_h (t,h)
$,
and
$
\chi (t,h) = f^{(2)}_h (t,h)
$,
respectively.
In the following, to simplify the notation,
we drop the explicit functional dependency on a variable
if it vanishes.
One then commonly describes as an $m^{\rm{th}}$-order phase transition a situation where the
first $(m-1)$ derivatives of the free energy with respect to the even (thermal)
variable are continuous, but where
the $m^{\rm{th}}$ thermal derivative is singular, with a discontinuity or a divergence
at the transition point. The lowest $(m^\prime-1)$ derivatives of the free energy
with respect to the odd (field) variable may also be continuous in $t$,
with a singularity occurring in the ${m^\prime}^{\rm{th}}$ derivative.
Thus a continuous specific heat is realized if $m>2$ and the susceptibility
is also continuous if $m^\prime >2$ as well.
This situation, which is not
normally possible in a ferromagnet\footnote{
One can readily see this by considering
the Rushbrooke scaling law (\ref{RG2}) together with
hyperscaling which give $\gamma/\nu=d-2 \beta/\nu$ ($d$ being dimensionality and $\nu$
the correlation-length critical exponent).
Since for a system of finite linear extent $L$, the magnetisation obeys
$\langle |m| \rangle \propto L^{-\beta/\nu}$, and since
completely uncorrelated ferromagnetic spins would lead by the central limit theorem
to $\langle |m| \rangle \propto L^{-d/2}$, we obtain the bound $\beta/\nu < d/2$,
since the actual decay in the correlated case is slower. From this, one obtains the restriction that
$ \gamma/ \nu$ cannot be negative for a ferromagnet.
}, is the one analysed in \cite{Ku97,KuSa02}, in which $m=m^\prime >2$.
Such higher-order transitions may be possible in branched polymers and
diamagnets such as superconductors.
In the more common scenario where $m^\prime$ is not necessarily the same as $m$,
the scaling behaviour at the transition may be
described by critical exponents
at $h=0$:
\begin{equation}
f^{(m)}_t(t) \sim t^{-A} \;, \quad
f^{(m^\prime)}_h(t) \sim t^{-G} \;, \quad
f^{(1)}_h(t) \sim t^{\beta} \;,
\label{3}
\end{equation}
while, for the magnetization in field at $t=0$, we write
\begin{equation}
f^{(1)}_h(h) \sim h^{1/\delta} \;.
\label{4}
\end{equation}
In the familiar case of a second-order transition ($m=m^\prime=2$), the exponents
$A$ and $G$ become, in standard notation,
$\alpha$ and $\gamma$, associated with specific heat and
susceptibility, respectively.
In the theoretical work of \cite{KuSa02}, the
following scaling relations were derived for the case of a diverging higher-order transition
in which $m^\prime = m$:
\begin{eqnarray}
(m-1) A + m \beta + G = m(m-1) \;,
\label{Rushbrooke1}
\\
G = \beta \left( (m-1) \delta -1\right) \;.
\label{Griffiths1}
\end{eqnarray}
In the second-order case (\ref{Rushbrooke1}) and (\ref{Griffiths1}) become
equivalent to the standard Rushbrooke and Griffiths scaling laws,
\begin{equation}
\alpha + 2 \beta + \gamma = 2 \;, \quad
\gamma = \beta (\delta -1) \;,
\label{RG2}
\end{equation}
as one would expect.
Since the seminal work by Lee and Yang \cite{LY} as well as by Fisher \cite{Fi64},
the analysis of zeros of the partition function has become fundamental
to the study of phase transitions.
Fisher zeros in the complex temperature plane
pinch the real axis at the physical transition point.
The locus of Lee-Yang zeros, in the complex magnetic-field plane,
is controlled by the (real) temperature parameter
and in the high-temperature
phase, where there is no transition, it ends at the Yang-Lee edge.
Denoting the distance
of the edge from the real axis by $r_{\rm{YL}}$, one has the generic behaviour
\begin{equation}
r_{\rm{YL}}(t) \sim t ^{\Delta/2}
\;,
\label{edge}
\end{equation}
at a second-order transition. The exponent $\Delta$ is related to the other
exponents through $\Delta = {2 \gamma \delta}/{(\delta -1)}$.
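As a quick consistency check (my own, using the exactly known two-dimensional Ising exponents $\beta = 1/8$, $\gamma = 7/4$, $\delta = 15$), the edge exponent satisfies $\Delta/2 = \beta\delta$, the usual gap exponent:

```python
# Consistency check of Delta = 2*gamma*delta/(delta - 1) using the
# exactly known 2D Ising exponents (beta = 1/8, gamma = 7/4, delta = 15).
from fractions import Fraction

beta = Fraction(1, 8)
gamma = Fraction(7, 4)
delta = Fraction(15)

Delta = 2 * gamma * delta / (delta - 1)
print(Delta / 2)                      # 15/8, the gap exponent beta*delta
assert Delta / 2 == beta * delta      # edge scaling r_YL ~ t^(beta*delta)
assert gamma == beta * (delta - 1)    # the Griffiths law
```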
In the remainder of this paper,
a number of results concerning the locus and density of
zeros are presented.
Higher-order transitions controlled by a single parameter are analysed
in Sec.~\ref{sec:fisher} where the locii and densities of the corresponding
Fisher zeros are determined. Various restrictions on the properties
of such transitions are established and simple quantitative methods for
analysing them are suggested.
In Sec.~\ref{sec:LY}, the focus is on the zeros of the Lee-Yang variety
where even and odd control parameters come into play.
Here, the scaling relations (\ref{Rushbrooke1})
and (\ref{Griffiths1}) are recovered and elucidated and a number of other
ones are presented.
Finally, conclusions are drawn in Sec.~\ref{sec:ccl}.
\section{Fisher Zeros}
\label{sec:fisher}
\setcounter{equation}{0}
In the bulk of physical models
the locus of Fisher zeros is linear
in a suitable parameter, $u$, which is a function of $t$ and
can be parameterized near the transition point, $u_c$,
by \cite{AbeLY,SuzukiLY,AbeF,SuzukiF}
\begin{equation}
u(r) =u_c+ r \exp(i \phi(r))
\; .
\label{singline}
\end{equation}
This singular line in the upper half-plane has $0 < \phi(r) < \pi$, while that
in the lower half is its complex conjugate.
In the thermodynamic limit, the (reduced) free energy is
\begin{equation}
f(t) = 2 {\rm{Re}} \int_0^R{\ln{\left( u-u(r) \right)}} g(r) dr \; ,
\end{equation}
where $g(r)$ is the density of zeros and $R$ is a cutoff.
We are interested in the moments given by
\begin{equation}
f_t^{(n)}(t)
=
2 (-1)^{n-1}(n-1)!
{\rm{Re}} \int_0^R{\frac{g(r)}{\left(u-u(r)\right)^n}} dr
\; ,
\label{2}
\end{equation}
and consider the cases of discontinuous and divergent $m^{\rm{th}}$-order
temperature-driven transitions separately.
The difference in free energies on either side of
the transition can be expanded as
$
f_+(t) - f_-(t)
=
\sum_{n=1}^\infty{c_n}(u-u_c)^n
$,
where $+$ and $-$ refer to above and below $u_c$.
For a discontinuous transition, $c_n =0$ for
$n<m$, while $c_m \ne 0$ and the discontinuity in
the $m^{\rm{th}}$ derivative of the free energy is
\begin{equation}
\Delta f_t^{(m)} = m!c_m
\; .
\label{fm}
\end{equation}
Now, the real parts of the free energies must match across the singular line
(otherwise the transition would be of order zero)
which, from (\ref{singline}), means
$
\sum_{n=m}^\infty{c_n}r^n \cos{n\phi (r)}
= 0
$.
Therefore
the impact angle (in the upper half-plane), $\phi = \lim_{r \rightarrow 0}{\phi(r)}$,
is
\begin{equation}
\phi = \frac{(2l+1)\pi}{2m}
\quad \quad {\rm{for}} \quad l = 0,\dots, m-1
\; .
\label{general}
\end{equation}
It is now clear that, under these conditions,
vertical impact is allowed only at discontinuous
transitions of odd order.
A discontinuous second-order transition
with impact angle $\pi/2$ is forbidden.
Similarly an impact angle of $\pi/6$, for example, is only allowed
at a transition of order $3$ or $9$ or $15$, etc.
This recovers disparate results for first-, second- and third-order
transitions in \cite{LY}, \cite{BlEv} and \cite{fat} which are associated
with impact angles $\pi/2$ (corresponding to $l=0$), $\pi/4$ ($l=0$)
and $\pi/2$ ($l=1$), respectively.
The question now arises as to the mechanism by which the system selects its
$l$-value. One expects that further studies of higher-order transitions will be required to
provide an answer.
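The allowed angles, and the exclusions noted above, can be verified with a short numerical check (an illustration added here, not part of the derivation): the admissible angles are precisely the zeros of $\cos{m\phi}$ in $(0,\pi)$.

```python
import math

# Allowed impact angles phi = (2l+1) pi / (2m), l = 0, ..., m-1, i.e. the
# zeros of cos(m*phi) in (0, pi), for a discontinuous m-th order transition.
def impact_angles(m):
    return [(2 * l + 1) * math.pi / (2 * m) for l in range(m)]

# Every listed angle satisfies the matching condition cos(m*phi) = 0.
for m in range(1, 10):
    assert all(abs(math.cos(m * phi)) < 1e-9 for phi in impact_angles(m))

# Vertical impact (phi = pi/2) occurs only at odd orders ...
vertical = [m for m in range(1, 16)
            if any(abs(phi - math.pi / 2) < 1e-12 for phi in impact_angles(m))]
# ... and an impact angle of pi/6 only at orders 3, 9, 15, ...
pi_sixth = [m for m in range(1, 16)
            if any(abs(phi - math.pi / 6) < 1e-12 for phi in impact_angles(m))]
print(vertical)   # odd orders only
print(pi_sixth)   # [3, 9, 15]
```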
Let
$
t = u - u_c$, $\tau = t e^{-i\phi}
$
and assume that the leading behaviour of the density of zeros is
$
g(r) = g_0 r^{p}
$,
where $g_0 $ is constant.
If $p$ is an integer, analytical extension of
the integration (\ref{2}) to the complex plane yields the following result for the $n^{\rm{th}}$ derivative:
\begin{equation}
f_t^{(n)} (t)
=
\left.{
- 2 (n-1)! g_0 {\rm{Re}}
\sum_{j=1}^{p+1} e^{-in\phi} T_j
}\right|_\delta^R
\; ,
\label{39}
\end{equation}
where $\delta$ is a lower integral cutoff
and
\begin{eqnarray}
T_j & = & \frac{ p!\tau^{p+1-j}(r-\tau)^{j-n}}{(j-1)!(p+1-j)!(j-n)}
\quad \mbox{
for $j \ne n$}\;,
\\
T_n &= & \frac{p!\tau^{p+1-n}\ln{(r-\tau)}}{(n-1)!(p+1-n)!}
\; .
\end{eqnarray}
One finds that all $T_j$ terms vanish as the transition is approached
from above or below, except the term for which $j=p+1$. If $n\le p$
this term is constant and there is no discontinuity in $ f_t^{(n)}$
across the transition, while for $n=p+1$ it leads to a discontinuous
transition with
$
\Delta f_t^{(p+1)}
=
2 \pi g_0 p! \sin{(p+1)\phi}
$.
Therefore the first $p$ derivatives are continuous across the transition
while the $(p+1)^{\rm{th}}$ derivative is not.
In other words,
to generate a discontinuous transition of
order $m$ under these assumptions,
it is necessary and sufficient that $p=m-1$, i.e., the
leading behaviour of the density is
\begin{equation}
g(r) = g_0 r^{m-1}
\; .
\label{hjjjj}
\end{equation}
From (\ref{fm}) and (\ref{general}), one now has
$c_m = (-1)^l 2\pi g_0 /m$, and
the discontinuity in the
$m^{\rm{th}}$ derivative of the free energy is related to the density of zeros
as
\begin{equation}
\Delta f_t^{(m)}
=
(-1)^l (m-1)! 2 \pi g_0
\; .
\label{318}
\end{equation}
This recovers the well known result that the latent heat or magnetization is related to the density
of zeros at a first-order transition through $\Delta f_t^{(1)}
=
2 \pi g_0
$ \cite{LY}.
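As a numerical illustration (not part of the original argument), the relation $\Delta f_t^{(1)} = 2\pi g_0$ can be recovered by evaluating the moment integral (\ref{2}) directly for $m=1$, $\phi=\pi/2$, constant density $g(r)=g_0$, and an arbitrary cutoff $R=1$.

```python
import math
from scipy.integrate import quad

# First moment f_t^(1)(t) = 2 Re int_0^R g(r) / (t - r e^{i phi}) dr
# for a first-order transition: g(r) = g0 constant, impact angle phi = pi/2.
g0, R = 1.0, 1.0

def f1(t):
    # With phi = pi/2:  Re[1/(t - i r)] = t / (t^2 + r^2)
    integrand = lambda r: g0 * t / (t**2 + r**2)
    val, _ = quad(integrand, 0.0, R, points=[abs(t)], limit=200)
    return 2.0 * val

eps = 1e-4
jump = f1(eps) - f1(-eps)        # discontinuity across t = 0
print(jump, 2.0 * math.pi * g0)  # latent-heat relation: jump = 2 pi g0
```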
We next consider an $m^{\rm{th}}$-order diverging transition where
\begin{equation}
f_t^{(m)}(t)
\sim
|t|^{-A}
\; ,
\label{dsply}
\end{equation}
for $0 < A < 1$.
If $A=0$, we are back to the discontinuous case or the case of a logarithmic
as opposed to power-law divergence (see the discussion
below),
while if $A >1$, it is more appropriate to consider the transition as
$(m-1)^{\rm{th}}$ order.
Considerations similar to those in \cite{AbeF,SuzukiF}
may be used to show that in order to obtain the appropriate divergence
it is necessary and sufficient that
\begin{equation}
g(r) = g_0 r^{m-1-A}
\; .
\label{fo}
\end{equation}
Indeed,
from the general expression (\ref{2}), the form (\ref{dsply}) is obtained
provided (with $r = t r^\prime$)
\begin{equation}
{\rm{Re}} \int_0^R{
\frac{t^A g(r) dr}{(r e^{i \phi}-t)^m}
}
= {\rm{Re}} \int_0^{R/t} \frac{t^{A-m+1} g(tr^\prime) dr^\prime}{(r^\prime e^{i\phi}-1)^m}
\end{equation}
is independent of $t$ as $t \rightarrow 0$.
The further condition that $g(0)=0$ gives $A < m-1$. If $m=1$, this violates the
condition that $0<A<1$, leading to the requirement that $m \ge 2$ for a diverging
transition. On this basis, there are no diverging first-order transitions.
This is consistent with experience.
To demonstrate sufficiency, we put (\ref{fo}) into (\ref{2})
and use the substitution $w = r\exp{(i\phi)}/|t|$, to find,
for the $n^{\rm{th}}$ derivative of the free energy,
\begin{equation}
f_t^{(n)}(t)
=
g_0(n-1)! |t|^{m-n-A}
e^{-i(m-A)\phi}
I_\pm
\; ,
\end{equation}
in which
\begin{equation}
I_\pm =
2 {\rm{Re}}
\int_0^{Re^{i\phi}/|t|}{
\frac{w^{m-1-A}}{(1 \pm w)^n} dw
}
\quad
{\mbox{for}} \quad t
{\rm{\raisebox{-.75ex}{ {\small \shortstack{$<$ \\ $>$}} }}}
0\;.
\end{equation}
If $n<m$, this vanishes as $t\rightarrow 0$, establishing the
continuity of the $n^{\rm{th}}$ derivative there, while
if $n=m$, one finds
\begin{equation}
f_t^{(m)}(t)
=
- 2 g_0 |t|^{-A} \Gamma (m-A) \Gamma (A)
\times
\left\{
\begin{array}{ll}
\cos{(m-A)\phi} & \mbox{if $t<0$} \\
\cos{\left((m-A)\phi+A \pi \right)} & \mbox{if $t>0$}\;.
\end{array}
\right.
\label{end}
\end{equation}
In the case of a second-order transition,
(\ref{end}) recovers a result derived in \cite{AbeF,preAbe}.
Note that (\ref{end}) provides a direct relationship between the impact angle
and the critical amplitudes on either side of the transition.
These critical amplitudes coincide if the impact angle is
$\phi = (2N-A)\pi/\left[2(m-A)\right]$
where $N$ is any integer.
In particular, if $m$ is even
an impact angle of $\pi/2$ results in the symmetry of
amplitudes around the transition. This result was already observed in the
second-order case in \cite{AbeF}.
The implications of (\ref{end}) are that, while this symmetry may be extended to all even-order
diverging phase transitions, it does not hold for odd ones.
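Equation (\ref{end}) can be checked numerically as well (illustrative parameter values: $m=2$, $A=1/2$, $\phi=\pi/2$, $g_0=R=1$): direct integration of (\ref{2}) reproduces the predicted amplitude on both sides of the transition, and the two amplitudes coincide, as expected for even $m$ with vertical impact.

```python
import math
from scipy.integrate import quad

# Check eq. (end) for a diverging second-order transition:
# m = 2, A = 1/2, impact angle phi = pi/2, density g(r) = g0 r^{m-1-A}.
g0, m, A, R = 1.0, 2, 0.5, 1.0

def f2(t):
    # f_t^(2)(t) = -2 Re int_0^R g(r) / (t - r e^{i phi})^2 dr; with
    # phi = pi/2:  Re[1/(t - i r)^2] = (t^2 - r^2) / (t^2 + r^2)^2
    integrand = lambda r: g0 * r**(m - 1 - A) * (t**2 - r**2) / (t**2 + r**2)**2
    val, _ = quad(integrand, 0.0, R, points=[abs(t)], limit=400)
    return -2.0 * val

def predicted(t):
    # Leading singular behaviour of eq. (end) with phi = pi/2.
    phase = (m - A) * math.pi / 2 + (A * math.pi if t > 0 else 0.0)
    return -2.0 * g0 * abs(t)**(-A) * math.gamma(m - A) * math.gamma(A) \
        * math.cos(phase)

t = 1e-5
ratios = (f2(-t) / predicted(-t), f2(t) / predicted(t))
print(ratios)  # both close to 1; finite-R corrections are O(|t|^A)
```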
If $A= 0$ in (\ref{fo}), the singular part of the
$m^{\rm{th}}$ derivative of the
free energy becomes
\begin{equation}
f_t^{(m)}(t)
=
2 (m-1)! g_0
\times
\left\{
\begin{array}{ll}
\cos{(m\phi)} \ln{|t|} & \mbox{if $t<0$} \\
(\cos{(m\phi)} \ln{|t|} + \pi \sin{(m\phi)} ) & \mbox{if $t>0$}\;.
\end{array}
\right.
\end{equation}
This recovers a result in \cite{AbeF} if $m=2$.
Moreover, the discontinuity in the $m^{\rm{th}}$ moment across the
transition is
consistent with (\ref{general}) and (\ref{318}).
From (\ref{hjjjj}) and (\ref{fo}), the integrated density of Fisher zeros
is
$
G(r) \sim r^{m-A}
$
(where $A=0$ in the case of a discontinuous transition).
For a finite system of linear extent $L$, the integrated density is defined as
$G_L(t_j) = (2j-1)/(2L^d)$ \cite{JaJo04}.
Equating $G(t_j)$ to $G_L(t_j)$ leads to the scaling behaviour
\begin{equation}
|t_j| \sim L^{-\frac{d}{m-A}}
\; .
\label{FSSF}
\end{equation}
In the diverging case where hyperscaling ($f(t) \sim \xi(t)^{d}$) holds, and
$m-A=2-\alpha = \nu d$, this recovers the usual expression,
$|t_j| \sim L^{-1/\nu}$, for finite-size scaling of Fisher zeros.
In the discontinuous case, where $A=0$, (\ref{FSSF}) yields
\begin{equation}
\nu = \frac{m}{d}
\; .
\end{equation}
This is a generalization of the usual formal identification of $\nu$ with $1/d$,
which applies to a first-order transition.
Such a generalized identification
was observed at the third-order ($m=3$) discontinuous transition
present in the spherical model in three dimensions \cite{BeKa52} as well as in the
Ising model on planar random graphs if the Hausdorff dimension is used for $d$
\cite{fat}.
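The following synthetic example (purely illustrative: the finite-size zeros are generated from the integrated density itself) shows how (\ref{FSSF}) is used in practice, recovering $\nu = m/d = 1$ for the third-order discontinuous case $m=3$, $A=0$ in $d=3$.

```python
import numpy as np

# Finite-size scaling of the lowest Fisher zero, |t_1| ~ L^{-d/(m-A)}.
# Synthetic zeros from matching G(r) = g0 r^{m-A} to G_L(t_j) = (2j-1)/(2 L^d),
# taking the lowest zero, j = 1.
m, A, d, g0 = 3, 0.0, 3, 1.0

L = np.array([8.0, 16.0, 32.0, 64.0, 128.0])
t1 = (1.0 / (2.0 * g0 * L**d)) ** (1.0 / (m - A))  # 2j - 1 = 1 for j = 1

slope = np.polyfit(np.log(L), np.log(t1), 1)[0]
nu_eff = -1.0 / slope  # from |t_1| ~ L^{-1/nu}
print(slope, nu_eff)   # slope = -d/(m-A) = -1, so nu = m/d = 1
```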
\section{Lee-Yang Zeros}
\label{sec:LY}
\setcounter{equation}{0}
In the Lee-Yang case, where there is an edge, $r_{\rm{YL}}(t)$, to the distribution of zeros,
the free energy is
\begin{equation}
f(t,h) = 2 {\rm{Re}} \int_{r_{\rm{YL}}(t)}^R{\ln{(h-h(r,t))}g(r,t) dr }
\; ,
\label{g1}
\end{equation}
where the density of zeros is written as $g(r,t)$ to display its
$t$-dependency and where the locus of zeros is
$ h(r,t) = r \exp{(i \phi(r,t))}$.
(If the Lee-Yang circle theorem holds, $\phi=\pi/2$ and $R=\pi$ \cite{LY}.)
The ${m^\prime}^{\rm{th}}$ field derivative of the free energy at $h=0$ is
\begin{equation}
f^{({m^\prime})}_h(t) = 2 (-1)^{m^\prime -1} (m^\prime-1)! \frac{
\cos{({m^\prime}\phi)}
}{
r_{\rm{YL}}(t)^{{m^\prime}-1}
}
\int_1^{\frac{R}{
r_{\rm{YL}}
}}{
\frac{g(xr_{\rm{YL}},t)}{x^{m^\prime}}
dx }
\; ,
\label{g4}
\end{equation}
having used the substitution $r=xr_{\rm{YL}}(t)$.
As in the second-order case, we assume that $r_{\rm{YL}}(t)$ is sufficiently
small near the transition point ($t=0$) so that the upper integral limit
diverges
and compare with the limiting scaling behaviour in (\ref{3})
to find \cite{AbeLY,SuzukiLY}
\begin{equation}
g(r,t) = t^{-G} r_{\rm{YL}}(t)^{{m^\prime}-1} \Phi{\left(\frac{r}{r_{\rm{YL}}(t)}\right)}
\;,
\label{g5}
\end{equation}
where $\Phi$ is an unknown function of its argument.
Similar considerations yield, for the magnetization,
\begin{equation}
f^{(1)}_h(t,h) = 2 t^{-G} r_{\rm{YL}}(t)^{{m^\prime}-1}
{\rm{Re}}
\int_1^\infty{\frac{\Phi(x)}{\frac{h}{r_{\rm{YL}}(t)}-xe^{i\phi}} dx }
\; ,
\label{g6}
\end{equation}
which we may write as
\begin{equation}
f^{(1)}_h(t,h) = t^{-G} r_{\rm{YL}}(t)^{{m^\prime}-1}
\Psi_\phi{\left(\frac{h}{r_{\rm{YL}}(t)}\right)}
\; .
\label{g7}
\end{equation}
Comparison with (\ref{4}) now gives
$
\Psi{\left(h/r_{\rm{YL}}(t)\right)}
\sim
\left(h/r_{\rm{YL}}(t)\right)^{{1}/{\delta}}
$.
The $t$-dependence must cancel in (\ref{g7}) as $t \rightarrow 0$,
giving the small-$t$ scaling behaviour of the Yang-Lee edge
under these circumstances
to be
\begin{equation}
r_{\rm{YL}}(t)
\sim
t^{\frac{G\delta}{({m^\prime}-1)\delta -1}}
\;.
\label{3edge}
\end{equation}
When ${m^\prime}=2$ and $G=\gamma$,
this recovers the second-order transition behaviour of (\ref{edge}).
Furthermore, (\ref{g5}) now reads
\begin{equation}
g(r,t) = t^{\frac{G}{({m^\prime}-1)\delta -1}} \Phi{\left(\frac{r}{r_{\rm{YL}}(t)}\right)}
\;,
\label{g50}
\end{equation}
and the expression for the magnetization in (\ref{g7})
gives
\begin{equation}
f^{(1)}_h(t,h) = t^{\frac{G}{({m^\prime}-1)\delta -1}}
\Psi_\phi{\left(\frac{h}{r_{\rm{YL}}(t)}\right)}
\; .
\end{equation}
Strictly, this equation of state has been derived for $t>0$, where there is an edge.
However we may assume it can be analytically
continued into the low temperature regime, where,
taking the $h \rightarrow 0$ limit and comparing with the magnetization in (\ref{3}),
it yields
the scaling relation
\begin{equation}
\beta = \frac{G}{({m^\prime}-1)\delta -1}
\; .
\label{sg2}
\end{equation}
In the situation where ${m^\prime}=m$, this recovers the Griffiths-type scaling relation
(\ref{Griffiths1}), derived in \cite{KuSa02}.
Integrating (\ref{g1}) by parts gives, for the singular part of the free energy,
\begin{equation}
f(t,h) = 2 {\rm{Re}} \int_{r_{\rm{YL}}(t)}^R{\frac{G(r,t) dr}{he^{-i\phi}-r} }
\; ,
\label{g11}
\end{equation}
where $G(r,t)$ is the integrated density of zeros. From
(\ref{g5}) and (\ref{3edge}), the latter is
$
G(r,t) =
t^{G(\delta+1)/(({m^\prime}-1)\delta-1)}
F\left( {r}/{r_{\rm{YL}}(t)} \right)
$
in which
$
F(x) = \int_1^x{\Phi(x^\prime)dx^\prime}
$.
Again using $r=x r_{\rm{YL}}(t)$ in (\ref{g11}),
and taking the upper integral limit to infinity,
one has, for the free energy,
\begin{equation}
f(t,h)
=
t^{G\frac{\delta+1}{({m^\prime}-1)\delta-1}}
{\cal{F}}_\phi{\left(
\frac{h}{r_{\rm{YL}}(t)}
\right)
}
\;,
\label{ppop}
\end{equation}
where
$
{\cal{F}}_\phi{\left(
w
\right)
}
=
2 {\rm{Re}}
\int_1^\infty{
{F(x) }/{(w e^{-i\phi}-x)}dx
}
$.
The $m^{\rm{th}}$ temperature derivative of the zero-field
free energy is therefore of the form
$
f_t^{(m)}(t)
\sim
t^{G(\delta+1)/(({m^\prime}-1)\delta-1)-m}
$. Comparison with (\ref{3}) then yields the scaling relation
\begin{equation}
A = m - G\frac{\delta+1}{({m^\prime}-1)\delta-1}
\;.
\label{sg1}
\end{equation}
Together, (\ref{sg2}) and (\ref{sg1}) recover all four scaling relations
derived in \cite{KuSa02} in the more restrictive case where ${m^\prime}=m$.
In the second-order case ($m=2$), they recover
the standard Rushbrooke and Griffiths scaling laws of (\ref{RG2}).
In fact, these laws also hold in the present case, albeit with negative
$\alpha$ (and possibly $\gamma$).
To see this, let $f_t^{(n)}(t) \sim t^{-\alpha_n}$ and
$f_h^{(n)}(t) \sim t^{-\gamma_n}$ (so that $\alpha_2=\alpha$
and $\gamma_2=\gamma$). Since $f_t^{(m)}(t) \sim t^{-A}$, one
has, directly, that
$
n-\alpha_n=m-A
$.
Differentiating (\ref{ppop}) with respect to field, now gives
\begin{equation}
f_h^{(n)}(t)
\sim
t^{\beta - (n-1) \beta \delta }
=
t^{n\beta - (n-1)(m-A)}
\;,
\end{equation}
having used (\ref{sg2}) and (\ref{sg1}) and set $h=0$.
Now, one has
\begin{equation}
\gamma_n= (n-1) \beta \delta -\beta
,
\quad
(n-1)\alpha_n + n \beta + \gamma_n = n(n-1)
,
\label{lr}
\end{equation}
which recover (\ref{RG2}) when $n=2$.%
\footnote{
It is interesting to note the restrictions imposed on $\delta$ at a higher-order transition
coming from the first equation of (\ref{lr}). For $n<{m^\prime}$, $\gamma_n$ should be
negative, so, if $\beta$ is positive,
the best bound on $\delta$ is
$\delta < 1/({m^\prime}-2)$.
Also, the second formula in (\ref{lr}) gives, for
$2\le n \le m-1$
and hence $\alpha_n < 0$,
$\delta > (m-1)/\beta-1$ or $\beta > (m-1)({m^\prime}-2)/({m^\prime}-1)$.
These are no restraints in the familiar second-order case (where $m= m^\prime = 2$ and
large $\delta$ is common),
but are severe constraints at higher order.}
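The generalized laws (\ref{lr}) admit a quick consistency check (a numerical sketch in terms of $\beta$ and $\delta$ alone, using $n-\alpha_n = m-A = \beta(\delta+1)$, which follows from (\ref{sg2}) and (\ref{sg1})).

```python
# Spot-check of the generalized Rushbrooke-type law in (lr):
#   (n-1) alpha_n + n beta + gamma_n = n (n-1),
# with alpha_n = n - beta (delta + 1)     [from n - alpha_n = m - A]
# and  gamma_n = (n-1) beta delta - beta  [generalized Griffiths law].
def residual(n, beta, delta):
    alpha_n = n - beta * (delta + 1)
    gamma_n = (n - 1) * beta * delta - beta
    return (n - 1) * alpha_n + n * beta + gamma_n - n * (n - 1)

residuals = [abs(residual(n, beta, delta))
             for n in range(2, 7)
             for beta in (0.25, 0.5, 1.5)
             for delta in (1.5, 3.0, 15.0)]
print(max(residuals))  # vanishes identically (up to rounding)
# n = 2 is the ordinary Rushbrooke law, alpha + 2 beta + gamma = 2.
```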
The formul{\ae} (\ref{3}) describe the behaviour of various moments
as the critical point is approached tangential to the transition line (i.e., along
$h=0$). One may also be interested in the orthogonal behaviour, namely,
the $h$-dependence at $t=0$. In the case of the $h$-derivatives of free
energy, this comes directly from (\ref{4}). For the $t$-derivatives,
we may assume the power-law behaviour (at $t=0$),
\begin{equation}
f_t^{(j)}(h) \sim h^{s_j}
\;.
\label{sj}
\end{equation}
In the second-order case, (\ref{sj}) gives the $h$-dependency
of the internal energy and the specific heat at $t=0$ as
$
e(h) = f_t^{(1)}(h) \sim h^{\epsilon}
$ and
$
C(h) = f_t^{(2)}(h) \sim h^{- \sigma}
$.
These exponents are related to
$\delta$ and $\gamma$ through (see \cite{AbeLY} and references therein)
\begin{equation}
\epsilon = 2 - \frac{(\delta-1)(\gamma+1)}{\delta \gamma}
\;,\;
\sigma = \frac{(\delta-1)(\gamma+2)}{\delta \gamma} -2
\;.
\label{epssig}
\end{equation}
Following the reasoning of \cite{AbeLY}, we may argue that because there
should be no phase transition away from $h=0$ for any $t$, the
free energy, $f(t,h)$ in (\ref{ppop}) must be a power series in $t$ there.
So if ${\cal{F}}_\phi(w)$ involves a term, $w^q$,
the free energy involves
$
t^{
-G+
(m^\prime-q) G \delta
/
((m^\prime-1)\delta-1)
}
h^q
$
which must be an integral power, $N$, of $t$.
This gives $q=m^\prime-[(m^\prime-1)\delta-1](G+N)/G\delta$, or the power series
\begin{equation}
f(t,h) =
\sum_{N=0}^\infty{
a_N t^N h^{m^\prime - \frac{(m^\prime-1)\delta-1}{G\delta}(G+N)}
}
\;.
\label{ps}
\end{equation}
Differentiating appropriately, putting $t=0$ and comparing with (\ref{sj}) yields
the scaling laws
\begin{equation}
s_j = m^\prime- \frac{(m^\prime-1)\delta-1}{G\delta}(G+j)
\;.
\label{sl}
\end{equation}
In the second-order case with $m^\prime = 2$ this recovers (\ref{epssig}) with
$s_1=\epsilon$ and $s_2=-\sigma$.
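As a final consistency check (again illustrative), the reduction of (\ref{sl}) to (\ref{epssig}) can be verified numerically for generic values of $\gamma$ and $\delta$.

```python
# Check that s_j = m' - ((m'-1) delta - 1)(G + j) / (G delta)   [eq. (sl)]
# reduces to s_1 = epsilon and s_2 = -sigma of (epssig) when m' = 2, G = gamma.
def s(j, m_prime, G, delta):
    return m_prime - ((m_prime - 1) * delta - 1) * (G + j) / (G * delta)

for gamma in (0.8, 1.25, 2.0):
    for delta in (3.0, 4.8, 15.0):
        epsilon = 2 - (delta - 1) * (gamma + 1) / (delta * gamma)
        sigma = (delta - 1) * (gamma + 2) / (delta * gamma) - 2
        assert abs(s(1, 2, gamma, delta) - epsilon) < 1e-12
        assert abs(s(2, 2, gamma, delta) + sigma) < 1e-12
print("(sl) reproduces (epssig) in the second-order limit")
```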
\section{Conclusions}
\setcounter{equation}{0}
\label{sec:ccl}
Different types of higher-order phase transitions have been analysed
using the zeros of the partition function.
In the Fisher case, the impact angle
is restricted by the order and nature of the transition.
For a transition with a discontinuity in $f_t^{(m)}(t)$,
it is unclear how the system
selects from the $m$ permissible angles.
For a divergent transition, the impact angle determines the relevant
amplitude ratios.
Finite-size scaling is seen to hold at higher-order transitions
and the familiar formal identification of $\nu$ with $1/d$ that
is used at first-order transitions
extends to $\nu = m/d$ for discontinuous transitions of $m^{\rm{th}}$ order.
Lee-Yang zeros, on the other hand, are appropriate to the case where
two parameters control the system.
Here, they have been used as a route to derive scaling relations between
associated even and odd exponents, which recover
well known formulae in the second-order case,
including the Rushbrooke and Griffiths laws.
One of the main points of \cite{KuDo99} is that many higher-order transitions may exist
which have not yet been identified as such.
Indeed determination of critical exponents or latent-heat-like discontinuities
is notoriously difficult from numerical work on finite systems where there is
no true transition and signals are smoothed out.
There, amplitude ratios are often more discerning
and here we see impact angles even more so,
at least in theory. From
the results herein,
it would appear that analysis of the impact angle provides a very robust
way to recognise the order of transitions.
~ \\
~ \\
\noindent
{\bf{Acknowledgements:}}
This work was partially supported by the EU RTN-Network `ENRAGE': {\em Random Geometry
and Random Matrices: From Quantum Gravity to Econophysics\/} under grant
No.~MRTN-CT-2004-005616.
RK thanks Pradeep Kumar for an e-mail correspondence.
\bigskip
A quarter century ago, M.~Gromov \cite{Gr1} initiated the modern
period in systolic geometry by proving a curvature-free~$1$-systolic
lower bound for the total volume of an essential Riemannian
manifold~$M$ of dimension~$d$, i.e.~a~$d$-fold product of the systole
is such a lower bound.
Here the term ``curvature-free'' is used in the literature to refer to
a bound independent of curvature invariants, with a constant depending
on the dimension (and possibly on the topology), but not on the
geometry (i.e.~the Riemannian metric). Note that such bounds cannot
be called ``metric-independent'' as the systolic invariants themselves
do depend on the metric.
Recently, M.~Brunnbauer~\cite{Bru3} proved that a~$(k-1)$-connected
manifold of dimension~$n=kd$ satisfies a curvature-free
stable~$k$-systolic inequality~$(\stsys_k)^d \leq C \vol_n$ if and
only if a purely homotopic condition on the image of the fundamental
class~$[M]$ in a suitable classifying space is satisfied. Thus the
total volume admits a lower bound in terms of a systolic product
with~$d$ factors. J.~Strom~\cite{S2} and the first named
author~\cite{D} were motivated by investigations of the
Lusternik-Schnirelmann (LS) category \cite{DKR} in its relation to
systolic geometry~\cite{KR1}. A.~Costa and M.~Farber \cite{CF} have
pursued the direction initiated in \cite{D}, as applied to motion
planning--related complexity.
Our first result links systolic inequalities to the cohomological
dimension (see \cite{Bro}) of the fundamental group, see
\theoref{t:cd} for a more precise statement.
\begin{theorem}
\label{aaa}
Let~$M$ be a closed~$n$-manifold, with~$\cd(\pi_1 M)=d \leq n$. Then
there can be no more than
\[
\frac{n+d}{2}
\]
systolic factors in a product which provides a curvature-free lower
bound for the total volume of~$M$.
\end{theorem}
It was shown in~\cite{KR1, KR2} that the maximal number of factors in
such a product coincides with the LS category~$\protect\operatorname{cat}$ in a number of
cases, including all manifolds of dimension~$\leq 3$. We apply
Theorem~\ref{aaa} to show that, in dimension~$4$, the number of
factors is bounded by the LS category, see \corref{c:dim=4} for a more
precise statement.
\begin{theorem}
\label{bbb}
For every closed orientable~$4$-manifold, the maximal number of
factors in a product of systoles which provides a curvature-free lower
bound for the total volume, is bounded above by~$\protect\operatorname{cat} M$.
\end{theorem}
Combining Theorem~\ref{aaa} with a volume lower bound resulting from
an inequality of Gromov's (see Sections~\ref{s:def} and
\ref{s:abjac}), we obtain the following result (typical examples are
$\T^2 \times S^3$ as well as the non-orientable~$S^3$-bundle over
$\T^2$).
\begin{theorem}
\label{ccc}
Let~$M$ be a closed~$5$-dimensional manifold. Assume that~$b_1( M)=
\cd(\pi_1 M)=2$ and furthermore that the typical fiber of the
Abel--Jacobi map to~$\T^2$ represents a nontrivial homology class.
Then the maximal possible number of factors in a systolic lower bound
for the total volume is~$3$. Note also that~$3\le \protect\operatorname{cat} M \le 4$.
\end{theorem}
The above result motivates the following question concerning upper
bounds for the Lusternik-Schnirelmann category, cf.~\cite{D}.
\begin{question}
Under the hypotheses of Theorem~\ref{ccc}, is the
Lusternik-Schnirelmann category of $M$ necessarily equal to $3$?
\end{question}
It will be convenient to formulate all of the above results in terms
of the systolic category. The idea of systolic category is to codify
lower bounds for the total volume, in terms of lower-dimensional
systolic invariants. We think of it as an elegant way of expressing
systolic statements. Here we wish to incorporate all possible
curvature-free systolic inequalities, stable or unstable. More
specifically, we proceed as follows.
\begin{definition}
\label{21b}
Given~$k\in {\mathbb N}, k>1$ we set
\begin{equation*}
\ \sys_{k}(M, {\mathcal G})= \inf \left\{\sysh_k^{\phantom{I}}(M',
{\mathcal G};A), \stsys_k(M, {\mathcal G}) \right\},
\end{equation*}
where $\sysh$ is the homology systole, $\stsys$ is the stable homology
systole, and the infimum is over all regular covering spaces~$M'$
of~$M$, and over all choices
\begin{equation}
\label{41}
A\in \{ {\mathbb Z}, {\mathbb Z}_2 \} .
\end{equation}
Furthermore, we define
\begin{equation*}
\sys_{1}(M, {\mathcal G})=\min \{\pisys_1(M, {\mathcal G}),\stsys_1(M,
{\mathcal G})\}.
\end{equation*}
\end{definition}
Note that the systolic invariants thus defined are nonzero~\cite{KR2}.
\begin{definition}
Let~$M$ be a closed~$n$-dimensional manifold. Let~$d\geq 1$ be an
integer. Consider a partition
\begin{equation}
\label{eq:partition}
n= k_1 + \ldots + k_d,\quad k_1\le k_2\le \cdots \le k_d
\end{equation}
where~$k_i\geq 1$ for all~$i=1,\ldots, d$. We say that the partition
(or the~$d$-tuple~$(k_1, \ldots, k_d)$) is {\em categorical} for~$M$
if the inequality
\begin{equation}
\label{eq:main}
\sys_{k_1}({\mathcal G}) \sys_{k_2}({\mathcal G}) \ldots \sys_{k_d}({\mathcal G})
\leq C(M) \vol_n({\mathcal G})
\end{equation}
is satisfied by all metrics~${\mathcal G}$ on~$M$, where the
constant~$C(M)$ is expected to depend only on the topological type
of~$M$, but not on the metric~${\mathcal G}$.
\end{definition}
The {\em size\/} of a partition is defined to be the integer~$d$.
\begin{definition}
The {\em systolic category} of~$M$, denoted~$\syscat(M)$, is the
largest size of a categorical partition for~$M$.
\end{definition}
In particular, we have~$\syscat M \le \dim M$.
We know of no example of a manifold whose systolic category exceeds its
Lusternik-Schnirelmann category. The lower bound of~$b_1(M)+1$ for the
systolic category of a manifold~$M$ with non-vanishing fiber class in
the free abelian cover of~$M$, discussed in Section~\ref{s:abjac},
therefore inspires the following question.
\begin{question}
Is the non-vanishing of the fiber class in the free abelian cover
of~$M$, a sufficient condition to guarantee a lower bound of
$b_1(M)+1$ for the Lusternik-Schnirelmann category of~$M$?
\end{question}
The answer is affirmative if the fiber class can be represented as a
Massey product, see Section~\ref{s:abjac}.
The paper is organized as follows. In Sections \ref{s:prel} and
\ref{four}, we review the notion of systolic category. In
Section~\ref{s:group}, we obtain an upper bound for the systolic category
of a closed~$n$-manifold in terms of its fundamental group and, in
particular, prove that the systolic category of a 4-manifold with free
fundamental group does not exceed~$2$ (\corref{c:free}).
In Section~\ref{s:abjac} we investigate the next possible value of the
categories, namely~$3$. We recall a 1983 result of M.~Gromov's
related to Abel--Jacobi maps, and apply it to obtain a lower bound
of~$3$ for the systolic category for a class of manifolds defined by a
condition of non-trivial self-linking of a typical fiber of the
Abel--Jacobi map. In fact, non-triviality of the self-linking class
guarantees the homological non-triviality of the typical fiber lifted
to the free abelian cover.
Marcel Berger's monograph \cite[pp.~325--353]{Be6} contains a detailed
exposition of the state of systolic affairs up to '03. More recent
developments are covered in \cite{SGT}.
Recent publications in systolic geometry include the articles
\cite{AK, Be08, Bru, Bru2, Bru3, BKSW, DKR, DR09, EL, HKU, KK, KK2,
KW, Ka4, KSh, RS, Sa08}.
\section{A systolic introduction}
\label{s:prel}
Let~$\T^2$ be a 2-dimensional torus equipped with a Riemannian
metric~$\gm$. Let~$A$ be the area of~$(\T^2,\gm)$. Let~$\ell$ be the
least length of a noncontractible loop in~$(\T^2,\gm)$. What can one
say about the scale-invariant ratio~$\ell^2/A$? It is easy to see
that the ratio can be made arbitrarily small for a suitable choice of
metric~$\gm$. On the other hand, it turns out that the ratio is
bounded from above. Indeed, C.~Loewner proved that~$\ell^2/A\le
2/\sqrt 3$, for all metrics~$\gm$ on~$\T^2$, see~\cite{Pu}.
More generally, given a closed~$n$-dimensional smooth manifold~$M$
with non-trivial fundamental group, M. Berger and M. Gromov asked
whether there exists a constant~$C>0$ such that the inequality
\begin{equation}\label{e:gromov}
\ell^n\le C\vol(M)=C\vol_n(M,\gm)
\end{equation}
holds for all Riemannian metrics~$\gm$ on~$M$; here~$\ell$ is the
least length of a noncontractible loop in~$M$, while~$C$ is required
to be independent of the metric~$\gm$. Indeed, Gromov~\cite{Gr1}
proved that such a~$C=C_n$ exists if~$M$ is an essential manifold,
meaning that~$M$ represents a nonzero homology class
in~$H_n(\pi_1(M))$. I.~Babenko~\cite{Bab1} proved a converse.
We generalize these invariants as follows. Let~$h_k$ denote the
minimum of~$k$-volumes of homologically nontrivial~$k$-dimensional
cycles (see Section~\ref{s:prel} for details). Do there exist a
partition~$k_1+ \cdots +k_d=n$ of~$n$, and a constant~$C$ such that
the inequality
\begin{equation}\label{e:syscat}
\prod_{i=1}^d h_{k_i}\le C\vol(M, \gm)
\end{equation}
holds for all Riemannian metrics~$\gm$ on~$M$? The invariants~$h_k$
are the~$k$-systoles, and the maximum value of~$d$ as in
\eqref{e:syscat} is called the {\em systolic category\/}~$\syscat$
of~$M$ \cite{KR1,SGT}. The goal of this work is to continue the
investigation of the invariant~$\syscat$ started in \cite{KR1, KR2}.
It was originally pointed out by Gromov (see \cite{Be5}) that this
definition has a certain shortcoming. Namely, for~$S^1\times S^3$ one
observes a ``systolic freedom'' phenomenon, in that the inequality
\begin{equation}\label{e:freedom}
\sysh_1\sysh_3\le C\vol(S^1\times S^3)
\end{equation}
is violated for any~$C\in {\mathbb R}$, by a suitable metric~$\gm$
on~$S^1\times S^3$ \cite{Gr2,Gr3}. This phenomenon can be overcome by
a process of stabilisation.
It turns out that the difference between the stable
systoles,~$\stsys_k$, and the ordinary ones,~$\sysh_k$ has significant
ramifications in terms of the existence of geometric inequalities.
Namely, in contrast with the violation of \eqref{e:freedom}, the
inequality
\begin{equation}
\label{27}
\stsys_1\stsys_3\le \vol_4(S^1\times S^3)
\end{equation}
holds for all metrics on~$S^1\times S^3$ \cite{Gr1} (see
Theorem~\ref{t:cuplength} for a generalisation).
\section{Gromov's inequalities}
\label{s:def}
\label{four}
Systolic category can be thought of as a way of codifying three
distinct types of systolic inequalities, all due to Gromov. There are
three main sources of systolic lower bounds for the total volume of a
closed manifold~$M$. All three originate in Gromov's 1983 Filling
paper~\cite{Gr1}, and can be summarized as follows.
\begin{enumerate}
\item
Gromov's inequality for the homotopy~$1$-systole of an essential
manifold~$M$, see \cite{We, Gu09} and \cite[p.~97]{SGT}.
\item
Gromov's stable systolic inequality (treated in more detail in
\cite{BK1, BK2}) corresponding to a cup product decomposition of the
rational fundamental cohomology class of~$M$, see
\theoref{t:cuplength}.
\item
A construction using the Abel--Jacobi map to the Jacobi torus of~$M$
(sometimes called the dual torus), also based on a theorem of Gromov
(elaborated in \cite{IK, BCIK2}).
\end{enumerate}
Let us describe the last construction in more detail. Let~$M$ be a
connected~$n$-manifold. Let~$b=b_1(M)$. Let
\begin{equation*}
\T^b := H_1(M;{\mathbb R})/H_1(M;{\mathbb Z})_{\mathbb R}
\end{equation*}
be its Jacobi torus. A natural metric on the Jacobi torus of a
Riemannian manifold is defined by the stable norm, see
\cite[p.~94]{SGT}.
The Abel--Jacobi map~$\AJ_M: M\to \T^b$ is discussed in \cite{Li,
BK2}, cf.~\cite[p.~139]{SGT}. A typical fiber~$F_M\subset M$
(i.e.~inverse image of a generic point) of~$\AJ_M$ is a smooth
imbedded~$(n-b)$-submanifold (varying in type as a function of the
point of~$\T^b$). Our starting point is the following observation of
Gromov's \cite[Theorem~7.5.B]{Gr1}, elaborated in \cite{IK}.
\begin{theorem}[M.~Gromov]
\label{t:abcover}
If the homology class~$\fmanifold\in H_{n-b}(\overline M)$ of the lift
of~$F_M$ to the maximal free abelian cover~$\overline M$ of~$M$ is nonzero,
then the total volume of~$M$ admits a lower bound in terms of the
product of the volume of the Jacobi torus and the infimum of areas of
cycles representing the class~$\fmanifold$.
\end{theorem}
We also reproduce the following result, due to Gromov~\cite{Gr1}, see
also~\cite{BK1, SGT}, stated in terms of systolic category.
\begin{thm}[M.~Gromov]
\label{t:cuplength}
For a closed orientable manifold~$M$, the systolic category of~$M$ is
bounded from below by the rational cup-length of~$M$.
\end{thm}
Note that Theorem~\ref{t:cuplength} is not directly related to
Theorem~\ref{t:abcover}. For instance, Theorem~\ref{t:cuplength}
implies that the systolic category of~$S^2\times S^2$,
or~${\mathbb C}{\mathbb P}^2$, equals~$2$, while Theorem~\ref{t:abcover} gives no information
about simply-connected manifolds.
\section{Fundamental group and systolic category}
\label{s:group}
Throughout this section we will assume that~$\pi$ is a finitely
presented group. Denote by~$\cd(\pi)$ the cohomological dimension
of~$\pi$, see \cite{Bro}.
\begin{lemma}
\label{l:classif}
If~$\pi_1(M)$ is of finite cohomological dimension then~$M$ admits a
map~$f: M \to B$ to a simplicial complex~$B$ of dimension at most
$\cd(\pi_1 M)$, inducing an isomorphism of fundamental groups.
\end{lemma}
\begin{proof}
Let~$\pi=\pi_1(M)$. When~$d=\cd(\pi)\not=2$, by the Eilenberg--Ganea
theorem~\cite{EG}, there exists a~$d$-dimensional model~$B\pi$ for the
classifying space of~$\pi$, which may be assumed to be a simplicial
complex. This yields the desired map~$f: M \to B\pi$ by universality
of the classifying space.
In the case~$d=2$ it is still unknown whether one can assume that
$\dim B \pi=2$ for all~$\pi$. That this is so is the content of the
Eilenberg--Ganea conjecture. In this case we can assume only
that~$\dim B\pi\le 3$.
We claim that there is a map~$q:B\pi\to B\pi^{(2)}$ onto the
$2$-skeleton that induces an isomorphism of the fundamental
groups. Indeed, the obstruction to retracting~$B\pi$ onto~$B\pi^{(2)}$
is an element of the cohomology group
\[
H^3 \left(B\pi;\pi_2\left(B\pi^{(2)}\right)\right)
\]
with coefficients in the~$\pi$-module~$\pi_2(B\pi^{(2)})$. This group
is zero, since by hypothesis~$\cd(\pi)=2$. By the classical
obstruction theory the identity map of~$B\pi^{(2)}$ can be changed on
the 2-dimensional skeleton without changes on the 1-dimensional
skeleton in such a way that a new map has an extension~$q$
to~$B\pi$. Since~$q:B\pi\to B\pi^{(2)}$ is the identity on the
1-skeleton, it induces an isomorphism of the fundamental groups.
Now in the case~$d=2$ we use the complex~$B\pi^{(2)}$ instead of
$B\pi$ together with the map~$q\circ f:M\to B\pi^{(2)}$ instead of
$f$.
\end{proof}
\begin{thm}
\label{t:cd}
Let~$M$ be a closed~$n$-manifold, with~$\cd(\pi_1(M))=d \leq n$. Then
the systolic category of~$M$ is at most~$(n+d)/2$.
\end{thm}
\begin{proof}
Let~$g_M$ be a fixed background metric on~$M$. Choose a map
\begin{equation*}
f: M \to B
\end{equation*}
as in \lemref{l:classif}, as well as a fixed PL metric~$g_B$ on~$B$.
Consider the metric~$f^*(g_B)$ on~$M$ pulled back by~$f$ (see
\cite{Bab1, SGT}). This metric is defined by a quadratic form of rank
at most~$d$ at every point of~$M$. Note that the quadratic form can
be thought of as the {\em square\/} of the length element.
Next, we scale the pull-back metric by a large real parameter~$t\in
{\mathbb R}$. When the length of a vector is multiplied by~$t$, the convention
is to write the metric as~$t^2 f^* (g_B)$. Since the rank of the
metric is at most~$d$, the volume of~$g_M + t^2 f^* (g_B)$ grows at
most as~$t^d$. We
obtain a family of metrics
\begin{equation}
\label{51}
g_t= g_M + t^2 f^*(g_B)
\end{equation}
on~$M$ with volume growing at most as~$t^d$ where~$d=\dim B$, while
the~$1$-systole grows as~$t$. Thus the ratio
\begin{equation*}
\frac{\sys_1^{d+1}(M,g_t)}{\vol(M,g_t)}
\end{equation*}
tends to infinity. The addition of a fixed background metric on~$M$
in~\eqref{51} ensures a uniform lower bound for all its~$k$-systoles
for~$k\geq 2$. It follows that a partition
\begin{equation*}
n= \underset{d+1}{\underbrace{1+\dots+1}} +k_{d+2}+\dots+k_r
\end{equation*}
cannot be categorical.
\end{proof}
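The growth rate of the volume claimed in the proof can be checked pointwise; the following computation is a sketch of ours (the eigenvalues~$\lambda_i(x)$ and the constant~$C$ are our notation, with~$C$ depending on~$f$, $g_B$ and~$g_M$, but not on~$t$).

```latex
% At a point x of M, diagonalize the quadratic form f^*(g_B) relative
% to g_M, with eigenvalues \lambda_1(x), ..., \lambda_n(x) \ge 0, of
% which at most d are nonzero, since the rank of f^*(g_B) is at most d.
% The volume elements then compare as
\[
\frac{d\vol(g_t)}{d\vol(g_M)}(x)
 = \prod_{i=1}^{n}\bigl(1+t^{2}\lambda_i(x)\bigr)^{1/2}
 \le \bigl(1+t^{2}\max_i\lambda_i(x)\bigr)^{d/2}
 \le C\,t^{d}
\]
% for t \ge 1, and integrating over the compact manifold M yields
% \vol(M, g_t) = O(t^d).
```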
\section{Systolic category, LS category, and~$\cd(\pi)$}
\begin{rem}
\label{sharp}
The upper bound of Theorem~\ref{t:cd} is sharp when~$\pi$ is a free
group ($d=1$) in the sense that~$\pi$ is the fundamental group of a
closed~$(2k+1)$-manifold
\begin{equation*}
M=\left( \prod_{i=1}^{k-1}S^2 \right) \times (\#(S^1\times S^2))
\end{equation*}
with~$\syscat M=k+1= (2k+2)/2$ in view of Theorem~\ref{t:cuplength}.
Similarly, for the~$(2k+2)$-dimensional manifold
\begin{equation*}
M=S^3\times\prod_{i=1}^{k-2}S^2\times (\#(S^1\times S^2)),
\end{equation*}
where~$k>1$, we have~$\syscat M=k+1$. The case of a~$4$-dimensional
manifold follows from \corref{c:free}.
\end{rem}
\begin{cor}
\label{c:free}
The systolic category of a closed orientable~$4$-manifold with free
fundamental group is at most~$2$, and it is exactly~$2$ if~$M$ is not
a homotopy sphere.
\end{cor}
\begin{proof}
The partitions~$4=1+1+2$ and~$4=1+1+1+1$ are ruled out by
Theorem~\ref{t:cd}. The only remaining possibilities are the
partitions~$4=1+3$ (corresponding to category value~$2$) and~$4=4$
(that would correspond to category value~$1$). If~$M$ is not
simply-connected, then by the hypothesis of the corollary,~$b_1(M)\geq
1$ and therefore~$M$ satisfies a systolic inequality of
type~\eqref{27} (see Theorem~\ref{t:cuplength}), proving that
$\syscat(M)=2$. If~$M$ is simply-connected but not a homotopy sphere,
then~$H_2(M)=\pi_2(M)$ is free abelian and~$b_2(M)>0$, so that~$M$
satisfies the systolic inequality
\[
\stsys_2(M)^2 \leq b_2(M) \vol(M),
\]
see \cite{BK1} (a special case of Theorem~\ref{t:cuplength}),
proving~$\syscat(M)=2$.
\end{proof}
\begin{cor}
\label{c:dim=4}
For every closed orientable~$4$-manifold we have the
inequality~$\syscat M \le \protect\operatorname{cat} M$, and the strict inequality is
possible in the case~$\syscat M=2<3=\protect\operatorname{cat} M$ only.
\end{cor}
We do not know, however, if the case of the strict inequality can be
realized.
\begin{proof}
If the fundamental group of~$M$ is free then~$\protect\operatorname{cat} M \le 2$~\cite{MK},
and in this case the result follows from \corref{c:free}. If~$\syscat
M=4$ then~$\protect\operatorname{cat} M=4$,~\cite{KR1, SGT}, and hence~$\syscat M \le \protect\operatorname{cat}
M$ if~$\protect\operatorname{cat} M \ge 3$. Finally, if the fundamental group of~$M$ is not
free then~$\protect\operatorname{cat} M \ge 3$ \cite{DKR}.
\end{proof}
\begin{rem}
\label{r:low}
The equality~$\syscat M^n = \protect\operatorname{cat} M^n$ for~$n\le 3$ was proved in
\cite{KR1,KR2}.
\end{rem}
\section{Self-linking of fibers and a lower bound for~$\syscat$}
\label{s:abjac}
In this section we will continue with the notation of
\theoref{t:abcover}.
\begin{prop}
\label{p:fm}
Let~$M$ be a closed connected manifold (orientable or non-orientable).
If a typical fiber of the Abel--Jacobi map represents a nontrivial
$(n-b)$-dimensional homology class in~$\overline M$, then systolic category
satisfies~$\syscat(M)\geq b+1$.
\end{prop}
\begin{proof}
If the fiber class is nonzero, then the Abel--Jacobi map is
necessarily surjective in the set-theoretic sense. One then applies
the technique of Gromov's proof of Theorem~\ref{t:abcover},
cf.~\cite{IK}, combined with a lower bound for the volume of the
Jacobi torus in terms of the~$b$-th power of the stable~$1$-systole,
to obtain a systolic lower bound for the total volume corresponding to
the partition
\begin{equation*}
n=1+1+\cdots+1+(n-b),
\end{equation*}
where the summand ``$1$'' occurs~$b$ times. Note that Poincar\'e
duality is not used in the proof.
\end{proof}
The goal of the remainder of this section is to describe a sufficient
condition for applying Gromov's theorem, so as to obtain such a lower
bound in the case when the fiber class in~$M$ vanishes.
From now on we assume that~$M$ is orientable, has dimension~$n$,
and~$b_1(M)=2$. Let~$\{\alpha,\beta \} \subset H^1(M)$ be an integral
basis for~$H^1(M)$. Let~$F_M$ be a typical fiber of the Abel--Jacobi
map. It is easy to see that~$[F_M]$ is Poincar\'e dual to the cup
product~$\alpha\smallsmile \beta$. Thus, if~$\alpha\smallsmile\beta\ne 0$ then~$\syscat M\ge
3$ by \propref{p:fm}. If~$\alpha\smallsmile \beta=0$ then the Massey
product~$\langle\alpha,\alpha,\beta\rangle$ is defined and has zero indeterminacy.
\begin{thm}\label{t:massey}
Let~$M$ be a closed connected orientable manifold of dimension~$n$
with~$b_1(M)=2$. If~$\langle\alpha,\alpha,\beta\rangle\smallsmile\beta \ne 0$ then~$\syscat M
\ge 3$.
\end{thm}
Note that, on the Lusternik--Schnirelmann side, we similarly have a
lower bound~$\protect\operatorname{cat} M \ge 3$ if~$\langle\alpha,\alpha,\beta\rangle\ne 0$, since the
element~$\langle \alpha, \alpha, \beta \rangle$ has category weight~$2$~\cite{R1,R2}.
To prove the theorem, we reformulate it in the dual homology language.
\begin{definition}
Let~$F=F_M \subset M$ be an oriented typical fiber. Assume~$[F]=0\in
H_{n-2}(M)$. Choose an~$(n-1)$-chain~$X$ with~$\partial X = F$.
Consider another regular fiber~$F'\subset M$. The oriented
intersection~$X\cap F'$ defines a class
\begin{equation*}
\ell_M(F_M, F_M) \in H_{n-3}(M),
\end{equation*}
which will be referred to as the {\em self-linking class\/} of a
typical fiber of~$\AJ_M$.
\end{definition}
The following lemma asserts, in particular, that the self-linking
class is well-defined, at least up to sign.
\begin{lemma}
The class~$\ell_M(F_M, F_M)$ is dual, up to sign, to the cohomology
class~$\langle \alpha,\alpha,\beta \rangle \smallsmile\beta \in H^3(M)$.
\end{lemma}
\begin{proof}
The classes~$\alpha, \beta$ are Poincar\'e dual to hypersurfaces~$A, B
\subset M$ obtained as the inverse images under~$\AJ_M$ of a
pair~$\{u,v\}$ of loops defining a generating set for~$H_1(\T^2)$.
The hypersurfaces can be constructed as inverse images of a regular
point under a projection~$M\to S^1$ to one of the summand circles.
Clearly, the intersection~$A\cap B \subset M$ is a typical fiber
\begin{equation*}
F_M=A\cap B
\end{equation*}
of the Abel--Jacobi map (namely, inverse image of the point~$u\cap
v\in \T^2$). Then another regular fiber~$F'$ can be represented
as~$A'\cap B'$ where, say, the set~$A'$ is the inverse image of a
loop~$u'$ ``parallel'' to~$u$. Then~$A'\cap X$ is a cycle, since
\begin{equation*}
\partial (A'\cap X)=A'\cap A \cap B=\emptyset.
\end{equation*}
Moreover, it is easy to see that the homology class~$[A'\cap X]$ is
dual to the Massey product~$\langle \alpha,\alpha, \beta\rangle$, by taking a
representative~$a$ of~$\alpha$ such that~$a\smallsmile a=0$. Now,
since~$F'=A'\cap B'$, we conclude that~$[F'\cap X]$ is dual, up to
sign, to~$\langle \alpha,\alpha,\beta\rangle\smallsmile \beta$.
\end{proof}
\begin{remark}
In the case of~$3$-manifolds with first Betti number~$2$, the
non-vanishing of the self-linking number is equivalent to the
non-vanishing of C.~Lescop's generalization~$\lambda$ of the
Casson-Walker invariant, cf.~\cite{Les}. See T.~Cochran and
J.~Masters~\cite{CM} for generalizations.
\end{remark}
Now \theoref{t:massey} will follow from \theoref{t:self} below.
\begin{theorem}\label{t:self}
Assume~$b_1(M^n)=2$. The non-triviality of the self-linking class
in~$H_{n-3}(M)$ implies the bound~$\syscat(M)\geq 3$.
\end{theorem}
The theorem is immediate from the proposition below. If the fiber
class in~$M$ of the Abel--Jacobi map vanishes, one can define the
self-linking class of a typical fiber, and proceed as follows.
\begin{prop}
\label{propal}
The non-vanishing of the self-linking of a typical fiber
$\AJ_M^{-1}(p)$ of~$\AJ_M: M \to \T^2$ is a sufficient condition for
the non-vanishing of the fiber class~$\fmanifold$ in the maximal free
abelian cover~$\manbar$ of~$M$.
\end{prop}
\begin{proof}
The argument is modeled on the one found in \cite{KL} in the case
of~$3$-manifolds, and is due to A.~Marin (see also
\cite[p.~165-166]{SGT}). Consider the pullback diagram
\begin{equation*}
\CD \manbar @>\overline{\AJ}_M >> {\mathbb R}^2\\ @VpVV @VVV\\ M @>\AJ_M >> \T^2
\endCD
\end{equation*}
where~$\AJ_M$ is the Abel--Jacobi map and the right-hand map is the
universal cover of the torus. Choose points~$x,y\in {\mathbb R}^2$ with
distinct images in~$\T^2$. Let~$\overline F_x=\overline{\AJ}_M^{-1}(x)$
and~$\overline F_y= \overline{\AJ}_M^{-1}(y)$ be lifts of the corresponding
fibers~$F_x, F_y \subset M$. Choose a properly imbedded
ray~$r_y\subset {\mathbb R}^2$ joining the point~$y\in {\mathbb R}^2$ to infinity while
avoiding~$x$ (as well as its~${\mathbb Z}^2$-translates), and consider the
complete hypersurface
\begin{equation*}
S = \overline{\AJ}_M^{-1}(r_y) \subset \manbar
\end{equation*}
with~$\partial S = \overline F_y$. We have~$S\cap T_g \overline F_x = \emptyset$
for all~$g\in G$, where~$G\cong{\mathbb Z}^2$ denotes the group of deck
transformations of the covering~$p: \manbar \to M$ and~$T_g$ is the
deck transformation given by~$g$.
We will prove the contrapositive. Namely, the vanishing of the class
of the lift of the fiber implies the vanishing of the self-linking
class. If the surface~$\overline F_x$ is zero-homologous in~$\manbar$, we
can choose a compact hypersurface~$W \subset \manbar$ with
\begin{equation*}
\partial W = \overline F_x
\end{equation*}
(if there is no such hypersurface for~$\overline F_x$, we work with a
sufficiently high multiple~$N F_x$, see \cite{KL} for details).
The~$(n-3)$-dimensional homology class~$\ell_M(F_x, F_y)$ in~$M$ can
therefore be represented by the~$(n-3)$-dimensional cycle given by the
oriented intersection~$p(W) \cap F_y$. Now we have
\begin{equation}
\label{1321}
\begin{aligned}
p(W) \cap F_y= \sum _{g\in G} T_g W \cap \overline F_y =
\sum _{g\in G} \partial \left( T_g W \cap S \right).
\end{aligned}
\end{equation}
But the last sum is a finite sum of boundaries, and hence represents
the zero homology class. The finiteness of the sum follows from the
fact that the first sum contains only finitely many non-zero summands,
due to the compactness of~$W$.
\end{proof}
To summarize, if the lift of a typical fiber to the maximal free
abelian covering of~$M^n$ with~$b_1(M)=2$ defines a nonzero class, then
one obtains the lower bound~$\syscat(M)\geq 3$, due to the existence
of a suitable systolic inequality corresponding to the partition
\begin{equation*}
n=1+1+(n-2)
\end{equation*}
as in \eqref{eq:main}, by applying Gromov's \theoref{t:abcover}.
\section{Introduction}
The theory of rough paths~\cite{Lyons1998} has recently been extended to a multiparameter setting~\cite{Hairer2014Regularity,Gubinelli2012}. While \cite{Hairer2014Regularity} has a much wider range of applicability, both approaches allow us to solve many interesting SPDEs that were well out of reach with previously existing methods; for example the continuous parabolic Anderson model in dimension two~\cite{Hairer2014Regularity,Gubinelli2012}, the three-dimensional stochastic quantization equation~\cite{Hairer2014Regularity,Catellier2013}, the KPZ equation~\cite{Hairer2013KPZ,Gubinelli2014}, or the three-dimensional stochastic Navier--Stokes equation~\cite{Zhu2014,Zhu2014Discretization}. Our methods developed in~\cite{Gubinelli2012} are based on harmonic analysis, on Littlewood-Paley decompositions of tempered distributions, and on a simple commutator lemma. This requires a non-negligible knowledge of Littlewood-Paley theory and Besov spaces, while at the same time the application to classical rough path SDEs is not quite straightforward. That is why here we develop the approach of~\cite{Gubinelli2012} in the slightly different language of Haar and Schauder functions, which allows us to communicate our basic ideas while requiring only very basic knowledge in analysis. Moreover, in the Haar--Schauder formulation the application to SDEs poses no additional technical challenges.
It is a classical result of Ciesielski~\cite{Ciesielski1960} that $C^\alpha := C^\alpha([0,1],\mathbb{R}^d)$, the space of $\alpha$--H\"older continuous functions on $[0,1]$ with values in $\mathbb{R}^d$, is isomorphic to $\ell^\infty(\mathbb{R}^d)$, the space of bounded sequences with values in $\mathbb{R}^d$. The isomorphism gives a Fourier decomposition of a H\"older-continuous function $f$ as
\begin{align*}
f = \sum_{p,m} \langle H_{pm}, \mathrm{d} f \rangle G_{pm},
\end{align*}
where $(H_{pm})$ are the Haar functions and $(G_{pm})$ are the Schauder functions. Ciesielski proved that a continuous function $f$ is in $C^{\alpha}([0,1],\mathbb{R}^d)$ if and only if the coefficients $(\langle H_{pm}, \mathrm{d} f\rangle)_{p,m}$ decay rapidly enough.
Following Ciesielski's work, similar isomorphisms have been developed for many Fourier and wavelet bases, showing that the regularity of a function is encoded in the decay of its coefficients in these bases; see for example Triebel~\cite{Triebel2006}.
But to this day, the isomorphism based on Schauder functions plays a special role in stochastic analysis, because the coefficients in the Schauder basis have the pleasant property that they are just rescaled second order increments of $f$. So if $f$ is a stochastic process with known distribution, then the distribution of its coefficients in the Schauder basis is also known explicitly.
A simple application is the L\'evy-Ciesielski construction of Brownian motion. An incomplete list of further applications will be given below.
Another convenient property of Schauder functions is that they are piecewise linear, and therefore their iterated integrals $\int_0^\cdot G_{pm}(s) \mathrm{d} G_{qn}(s)$ can be easily calculated. This makes them an ideal tool for our purpose of studying integrals. Indeed, given two continuous functions $f$ and $g$ on $[0,1]$ with values in $\L(\mathbb{R}^d, \mathbb{R}^n)$, the space of linear maps from $\mathbb{R}^d$ to $\mathbb{R}^n$, and $\mathbb{R}^d$ respectively, we can formally define
\begin{align*}
\int_0^t f(s) \mathrm{d} g(s) := \sum_{p,m}\sum_{q,n} \langle H_{pm}, \mathrm{d} f \rangle \langle H_{qn}, \mathrm{d} g \rangle \int_0^t G_{pm} (s) \mathrm{d} G_{qn}(s).
\end{align*}
In this paper we study under which conditions this formal definition can be made rigorous. We start by observing that the integral defines a bounded operator from $C^\alpha \times C^\beta$ to $C^\beta$ if and only if $\alpha+\beta > 1$. Obviously, here we simply recover Young's integral~\cite{Young1936}. In our study of this integral, we identify different components:
\begin{align*}
\int_0^t f(s) \mathrm{d} g(s) = S(f,g)(t) + \pi_<(f,g)(t) + L(f,g)(t),
\end{align*}
where $S$ is the \emph{symmetric part}, $\pi_<$ the \emph{paraproduct}, and $L(f,g)$ the \emph{L\'evy area}. The operators $S$ and $\pi_<$ are defined for $f \in C^\alpha$ and $g \in C^\beta$ for arbitrary $\alpha,\beta>0$, and it is only the L\'evy area which requires $\alpha + \beta > 1$. Considering the regularity of the three operators, we have $S(f,g) \in C^{\alpha + \beta}$, $\pi_<(f,g) \in C^\beta$, and $L(f,g) \in C^{\alpha+\beta}$ whenever the latter is defined. Therefore, in the Young regime $\int_0^\cdot f(s) \mathrm{d} g(s) - \pi_<(f,g) \in C^{\alpha + \beta}$. We will also see that for sufficiently smooth functions $F$ we have $F(f) \in C^{\alpha}$ but $F(f) - \pi_<(\mathrm{D} F(f), f) \in C^{2\alpha}$. So both $\int_0^\cdot f(s) \mathrm{d} g(s)$ and $F(f)$ are given by a paraproduct plus a smoother remainder. This leads us to call a function $f \in C^\alpha$ \emph{paracontrolled} by $g$ if there exists a function $f^g \in C^\beta$ such that $f - \pi_<(f^g,g) \in C^{\alpha+\beta}$. Our aim is then to construct the L\'evy area $L(f,g)$ for $\alpha < 1/2$ and $f$ paracontrolled by $g$. If $\beta > 1/3$, then the term $L(f - \pi_<(f^g,g),g)$ is well defined, and it suffices to make sense of the term $L(\pi_<(f^g,g),g)$. This is achieved with the following commutator estimate:
\begin{align*}
\left\lVert L(\pi_<(f^g,g),g) - \int_0^\cdot f^g(s) \mathrm{d} L(g,g)(s)\right\rVert_{3\beta} \lesssim \lVert f^g \rVert_\beta \lVert g \rVert_\beta \lVert g \rVert_\beta.
\end{align*}
Therefore, the integral $\int_0^\cdot f(s)\mathrm{d} g(s)$ can be constructed for all $f$ that are paracontrolled by $g$, provided that $L(g,g)$ can be constructed. In other words, we have found an alternative formulation of Lyons'~\cite{Lyons1998} rough path integral, at least for H\"older continuous functions of H\"older exponent larger than 1/3.
Since we approximate $f$ and $g$ by functions of bounded variation, our integral is of Stratonovich type, that is, it satisfies the usual integration by parts rule. We also consider a non-anticipating It\^{o} type integral, which can essentially be reduced to the Stratonovich case with the help of the quadratic variation.
The last remaining problem is then to construct the L\'evy area $L(g,g)$ for suitable stochastic processes $g$. We construct it for certain hypercontractive processes. For continuous martingales that possess sufficiently many moments we give a construction of the It\^{o} iterated integrals that allows us to use them as integrators for our pathwise It\^{o} integral.
Below we give some pointers to the literature, and we introduce some basic notations which we will use throughout. In Section~\ref{s:preliminaries ciesielski} we recall some details on Ciesielski's isomorphism, and we give a short overview on rough paths and Young integration. In Section~\ref{s:paradifferential calculus} we develop a paradifferential calculus in terms of Schauder functions, and we examine the different components of Young's integral. In Section~\ref{s:schauder rough path integral} we construct the rough path integral based on Schauder functions. Section~\ref{s:pathwise ito} develops the pathwise It\^o integral. In Section~\ref{s:construction of levy area} we construct the L\'evy area for suitable stochastic processes. And in Section~\ref{s:sde} we apply our integral to solve both It\^o type and Stratonovich type SDEs in a pathwise way.
\paragraph{Relevant literature}
Starting with the L\'evy-Ciesielski construction of Brownian motion, Schauder functions have been a very popular tool in stochastic analysis. They can be used to prove in a comparatively easy way that stochastic processes belong to Besov spaces; see for example Ciesielski, Kerkyacharian, and Roynette~\cite{Ciesielski1993}, Roynette~\cite{Roynette1993}, and Rosenbaum~\cite{Rosenbaum2009}. Baldi and Roynette~\cite{Baldi1992} have used Schauder functions to extend the large deviation principle for Brownian motion from the uniform to the H\"older topology; see also Ben Arous and Ledoux~\cite{BenArous1994} for the extension to diffusions, Eddahbi, N'zi, and Ouknine~\cite{Eddahbi1999} for the large deviation principle for diffusions in Besov spaces, and Andresen, Imkeller, and Perkowski~\cite{Andresen2013} for the large deviation principle for a Hilbert space valued Wiener process in H\"older topology. Ben Arous, Gr\u{a}dinaru, and Ledoux~\cite{BenArous1994a} use Schauder functions to extend the Stroock-Varadhan support theorem for diffusions from the uniform to the H\"older topology. Lyons and Zeitouni~\cite{Lyons1999} use Schauder functions to prove exponential moment bounds for Stratonovich iterated integrals of a Brownian motion conditioned to stay in a small ball. Gantert~\cite{Gantert1994} uses Schauder functions to associate to every sample path of the Brownian bridge a sequence of probability measures on path space, and continues to show that for almost all sample paths these measures converge to the distribution of the Brownian bridge. This shows that the law of the Brownian bridge can be reconstructed from a single ``typical sample path''.
Concerning integrals based on Schauder functions, there are three important references: Roynette~\cite{Roynette1993} constructs a version of Young's integral on Besov spaces and shows that in the one dimensional case the Stratonovich integral $\int_0^\cdot F(W_s) \mathrm{d} W_s$, where $W$ is a Brownian motion, and $F \in C^2$, can be defined in a deterministic manner with the help of Schauder functions. Roynette also constructs more general Stratonovich integrals with the help of Schauder functions, but in that case only almost sure convergence is established, where the null set depends on the integrand, and the integral is not a deterministic operator. Ciesielski, Kerkyacharian, and Roynette~\cite{Ciesielski1993} slightly extend the Young integral of~\cite{Roynette1993}, and simplify the proof by developing the integrand in the Haar basis and not in the Schauder basis. They also construct pathwise solutions to SDEs driven by fractional Brownian motions with Hurst index $H>1/2$. Kamont~\cite{Kamont1994} extends the approach of~\cite{Ciesielski1993} to define a multiparameter Young integral for functions in anisotropic Besov spaces. Ogawa~\cite{Ogawa1984, Ogawa1985} investigates an integral for anticipating integrands he calls \emph{noncausal}, starting from a Parseval type relation in which the integrand and the Brownian motion as integrator are both developed in a given complete orthonormal system of the space of square integrable functions on the underlying time interval. This concept is shown to be strongly related to Stratonovich type integrals (see Ogawa~\cite{Ogawa1985}, Nualart, Zakai~\cite{NualartZakai1989}), and used to develop a stochastic calculus on a Brownian basis with \emph{noncausal} SDEs (Ogawa~\cite{Ogawa2007}).
Rough paths have been introduced by Lyons~\cite{Lyons1998}, see also~\cite{Lyons1995,Lyons1996,Lyons1997} for previous results. Lyons observed that solution flows to SDEs (or more generally ordinary differential equations (ODEs) driven by rough signals) can be defined in a pathwise, continuous way if paths are equipped with sufficiently many iterated integrals. More precisely, if a path has finite $p$--variation for some $p \ge 1$, then one needs to associate $\lfloor p\rfloor$ iterated integrals to it to obtain an object which can be taken as the driving signal in an ODE, such that the solution to the ODE depends continuously on that signal. Gubinelli~\cite{Gubinelli2004, Gubinelli2010} simplified the theory of rough paths by introducing the concept of controlled paths, on which we will strongly rely in what follows. Roughly speaking, a path $f$ is controlled by the reference path $g$ if the small scale fluctuations of $f$ ``look like those of $g$''. Good monographs on rough paths are~\cite{Lyons2002, Lyons2007, Friz2010, Friz2013}.
\paragraph{Notation and conventions.}
Throughout the paper, we use the notation $a \lesssim b$ if there exists a constant $c>0$, independent of the variables under consideration, such that $a \leqslant c \cdot b$, and we write $a \simeq b$ if $a \lesssim b$ and $b \lesssim a$. If we want to emphasize the dependence of $c$ on the variable $x$, then we write $a(x) \lesssim_{x} b(x)$.
For a multi-index $\mu = ( \mu_{1} , \ldots , \mu_{d} ) \in \mathbb{N}^{d}$ we write $| \mu | = \mu_{1} + \ldots + \mu_{d}$ and $\partial^{\mu} = \partial^{| \mu |} / \partial_{x_{1}}^{\mu_{1}} \cdots \partial_{x_{d}}^{\mu_{d}}$.
$\mathrm{D} F$ or $F'$ denote the total derivative of $F$. For $k \in \mathbb{N}$ we denote by $\mathrm{D}^{k} F$ the $k$-th order derivative of $F$. We also write $\partial_{x}$ for the partial derivative in direction $x$.
\section{Preliminaries}\label{s:preliminaries ciesielski}
\subsection{Ciesielski's isomorphism}\label{s:ciesielski}
Let us briefly recall Ciesielski's isomorphism between $C^\alpha([0,1],\mathbb{R}^d)$ and $\ell^\infty(\mathbb{R}^d)$. The \emph{Haar functions} $(H_{pm}, p \in \mathbb{N}, 1 \le m \le 2^p)$ are defined as
\begin{align*}
H_{pm}(t) := \begin{cases}
\sqrt{2^p}, & t \in \left[ \frac{m-1}{2^{p}}, \frac{2m-1}{2^{p+1}}\right),\\
-\sqrt{2^p}, & t \in \left[ \frac{2m-1}{2^{p+1}}, \frac{m}{2^{p}}\right), \\
0, & \text{otherwise.}
\end{cases}
\end{align*}
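As a quick sanity check (ours, not in the original text), these functions are indeed orthonormal in~$L^2$:

```latex
\[
\int_0^1 H_{pm}(t)^2\,\mathrm{d} t
 = 2^p\left(\frac{m}{2^p} - \frac{m-1}{2^p}\right) = 1 .
\]
% Orthogonality: for fixed p the supports of H_{pm}, m = 1, ..., 2^p,
% are disjoint, and for q > p each H_{qn} is either supported where
% H_{pm} is constant, so that the inner product is a multiple of
% \int_0^1 H_{qn}(t) dt = 0, or has support disjoint from that of H_{pm}.
```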
When completed by $H_{00} \equiv 1$, the Haar functions are an orthonormal basis of $L^2([0,1],\mathrm{d} t)$. For convenience of notation, we also define $H_{p0}\equiv 0$ for $p \ge 1$. The primitives of the Haar functions are called \emph{Schauder functions} and they are given by $G_{pm} (t) := \int_0^t H_{pm} (s) \mathrm{d} s$ for $t\in[0,1]$, $p\in \mathbb{N}$, $0 \le m \le 2^p$. More explicitly, $G_{00}(t) = t$ and for $p\in \mathbb{N}$, $1 \le m \le 2^p$
\begin{align*}
G_{pm} (t) = \begin{cases}
2^{p/2}\left(t - \frac{m-1}{2^{p}}\right), & t \in \left[ \frac{m-1}{2^{p}}, \frac{2m-1}{2^{p+1}}\right),\\
- 2^{p/2}\left(t - \frac{m}{2^{p}} \right), & t \in \left[ \frac{2m-1}{2^{p+1}}, \frac{m}{2^{p}}\right),\\
0, & \text{otherwise}.
\end{cases}
\end{align*}
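For illustration (our example, not part of the original), the lowest nontrivial case $p=0$, $m=1$ is the tent function:

```latex
\[
G_{01}(t) =
\begin{cases}
t, & t \in \left[0, \tfrac{1}{2}\right),\\
1-t, & t \in \left[\tfrac{1}{2}, 1\right],
\end{cases}
\]
% with peak value G_{01}(1/2) = 1/2 attained at the midpoint t^1_{01}.
```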
Since every $G_{pm}$ satisfies $G_{pm}(0) = 0$, we are only able to expand functions $f$ with $f(0)=0$ in terms of this family $(G_{pm})$. Therefore, we complete $(G_{pm})$ once more, by defining $G_{-10}(t) := 1$ for all $t \in [0,1]$. To abbreviate notation, we define the times $t^i_{pm}$, $i = 0,1,2$, as
\begin{align*}
t_{pm}^0 := \frac{m-1}{2^p}, \quad t_{pm}^1 := \frac{2m-1}{2^{p+1}}, \quad t_{pm}^2 := \frac{m}{2^p},
\end{align*}
for $p \in \mathbb{N}$ and $1 \le m \le 2^p$. Further, we set $t^0_{-10} := 0$, $t^1_{-10}:= 0$, $t^2_{-10}:=1$, and $t^0_{00}:=0$, $t^1_{00}:=1$, $t^2_{00}:=1$, as well as $t^i_{p0} := 0$ for $p \ge 1$ and $i = 0,1,2$. The definition of $t^i_{-10}$ and $t^i_{00}$ for $i\neq 1$ is rather arbitrary, but the definition for $i = 1$ simplifies for example the statement of Lemma~\ref{l:schauder functions give linear interpolation} below.
For $f \in C([0,1],\mathbb{R}^d)$, $p\in \mathbb{N}$, and $1 \le m \le 2^p$, we write
\begin{align*}
\langle H_{pm}, \mathrm{d} f \rangle :=\,& 2^{\frac{p}{2}}\left[ \left(f\left(t^1_{pm}\right) - f\left(t^0_{pm}\right)\right) - \left( f\left(t^2_{pm}\right) - f\left(t^1_{pm}\right)\right)\right] \\
=\, & 2^{\frac{p}{2}}\left[ 2 f\left(t^1_{pm}\right) - f\left(t^0_{pm}\right) - f\left(t^2_{pm}\right)\right]
\end{align*}
and $\langle H_{00}, \mathrm{d} f \rangle := f(1) - f(0)$ as well as $\langle H_{-10}, \mathrm{d} f \rangle := f(0)$. Note that we only defined $G_{-10}$ and not $H_{-10}$.
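As a sanity check of these conventions (our example), take $f(t) = t$. Then $\langle H_{-10}, \mathrm{d} f\rangle = f(0) = 0$ and $\langle H_{00}, \mathrm{d} f\rangle = f(1) - f(0) = 1$, while all higher coefficients vanish:

```latex
\[
\langle H_{pm}, \mathrm{d} f\rangle
 = 2^{\frac{p}{2}}\bigl[\, 2 t^1_{pm} - t^0_{pm} - t^2_{pm} \,\bigr] = 0,
 \qquad p \in \mathbb{N},\ 1 \le m \le 2^p,
\]
% since t^1_{pm} is the midpoint of [t^0_{pm}, t^2_{pm}]. The expansion
% thus collapses to f = G_{00}, as it should.
```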
\begin{lem}\label{l:schauder functions give linear interpolation}
For $f\colon[0,1]\rightarrow \mathbb{R}^d$, the function
\[
f_k := \langle H_{-10}, \mathrm{d} f\rangle G_{-10} + \langle H_{00}, \mathrm{d} f \rangle G_{00} + \sum_{p=0}^k \sum_{m=1}^{2^p} \langle H_{pm}, \mathrm{d} f \rangle G_{pm} = \sum_{p=-1}^k \sum_{m=0}^{2^p} \langle H_{pm}, \mathrm{d} f \rangle G_{pm}
\]
is the linear interpolation of $f$ between the points $t^1_{-10}, t^1_{00}, t^1_{pm}$, $0 \le p \le k, 1 \le m \le 2^p$. If $f$ is continuous, then $(f_k)$ converges uniformly to $f$ as $k \rightarrow \infty$.
\end{lem}
Ciesielski~\cite{Ciesielski1960} observed that if $f$ is H\"older-continuous, then the series $(f_k)$ converges absolutely and the speed of convergence can be estimated in terms of the H\"older norm of $f$. The norm $\lVert \cdot \rVert_{C^\alpha}$ is defined as
\[
\lVert f \rVert_{C^\alpha} := \lVert f \rVert_\infty + \sup_{0\le s < t \le 1} \frac{|f_{s,t}|}{|t-s|^\alpha},
\]
where we introduced the notation
\[
f_{s,t} := f(t) - f(s).
\]
\begin{lem}[\cite{Ciesielski1960}]\label{l:ciesielski}
Let $\alpha \in (0,1)$. A continuous function $f: [0,1] \rightarrow \mathbb{R}^d$ is in $C^\alpha$ if and only if $\sup_{p,m} 2^{p(\alpha - 1/2)} |\langle H_{pm}, \mathrm{d} f\rangle| < \infty$. In this case
\begin{gather}\label{e:ciesielski isomorphism}
\sup_{p,m} 2^{p(\alpha - 1/2)} |\langle H_{pm}, \mathrm{d} f\rangle| \simeq \lVert f \rVert_\alpha \text{ and} \\ \nonumber
\lVert f - f_{N-1} \rVert_\infty = \Big\lVert \sum_{p = N}^\infty \sum_{m=0}^{2^p} \langle H_{pm}, \mathrm{d} f\rangle G_{pm} \Big\rVert_\infty \lesssim \lVert f \rVert_\alpha 2^{-\alpha N}.
\end{gather}
\end{lem}
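One half of the first equivalence follows directly from the definitions; here is a sketch of ours (the constant $2$ is not optimal):

```latex
\[
2^{p(\alpha-\frac{1}{2})}\,\bigl|\langle H_{pm},\mathrm{d} f\rangle\bigr|
 = 2^{p\alpha}\,\bigl| f_{t^0_{pm},t^1_{pm}} - f_{t^1_{pm},t^2_{pm}} \bigr|
 \le 2^{p\alpha}\cdot 2\,\lVert f\rVert_{C^\alpha}\, 2^{-(p+1)\alpha}
 \le 2\,\lVert f\rVert_{C^\alpha},
\]
% using that both increments run over intervals of length 2^{-(p+1)}.
```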
Before we continue, let us slightly change notation. We want to get rid of the factor $2^{-p/2}$ in \eqref{e:ciesielski isomorphism}, and therefore we define for $p \in \mathbb{N}$ and $0 \le m \le 2^p$ the rescaled functions
\begin{align*}
\chi_{pm} := 2^{\frac{p}{2}} H_{pm} \qquad \text{and} \qquad \varphi_{pm} := 2^{\frac{p}{2}} G_{pm},
\end{align*}
as well as $\varphi_{-10} := G_{-10} \equiv 1$. Then we have for $p \in \mathbb{N}$ and $1 \le m \le 2^p$
\begin{align*}
\lVert\varphi_{pm}\rVert_\infty = \varphi_{pm}(t^1_{pm}) = 2^{\frac{p}{2}} \int_{t^0_{pm}}^{t^1_{pm}} 2^{\frac{p}{2}} \mathrm{d} s = 2^p \left( \frac{2m-1}{2^{p+1}} - \frac{2m - 2}{2^{p+1}}\right) = \frac{1}{2},
\end{align*}
so that $\lVert \varphi_{pm}\rVert_\infty \le 1$ for all $p,m$. The expansion of $f$ in terms of $(\varphi_{pm})$ is given by $f_k = \sum_{p=-1}^k \sum_{m=0}^{2^p} f_{pm} \varphi_{pm}$, where $f_{-10} := f(0)$ and $f_{00} := f(1)-f(0)$, and for $p \in \mathbb{N}$ and $m \ge 1$
\begin{align*}
f_{pm} := 2^{-p} \langle \chi_{pm}, \mathrm{d} f \rangle = 2 f\left(t^1_{pm}\right) - f\left(t^0_{pm}\right) - f\left(t^2_{pm}\right) = f_{t^0_{pm}, t^1_{pm}} - f_{t^1_{pm}, t^2_{pm}}.
\end{align*}
We write $\langle \chi_{pm}, \mathrm{d} f\rangle := 2^p f_{pm}$ for all values of $(p,m)$, despite not having defined $\chi_{-10}$.
\begin{defn}
For $\alpha > 0$ and $f \colon [0,1] \to \mathbb{R}^d$ the norm $\lVert \cdot \rVert_{\alpha}$ is defined as
\[
\lVert f \rVert_\alpha := \sup_{pm} 2^{p\alpha} |f_{pm}|,
\]
and we write
\begin{align*}
\mathcal{C}^\alpha := \mathcal{C}^\alpha(\mathbb{R}^d) := \left\{f \in C( [0,1], \mathbb{R}^d): \lVert f \rVert_\alpha < \infty\right\}.
\end{align*}
\end{defn}
The space $\mathcal{C}^\alpha$ is isomorphic to $\ell^\infty(\mathbb{R}^d)$, in particular it is a Banach space. For $\alpha \in (0,1)$, Ciesielski's isomorphism (Lemma~\ref{l:ciesielski}) states that $\mathcal{C}^\alpha = C^\alpha([0,1],\mathbb{R}^d)$. Moreover, it can be shown that $\mathcal{C}^1$ is the Zygmund space of continuous functions $f$ satisfying $|2f(x) - f(x+h) - f(x-h)| \lesssim h$. But for $\alpha > 1$, there is no reasonable identification of $\mathcal{C}^{\alpha}$ with a classical function space. For example if $\alpha \in (1,2)$, the space $C^{\alpha}([0,1], \mathbb{R}^d)$ consists of all continuously differentiable functions $f$ with $(\alpha-1)$--H\"older continuous derivative $\mathrm{D} f$. Since the tent-shaped functions $\varphi_{pm}$ are not continuously differentiable, even an $f$ with a finite Schauder expansion is generally not in $C^{\alpha}$.
The a priori requirement of $f$ being continuous can be relaxed, but not much. Since the coefficients $(f_{pm})$ evaluate the function $f$ only in countably many points, a general $f$ will not be uniquely determined by its expansion. But for example it would suffice to assume that $f$ is c\`adl\`ag.
\paragraph{Littlewood-Paley notation.}
We will employ notation inspired from Littlewood-Paley theory. For $p \ge -1$ and $f \in C([0,1])$ we define
\begin{align*}
\Delta_p f := \sum_{m=0}^{2^p} f_{pm} \varphi_{pm} \qquad \text{and} \qquad S_p f := \sum_{q \le p} \Delta_q f.
\end{align*}
We will occasionally refer to $(\Delta_p f)$ as the Schauder blocks of $f$. Note that
\[
\mathcal{C}^\alpha = \{f \in C([0,1],\mathbb{R}^d): \lVert (2^{p\alpha} \lVert \Delta_p f \rVert_\infty)_p \rVert_{\ell^\infty} < \infty\}.
\]
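The block description can be illustrated numerically. The sketch below assumes, as a convention of the sketch and not of the text, that the partial sum $S_p f$ coincides with the piecewise-linear interpolation of $f$ on the dyadic grid $\{k 2^{-p-1}\}$; then $\lVert f - S_p f \rVert_\infty = \lVert \sum_{q > p} \Delta_q f \rVert_\infty$ decays at rate $2^{-p\alpha}$ for $f \in \mathcal{C}^\alpha$, here with $f(t) = \sqrt{t}$ and $\alpha = 1/2$.

```python
import math

# Hypothetical convention: S_p f = piecewise-linear interpolation of f on
# the dyadic grid of mesh 2^{-p-1}; then f - S_p f = sum_{q > p} Delta_q f.
def interp_error(f, p):
    h = 2.0 ** -(p + 1)
    err = 0.0
    for k in range(2 ** (p + 1)):
        a = k * h
        fa, fb = f(a), f(a + h)
        for x in (0.25, 0.5, 0.75):       # sample inside each grid cell
            err = max(err, abs(f(a + x * h) - (fa + x * (fb - fa))))
    return err

f = math.sqrt                              # f is 1/2-Hoelder on [0, 1]
for p in range(2, 11):
    # the rescaled error is constant (= 2^{-1/2}/4, attained in the first
    # cell), matching the rate 2^{-p/2} predicted by the characterization
    print(p, 2.0 ** (p * 0.5) * interp_error(f, p))
```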
\subsection{Young integration and rough paths} \label{s:rough paths}
Here we present the main concepts of Young integration and of rough path theory. The results presented in this section will not be applied in the remainder of this chapter, but we feel that it could be useful for the reader to be familiar with the basic concepts of rough paths, since it is the main inspiration for the constructions developed below.
Young's integral~\cite{Young1936} makes it possible to define $\int f \mathrm{d} g$ for $f \in C^\alpha$, $g \in C^\beta$, and $\alpha + \beta > 1$. More precisely, let $f \in C^\alpha$ and $g \in C^\beta$ be given, let $t \in [0,1]$, and let $\pi = \{t_0, \dots, t_N\}$ be a partition of $[0,t]$, i.e. $0=t_0 < t_1 < \dots < t_N=t$. Then it can be shown that the Riemann sums
\begin{align*}
\sum_{t_k \in \pi} f(t_k) (g(t_{k+1})-g(t_k)) := \sum_{k=0}^{N-1} f(t_k) (g(t_{k+1})-g(t_k))
\end{align*}
converge as the mesh size $\max_{k=0,\dots, N-1} |t_{k+1}-t_k|$ tends to zero, and that the limit does not depend on the approximating sequence of partitions. We denote the limit by $\int_0^t f(s) \mathrm{d} g(s)$, and we define $\int_s^t f(r) \mathrm{d} g(r) := \int_0^t f(r) \mathrm{d} g(r) - \int_0^s f(r) \mathrm{d} g(r)$. The function $t \mapsto \int_0^t f(s) \mathrm{d} g(s)$ is uniquely characterized by the fact that
\begin{align*}
\left| \int_s^t f(r) \mathrm{d} g(r) - f(s) (g(t)-g(s)) \right| \lesssim |t-s|^{\alpha + \beta} \lVert f \rVert_\alpha \lVert g \rVert_\beta
\end{align*}
for all $s,t \in [0,1]$. The condition $\alpha + \beta > 1$ is sharp, in the sense that there exist $f, g \in C^{1/2}$, and a sequence of partitions $(\pi_n)_{n \in \mathbb{N}}$ with mesh size going to zero, for which the Riemann sums $\sum_{t_k \in \pi_n} f(t_k) (g(t_{k+1})-g(t_k))$ do not converge as $n$ tends to $\infty$.
The condition $\alpha + \beta > 1$ excludes one of the most important examples: we would like to take $g$ as a sample path of Brownian motion, and $f = F(g)$. Lyons' theory of rough paths~\cite{Lyons1998} overcomes this restriction by stipulating the ``existence'' of basic integrals and by defining a large class of related integrals as their functionals. Here we present the approach of Gubinelli~\cite{Gubinelli2004}.
Let $\alpha \in (1/3,1)$ and assume that we are given two functions $v,w \in C^\alpha$, as well as an associated ``Riemann integral'' $I^{v,w}_{s,t} = \int_s^t v(r) \mathrm{d} w(r)$ that satisfies the estimate
\begin{align}\label{e:area estimate}
|\Phi^{v,w}_{s,t}|:=|I^{v,w}_{s,t} - v(s) w_{s,t}| \lesssim |t-s|^{2\alpha}.
\end{align}
The remainder $\Phi^{v,w}$ is often (incorrectly) called the \emph{area} of $v$ and $w$. This name has its origin in the fact that its antisymmetric part $1/2(\Phi^{v,w}_{s,t} - \Phi^{w,v}_{s,t})$ corresponds to the algebraic area spanned by the curve $((v(r), w(r)): r \in [s,t])$ in the plane $\mathbb{R}^2$.
If $\alpha \le 1/2$, then the integral $I^{v,w}$ cannot be constructed using Young's theory of integration, and also $I^{v,w}$ is not uniquely characterized by \eqref{e:area estimate}. But let us assume nonetheless that we are given such an integral $I^{v,w}$ satisfying \eqref{e:area estimate}. A function $f \in C^\alpha$ is \emph{controlled} by $v \in C^\alpha$ if there exists $f^v \in C^\alpha$, such that for all $s,t \in [0,1]$
\begin{align}\label{e:controlled}
|f_{s,t} - f^v_s v_{s,t}| \lesssim |t-s|^{2\alpha}.
\end{align}
\begin{prop}[\cite{Gubinelli2004}, Theorem 1]\label{p:Gubinelli rough paths}
Let $\alpha > 1/3$, let $v,w \in C^\alpha$, and let $I^{v,w}$ satisfy \eqref{e:area estimate}. Let $f$ and $g$ be controlled by $v$ and $w$ respectively, with derivatives $f^v$ and $g^w$. Then there exists a unique function $I(f,g) = \int_0^\cdot f(s) \mathrm{d} g(s)$ that satisfies for all $s,t \in [0,1]$
\begin{align*}
|I(f,g)_{s,t} - f(s) g_{s,t} - f^v(s) g^w(s) \Phi^{v,w}_{s,t}| \lesssim |t-s|^{3\alpha}.
\end{align*}
If $(\pi_n)$ is a sequence of partitions of $[0,t]$, with mesh size going to zero, then
\begin{align*}
I(f,g)(t) = \lim_{n \rightarrow \infty} \sum_{t_k \in \pi_n} \left( f(t_k) g_{t_k, t_{k+1}} + f^v_{t_k} g^w_{t_k} \Phi^{v,w}_{t_k, t_{k+1}}\right).
\end{align*}
\end{prop}
The integral $I(f,g)$ coincides with the Riemann-Stieltjes integral and with the Young integral, whenever these are defined. Moreover, the integral map is self-consistent, in the sense that if we consider $v$ and $w$ as controlled by themselves, with derivatives $v^v = w^w \equiv 1$, then $I(v,w) = I^{v,w}$.
The only remaining problem is the construction of the integral $I^{v,w}$. This is usually achieved with probabilistic arguments. If $v$ and $w$ are Brownian motions, then we can for example use It\^{o} or Stratonovich integration to define $I^{v,w}$. Already in this simple example we see that the integral $I^{v,w}$ is not unique if $v$ and $w$ are outside of the Young regime.
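The non-uniqueness is easy to observe numerically. The sketch below (an illustration with a random-walk approximation of a Brownian path $B$, taking $v = w = B$) compares left-point (It\^{o}-type) and trapezoidal (Stratonovich-type) Riemann sums for $\int_0^1 B \,\mathrm{d} B$: both converge, but to limits that differ by half the quadratic variation, here approximately $1/2$.

```python
import random

random.seed(1)                       # fixed path for reproducibility
N = 2 ** 14
db = [random.gauss(0.0, (1.0 / N) ** 0.5) for _ in range(N)]
B = [0.0]
for x in db:
    B.append(B[-1] + x)

ito = sum(B[k] * db[k] for k in range(N))                       # left-point
strat = sum(0.5 * (B[k] + B[k + 1]) * db[k] for k in range(N))  # trapezoidal
# the trapezoidal sum telescopes to B(1)^2 / 2 exactly, while the gap
# strat - ito = (1/2) sum db_k^2 approximates half the quadratic variation
print(strat - ito)
```

Both sums are perfectly valid candidates for $I^{B,B}$, which is exactly the ambiguity that the area $\Phi^{v,w}$ is introduced to resolve.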
It is possible to go below $\alpha = 1/3$ by stipulating the existence of higher order iterated integrals. For details see~\cite{Gubinelli2010} or any book on rough paths, such as~\cite{Lyons2002,Lyons2007,Friz2010,Friz2013}.
\section{Paradifferential calculus and Young integration}\label{s:paradifferential calculus}
In this section we develop the basic tools that will be required for our rough path integral in terms of Schauder functions, and we study Young's integral and its different components.
\subsection{Paradifferential calculus with Schauder functions}
Here we introduce a ``paradifferential calculus'' in terms of Schauder functions. Paradifferential calculus is usually formulated in terms of Littlewood-Paley blocks and was initiated by Bony~\cite{Bony1981}. For a gentle introduction see~\cite{Bahouri2011}.
We will need to study the regularity of $\sum_{p,m} u_{pm} \varphi_{pm}$, where $u_{pm}$ are functions and not constant coefficients. For this purpose we define the following space of sequences of functions.
\begin{defn}
If $(u_{pm}: p \ge -1, 0\le m\le2^p)$ is a family of affine functions of the form $u_{pm}: [t^0_{pm}, t^2_{pm}] \rightarrow \mathbb{R}^d$,
we set for $\alpha > 0$
\begin{align*}
\lVert (u_{pm})\rVert_{\mathcal{A}^\alpha} := \sup_{p,m} 2^{p\alpha} \lVert u_{pm}\rVert_\infty,
\end{align*}
where it is understood that $\lVert u_{pm} \rVert_\infty := \max_{t \in [t^0_{pm}, t^2_{pm}]} |u_{pm}(t)|$. The space $\mathcal{A}^\alpha := \mathcal{A}^\alpha(\mathbb{R}^d)$ is then defined as
\[
\mathcal{A}^\alpha := \left\{(u_{pm})_{p \ge -1, 0\le m\le2^p}: u_{pm}\in C([t^0_{pm}, t^2_{pm}], \mathbb{R}^d) \text{ is affine and } \lVert (u_{pm})\rVert_{\mathcal{A}^\alpha}<\infty \right\}.
\]
\end{defn}
In Appendix~\ref{a:schauder with affine coefficients} we prove the following regularity estimate:
\begin{lem}\label{l:upm hoelder}
Let $\alpha \in (0,2)$ and let $(u_{pm})\in \mathcal{A}^\alpha$. Then $\sum_{p,m} u_{pm} \varphi_{pm} \in \mathcal{C}^\alpha$, and
\begin{align*}
\Bigl\lVert \sum_{p,m} u_{pm} \varphi_{pm}\Bigr\rVert_\alpha \lesssim \lVert (u_{pm}) \rVert_{\mathcal{A}^\alpha}.
\end{align*}
\end{lem}
Let us introduce a paraproduct in terms of Schauder functions.
\begin{lem}\label{l:paraproduct definition}
Let $\beta \in (0,2)$, let $v \in C([0,1], \L(\mathbb{R}^d,\mathbb{R}^n))$, and $w \in \mathcal{C}^\beta(\mathbb{R}^d)$. Then
\begin{align}\label{e:paraproduct definition}
\pi_<(v,w) := \sum_{p=0}^\infty S_{p-1} v \Delta_p w \in \mathcal{C}^\beta(\mathbb{R}^n) \hspace{10pt} \text{and} \hspace{10pt} \lVert \pi_<(v,w) \rVert_\beta \lesssim \lVert v \rVert_\infty \lVert w \rVert_\beta.
\end{align}
\end{lem}
\begin{proof}
We have $\pi_<(v,w) = \sum_{p,m} u_{pm} \varphi_{pm}$ with $u_{pm} = (S_{p-1} v)|_{[t^0_{pm},t^2_{pm}]} w_{pm}$. For every $(p,m)$, the function $(S_{p-1} v)|_{[t^0_{pm},t^2_{pm}]}$ is the linear interpolation of $v$ between $t^0_{pm}$ and $t^2_{pm}$. As $\lVert (S_{p-1} v)|_{[t^0_{pm},t^2_{pm}]} w_{pm} \rVert_\infty \le 2^{-p\beta}\lVert v \rVert_\infty \lVert w \rVert_\beta$, the statement follows from Lemma~\ref{l:upm hoelder}.
\end{proof}
\begin{rmk}
If $v \in \mathcal{C}^\alpha(\mathbb{R})$ and $w \in \mathcal{C}^\beta(\mathbb{R})$, we can decompose the product $vw$ into three components, $vw = \pi_<(v,w) + \pi_>(v,w) + \pi_\circ(v,w)$, where $\pi_>(v,w) := \pi_>(w,v)$ and $\pi_\circ(v,w):= \sum_p \Delta_p v \Delta_p w$, and we have the estimates
\begin{align*}
\lVert \pi_>(v,w) \rVert_\alpha \lesssim \lVert v \rVert_\alpha \lVert w \rVert_\infty, \qquad \text{and}\qquad \lVert \pi_\circ(v,w) \rVert_{\alpha+\beta} \lesssim \lVert v \rVert_\alpha \lVert w \rVert_\beta
\end{align*}
whenever $\alpha+\beta \in (0,2)$. However, we will not use this.
\end{rmk}
The paraproduct allows us to ``paralinearize'' nonlinear functions. We allow for a smoother perturbation, which will come in handy when constructing global-in-time solutions to SDEs.
\begin{prop}\label{p:paralinearization}
Let $\alpha \in (0,1/2)$, $\beta \in (0,\alpha]$, let $v \in \mathcal{C}^\alpha(\mathbb{R}^d)$, $w \in \mathcal{C}^{\alpha+\beta}$, and $F \in C^{1+\beta/\alpha}_b(\mathbb{R}^d,\mathbb{R})$.
Then
\begin{equation}\label{e:paralinearization estimate}
\lVert F(v+w) - \pi_<(\mathrm{D} F(v+w),v) \rVert_{\alpha + \beta} \lesssim \lVert F \rVert_{C^{1+\beta/\alpha}_b} (1 + \lVert v \rVert_\alpha)^{1+\beta/\alpha} (1 + \lVert w \rVert_{\alpha+\beta}).
\end{equation}
If $F \in C^{2+\beta/\alpha}_b$, then $F(v) - \pi_<(\mathrm{D} F(v),v)$ depends on $v$ in a locally Lipschitz continuous way:
\begin{align}\label{e:paralinearization lipschitz} \nonumber
&\lVert F(v) - \pi_<(\mathrm{D} F(v),v) - (F(u) - \pi_<(\mathrm{D} F(u),u)) \rVert_{\alpha + \beta} \\
&\hspace{160pt} \lesssim \lVert F \rVert_{C^{2+\beta/\alpha}_b} (1 + \lVert v \rVert_\alpha + \lVert u \rVert_\alpha)^{1+\beta/\alpha} \lVert v - u\rVert_{\alpha}.
\end{align}
\end{prop}
\begin{proof}
First note that $\lVert F(v+w) \rVert_\infty \le \lVert F \rVert_\infty$, which implies the required estimate for $(p,m) = (-1,0)$ and $(p,m) = (0,0)$. For all other values of $(p,m)$ we apply a Taylor expansion:
\begin{align*}
(F(v+w))_{pm}
= \mathrm{D} F(v(t^1_{pm}) + w(t^1_{pm}))v_{pm} + R_{pm},
\end{align*}
where $|R_{pm}| \lesssim 2^{- p (\alpha+\beta)} \lVert F\rVert_{C^{1+\beta/\alpha}_b} (\lVert v \rVert_\alpha^{1+\beta/\alpha} + \lVert w \rVert_{\alpha+\beta})$.
Subtracting $\pi_<(\mathrm{D} F(v),v)$ gives
\begin{align*}
&F(v+w) - \pi_<(\mathrm{D} F(v+w),v) \\
&\hspace{60pt}= \sum_{pm} [\mathrm{D} F(v(t^1_{pm}) + w(t^1_{pm})) - (S_{p-1} \mathrm{D} F(v+w))|_{[t^0_{pm}, t^2_{pm}]}] v_{pm} \varphi_{pm} + R.
\end{align*}
Now $(S_{p-1} \mathrm{D} F(v+w))|_{[t^0_{pm}, t^2_{pm}]}$ is the linear interpolation of $\mathrm{D} F(v+w)$ between $t^0_{pm}$ and $t^2_{pm}$, so according to Lemma~\ref{l:upm hoelder} it suffices to note that
\begin{align*}
&\lVert [\mathrm{D} F(v(t^1_{pm})+ w(t^1_{pm})) - (S_{p-1} \mathrm{D} F(v+w))|_{[t^0_{pm}, t^2_{pm}]}] v_{pm}\rVert_{\infty} \\
&\hspace{50pt} \lesssim 2^{-p\beta} \lVert \mathrm{D} F(v+w) \rVert_{C^\beta} 2^{-p\alpha} \lVert v \rVert_\alpha \lesssim 2^{-p(\alpha+\beta)} \lVert F \rVert_{C^{1+\beta/\alpha}_b} (1+\lVert v \rVert_\alpha + \lVert w \rVert_\alpha)^{\beta/\alpha} \lVert v \rVert_\alpha.
\end{align*}
The local Lipschitz continuity is shown in the same way.
\end{proof}
\begin{rmk}
Since $v$ has compact support, it actually suffices to have $F \in C^{1+\beta/\alpha}$ without assuming boundedness. Of course, then the estimates in Proposition~\ref{p:paralinearization} have to be adapted.
\end{rmk}
\begin{rmk}\label{r:gubinelli controlled implies our controlled}
The same proof shows that if $f$ is controlled by $v$ in the sense of Section~\ref{s:rough paths}, i.e. $f_{s,t} = f^v(s) v_{s,t} + R_{s,t}$ with $f^v \in \mathcal{C}^\alpha$ and $|R_{s,t}|\le \lVert R\rVert_{2\alpha} |t-s|^{2\alpha}$, then $f - \pi_<(f^v,v) \in \mathcal{C}^{2\alpha}$.
\end{rmk}
\subsection{Young's integral and its different components}\label{s:young}
In this section we construct Young's integral using the Schauder expansion. If $v \in \mathcal{C}^\alpha$ and $w \in \mathcal{C}^\beta$, then we formally define
\begin{align*}
\int_0^\cdot v(s) \mathrm{d} w(s) := \sum_{p,m} \sum_{q,n} v_{pm} w_{qn} \int_0^\cdot \varphi_{pm}(s) \mathrm{d} \varphi_{qn}(s) = \sum_{p,q} \int_0^\cdot \Delta_p v(s) \mathrm{d} \Delta_q w(s).
\end{align*}
We show that this definition makes sense provided that $\alpha+\beta>1$, and we identify three components of the integral that behave quite differently. This will be our starting point towards an extension of the integral beyond the Young regime.
In a first step, let us calculate the iterated integrals of Schauder functions.
\begin{lem}\label{l:iterated schauder integrals1}
Let $p > q \ge 0$. Then
\begin{align}\label{e:iterated schauder integral p>q}
\int_0^1 \varphi_{pm}(s) \mathrm{d} \varphi_{qn}(s) = 2^{-p - 2} \chi_{qn}(t^0_{pm})
\end{align}
for all $m,n$. If $p = q$, then $\int_0^1 \varphi_{pm}(s) \mathrm{d} \varphi_{pn}(s) = 0$, except if $p = q = 0$, in which case the integral is bounded by 1. If $0 \le p < q$, then for all $(m,n)$ we have
\begin{align}\label{e:iterated schauder integral q<p}
\int_0^1 \varphi_{pm}(s) \mathrm{d} \varphi_{qn}(s) = - 2^{-q - 2} \chi_{pm}\left(t^0_{qn}\right).
\end{align}
If $p=-1$, then the integral is bounded by 1.
\end{lem}
\begin{proof}
The cases $p = q$ and $p=-1$ are easy, so let $p > q \ge 0$. Since $\chi_{qn} \equiv \chi_{qn}(t^0_{pm})$ on the support of $\varphi_{pm}$, we have
\begin{align*}
\int_0^1 \varphi_{pm}(s) \mathrm{d} \varphi_{qn}(s) = \chi_{qn}(t^0_{pm}) \int_0^1 \varphi_{pm}(s) \mathrm{d} s = \chi_{qn}(t^0_{pm}) 2^{-p-2}.
\end{align*}
If $0 \le p < q$, then integration by parts and \eqref{e:iterated schauder integral p>q} imply \eqref{e:iterated schauder integral q<p}.
\end{proof}
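Lemma~\ref{l:iterated schauder integrals1} can also be verified numerically. The sketch below fixes concrete conventions as assumptions: the tent $\varphi_{pm}$ has height $1/2$ on $[m2^{-p}, (m+1)2^{-p}]$ with interior indices $0 \le m < 2^p$, and $\chi_{pm} = \varphi_{pm}'$ is right-continuous, taking the values $+2^p$ and $-2^p$ on the two halves of the support. Under these conventions it checks \eqref{e:iterated schauder integral p>q} for $p = 4 > q = 2$ and all interior $(m,n)$.

```python
# Assumed conventions (illustration only): tents of height 1/2 and
# right-continuous Haar-type derivatives with values +-2^p.
def chi(p, m, t):
    t0, t1, t2 = m * 2.0 ** -p, (m + 0.5) * 2.0 ** -p, (m + 1) * 2.0 ** -p
    if t0 <= t < t1:
        return 2.0 ** p
    if t1 <= t < t2:
        return -(2.0 ** p)
    return 0.0

def phi(p, m, t):
    t0, t2 = m * 2.0 ** -p, (m + 1) * 2.0 ** -p
    if t < t0 or t > t2:
        return 0.0
    return 0.5 - abs(t - (m + 0.5) * 2.0 ** -p) * 2.0 ** p

def iterated(p, q, m, n):
    # midpoint rule on cells of width 2^{-p-1}; the integrand phi * chi is
    # piecewise linear on these cells, so the rule is exact up to rounding
    cells = 2 ** (p + 1)
    w = 1.0 / cells
    return sum(phi(p, m, (k + 0.5) * w) * chi(q, n, (k + 0.5) * w) * w
               for k in range(cells))

p, q = 4, 2
for m in range(2 ** p):
    for n in range(2 ** q):
        assert abs(iterated(p, q, m, n)
                   - 2.0 ** (-p - 2) * chi(q, n, m * 2.0 ** -p)) < 1e-12
print("identity verified for p = 4, q = 2")
```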
Next we estimate the coefficients of iterated integrals in the Schauder basis.
\begin{lem}\label{l:schauder coefficients of iterated integrals}
Let $i,p\ge -1$, $q \ge 0$, $0\le j \le 2^i$, $0\le m \le 2^p$, $0\le n \le 2^q$. Then
\begin{align}\label{e:schauder coefficients of iterated integrals good}
2^{-i} \Big|\Big\langle \chi_{ij}, \mathrm{d}\Big(\int_0^\cdot\varphi_{pm} \chi_{qn}\mathrm{d} s\Big)\Big\rangle\Big| \le 2^{-2(i \vee p \vee q) + p + q},
\end{align}
except if $p<q=i$. In this case we only have the worse estimate
\begin{align}\label{e:schauder coefficients of iterated integrals bad}
2^{-i} \Big|\Big\langle \chi_{ij}, \mathrm{d}\Big(\int_0^\cdot\varphi_{pm} \chi_{qn}\mathrm{d} s\Big)\Big\rangle\Big| \le 1.
\end{align}
\end{lem}
\begin{proof}
We have $\langle \chi_{-10}, \mathrm{d}(\int_0^\cdot \varphi_{pm} \chi_{qn}\mathrm{d} s)\rangle = 0$ for all $(p,m)$ and $(q,n)$. So let $i \ge 0$. If $i < p \vee q$, then $\chi_{ij}$ is constant on the support of $\varphi_{pm}\chi_{qn}$, and therefore Lemma~\ref{l:iterated schauder integrals1} gives
\[
2^{-i} \left|\langle \chi_{ij},\varphi_{pm} \chi_{qn}\rangle\right| \le \left|\langle \varphi_{pm}, \chi_{qn}\rangle\right| \le 2^{ p + q -2(p\vee q)} = 2^{-2(i \vee p \vee q) + p + q}.
\]
Now let $i > q$. Then $\chi_{qn}$ is constant on the support of $\chi_{ij}$, and therefore another application of Lemma~\ref{l:iterated schauder integrals1} implies that
\[
2^{-i} \left|\langle \chi_{ij}, \varphi_{pm}\chi_{qn}\rangle\right| \le 2^{-i} 2^q 2^{p+i -2(p\vee i)} = 2^{-2(i \vee p \vee q) + p + q}.
\]
The only remaining case is $i=q \ge p$, in which case
\[
2^{-i} \left|\langle \chi_{ij},\varphi_{pm} \chi_{qn}\rangle\right| \le 2^{i} \int_{t^0_{ij}}^{t^2_{ij}} \varphi_{pm}(s) \mathrm{d} s \le \lVert \varphi_{pm} \rVert_\infty \le 1.
\]
\end{proof}
\begin{cor}\label{c:schauder blocks}
Let $i, p\ge -1$ and $q \ge 0$. Let $v \in C([0,1],\L(\mathbb{R}^d,\mathbb{R}^n))$ and $w \in C([0,1],\mathbb{R}^d)$. Then
\begin{align}\label{e:schauder blocks good}
\Big\lVert \Delta_i\Big(\int_0^\cdot \Delta_p v(s) \mathrm{d} \Delta_q w(s)\Big)\Big\rVert_\infty \lesssim 2^{-(i\vee p\vee q) - i+p+q} \lVert \Delta_p v \rVert_\infty \lVert \Delta_q w \rVert_\infty,
\end{align}
except if $i=q>p$. In this case we only have the worse estimate
\begin{align}\label{e:schauder blocks bad}
\Big \lVert \Delta_i\Big(\int_0^\cdot \Delta_p v (s) \mathrm{d} \Delta_q w(s)\Big)\Big\rVert_\infty \lesssim \lVert \Delta_p v \rVert_\infty \lVert \Delta_q w \rVert_\infty.
\end{align}
\end{cor}
\begin{proof}
The case $i = -1$ is easy, so let $i \ge 0$. We have
\begin{align*}
\Delta_i\Big(\int_0^\cdot \Delta_p v(s) \mathrm{d} \Delta_q w(s)\Big) = \sum_{j,m,n} v_{pm} w_{qn} \langle 2^{-i} \chi_{ij}, \varphi_{pm} \chi_{qn}\rangle \varphi_{ij}.
\end{align*}
For fixed $j$, there are at most $2^{(i\vee p\vee q) - i}$ non-vanishing terms in the double sum.
Hence, we obtain from Lemma~\ref{l:schauder coefficients of iterated integrals} that
\begin{align*}
\Big\lVert \sum_{m,n} v_{pm} w_{qn} \langle 2^{-i} \chi_{ij}, \varphi_{pm} \chi_{qn}\rangle \varphi_{ij}\Big\rVert_\infty & \lesssim 2^{(i\vee p\vee q) - i} \lVert \Delta_p v \rVert_\infty \lVert \Delta_q w \rVert_\infty (2^{-2(i\vee p \vee q) + p + q} + \mathbf{1}_{i=q>p}) \\
& = (2^{-(i\vee p\vee q) - i + p + q} + \mathbf{1}_{i=q>p}) \lVert \Delta_p v \rVert_\infty \lVert \Delta_q w \rVert_\infty.
\end{align*}
\end{proof}
\begin{cor}\label{c:schauder blocks product}
Let $i,p,q \ge -1$. Let $v \in C([0,1],\L(\mathbb{R}^d,\mathbb{R}^n))$ and $w \in C([0,1],\mathbb{R}^d)$. Then for $p \vee q \le i$ we have
\begin{align}\label{e:schauder blocks product good}
\left\lVert \Delta_i\left(\Delta_p v \Delta_q w\right)\right\rVert_\infty \lesssim 2^{-(i\vee p\vee q) - i+p+q} \lVert \Delta_p v \rVert_\infty \lVert \Delta_q w \rVert_\infty,
\end{align}
except if $i=q>p$ or $i=p>q$, in which case we only have the worse estimate
\begin{align}\label{e:schauder blocks product bad}
\left \lVert \Delta_i(\Delta_p v \Delta_q w)\right\rVert_\infty \lesssim \lVert \Delta_p v \rVert_\infty \lVert \Delta_q w \rVert_\infty.
\end{align}
If $p > i$ or $q>i$, then $\Delta_i(\Delta_p v \Delta_q w) \equiv 0$.
\end{cor}
\begin{proof}
The case $p=-1$ or $q=-1$ is easy. Otherwise we apply integration by parts and note that the estimates \eqref{e:schauder blocks good} and \eqref{e:schauder blocks bad} are symmetric in $p$ and $q$. If for example $p>i$, then $\Delta_p(v)(t^k_{ij}) = 0$ for all $k,j$, which implies that $\Delta_i (\Delta_p v \Delta_q w) = 0$.
\end{proof}
The estimates \eqref{e:schauder blocks good} and \eqref{e:schauder blocks bad} allow us to identify different components of the integral $\int_0^\cdot v(s) \mathrm{d} w(s)$. More precisely, \eqref{e:schauder blocks bad} indicates that the series $\sum_{p<q} \int_0^\cdot \Delta_p v(s) \mathrm{d} \Delta_q w(s)$ is rougher than the remainder $\sum_{p \ge q} \int_0^\cdot \Delta_p v(s) \mathrm{d} \Delta_q w(s)$. Integration by parts gives
\[
\sum_{p<q} \int_0^\cdot \Delta_p v(s) \mathrm{d} \Delta_q w(s) = \pi_<(v,w) - \sum_{p<q} \sum_{m,n} v_{pm} w_{qn} \int_0^\cdot \varphi_{qn}(s) \mathrm{d} \varphi_{pm}(s).
\]
This motivates us to decompose the integral into three components, namely
\begin{align*}
\sum_{p,q} \int_0^\cdot \Delta_p v(s) \mathrm{d} \Delta_q w(s) = L(v,w) + S(v,w) + \pi_<(v,w).
\end{align*}
Here $L$ is defined as the antisymmetric \emph{L\'evy area} (we will justify the name below by showing that $L$ is closely related to the L\'evy area of certain dyadic martingales):
\begin{align*}
L(v,w) :=\,& \sum_{p>q} \sum_{m,n} (v_{pm} w_{qn} - v_{qn} w_{pm}) \int_0^\cdot \varphi_{pm} \mathrm{d} \varphi_{qn}\\
=\,& \sum_{p} \left(\int_0^\cdot \Delta_p v \mathrm{d} S_{p-1} w - \int_0^\cdot \mathrm{d} (S_{p-1} v) \Delta_{p} w\right).
\end{align*}
The \emph{symmetric part} $S$ is defined as
\begin{align*}
S(v,w) :=\, & \sum_{m,n \le 1} v_{0m} w_{0n} \int_0^\cdot \varphi_{0m} \mathrm{d} \varphi_{0n} + \sum_{p\ge 1} \sum_m v_{pm} w_{pm} \int_0^\cdot \varphi_{pm} \mathrm{d} \varphi_{pm} \\
=\,& \sum_{m,n\le 1} v_{0m} w_{0n} \int_0^\cdot \varphi_{0m} \mathrm{d} \varphi_{0n} + \frac{1}{2} \sum_{p\ge 1} \Delta_p v \Delta_p w,
\end{align*}
and $\pi_<$ is the paraproduct defined in \eqref{e:paraproduct definition}. As we observed in Lemma~\ref{l:paraproduct definition}, $\pi_<(v,w)$ is always well defined, and it inherits the regularity of $w$. Let us study $S$ and $L$.
\begin{lem}\label{l:Levy area regularity}
Let $\alpha, \beta \in (0,1)$ be such that $\alpha + \beta > 1$. Then $L$ is a bounded bilinear operator from $\mathcal{C}^\alpha \times \mathcal{C}^\beta$ to $\mathcal{C}^{\alpha+\beta}$.
\end{lem}
\begin{proof}
We only argue for $\sum_{p} \int_0^\cdot \Delta_p v \mathrm{d} S_{p-1} w$, the term $- \int_0^\cdot \mathrm{d} (S_{p-1} v) \Delta_{p} w$ can be treated with the same arguments. Corollary~\ref{c:schauder blocks} (more precisely \eqref{e:schauder blocks good}) implies that
\begin{align*}
&\Big\lVert \sum_p \Delta_i \Big(\int_0^\cdot \Delta_p v \mathrm{d} S_{p-1} w\Big) \Big\rVert_\infty \\
&\hspace{50pt} \le \sum_{p\le i} \sum_{q<p} \Big\lVert \Delta_i \Big(\int_0^\cdot \Delta_p v \mathrm{d} \Delta_q w \Big)\Big\rVert_\infty + \sum_{p> i} \sum_{q<p} \Big\lVert \Delta_i \Big( \int_0^\cdot \Delta_p v \mathrm{d} \Delta_q w \Big)\Big\rVert_\infty\\
&\hspace{50pt} \le \bigg(\sum_{p\le i} \sum_{q<p} 2^{-2i + p + q} 2^{-p\alpha}\lVert v \rVert_\alpha 2^{-q\beta} \lVert w \rVert_\beta + \sum_{p> i} \sum_{q<p} 2^{- i + q} 2^{-p\alpha} \lVert v \rVert_\alpha 2^{-q\beta} \lVert w \rVert_\beta \bigg) \\
&\hspace{50pt} \lesssim_{\alpha + \beta} 2^{-i(\alpha+\beta)} \lVert v \rVert_\alpha \lVert w \rVert_\beta,
\end{align*}
where we used $1-\beta > 0$ and $2 - \alpha - \beta > 0$, and for the second series we also used that $\alpha+\beta>1$.
\end{proof}
Unlike the L\'evy area $L$, the symmetric part $S$ is always well defined. It is also smooth.
\begin{lem}\label{l:symmetric part}
Let $\alpha,\beta \in (0,1)$. Then $S$ is a bounded bilinear operator from $\mathcal{C}^\alpha \times \mathcal{C}^\beta$ to $\mathcal{C}^{\alpha+\beta}$.
\end{lem}
\begin{proof}
This is shown using the same arguments as in the proof of Lemma~\ref{l:Levy area regularity}.
\end{proof}
In conclusion, the integral consists of three components. The L\'evy area $L(v,w)$ is only defined if $\alpha + \beta>1$, but then it is smooth. The symmetric part $S(v,w)$ is always defined and smooth. And the paraproduct $\pi_<(v,w)$ is always defined, but it is rougher than the other components. To summarize:
\begin{thm}[Young's integral]\label{t:young integral}
Let $\alpha, \beta \in (0,1)$ be such that $\alpha + \beta > 1$, and let $v \in \mathcal{C}^\alpha$ and $w \in \mathcal{C}^\beta$. Then the integral
\begin{align*}
I(v,\mathrm{d} w) := \sum_{p,q} \int_0^\cdot \Delta_p v \mathrm{d} \Delta_q w = L(v,w) + S(v,w) + \pi_<(v,w) \in \mathcal{C}^\beta
\end{align*}
satisfies $\lVert I(v,\mathrm{d} w) \rVert_\beta \lesssim \lVert v \rVert_\alpha \lVert w \rVert_\beta$ and
\begin{align}\label{e:Young controlled}
\lVert I(v,\mathrm{d} w) - \pi_<(v,w) \rVert_{\alpha+\beta} \lesssim \lVert v \rVert_\alpha \lVert w \rVert_\beta.
\end{align}
\end{thm}
\subsubsection*{L\'evy area and dyadic martingales}
Here we show that the L\'evy area $L(v,w)(1)$ can be expressed in terms of the L\'evy area of suitable dyadic martingales. To simplify notation, we assume that $v(0) = w(0) = 0$, so that we do not have to bother with the components $v_{-10}$ and $w_{-10}$.
We define a filtration $(\mathcal{F}_n)_{n \ge 0}$ on $[0,1]$ by setting
\begin{align*}
\mathcal{F}_n = \sigma(\chi_{p m}: 0 \le p \le n, 0 \le m \le 2^p),
\end{align*}
we set $\mathcal{F} = \bigvee_n \mathcal{F}_n$, and we consider the Lebesgue measure on $([0,1], \mathcal{F})$. On this space, the process $M_n = \sum_{p=0}^n \sum_{m=0}^{2^p} \chi_{pm}$, $n \in \mathbb{N}$, is a martingale. For any continuous function $v:[0,1] \rightarrow \mathbb{R}$ with $v(0) = 0$, the process
\begin{align*}
M^v_n = \sum_{p=0}^n \sum_{m=0}^{2^p} \langle 2^{-p} \chi_{pm}, \mathrm{d} v\rangle \chi_{pm} = \sum_{p=0}^n \sum_{m=0}^{2^p} v_{pm} \chi_{pm} = \partial_t S_n v,
\end{align*}
$n \in \mathbb{N}$, is a martingale transform of $M$, and therefore a martingale as well. Since it will be convenient later, we also define $\mathcal{F}_{-1} = \{\emptyset, [0,1]\}$ and $M_{-1}^v = 0$ for every $v$.
Assume now that $v$ and $w$ are continuous real-valued functions with $v(0) = w(0) = 0$, and that the L\'evy area $L(v,w)(1)$ exists. Then it is given by
\begin{align*}
L(v,w)(1) & = \sum_{p=0}^\infty \sum_{q=0}^{p-1} \sum_{m,n} (v_{pm} w_{qn} - v_{qn} w_{pm}) \int_0^1 \varphi_{pm}(s) \chi_{qn}(s) \mathrm{d} s \\
& = \sum_{p=0}^\infty \sum_{q=0}^{p-1} \sum_{m,n} (v_{pm} w_{qn} - v_{qn} w_{pm}) 2^p \int_0^1 \chi_{qn}(s) 1_{[t^0_{pm}, t^2_{pm})}(s) \mathrm{d} s \langle \varphi_{pm},1 \rangle \\
& = \sum_{p=0}^\infty \sum_{q=0}^{p-1} \sum_{m,n} (v_{pm} w_{qn} - v_{qn} w_{pm}) 2^{-p} \int_0^1 \chi_{qn}(s) \chi_{pm}^2(s) \mathrm{d} s 2^{-p-2}\\
& = \sum_{p=0}^\infty \sum_{q=0}^{p-1} 2^{-2p-2} \int_0^1 \sum_{m,n} \sum_{m'} (v_{pm} w_{qn} - v_{qn} w_{pm}) \chi_{qn}(s) \chi_{pm}(s) \chi_{pm'}(s) \mathrm{d} s,
\end{align*}
where
in the last step we used that $\chi_{pm}$ and $\chi_{pm'}$ have disjoint support for $m \neq m'$. The $p$-th \emph{Rademacher function} (or \emph{square wave}) is defined for $p \ge 1$ as
\begin{align*}
r_p(t) := \sum_{m'=1}^{2^p} 2^{-p} \chi_{pm'}(t).
\end{align*}
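As a sanity check, the sketch below verifies that the $r_p$ are $\pm 1$-valued square waves and form an orthonormal family in $L^2([0,1])$. It assumes, as a convention of the sketch, that $\chi_{pm'}$ takes the values $\pm 2^p$ on the two halves of $[m' 2^{-p}, (m'+1)2^{-p})$, and it indexes $m'$ from $0$ rather than from $1$ as in the text.

```python
def r(p, t):
    # r_p = sum_{m'} 2^{-p} chi_{pm'}: +1 on the first half of each dyadic
    # interval [m' 2^{-p}, (m'+1) 2^{-p}) and -1 on the second half
    return 1.0 if int(t * 2 ** (p + 1)) % 2 == 0 else -1.0

P = 6
cells = 2 ** (P + 1)
mids = [(k + 0.5) / cells for k in range(cells)]   # midpoint rule is exact:
for p in range(1, P + 1):                          # r_p r_q is constant on
    assert set(r(p, t) for t in mids) == {1.0, -1.0}   # each cell
    for q in range(1, P + 1):
        ip = sum(r(p, t) * r(q, t) for t in mids) / cells
        assert abs(ip - (1.0 if p == q else 0.0)) < 1e-12
print("Rademacher functions: +-1-valued and orthonormal")
```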
The martingale associated to the Rademacher functions is given by $R_0 := 0$ and $R_p := \sum_{k=1}^p r_k$ for $p \ge 1$. Let us write $\Delta M^v_p = M^v_p - M^v_{p-1}$ and similarly for $M^w$ and $R$ and all other discrete time processes that arise. This notation somewhat clashes with the expression $\Delta_p v$ for the dyadic blocks of $v$, but we will only use it in the following lines, where we do not directly work with dyadic blocks. The quadratic covariation of two dyadic martingales is defined as $[M, N]_n := \sum_{k=0}^n \Delta M_k \Delta N_k$, and the discrete time stochastic integral is defined as $(M\cdot N)_n := \sum_{k=0}^n M_{k-1} \Delta N_k$. Writing $E(\cdot)$ for the integral $\int_0^1 \cdot \mathrm{d} s$, we obtain
\begin{align*}
L(v,w)(1) & = \sum_{p=0}^\infty \sum_{q=0}^{p-1} 2^{-p-2} E\left(\Delta M^v_p \Delta M^w_q \Delta R_p - \Delta M^v_q \Delta M^w_p \Delta R_p \right) \\
& = \sum_{p=0}^\infty 2^{-p-2} E\left(\left(M^w_{p-1} \Delta M^v_p - M^v_{p-1} \Delta M^w_p \right) \Delta R_p \right)\\
& = \sum_{p=0}^\infty 2^{-p-2} E\left(\Delta \left[M^w \cdot M^v - M^v \cdot M^w, R\right]_p \right).
\end{align*}
Hence, $L(v,w)(1)$ is closely related to the L\'evy area $1/2(M^w \cdot M^v - M^v \cdot M^w)$ of the dyadic martingale $(M^v, M^w)$.
\section{Paracontrolled paths and pathwise integration beyond Young}\label{s:schauder rough path integral}
In this section we construct a rough path integral in terms of Schauder functions.
\subsection{Paracontrolled paths}
We observed in Section~\ref{s:paradifferential calculus} that for $w \in \mathcal{C}^\alpha$ and $F \in C^{1+\beta/\alpha}_b$ we have $F(w) - \pi_<(\mathrm{D} F(w),w) \in \mathcal{C}^{\alpha+\beta}$. In Section~\ref{s:young} we observed that if $v \in \mathcal{C}^\alpha$, $w\in \mathcal{C}^\beta$ and $\alpha + \beta > 1$, then the Young integral $I(v,\mathrm{d} w)$ satisfies $I(v,\mathrm{d} w) - \pi_<(v,w) \in \mathcal{C}^{\alpha + \beta}$. Hence, in both cases the function under consideration can be written as $\pi_<(f^w,w)$ for a suitable $f^w$, plus a smooth remainder. We make this our definition of paracontrolled paths:
\begin{defn}
Let $\alpha > 0$ and $v \in \mathcal{C}^\alpha(\mathbb{R}^d)$. For $\beta \in (0,\alpha]$ we define
\[
\mathcal{D}^{\beta}_v := \mathcal{D}^\beta_v(\mathbb{R}^n) := \left\{(f,f^v) \in \mathcal{C}^\alpha(\mathbb{R}^n) \times \mathcal{C}^\beta(\L(\mathbb{R}^d,\mathbb{R}^n)): f^\sharp = f - \pi_<(f^v,v)\in \mathcal{C}^{\alpha+\beta}(\mathbb{R}^n)\right\}.
\]
If $(f,f^v) \in \mathcal{D}^\beta_v$, then $f$ is called \emph{paracontrolled} by $v$. The function $f^v$ is called the \emph{derivative} of $f$ with respect to $v$. Abusing notation, we write $f \in \mathcal{D}^\beta_v$ when it is clear from the context what the derivative $f^v$ is supposed to be. We equip $\mathcal{D}^\beta_v$ with the norm
\begin{align*}
\lVert f \rVert_{v,\beta} := \lVert f^v \rVert_\beta + \lVert f^\sharp\rVert_{\alpha+\beta}.
\end{align*}
If $v \in \mathcal{C}^\alpha$ and $(\tilde f, \tilde f^{\tilde v}) \in \mathcal{D}^\beta_{\tilde v}$, then we also write
\[
d_{\mathcal{D}^\beta}(f,\tilde f) := \lVert f^v - \tilde f^{\tilde v} \rVert_\beta + \lVert f^\sharp - \tilde f^\sharp \rVert_{\alpha+\beta}.
\]
\end{defn}
\begin{ex}
Let $\alpha \in (0,1)$ and $v \in \mathcal{C}^\alpha$. Then Proposition~\ref{p:paralinearization} shows that $F(v) \in \mathcal{D}^\beta_v$ for every $F \in C^{1+\beta/\alpha}_b$, with derivative $\mathrm{D} F(v)$.
\end{ex}
\begin{ex}
Let $\alpha + \beta >1$ and $v\in \mathcal{C}^\alpha$, $w \in \mathcal{C}^\beta$. Then by~\eqref{e:Young controlled}, the Young integral $I(v,\mathrm{d} w)$ is in $\mathcal{D}^\alpha_w$, with derivative $v$.
\end{ex}
\begin{ex}\label{ex:controlled old vs new}
If $\alpha + \beta < 1$ and $v \in \mathcal{C}^\alpha$, then $(f,f^v) \in \mathcal{D}^\beta_v$ if and only if $|f_{s,t} - f^v_s v_{s,t}| \lesssim |t-s|^{\alpha+\beta}$ and in that case
\[
\lVert f^v\rVert_\infty + \sup_{s\neq t} \frac{| f^v_{s,t} |}{|t-s|^\beta} + \sup_{s \neq t}\frac{|f_{s,t} - f^v_s v_{s,t}|}{|t-s|^{\alpha+\beta}} \lesssim \|f\|_{v,\beta}(1+\lVert v \rVert_\alpha).
\]
Indeed we have $|f^v_s v_{s,t} - \pi_<(f^v,v)_{s,t}| \lesssim |t-s|^{\alpha+\beta} \lVert f^v\rVert_\beta \lVert v \rVert_\alpha$, which can be shown by arguments similar to those used for Lemma~B.2 in~\cite{Gubinelli2012}.
In other words, for $\alpha \in (0,1/2)$ the space $\mathcal{D}^\alpha_v$ coincides with the space of controlled paths defined in Section~\ref{s:rough paths}.
\end{ex}
The following commutator estimate, the analog of Theorem~2.3 of~\cite{Bony1981} in our setting, will be useful for establishing some stability properties of~$\mathcal{D}^\beta_v$.
\begin{lem}\label{l:commutator 2}
Let $\alpha, \beta \in (0,1)$, and let $u\in C([0,1],\L(\mathbb{R}^n;\mathbb{R}^m))$, $v\in \mathcal{C}^\alpha(\L(\mathbb{R}^d;\mathbb{R}^n))$, and $w \in \mathcal{C}^\beta(\mathbb{R}^d)$. Then
\begin{align*}
\lVert \pi_<(u, \pi_<(v,w)) - \pi_<(uv, w)\rVert_{\alpha + \beta} \lesssim \lVert u\rVert_\infty \lVert v \rVert_\alpha \lVert w \rVert_\beta.
\end{align*}
\end{lem}
\begin{proof}
We have
\begin{align*}
\pi_<(u, \pi_<(v,w)) - \pi_<(uv, w) & = \sum_{p,m} (S_{p-1}u (\pi_<(v,w))_{pm} - S_{p-1}(uv) w_{pm})\varphi_{pm}
\end{align*}
and $[S_{p-1}u (\pi_<(v,w))_{pm} - S_{p-1}(uv) w_{pm}]|_{[t^0_{pm},t^2_{pm}]}$ is affine. By Lemma \ref{l:upm hoelder} it suffices to control $\lVert[S_{p-1}u (\pi_<(v,w))_{pm} - S_{p-1}(uv) w_{pm}]|_{[t^0_{pm},t^2_{pm}]}\rVert_\infty$.
The cases $(p,m) = (-1,0)$ and $(p,m) = (0,0)$ are easy, so let $p \ge 0$ and $m \ge 1$. For $r<q<p$ we denote by $m_q$ and $m_r$ the unique indices in generations $q$ and $r$, respectively, for which $\chi_{pm} \varphi_{q m_q} \not \equiv 0$ and $\chi_{pm} \varphi_{r m_r} \not \equiv 0$. We apply Lemma \ref{l:schauder coefficients of iterated integrals} to obtain for $q<p$
\begin{align*}
|(S_{q-1}v \Delta_q w)_{pm}| & = \Big|\sum_{r<q} v_{rm_r} w_{qm_q} 2^{-p} \langle \chi_{pm}, \mathrm{d}(\varphi_{rm_r} \varphi_{qm_q})\rangle \Big|\\
& = \Big|\sum_{r<q} v_{rm_r} w_{qm_q} 2^{-p} \langle \chi_{pm}, \chi_{rm_r} \varphi_{qm_q} + \varphi_{rm_r} \chi_{qm_q} \rangle\Big| \\
& \le \lVert v \rVert_\alpha \lVert w \rVert_\beta \sum_{r<q} 2^{-r\alpha} 2^{-q\beta} 2^{-p}2^{-2p+r+p+q} \lesssim 2^{-2p+q(2-\alpha-\beta)} \lVert v \rVert_\alpha \lVert w \rVert_\beta.
\end{align*}
Hence
\[
\Big\lVert \Big( S_{p-1}u \sum_{q<p} (S_{q-1} v \Delta_q w)_{pm} \Big) \Big|_{[t^0_{pm},t^2_{pm}]}\Big\rVert_{\infty} \lesssim \lVert u \rVert_\infty \lVert v \rVert_\alpha \lVert w \rVert_\beta 2^{-p(\alpha+\beta)}.
\]
If $p<q$, then $\Delta_q w(t^k_{pm}) = 0$ for all $k$ and $m$, and therefore $(S_{q-1}v \Delta_q w)_{pm}=0$, so that it only remains to bound $\|[S_{p-1}u (S_{p-1}v \Delta_p w)_{pm} - S_{p-1}(uv) w_{pm}]|_{[t^0_{pm}, t^2_{pm}]}\|_\infty$. We have $\Delta_p w(t^0_{pm}) = \Delta_p w(t^2_{pm})=0$ and $\Delta_p w(t^1_{pm}) = w_{pm}/2$. On $[t^0_{pm}, t^2_{pm}]$, the function $S_{p-1} v$ is given by the linear interpolation of $v(t^0_{pm})$ and $v(t^2_{pm})$, and therefore $(S_{p-1}v \Delta_p w)_{pm} = \frac{1}{2}(v(t^0_{pm}) + v(t^2_{pm}))w_{pm}$, leading to
\begin{align*}
&\lVert [S_{p-1}u (S_{p-1}v \Delta_p w)_{pm} - S_{p-1}(uv) w_{pm}]|_{[t^0_{pm},t^2_{pm}]}\rVert_{\infty}\\
&\hspace{80pt} \le |w_{pm}|\times \Big\lVert\Big[ \Big(u(t^0_{pm})+\frac{\cdot-t^0_{pm}}{t^2_{pm}-t^0_{pm}}u_{t^0_{pm},t^2_{pm}}\Big)\frac{v(t^0_{pm}) + v(t^2_{pm})}{2} \\
&\hspace{160pt} - \Big((uv)(t^0_{pm}) + \frac{\cdot-t^0_{pm}}{t^2_{pm}-t^0_{pm}}(uv)_{t^0_{pm},t^2_{pm}}\Big)\Big]\Big|_{[t^0_{pm},t^2_{pm}]}\Big\rVert_{\infty} \\
&\hspace{80pt} \lesssim \lVert u\rVert_{\infty} \lVert v \rVert_\alpha \lVert w \rVert_\beta 2^{-p(\alpha+\beta)},
\end{align*}
where the last step follows by rebracketing.
\end{proof}
As a consequence, we can show that paracontrolled paths are stable under the application of smooth functions.
\begin{cor}\label{c:controlled under smooth}
Let $\alpha \in (0,1)$, $\beta \in (0,\alpha]$, $v \in \mathcal{C}^\alpha$, and $f \in \mathcal{D}^\beta_v$ with derivative $f^v$. Let $F \in C^{1+\beta/\alpha}_b$. Then $F(f) \in \mathcal{D}^\beta_v$ with derivative $\mathrm{D} F(f) f^v$, and
\begin{align*}
\lVert F(f)\rVert_{v,\beta} \lesssim \lVert F\rVert_{C^{1+\beta/\alpha}_b} (1 + \lVert v \rVert_\alpha)^{1+\beta/\alpha} (1+\lVert f \rVert_{v,\beta}) (1 + \lVert f^v \rVert_\infty)^{1+\beta/\alpha}.
\end{align*}
Moreover, there exists a polynomial $P$ which satisfies for all $F \in C^{2+\beta/\alpha}_b$, $\tilde v \in \mathcal{C}^\alpha$, $\tilde f \in \mathcal{D}^\beta_{\tilde v}$, and
\[
M = \max\{\lVert v \rVert_\alpha, \lVert \tilde v \rVert_\alpha, \lVert f \rVert_{v,\beta}, \lVert \tilde f \rVert_{\tilde v,\beta}\}
\]
the bound
\[
d_{\mathcal{D}^\beta}(F(f), F(\tilde f)) \le P(M) \lVert F\rVert_{C^{2+\beta/\alpha}_b}(d_{\mathcal{D}^\beta}(f,\tilde f) + \lVert v - \tilde v\rVert_\alpha).
\]
\end{cor}
\begin{proof}
The estimate for $\|\mathrm{D} F(f) f^v\|_\beta$ is straightforward. For the remainder we apply Proposition~\ref{p:paralinearization} and Lemma~\ref{l:commutator 2} to obtain
\begin{align*}
\|F(f)^\sharp\|_{\alpha+\beta} & \le \|F(f) - \pi_<(\mathrm{D} F(f), f)\|_{\alpha+\beta} + \|\pi_<(\mathrm{D} F(f), f^\sharp)\|_{\alpha+\beta} \\
&\quad + \|\pi_<(\mathrm{D} F(f), \pi_<(f^v,v)) - \pi_<(\mathrm{D} F(f) f^v, v) \|_{\alpha+\beta} \\
&\lesssim \lVert F \rVert_{C^{1+\beta/\alpha}_b} (1 + \lVert \pi_<(f^v,v)\rVert_\alpha)^{1+\beta/\alpha}(1 + \lVert f^\sharp \rVert_{\alpha+\beta}) \\
&\quad + \lVert F \rVert_{C^1_b} \lVert f \rVert_{v,\beta} + \lVert F \rVert_{C^1_b} \lVert f^v \rVert_{\beta} \lVert v \rVert_\alpha \\
&\lesssim \lVert F\rVert_{C^{1+\beta/\alpha}_b} (1+\lVert f^v \rVert_{\infty})^{1+\beta/\alpha} (1 + \lVert v \rVert_\alpha)^{1+\beta/\alpha}(1+\lVert f \rVert_{v,\beta}).
\end{align*}
The difference $F(f) - F(\tilde f)$ is treated in the same way.
\end{proof}
When solving differential equations it will be crucial to have a bound which is linear in $\lVert f \rVert_{v,\beta}$. The superlinear dependence on $\lVert f^v \rVert_\infty$ will not pose any problem as we will always have $f^v = F(\tilde f)$ for some suitable $\tilde f$, so that for bounded $F$ we get $\lVert F(f)\rVert_{v,\beta} \lesssim_{F,v} 1 + \lVert f \rVert_{v,\beta}$.
\subsection{A basic commutator estimate}
Here we prove the commutator estimate which will be the main ingredient in the construction of the integral $I(f,\mathrm{d} g)$, where $f$ is paracontrolled by $v$ and $g$ is paracontrolled by $w$, and where we assume that the integral $I(v,\mathrm{d} w)$ exists.
\begin{prop}\label{p:commutator 1}
Let $\alpha, \beta, \gamma \in (0,1)$, and assume that $\alpha+\beta+\gamma>1$ and $\beta+\gamma < 1$. Let $f\in \mathcal{C}^\alpha$, $v\in \mathcal{C}^\beta$, and $w \in \mathcal{C}^\gamma$. Then the ``commutator''
\begin{align}\label{e:commutator 1 def}
C(f,v,w) &:= L(\pi_<(f, v),w) - I(f,\mathrm{d} L(v,w))\\ \nonumber
& := \lim_{N\rightarrow \infty} [L(S_N(\pi_<(f, v)), S_N w) - I(f,\mathrm{d} L(S_N v,S_N w))] \\ \nonumber
&\, = \lim_{N\rightarrow \infty} \sum_{p\le N} \sum_{q<p} \Biggl[ \int_0^\cdot \Delta_p (\pi_<(f,v))(s) \mathrm{d} \Delta_{q} w(s) - \int_0^\cdot \mathrm{d} (\Delta_{q} (\pi_<(f,v)))(s) \Delta_{p} w(s) \\ \nonumber
&\hspace{90pt} - \Bigl(\int_0^\cdot f(s) \Delta_p v(s) \mathrm{d} \Delta_{q} w(s) - \int_0^\cdot f(s) \mathrm{d} (\Delta_{q} v)(s) \Delta_{p} w(s)\Bigr) \Biggr]
\end{align}
converges in $\mathcal{C}^{\alpha+\beta+\gamma-\varepsilon}$ for all $\varepsilon > 0$. Moreover,
\begin{align*}
\lVert C(f,v,w)\rVert_{\alpha + \beta + \gamma} \lesssim \lVert f\rVert_\alpha \lVert v \rVert_\beta \lVert w \rVert_\gamma.
\end{align*}
\end{prop}
\begin{proof}
We only argue for the first difference in~\eqref{e:commutator 1 def}, i.e. for
\begin{align}\label{e:commutator 1 pr1}
X_N := \sum_{p\le N} \sum_{q<p} \left[ \int_0^\cdot \Delta_p (\pi_<(f,v))(s) \mathrm{d} \Delta_{q} w(s) - \int_0^\cdot f(s) \Delta_p v(s) \mathrm{d} \Delta_{q} w(s) \right].
\end{align}
The second difference can be handled using the same arguments. First we prove that $(X_N)$ converges uniformly, then we show that $\lVert X_N \rVert_{\alpha + \beta + \gamma}$ stays uniformly bounded. This will imply the desired result, since bounded sets in $\mathcal{C}^{\alpha+\beta+\gamma}$ are relatively compact in $\mathcal{C}^{\alpha+\beta+\gamma-\varepsilon}$.
To prove uniform convergence, note that
\begin{align}\label{e:commutator 1 pr2}\nonumber
X_N - X_{N-1} & = \sum_{q<N}\left[ \int_0^\cdot \Delta_N (\pi_<(f,v))(s) \mathrm{d} \Delta_{q} w(s) - \int_0^\cdot f(s) \Delta_N v(s) \mathrm{d} \Delta_{q} w(s) \right]\\ \nonumber
& = \sum_{q<N} \Biggl[ \sum_{j\le N} \sum_{i<j} \int_0^\cdot \Delta_N (\Delta_i f \Delta_j v)(s) \mathrm{d} \Delta_{q} w(s)\\
&\hspace{40pt} - \sum_{j \ge N} \sum_{i\le j} \int_0^\cdot \Delta_j (\Delta_i f \Delta_N v)(s) \mathrm{d} \Delta_{q} w(s) \Biggr],
\end{align}
where for the second term it is possible to take the infinite sum over $j$ outside of the integral because $\sum_j \Delta_j g$ converges uniformly to $g$ and because $\Delta_q w$ is a finite variation path. We also used that $\Delta_N (\Delta_i f \Delta_j v)=0$ whenever $i > N$ or $j > N$. Only very few terms in \eqref{e:commutator 1 pr2} cancel. Nonetheless these cancellations are crucial, since they eliminate most terms for which we only have the worse estimate \eqref{e:schauder blocks product bad} in Corollary~\ref{c:schauder blocks product}. We obtain
\begin{align}\label{e:commutator 1 pr3}\nonumber
X_N - X_{N-1}& = \sum_{q<N} \sum_{j<N} \sum_{i<j} \int_0^\cdot \Delta_N (\Delta_i f \Delta_j v)(s) \mathrm{d} \Delta_{q} w(s) - \sum_{q<N} \int_0^\cdot \Delta_N (\Delta_N f \Delta_N v)(s) \mathrm{d} \Delta_{q} w(s) \\ \nonumber
&\hspace{20pt} - \sum_{q<N} \sum_{j > N} \sum_{i<j} \int_0^\cdot \Delta_j (\Delta_i f \Delta_N v)(s) \mathrm{d} \Delta_{q} w(s) \\
&\hspace{20pt} - \sum_{q<N} \sum_{j > N} \int_0^\cdot \Delta_j (\Delta_j f \Delta_N v)(s) \mathrm{d} \Delta_{q} w(s).
\end{align}
Note that $\lVert \partial_t \Delta_q w\rVert_\infty \lesssim 2^q \lVert \Delta_q w \rVert_\infty$. Hence, an application of Corollary~\ref{c:schauder blocks product}, where we use \eqref{e:schauder blocks product good} for the first three terms and \eqref{e:schauder blocks product bad} for the fourth term, yields
\begin{align}\label{e:commutator 1 pr convergence speed} \nonumber
\lVert X_N - X_{N-1} \rVert_\infty &\lesssim \lVert f \rVert_\alpha \lVert v \rVert_\beta \lVert w\rVert_\gamma \Biggl[ \sum_{q<N} \sum_{j<N} \sum_{i<j} 2^{-2N + i + j} 2^{-i\alpha} 2^{-j\beta}2^{q(1-\gamma)} \\ \nonumber
&\qquad + \sum_{q<N} 2^{-N(\alpha + \beta)} 2^{q(1-\gamma)} + \sum_{q<N} \sum_{j > N} \sum_{i<j} 2^{-2j+i+N} 2^{-i\alpha} 2^{-N\beta} 2^{q(1-\gamma)} \\ \nonumber
&\qquad + \sum_{q<N} \sum_{j > N} 2^{-j\alpha} 2^{-N\beta} 2^{q(1-\gamma)}\Biggr] \\
&\lesssim \lVert f \rVert_\alpha \lVert v \rVert_\beta \lVert w\rVert_\gamma 2^{-N(\alpha + \beta + \gamma - 1)},
\end{align}
where in the last step we used $\alpha, \beta, \gamma < 1$. Since $\alpha + \beta + \gamma > 1$, this gives us the uniform convergence of $(X_N)$.
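To spell out the last step for the first of the four sums (the remaining sums are treated in the same way), note that since $\alpha, \beta, \gamma < 1$ every geometric sum below is dominated by its largest term:
\begin{align*}
\sum_{q<N} \sum_{j<N} \sum_{i<j} 2^{-2N + i + j} 2^{-i\alpha} 2^{-j\beta}2^{q(1-\gamma)} & \lesssim 2^{-2N} \sum_{j<N} 2^{j(1-\alpha)} 2^{j(1-\beta)} \sum_{q<N} 2^{q(1-\gamma)} \\
& \lesssim 2^{-2N} 2^{N(2-\alpha-\beta)} 2^{N(1-\gamma)} = 2^{-N(\alpha + \beta + \gamma - 1)}.
\end{align*}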
Next let us show that $\lVert X_N \rVert_{\alpha + \beta + \gamma} \lesssim \lVert f \rVert_\alpha \lVert v \rVert_\beta \lVert w\rVert_\gamma$ for all $N$. Similarly to \eqref{e:commutator 1 pr3} we obtain for $n \in \mathbb{N}$
\begin{align*}
\Delta_n X_N &= \sum_{p\le N} \sum_{q<p}\Delta_n \Biggl[ \sum_{j<p} \sum_{i<j} \int_0^\cdot \Delta_p (\Delta_i f \Delta_j v)(s) \mathrm{d} \Delta_{q} w(s) - \int_0^\cdot \Delta_p (\Delta_p f \Delta_p v)(s) \mathrm{d} \Delta_{q} w(s)\\
&\hspace{80pt}- \sum_{j>p} \sum_{i\le j} \int_0^\cdot \Delta_j (\Delta_i f \Delta_p v)(s) \mathrm{d} \Delta_{q} w(s) \Biggr],
\end{align*}
and therefore by Corollary~\ref{c:schauder blocks}
\begin{align*}
\lVert \Delta_n X_N\rVert_\infty &\lesssim \sum_{p} \sum_{q<p} \Biggl[ \sum_{j<p} \sum_{i<j} 2^{-(n\vee p) - n + p + q} \lVert \Delta_p (\Delta_i f \Delta_j v)\rVert_\infty \lVert \Delta_{q} w\rVert_\infty \\
&\hspace{60pt} + 2^{-(n\vee p) - n + p + q}\lVert \Delta_p (\Delta_p f \Delta_p v)\rVert_\infty \lVert \Delta_{q} w\rVert_\infty\\
&\hspace{60pt} + \sum_{j>p} \sum_{i\le j} 2^{-(n\vee j) - n + j + q} \lVert\Delta_j(\Delta_i f \Delta_p v)\rVert_\infty \lVert \Delta_{q} w\rVert_\infty \Biggr].
\end{align*}
Now we apply Corollary~\ref{c:schauder blocks product}, where for the last term we distinguish the cases $i < j$ and $i = j$. Using that $1-\gamma > 0$, we get
\begin{align*}
\lVert \Delta_n X_N\rVert_\infty & \lesssim \lVert f \rVert_\alpha \lVert v \rVert_\beta \lVert w\rVert_\gamma \sum_p 2^{p(1-\gamma)} \Biggl[ \sum_{j<p} \sum_{i<j} 2^{-(n\vee p) - n + p} 2^{-2p} 2^{i(1-\alpha)} 2^{j(1-\beta)} \\
&\hspace{150pt} + 2^{-(n\vee p) - n + p} 2^{-p\alpha} 2^{-p\beta}\\
&\hspace{150pt} + \sum_{j>p} \sum_{i < j} 2^{-(n\vee j) - n + j} 2^{-2j + i(1-\alpha) + p (1-\beta)}\\
&\hspace{150pt} + \sum_{j>p} 2^{-(n\vee j) - n + j} 2^{-j\alpha - p \beta} \Biggr]\\
&\lesssim \lVert f \rVert_\alpha \lVert v \rVert_\beta \lVert w\rVert_\gamma 2^{-n(\alpha+\beta+\gamma)},
\end{align*}
where we used both that $\alpha+\beta+\gamma>1$ and that $\beta+\gamma<1$.
\end{proof}
\begin{rmk}
If $\beta + \gamma = 1$, we can apply Proposition~\ref{p:commutator 1} with $\beta - \varepsilon$ to obtain that $C(f,v,w) \in \mathcal{C}^{\alpha + \beta + \gamma - \varepsilon}$ for every sufficiently small $\varepsilon > 0$. If $\beta + \gamma > 1$, then we are in the Young setting and there is no need to introduce the commutator.
\end{rmk}
For later reference, we collect the following result from the proof of Proposition~\ref{p:commutator 1}:
\begin{lem}\label{l:commutator speed of convergence}
Let $\alpha, \beta, \gamma, f, v, w$ be as in Proposition~\ref{p:commutator 1}. Then
\[
\lVert C(f,v,w) - [L(S_N(\pi_<(f, v)), S_N w) - I(f,\mathrm{d} L(S_N v,S_N w))]\rVert_\infty \lesssim 2^{-N(\alpha + \beta + \gamma - 1)} \lVert f \rVert_\alpha \lVert v \rVert_\beta \lVert w\rVert_\gamma.
\]
\end{lem}
\begin{proof}
Simply sum up~\eqref{e:commutator 1 pr convergence speed} over all levels $N' > N$.
\end{proof}
\subsection{Pathwise integration for paracontrolled paths}\label{s:schauder rough path}
In this section we apply the commutator estimate to construct the rough path integral under the assumption that the L\'evy area exists for a given reference path.
\begin{thm}\label{t:rough path integral}
Let $\alpha \in (1/3,1)$, $\beta \in (0,\alpha]$ and assume that $2\alpha+\beta>1$ as well as $\alpha+\beta \neq 1$. Let $v \in \mathcal{C}^\alpha(\mathbb{R}^d)$ and assume that the L\'evy area
\begin{align*}
L(v,v) := \lim_{N \rightarrow \infty}\bigl( L(S_N v^k, S_N v^\ell) \bigr)_{1\le k \le d, 1\le \ell \le d}
\end{align*}
converges uniformly and that $\sup_N \lVert L(S_N v, S_N v) \rVert_{2\alpha} < \infty$. Let $f \in \mathcal{D}^\beta_v(\L(\mathbb{R}^d, \mathbb{R}^m))$. Then $I(S_N f, \mathrm{d} S_N v)$ converges in $\mathcal{C}^{\alpha - \varepsilon}$ for all $\varepsilon > 0$. Denoting the limit by $I(f,\mathrm{d} v)$, we have
\begin{align*}
\lVert I(f,\mathrm{d} v)\rVert_\alpha \lesssim \lVert f \rVert_{v,\beta} \bigl(\lVert v \rVert_\alpha + \lVert v \rVert_\alpha^2 + \lVert L(v,v) \rVert_{2\alpha}\bigr).
\end{align*}
Moreover, $I(f,\mathrm{d} v) \in \mathcal{D}^{\alpha}_v$ with derivative $f$ and
\begin{align*}
\lVert I(f,\mathrm{d} v) \rVert_{v,\alpha} \lesssim \lVert f \rVert_{v,\beta} \bigl(1 + \lVert v \rVert_\alpha^2 + \lVert L(v,v) \rVert_{2\alpha}\bigr).
\end{align*}
\end{thm}
\begin{proof}
If $\alpha+\beta >1$, everything follows from the Young case, Theorem~\ref{t:young integral}, so let $\alpha+\beta<1$. We decompose
\begin{align*}
I(S_N f, \mathrm{d} S_N v) & = S(S_N f, S_N v) + \pi_<(S_N f, S_N v) + L(S_N f^\sharp, S_N v) \\
&\quad + [L(S_N \pi_<(f^v,v), S_N v) - I(f^v, \mathrm{d} L(S_N v,S_N v))] + I(f^v, \mathrm{d} L(S_N v,S_N v)).
\end{align*}
Convergence then follows from Proposition~\ref{p:commutator 1} and Theorem~\ref{t:young integral}. The limit is given by
\[
I(f,\mathrm{d} v) = S(f,v) + \pi_<(f,v) + L(f^\sharp, v) + C(f^v,v,v) + I(f^v, \mathrm{d} L(v,v)),
\]
from where we easily deduce the claimed bounds.
\end{proof}
\begin{rmk}\label{r:locality of integral}
Since $I(f,\mathrm{d} v) = \lim_{N\to \infty} \int_0^\cdot S_N f \mathrm{d} S_N v$, the integral is a local operator in the sense that $I(f,\mathrm{d} v)$ is constant on every interval $[s,t]$ for which $f|_{[s,t]}=0$. In particular we can estimate $I(f,\mathrm{d} v)|_{[0,t]}$ using only $f|_{[0,t]}$ and $f^v|_{[0,t]}$.
\end{rmk}
For fixed $v$ and $L(v,v)$, the map $f \mapsto I(f,\mathrm{d} v)$ is linear and bounded from $\mathcal{D}^\beta_v$ to $\mathcal{D}^\alpha_v$, and this is what we will need to solve differential equations driven by $v$. But we can also estimate the speed of convergence of $I(S_N f, \mathrm{d} S_N v)$ to $I(f, \mathrm{d} v)$, measured in uniform distance rather than in $\mathcal{C}^\alpha$:
\begin{cor}\label{c:rough path speed of convergence}
Let $\alpha \in (1/3,1/2]$ and let $\beta,v,f$ be as in Theorem~\ref{t:rough path integral}. Then we have for all $\varepsilon \in (0, 2\alpha + \beta-1)$
\begin{align*}
\lVert I(S_N f, \mathrm{d} S_N v) - I(f,\mathrm{d} v)\rVert_\infty &\lesssim_\varepsilon 2^{-N(2\alpha + \beta - 1 - \varepsilon)} \lVert f\rVert_{v,\beta} \bigl(\lVert v \rVert_\alpha + \lVert v \rVert_\alpha^2 \bigr)\\
&\qquad +\lVert f^v \rVert_\beta \lVert L(S_N v, S_N v) - L(v,v)\rVert_{2\alpha-\varepsilon}.
\end{align*}
\end{cor}
\begin{proof}
We decompose $I(S_N f, \mathrm{d} S_N v)$ as described in the proof of Theorem~\ref{t:rough path integral}. This gives us for example the term
\[
\| \pi_<(S_N f - f, S_N v) + \pi_<(f, S_N v - v)\|_\infty \lesssim_\varepsilon \| S_N f - f\|_\infty \| v \|_\alpha + \| f \|_\infty \|S_N v - v\|_\varepsilon
\]
for all $\varepsilon > 0$. From here it is easy to see that
\[
\| \pi_<(S_N f - f, S_N v) + \pi_<(f, S_N v - v)\|_\infty \lesssim 2^{-N(\alpha-\varepsilon)} \|f\|_\alpha \|v\|_\alpha \lesssim 2^{-N(\alpha-\varepsilon)} \|f\|_{v,\beta} (\|v\|_\alpha + \|v\|_\alpha^2).
\]
But now $\beta\le \alpha \le 1/2$ and therefore $\alpha \ge 2\alpha + \beta - 1$.
Let us treat one of the critical terms, say $L(S_N f^\sharp, S_N v) - L(f^\sharp, v)$. Since $2 \alpha + \beta - \varepsilon > 1$, we can apply Lemma~\ref{l:Levy area regularity} to obtain
\begin{align*}
&\lVert L(S_N f^\sharp, S_N v) - L(f^\sharp, v) \rVert_\infty \lesssim \lVert L(S_N f^\sharp - f^\sharp, S_N v)\rVert_{1+\varepsilon} + \lVert L(f^\sharp, S_N v - v)\rVert_{1+\varepsilon}\\
&\hspace{120pt}\lesssim_\varepsilon \lVert S_N f^\sharp - f^\sharp\rVert_{1+\varepsilon - \alpha} \lVert v \rVert_\alpha + \lVert f^\sharp\rVert_{\alpha+\beta} \lVert S_N v - v\rVert_{1+\varepsilon - \alpha-\beta} \\
&\hspace{120pt}\lesssim 2^{-N(\alpha + \beta - (1 + \varepsilon - \alpha))} \lVert f^\sharp\rVert_{\alpha+\beta} \lVert v \rVert_\alpha + 2^{-N(\alpha - (1+\varepsilon - \alpha-\beta))} \lVert f^\sharp\rVert_{\alpha+\beta} \lVert v \rVert_\alpha \\
&\hspace{120pt}\lesssim 2^{-N(2\alpha + \beta - 1 -\varepsilon)}\lVert f^\sharp \rVert_{\alpha+\beta} \lVert v \rVert_\alpha.
\end{align*}
Lemma~\ref{l:commutator speed of convergence} gives
\begin{align*}
\lVert L(S_N \pi_<(f^v,v), S_N v) - L(\pi_<(f^v,v),v) \rVert_\infty &\lesssim 2^{-N(2 \alpha + \beta - 1)} \lVert f^v \rVert_\beta \lVert v\rVert_\alpha^2\\
&\quad + \lVert I(f^v, \mathrm{d} L(S_N v, S_N v)) - I(f^v, \mathrm{d} L(v,v))\rVert_\infty.
\end{align*}
The second term on the right hand side can be estimated using the continuity of the Young integral, and the proof is complete.
\end{proof}
\begin{rmk}
In Lemma~\ref{l:commutator speed of convergence} we saw that the rate of convergence of
\[
L(S_N \pi_<(f^v,v),S_N v) - I(f^v, \mathrm{d} L(S_Nv, S_Nv)) - (L(\pi_<(f^v,v),v) - I(f^v, \mathrm{d} L(v,v)))
\]
is in fact $2^{-N(2\alpha+\beta - 1)}$ when measured in uniform distance, and not just $2^{-N(2\alpha +\beta- 1 -\varepsilon)}$. It is possible to show that this optimal rate is attained by the other terms as well, so that
\begin{align*}
\lVert I(S_N f, \mathrm{d} S_N v) - I(f,\mathrm{d} v)\rVert_\infty &\lesssim 2^{-N(2\alpha + \beta - 1)} \lVert f\rVert_{v,\beta} \bigl(\lVert v \rVert_\alpha + \lVert v \rVert_\alpha^2 \bigr)\\
&\qquad +\lVert f^v \rVert_\beta \lVert L(S_N v, S_N v) - L(v,v)\rVert_{2\alpha - \varepsilon}.
\end{align*}
Since this requires a rather lengthy calculation, we decided not to include the arguments here.
\end{rmk}
Since we approximate $f$ and $v$ by the piecewise smooth functions $S_N f$ and $S_N v$ when defining the integral $I(f,\mathrm{d} v)$, it is not surprising that we obtain a Stratonovich type integral:
\begin{prop}\label{p:ibp stratonovich}
Let $\alpha \in (1/3,1)$ and $v \in \mathcal{C}^\alpha(\mathbb{R}^d)$. Let $\varepsilon > 0$ be such that $(2+\varepsilon)\alpha > 1$ and let $F \in C^{2+\varepsilon}(\mathbb{R}^d,\mathbb{R})$. Then
\begin{align*}
F(v(t)) - F(v(0)) = I(\mathrm{D} F(v),\mathrm{d} v)(t) := \lim_{N\rightarrow \infty} I(S_N \mathrm{D} F(v), \mathrm{d} S_N v)(t)
\end{align*}
for all $t \in [0,1]$.
\end{prop}
\begin{proof}
The function $S_N v$ is Lipschitz continuous, so that integration by parts gives
\begin{align*}
F(S_N v(t)) - F(S_N v(0)) = I(\mathrm{D} F(S_N v), \mathrm{d} S_N v)(t).
\end{align*}
The left hand side converges to $F(v(t)) - F(v(0))$. It thus suffices to show that $I(S_N \mathrm{D} F(v)-\mathrm{D} F(S_N v), \mathrm{d} S_N v)$ converges to zero. By continuity of the Young integral, Theorem~\ref{t:young integral}, it suffices to show that $\lim_{N\rightarrow \infty} \lVert S_N \mathrm{D} F(v) - \mathrm{D} F(S_N v)\rVert_{\alpha(1+\varepsilon')} = 0$ for all $\varepsilon' < \varepsilon$. Recall that $S_N v$ is the linear interpolation of $v$ between the points $(t^1_{pm})$ for $p \le N$ and $0 \le m \le 2^p$, and therefore $\Delta_p \mathrm{D} F(S_Nv) = \Delta_p \mathrm{D} F(v) = \Delta_p S_N \mathrm{D} F(v)$ for all $p \le N$. For $p > N$ and $1 \le m \le 2^p$ we apply a first order Taylor expansion to both terms and use the $\varepsilon$--H\"older continuity of $\mathrm{D}^2 F$ to obtain
\begin{align*}
\left|[S_N \mathrm{D} F(v) - \mathrm{D} F(S_N v)]_{pm}\right| & \le C_F 2^{-p\alpha(1+\varepsilon)} \lVert S_N v \rVert_\alpha
\end{align*}
for a constant $C_F>0$. Therefore, we get for all $\varepsilon' \le \varepsilon$
\begin{align*}
\lVert S_N \mathrm{D} F(v) - \mathrm{D} F(S_Nv)\rVert_{\alpha(1+ \varepsilon')} \lesssim_F 2^{-N\alpha(\varepsilon-\varepsilon')} \lVert v \rVert_\alpha,
\end{align*}
which completes the proof.
\end{proof}
\begin{rmk}\label{r:symmetric structure induces cancellations stratonovich}
Note that here we did not need any assumption on the area $L(v,v)$. The reason is that cancellations arise from the symmetric structure of the derivative of $\mathrm{D} F$, the Hessian of $F$.
Proposition~\ref{p:ibp stratonovich} was previously obtained by Roynette~\cite{Roynette1993}, except that there $v$ is assumed to be one dimensional and in the Besov space $B^{1/2}_{1,\infty}$.
\end{rmk}
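As a simple special case, spelled out here for concreteness: for $F(x) = \frac{1}{2}|x|^2$ we have $\mathrm{D} F(v) = v$ and even $S_N \mathrm{D} F(v) = \mathrm{D} F(S_N v) = S_N v$, so that integration by parts for the Lipschitz path $S_N v$ immediately yields
\begin{align*}
\frac{1}{2}\bigl(|v(t)|^2 - |v(0)|^2\bigr) = \lim_{N\rightarrow \infty} \sum_{i=1}^d \int_0^t S_N v^i(s) \mathrm{d} S_N v^i(s),
\end{align*}
in fact for every continuous $v$ and without any assumption on the area: the Hessian of $F$ is the identity, so only the symmetric part of the approximating integrals contributes.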
\section{Pathwise It\^{o} integration}\label{s:pathwise ito}
In the previous section we saw that our pathwise integral $I(f,\mathrm{d} v)$ is of Stratonovich type, i.e. it satisfies the usual integration by parts rule. But in applications it may be interesting to have an It\^{o} integral. Here we show that a slight modification of $I(f,\mathrm{d} v)$ allows us to treat non-anticipating It\^{o}-type integrals.
A natural approximation of a non-anticipating integral is given for $k \in \mathbb{N}$ by
\begin{align*}
I^{\mathrm{It\hat{o}}}_k (f,\mathrm{d} v) (t) :=\, & \sum_{i=1}^{2^k} f(t^0_{ki}) (v(t^2_{ki}\wedge t) - v(t^0_{ki}\wedge t))\\
=\, & \sum_{i=1}^{2^k} \sum_{p,q} \sum_{m,n} f_{pm} v_{qn} \varphi_{pm}(t^0_{ki}) (\varphi_{qn}(t^2_{ki}\wedge t) - \varphi_{qn}(t^0_{ki}\wedge t)).
\end{align*}
Let us assume for the moment that $t=i2^{-k}$ for some $0 \le i \le 2^k$. In that case we obtain for $p \ge k$ or $q \ge k$ that $\varphi_{pm}(t^0_{ki})(\varphi_{qn}(t^2_{ki}\wedge t) - \varphi_{qn}(t^0_{ki}\wedge t)) = 0$. For $p,q<k$, both $\varphi_{pm}$ and $\varphi_{qn}$ are affine functions on $[t^0_{ki}\wedge t, t^2_{ki}\wedge t]$, and for affine $u$ and $w$ and $s<t$ we have
\begin{align*}
u(s)(w(t) - w(s)) = \int_s^t u(r) \mathrm{d} w(r) - \frac{1}{2} [u(t) - u(s)] [w(t) - w(s)].
\end{align*}
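This identity is just the trapezoid rule, spelled out for completeness: for affine $u$ and $w$ on $[s,t]$ we have $\mathrm{d} w(r) = \frac{w(t)-w(s)}{t-s} \mathrm{d} r$ and $\int_s^t u(r) \mathrm{d} r = (t-s)\frac{u(s)+u(t)}{2}$, so that
\begin{align*}
\int_s^t u(r) \mathrm{d} w(r) = \frac{u(s)+u(t)}{2}[w(t)-w(s)] = u(s)[w(t)-w(s)] + \frac{1}{2}[u(t)-u(s)][w(t)-w(s)].
\end{align*}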
Hence, we conclude that for $t=m2^{-k}$
\begin{align}\label{e:ito via piecewise linear}
I^{\mathrm{It\hat{o}}}_k (f,\mathrm{d} v)(t) = I(S_{k-1} f, \mathrm{d} S_{k-1} v)(t) - \frac{1}{2}[f,v]_k(t),
\end{align}
where $[f,v]_k$ is the $k$--th dyadic approximation of the quadratic covariation $[f,v]$, i.e.
\begin{align*}
[f,v]_k(t) := \sum_{m=1}^{2^k} [f(t^2_{km}\wedge t) - f(t^0_{km}\wedge t)][v(t^2_{km}\wedge t) - v(t^0_{km}\wedge t)].
\end{align*}
From now on we study the right hand side of~\eqref{e:ito via piecewise linear} rather than $I^{\mathrm{It\hat{o}}}_k(f,\mathrm{d} v)$, which is justified by the following remark.
\begin{rmk}\label{r:our ito vs nonanticipating riemann}
Let $\alpha \in (0,1)$. If $f\in C([0,1])$ and $v\in \mathcal{C}^\alpha$, then
\begin{align*}
\Bigl\lVert I^{\mathrm{It\hat{o}}}_k(f,\mathrm{d} v) - \Bigl(I(S_{k-1} f,\mathrm{d} S_{k-1}v) - \frac{1}{2} [S_{k-1} f, S_{k-1}v]_k \Bigr) \Bigr\rVert_\infty \lesssim 2^{-k\alpha} \lVert f \rVert_\infty \lVert v \rVert_\alpha.
\end{align*}
This holds because both functions agree in all dyadic points of the form $m2^{-k}$, and because between those points the integrals can pick up mass of at most $\lVert f \rVert_\infty 2^{-k\alpha} \lVert v \rVert_\alpha$.
\end{rmk}
We write $[v,v] := ([v^i, v^j])_{1 \le i, j \le d}$ and $L(v,v) := (L(v^i, v^j))_{1\le i,j\le d}$, and similarly for all expressions of the same type.
\begin{thm}\label{t:pathwise ito integral}
Let $\alpha \in (0,1/2)$ and let $\beta\le \alpha$ be such that $2\alpha + \beta > 1$. Let $v\in \mathcal{C}^\alpha(\mathbb{R}^d)$ and $f \in \mathcal{D}^\beta_v(\L(\mathbb{R}^d;\mathbb{R}^n))$. Assume that $(L(S_k v, S_k v))$ converges uniformly, with uniformly bounded $\mathcal{C}^{2\alpha}$ norm. Also assume that $([v,v]_k)$ converges uniformly. Then $(I^{\mathrm{It\hat{o}}}_k(f,\mathrm{d} v))$ converges uniformly to a limit $I^{\mathrm{It\hat{o}}}(f,\mathrm{d} v) = I(f,\mathrm{d} v) - 1/2[f,v]$ which satisfies
\begin{align*}
\lVert I^{\mathrm{It\hat{o}}}(f,\mathrm{d} v)\rVert_\infty \lesssim \lVert f\rVert_{v,\beta} (\lVert v \rVert_\alpha + \lVert v \rVert_\alpha^2 + \lVert L(v,v) \rVert_{2\alpha} + \lVert[v,v]\rVert_\infty),
\end{align*}
and where the quadratic covariation $[f,v]$ is given by
\begin{equation}\label{e:quadratic variation controlled explicit}
[f,v] = \int_0^\cdot f^{v}(s) \mathrm{d} [v,v](s) := \bigg( \sum_{j,\ell=1}^d\int_0^\cdot (f^{ij})^{v,\ell}(s) \mathrm{d} [v^j,v^\ell](s)\bigg)_{1\le i \le n},
\end{equation}
where $(f^{ij})^{v,\ell}$ is the $\ell$--th component of the $v$--derivative of $f^{ij}$.
For $\varepsilon \in (0,2\alpha+\beta-1)$ the speed of convergence can be estimated by
\begin{align*}
\big\lVert I^{\mathrm{It\hat{o}}}(f,\mathrm{d} v) - I^{\mathrm{It\hat{o}}}_k(f,\mathrm{d} v) \big\rVert_\infty & \lesssim_\varepsilon 2^{-k(2\alpha + \beta - 1 - \varepsilon)} \lVert f\rVert_{v,\beta} \bigl( \lVert v \rVert_\alpha + \lVert v \rVert_\alpha^2 \bigr)\\
&\quad +\lVert f^v \rVert_\beta \lVert L(S_{k-1} v, S_{k-1} v) - L(v,v)\rVert_{2\alpha} \\
&\quad + \lVert f^v \rVert_\infty \lVert [v,v]_k - [v,v]\rVert_{\infty}.
\end{align*}
\end{thm}
\begin{proof}
By Remark~\ref{r:our ito vs nonanticipating riemann}, it suffices to show our claims for $I(S_{k-1} f, \mathrm{d} S_{k-1} v) -1/2[f,v]_k$. The statements for the integral $I(S_{k-1} f, \mathrm{d} S_{k-1} v)$ follow from Theorem~\ref{t:rough path integral} and Corollary~\ref{c:rough path speed of convergence}. So let us concentrate on the quadratic covariation $[f,v]_k$. Recall from Example~\ref{ex:controlled old vs new} that $f \in \mathcal{D}^\beta_v$ if and only if $R^f_{s,t} = f_{s,t} - f^v(s) v_{s,t}$ satisfies $|R^f_{s,t}| \lesssim |t-s|^{\alpha+\beta}$. Hence
\begin{align*}
[f,v]^i_k (t) & = \sum_m \big(f_{t^0_{km} \wedge t, t^2_{km} \wedge t} v_{t^0_{km} \wedge t, t^2_{km} \wedge t}\big)^i\\
& = \sum_m \big(R^f_{t^0_{km} \wedge t, t^2_{km} \wedge t} v_{t^0_{km} \wedge t, t^2_{km} \wedge t}\big)^i + \sum_{j,\ell=1}^d \sum_m (f^{ij})^{v,\ell}(t^0_{km} \wedge t) v^\ell_{t^0_{km} \wedge t, t^2_{km} \wedge t} v^j_{t^0_{km} \wedge t, t^2_{km} \wedge t}.
\end{align*}
It is easy to see that the first term on the right hand side is bounded by
\[
\Big| \sum_m \big(R^f_{t^0_{km} \wedge t, t^2_{km} \wedge t} v_{t^0_{km} \wedge t, t^2_{km} \wedge t}\big)^i \Big| \lesssim 2^{-k(2\alpha+\beta-1)} \lVert f \rVert_{v,\beta}(\lVert v \rVert_\alpha + \lVert v \rVert_\alpha^2).
\]
For the second term, let us fix $\ell$ and $j$. Then the sum over $m$ is just the integral of $(f^{ij})^{v,\ell}$ with respect to the signed measure $\mu^k_t = \sum_{m} \delta_{t^0_{km}} v^j_{t^0_{km} \wedge t, t^2_{km} \wedge t} v^\ell_{t^0_{km} \wedge t, t^2_{km} \wedge t}$. Decomposing $\mu^k_t$ into a positive and negative part as
\begin{align*}
\mu^k_t & = \frac{1}{4} \Big[\sum_m \delta_{t^0_{km}} [(v^j+v^\ell)_{t^0_{km} \wedge t, t^2_{km} \wedge t}]^2 -\sum_m \delta_{t^0_{km}} [(v^j - v^\ell)_{t^0_{km} \wedge t, t^2_{km} \wedge t}]^2\Big]
\end{align*}
and similarly for $\mathrm{d} \mu_t = \mathrm{d} [v^j,v^\ell]_t$
we can estimate
\begin{align*}
&\Big| \int_0^1 (f^{ij})^{v,\ell}(s) \mu^k_t(\mathrm{d} s) - \int_0^1 (f^{ij})^{v,\ell}(s) \mu_t(\mathrm{d} s) \Big| \\
&\hspace{100pt} \lesssim \left\lVert f^v \right\rVert_\infty \left(\left\lVert [v^j+v^\ell]_k - [v^j + v^\ell]\right\rVert_\infty + \left\lVert [v^j-v^\ell]_k - [v^j - v^\ell]\right\rVert_\infty\right)\\
&\hspace{100pt} \lesssim \left\lVert f^v \right\rVert_\infty \lVert [v,v]_k - [v,v]\rVert_\infty,
\end{align*}
where we write $[u] := [u,u]$ and similarly for $[u]_k$. By assumption the right hand side converges to zero, from which we get the uniform convergence of $[f,v]_k$ to $[f,v]$.
\end{proof}
\begin{rmk}
We calculate the pathwise It\^{o} integral $I^{\mathrm{It\hat{o}}}(f,\mathrm{d} v)$ as limit of nonanticipating Riemann sums involving only $f$ and $v$.
The classical rough path integral, see Proposition~\ref{p:Gubinelli rough paths}, is obtained via ``compensated Riemann sums'' that depend explicitly on the derivative $f^v$ and the iterated integrals of $v$. For applications in mathematical finance, it is more convenient to have an integral that is the limit of nonanticipating Riemann sums, because this can be interpreted as capital process obtained from investing.
\end{rmk}
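As a concrete example (included for orientation, not used in the sequel): let $d=1$ and let $v$ be a sample path of Brownian motion along which the dyadic area and quadratic variation converge, which is the case for almost every path. Then $[v,v](t) = t$, and combining Theorem~\ref{t:pathwise ito integral} with Proposition~\ref{p:ibp stratonovich} applied to $F(x) = \frac{1}{2}x^2$ gives
\begin{align*}
I^{\mathrm{It\hat{o}}}(v, \mathrm{d} v)(t) = I(v,\mathrm{d} v)(t) - \frac{1}{2}[v,v](t) = \frac{1}{2}\bigl(v(t)^2 - v(0)^2\bigr) - \frac{t}{2},
\end{align*}
in agreement with the classical It\^{o} integral $\int_0^t v \,\mathrm{d} v$.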
Note that $[v,v]$ is always a continuous function of bounded variation, but a priori it is not clear whether it is in $\mathcal{C}^{2\alpha}$. Under this additional assumption we have the following stronger result.
\begin{cor}\label{c:pathwise ito with smooth quadratic variation}
In addition to the conditions of Theorem~\ref{t:pathwise ito integral}, assume that also $[v,v] \in \mathcal{C}^{2\alpha}$. Then $I^{\mathrm{It\hat{o}}}(f,\mathrm{d} v) \in \mathcal{D}^\alpha_v$ with derivative $f$, and
\begin{align*}
\lVert I^{\mathrm{It\hat{o}}}(f,\mathrm{d} v) \rVert_{v,\alpha} \lesssim \lVert f \rVert_{v,\beta} \bigl(1 + \lVert v \rVert_\alpha^2+ \lVert L(v,v) \rVert_{2\alpha} + \lVert [v,v]\rVert_{2\alpha} \bigr).
\end{align*}
\end{cor}
\begin{proof}
This is a combination of Theorem~\ref{t:rough path integral} and the explicit representation \eqref{e:quadratic variation controlled explicit} together with the continuity of the Young integral, Theorem~\ref{t:young integral}.
\end{proof}
The term $I(S_{k-1}f,\mathrm{d} S_{k-1}v)$ has the pleasant property that if we want to refine our calculation by passing from $k$ to $k+1$, then we only have to add the additional term $I(S_{k-1}f, \mathrm{d} \Delta_k v) + I(\Delta_k f, \mathrm{d} S_k v)$. For the quadratic variation $[f,v]_k$ this is not exactly true. But $[f,v]_k(m2^{-k}) = [S_{k-1}f,S_{k-1}v]_k(m2^{-k})$ for $m=0,\dots, 2^k$, and there is a recursive way of calculating $[S_{k-1}f, S_{k-1}v]_k$:
\begin{lem}
Let $f,v \in C([0,1],\mathbb{R})$. Then
\begin{align}\label{e:recursive quadratic variation}
[S_k f,S_k v]_{k+1}(t) & = \frac{1}{2} [S_{k-1} f, S_{k-1}v]_k(t) + [S_{k-1} f, \Delta_k v]_{k+1}(t) + [\Delta_k f, S_k v]_{k+1}(t) + R_k(t)
\end{align}
for all $k\ge 1$ and all $t \in [0,1]$, where
\begin{align*}
R_k(t) := -\frac{1}{2} f_{\llcorner t^k \lrcorner,t} v_{\llcorner t^k \lrcorner,t} + f_{\llcorner t^k \lrcorner,\ulcorner t^{k+1}\urcorner \wedge t} v_{\llcorner t^k \lrcorner,\ulcorner t^{k+1}\urcorner \wedge t} + f_{\ulcorner t^{k+1}\urcorner \wedge t, t} v_{\ulcorner t^{k+1}\urcorner \wedge t,t}
\end{align*}
and $\llcorner t^k \lrcorner := \lfloor t 2^k \rfloor 2^{-k}$ and $\ulcorner t^{k}\urcorner := \llcorner t^k \lrcorner + 2^{-(k+1)}$. In particular, we obtain for $t=1$ that
\begin{align}\label{e:cesaro formula quadratic variation}
[f,v]_{k+1}(1) = \frac{1}{2}[f,v]_k(1) + \frac{1}{2} \sum_m f_{km} v_{km} = \frac{1}{2^{k+1}}\sum_{p\le k} \sum_m 2^{p} f_{pm} v_{pm}.
\end{align}
If moreover $\alpha \in (0,1)$ and $f,v \in \mathcal{C}^\alpha$, then $\lVert [S_{k-1} f, S_{k-1} v]_k - [f,v]_k \rVert_\infty \lesssim 2^{-2k\alpha} \lVert f \rVert_\alpha \lVert v \rVert_\alpha$.
\end{lem}
\begin{proof}
Equation~\eqref{e:recursive quadratic variation} follows from a direct calculation using the fact that $S_{k-1} f$ and $S_{k-1} v$ are affine on every interval $[t^0_{k\ell},t^1_{k\ell}]$ respectively $[t^1_{k\ell},t^2_{k\ell}]$ for $1 \le \ell \le 2^k$.
The formula for $[f,v]_{k+1}(1)$ follows from the fact that $[\Delta_p f, \Delta_q v]_{k+1}(1) = 0$ unless $p=q$, and that $[\Delta_k f, \Delta_k v]_{k+1}(1) = \frac{1}{2} \sum_m f_{km} v_{km}$.
The estimate for $\lVert [S_{k-1} f, S_{k-1} v]_k - [f,v]_k \rVert_\infty$ holds because the two functions agree in all dyadic points $m 2^{-k}$.
\end{proof}
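For illustration, the closed form in \eqref{e:cesaro formula quadratic variation} can be checked by unrolling the recursion: assuming $[f,v]_k(1) = 2^{-k} \sum_{p\le k-1} \sum_m 2^{p} f_{pm} v_{pm}$ for some $k$, the first equality in \eqref{e:cesaro formula quadratic variation} gives
\begin{align*}
[f,v]_{k+1}(1) = \frac{1}{2} \cdot \frac{1}{2^{k}}\sum_{p\le k-1} \sum_m 2^{p} f_{pm} v_{pm} + \frac{2^k}{2^{k+1}} \sum_m f_{km} v_{km} = \frac{1}{2^{k+1}}\sum_{p\le k} \sum_m 2^{p} f_{pm} v_{pm},
\end{align*}
so the formula propagates from one generation to the next.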
\begin{rmk}
The Ces\`aro mean formula \eqref{e:cesaro formula quadratic variation} makes the study of existence of the quadratic variation accessible to ergodic theory. This was previously observed by Gantert~\cite{Gantert1994}. See also Gantert's thesis~\cite{Gantert1991}, Beispiel 3.29, where it is shown that ergodicity alone (of the distribution of $v$ with respect to suitable transformations on path space) is not sufficient to obtain convergence of $([v,v]_k(1))$ as $k$ tends to $\infty$.
\end{rmk}
It would be more natural to assume that for the controlling path $v$ the non-anticipating Riemann sums converge, rather than assuming that $(L(S_{k}v, S_k v))_k$ and $([v,v]_k)$ converge. This is indeed sufficient, as long as a uniform H\"older estimate is satisfied by the Riemann sums.
We start by showing that the existence of the It\^{o} iterated integrals implies the existence of the quadratic variation.
\begin{lem}\label{l:ito implies quadratic variation}
Let $\alpha \in (0,1/2)$ and let $v \in \mathcal{C}^\alpha(\mathbb{R}^d)$. Assume that the non-anticipating Riemann sums $(I^{\mathrm{It\hat{o}}}_k(v,\mathrm{d} v))_k$ converge uniformly to $I^{\mathrm{It\hat{o}}}(v,\mathrm{d} v)$. Then also $([v,v]_k)_k$ converges uniformly to a limit $[v,v]$.
If moreover
\begin{align}\label{e:discrete hoelder} \nonumber
&\sup_k \sup_{0 \le m < m' \le 2^k} \frac{|I^{\mathrm{It\hat{o}}}_k(v,\mathrm{d} v)(m' 2^{-k}) - I^{\mathrm{It\hat{o}}}_k(v,\mathrm{d} v)(m 2^{-k}) - v(m2^{-k}) (v(m'2^{-k}) - v(m2^{-k}))|}{|(m'-m)2^{-k}|^{2\alpha}}\\
&\hspace{20pt} = C < \infty,
\end{align}
then $[v,v] \in \mathcal{C}^{2\alpha}$ and $\lVert [v,v] \rVert_{2\alpha} \lesssim C + \lVert v \rVert_\alpha^2$.
\end{lem}
\begin{proof}
Let $t \in [0,1]$ and $1 \le i,j \le d$. Then
\begin{align*}
& v^i(t) v^j(t) - v^i(0)v^j(0) = \sum_{m = 1}^{2^k} \left[v^i(t^2_{km}\wedge t) v^j(t^2_{km}\wedge t) - v^i(t^0_{km}\wedge t) v^j(t^0_{km}\wedge t)\right] \\
&\hspace{50pt} = \sum_{m = 1}^{2^k} \left[v^i(t^0_{km}) v^j_{t^0_{km}\wedge t, t^2_{km}\wedge t} + v^j(t^0_{km}) v^i_{t^0_{km}\wedge t, t^2_{km}\wedge t} + v^i_{t^0_{km}\wedge t, t^2_{km}\wedge t} v^j_{t^0_{km}\wedge t, t^2_{km}\wedge t}\right] \\
&\hspace{50pt} = I^{\mathrm{It\hat{o}}}_k(v^i,\mathrm{d} v^j)(t) + I^{\mathrm{It\hat{o}}}_k(v^j,\mathrm{d} v^i)(t) + [v^i,v^j]_k(t),
\end{align*}
which implies the convergence of $([v,v]_k)_k$ as $k$ tends to $\infty$. For $0\le s<t\le 1$ this gives
\begin{align*}
([v^i,v^j]_k)_{s,t} & = \bigl(v^i v^j\bigr)_{s,t} - I^{\mathrm{It\hat{o}}}_k(v^i,\mathrm{d} v^j)_{s,t} - I^{\mathrm{It\hat{o}}}_k(v^j,\mathrm{d} v^i)_{s,t} \\
& = \left[v^i(s) v^j_{s,t} - I^{\mathrm{It\hat{o}}}_k(v^i,\mathrm{d} v^j)_{s,t}\right] + \left[v^j(s) v^i_{s,t} - I^{\mathrm{It\hat{o}}}_k(v^j,\mathrm{d} v^i)_{s,t}\right] + v^i_{s,t} v^j_{s,t}.
\end{align*}
At this point it is easy to estimate $\lVert [v,v]\rVert_{2\alpha}$, where we work with the classical H\"older norm and not the $\mathcal{C}^{2\alpha}$ norm. Indeed let $0 \le s < t \le 1$. Using the continuity of $[v,v]$, we can find $k$ and $s\le s_k = m_s 2^{-k}< m_t 2^{-k} = t_k \le t$ with $|[v,v]|_{s,s_k} + |[v,v]|_{t,t_k}\le \lVert v\rVert_\alpha^2 |t-s|^{2\alpha}$. Moreover,
\[
|[v,v]|_{s_k,t_k} \le \Big(\sup_{\ell \ge k} \sup_{0 \le m < m' \le 2^\ell} \frac{|([v,v]_\ell)_{m2^{-\ell},m'2^{-\ell}}|}{|(m'-m)2^{-\ell}|^{2\alpha}} \Big)|t_k - s_k|^{2\alpha} \le (2C + \lVert v \rVert_{\alpha}^2) |t-s|^{2\alpha}.
\]
\end{proof}
\begin{rmk}
The ``coarse-grained H\"older condition''~\eqref{e:discrete hoelder} is from~\cite{Perkowski2013Pathwise} and has recently been discovered independently by~\cite{Kelly2014}.
\end{rmk}
Similarly, the convergence of $(I^{\mathrm{It\hat{o}}}_k(v,\mathrm{d} v))$ implies the convergence of $(L(S_k v, S_k v))_k$:
\begin{lem}\label{l:ito implies stratonovich}
In the setting of Lemma~\ref{l:ito implies quadratic variation}, assume that~\eqref{e:discrete hoelder} holds.
Then $L(S_k v, S_k v)$ converges uniformly as $k$ tends to $\infty$, and
\begin{align*}
\sup_k \lVert L(S_k v, S_k v)\rVert_{2\alpha} \lesssim C + \lVert v \rVert_\alpha^2.
\end{align*}
\end{lem}
\begin{proof}
Let $k \in \mathbb{N}$ and $0 \le m \le 2^k$, and write $t = m 2^{-k}$. Then we obtain from \eqref{e:ito via piecewise linear} that
\begin{align}\label{e:ito implies stratonovich pr1}
&L(S_{k-1} v, S_{k-1} v)(t) \\ \nonumber
&\hspace{40pt} = I^{\mathrm{It\hat{o}}}_k(v,\mathrm{d} v)(t) + \frac{1}{2} [v,v]_k(t) - \pi_<(S_{k-1}v, S_{k-1}v)(t) - S(S_{k-1}v, S_{k-1}v)(t).
\end{align}
Let now $s,t \in [0,1]$. We first assume that there exists $m$ such that $t^0_{km} \le s < t \le t^2_{km}$. Then we use $\lVert \partial_t \Delta_q v \rVert_\infty \lesssim 2^{q(1-\alpha)} \lVert v \rVert_\alpha$ to obtain
\begin{align}\label{e:ito implies stratonovich pr2}
&|L(S_{k-1}v, S_{k-1}v)_{s,t}| \le \sum_{p<k}\sum_{q<p} \left| \int_s^t \Delta_p v(r) \mathrm{d} \Delta_q v(r) - \int_s^t \mathrm{d} \Delta_q v(r) \Delta_p v(r)\right| \\ \nonumber
&\hspace{40pt} \lesssim \sum_{p<k} \sum_{q<p} |t-s| 2^{-p\alpha} 2^{q(1-\alpha)} \lVert v \rVert_\alpha^2 \lesssim |t-s| 2^{-k(2\alpha-1)} \lVert v \rVert_\alpha^2 \le |t-s|^{2\alpha} \lVert v \rVert_\alpha^2.
\end{align}
Combining \eqref{e:ito implies stratonovich pr1} and \eqref{e:ito implies stratonovich pr2}, we obtain the uniform convergence of $(L(S_{k-1} v,S_{k-1} v))$ from Lemma~\ref{l:ito implies quadratic variation} and from the continuity of $\pi_<$ and $S$.
For $s$ and $t$ that do not lie in the same dyadic interval of generation $k$, let $\ulcorner s^k\urcorner = m_s 2^{-k}$ and $\llcorner t^k\lrcorner = m_t 2^{-k}$ be such that $\ulcorner s^k\urcorner - 2^{-k} < s \le \ulcorner s^k\urcorner$ and $\llcorner t^k\lrcorner \le t < \llcorner t^k\lrcorner + 2^{-k}$. In particular, $\ulcorner s^k\urcorner\le \llcorner t^k\lrcorner$. Moreover
\begin{align*}
|L(S_{k-1}v, S_{k-1}v)_{s,t}| &\le |L(S_{k-1}v, S_{k-1}v)_{s,\ulcorner s^k\urcorner}| + |L(S_{k-1}v, S_{k-1}v)_{\ulcorner s^k\urcorner,\llcorner t^k\lrcorner }| \\
&\hspace{20pt} + |L(S_{k-1}v, S_{k-1}v)_{\llcorner t^k \lrcorner,t}|.
\end{align*}
Using~\eqref{e:ito implies stratonovich pr2}, the first and third term on the right hand side can be estimated by $(|\ulcorner s^k\urcorner -s|^{2\alpha} + |t-\llcorner t^k \lrcorner|^{2\alpha})\lVert v \rVert_\alpha^2 \lesssim |t-s|^{2\alpha} \lVert v \rVert_\alpha^2$. For the middle term we apply \eqref{e:ito implies stratonovich pr1} to obtain
\begin{align*}
|L(S_{k-1}v, S_{k-1}v)_{\ulcorner s^k\urcorner,\llcorner t^k\lrcorner }| & \le \left|I^{\mathrm{It\hat{o}}}_k(v,\mathrm{d} v)_{\ulcorner s^k\urcorner,\llcorner t^k\lrcorner } - v(\ulcorner s^k\urcorner)(v(\llcorner t^k\lrcorner) - v(\ulcorner s^k\urcorner))\right| \\
&\hspace{20pt} + \left|v(\ulcorner s^k\urcorner)v_{\ulcorner s^k\urcorner,\llcorner t^k\lrcorner } - \pi_<(S_{k-1}v,S_{k-1}v)_{\ulcorner s^k\urcorner, \llcorner t^k\lrcorner}\right| \\
&\hspace{20pt} + \frac{1}{2} \left|([v,v]_k)_{\ulcorner s^k\urcorner,\llcorner t^k\lrcorner }\right| + \left| S(S_{k-1}v, S_{k-1}v)_{\ulcorner s^k\urcorner,\llcorner t^k\lrcorner }\right| \\
& \lesssim |\llcorner t^k\lrcorner - \ulcorner s^k\urcorner|^{2\alpha}\left( C + \lVert v \rVert_\alpha^2\right) \le |t-s|^{2\alpha} \left(C + \lVert v \rVert_\alpha^2\right),
\end{align*}
where Example~\ref{ex:controlled old vs new}, Lemma~\ref{l:ito implies quadratic variation}, and Lemma~\ref{l:symmetric part} have been used.
\end{proof}
It follows from the work of F\"ollmer that our pathwise It\^{o} integral satisfies It\^{o}'s formula:
\begin{cor}
Let $\alpha \in (1/3, 1/2)$ and $v\in \mathcal{C}^\alpha(\mathbb{R}^d)$. Assume that the non-anticipating Riemann sums $(I^{\mathrm{It\hat{o}}}_k(v,\mathrm{d} v))_k$ converge uniformly to $I^{\mathrm{It\hat{o}}}(v,\mathrm{d} v)$
and let $F \in C^2(\mathbb{R}^d,\mathbb{R})$. Then $(I^{\mathrm{It\hat{o}}}_k(\mathrm{D} F(v), \mathrm{d} v))_k$ converges to a limit $I^{\mathrm{It\hat{o}}}(\mathrm{D} F(v), \mathrm{d} v)$ that satisfies for all $t \in [0,1]$
\begin{align*}
F(v(t)) - F(v(0)) = I^{\mathrm{It\hat{o}}}(\mathrm{D} F(v), \mathrm{d} v)(t) + \frac{1}{2}\int_0^t \sum_{k,\ell=1}^d \partial_{x_k} \partial_{x_\ell} F(v(s)) \mathrm{d} [v^k, v^\ell](s).
\end{align*}
\end{cor}
\begin{proof}
This is Remarque 1 of F\"ollmer~\cite{Follmer1979} in combination with Lemma~\ref{l:ito implies quadratic variation}.
\end{proof}
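For $F(x) = x^2$ in $d=1$ the corollary can be checked by hand at every dyadic level, since $b^2 - a^2 = 2a(b-a) + (b-a)^2$ makes the decomposition $F(v(1)) - F(v(0)) = I^{\mathrm{It\hat{o}}}_k(\mathrm{D}F(v),\mathrm{d}v)(1) + \frac12 F''\,[v,v]_k(1)$ an exact algebraic identity for any sampled path. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2 ** 12
# Brownian-type path on a uniform grid of [0,1], w(0) = 0
w = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, n ** -0.5, n))])

dw = np.diff(w)
ito_sum = np.sum(2.0 * w[:-1] * dw)  # I_k(DF(w), dw) with F(x) = x^2, DF(x) = 2x
qv = np.sum(dw ** 2)                 # [w, w]_k(1)
lhs = w[-1] ** 2 - w[0] ** 2
# F(w(1)) - F(w(0)) = I_k(DF(w), dw)(1) + (1/2) F'' [w, w]_k(1), with F'' = 2
assert abs(lhs - (ito_sum + 0.5 * 2.0 * qv)) < 1e-10
```

The probabilistic content of the corollary is only that both terms on the right hand side converge separately as the grid is refined; the identity itself is deterministic.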
\section{Construction of the L\'evy area}\label{s:construction of levy area}
To apply our theory, it remains to construct the L\'evy area and the pathwise It\^{o} integrals for suitable stochastic processes. In Section~\ref{ss:hypercontractive area} we construct the L\'evy area for hypercontractive stochastic processes whose covariance function satisfies a certain ``finite variation'' property. In Section~\ref{ss:pathwise ito area for martingales} we construct the pathwise It\^{o} iterated integrals for some continuous martingales.
\subsection{Hypercontractive processes}\label{ss:hypercontractive area}
Let $X\colon [0,1] \to \mathbb{R}^d$ be a centered continuous stochastic process, such that $X^i$ is independent of $X^j$ for $i \neq j$. We write $R$ for its covariance function, $R\colon [0,1]^2 \to \mathbb{R}^{d\times d}$ and $R(s,t) := (E(X^i_s X^j_t))_{1 \le i,j\le d}$. The increment of $R$ over a rectangle $[s,t] \times [u,v] \subseteq [0,1]^2$ is defined as
\begin{align*}
R_{[s,t] \times [u,v]} := R(t,v) + R(s,u) - R(s,v) - R(t,u) := (E(X^i_{s,t} X^j_{u,v}))_{1 \le i, j \le d}.
\end{align*}
Let us make the following two assumptions.
\begin{itemize}
\item[($\rho$--var)] There exists $C > 0$ such that for all $0 \le s < t \le 1$ and for every partition $s = t_0 < t_1 < \dots < t_n = t$ of $[s,t]$ we have
\begin{align*}
\sum_{i,j=1}^n | R_{[t_{i-1},t_i] \times [t_{j-1},t_j]}|^\rho \le C |t-s|.
\end{align*}
\item[(HC)] The process $X$ is hypercontractive, i.e. for every $m,n \in \mathbb{N}$ and every $r \ge 1$ there exists $C_{r,m,n} > 0$ such that for every polynomial $P: \mathbb{R}^n \rightarrow \mathbb{R}$ of degree $m$, for all $i_1, \dots, i_n \in \{1, \dots, d\}$, and for all $t_1, \dots, t_n \in [0,1]$
\begin{align*}
E(|P(X^{i_1}_{t_1}, \dots, X^{i_n}_{t_n})|^{2r}) \le C_{r,m,n} E(|P(X^{i_1}_{t_1}, \dots, X^{i_n}_{t_n})|^{2})^r.
\end{align*}
\end{itemize}
These conditions are taken from~\cite{Friz2010c}, where under even more general assumptions it is shown that it is possible to construct the iterated integrals $I(X, \mathrm{d} X)$, and that $I(X,\mathrm{d} X)$ is the limit of $(I(X^n, \mathrm{d} X^n))_{n \in \mathbb{N}}$ under a wide range of smooth approximations $(X^n)_n$ that converge to $X$.
\begin{ex}
Condition (HC) is satisfied by all Gaussian processes. More generally, it is satisfied by every process ``living in a fixed Gaussian chaos''; see~\cite{Janson1997}, Theorem~3.50. Slightly oversimplifying things, this is the case if $X$ is given by polynomials of fixed degree and iterated integrals of fixed order with respect to a Gaussian reference process.
Prototypical examples of processes living in a fixed chaos are Hermite processes. They are defined for $H \in (1/2,1)$ and $k\in \mathbb{N}$, $k \ge 1$ as
\begin{align*}
Z^{k,H}_t = C(H,k) \int_{\mathbb{R}^k} \left(\int_0^t \prod_{i=1}^k (s - y_i)^{-\left(\frac{1}{2} + \frac{1-H}{k}\right)}_+\mathrm{d} s\right) \mathrm{d} B_{y_1} \dots \mathrm{d} B_{y_k},
\end{align*}
where $(B_y)_{y \in \mathbb{R}}$ is a standard Brownian motion, and $C(H,k)$ is a normalization constant. In particular, $Z^{k,H}$ lives in the Wiener chaos of order $k$. The covariance of $Z^{k,H}$ is
\begin{align*}
E(Z^{k,H}_s Z^{k,H}_t) = \frac{1}{2} \left( t^{2H} + s^{2H} - |t-s|^{2H}\right).
\end{align*}
Since $Z^{1,H}$ is Gaussian, it is just the fractional Brownian motion with Hurst parameter $H$. For $k=2$ we obtain the Rosenblatt process. For further details about Hermite processes see~\cite{Peccati2011}. However, we should point out that it follows from Kolmogorov's continuity criterion that $Z^{k,H}$ is $\alpha$--H\"older continuous for every $\alpha < H$. Since $H \in (1/2,1)$, Hermite processes are amenable to Young integration, and it is trivial to construct $L(Z^{k,H}, Z^{k,H})$.
\end{ex}
\begin{ex}
Condition ($\rho$--var) is satisfied by Brownian motion with $\rho = 1$. More generally it is satisfied by the fractional Brownian motion with Hurst index $H$, for which $\rho = 1/(2H)$. It is also satisfied by the fractional Brownian bridge with Hurst index $H$. A general criterion that implies condition ($\rho$--var) is the one of Coutin and Qian~\cite{Coutin2002}: If $E(|X^i_{s,t}|^2) \lesssim |t-s|^{2H}$ and $|E(X^i_{s,s+h} X^i_{t,t+h})| \lesssim |t-s|^{2H-2} h^2$ for $i = 1, \dots, d$, then ($\rho$--var) is satisfied for $\rho = 1/(2H)$. For details and further examples see~\cite{Friz2010}, Section 15.2.
\end{ex}
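Condition ($\rho$--var) is easy to probe numerically from the covariance alone. In the sketch below (an illustration, not a proof) we evaluate the rectangular increments of the fractional Brownian motion covariance $R(s,t) = \frac12(s^{2H} + t^{2H} - |t-s|^{2H})$ on uniform partitions of $[0,1]$ with $\rho = 1/(2H)$: the diagonal terms contribute exactly $n \cdot n^{-2H\rho} = 1$, and the off-diagonal rectangles add a bounded amount as the partition is refined. The bound $3$ used in the check is ad hoc, not a sharp constant.

```python
import numpy as np

def fbm_cov(s, t, H):
    # covariance of fractional Brownian motion with Hurst index H
    return 0.5 * (s ** (2 * H) + t ** (2 * H) - np.abs(t - s) ** (2 * H))

def rho_var_sum(H, n):
    # sum_{i,j} |R_{[t_{i-1},t_i] x [t_{j-1},t_j]}|^rho over the uniform
    # n-partition of [0,1], with rho = 1/(2H)
    rho = 1.0 / (2 * H)
    t = np.linspace(0.0, 1.0, n + 1)
    S, T = np.meshgrid(t, t, indexing="ij")
    R = fbm_cov(S, T, H)
    rect = R[1:, 1:] + R[:-1, :-1] - R[1:, :-1] - R[:-1, 1:]
    return np.sum(np.abs(rect) ** rho)

# the sums stay bounded as the partition is refined (condition (rho-var) on [0,1])
vals = [rho_var_sum(0.4, n) for n in (16, 64, 256)]
assert all(val < 3.0 for val in vals)
```

For $H = 1/2$ (Brownian motion, $\rho = 1$) the off-diagonal rectangles vanish and the sum equals $1$ exactly, which provides a convenient sanity check.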
\begin{lem}\label{l:rho-var to dyadic generation}
Assume that the stochastic process $X:[0,1]\rightarrow \mathbb{R}$ satisfies ($\rho$--var). Then we have for all $p \ge -1$ and for all $M,N \in\mathbb{N}$ with $M \le N \le 2^{p}$ that
\begin{align}\label{e:rho-var to dyadic generation}
\sum_{m_1,m_2=M}^N |E(X_{pm_1} X_{pm_2})|^\rho \lesssim (N-M+1)2^{-p}.
\end{align}
\end{lem}
\begin{proof}
The case $p\le 0$ is easy, so let $p \ge 1$. It suffices to note that
\begin{align*}
E(X_{pm_1} X_{pm_2}) & = E\left((X_{t^0_{pm_1},t^1_{pm_1}} - X_{t^1_{pm_1},t^2_{pm_1}})(X_{t^0_{pm_2},t^1_{pm_2}} - X_{t^1_{pm_2},t^2_{pm_2}})\right) \\
& = \sum_{i_1, i_2 = 0,1} (-1)^{i_1 + i_2} R_{[t^{i_1}_{pm_1},t^{i_1+1}_{pm_1}]\times [t^{i_2}_{pm_2},t^{i_2+1}_{pm_2}]},
\end{align*}
and that $\{t^i_{pm}: i=0,1,2, m = M, \dots, N\}$ partitions the interval $[(M-1) 2^{-p}, N 2^{-p}]$.
\end{proof}
\begin{lem}\label{l:generation moment estimate}
Let $X,Y: [0,1] \rightarrow \mathbb{R}$ be independent, centered, continuous processes, both satisfying ($\rho$--var) for some $\rho \in [1,2]$. Then for all $i, p \ge -1$, $q<p$, and $0 \le j \le 2^i$
\begin{align*}
E\Big[\Big|\sum_{m\le 2^p} \sum_{n\le 2^q} X_{pm} Y_{qn} \langle 2^{-i}\chi_{ij}, \varphi_{pm} \chi_{qn}\rangle\Big|^2\Big] \lesssim 2^{(p \vee i)(1/\rho - 4)} 2^{(q \vee i)(1-1/\rho)} 2^{-i} 2^{p(4-3/\rho)} 2^{q/\rho}.
\end{align*}
\end{lem}
\begin{proof}
Since $p > q$, for every $m$ there exists exactly one $n(m)$, such that $\varphi_{pm}\chi_{qn(m)}$ is not identically zero. Hence, we can apply the independence of $X$ and $Y$ to obtain
\begin{align*}
&E\Bigl[\Bigl|\sum_{m\le 2^p} \sum_{n\le 2^q} X_{pm} Y_{qn} \langle 2^{-i}\chi_{ij}, \varphi_{pm} \chi_{qn}\rangle\Bigr|^2\Bigr] \\
&\hspace{20pt} \le \sum_{m_1,m_2=0}^{2^p} \bigl|E(X_{pm_1}X_{pm_2})E(Y_{qn(m_1)}Y_{qn(m_2)}) \langle 2^{-i}\chi_{ij}, \varphi_{pm_1}\chi_{qn(m_1)}\rangle \langle 2^{-i}\chi_{ij}, \varphi_{pm_2}\chi_{qn(m_2)}\rangle\bigr|.
\end{align*}
Let us write $M_j := \{m: 0 \le m \le 2^p, \langle \chi_{ij}, \varphi_{pm}\chi_{qn(m)}\rangle\neq 0\}$. We also write $\rho'$ for the conjugate exponent of $\rho$, i.e. $1/\rho + 1/\rho' = 1$. H\"older's inequality and Lemma~\ref{l:schauder coefficients of iterated integrals} imply
\begin{align*}
&\sum_{m_1,m_2 \in M_j} \bigl|E(X_{pm_1}X_{pm_2})E(Y_{qn(m_1)}Y_{qn(m_2)}) \langle 2^{-i}\chi_{ij}, \varphi_{pm_1}\chi_{qn(m_1)}\rangle \langle 2^{-i}\chi_{ij}, \varphi_{pm_2}\chi_{qn(m_2)}\rangle\bigr| \\
&\hspace{20pt} \lesssim \Biggl(\sum_{m_1,m_2 \in M_j} \bigl|E(X_{pm_1}X_{pm_2})\bigr|^\rho\Biggr)^{1/\rho} \Biggl(\sum_{m_1,m_2 \in M_j}\bigl|E(Y_{qn(m_1)}Y_{qn(m_2)})\bigr|^{\rho'} \Biggr)^{1/\rho'} (2^{-2 (p \vee i) + p + q})^2.
\end{align*}
Now write $N_j$ for the set of $n$ for which $\chi_{ij} \chi_{qn}$ is not identically zero. For every $\bar{n} \in N_j$ there are $2^{p-q}$ numbers $m \in M_j$ with $n(m) = \bar{n}$. Hence
\begin{align*}
&\Bigl(\sum_{m_1,m_2 \in M_j}\bigl|E(Y_{qn(m_1)}Y_{qn(m_2)})\bigr|^{\rho'} \Bigr)^{1/\rho'} \\
&\hspace{60pt} \lesssim (2^{2(p-q)})^{1/\rho'} \bigg(\Big(\max_{n_1, n_2 \in N_j} \bigl|E(Y_{qn_1}Y_{qn_2})\bigr|\Big)^{\rho'-\rho} \sum_{n_1,n_2 \in N_j}\bigl|E(Y_{qn_1}Y_{qn_2})\bigr|^{\rho} \bigg)^{1/\rho'},
\end{align*}
where we used that $\rho \in [1,2]$ and therefore $\rho' - \rho \ge 0$ (for $\rho'=\infty$ we interpret the right hand side as $\max_{n_1, n_2 \in N_j} |E(Y_{qn_1}Y_{qn_2})|$). Lemma~\ref{l:rho-var to dyadic generation} implies that $\bigl(\max_{n_1, n_2 \in N_j}\bigl|E(Y_{qn_1}Y_{qn_2})\bigr|\bigr)^{(\rho'-\rho)/\rho'} \lesssim 2^{-q(1/\rho - 1/\rho')}$. Similarly we apply Lemma~\ref{l:rho-var to dyadic generation} to the sum over $n_1, n_2$, and we obtain
\begin{align*}
& (2^{2(p-q)})^{1/\rho'} \biggl(\Big(\max_{n_1, n_2 \in N_j} \bigl|E(Y_{qn_1}Y_{qn_2})\bigr|\Big)^{\rho'-\rho} \sum_{n_1,n_2 \in N_j}\bigl|E(Y_{qn_1}Y_{qn_2})\bigr|^{\rho} \biggr)^{1/\rho'}\\
&\hspace{60pt} \lesssim (2^{2(p-q)})^{1/\rho'} 2^{-q(1/\rho - 1/\rho')} (|N_j| 2^{-q})^{1/\rho'} = 2^{(q \vee i)/ \rho'} 2^{-i/\rho'} 2^{2p/\rho'} 2^{q(-2/\rho'-1/\rho)} \\
&\hspace{60pt} = 2^{(q \vee i)(1-1/\rho)} 2^{i(1/\rho-1)} 2^{2p(1-1/\rho)} 2^{q(1/\rho-2)},
\end{align*}
where we used that $|N_j| = 2^{(q \vee i) - i}$. Since $|M_j| = 2^{(p \vee i) - i}$, another application of Lemma~\ref{l:rho-var to dyadic generation} yields
\begin{align*}
\Bigl(\sum_{m_1,m_2 \in M_j} \bigl|E(X_{pm_1}X_{pm_2})\bigr|^\rho\Bigr)^{1/\rho} \lesssim 2^{(p \vee i) / \rho} 2^{-i / \rho} 2^{-p/\rho}.
\end{align*}
The result now follows by combining these estimates:
\begin{align*}
&E\Bigl[\Bigl|\sum_{m\le 2^p} \sum_{n\le 2^q} X_{pm} Y_{qn} \langle 2^{-i}\chi_{ij}, \varphi_{pm} \chi_{qn}\rangle\Bigr|^2\Bigr]\\
&\hspace{25pt} \lesssim \Bigl(\sum_{m_1,m_2 \in M_j} \bigl|E(X_{pm_1}X_{pm_2})\bigr|^\rho\Bigr)^{1/\rho} \Bigl(\sum_{m_1,m_2 \in M_j}\bigl|E(Y_{qn(m_1)}Y_{qn(m_2)})\bigr|^{\rho'} \Bigr)^{1/\rho'} (2^{-2 (p \vee i) + p + q})^2\\
&\hspace{25pt} \lesssim \big(2^{(p \vee i) / \rho} 2^{-i / \rho} 2^{-p/\rho}\big) \big(2^{(q \vee i)(1-1/\rho)} 2^{i(1/\rho-1)} 2^{2p(1-1/\rho)} 2^{q(1/\rho-2)}\big)\big(2^{-4 (p \vee i) + 2p + 2q} \big)\\
&\hspace{25pt} = 2^{(p \vee i)(1/\rho - 4)} 2^{(q \vee i)(1-1/\rho)} 2^{-i} 2^{p(4-3/\rho)} 2^{q/\rho}.
\end{align*}
\end{proof}
\begin{thm}\label{t:existence of levy area}
Let $X\colon [0,1] \to \mathbb{R}^d$ be a continuous, centered stochastic process with independent components, and assume that $X$ satisfies (HC) and ($\rho$--var) for some $\rho \in [1,2)$. Then for every $\alpha \in (0,1/\rho)$ almost surely
\begin{align*}
\sum_{N \ge 0} \left\lVert L(S_N X, S_N X) - L(S_{N-1} X, S_{N-1} X) \right\rVert_\alpha < \infty,
\end{align*}
and therefore $L(X,X) = \lim_{N \rightarrow \infty} L(S_N X,S_N X)$ is almost surely $\alpha$--H\"older continuous.
\end{thm}
\begin{proof}
First note that $L$ is antisymmetric; in particular the diagonal of the matrix $L(S_N X, S_N X)$ vanishes identically. For $k, \ell \in \{1, \dots, d\}$ with $k \neq \ell$ we have
\begin{align*}
& \lVert L(S_N X^k, S_N X^\ell) - L(S_{N-1} X^k, S_{N-1} X^\ell)\rVert_\alpha \\
&\hspace{20pt} = \Bigl\lVert\sum_{q<N} \sum_{m,n} (X^k_{Nm} X^\ell_{qn} - X^k_{qn}X^\ell_{Nm}) \int_0^\cdot \varphi_{N m}(s) \mathrm{d} \varphi_{qn}(s)\Bigr\rVert_\alpha \\
&\hspace{20pt} \le \sum_{q<N} \Bigl\lVert \sum_{m,n} X^k_{Nm} X^\ell_{qn} \int_0^\cdot \varphi_{N m}(s) \mathrm{d} \varphi_{qn}(s)\Bigr\rVert_\alpha + \sum_{q<N} \Bigl\lVert \sum_{m,n} X^\ell_{Nm} X^k_{qn} \int_0^\cdot \varphi_{N m}(s) \mathrm{d} \varphi_{qn}(s)\Bigr\rVert_\alpha.
\end{align*}
Let us argue for the first term on the right hand side, the arguments for the second one being identical. Let $r \ge 1$. Using the hypercontractivity condition (HC), we obtain
\begin{align*}
&\sum_{i,N} \sum_{j\le2^i} \sum_{q<N} P\Bigl( \Bigl|\sum_{m,n} X^\ell_{Nm} X^k_{qn} \langle 2^{-i} \chi_{ij}, \varphi_{N m} \chi_{qn}\rangle\Bigr| > 2^{-i\alpha} 2^{-N/(2r)} 2^{-q/(2r)} \Bigr) \\
&\hspace{100pt} \le \sum_{i,N} \sum_{j\le 2^i} \sum_{q<N} E\Bigl( \Bigl|\sum_{m,n} X^\ell_{Nm} X^k_{qn} \langle 2^{-i} \chi_{ij}, \varphi_{N m} \chi_{qn}\rangle\Bigr|^{2r}\Bigr) 2^{ i\alpha 2 r} 2^{N + q}\\
&\hspace{100pt} \lesssim \sum_{i,N} \sum_{j\le2^i} \sum_{q<N} E\Bigl( \Bigl|\sum_{m,n} X^\ell_{Nm} X^k_{qn} \langle 2^{-i} \chi_{ij}, \varphi_{N m} \chi_{qn}\rangle\Bigr|^{2}\Bigr)^r 2^{ i\alpha 2 r} 2^{N + q}.
\end{align*}
Now we can apply Lemma~\ref{l:generation moment estimate} to bound this expression by
\begin{align*}
&\sum_{i,N} \sum_{j\le2^i} \sum_{q<N} \bigl( 2^{(N \vee i)(1/\rho - 4)} 2^{(q \vee i)(1-1/\rho)} 2^{-i} 2^{N(4-3/\rho)} 2^{q/\rho}\bigr)^r 2^{ i\alpha 2 r} 2^{N + q}\\
&\hspace{60pt} \lesssim \sum_{i} 2^i \sum_{N\le i} \sum_{q<N} 2^{ir(2\alpha - 4)} 2^{Nr(4-3/\rho + 1/r)} 2^{qr(1/\rho+1/r)} \\
&\hspace{80pt} + \sum_{i} 2^i \sum_{N > i} \sum_{q\le i} 2^{ir(2\alpha - 1/\rho)} 2^{Nr(1/r - 2/\rho)} 2^{qr(1/\rho+1/r)} \\
&\hspace{80pt} + \sum_{i} 2^i \sum_{N > i} \sum_{i<q<N} 2^{ir(2\alpha - 1)} 2^{Nr(1/r - 2/\rho)} 2^{qr(1+1/r)} \\
&\hspace{60pt} \lesssim \sum_{i} 2^{ir(2\alpha + 3/r - 2/\rho)} + \sum_{i} \sum_{N > i} 2^{ir(2\alpha + 2/r)} 2^{Nr(1/r - 2/\rho)}\\
&\hspace{80pt} + \sum_{i} \sum_{N > i} 2^{ir(2\alpha +1/r - 1)} 2^{Nr(1 + 2/r - 2/\rho)}.
\end{align*}
For $r \ge 1$ we have $1/r - 2/\rho < 0$, because $\rho < 2$. Therefore, the sum over $N$ in the second term on the right hand side converges. If now we choose $r > 1$ large enough so that $1 + 3/r - 2/\rho < 0$ (and then also $2\alpha + 3/r - 2/\rho < 0$), then all three series on the right hand side are finite. Hence, Borel-Cantelli implies the existence of $C(\omega) > 0$, such that for almost all $\omega \in \Omega$ and for all $N, i, j$ and $q<N$
\begin{align*}
\Bigl|\sum_{m,n} X^\ell_{Nm}(\omega) X^k_{qn}(\omega) \langle 2^{-i} \chi_{ij}, \varphi_{N m} \chi_{qn}\rangle\Bigr| \le C(\omega) 2^{-i\alpha} 2^{-N/(2r)} 2^{-q/(2r)}.
\end{align*}
From here it is straightforward to see that for these $\omega$ we have
\begin{align*}
\sum_{N=0}^\infty \left\lVert L(S_N X(\omega), S_N X(\omega)) - L(S_{N-1} X(\omega), S_{N-1} X(\omega)) \right\rVert_\alpha < \infty.
\end{align*}
\end{proof}
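For a piecewise linear interpolation along the generation-$N$ dyadics, the antisymmetric part of the iterated integral reduces (up to the normalization conventions used here) to the shoelace-type sum $\frac12 \sum_m (x_m \Delta y_m - y_m \Delta x_m)$, since $\int x\,\mathrm{d}y$ over a segment equals $\frac12(x_m + x_{m+1})\Delta y_m$. The following sketch computes the dyadic levels for a two-dimensional Brownian-type path and validates the formula on the deterministic case of the unit circle, whose limiting area is $\pi$.

```python
import numpy as np

def levy_area(x, y, step):
    # antisymmetric iterated integral of the piecewise linear interpolation
    # through every `step`-th sample: (1/2) sum_m (x_m dy_m - y_m dx_m)
    xs, ys = x[::step], y[::step]
    return 0.5 * np.sum(xs[:-1] * np.diff(ys) - ys[:-1] * np.diff(xs))

# deterministic check: for the unit circle the area of the dyadic polygon
# tends to the enclosed area pi
K = 10
t = np.linspace(0.0, 1.0, 2 ** K + 1)
cx, cy = np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)
assert abs(levy_area(cx, cy, 1) - np.pi) < 1e-3

# two-dimensional Brownian-type path: successive dyadic levels L(S_N X, S_N X)(1)
rng = np.random.default_rng(2)
bx = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, 2 ** (-K / 2), 2 ** K))])
by = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, 2 ** (-K / 2), 2 ** K))])
levels = [levy_area(bx, by, 2 ** (K - N)) for N in range(K + 1)]
```

For the Brownian path the successive levels fluctuate but settle down as $N$ grows, in line with the almost sure convergence asserted by the theorem; no quantitative rate is claimed by this illustration.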
\subsection{Continuous martingales}\label{ss:pathwise ito area for martingales}
Here we assume that $(X_t)_{t \in [0,1]}$ is a $d$--dimensional continuous martingale. Of course in that case it is no problem to construct the It\^{o} integral $I^{\mathrm{It\hat{o}}}(X,\mathrm{d} X)$. But to apply the results of Section~\ref{s:pathwise ito}, we still need the pathwise convergence of $I^{\mathrm{It\hat{o}}}_k(X,\mathrm{d} X)$ to $I^{\mathrm{It\hat{o}}}(X,\mathrm{d} X)$ and the uniform H\"older continuity of $I^{\mathrm{It\hat{o}}}_k(X,\mathrm{d} X)$ along the dyadics.
Recall that for a $d$--dimensional semimartingale $X=(X^1, \dots, X^d)$, the quadratic variation is defined as $[ X] = ([ X^i, X^j])_{1 \le i,j \le d}$. We also write $X_s X_{s,t} := (X^i_s X^j_{s,t})_{1 \le i, j \le d}$ for $s,t \in [0,1]$.
\begin{thm}\label{t:continuous martingale iterated integrals}
Let $X=(X^1,\dots,X^d)$ be a $d$--dimensional continuous martingale.
Assume that there exist $p \ge 2$ and $\beta > 0$, such that $p \beta > 7/2$, and such that
\begin{align}\label{e:martingale area assumption}
E(|[ X ]_{s,t}|^p) \lesssim |t-s|^{2p\beta}
\end{align}
for all $s,t \in [0,1]$. Then $I^{\mathrm{It\hat{o}}}_k(X,\mathrm{d} X)$ almost surely converges uniformly to $I^{\mathrm{It\hat{o}}}(X,\mathrm{d} X)$. Furthermore, for all $\alpha \in (0, \beta - 1/p)$ we have $X \in \mathcal{C}^\alpha$ and almost surely
\begin{align}\label{e:uniform hoelder along dyadics for martingale}
\sup_k \sup_{0 \le \ell < \ell' \le 2^k} \frac{|I^{\mathrm{It\hat{o}}}_k(X,\mathrm{d} X)_{\ell 2^{-k}, \ell' 2^{-k}} - X_{\ell2^{-k}} X_{\ell2^{-k}, \ell' 2^{-k}}|}{|(\ell'-\ell)2^{-k}|^{2\alpha}} < \infty.
\end{align}
\end{thm}
\begin{proof}
The H\"older continuity of $X$ follows from Kolmogorov's continuity criterion. Indeed, applying the Burkholder-Davis-Gundy inequality and \eqref{e:martingale area assumption} we have
\begin{align*}
E(|X_{s,t}|^{2p}) \lesssim \sum_{i=1}^d E(|X^i_{s,t}|^{2p}) \lesssim \sum_{i=1}^d E(|[ X^i]_{s,t}|^p) \lesssim E(|[ X ]_{s,t}|^p) \lesssim |t-s|^{2p\beta},
\end{align*}
so that $X \in \mathcal{C}^{\alpha}$ for all $\alpha \in (0, \beta - 1/(2p))$ and in particular for all $\alpha \in (0, \beta - 1/p)$. Since we will need it below, let us also study the regularity of the It\^{o} integral $I^{\mathrm{It\hat{o}}}(X,\mathrm{d} X)$: A similar application of Burkholder-Davis-Gundy gives
\begin{align*}
E(|I^{\mathrm{It\hat{o}}}(X,\mathrm{d} X)_{s,t} - X_s X_{s,t}|^p)\lesssim E\Bigl( \Bigl| \int_s^t |X_r - X_s|^2 \mathrm{d}|[ X ]|_r \Bigr|^{\frac{p}{2}} \Bigr).
\end{align*}
We apply H\"older's inequality (here we need $p \ge 2$) to obtain
\begin{equation*}
E\Bigl( \Bigl| \int_s^t |X_r - X_s|^2 \mathrm{d}|[ X ]|_r \Bigr|^{\frac{p}{2}} \Bigr) \lesssim E\Bigl( |[ X ]|_{s,t}^{\frac{p}{2} - 1} \int_s^t |X_r - X_s|^p \mathrm{d}|[ X ]|_r \Bigr).
\end{equation*}
Now the inequalities by Cauchy-Schwarz and then by Burkholder-Davis-Gundy yield
\begin{align*}
E\Bigl( |[ X ]|_{s,t}^{\frac{p}{2} - 1} \int_s^t |X_r - X_s|^p \mathrm{d}|[ X ]|_r \Bigr) & \lesssim E\Bigl( |[ X ]|_{s,t}^{\frac{p}{2}} \sup_{r \in [s,t]} |X_r - X_s|^p \Bigr) \\
&\le \sqrt{E\Bigl(\sup_{r \in [s,t]}|X_r - X_s|^{2p}\Bigr)} \sqrt{E(|[ X ]|_{s,t}^p)}\\
&\lesssim E(|[ X ]_{s,t}|^p) \lesssim |t-s|^{2p\beta}.
\end{align*}
The Kolmogorov criterion for rough paths, Theorem 3.1 of~\cite{Friz2013}, now implies that
\begin{align}\label{e:continuous martingale pr1}
|I^{\mathrm{It\hat{o}}}(X,\mathrm{d} X)_{s,t} - X_s X_{s,t}| \lesssim |t-s|^{2 \alpha}
\end{align}
almost surely for all $\alpha \in (0, \beta - 1/p)$.
Let us get to the convergence of $I^{\mathrm{It\hat{o}}}_k(X,\mathrm{d} X)$.
As before, we have
\begin{align*}
& E(|I^{\mathrm{It\hat{o}}}(X,\mathrm{d} X)_{\ell2^{-k}, \ell'2^{-k}} - I^{\mathrm{It\hat{o}}}_k(X,\mathrm{d} X)_{\ell2^{-k}, \ell'2^{-k}}|^p) \\
&\hspace{60pt} = E\Bigl( \Bigl| \int_{\ell 2^{-k}}^{\ell' 2^{-k}} \sum_{m = \ell}^{\ell'-1} \mathbf{1}_{[m2^{-k}, (m+1)2^{-k})}(r) X_{m 2^{-k},r} \mathrm{d} X_r \Bigr|^p \Bigr) \\
&\hspace{60pt} \lesssim E\Bigl( |[ X]|_{\ell 2^{-k}, \ell' 2^{-k}}^{\frac{p}{2}-1} \int_{\ell 2^{-k}}^{\ell' 2^{-k}} \Bigl|\sum_{m = \ell}^{\ell'-1} \mathbf{1}_{[m2^{-k}, (m+1)2^{-k})}(r) |X_{m 2^{-k},r}|^2 \Bigr|^{\frac{p}{2}} \mathrm{d}|[ X ]|_r \Bigr).
\end{align*}
Since the terms in the sum all have disjoint support, we can pull the exponent $p/2$ into the sum, from where we conclude that
\begin{align*}
&E\Bigl( |[ X]|_{\ell 2^{-k}, \ell' 2^{-k}}^{\frac{p}{2}-1} \int_{\ell 2^{-k}}^{\ell' 2^{-k}} \sum_{m = \ell}^{\ell'-1} \mathbf{1}_{[m2^{-k}, (m+1)2^{-k})}(r) |X_{m 2^{-k},r}|^p \mathrm{d}|[ X ]|_r \Bigr)\\
&\hspace{70pt} \lesssim \sqrt{E\Bigl(\sup_{r \in [\ell 2^{-k}, \ell' 2^{-k}]} \Bigl| \sum_{m = \ell}^{\ell'-1} \mathbf{1}_{[m2^{-k}, (m+1)2^{-k})}(r) |X_{m 2^{-k},r}|^p \Bigr|^2 \Bigr)} \sqrt{E( |[ X]|_{\ell 2^{-k}, \ell' 2^{-k}}^p)}\\
&\hspace{70pt} \lesssim \sqrt{\sum_{m=\ell}^{\ell'-1} E( |[ X]_{m 2^{-k},(m+1)2^{-k}}|^p)} \sqrt{E( |[ X]_{\ell 2^{-k}, \ell' 2^{-k}}|^p)}\\
&\hspace{70pt} \lesssim \sqrt{(\ell'-\ell) (2^{-k})^{2p\beta}} \sqrt{|(\ell'-\ell) 2^{-k}|^{2p\beta}} = (\ell'-\ell)^{\frac{1}{2} + p\beta} 2^{-k 2 p \beta}.
\end{align*}
Hence, we obtain for $\alpha \in \mathbb{R}$ that
\begin{align*}
&P\left(|I^{\mathrm{It\hat{o}}}(X,\mathrm{d} X)_{\ell2^{-k}, \ell'2^{-k}} - I^{\mathrm{It\hat{o}}}_k(X,\mathrm{d} X)_{\ell2^{-k}, \ell'2^{-k}}| > |(\ell'-\ell)2^{-k}|^{2\alpha}\right) \\
&\hspace{160pt} \lesssim \frac{(\ell'-\ell)^{\frac{1}{2} + p\beta} 2^{-k 2 p \beta}}{(\ell'-\ell)^{2p\alpha} 2^{-k 2 p \alpha}} = (\ell'-\ell)^{\frac{1}{2} + p\beta - 2p\alpha} 2^{-k2p (\beta - \alpha)}.
\end{align*}
If we set $\alpha = \beta - 1/(2p) - \varepsilon$, then $1/2 + p \beta - 2p\alpha = 3/2 - p \beta + 2p\varepsilon$. Now by assumption $p \beta > 7/2$ and therefore we can find $\alpha \in (0, \beta - 1/(2p))$ such that
\begin{align}\label{e:continuous martingale pr2}
1/2 + p \beta - 2p\alpha < -2.
\end{align}
Estimating the double sum by a double integral, we easily see that for all $\gamma<-2$
\begin{align*}
\sum_{\ell=1}^{2^k} \sum_{\ell'=\ell+1}^{2^k} (\ell'-\ell)^{\gamma} \lesssim 2^k.
\end{align*}
Therefore, we have for $\alpha \in (0, \beta - 1/(2p))$ satisfying \eqref{e:continuous martingale pr2}
\begin{align*}
&\sum_{\ell=1}^{2^k} \sum_{\ell'=\ell+1}^{2^k} P\left(|I^{\mathrm{It\hat{o}}}(X,\mathrm{d} X)_{\ell2^{-k}, \ell'2^{-k}} - I^{\mathrm{It\hat{o}}}_k(X,\mathrm{d} X)_{\ell2^{-k}, \ell'2^{-k}}| > |(\ell'-\ell)2^{-k}|^{2\alpha}\right) \\
&\hspace{50pt} \lesssim 2^k 2^{-k2p (\beta - \alpha)}.
\end{align*}
Since $\alpha < \beta - 1/(2p)$, this is summable in $k$, and therefore Borel-Cantelli implies that
\begin{align}\label{e:continuous martingale pr3}
\sup_k \sup_{0 \le \ell < \ell' \le 2^k} \frac{|I^{\mathrm{It\hat{o}}}(X,\mathrm{d} X)_{\ell 2^{-k}, \ell' 2^{-k}} - I^{\mathrm{It\hat{o}}}_k(X,\mathrm{d} X)_{\ell 2^{-k}, \ell' 2^{-k}} |}{|(\ell'-\ell)2^{-k}|^{2\alpha}} < \infty
\end{align}
almost surely. We only proved this for $\alpha$ close enough to $\beta - 1/(2p)$, but of course then it also holds for all $\alpha'\le\alpha$.
The estimate \eqref{e:uniform hoelder along dyadics for martingale} now follows by combining \eqref{e:continuous martingale pr1} and \eqref{e:continuous martingale pr3}. The uniform convergence of $I^{\mathrm{It\hat{o}}}_k(X,\mathrm{d} X)$ to $I^{\mathrm{It\hat{o}}}(X,\mathrm{d} X)$ follows from \eqref{e:continuous martingale pr3} in combination with the H\"older continuity of $X$.
\end{proof}
\begin{ex}
The conditions of Theorem~\ref{t:continuous martingale iterated integrals} are satisfied by
all It\^{o} martingales of the form $X_t = X_0 + \int_0^t \sigma_s \mathrm{d} W_s$, as long as $\sigma$ satisfies $E(\sup_{s \in [0,1]} |\sigma_s|^{2p}) < \infty$
for some $p > 7$. In that case we can take $\beta = 1/2$ so that in particular $\beta - 1/p > 1/3$, which means that $X$ and $I^{\mathrm{It\hat{o}}}(X,\mathrm{d} X)$ are sufficiently regular to apply the results of Section~\ref{s:pathwise ito}.
\end{ex}
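A martingale of this form can be illustrated by an Euler scheme. The sketch below uses a hypothetical bounded adapted volatility $\sigma_s = 1 + \frac12 \sin(W_s)$ (chosen only for illustration); it checks the exact pathwise decomposition $X(1)^2 - X(0)^2 = 2\, I^{\mathrm{It\hat{o}}}_k(X,\mathrm{d}X)(1) + [X,X]_k(1)$ at every dyadic level, and computes the level-$k$ quadratic variations, which stabilize near $\int_0^1 \sigma_s^2\,\mathrm{d}s$ as $k$ grows.

```python
import numpy as np

rng = np.random.default_rng(3)
K = 12
n = 2 ** K
dw = rng.normal(0.0, n ** -0.5, n)
w = np.concatenate([[0.0], np.cumsum(dw)])
sigma = 1.0 + 0.5 * np.sin(w[:-1])                   # bounded, adapted volatility
x = np.concatenate([[0.0], np.cumsum(sigma * dw)])   # X_t = int_0^t sigma dW (Euler)

def ito_sum(x, k):
    # non-anticipating dyadic Riemann sum I_k(X, dX)(1)
    step = 2 ** (K - k)
    xs = x[::step]
    return np.sum(xs[:-1] * np.diff(xs))

def qv(x, k):
    # level-k dyadic quadratic variation [X, X]_k(1)
    step = 2 ** (K - k)
    return np.sum(np.diff(x[::step]) ** 2)

for k in range(K + 1):
    # exact identity X(1)^2 - X(0)^2 = 2 I_k(X, dX)(1) + [X, X]_k(1)
    assert abs(x[-1] ** 2 - 2 * ito_sum(x, k) - qv(x, k)) < 1e-9
```

Only the stabilization of the two terms as $k \to \infty$ reflects the theorem; the displayed identity itself holds for every discrete path.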
\section{Pathwise stochastic differential equations}\label{s:sde}
We are now ready to solve SDEs of the form
\begin{equation}\label{e:sde}
\mathrm{d} y(t) = b(y(t)) \mathrm{d} t + \sigma(y(t)) \mathrm{d} v(t), \qquad y(0) = y_0,
\end{equation}
pathwise, where the ``stochastic'' integral $\mathrm{d} v$ will be interpreted as $I(\sigma(y), \mathrm{d} v)$ or $I^{\mathrm{It\hat{o}}}(\sigma(y), \mathrm{d} v)$.
Assume for example that $(v, L(v,v)) \in \mathcal{C}^\alpha \times \mathcal{C}^{2\alpha}$ for some $\alpha \in (1/3,1/2)$ are given, and that $b$ is Lipschitz continuous whereas $\sigma \in C^{1+\varepsilon}_b$ for some $\varepsilon$ with $2(\alpha+\varepsilon)>1$. Then Corollary~\ref{c:controlled under smooth} implies that $\sigma(y) \in \mathcal{D}^{\varepsilon\alpha}_v$ for every $y \in \mathcal{D}^\alpha_v$, and Theorem~\ref{t:rough path integral} then shows that $y_0 + \int_0^\cdot b(y(t)) \mathrm{d} t + I(\sigma(y),\mathrm{d} v) \in \mathcal{D}^\alpha_v$. Moreover, if we restrict ourselves to the set
\[
\mathcal{M}_\sigma = \{ y \in \mathcal{D}^\alpha_v : \lVert y^v \rVert_\infty \le \lVert \sigma \rVert_\infty \},
\]
then the map $\mathcal{M}_\sigma \ni (y,y^v) \mapsto \Gamma(y) = (y_0 + \int_0^\cdot b(y(t)) \mathrm{d} t + I(\sigma(y),\mathrm{d} v), \sigma(y)) \in \mathcal{M}_\sigma$ satisfies the bound
\begin{align*}
\lVert \Gamma(y) \rVert_{v,\alpha} & \lesssim |y_0| + |b(0)| + \lVert b \rVert_{\mathrm{Lip}} \lVert y \rVert_\infty + \lVert \sigma(y) \rVert_{v,\varepsilon\alpha} (\lVert v \rVert_\alpha + \lVert v \rVert_\alpha^2 + \lVert L(v,v)\rVert_{2\alpha}) + \lVert \sigma(y) \rVert_\alpha \\
& \lesssim |y_0| + |b(0)| + (1 + \lVert b \rVert_{\mathrm{Lip}})(1 + \lVert \sigma \rVert_{C^{1+\varepsilon}_b}^{2+\varepsilon})(1 + \lVert v \rVert_\alpha^2 + \lVert L(v,v)\rVert_{2\alpha})(1 + \lVert y \rVert_{v,\varepsilon\alpha}),
\end{align*}
where we wrote $\lVert b \rVert_{\mathrm{Lip}}$ for the Lipschitz norm of $b$.
To pick up a small factor we apply a scaling argument. For $\lambda\in(0,1]$ we introduce the map $\Lambda_\lambda \colon \mathcal{C}^\beta \to \mathcal{C}^\beta$ defined by $\Lambda_\lambda f(t) = f(\lambda t)$. Then for $\lambda = 2^{-k}$ and on the interval $[0,\lambda]$ equation~\eqref{e:sde} is equivalent to
\begin{equation}\label{e:sde rescaled}
\mathrm{d} y^\lambda(t) = \lambda b(y^\lambda(t)) \mathrm{d} t + \lambda^\alpha \sigma(y^\lambda(t)) \mathrm{d} v^\lambda(t), \qquad y^\lambda(0) = y_0,
\end{equation}
where $y^\lambda = \Lambda_\lambda y$, $v^\lambda = \lambda^{-\alpha} \Lambda_\lambda v$. To see this, note that
\[
\Lambda_\lambda I(f,\mathrm{d} v) = \lim_{N\to \infty} \int_0^{\lambda\cdot} S_N f (t) \partial_t S_N v (t) \mathrm{d} t = \lim_{N\to\infty} \int_0^{\cdot} (\Lambda_\lambda S_N f) (t) \partial_t (\Lambda_\lambda S_N v) (t) \mathrm{d} t.
\]
But now $\Lambda_{2^{-k}} S_N g = S_{N-k} \Lambda_\lambda g$ for all sufficiently large $N$, and therefore
\[
\Lambda_\lambda I(f,\mathrm{d} v) = \lambda^\alpha I(\Lambda_\lambda f, \mathrm{d} v^\lambda).
\]
For the quadratic covariation we have
\[
\Lambda_\lambda [f,v] = [\Lambda_\lambda f, \Lambda_\lambda v] = \lambda^\alpha [\Lambda_\lambda f, v^\lambda],
\]
from where we get~\eqref{e:sde rescaled} also in the It\^o case. In other words we can replace $b$ by $\lambda b$, $\sigma$ by $\lambda^\alpha \sigma$, and $v$ by $v^\lambda$.
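For a smooth driver $v$ the integral $I(f,\mathrm{d} v)$ reduces to a Riemann--Stieltjes integral, so the rescaling can be checked numerically with a plain Euler scheme; the coefficients $b$, $\sigma$ and the path $v$ below are illustrative choices, not taken from the text:

```python
import math

# Check of the rescaling behind eq. (sde rescaled) for a smooth driver:
# y solves dy = b(y) dt + sigma(y) dv on [0, lam] iff y^lam(t) = y(lam t)
# solves dy^lam = lam*b(y^lam) dt + lam^alpha*sigma(y^lam) dv^lam(t) with
# v^lam(t) = lam^{-alpha} v(lam t).  (Illustrative b, sigma, v.)
b = lambda y: math.cos(y)
sigma = lambda y: 1.0 + 0.5 * math.sin(y)
v_prime = lambda t: 3.0 * math.cos(3.0 * t)      # v(t) = sin(3t)

alpha, lam, y0, N = 0.4, 0.25, 1.3, 20000

y, dt = y0, lam / N                              # Euler for the original equation on [0, lam]
for j in range(N):
    y += dt * (b(y) + sigma(y) * v_prime(j * dt))

z, h = y0, 1.0 / N                               # Euler for the rescaled equation on [0, 1]
for j in range(N):
    dv_lam = lam ** (1.0 - alpha) * v_prime(lam * j * h)       # (v^lam)'(t)
    z += h * (lam * b(z) + lam ** alpha * sigma(z) * dv_lam)

print(abs(y - z))
assert abs(y - z) < 1e-9
```

The two Euler iterations coincide step by step (up to rounding of $\lambda^\alpha\lambda^{1-\alpha}$), which is precisely the substitution $b \to \lambda b$, $\sigma \to \lambda^\alpha \sigma$, $v \to v^\lambda$.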
It now suffices to show that $v^\lambda$, $L(v^\lambda, v^\lambda)$, and $[v^\lambda, v^\lambda]$ are uniformly bounded in $\lambda$. Since only increments of $v$ appear in~\eqref{e:sde} we may suppose $v(0) = 0$, in which case it is easy to see that $\lVert \Lambda_\lambda v \rVert_\alpha \lesssim \lambda^\alpha \lVert v \rVert_\alpha$ and $\lVert [v^\lambda, v^\lambda]\rVert_{2\alpha} \lesssim \lVert [v,v]\rVert_{2\alpha}$. As for the L\'evy area, we have
\begin{align*}
L(v^\lambda, v^\lambda) & = I(v^\lambda, \mathrm{d} v^\lambda) - \pi_<(v^\lambda, v^\lambda) - S(v^\lambda, v^\lambda) = \lambda^{-2\alpha} \Lambda_\lambda I(v,\mathrm{d} v) - \pi_<(v^\lambda, v^\lambda) - S(v^\lambda, v^\lambda) \\
& = \lambda^{-2\alpha} \big\{ \Lambda_\lambda L(v,v) + [\Lambda_\lambda \pi_<(v,v) - \pi_<(\Lambda_\lambda v,\Lambda_\lambda v)] + [\Lambda_\lambda S(v,v) - S(\Lambda_\lambda v, \Lambda_\lambda v)]\big\},
\end{align*}
and therefore
\[
\lVert L(v^\lambda, v^\lambda) \rVert_{2\alpha} \lesssim \lVert L(v,v) \rVert_{2\alpha} + \lVert S(v,v) \rVert_{2\alpha} + \lVert v \rVert_{\alpha}^2 + \lambda^{-2\alpha} \lVert \Lambda_\lambda \pi_<(v,v) - \pi_<(v^\lambda,v^\lambda) \rVert_{2\alpha}.
\]
But now
\begin{align*}
|\Lambda_\lambda \pi_<(v,v)_{s,t} - \pi_<(v^\lambda,v^\lambda)_{s,t}| & \le | \pi_<(v,v)_{\lambda s,\lambda t} - v(\lambda s) v_{\lambda s, \lambda t}| \\
&\quad + |\Lambda_\lambda v(s) (\Lambda_\lambda v)_{s,t} - \pi_<(\Lambda_\lambda v, \Lambda_\lambda v)_{s,t}| \\
&\lesssim \lVert v \rVert_\alpha^2 |\lambda(t-s)|^{2\alpha} + \lVert \Lambda_\lambda v \rVert_\alpha^2 |t-s|^{2\alpha} \\
&\lesssim \lambda^{2\alpha} \lVert v \rVert_\alpha^2 |t-s|^{2\alpha}.
\end{align*}
From here we obtain the uniform boundedness of $\lVert y^\lambda \rVert_{v^\lambda,\alpha}$ for small $\lambda$, where the required smallness of $\lambda$ depends only on $b,\sigma, v, L(v,v)$ and possibly $[v,v]$, but not on $y_0$. If $\sigma \in C^{2+\varepsilon}_b$, similar arguments give us a contraction for small $\lambda$, and therefore we obtain the existence and uniqueness of solutions to~\eqref{e:sde rescaled}. Since all operations involved depend on $(v,L(v,v),y_0)$ and possibly $[v,v]$ in a locally Lipschitz continuous way, $y^\lambda$ also depends locally Lipschitz continuously on this extended data.
Then $y = \Lambda_{\lambda^{-1}} y^\lambda$ solves~\eqref{e:sde} on $[0,\lambda]$, and since $\lambda$ can be chosen independently of $y_0$, we obtain the global in time existence and uniqueness of a solution which depends locally Lipschitz continuously on $(v, L(v,v), y_0)$ and possibly $[v,v]$.
\begin{thm}\label{t:sde}
Let $\alpha \in (1/3, 1)$ and let $(v,L(v,v))$ satisfy the assumptions of Theorem~\ref{t:rough path integral}. Let $y_0 \in \mathbb{R}^d$, let $\varepsilon>0$ be such that $\alpha(2+\varepsilon) > 1$, and let $\sigma \in C^{2+\varepsilon}_b$ and $b$ be Lipschitz continuous. Then there exists a unique $y \in \mathcal{D}^\alpha_v$ such that
\[
y = y_0 + \int_0^\cdot b(y(t)) \mathrm{d} t + I(\sigma(y), \mathrm{d} v).
\]
The solution $y$ depends locally Lipschitz continuously on $(v, L(v,v), y_0)$. If furthermore $[v,v]$ satisfies the assumptions of Corollary~\ref{c:pathwise ito with smooth quadratic variation}, then there also exists a unique solution $x\in \mathcal{D}^\alpha_v$ to
\begin{align*}
x & = y_0 + \int_0^\cdot b(x(t)) \mathrm{d} t + I^{\mathrm{It\hat{o}}}(\sigma(x), \mathrm{d} v) \\
& = y_0 + \int_0^\cdot b(x(t)) \mathrm{d} t + I(\sigma(x), \mathrm{d} v) - \frac{1}{2} \int_0^\cdot \mathrm{D} \sigma(x(t)) \sigma(x(t)) \mathrm{d} [v,v]_t
\end{align*}
and $x$ depends locally Lipschitz continuously on $(v, L(v,v), [v,v], y_0)$.
\end{thm}
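The It\^o--Stratonovich correction appearing in the theorem can be seen already at the level of discrete sums. For the illustrative choice $\sigma(x)=x$, midpoint (Stratonovich-type) and left-point (It\^o-type) Riemann sums along any discretized path differ by exactly half of the discrete quadratic variation:

```python
import math
import random

# For sigma(x) = x, midpoint Riemann sums of int v dv exceed left-point sums
# by exactly (1/2) * sum of (dv)^2, a discrete version of the correction
# (1/2) int D(sigma) sigma d[v,v]; the random-walk path is an illustration.
random.seed(0)
N = 10000
dv = [random.gauss(0.0, math.sqrt(1.0 / N)) for _ in range(N)]
v = [0.0]
for inc in dv:
    v.append(v[-1] + inc)

ito = sum(v[j] * dv[j] for j in range(N))                       # left endpoints
strat = sum(0.5 * (v[j] + v[j + 1]) * dv[j] for j in range(N))  # midpoints
qv = sum(inc * inc for inc in dv)                               # discrete [v,v]

print(strat - ito, 0.5 * qv)
assert abs((strat - ito) - 0.5 * qv) < 1e-9
```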
\begin{rmk}
Since our integral is pathwise continuous, we can of course consider anticipating initial conditions and coefficients. Such problems arise naturally in the study of random dynamical systems; see for example~\cite{Imkeller1998,Arnold1999}. There are various approaches, for example filtration enlargements, Skorokhod integrals, or the noncausal Ogawa integral. While filtration enlargements are technically difficult, Skorokhod integrals have the disadvantage that in the anticipating case the integral is not always easy to interpret and can behave pathologically; see~\cite{Barlow1995}. With classical rough path theory these technical problems disappear. But then the integral is given as a limit of compensated Riemann sums (see Proposition~\ref{p:Gubinelli rough paths}). With our formulation of the integral it is clear that we can indeed consider usual Riemann sums. An approach to pathwise integration which allows one to define anticipating integrals without many technical difficulties while retaining a natural interpretation of the integral is the stochastic calculus via regularization of Russo and Vallois~\cite{Russo1993,Russo2007}. The integral notion studied by Ogawa~\cite{Ogawa1984, Ogawa1985} for anticipating stochastic integrals with respect to Brownian motion is based on Fourier expansions of the integrand and the integrator, and is therefore related to our integral and to the Stratonovich integral (see Nualart and Zakai~\cite{NualartZakai1989}). Like the classical It\^o integral, it is interpreted in an $L^2$ limit sense, not a pathwise one.
\end{rmk}
\section{Introduction}
\label{sec:introduction}
Let $c(n,i)$ denote the number of permutations of $[n]=\{1,2,\ldots,n\}$ with $i$ cycles.
The following equation is well known; for example see \cite{Bona,Stanley1997}:
\begin{equation}
\label{eq:2}
\sum_{i=1}^n c(n,i) x^i = x(x+1) \cdots (x+n-1).
\end{equation}
Let $n$ and $k$ be positive integers with $k<n$.
By differentiating \eqref{eq:2} with respect to $x$ and substituting $x=-k$, we get the following:
\begin{equation}
\label{eq:4}
\sum_{i=1}^n c(n,i)\cdot i \cdot (-k)^{i-1} = (-1)^k k! (n-k-1)!.
\end{equation}
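Both \eqref{eq:2} and \eqref{eq:4} can be confirmed by exhaustive enumeration for small $n$; the following brute-force sketch counts cycles directly:

```python
from itertools import permutations
from math import factorial

def cycle_count(p):
    """Number of cycles of a permutation p of {0, ..., n-1}."""
    seen, c = [False] * len(p), 0
    for i in range(len(p)):
        if not seen[i]:
            c += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = p[j]
    return c

n = 6
c = [0] * (n + 1)                      # c[i] = c(n, i), by brute force
for p in permutations(range(n)):
    c[cycle_count(p)] += 1

# eq. (2) at x = 3: sum_i c(n,i) x^i = x (x+1) ... (x+n-1)
x, rhs = 3, 1
for m in range(n):
    rhs *= x + m
assert sum(c[i] * x**i for i in range(1, n + 1)) == rhs

# eq. (4): sum_i c(n,i) * i * (-k)^(i-1) = (-1)^k k! (n-k-1)! for 0 < k < n
for k in range(1, n):
    lhs = sum(c[i] * i * (-k) ** (i - 1) for i in range(1, n + 1))
    assert lhs == (-1) ** k * factorial(k) * factorial(n - k - 1)
print("eqs. (2) and (4) verified for n =", n)
```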
In particular, if $k=1$, then \eqref{eq:4} implies the following theorem.
\begin{thm}\label{thm:1}
The total number of cycles of all even permutations of $[n]$ and the
total number of cycles of all odd permutations of $[n]$ differ by $(-1)^n(n-2)!$.
\end{thm}
The problem of finding a bijective proof of Theorem~\ref{thm:1} was proposed by Mikl\'{o}s B\'{o}na and it has been added to \cite{StanleyEC1se} as an exercise (private communication with Richard Stanley and Mikl\'{o}s B\'{o}na).
In this note, we prove Theorem~\ref{thm:1} bijectively by finding a sign-reversing involution. We also prove \eqref{eq:4} bijectively.
\section{Bijective proofs}
\label{sec:bijective-proofs}
Recall the lexicographic order on the pairs of integers, that is, $(i_1,j_1) \leq (i_2,j_2)$ if and only if $i_1 < i_2$, or $i_1=i_2$ and $j_1\leq j_2$. Note that this is a linear order.
Let $T(n)$ denote the set of pairs $(\pi,C)$ where $\pi$ is a permutation of $[n]$ and $C$ is a cycle of $\pi$. Then Theorem~\ref{thm:1} is equivalent to the following:
\begin{equation}
\label{eq:1}
\sum_{(\pi,C)\in T(n)} \sgn(\pi) =(-1)^n(n-2)!.
\end{equation}
\begin{proof}[Proof of Theorem~\ref{thm:1}]
We define a map $\phi:T(n)\rightarrow T(n)$ as follows. Let $(\pi,C)\in T(n)$.
\textbf{\textsc{Case 1:}} $C$ contains at most $n-2$ integers. Let $(i,j)$ be the smallest pair, in lexicographic order, of distinct integers $i$ and $j$ which are not contained in $C$.
Then we define $\phi(\pi,C)=(\tau_{ij}\pi,C)$, where $\tau_{ij}$ is the transposition exchanging $i$ and $j$.
\textbf{\textsc{Case 2:}} $C$ contains at least $n-1$ integers. If $C$ does not contain $1$, then we define $\phi(\pi,C)=(\pi,C)$. If $C$ contains $1$, then we have either $\pi=(a_0)(1,a_1,a_2,\ldots,a_{n-2})$ or $\pi=(1,a_0,a_1,\ldots,a_{n-2})$ in cycle notation for some integers $a_i$. Let $\pi'=(1,a_0,a_1,\ldots,a_{n-2})$ if $\pi=(a_0)(1,a_1,a_2,\ldots,a_{n-2})$, and $\pi'=(a_0)(1,a_1,a_2,\ldots,a_{n-2})$ if $\pi=(1,a_0,a_1,\ldots,a_{n-2})$.
We define $\phi(\pi,C)=(\pi',C')$, where $C'$ is the cycle of $\pi'$ containing $1$.
Let us define the \emph{sign} of $(\pi,C)\in T(n)$ to be $\sgn(\pi)$. It
is easy to see that $\phi$ is a sign-reversing involution on $T(n)$ whose
fixed points are precisely those $(\pi,C)\in T(n)$ such that $1$ forms a
$1$-cycle and the rest of the integers form an $(n-1)$-cycle, which is $C$. Since
there are $(n-2)!$ such fixed points of $\phi$ which all have sign
$(-1)^n$, we get \eqref{eq:1}, and thus Theorem~\ref{thm:1}.
\end{proof}
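The involution $\phi$ can be implemented directly and its claimed properties verified exhaustively for small $n$. In the sketch below, permutations are stored as dictionaries $i\mapsto\pi(i)$ and cycles as frozensets; this encoding is ours, not part of the proof:

```python
from itertools import permutations
from math import factorial

def cycles(pi):
    """Cycles of a permutation stored as a dict i -> pi(i) on {1, ..., n}."""
    seen, out = set(), []
    for i in pi:
        if i not in seen:
            cyc, j = [], i
            while j not in seen:
                seen.add(j)
                cyc.append(j)
                j = pi[j]
            out.append(frozenset(cyc))
    return out

def sign(pi):
    return (-1) ** (len(pi) - len(cycles(pi)))

def phi(pi, C):
    n = len(pi)
    if len(C) <= n - 2:
        # Case 1: compose with the transposition of the two smallest integers outside C
        i, j = sorted(set(range(1, n + 1)) - C)[:2]
        swap = {i: j, j: i}
        return {x: swap.get(pi[x], pi[x]) for x in pi}, C
    if 1 not in C:
        return pi, C            # Case 2, fixed point: (1) together with the (n-1)-cycle C
    q = dict(pi)
    if len(C) == n - 1:         # pi = (a0)(1, a1, ..., a_{n-2}): splice a0 in after 1
        a0 = next(x for x in pi if x not in C)
        q[1], q[a0] = a0, pi[1]
    else:                       # pi = (1, a0, a1, ..., a_{n-2}): split a0 = pi(1) off
        a0 = pi[1]
        q[1], q[a0] = pi[a0], a0
    return q, next(cc for cc in cycles(q) if 1 in cc)

for n in (4, 5):
    perms = [dict(zip(range(1, n + 1), p)) for p in permutations(range(1, n + 1))]
    T = [(pi, C) for pi in perms for C in cycles(pi)]
    fixed = 0
    for pi, C in T:
        qi, Cp = phi(pi, C)
        assert phi(qi, Cp) == (pi, C)        # phi is an involution
        if (qi, Cp) == (pi, C):
            fixed += 1
            assert sign(pi) == (-1) ** n     # every fixed point has sign (-1)^n
        else:
            assert sign(qi) == -sign(pi)     # phi reverses the sign otherwise
    assert fixed == factorial(n - 2)
    assert sum(sign(pi) for pi, C in T) == (-1) ** n * factorial(n - 2)
print("phi verified for n = 4 and n = 5")
```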
Now we will generalize this argument to prove \eqref{eq:4}.
Let $P(n,k)$ denote the set of triples $(\pi,C,f)$ where $\pi$ is a permutation of $[n]$, $C$ is a cycle of $\pi$ and $f$ is a function from the set of cycles of $\pi$ except $C$ to $[k]$.
The left-hand side of \eqref{eq:4} is equal to
\begin{align*}
\sum_{(\pi,C)\in T(n)} (-k)^{cyc(\pi)-1} &= \sum_{(\pi,C)\in T(n)} (-1)^{cyc(\pi)-1} k^{cyc(\pi)-1}\\
&= (-1)^{n-1}\sum_{(\pi,C,f)\in P(n,k)} \sgn(\pi),
\end{align*}
because $\sgn(\pi)=(-1)^{n-cyc(\pi)}$ and for given $(\pi,C)\in T(n)$,
there are $k^{cyc(\pi)-1}$ choices of $f$ with $(\pi,C,f)\in P(n,k)$.
Thus we get that \eqref{eq:4} is equivalent to the following:
\begin{equation}
\label{eq:5}
\sum_{(\pi,C,f)\in P(n,k)} \sgn(\pi) = (-1)^{n-k-1} k! (n-k-1)!.
\end{equation}
Let us define the \emph{sign} of $(\pi,C,f)\in P(n,k)$ to be $\sgn(\pi)$. Let
$\Fix(n,k)$ denote the set of elements $(\pi,C,f)\in P(n,k)$ such that (1)
each integer $i\in[k]$ forms a 1-cycle of $\pi$ and the integers
$k+1,k+2,\ldots,n$ form an $(n-k)$-cycle of $\pi$, which is $C$ and (2) the
$f$ values of the cycles of $\pi$ except $C$ are all distinct. Then, to
prove \eqref{eq:5}, it is sufficient to find a sign-reversing involution on
$P(n,k)$ whose fixed point set is $\Fix(n,k)$.
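Identity \eqref{eq:5} itself can be checked by brute force before constructing the involution: for each permutation $\pi$ there are $cyc(\pi)$ choices of the cycle $C$ and $k^{cyc(\pi)-1}$ choices of $f$, so the left-hand side is a weighted sum over permutations:

```python
from itertools import permutations
from math import factorial

def cycle_count(p):
    """Number of cycles of a permutation p of {0, ..., n-1}."""
    seen, c = [False] * len(p), 0
    for i in range(len(p)):
        if not seen[i]:
            c += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = p[j]
    return c

# Each pi contributes sgn(pi) * cyc(pi) * k^(cyc(pi)-1) to the LHS of eq. (5).
for n in (4, 5, 6):
    for k in range(1, n):
        total = 0
        for p in permutations(range(n)):
            c = cycle_count(p)
            total += (-1) ** (n - c) * c * k ** (c - 1)   # sgn(pi) = (-1)^(n - cyc)
        assert total == (-1) ** (n - k - 1) * factorial(k) * factorial(n - k - 1)
print("eq. (5) verified for n = 4, 5, 6 and all 0 < k < n")
```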
We will define a map $\psi:P(n,k)\to P(n,k)$ as follows.
Let $(\pi,C,f)\in P(n,k)$.
\textbf{\textsc{Case 1:}} There is a pair $(i,j)$ of integers $i<j$ such that $i\in C_1\ne C$ and $j\in C_2\ne C$ with $f(C_1)=f(C_2)$. Here we may have $C_1=C_2$. Let $(i,j)$ be the smallest such pair in lexicographic order. Then we define $\psi(\pi,C,f) = (\tau_{ij}\pi, C, f')$, where $f'(C')=f(C')$ if $i,j\not\in C'$, and $f'(C')=f(C_1)$ otherwise. As before, $\tau_{ij}$ is the transposition exchanging $i$ and $j$.
\textbf{\textsc{Case 2:}} Case 1 does not hold. Then the cycles of $\pi$ except $C$ are all $1$-cycles whose $f$ values are all distinct. Thus there are at most $k$ $1$-cycles of $\pi$ except $C$.
We can represent $(\pi,C,f)$ as a digraph $D$ with vertex set $[n]$ as
follows. For each integer $i$ contained in $C$, add an edge $i\to\pi(i)$.
For each integer $i$ of $[n]$ which is not contained in $C$, add an edge
$i\to f(i)$, where $f(i)$ is the $f$ value of the $1$-cycle $(i)$
consisting of $i$. For example, see Figures~\ref{fig:digraph} and
\ref{fig:digraph2}. Note that we can recover $(\pi,C,f)$ from $D$ even
when $D$ consists of cycles only because in this case $C$ is the only cycle
containing integers greater than $k$.
\begin{figure}
\centering
\begin{tikzpicture}[line width=1pt]
\node [shift={(1.2,0)}] (1) at (72:1) {$1$};
\node [shift={(-1,0)}] (8) at (144:1) {$9$};
\node [shift={(2.4,0)}] (6) at (72:1) {$11$};
\node (a) at (0:1) {$3$};
\node (b) at (72:1) {$2$};
\node (c) at (144:1) {$8$};
\node (d) at (216:1) {$10$};
\node (e) at (-72:1) {$5$};
\draw [bend left, ->] (b) to (a);
\draw [bend left, ->] (c) to (b);
\draw [bend left, ->] (d) to (c);
\draw [bend left, ->] (e) to (d);
\draw [bend left, ->] (a) to (e);
\draw [->] (1) to (b);
\draw [->] (6) to (1);
\draw [->] (8) to (c);
\node (3) at (3.5,0) {$4$};
\node (4) at (4.5,0) {$6$};
\node (5) at (5.5,0) {$7$};
\draw [bend left=60,->] (3) to (4);
\draw [bend left=60,->] (4) to (3);
\draw [loop above,->] (5) to (5);
\end{tikzpicture}
\caption{The digraph representing $(\pi,C,f)\in P(11,8)$, where $\pi=(2,3,5,10,8)(1)(4)(6)(7)(9)(11)$, $C=(2,3,5,10,8)$, $f(1)=2$, $f(4)=6$, $f(6)=4$, $f(7)=7$, $f(9)=8$ and $f(11)=1$.}
\label{fig:digraph}
\end{figure}
\begin{figure}
\centering
\begin{tikzpicture}[line width=1pt]
\node (8) at (180:2) {$9$};
\node [shift = {(-1.2,0)}] (6) at (120:1) {$11$};
\node (3) at (0:1) {$3$};
\node (2) at (60:1) {$2$};
\node (1) at (120:1) {$1$};
\node (5) at (180:1) {$8$};
\node (9) at (240:1) {$10$};
\node (4) at (300:1) {$5$};
\draw [bend left, ->] (1) to (2);
\draw [bend left, ->] (2) to (3);
\draw [bend left, ->] (3) to (4);
\draw [bend left, ->] (4) to (9);
\draw [bend left, ->] (9) to (5);
\draw [bend left, ->] (5) to (1);
\draw [->] (6) to (1);
\draw [->] (8) to (5);
\node (x) at (3.5,0) {$4$};
\node (y) at (4.5,0) {$6$};
\node (z) at (5.5,0) {$7$};
\draw [bend left=60,->] (x) to (y);
\draw [bend left=60,->] (y) to (x);
\draw [loop above,->] (z) to (z);
\end{tikzpicture}
\caption{The digraph representing $(\pi,C,f)\in P(11,8)$, where $\pi=(1,2,3,5,10,8) (4)(6)(7)(9)(11)$, $C=(1,2,3,5,10,8)$, $f(4)=6$, $f(6)=4$, $f(7)=7$, $f(9)=8$ and $f(11)=1$.}
\label{fig:digraph2}
\end{figure}
Now we consider the two sub-cases where $C$ contains an integer in $[k]$ or not.
\textbf{\textsc{Sub-Case 2-a:}} $C$ does not contain any integer in $[k]$. It is easy to see that we have this sub-case if and only if $(\pi,C,f)\in \Fix(n,k)$. We define $\psi(\pi,C,f) = (\pi,C,f)$.
\textbf{\textsc{Sub-Case 2-b:}} $C$ contains an integer in $[k]$. Let $m$ be the smallest such integer.
For an integer $i\in C$, we say that $i$ is \emph{free} if $i\in[k]$ and the in-degree of $i$ in $D$ is $1$, i.e.~there is no integer outside of $C$ pointing to $i$.
A sequence $(m_1,m_2,\ldots,m_\ell)$ of integers in $C$ is called a \emph{free chain} if it satisfies (1) for each $i\in[\ell]\setminus\{1\}$, $m_i$ is free and $m_i=\pi(m_{i-1})$, and (2) for each $i\in[\ell]$, $m_i$ is the $i$th-smallest integer in $C$.
Note that we always have a free chain, for example the sequence consisting of $m$ alone. Moreover, there is a unique maximal free chain.
Let $(m_1,m_2,\ldots,m_\ell)$ be the maximal free chain. Let $\overline m = m_1$ if $\ell$ is odd, and $\overline m = m_2$ if $\ell$ is even.
\begin{example}
The maximal free chains of the digraphs in Figures~\ref{fig:digraph} and \ref{fig:digraph2} are $(2,3,5)$ and $(1,2,3,5)$ respectively. Thus $\overline m =2$ in both Figures~\ref{fig:digraph} and \ref{fig:digraph2}.
\end{example}
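The maximal free chain can be computed mechanically. The sketch below uses our own encoding of the digraphs in Figures~\ref{fig:digraph} and \ref{fig:digraph2}: a successor map for $\pi$ restricted to $C$, and the $f$ values of the $1$-cycles outside $C$, with $k=8$:

```python
# Maximal free chain of Sub-Case 2-b; the chain is obtained greedily since
# m_i must be the i-th smallest element of C and must satisfy m_i = pi(m_{i-1}),
# with every m_i for i >= 2 free (in [k] and not hit from outside C).
def maximal_free_chain(succ, f, k):
    C = sorted(succ)                       # elements of the cycle, increasing
    targeted = set(f.values())             # elements hit by an edge from outside C
    chain = [C[0]]                         # m_1 is the smallest element of C
    while len(chain) < len(C):
        x = C[len(chain)]                  # candidate: next-smallest element of C
        if x <= k and x not in targeted and succ[chain[-1]] == x:
            chain.append(x)                # x is free and x = pi(m_{i-1})
        else:
            break
    return chain

succ1 = {2: 3, 3: 5, 5: 10, 10: 8, 8: 2}            # Fig. 1: C = (2,3,5,10,8)
f1 = {1: 2, 4: 6, 6: 4, 7: 7, 9: 8, 11: 1}
succ2 = {1: 2, 2: 3, 3: 5, 5: 10, 10: 8, 8: 1}      # Fig. 2: C = (1,2,3,5,10,8)
f2 = {4: 6, 6: 4, 7: 7, 9: 8, 11: 1}

ch1, ch2 = maximal_free_chain(succ1, f1, 8), maximal_free_chain(succ2, f2, 8)
mbar = lambda ch: ch[0] if len(ch) % 2 == 1 else ch[1]
print(ch1, ch2, mbar(ch1), mbar(ch2))   # [2, 3, 5] [1, 2, 3, 5] 2 2
assert ch1 == [2, 3, 5] and ch2 == [1, 2, 3, 5]
assert mbar(ch1) == 2 and mbar(ch2) == 2
```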
Let $D'$ be the digraph obtained from $D$ by doing the following. If
$\overline m$ is free, then let $u,v$ be the integers in $C$ such that $D$
has the edges $v\to u$ and $u\to \overline m$. It is not difficult to see
that in this case $C$ has at least two integers, which implies $u\ne
\overline m$. Then we remove the edge $v\to u$ and add an edge
$v\to\overline m$. If $\overline m$ is not free, then let $u$ and $v$ be
the integers with $u\not\in C$ and $v\in C$ such that $D$ has the edges
$u\to \overline m$ and $v\to \overline m$. Then we remove the edge $v\to
\overline m$ and add an edge $v\to u$.
We define $\psi(\pi,C,f)$ to be the element in $P(n,k)$ represented by $D'$.
\begin{example}
Let $(\pi,C,f)$ be represented by the digraph in Figure~\ref{fig:digraph}. Since $\overline m = 2$, $\psi(\pi,C,f)$ is represented by the digraph in Figure~\ref{fig:digraph2}. Note that $\psi(\psi(\pi,C,f))=(\pi,C,f)$.
\end{example}
It is easy to see that $\psi$ is a sign-reversing involution on $P(n,k)$ with fixed point set $\Fix(n,k)$. Thus we have proved \eqref{eq:4} bijectively.
\section*{Acknowledgement}
The author would like to thank the anonymous referee for reading the
manuscript carefully and making helpful comments. He would also like
to thank Vincent Beck for pointing out a mathematical typo.
\bibliographystyle{elsarticle-num}
\section{Introduction}
\label{Intro}
Frustrated antiferromagnets (AFs) attract significant attention now due to their rich phase diagrams and the multiferroic properties of some of their phases with non-collinear magnetic ordering (see, e.g., Refs~\cite{nagaosa,Ehrenberg1997,Kimura2003,mni3,utesov2017}). Multiferroics of spin origin, in which ferroelectricity is induced by spiral magnetic order, show a giant magnetoelectric response (see, e.g., Refs~\cite{Cheong2007,Tokura2009}) that makes them promising materials for technological applications. The frustration plays an important role in such multiferroics, providing the non-collinear spin textures. For example, non-collinear magnetic phases in the frustrated magnet MnWO$_4$ were shown to be ferroelectric. \cite{arkenbout2006,nojiri2011,mitamura2012} Thus, phase transitions in frustrated AFs governed by an external magnetic field are an important topic now.
The plane in which spins rotate (spiral plane) is selected in real materials by small anisotropic spin interactions. As a result, application of small or moderate magnetic fields within the spiral plane produces a flop of the spiral plane in many multiferroics accompanied with the flop of the electric moment. \cite{nagaosa} We address this effect in frustrated AFs with small biaxial anisotropy in our previous paper \cite{utesov2018} and show that the flop of the spiral plane resembles the conventional spin-flop transition in collinear AFs \cite{Neel1936} (see Figs.~\ref{fig1}(a) and \ref{fig1}(b)). Critical fields at which these transitions take place are given by similar formulas having the structure $S \sqrt{DJ}$, where $J$ is the characteristic energy of the exchange interaction, $D \ll J$ is the anisotropy value, and $S$ is the spin value.
\begin{figure}
\centering
\includegraphics[width=8cm]{figure.pdf}\\
\caption{Possible scenarios of phase transitions in a frustrated antiferromagnet with Hamiltonian~\eqref{ham1} upon increasing the magnetic field $\bf H$ directed along the easy axis $z$. (a) Strong easy-axis anisotropy. The conventional spin-flop transition in which the collinear antiferromagnetic (AF) phase is followed by the canted AF phase (CAF) at $H>H_{sf}$. (b) Weak anisotropy. First-order transition from the spiral phase (YZ), in which all spins lie in the easy $yz$ plane, to the conical spiral phase (XY), in which spins rotate in the $xy$ plane.
(c)--(e) Scenarios discussed in the present paper which are expected to arise at moderate anisotropy.
\label{fig1}}
\end{figure}
In the present paper, we continue the discussion of anisotropic frustrated AFs in small or moderate magnetic fields and consider the evolution of phase transitions upon variation of the anisotropy value in a simple model containing the frustrated exchange interaction and the single-ion biaxial anisotropy (or dipolar forces). Applying the field along the easy direction, we observe the conventional spin-flop transition presented in Fig.~\ref{fig1}(a) at sufficiently strong easy-axis anisotropy. At weak anisotropy, we find the spiral plane flop shown in Fig.~\ref{fig1}(b) which was discussed in detail in our previous paper \cite{utesov2018}. The main goal of the present study is quantitative consideration of the moderate anisotropy regime in the mean-field approximation. We propose novel sequences of phase transitions presented in Figs.~\ref{fig1}(c)--(e). The scenario shown in Fig.~\ref{fig1}(c) can be interpreted as the spin-flop transition splitting into two first-order transitions with an intermediate spiral phase. In Sec.~\ref{Frust} we find expressions for the critical fields and conditions for realization of these scenarios of phase transitions.
In Sec.~\ref{applic}, we present some particular sets of model parameters at which scenarios shown in Figs.~\ref{fig1}(c)--(e) arise. We demonstrate that the scenario of phase transitions depicted in Fig.~\ref{fig1}(c) is realized in the considered model with parameters proposed in Ref.~\cite{zh} for description of experimentally obtained phase diagram of MnWO$_4$. We present a summary of results and our conclusions in Sec.~\ref{Conc}.
\section{Frustrated antiferromagnets. General consideration.}
\label{Frust}
In this section, we present a general consideration of simple models in which a subtle interplay between different magnetic interactions leads to sequences of phase transitions shown in Fig.~\ref{fig1}.
\subsection{Antiferromagnets with single-ion biaxial anisotropy}
\label{Theor}
We consider the frustrated Heisenberg AF with small single-ion biaxial anisotropy whose Hamiltonian has the form
\begin{eqnarray}
\label{ham1}
\mathcal{H} &=& \mathcal{H}_{ex} + \mathcal{H}_{an} + \mathcal{H}_{z}, \nonumber \\
\mathcal{H}_{ex} &=& -\frac12 \sum_{i,j} J_{ij} \left(\mathbf{S}_i \cdot \mathbf{S}_j\right), \\
\mathcal{H}_{an} &=& - \sum_i \left[ D(S_i^z)^2 + E (S_i^y)^2\right], \nonumber \\
\mathcal{H}_z &=& - \sum_i \left(\mathbf{h} \cdot \mathbf{S}_i\right),\nonumber
\end{eqnarray}
where ${\bf h}=g \mu_B {\bf H}$ is the magnetic field in energy units and we assume for definiteness that $D > E > 0$ so that $x$ and $z$ are the hard and the easy axes, respectively. We also assume in all general derivations below that there is one spin in a unit cell and the lattice is arbitrary. After the Fourier transform
\begin{equation}
\label{four1}
\mathbf{S}_j = \frac{1}{\sqrt{N}} \sum_\mathbf{q} \mathbf{S}_\mathbf{q} e^{i \mathbf{q} \mathbf{R}_j},
\end{equation}
where $N$ is the number of spins in the lattice, Hamiltonian \eqref{ham1} acquires the form
\begin{eqnarray}
\label{ex2}
\mathcal{H}_{ex} &=& -\frac12 \sum_\mathbf{q} J_\mathbf{q} \left(\mathbf{S}_\mathbf{q} \cdot \mathbf{S}_{-\mathbf{q}}\right), \\
\label{an21}
\mathcal{H}_{an} &=& - \sum_\mathbf{q}\left[ D S^z_\mathbf{q} S^z_{-\mathbf{q}} + E S^y_\mathbf{q} S^y_{-\mathbf{q}}\right], \\
\label{z21}
\mathcal{H}_z &=& - \sqrt{N} \left(\mathbf{h} \cdot \mathbf{S}_{\bf 0}\right).
\end{eqnarray}
We assume that $J_\mathbf{q}$ has two equivalent maxima at ${\bf q}=\pm {\mathbf{k}}$. Then, in the absence of the anisotropy, the ground state of the system at $h=0$ is a plane spiral with modulation vector $\mathbf{k}$. We consider a simple case in which strong enough anisotropy leads to a collinear antiferromagnetic (AF) structure characterized by the vector ${\bf q}=\mathbf{k}_0$ in which spins are directed along the $z$ axis at $h=0$ and the average magnetization is zero ($\mathbf{k}_0$ can be equal to, e.g., $(\pi,\pi,\pi)$, $(0,0,\pi)$, etc.). In general, there can also be other, more complicated collinear structures, one of which is discussed in Sec.~\ref{Onefourth}.
At finite $\bf h$ applied along $z$ axis, the competing spin structures are the following (see Fig.~\ref{fig1}): (i) the collinear AF phase, (ii) the canted AF state (CAF), (iii) the helical state in which spins rotate in the easy $yz$ plane (YZ), and (iv) the conical spiral in which spins rotate in the $xy$ plane (XY). Due to the anisotropy, the AF state has lower energy than CAF at small $h$ and YZ has lower energy than XY. Classical ground state energies $\cal E$ of the considered structures read as
\begin{equation}
\label{enaf}
\frac{1}{N} \mathcal{E}_{AF} = -\frac{S^2 }{2} J_{\mathbf{k}_0}-S^2D,
\end{equation}
\begin{equation}
\label{ensf}
\frac{1}{N} \mathcal{E}_{CAF} \approx -\frac{S^2 }{2} J_{\mathbf{k}_0}- S^2 E - \frac{h^2}{2(J_{\mathbf{k}_0}-J_{0})},
\end{equation}
\begin{equation}
\label{enyz}
\frac{1}{N} \mathcal{E}_{YZ} \approx -\frac{S^2 }{2} J_\mathbf{k}- \frac{S^2(D+E)}{2}- \frac{h^2}{2(2 J_{\mathbf{k}}-J_{0}-J_{2\mathbf{k}})},
\end{equation}
\begin{equation}
\label{enxy}
\frac{1}{N} \mathcal{E}_{XY} \approx -\frac{S^2 }{2} J_{\mathbf{k}}- \frac{S^2 E }{2} - \frac{h^2}{2(J_{\mathbf{k}}-J_{0})}.
\end{equation}
The detailed derivation of Eqs.~\eqref{enyz} and \eqref{enxy} can be found in Ref.~\cite{utesov2018}. Eqs.~\eqref{enyz} and \eqref{enxy} are obtained in the first order in $D$ and $E$, under the assumption that $h$ is of the order of the conventional spin-flop field
\begin{equation}
\label{hsf}
h_{sf} = S \sqrt{2(D-E)(J_{\mathbf{k}_0}-J_{0})}
\end{equation}
which is much smaller than the saturation field $h_s\sim SJ$. Eq.~\eqref{hsf} is found by comparing Eqs.~\eqref{enaf} and \eqref{ensf}. We also neglect higher-order harmonics in the spiral phases which arise due to the anisotropy. As was shown in Ref.~\cite{utesov2018}, the contributions from higher harmonics to the ground state energy read as (note that they are of second order in the anisotropy)
\begin{equation}\label{harm1}
-\frac{S^2(D-E)^2}{2(J_\mathbf{k}-J_{3\mathbf{k}})}\, \text{ and } \, -\frac{S^2 E^2}{2(J_\mathbf{k}-J_{3\mathbf{k}})}
\end{equation}
for YZ and XY structures, respectively. Thus, our approach is valid if
\begin{eqnarray}
\label{cond1}
D-E &\ll& \min\{ J_{\mathbf{k}}-J_{3\mathbf{k}}, J_{\mathbf{k}_0}-J_{0} \},\nonumber\\
E &\ll& \min\{ J_{\mathbf{k}}-J_{3\mathbf{k}}, J_{\mathbf{k}_0}-J_{0} \}
\end{eqnarray}
(see Ref.~\cite{utesov2018} for more details).
One can see from Eqs.~\eqref{enaf}--\eqref{enxy} that the AF phase is stable at $h=0$ if
\begin{equation}
\label{cond2}
D-E > J_{\mathbf{k}}-J_{\mathbf{k}_0} \equiv \alpha.
\end{equation}
Besides, the CAF phase is energetically preferable in comparison with XY one if
\begin{equation}
\label{cond3}
E > \alpha.
\end{equation}
The opposite case of $D-E<\alpha$ and $E<\alpha$ is considered in detail in Ref.~\cite{utesov2018}, where the spiral plane flop was observed upon increasing the field (i.e., the transition shown in Fig.~\ref{fig1}(b)). Conditions \eqref{cond1} and \eqref{cond2} are compatible with each other if $\mathbf{k}$ is not very close to and not very far from $\mathbf{k}_0$ (we also imply here that $2\mathbf{k}_0$ is equal to a reciprocal lattice vector, as is frequently the case in AF phases). As is shown in Sec.~\ref{applic}, this can be achieved in a rather broad range of model parameters.
If conditions \eqref{cond1}--\eqref{cond3} hold, one has AF$\leftrightarrow$YZ$\leftrightarrow$CAF sequence of phase transitions instead of the conventional scenarios of AF$\leftrightarrow$CAF and YZ$\leftrightarrow$XY. The critical field at which the AF$\leftrightarrow$YZ transition takes place can be found from Eqs.~\eqref{enaf} and \eqref{enyz}, the result being
\begin{equation}
\label{h1}
h_1 = S \sqrt{(D-E-\alpha)(2 J_{\mathbf{k}}-J_{0}-J_{2\mathbf{k}})}.
\end{equation}
The critical field of YZ$\leftrightarrow$CAF transition derived from Eqs.~\eqref{ensf} and \eqref{enyz} has the form
\begin{equation}
\label{h2}
h_2 = S \sqrt{(D-E+\alpha) \frac{(2 J_{\mathbf{k}}-J_{0}-J_{2\mathbf{k}})(J_{\mathbf{k}_0}-J_{0})}{2 J_{\mathbf{k}}-J_{\mathbf{k}_0}-J_{2\mathbf{k}}}}.
\end{equation}
The condition of existence of YZ phase, $h_1<h_2$, reads as
\begin{equation}
\label{exyz}
\alpha < D-E < \alpha \frac{2 J_{\mathbf{k}}-J_{0}-J_{2\mathbf{k}}}{2 J_{\mathbf{k}}-J_0-J_{2\mathbf{k}}-2(J_{\mathbf{k}_0}-J_0)},
\end{equation}
where we take into account also Eq.~\eqref{cond2} and assume that the denominator is positive. Bearing in mind the positiveness of $J_{\mathbf{k}_0}-J_0$, one concludes that Eq.~\eqref{exyz} gives a finite interval for $D-E$. If the denominator in Eq.~\eqref{exyz} is negative, $h_1<h_2$ if Eqs.~\eqref{cond1} and \eqref{cond2} hold.
One can see from Eqs.~\eqref{ensf} and \eqref{enxy} that XY phase is energetically preferable in comparison with CAF state if
\begin{equation}
\label{cond4}
E<\alpha.
\end{equation}
In this case, two possible sequences of phase transitions can appear which are presented in Fig.~\ref{fig1}(d) and \ref{fig1}(e). The first one is AF$\leftrightarrow$YZ$\leftrightarrow$XY. The field of AF$\leftrightarrow$YZ transition is given by Eq.~\eqref{h1}. YZ$\leftrightarrow$XY transition is of the spiral plane flop type which is described in detail in Ref.~\cite{utesov2018} and which arises at $h=h_{sp}$, where
\begin{equation}
\label{hsp}
h_{sp} = S \sqrt{D \frac{(2 J_{\mathbf{k}}-J_{0}-J_{2\mathbf{k}})(J_{\mathbf{k}}-J_{0})}{J_{\mathbf{k}}-J_{2\mathbf{k}}}}.
\end{equation}
This scenario appears if
\begin{equation}
\label{cond5}
J_0 \leq J_{2\mathbf{k}}
\quad\mbox{ or }\quad
D < (E + \alpha) \frac{J_{\mathbf{k}}-J_{2\mathbf{k}}}{J_0-J_{2\mathbf{k}}}.
\end{equation}
When both of these conditions are violated, one has
\begin{equation}
\label{cond6}
h_1>h_{sp},
\end{equation}
where $h_1$ and $h_{sp}$ are given by Eqs.~\eqref{h1} and \eqref{hsp}, respectively, and the sequence of phase transitions shown in Fig.~\ref{fig1}(e) (AF$\leftrightarrow$XY) takes place. Corresponding critical field derived from Eqs.~\eqref{enaf} and~\eqref{enxy} reads as
\begin{equation}
\label{hxy}
h_{xy} = S \sqrt{(2D-E-\alpha) (J_{\mathbf{k}}-J_{0})}.
\end{equation}
\subsection{Antiferromagnets with dipolar forces}
\label{Dip}
In low-symmetry lattices, the magneto-dipolar interaction can effectively produce the biaxial anisotropy. \cite{utesov2017,utesov2018} Moreover, dipolar forces can be the main source of anisotropy in systems containing magnetic ions with half-filled $d$-shells (e.g., Mn$^{2+}$) because the spin-orbit interaction is particularly small in them. Thus, we consider in this subsection the model with Hamiltonian \eqref{ham1} in which $\mathcal{H}_{an}$ is replaced by
\begin{eqnarray}
\label{ham4}
\mathcal{H}_d &=& \frac12 \sum_{i,j} D^{\alpha \beta}_{ij} S^\alpha_i S^\beta_j, \\
{\cal D}^{\alpha \beta}_{ij} &=& \omega_0 \frac{v_0}{4 \pi} \left( \frac{1}{R_{ij}^3} - \frac{3 R_{ij}^\alpha R_{ij}^\beta }{R_{ij}^5}\right), \nonumber
\end{eqnarray}
where $v_0$ is the unit cell volume and
\begin{equation}\label{dipen}
\omega_0 = 4 \pi \frac{(g \mu_B)^2}{v_0} \ll J
\end{equation}
is the characteristic dipolar energy. After Fourier transform \eqref{four1} we have
\begin{equation}\label{dip2}
\mathcal{H}_d = \frac12 \sum_\mathbf{q} {\cal D}^{\alpha \beta}_\mathbf{q} S^\alpha_\mathbf{q} S^\beta_{-\mathbf{q}}.
\end{equation}
Tensor ${\cal D}^{\alpha \beta}_\mathbf{q}/2$ has three eigenvalues $\lambda_1(\mathbf{q}) \geq \lambda_2(\mathbf{q}) \geq \lambda_3(\mathbf{q})$ corresponding to three orthogonal eigenvectors $\mathbf{v}_1(\mathbf{q})$, $\mathbf{v}_2(\mathbf{q})$, and $\mathbf{v}_3(\mathbf{q})$. There is a correspondence with the model having the single-ion biaxial anisotropy if we denote $D=\lambda_1(\mathbf{q}) -\lambda_3(\mathbf{q})$ and $E=\lambda_1(\mathbf{q}) -\lambda_2(\mathbf{q})$ and direct the $z$ axis along $\mathbf{v}_3(\mathbf{q})$, the $y$ axis along $\mathbf{v}_2(\mathbf{q})$, and the $x$ axis along $\mathbf{v}_1(\mathbf{q})$. The spiral vector $\mathbf{k}$ minimizes $-J(\mathbf{q})+ [ \lambda_2(\mathbf{q}) + \lambda_3(\mathbf{q})]/2$ and it is close to the momentum which maximizes $J(\mathbf{q})$.
If $\lambda_i(\mathbf{k}) \approx \lambda_i(\mathbf{k}_0)$ and $\mathbf{v}_i(\mathbf{k}) \approx \mathbf{v}_i(\mathbf{k}_0)$ at $i=1,2,3$, results of Sec.~\ref{Theor} are directly applicable to the considered situation upon the substitutions $D \rightarrow \lambda_1(\mathbf{q}) -\lambda_3(\mathbf{q})$ and $E \rightarrow \lambda_1(\mathbf{q}) -\lambda_2(\mathbf{q})$. However, the eigenvalues and eigenvectors can differ at momenta $\mathbf{k}_0$ and $\mathbf{k}$. Thus, easy and hard axes can be different in the collinear and the spiral structures. This complicates the behaviour of the system under external magnetic field. Corresponding analysis is out of the scope of the present paper.
\section{Frustrated antiferromagnets. Applications.}
\label{applic}
\subsection{Chain of classical spins}
\label{Model}
We discuss now a particular realization of model \eqref{ham1} in which considered sequences of phase transitions can arise: a system of classical spin chains with two competing antiferromagnetic exchange interactions $J_1$ and $J_2$ between nearest and next-nearest spins. This model is relevant to 3D spin systems containing ferromagnetic planes interacting with each other by the frustrating AF interactions. Each ferromagnetic plane plays a role of a classical magnetic moment in the mean-field consideration of the spin ordering.
We have in this case
\begin{equation}\label{AJq}
J_\mathbf{q} = - 2 (J_1 \cos{q_c} + J_2 \cos{2q_c}).
\end{equation}
If $J_2 > J_1/4$, $J_\mathbf{q}$ has a maximum at $\mathbf{k}=(0,0,k)$, where
\begin{equation}\label{Ak}
k= \pi - \arccos{\frac{J_1}{4J_2}}.
\end{equation}
Let us consider the following set of dimensionless parameters
\begin{equation}
\label{Apar1}
J_1=1, \quad J_2 = 0.3, \quad D=0.2, \quad E=0.1, \quad S=1
\end{equation}
which gives $k\approx 0.81 \pi$, $\mathbf{k}_0=(0,0,\pi)$, $J_{\mathbf{k}}-J_{\mathbf{k}_0} \approx 0.04$, and $J_{\mathbf{k}}-J_{3\mathbf{k}} \approx 1.35$. Then, conditions \eqref{cond1}, \eqref{cond2}, and \eqref{cond3} are well satisfied and the scenario shown in Fig.~\ref{fig1}(c) is realized. Eqs.~\eqref{h1} and \eqref{h2} give $h_1 \approx 0.6$ and $h_2 \approx 1.34$. The field $h_{sf} \approx 0.9$ given by Eq.~\eqref{hsf} lies between $h_1$ and $h_2$. Ground state energies \eqref{enaf}--\eqref{enxy} of the considered spin states are drawn in Fig.~\ref{plot1}(a). Notice that the XY conical spiral has a higher energy than CAF. The saturation field, which can be estimated as $h_s \approx S(J_\mathbf{k}-J_0) \approx 4$, is not shown in Fig.~\ref{plot1}. One can replace $J_2$ in Eq.~\eqref{Apar1} by any value from the interval $(0.27,0.34)$ to realize the considered scenario of phase transitions AF$\leftrightarrow$YZ$\leftrightarrow$CAF.
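The quoted values of $k$ and of the saturation-field estimate can be reproduced directly from Eqs.~\eqref{AJq} and \eqref{Ak}. A short check (Python, purely illustrative):

```python
import math

def J(q, J1=1.0, J2=0.3):
    """Fourier transform of the exchange couplings, Eq. (AJq)."""
    return -2.0 * (J1 * math.cos(q) + J2 * math.cos(2.0 * q))

J1, J2, S = 1.0, 0.3, 1.0
k = math.pi - math.acos(J1 / (4.0 * J2))   # Eq. (Ak), valid for J2 > J1/4
h_s = S * (J(k) - J(0.0))                  # saturation-field estimate

# For this parameter set, k is close to 0.81*pi and h_s is close to 4.
```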
\begin{figure}
\centering
\includegraphics[width=8cm]{plot1.pdf}
\includegraphics[width=8cm]{plot2.pdf}
\includegraphics[width=8cm]{plot3.pdf}
\caption{Ground state energies of competing phases \eqref{enaf}--\eqref{enxy} for the sets of parameters (a) \eqref{Apar1}, (b) \eqref{Apar2}, and (c) \eqref{Apar3}. Critical fields $h_1$, $h_2$, $h_{sp}$, $h_{xy}$, and $h_{sf}$, given by Eqs.~\eqref{h1}, \eqref{h2}, \eqref{hsp}, \eqref{hxy}, and \eqref{hsf}, respectively, are denoted by gray vertical lines.
\label{plot1}}
\end{figure}
The sequence of phase transitions AF$\leftrightarrow$YZ$\leftrightarrow$XY (see Fig.~\ref{fig1}(d)) appears with the following set of parameters:
\begin{equation}
\label{Apar2}
J_1=1, \quad J_2 = 0.3, \quad D=0.1, \quad E=0.02, \quad S=1.
\end{equation}
Evidently, conditions \eqref{cond4} and \eqref{cond5} ($J_{0}<J_{2\mathbf{k}}$) hold in this case. Eqs.~\eqref{h1} and~\eqref{hsp} yield $h_1 \approx 0.5$ and $h_{sp} \approx 1.17$, respectively. The corresponding ground state energies are plotted in Fig.~\ref{plot1}(b). One can replace $J_2$ in Eq.~\eqref{Apar2} by any value from the interval $(0.29,0.33)$ to realize this scenario of phase transitions.
The scenario depicted in Fig.~\ref{fig1}(e) (AF$\leftrightarrow$XY) can appear if one includes the third-nearest-neighbor exchange interaction along the chain so that
\begin{equation}\label{AJq2}
J_\mathbf{q} = - 2 (J_1 \cos{q_c} + J_2 \cos{2q_c} + J_3\cos{3q_c}).
\end{equation}
Exchange constants
\begin{equation}\label{Apar3}
J_1=1, \quad J_2 = -0.5, \quad J_3 = -0.4,
\end{equation}
give $k\approx 0.87 \pi$, $J_{\mathbf{k}}-J_{\mathbf{k}_0} \approx 0.05$, $J_{0}-J_{2\mathbf{k}} \approx 1.86$, and $J_{\mathbf{k}}-J_{3\mathbf{k}} \approx 1.69$. For this set of parameters, the scenario AF$\leftrightarrow$XY appears if $E<\alpha$ and $D>0.16$ (see Eqs.~\eqref{cond4} and \eqref{cond5}). Ground state energies are plotted in Fig.~\ref{plot1}(c) for
\begin{equation}
\label{Apar4}
D=0.17, \quad E=0.02
\end{equation}
which satisfy these conditions. Eq.~\eqref{hxy} gives $h_{xy} \approx 0.76$.
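Since Eq.~\eqref{Ak} no longer applies once $J_3$ is present, the spiral vector can be found by a brute-force maximization of Eq.~\eqref{AJq2}. A short check of the quoted numbers (Python, purely illustrative):

```python
import math

def J(q, J1=1.0, J2=-0.5, J3=-0.4):
    """Eq. (AJq2) with a third-nearest-neighbor coupling."""
    return -2.0 * (J1 * math.cos(q) + J2 * math.cos(2.0 * q)
                   + J3 * math.cos(3.0 * q))

# brute-force search for the spiral vector k maximizing J(q) on [0, pi]
qs = [i * math.pi / 100000 for i in range(100001)]
k = max(qs, key=J)
# k is close to 0.87*pi; J(k)-J(pi) ~ 0.05; J(0)-J(2k) ~ 1.86
```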
To confirm the analytical results presented in this subsection, we perform numerical simulations using the Monte-Carlo method on chains containing $500$ and $1000$ sites, with $10^6$ and $2 \cdot 10^6$ Monte-Carlo steps. The results obtained are almost independent of these numbers of sites and steps. Numerical calculations quantitatively reproduce with high accuracy the spin orderings and ground state energies depicted in Fig.~\ref{plot1} for all considered sets of parameters. In particular, we get for parameters~\eqref{Apar1} $h_1 \approx 0.61$, $h_2 \approx 1.24$, and $h_{sf} \approx 0.87$ (cf.\ Fig.~\ref{plot1}(a)). For parameters~\eqref{Apar2}, we obtain $h_1 \approx 0.52$ and $h_{sp} \approx 1.07$ (cf.\ Fig.~\ref{plot1}(b)). For parameters listed in Eqs.~\eqref{Apar3} and~\eqref{Apar4}, we get $h_{xy} \approx 0.77$ (cf.\ Fig.~\ref{plot1}(c)).
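As a rough illustration of such simulations, the following sketch implements single-spin Metropolis updates for an anisotropic classical chain. The anisotropy is written here as $D S_x^2 + (D-E) S_y^2$, which makes $z$ the easy and $x$ the hard axis; this convention, as well as the system size, temperature and number of steps below, are assumptions of the sketch and differ from the production runs (500--1000 sites, $10^6$--$2\cdot 10^6$ steps):

```python
import math, random

def energy(S, J1=1.0, J2=0.3, D=0.2, E=0.1):
    """Periodic classical chain with AF J1, J2 exchange and an assumed
    biaxial anisotropy D*Sx^2 + (D-E)*Sy^2 (z easy, x hard axis)."""
    N, e = len(S), 0.0
    for j in range(N):
        sx, sy, sz = S[j]
        for d, Jd in ((1, J1), (2, J2)):
            tx, ty, tz = S[(j + d) % N]
            e += Jd * (sx * tx + sy * ty + sz * tz)
        e += D * sx * sx + (D - E) * sy * sy
    return e

def random_spin(rng):
    """Uniformly distributed unit vector on the sphere."""
    z = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(1.0 - z * z)
    return (r * math.cos(phi), r * math.sin(phi), z)

def metropolis(N=30, steps=20000, T=0.05, seed=7):
    rng = random.Random(seed)
    S = [random_spin(rng) for _ in range(N)]
    e0 = e = energy(S)
    for _ in range(steps):
        j = rng.randrange(N)
        old = S[j]
        S[j] = random_spin(rng)        # propose a fresh random direction
        e_new = energy(S)              # O(N) recompute: fine for a sketch
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / T):
            e = e_new                  # accept
        else:
            S[j] = old                 # reject
    return e0, e
```

At low temperature the run relaxes from a random configuration towards an ordered state with much lower energy.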
\subsection{Two-up-two-down collinear structure at $h=0$}
\label{Onefourth}
The theory above remains valid also if a collinear order is realized at $h=0$ in which spins are arranged along some direction in a two-up-two-down manner $\uparrow \uparrow \downarrow \downarrow$ (the so-called $1/4$-structure). It appears, for instance, in the model considered in Sec.~\ref{Model} at large enough $D-E$ and $J_2>J_1/2$ (as seen from Eqs.~\eqref{AJq} and \eqref{Ak}, the AF ordering discussed above appears at $J_1 > 2 J_2$). All the results of Sec.~\ref{Theor} are applicable in this case if one defines $\mathbf{k}_0$ as the vector of the $1/4$-structure. In particular, the $1/4$-structure is given in the classical spin chain as $S_j=S\sqrt2\cos(k_0R_j+\pi/4)$, where $k_0=\pi/2a$ and $a$ is the lattice spacing.
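That this parametrization indeed produces a two-up-two-down pattern with $|S_j|=S$ is easy to verify (Python, purely illustrative):

```python
import math

S, a = 1.0, 1.0
k0 = math.pi / (2.0 * a)
# S_j = S*sqrt(2)*cos(k0*R_j + pi/4) with R_j = j*a reproduces the
# two-up-two-down motif (here starting as up,down,down,up, i.e. the
# same structure with a shifted origin)
Sj = [S * math.sqrt(2.0) * math.cos(k0 * j * a + math.pi / 4.0)
      for j in range(8)]
pattern = ["up" if s > 0 else "down" for s in Sj]
```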
Our theory can analytically describe some results obtained in model \eqref{ham1} in Ref.~\cite{zh} in the framework of real-space mean-field approach. A complicated magnetic phase diagram of MnWO$_4$ observed experimentally~\cite{arkenbout2006,nojiri2011,mitamura2012} was qualitatively reproduced in Ref.~\cite{zh} with parameters
\begin{equation}
\label{Zhpar}
J_1=1, \quad J_2 = 2, \quad D=0.4, \quad E=0.2, \quad S=5/2
\end{equation}
which yield $\mathbf{k} \approx 0.54 \pi$, $J_{\mathbf{k}}-J_{\mathbf{k}_0} \approx 0.125$, and $J_{\mathbf{k}}-J_{3\mathbf{k}} \approx 1.94$.
We find that if the field is directed along the easy axis, transitions take place from the $1/4$-structure to the YZ state and then to the CAF phase when the field increases. In this case, the CAF state consists of four magnetic sublattices forming two pairs. Within each pair, spins are oriented in the same direction. Spins from different pairs are oriented as in the CAF state of a conventional antiferromagnet. Eqs.~\eqref{h1} and \eqref{h2} give $h_1 \approx 2.7$ and $h_2 \approx 7.4$ with parameters \eqref{Zhpar}. The corresponding numerical results of Ref.~\cite{zh} are approximately $1.4$ and $6.9$. The discrepancy in $h_1$ is attributed to its rather small value, which shows the importance of higher-order terms in $D$ and $E$ neglected above. Taking into account the second-order term (see Eq.~(23) of Ref.~\cite{utesov2018}), we obtain $h_1 \approx 2.3$, in better agreement with Ref.~\cite{zh}. Thus, our theory satisfactorily describes the numerics in this case.
\section{Summary and conclusions}
\label{Conc}
To conclude, we discuss different scenarios of phase transitions in frustrated antiferromagnets with biaxial anisotropy or dipolar forces in a magnetic field applied along the easy axis. The magnetic field is assumed to be not very close to the saturation field. There are well-known scenarios of phase transitions shown in Figs.~\ref{fig1}(a) and \ref{fig1}(b): the conventional spin-flop transition and the flop of the spiral plane at strong and weak easy-axis anisotropy, respectively. We demonstrate that much less studied scenarios, presented in Figs.~\ref{fig1}(c)--(e), can appear at moderate anisotropy; in these scenarios the magnetic field induces first-order transitions from the collinear phase to spiral phases. In particular, the sequence of phase transitions shown in Fig.~\ref{fig1}(c) can be interpreted as a splitting of the spin-flop transition shown in Fig.~\ref{fig1}(a) into two transitions with an intermediate spiral phase. Critical fields of these transitions are given in the mean-field approximation by Eqs.~\eqref{h1} and \eqref{h2}, by Eqs.~\eqref{h1} and \eqref{hsp}, and by Eq.~\eqref{hxy} for the scenarios shown in Figs.~\ref{fig1}(c), \ref{fig1}(d), and \ref{fig1}(e), respectively. The corresponding necessary conditions for the realization of these scenarios are given by Eqs.~\eqref{cond1}--\eqref{cond3} and \eqref{exyz}; Eqs.~\eqref{cond1}, \eqref{cond2}, \eqref{cond4}, and \eqref{cond5}; and Eqs.~\eqref{cond1}, \eqref{cond2}, \eqref{cond4}, and \eqref{cond6}, respectively.
We demonstrate both analytically and numerically (using Monte-Carlo simulations) the appearance of scenarios shown in Figs.~\ref{fig1}(c)--(e) in particular anisotropic Heisenberg models with competing exchange couplings. We show also that the sequence of phase transitions presented in Fig.~\ref{fig1}(c) was found in MnWO$_4$ both experimentally \cite{arkenbout2006,nojiri2011,mitamura2012} and numerically \cite{zh} (in the relevant model) and our theory reproduces the numerical findings even quantitatively.
It should be noted that some of the scenarios found above in the frustrated antiferromagnets can appear also in anisotropic systems with a monoaxial Dzyaloshinskii-Moriya interaction (which produces a spiral ordering at sufficiently weak anisotropy). In particular, we have found using Monte-Carlo simulations that the scenario shown in Fig.~\ref{fig1}(c) can arise. A detailed discussion of these models will be reported elsewhere.
\begin{acknowledgments}
We thank A.O. Sorokin for valuable discussion. The reported study was funded by RFBR according to the research project 18-02-00706.
\end{acknowledgments}
\section{Introduction}
Recent technological advances have made pervasive wearable devices for human health~\cite{cheol2018wearable}, fitness~\cite{scalise2018wearables}, and activity~\cite{hegde2017automatic} monitoring possible. According to~\cite{dias2018wearable}, the wearable devices market currently generates a worldwide revenue of around \$26 billion and is expected to reach almost \$73 billion in 2022.
Monitoring an individual's daily activities enables location-based services~\cite{castro2013taxi}, travel route planning~\cite{liu2014exploiting}, surveillance~\cite{kumari2017increasing}, and human computer interaction~\cite{welch2018wearable}.
\fakepar{Motivation}
Conventional motion sensors such as pedometers, accelerometers, inclinometers, and gyroscopes~\cite{taraldsen2012physical} consume energy for converting the physical phenomenon into an analog signal as well as for converting that analog signal into its digital form, using an \ac{adc} as portrayed in Fig.~\ref{fig:energy_positive_negative}.
Naturally, activity sensors for motion context detection must be portable, which poses a challenge for their power supply.
Batteries are bulky, expensive and need to be either replaced or recharged, both of which require access to the device and human intervention.
Kinetic Energy Harvesters (KEHs) convert ambient vibration, stress or motion into usable electrical energy using piezoelectric, electrostatic or electromagnetic transducers.
They can replace batteries, realizing a truly pervasive, sustainable \ac{iot}, consisting of trillions of tiny and economical sensors/devices without generating tons of toxic waste~\cite{hester2017future}.
The physical effects that are exploited for \ac{keh} are the same as used in sensors that measure dynamic mechanical variables, like acceleration~\cite{khalifa2017harke}.
Using a \ac{keh} transducer simultaneously as a sensor and as an energy source can replace the conventional accelerometer and reduce system complexity, cost and energy consumption.
If the harvested energy is higher than the energy required for sampling the signal with an \ac{adc}, the additional energy may even be used to power other parts of the system, and the sensor is \textit{energy positive}.
\begin{figure}[t!]
\centering
\includegraphics[width=8.5cm, height=2cm]{energy_positive_negative_4.pdf}\vspace{-0.2cm}
\caption{Conventional (energy negative) vs energy positive sensing}
\label{fig:energy_positive_negative}
\vspace{-0.65cm}
\end{figure}
\fakepar{State-of-the-art}
Previous work falls short of solving the key challenges towards \textit{energy positive sensing}.
Most studies~\cite{khalifa2015energy, khalifa2017harke, 8730510, umetsu2019ehaas} explore the sensing potential of the \ac{keh} open circuit voltage, i.e., without actually harvesting energy.
Lan et al.~\cite{lan2017capsense} employ the charging rate of a capacitor connected to a \ac{keh} transducer inside a shoe sole as a sensing signal for human activity recognition.
Their system, however, uses two separate transducers and capacitors for sensing and energy harvesting.
Although it has been shown that sampling the open circuit voltage of a \ac{keh} transducer can save energy compared to using an accelerometer, a second transducer significantly adds to the weight and cost of the system.
The full potential of \ac{keh}-based sensing is only unleashed when the same transducer is used simultaneously as power supply and sensor.
Ma et al.~\cite{ma2018sehs} make a first step towards simultaneous sensing and energy harvesting by sampling the transducer voltage while storing harvested energy in a capacitor.
They describe the resulting \textit{interference problem}, where the transducer voltage is enveloped in the capacitor voltage, affecting the quality of the sensing signal.
They propose a filter to mitigate this effect, however, their proposed filter only considers the charging curve of the capacitor, neglecting the effects of a dynamic load consuming energy from the capacitor.
It is thus not applicable in practical scenarios, where the harvested energy is used to power the system.
\fakepar{Contribution}
In this paper, we present a system architecture for \textit{energy positive sensing}, where a single transducer is used for sensing while simultaneously powering a dynamic load.
We systematically explore various sensing signals in converter-less and converter-based energy harvesting circuits in order to find a high quality sensing signal that is less affected by the interference problem. To evaluate the end-to-end performance of the proposed system, we collect extensive \ac{keh} data from various transport modes, including ferry, train, bus, car, tricycle and pedestrian movement.
We extract the dominant feature set, implement five classifiers and compare the \ac{tmd} accuracy between the sensing signals.
We find that sensing the harvesting current signal in the converter-based design can be energy positive, delivering up to ten times as much power as it consumes for signal acquisition, while offering comparable detection accuracy to the accelerometer signal for most of the considered transport modes.
The key contributions of this paper are summarized below:
\begin{itemize}
\item We present the first complete architecture for sampling the signal from a \ac{keh} transducer, while simultaneously powering a dynamic load from the harvested energy.
\item We systematically explore multiple sensing signals, including current and voltage in two different energy harvesting circuit designs.
\item We compare the classification performance between various \ac{keh} signals and a 3-axis accelerometer signal in a \ac{tmd} case study. We find that although the interference problem affects the pattern of the harvesting voltage signal, the achievable accuracy in motion context detection is not highly affected.
\item We show that the harvesting current signal outperforms all other KEH signals and achieves classification performance on par with the accelerometer signal with two-fold lower energy consumption, while generating up to 10 times as much power as it consumes for signal acquisition.
\end{itemize}
The remainder of the paper is organized as follows. Sections~\ref{System_Architecture},~\ref{Hardware_Development} and~\ref{Transport_Mode_Detection_Algorithm} present the system architecture, the system design, and \ac{tmd} as a case study, respectively. Section~\ref{results} describes the achieved results and Section~\ref{Literature_Review} discusses the state-of-the-art. Finally, Section~\ref{Conclusion_and_Future_Work} concludes the paper.
\section{System Architecture}
\label{System_Architecture}
The principal building blocks of the proposed system architecture are shown in Fig.~\ref{sensing_points}.
A transducer converts the kinetic energy into electrical energy and a rectifier is used to rectify the harvesting AC voltage.
The rectified voltage charges a capacitor either directly or through a DC-DC converter.
The energy from the capacitor is used to power a load, for example, the signal acquisition circuit, microcontroller, or a transceiver.
We detail the main characteristics of the system architecture in the following subsections.
\begin{figure}[t!]
\centering
\includegraphics[width=9cm, height=2.1cm]{sensing_points_1.pdf}
\caption{Architecture of simultaneous sensing and energy harvesting with the availability of multiple sensing points}
\label{sensing_points}
\vspace{-0.65cm}
\end{figure}
\subsection{Simultaneous sensing and energy harvesting}
Due to high source impedance~\cite{kalantarian2015monitoring}, the voltage across a transducer changes dramatically when current flows, i.e., when closing the circuit to extract energy.
When connecting a capacitor to the output of the rectifier, the capacitor voltage envelops the harvesting voltage and changes its pattern compared to the open circuit configuration.
This has been described as the interference problem and previous work proposes a filtering algorithm based on the capacitor voltage to reduce the effect on sensing signal quality~\cite{ma2018sehs}.
Our architecture includes not only a capacitor, but also an intermittently powered load that uses the energy stored on the capacitor, as shown in Fig.~\ref{sensing_points}.
The load reflects the behaviour of a typical batteryless sensor node that switches on when enough charge has been accumulated in a small capacitor to, for example, sample, store, process or transmit the \ac{keh} data.
The load discharges the capacitor and its dynamic behaviour thus also distorts the harvesting voltage waveform, as discussed in Section~\ref{impact_of_energy_harvesting}.
In contrast to previous work~\cite{ma2018sehs}, which uses custom filters to mitigate the effect of the capacitor on the harvesting AC voltage, we explore the potential of other sensing signals, which do not suffer from the interference problem, allowing us to 1) reduce cost in terms of energy, delay and computational complexity by omitting these additional filter stages; and 2) exclude the hard-to-predict effects of dynamic load behaviour on sensing signal quality.
\subsection{Energy positive sensing}
In order to highlight the difference between conventional sensing and \ac{keh}-based sensing, we categorize sensing devices into two classes based on their energy profile; energy negative sensors and \textit{energy positive sensors}.
The key difference between a conventional motion sensor and a \ac{keh} transducer is that the former consumes energy to convert the kinetic energy to an analog signal, whereas the latter generates energy, as shown in Fig.~\ref{fig:energy_positive_negative}.
Both types of sensors require an \ac{adc} to convert the analog signal to its digital form.
Conventional motion sensors (such as accelerometers) are thus always \textit{energy negative} and need to replenish energy regularly for their uninterrupted operation.
\ac{keh}-based sensors, on the other hand, can be energy negative or \textit{energy positive} depending on the amount of harvested energy relative to the energy consumed for acquiring the \ac{keh} signal.
If the harvested energy is higher than the energy required for signal acquisition, it is called \textit{energy positive sensing}.
\begin{figure}[t!]
\centering
\includegraphics[width=9cm, height=3.5cm]{transiently_powered_sensors.pdf}\vspace{-0.3cm}
\caption{Capacitor voltage in intermittently powered sensors}
\label{fig:transiently_powered_sensors}
\vspace{-0.65cm}
\end{figure}
\subsection{Exploring multiple sensing points}
Previous works employ the open circuit AC voltage~\cite{khalifa2017harke, umetsu2019ehaas, 8730510} from the energy harvester or capacitor voltage~\cite{lan2017capsense} for extracting context information. There are various sensing points in the energy harvesting circuit that offer two types of sensing signals i.e., voltage and current which contain context information. Sensing points 1, 2 and 3 in Fig.~\ref{sensing_points} capture the current and voltage signals at the transducer, rectifier and energy storage unit, respectively. We evaluate various \ac{keh} signals by comparing the information content between them, using different designs of the energy harvesting circuit.
\section{System Design}
\label{Hardware_Development}
In this section, we discuss design options for simultaneous sensing and energy harvesting, present our hardware prototypes and analyze the \ac{keh} signals.
\subsection{Hardware designs for \ac{keh} sensing and energy harvesting}
We employ a piezoelectric transducer to convert ambient kinetic energy into electrical energy.
Under mechanical stress, it generates an electric field with alternating polarity~\cite{khalifa2017harke}.
Most previous works on \ac{keh} sensing~\cite{khalifa2015energy, khalifa2017harke, 8730510, umetsu2019ehaas} directly use this open circuit AC voltage as a sensing signal.
To extract usable electrical energy from the transducer, the circuit has to be closed, such that current can flow.
The harvesting current is typically alternating and thus needs to be rectified by, for example, a full bridge rectifier, consisting of four diodes.
\fakepar{Batteryless design}
\label{sec:batteryless_design}
Traditional energy harvesting systems use large rechargeable batteries in order to compensate for variations in the harvested energy~\cite{geissdoerfer2019preact}.
Batteryless devices instead use only a tiny capacitor to accumulate just enough energy to support the largest atomic operation~\cite{gomez2016dynamic}, for example, transmitting a packet over a wireless link.
This works well when the temporal application requirements are well aligned with temporal energy availability, as in \textit{energy positive sensors}.
The sensor only provides useful data, when it also provides energy, allowing to dramatically reduce system cost and size by omitting large batteries.
As the amount of harvested energy is usually too small to support perpetual operation of the device, it resorts to what is known as intermittent execution (shown in Fig.~\ref{fig:transiently_powered_sensors}):
The device remains completely off, while the capacitor accumulates charge from the harvester.
Once the $V_{thr}^{on}$ threshold is reached, the device switches on and executes, quickly draining the capacitor until the $V_{thr}^{off}$ threshold is reached and the device is powered off again.
In contrast to previous work, we discuss the effect of this dynamic load behaviour in the context of the interference problem in Section~\ref{impact_of_energy_harvesting}.
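The charge/discharge cycle sketched in Fig.~\ref{fig:transiently_powered_sensors} can be mimicked with a toy hysteresis model. The charge and discharge rates below are purely illustrative; only the thresholds match the settings used in our measurement campaign:

```python
def simulate_intermittent(v0=2.5, v_on=3.38, v_off=2.18,
                          charge_rate=0.02, discharge_rate=0.10, steps=400):
    """Toy trace of the capacitor voltage under hysteretic on/off control:
    the device stays off while the harvester charges the capacitor, turns
    on at v_on, drains the capacitor while executing, and turns off again
    at v_off."""
    v, on = v0, False
    trace, events = [], 0
    for _ in range(steps):
        if on:
            v -= discharge_rate        # load drains the capacitor
            if v <= v_off:
                on = False
        else:
            v += charge_rate           # harvester charges the capacitor
            if v >= v_on:
                on = True
                events += 1            # one atomic operation starts
        trace.append(v)
    return trace, events
```

The resulting sawtooth-like trace oscillates between the two thresholds, and `events` counts how many atomic operations the harvested energy supported.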
There are two main design options for charging the capacitor which are described in detail below:
\begin{figure}[t!]
\centering
\includegraphics[width=8.5cm, height=6cm]{hardware_sensing_v5.pdf}
\caption{Illustration of hardware prototypes developed for \ac{keh} data collection (a) Open circuit design that collects AC and rectified voltages, and (b) Converter-less and (c) Converter-based designs that sample the current and voltage signals at various sensing points in the circuit}
\label{sensing_harware}
\vspace{-0.65cm}
\end{figure}
\fakepar{Converter-less design}
In a converter-less design, the capacitor is placed in parallel to the rectifier.
Therefore, the voltage across the transducer is given by:
\begin{equation}
v_{AC} = v_{cap}+2\cdot v_d
\label{eq:envel_formation}
\end{equation}
where $v_{cap}$ is the capacitor voltage and $v_d$ is the voltage drop across the corresponding diode in the full wave rectifier.
As long as the open circuit voltage exceeds the capacitor voltage plus two diode drops, current flows, $v_d$ is approximately constant (one diode drop, typically \SI{700}{\milli\volt}), and the voltage across the transducer thus equals the capacitor voltage plus two diode drops.
This may result in low energy yield.
For example, if the capacitor voltage is \SI{3}{\volt} and the input vibration is low, then the open circuit voltage of the transducer may be less than \SI{3}{\volt}.
In this case, no current can flow and thus energy that could have potentially been harvested is wasted.
The dependence between capacitor voltage and transducer voltage has important implications for the sensing signal quality that will be discussed in Section~\ref{impact_of_energy_harvesting} in detail.
\fakepar{Converter-based design}
In a converter-based design, a DC-DC boost-converter is placed between the rectifier and the capacitor.
This allows to optimize the operating point (i.e., harvesting voltage) of the transducer independent of the voltage on the capacitor.
For example, the converter can be configured to regulate the voltage at its input to \SI{100}{\milli\volt} by dynamically controlling the current flow from the transducer.
This allows to extract energy for charging the capacitor from the transducer even under very low motion or vibrations.
The decoupling of the transducer from the capacitor and load has another important effect: while the harvesting voltage is kept constant by the regulator and thus does not contain context information, the current changes approximately linearly with the kinetic energy input, yielding a high quality sensing signal that is not affected by the interference problem~\cite{ma2018sehs}.
In this document, we refer to DC-DC boost converters, DC-DC converters and converters interchangeably.
\begin{figure}[t!]
\centering
\includegraphics[width=9cm, height=2.5cm]{experiment_setup_bike_1.png}\vspace{-0.1cm}
\caption{Experimental setup for data collection using tricycle}
\label{fig:experiment_setup_bike}
\vspace{-0.45cm}
\end{figure}
\subsection{Prototyping and experimental setup}
For our study, we design four different \ac{keh} hardware prototypes for collecting data from three types of \ac{keh} circuits: Open circuit (Fig.~\ref{sensing_harware}a), converter-less (Fig.~\ref{sensing_harware}b) and converter-based (Fig.~\ref{sensing_harware}c). For the converter-less design, we use two prototypes; one measures the voltage signals, the other measures the current signals. Considering these different designs, the three sensing points in our architecture in Fig.~\ref{sensing_points} offer numerous potential sensing signals, ten of which we record and analyze as shown in Fig.~\ref{sensing_harware}.
There is no current flowing in the open circuit design, so we only record the voltage before and after the rectifier.
The voltage at sensing point 2b is the same as at 3b, therefore we do not sample it twice in Fig.~\ref{sensing_harware}b.
Lastly, we cannot sample the AC current in the converter-less and converter-based designs and the AC transducer voltage in the converter-based design due to hardware limitations.
We use a S230-J1FR-1808XB two-layered piezoelectric bending transducer from MID\'E technology\footnote{https://www.mide.com/} in all hardware designs.
All signals are sampled with a 12-bit \ac{adc} at a sampling frequency of \SI{100}{\hertz}.
The first design (see Fig.~\ref{sensing_harware}a) represents the open-circuit configuration and serves as a benchmark for comparison to the state of the art~\cite{khalifa2015energy, khalifa2017harke, umetsu2019ehaas, 8730510}.
The other two designs use a \SI{220}{\micro\farad} capacitor to temporarily store the harvested energy and an intermittently powered load\footnote{In this experiment, we set $V_{thr}^{on} = 3.38 V$ and $V_{thr}^{off} = 2.18 V$} consisting of two \acp{led}, mimicking the behaviour of a batteryless device. The third design (in Fig.~\ref{sensing_harware}c) uses a TI BQ25504 DC-DC boost-converter between the rectifier and the capacitor.
The converter is configured to regulate its input voltage to around \SI{1}{\volt}.
The prototypes are designed as dataloggers with a focus on enabling accurate measurements of signals at all sensing points rather than on optimizing harvesting efficiency.
In order to analyze the characteristics of the available signals, in an initial study, we collect data from a full-sized adult tricycle using the four hardware prototypes as shown in Fig.~\ref{fig:experiment_setup_bike}.
For data collection, the prototypes are placed in a plastic box with a size of \SI{39}{\centi\metre}$\times$\SI{29}{\centi\metre}$\times$\SI{17}{\centi\metre} and the piezoelectric transducers are mounted on a \SI{23}{\centi\metre} long metallic bar mounted inside the box.
We use a block tip mass of \SI{24.62}{\gram}$\pm$0.5\% with each transducer to make it more sensitive to lower frequency vibrations.
During the experiment using the tricycle, the box containing the prototypes is placed in the wire basket behind the saddle as shown in Fig.~\ref{fig:experiment_setup_bike}.
\begin{figure}[t!]
\centering
\includegraphics[width=9.5cm, height=16cm]{KEH_signals_bike_1.pdf}
\caption{Different types of signals from \ac{keh} on a tricycle; the labels indicate the sensing points in Fig.~\ref{sensing_harware}}
\label{fig:keh_signals_bike}
\vspace{-0.5cm}
\end{figure}
\subsection{The interference problem at different sensing points}
\label{impact_of_energy_harvesting}
Most previous works on \ac{keh}-based sensing use the open circuit AC voltage of the transducer as a sensing signal~\cite{khalifa2015energy, khalifa2017harke, 8730510, umetsu2019ehaas}. In contrast, Fig.~\ref{fig:keh_signals_bike} depicts example data traces of all signals collected with our hardware prototypes. The AC voltage shown in Fig.~\ref{fig:keh_signals_bike}a is proportional to the displacement of the tip and thus accurately reflects the excitation of the transducer.
Fig.~\ref{fig:keh_signals_bike}b depicts the voltage after rectification, corresponding to the absolute values of the AC voltage.
When closing the circuit by connecting a capacitor to the output of the rectifier, the voltage on the transducer is enveloped by the voltage on the capacitor as discussed in Sec.~\ref{sec:batteryless_design}.
This has previously been described as the interference problem~\cite{ma2018sehs}.
In this section, we extend the discussion of the interference problem to the effect of a transiently powered load, consuming energy from the capacitor.
Furthermore, we explore various other sensing signals in different energy harvesting designs and discuss how they are affected by the capacitor and load.
\begin{figure}[t!]
\centering
\includegraphics[width=9cm, height=2.75cm]{tricycle_capacitor_envelop_v2.pdf}\vspace{-0.3cm}
\caption{\ac{keh} AC voltage is enveloped in the capacitor voltage}
\label{fig:envelope}
\vspace{-0.65cm}
\end{figure}
\fakepar{Effect of capacitor and load}
The effect of the combination of capacitor and load is illustrated in Fig.~\ref{fig:envelope}.
As expected, the AC voltage is enveloped by the capacitor voltage minus two diode drops (see Eq.~\ref{eq:envel_formation}).
The dashed green graph shows the mirrored capacitor voltage, highlighting that, due to the rectifier, the signal is enveloped in either polarity.
In a converter-less design, the capacitor is connected directly to the output of the rectifier.
Thus, the converter-less rectified voltage shown in Fig.~\ref{fig:keh_signals_bike}d equals the capacitor voltage.
The shape of the envelope is crucially determined by the behaviour of the transiently powered load that is illustrated in Fig.~\ref{fig:transiently_powered_sensors}.
The gradual rise reflects the charging process and the sharp drop in voltage is due to the load being switched on.
Nevertheless, the affected signals may still contain enough information to distinguish between various contexts with reasonable accuracy as described in Section~\ref{results} in detail.
The previous approach of applying filters to compensate for the capacitor charging curve~\cite{ma2018sehs} cannot easily be applied when considering the hard-to-predict load behaviour.
Instead, we seek to explore different sensing signals in the energy harvesting circuit that may be less affected by the interference problem.
\fakepar{Current signals}
Fig.~\ref{fig_interference_effect} shows the harvesting current for converter-less and converter-based designs compared to the rectified harvesting voltage.
The rectified voltage equals the capacitor voltage and thus exhibits the expected \textit{envelope distortion}.
The current signals, in contrast, exhibit the typical waveform of a damped spring mass oscillator, which is a widely used model for a tip-mass loaded piezoelectric transducer~\cite{khalifa2017harke}.
They are approximately proportional to the displacement of the tip mass and thus incorporate details of the underlying physical process.
However, current can only flow once the voltage across the transducer exceeds the voltage at the output of the rectifier plus two diode drops.
Thus, the current is zero when the voltage on the transducer is lower than the \textit{threshold voltage} and the corresponding information is lost.
We call this the \textit{threshold distortion}.
In a converter-less system, the threshold voltage is the varying capacitor voltage.
In a converter-based system, it is the constant, configurable input voltage of the DC-DC converter.
For example, in our measurement campaign, we empirically set the input voltage to \SI{1}{\volt}.
As a result, current starts to flow at lower voltages, resulting in higher energy yield and more context information in the converter-based current signal (Fig.~\ref{fig_interference_effect}c) than in the converter-less signal (Fig.~\ref{fig_interference_effect}b).
Threshold distortion is less critical than envelope distortion as it only affects the part of the signal when the amplitude of the signal is too low for harvesting.
For \textit{energy positive}, batteryless sensing, the transducer must in any case be dimensioned to provide sufficient energy whenever we want to sample the signal.
The envelope distortion instead affects the parts of the voltage signal when energy can be harvested.
In summary, the harvesting current signal is an attractive signal whose sensing potential has not been previously explored in sufficient detail.
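To make the threshold distortion concrete, the following Python sketch models an idealised full-bridge rectifier. The \SI{0.3}{\volt} diode drop, the sinusoidal transducer voltage of \SI{2.5}{\volt} amplitude and the series resistance are illustrative assumptions, not measured values from our prototypes:

```python
import math

def rectifier_current(v_t, v_threshold, v_diode=0.3, r_series=1000.0):
    """Idealised full-bridge rectifier: current flows only while the
    absolute transducer voltage exceeds the threshold voltage plus two
    diode drops (the threshold distortion)."""
    excess = abs(v_t) - (v_threshold + 2 * v_diode)
    return excess / r_series if excess > 0 else 0.0

def sample_currents(v_threshold, amplitude=2.5, n=1000):
    """Sample one vibration period of a sinusoidal transducer voltage."""
    return [rectifier_current(amplitude * math.sin(2 * math.pi * k / n),
                              v_threshold) for k in range(n)]

# Converter-less: the threshold is the (here momentarily high) capacitor
# voltage; converter-based: the fixed 1 V input of the DC-DC converter.
converter_less = sample_currents(v_threshold=1.8)
converter_based = sample_currents(v_threshold=1.0)

nonzero = lambda xs: sum(1 for x in xs if x > 0)
# The lower threshold preserves more of the waveform.
assert nonzero(converter_based) > nonzero(converter_less)
```

The sketch reproduces the qualitative effect in Fig.~\ref{fig_interference_effect}: with a lower threshold, current flows during a larger fraction of each vibration cycle, so more of the underlying waveform survives.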
\begin{figure}[t!]
\centering
\includegraphics[width=9cm, height=5.5cm]{interference_effect.pdf}\vspace{-0.3cm}
\caption{Effect of capacitor voltage on the harvesting current}
\label{fig_interference_effect}
\vspace{-0.65cm}
\end{figure}
\begin{table}[ht]
\centering
\caption{Detail of the collected data using \ac{keh}}
\begin{tabular}{cccc}
\toprule
Transport & Duration with & Duration without & Number of\\
mode & stops (minutes) & stops (minutes) & trials\\
\midrule
\midrule
Ferry & 156 & 93 & 4\\
Train & 125 & 96 & 4\\
Bus & 144 & 76 & 6\\
Car & 98 & 83 & 2 \\
Tricycle & 72 & 59 & 4\\
Pedestrian & 216 & 66 & 8\\
\textbf{Total} & \textbf{811} & \textbf{473} & --\\
\bottomrule
\end{tabular}
\label{tab:data_collection}
\vspace{-0.5cm}
\end{table}
\section{Transport Mode Detection}
\label{Transport_Mode_Detection_Algorithm}
We employ \ac{tmd} as a case study to compare the sensing performance of the considered \ac{keh} signals.
\subsection{Data collection}
Volunteers\footnote{Ethical approval has been granted from CSIRO for carrying out this experiment (Approval number 106/19)} are asked to carry the box (containing the hardware prototypes) while travelling using multiple transport modes including ferry, train, bus, car, tricycle and pedestrian movement.
During transitions between the vehicular transport modes, the volunteers walk as pedestrians, including slow walking, brisk walking, moving upstairs/downstairs and some stop periods.
The data is collected from various transport modes in Brisbane city with variations in seating location, time of the day, subjects and transport routes, with an average duration of 78 minutes from each transport mode as shown in Table~\ref{tab:data_collection}.
For each trial, we use a different vehicle or choose another route.
We also alternate the location of the prototypes within the corresponding vehicle (i.e., front, middle or rear section).
This ensures that the collected data is representative of all types of vibrations experienced in the vehicles.
In order to compare the performance of the proposed \ac{keh}-based architecture with the state-of-the-art, we also collect data using an MPU9250 3-axis accelerometer.
Finally, the volunteers manually record the actual transport mode across the course of the experiments, to serve as ground truth for classification.
We do not use load voltage and current (at points 3b and 2c in Fig.~\ref{sensing_harware}) for sensing as the load is not turned on frequently especially when vibrations are low such as on the ferry or the train.
Similarly, the rectified voltage in Fig.~\ref{sensing_harware}d (at point 1c) is also not used for extracting information as it is fixed to a specific level by the converter and does not contain information.
The remaining five \ac{keh} signals are analyzed in terms of their sensing potential in Section~\ref{results}.
\begin{figure}[t!]
\centering
\includegraphics[width=8cm, height=3.5cm]{proposed_model_v3.pdf}
\caption{The model architecture for transport mode detection}
\label{fig_sensing_model}
\vspace{-0.65cm}
\end{figure}
\begin{table}[ht]
\centering
\caption{The number of features selected using RFE}
\begin{tabular}{cccc}
\toprule
Signal & Initial features & Using RFE & \acs{cv} score\\
\midrule
\midrule
Accelerometer (Acc) & 128 & 36 & 0.95\\
OC-AC-V & 42 & 26 & 0.93\\
OC-REC-V & 42 & 31 & 0.86\\
CL-AC-V & 42 & 26 & 0.92\\
CL-REC-V & 42 & 36 & 0.84\\
CL-C & 42 & 29 & 0.72\\
CB-C & 42 & 9 & 0.93\\
\bottomrule
\end{tabular}
\label{tab:features_overall}
\vspace{-0.25cm}
\end{table}
\subsection{System model}
The proposed system model for \ac{tmd} is depicted in Fig.~\ref{fig_sensing_model}.
A wearable device collects real-time data from various sensing points in the \ac{keh} circuit while travelling on various transport modes.
A server processes the collected data, removes stops/pauses, extracts the dominant feature set for \ac{tmd}, and implements the classification techniques to classify the current transport mode. A computational unit located within the vehicle, along the travel route or a personal digital assistant can serve the purpose of a server.
Below, we explain each component in detail.
\begin{figure}[t]
\centering
\includegraphics[width=9cm, height=4cm]{all_classifiers_bar_plot.pdf}\vspace{-0.3cm}
\caption{\ac{tmd} accuracy of accelerometer and \ac{keh} signals using five classification algorithms (window size = 1 second)}
\label{fig:all_classifiers}
\vspace{-0.40cm}
\end{figure}
\subsubsection{Pre-processing}
First, we convert the \ac{adc} readings into actual voltage and current signals.
When the vehicle is stationary, for example at traffic lights, vibrations are lower than in the moving state, making \ac{tmd} difficult~\cite{8730510}.
Therefore, we detect and remove these stops/pauses based on the average value of the signal~\cite{stockx2014subwayps}.
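A minimal sketch of this stop-removal step is given below; the window length and amplitude threshold are hypothetical illustration values, not the parameters used in our pipeline:

```python
def remove_stops(signal, window=100, threshold=0.05):
    """Drop windows whose mean absolute amplitude falls below a
    threshold -- a simple proxy for stationary periods (e.g. at
    traffic lights), following the average-value criterion."""
    kept = []
    for start in range(0, len(signal), window):
        chunk = signal[start:start + window]
        if chunk and sum(abs(x) for x in chunk) / len(chunk) >= threshold:
            kept.extend(chunk)
    return kept

moving = [0.5, -0.4, 0.6, -0.5] * 25     # high-vibration window
stopped = [0.01, -0.01, 0.02, 0.0] * 25  # near-zero window at a stop
cleaned = remove_stops(moving + stopped, window=100)
assert cleaned == moving  # the stationary window is removed
```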
\subsubsection{Feature selection}
We divide the collected data into equal sized windows with a 50\% overlap and extract multiple time and frequency domain features as described in~\cite{hemminki2013accelerometer, khalifa2017harke, 8730510}.
As individual features can embed varying levels of information content about the transport mode, we employ \ac{rfe} to find the minimal and most significant feature set using \ac{cv}~\cite{guyon2002gene}.
Table~\ref{tab:features_overall} shows the total number of features selected in various types of sensing signals with the corresponding CV score.
It also shows that only 9 of the 42 time and frequency domain features are selected from the converter-based current signal.
These are the common features selected from all types of signals (which include maximum value, minimum value, amplitude range, coefficient of variation, skewness, kurtosis, inter-quartile range, absolute area, and root mean square value).
Note that the converter-based current signal offers a \ac{cv} score comparable to the accelerometer and open circuit AC voltage signals while using a smaller feature set, indicating the rich information content embedded in it.
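For illustration, the nine common features listed above can be computed as follows; this sketch uses only the Python standard library and illustrates the feature definitions rather than our actual extraction code:

```python
import math
from statistics import mean, pstdev, quantiles

def common_features(x):
    """The nine features reported as common to all signal types:
    maximum, minimum, amplitude range, coefficient of variation,
    skewness, kurtosis, inter-quartile range, absolute area and RMS."""
    m, sd = mean(x), pstdev(x)
    q1, _, q3 = quantiles(x, n=4)
    std_moment = lambda p: mean([((v - m) / sd) ** p for v in x])
    return {
        "max": max(x),
        "min": min(x),
        "range": max(x) - min(x),
        "coeff_of_variation": sd / m,
        "skewness": std_moment(3),
        "kurtosis": std_moment(4),
        "iqr": q3 - q1,
        "abs_area": sum(abs(v) for v in x),
        "rms": math.sqrt(mean([v * v for v in x])),
    }

feats = common_features([1.0, 2.0, 2.0, 3.0, 4.0])
assert len(feats) == 9
```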
\subsubsection{Classification}
Five well-known machine learning classifiers are implemented, including \ac{rf}, \ac{dt}, \ac{svm}, \ac{knn} and \ac{nb}. For each classifier, we perform 10-fold cross validation and plot all results averaged with a 95\% confidence interval. Prior to running the classification algorithms, we use \ac{smote}~\cite{chawla2002smote} to handle imbalanced data from the various transport modes and normalise the selected features to zero mean and a standard deviation of one.
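The normalisation step can be sketched as follows (an illustration of z-score standardisation, not our implementation):

```python
from statistics import mean, pstdev

def zscore_normalise(columns):
    """Standardise each selected feature column to zero mean and unit
    standard deviation before training the classifiers."""
    out = []
    for col in columns:
        m, sd = mean(col), pstdev(col)
        out.append([(v - m) / sd for v in col])
    return out

norm = zscore_normalise([[1.0, 2.0, 3.0], [10.0, 20.0, 30.0]])
for col in norm:
    assert abs(mean(col)) < 1e-9 and abs(pstdev(col) - 1.0) < 1e-9
```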
\section{Performance Evaluation}
\label{results}
In this section, we evaluate the performance of the proposed architecture using \ac{tmd} as a case study.
\begin{figure}[ht]
\centering
\includegraphics[width=8.5cm, height=5cm]{Transport_mode_detection_accuracy.pdf}\vspace{-0.3cm}
\caption{Accuracy of \ac{tmd} using accelerometer and \ac{keh} signals with growing window sizes}
\label{fig:Transport_mode_detection_accuracy}
\vspace{-0.40cm}
\end{figure}
\subsection{Detection accuracy of \ac{keh}-based sensing signals}
\label{results_a}
\subsubsection{Different classifiers}
We compare \ac{tmd} accuracies of accelerometer and five \ac{keh}-based sensing signals using the considered classification schemes in Fig.~\ref{fig:all_classifiers}.
The results indicate that the \ac{rf} classifier outperforms other classifiers for all signals including accelerometer.
Therefore, in the rest of the document, we present the results from the \ac{rf} classifier.
For all classifiers, the open circuit rectified voltage achieves lower detection accuracy than the open circuit AC voltage due to the loss of information by inverting the negative part of the signal.
Also, the converter-less current signal achieves lower detection accuracy than the converter-based current signal due to the interference problem of the capacitor voltage in the former as discussed in Section~\ref{impact_of_energy_harvesting}.
Intuitively, although the pattern of converter-less AC voltage is affected due to the interference problem shown in Fig.~\ref{fig:envelope}, we find that its context detection performance (93.95\%) is not highly affected compared to the open circuit AC voltage (94.56\%); however, it provides lower detection accuracy than the accelerometer signal (98.35\%) as depicted in Fig.~\ref{fig:all_classifiers}. Converter-based current signal offers the highest detection accuracy (96.85\%) among all \ac{keh}-based signals which is close to the detection accuracy of the accelerometer signal (98.35\%) for a window size of one second.
This shows that using the converter to decouple the transducer and capacitor allows current to flow during smaller vibrations, ultimately yielding context-rich \ac{keh} signals. It is worth mentioning that any single-axis accelerometer signal offers lower detection accuracy ($Acc_x$: 92.10\%, $Acc_y$: 93.12\%, $Acc_z$: 90.70\%) than the combined 3-axis signal (98.35\%), for a window size of one second, due to the higher information content across spatial dimensions embedded in the 3-axis signal. Therefore, all results presented in this document employ the 3-axis accelerometer signal as the key benchmark, while \ac{keh} signals are single-axis in nature.
\subsubsection{Varying window size}
We now study the impact of window size on the \ac{tmd} accuracy of accelerometer, open circuit AC voltage, converter-less AC voltage and converter-based current, as the four signals that provide highest detection accuracy.
We plot the accuracy for each signal with varying window sizes from \SI{1}{\second} to \SI{10}{\second} in Fig.~\ref{fig:Transport_mode_detection_accuracy}.
It is clearly shown that the \ac{tmd} accuracy increases with the window size for all signals, including the accelerometer and the KEH signals.
Although the open circuit AC voltage signal offers lower accuracy than the accelerometer signal for smaller window sizes, it provides comparable accuracy to the accelerometer signal for window sizes greater than \SI{7}{\second}.
Similarly, with converter-less AC voltage, the detection accuracy increases with the increasing window size as depicted in Fig.~\ref{fig:Transport_mode_detection_accuracy}.
However, it is slightly lower than the open circuit AC voltage for all window sizes due to the interference problem introduced by the energy harvesting circuit.
Nevertheless, the converter-based current signal achieves comparable accuracy to the accelerometer even with a smaller window size (starting from \SI{1}{\second}) as shown in Fig.~\ref{fig:Transport_mode_detection_accuracy}.
Furthermore, for larger window sizes (\SI{10}{\second}), the converter-based current signal provides the same accuracy as that of the accelerometer signal (98.5\%).
It is worth noting that the \ac{keh} converter-based current signal offers high detection accuracy using less features than the conventional 3-axis accelerometer signal as depicted in Table~\ref{tab:features_overall}.
\begin{figure}[t!]
\centering
\includegraphics[width=9cm, height=4.5cm]{cm_ac_voltage_cb_current_v1.png}\vspace{-0.2cm}
\caption{Confusion matrices for \ac{tmd} using (a) \ac{keh} converter-less AC voltage and (b) \ac{keh} converter-based current signals (window size = 1 second)}
\label{Confusion_matrices}
\vspace{-0.65cm}
\end{figure}
\subsubsection{Current signal vs AC voltage}
We now focus on comparing the \ac{keh} current signal with the converter-less AC voltage signal in order to distinguish the considered transport modes.
Fig.~\ref{Confusion_matrices} shows the confusion matrices of both signals, highlighting that converter-less AC voltage signal provides detection accuracy of more than 94\% for most of the transport modes. However, the detection accuracy for trains (87.63\%) and ferries (90.76\%) is the lowest as both of these transport modes have dedicated non-road transport surfaces (i.e., river water and tracks respectively), resulting in similar, low vibration amplitudes.
Furthermore, the lower signal amplitude for these transport modes is more susceptible to noise which makes it harder to differentiate between them.
Similarly, for converter-based current signal, an accuracy of higher than 97\% is achieved for most of the transport modes except for ferries and trains where 92.55\% and 91.94\% accuracies are achieved, respectively.
The reason behind the high detection accuracy of the converter-based current lies in the lower threshold distortion as compared to the converter-less signals (voltage and current) as depicted in Fig.~\ref{fig_interference_effect}.
Other types of signals (like converter-less AC voltage and current) are affected by the charged capacitor as it hinders the flow of current from the transducer especially during lower vibrations. On the other hand, in the converter-based design, the capacitor voltage is decoupled from the transducer voltage and current flows even during lower vibrations which reflects the detailed physical phenomenon and enhances the detection accuracy.
\begin{table}[t!]
\centering
\caption{Harvested power for various transport modes using converter-less and converter-based \ac{keh} circuits}
\begin{tabular}{ccc}
\toprule
\multirow{2}{*}{Transport mode} & \multicolumn{2}{c}{Harvested Power [\si{\micro\watt}]}\\
& Converter-less & Converter-based\\ \midrule
\midrule
Ferry & 0.1 & 0.2 \\
\midrule
Train & 0.06 & 0.1 \\
\midrule
Bus & 3.2 & 20.4 \\
\midrule
Car & 4.0 & 27.8 \\
\midrule
Tricycle & 5.3 & 20.4 \\
\midrule
Pedestrian & 3.7 & 10.5 \\
\midrule
\textbf{Average} & \textbf{2.7} & \textbf{13.2} \\
\bottomrule
\end{tabular}
\label{tab:harvested_energy}
\vspace{-0.35cm}
\end{table}
\subsection{Energy harvesting}
\label{Energy_Harvesting_using_keh}
In order to compare the energy harvesting potential between transport modes and circuit designs, we calculate the average harvesting power by first calculating the stored energy on the capacitor for any point in time, then adding up all positive changes in stored energy and finally dividing by the recording duration.
The results are presented in Table~\ref{tab:harvested_energy}.
The highest power is harvested in the car and on the tricycle, where we observe strong vibrations close to the resonant frequency of the transducer ($\approx$\SI{25}{\hertz}).
On the other hand, due to smooth pathways and lower vibration amplitudes, we record the lowest power on the ferry and the train.
In most cases, we observe significantly higher energy yield with the converter-based design than with the converter-less energy harvesting design.
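The harvested-power computation described above can be sketched as follows; the capacitance and voltage trace are hypothetical values chosen for illustration:

```python
def average_harvested_power(voltages, capacitance, duration_s):
    """Average harvesting power: the energy stored on the capacitor is
    E = 0.5 * C * V^2; sum all positive changes in stored energy over
    the trace and divide by the recording duration."""
    energies = [0.5 * capacitance * v * v for v in voltages]
    gained = sum(max(b - a, 0.0) for a, b in zip(energies, energies[1:]))
    return gained / duration_s

# Hypothetical 100 uF capacitor: charges 1 V -> 2 V, the load drops it
# back to 1 V, then it recharges to 1.5 V over a 10 s recording.
p = average_harvested_power([1.0, 2.0, 1.0, 1.5], 100e-6, 10.0)
expected = (0.5 * 100e-6 * (4 - 1) + 0.5 * 100e-6 * (2.25 - 1)) / 10.0
assert abs(p - expected) < 1e-12
```

Note that only positive changes count: the discharge caused by the load is energy consumption, not harvesting, and must not reduce the harvested-power estimate.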
\subsection{Energy consumption and system costs}
\label{Power_Consumption_Analysis}
In Section~\ref{results_a}, we discovered that the current into the DC-DC converter can be used as a sensing signal for \ac{tmd}, achieving detection accuracy comparable to a 3-axis accelerometer signal.
Previous work found that sampling the voltage of a \ac{keh} transducer consumes orders of magnitude less energy than sampling a state-of-the-art accelerometer~\cite{khalifa2017harke}.
In this section, we show how the harvesting current signal can be accessed in a real system and how this solution compares to the accelerometers in terms of power requirements and costs.
We propose to use a shunt ammeter to convert the current into a voltage that can easily be sampled by the sensor node using an \ac{adc} as shown in Fig.~\ref{fig:keh_current_sensing_amplifier}.
The current through resistor $R_S$ causes a negative voltage drop that is inverted and amplified by the inverting operational amplifier $A$.
By choosing the corresponding resistor values, the amplifier output can be adjusted to match the input range of the \ac{adc}.
For all power calculations, we assume a supply voltage of \SI{3}{\volt}.
Using a low power operational amplifier (e.g., TI LPV521, \SI{350}{\nano\ampere}) and a low value for the shunt resistor, the sum of power consumption and losses is around \SI{2}{\micro\watt} under typical harvesting conditions.
This is more than two orders of magnitude less than the power consumption of the lowest power analog accelerometer (ADXL356, \SI{450}{\micro\watt}) that we could find.
When adding the current for an external, low power \ac{adc} (e.g., ADS7042, $\approx$\SI{700}{\nano\watt}@\SI{100}{\hertz}), the power consumption of the proposed system is still less than \SI{3}{\micro\watt}.
Highly integrated, digital accelerometers achieve significantly lower power consumption than their analog counterparts (e.g., ADXL363, \SI{7.2}{\micro\watt}@\SI{100}{\hertz}) by co-design of the sensor and the signal acquisition chain and duty-cycling according to the configured sampling rate.
This is still more than twice as much as our proposed \ac{keh} current sensor consumes based on discrete, off-the-shelf components.
We expect that integrating our circuit with an optimized signal acquisition chain would further reduce the power consumption.
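The power budget can be cross-checked with a few lines of arithmetic, using the component figures quoted above and an assumed order-of-magnitude allowance of \SI{1}{\micro\watt} for shunt losses (the losses depend on the shunt value and harvesting current, so this figure is illustrative):

```python
# Power budget of the shunt-based current sensing chain at a 3 V supply.
V_SUPPLY = 3.0
amp_power_w = 350e-9 * V_SUPPLY  # TI LPV521: 350 nA quiescent current
adc_power_w = 700e-9             # ADS7042 at ~100 Hz sampling
shunt_losses_w = 1e-6            # assumed order of magnitude

total_w = amp_power_w + adc_power_w + shunt_losses_w
assert total_w < 3e-6            # under 3 uW, as stated in the text
assert 450e-6 / total_w > 100    # >2 orders of magnitude below ADXL356
assert 7.2e-6 / total_w > 2      # less than half of the ADXL363 budget
```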
Fig.~\ref{fig:power_consumption_analysis} compares the harvested and consumed power for accelerometer, open circuit voltage, converter-less voltage and converter-based current.
It shows that the \ac{keh} circuits consume significantly lower energy than the accelerometer for providing the sensing signals.
The accelerometer and \ac{keh} open circuit design consume power from an external source without generating energy.
On the other hand, on average, the converter-less and converter-based designs harvest more energy than required for signal acquisition, leading towards \textit{energy positive sensing}.
\begin{figure}[t!]
\centering
\includegraphics[width=6cm, height=3cm]{keh_current_sensor.png}
\caption{Current sensing mechanism using an amplifier}
\label{fig:keh_current_sensing_amplifier}
\vspace{-0.65cm}
\end{figure}
\begin{figure}[ht]
\vspace{-0.5cm}
\centering
\includegraphics[width=8cm, height=3cm]{power_consumption_analysis.pdf}\vspace{-0.3cm}
\caption{Comparison between the consumed power in signal acquisition and the harvested power using \ac{keh}}
\label{fig:power_consumption_analysis}
\vspace{-0.45cm}
\end{figure}
\fakepar{Costs}
The component costs for the proposed \ac{keh} current sensing circuit, including the amplifier (0.49 USD) and three resistors ($<$0.01 USD) per device in quantities of 1000 is around 0.50 USD. This is less than a third of the price of the cheapest accelerometer in the same quantity (Kionix Inc. KXTC9-2050-FR, 1.54 USD) that we could find.
\subsection{Energy positive sensing: Discussion and analysis}
In order to analyze \textit{energy positive sensing} quantitatively, we define the \ac{apr} as the ratio of harvested power ($P_{har}$) and power required for signal acquisition ($P_{acq}$).
\begin{equation}
APR = \frac{P_{har}}{P_{acq}}
\end{equation}
Energy negative sensing has \ac{apr}$\,<\,$1 and \textit{energy positive} has \ac{apr}$\,>\,$1.
The boundary between the two classes at \ac{apr}\,=\,1 represents \textit{energy neutral sensing}. In Fig.~\ref{overall_bar_plot}, we plot the \ac{apr} over the achieved accuracy of \ac{tmd} for all combinations of transport modes and sensing signals.
Although the accelerometer signal provides the highest detection accuracy, it has zero \ac{apr} due to zero harvested power.
Similarly, \ac{keh}-based sensing in an open circuit configuration offers lower accuracy than the accelerometer signal and provides zero \ac{apr} as no energy is being harvested. Instead both accelerometer and KEH open circuit designs consume energy from an external source for signal acquisition.
Therefore, both of these devices are energy negative for all transport modes.
The transducer voltage in the converter-less design (CL-AC-V) offers \textit{energy positive sensing} with \ac{apr} of four to eight for four out of six considered transport modes with reasonable detection accuracy.
However, the converter-based current signal (CB-C) outperforms all KEH signals and offers \ac{tmd} accuracy comparable to the accelerometer signal with an \ac{apr} of four to ten for four out of six considered transport modes.
In summary, the results show that \textit{energy positive sensing} is possible with both converter-less and converter-based energy harvesting designs for at least two-thirds of the considered transport modes.
Depending on the transport mode, the \ac{keh}-based sensing circuit provides up to ten times as much power as required for signal acquisition while offering detection accuracy close to that of the 3-axis accelerometer.
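As an illustration, classifying the converter-based design by \ac{apr}, using the harvested power from Table~\ref{tab:harvested_energy} and an assumed acquisition cost of \SI{3}{\micro\watt} for the current-sensing chain of Section~\ref{Power_Consumption_Analysis}, recovers the two-thirds figure:

```python
def apr(p_har_uw, p_acq_uw):
    """Acquisition Power Ratio: harvested power over the power
    required for signal acquisition; APR > 1 is energy positive."""
    return p_har_uw / p_acq_uw

# Converter-based harvested power per mode (uW, from the table) and an
# assumed ~3 uW acquisition cost for the current-sensing chain.
harvested = {"Ferry": 0.2, "Train": 0.1, "Bus": 20.4,
             "Car": 27.8, "Tricycle": 20.4, "Pedestrian": 10.5}
P_ACQ = 3.0

energy_positive = [m for m, p in harvested.items() if apr(p, P_ACQ) > 1]
assert len(energy_positive) == 4  # four of the six modes (two-thirds)
```

Only the ferry and train, with their low vibration amplitudes, fall below the energy-neutral boundary under this assumption.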
\begin{figure}[t!]
\centering
\includegraphics[width=8.5cm, height=6cm]{accuracy_scatter_plot.pdf}\vspace{-0.3cm}
\caption{Comparison of various signals in terms of \ac{apr} and \ac{tmd} accuracy (window size = 1 second)}
\label{overall_bar_plot}
\vspace{-0.65cm}
\end{figure}
\section{Related Work}
\label{Literature_Review}
We categorize the literature into three classes based on the utilization of KEH for energy generation, sensing and simultaneous sensing and energy harvesting.
\subsection{KEH as a source of energy}
Many previous works employed \ac{keh} as a source of energy~\cite{magno2018micro,fan2018capturing} to replace the non-rechargeable batteries in \ac{iot} devices and thus enable their widespread deployment. The authors in~\cite{magno2018micro} presented a KEH wearable device which harvests energy from daily human activities, while Fan et al.~\cite{fan2018capturing} employed KEH in a shoe to harvest energy from foot strikes during walking and running. However, the energy harvested from a single tiny KEH transducer is still not enough to power conventional sensors (such as accelerometers) continuously, which pushes researchers to find alternate methods for perpetual and sustainable sensing.
\subsection{KEH as a sensor}
Recently, \ac{keh} has also been used for human and machine context detection in various applications, reducing the conventional sensor-related power consumption~\cite{khalifa2017harke}. The authors in~\cite{khalifa2017harke, kalantarian2015monitoring, xu2018keh, lin2019h2b} used the \ac{keh} open circuit AC voltage for human activity recognition, monitoring eating habits, gait recognition and sensing heartbeats, respectively. As the vibration pattern is distinct for different human activities like walking, running and standing, the energy generated from \ac{keh} contains information about the type of activity being performed. Similarly, Lan et al.~\cite{lan2017capsense} employed a capacitor to store the harvested energy from \ac{keh} and then used the charging rate of the capacitor for human activity recognition.
In order to extract rich context information, multi-source energy harvesters can also be employed. Umetsu et al.~\cite{umetsu2019ehaas} used \ac{seh} and \ac{keh} for room level location detection, where, SEH differentiates between the indoor and outdoor environments while \ac{keh} provides information about the type of human activity e.g., walking, sitting, standing, etc.
\subsection{Simultaneous sensing and energy harvesting with KEH}
Building on the two aspects of KEH, i.e. sensing and energy generation, Ma et al.~\cite{ma2018sehs} proposed a technique to achieve both simultaneously for human gait recognition. However, they used two piezoelectric transducers in the front and rear of a shoe, where the second transducer significantly adds to the weight and costs of the system. Furthermore, their proposed system only considers the charging curve of the capacitor, neglecting the effects of a dynamic load consuming energy from the capacitor. It is thus not applicable in practical scenarios, where
the harvested energy is used to power the system. We address these issues by presenting a system architecture for \textit{energy positive sensing}, where a single transducer is used for sensing while simultaneously powering a dynamic load.
\section{Conclusion and Future Work}
\label{Conclusion_and_Future_Work}
In this paper, we present a scheme to use \ac{keh} simultaneously as a source for energy and information enabling energy positive sensing.
The novelty of this work lies in the exploration of voltage and current signals at various sensing points in the energy harvesting circuit.
In addition to sensing, we utilize the harvested energy from \ac{keh} to power a realistic load.
We present transport mode detection as a case study, design four different KEH prototypes and collect data from six transport systems.
Five classification techniques are implemented on five types of \ac{keh} signals and it is concluded that the harvesting current in a converter-based energy harvesting circuit achieves detection accuracy close to the accelerometer signal with two-fold lower energy consumption.
The results show that energy positive sensing is possible for at least two-thirds of the considered transport modes.
We expect that the proposed scheme will trigger the exploration of energy positive sensing using other transducers as well. In the future, multi-axis \ac{keh} sensing and energy harvesting can be explored to obtain even higher detection accuracy as well as harvested energy.
\bibliographystyle{IEEEtran}
\input{main_v2.bbl}
\end{document}
To the best of our knowledge, this is the first work towards a static analysis framework for Tezos smart contracts.
\textsc{Tezla}\xspace positions itself as an intermediate representation obtained from a Michelson smart contract, the low-level language of Tezos smart contracts.
This representation abstracts the stack usage through the use of a store, easing the adoption of mechanisms and frameworks for program analysis that assume this characteristic, while maintaining the original semantics of the smart contract.
We have presented a case study on how this intermediate representation can be used to implement a static analysis by using \textsc{Tezla}\xspace alongside the \textsc{SoftCheck}\xspace platform.
This has shown how effortlessly one can perform static analysis on Michelson code without forcing developers to use a different language or to implement \textit{ad hoc} static analysis tooling for a stack-based language.
Michelson smart contracts have a mechanism of contract level polymorphism called entrypoints, where a contract can be called with an entrypoint name and an
argument.
This mechanism takes the form of a parameter composed as nesting of \texttt{or} types with entrypoint name annotations.
This parameter is then checked at the top of the contract by a nesting of \texttt{IF\_LEFT} instructions, running the desired entrypoint this way.
This mechanism is optional and transparent to smart contracts without entrypoints.
As such, they are also transparent to \textsc{Tezla}\xspace.
We therefore plan to extend \textsc{Tezla}\xspace in order to deal with entrypoints and generate isolated components for each entrypoint of a smart contract, which will allow us to obtain clearer control-flow graphs and analysis
results.
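For illustration, the dispatch performed by the nested \texttt{IF\_LEFT} instructions can be modelled as follows. This is a Python sketch with hypothetical handler functions, assuming a right-comb nesting of \texttt{or} types; it is not \textsc{Tezla}\xspace code:

```python
from dataclasses import dataclass

# A value of a nested `or` type is a tower of Left/Right constructors.
@dataclass
class Left:
    value: object

@dataclass
class Right:
    value: object

def dispatch(param, entrypoints):
    """Model of nested IF_LEFT dispatch: at each level, Left selects
    the current entrypoint's handler and Right descends into the rest
    of the right-comb of `or` types."""
    for handler in entrypoints[:-1]:
        if isinstance(param, Left):
            return handler(param.value)
        param = param.value  # unwrap Right, try the next entrypoint
    return entrypoints[-1](param)

# Two hypothetical entrypoints of a counter contract.
handlers = [lambda n: ("increment", n), lambda n: ("decrement", n)]
assert dispatch(Left(5), handlers) == ("increment", 5)
assert dispatch(Right(3), handlers) == ("decrement", 3)
```

Generating one isolated component per handler, as proposed above, would replace this single dispatch tower with separate entry nodes in the control-flow graph.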
Future plans include a formal account of the \textsc{Tezla}\xspace resource
analysis in order to formally verify that the semantics (including gas
consumption) of a \textsc{Tezla}\xspace-represented contract are maintained with respect to the
original Michelson code. This will also pave the way for the development of a
platform for principled static analysis of Michelson smart contracts.
We plan to study which properties are of interest so that we can integrate existing tools and algorithms for code optimization, resource usage analysis and security and correctness verification.
Another direction to tackle is the interfacing of \textsc{Tezla}\xspace with other static analysis platforms such as those provided by the MOPSA project \cite{mine-TAPAS18} which, among other abilities, provides a means to integrate static analyses.
\section{Building statics analyses for Tezla smart contracts}
In this section, we present the experiments conducted in order to test and
demonstrate the applicability of the \textsc{Tezla}\xspace intermediate representation to
perform static analysis.
\subsection{SoftCheck}
We build and organise these static analyses upon a generic data-flow analysis platform
called \textsc{SoftCheck}\xspace~{\cite{reisSoftcheck}}.
\textsc{SoftCheck}\xspace provides an internal and intermediate program representation, called \textsc{SCIL}\xspace, rich enough to express high-level as well as low-level imperative programming constructs and simple enough to be adequately translated into CFGs.
\textsc{SoftCheck}\xspace is organised upon a generic monotone
framework~\cite{Kam1977} that is able to extract a set of data-flow equations from (1) a suitable representation of programs and (2) a set of monotone functions, and then to solve them. \textsc{SoftCheck}\xspace is written in \textsc{OCaml} and makes use of functor interfaces to leverage its genericity (see Fig.~\ref{fig:softchek}).
By generic we mean that, given a translation from a programming language to \textsc{SCIL}\xspace, \textsc{SoftCheck}\xspace gives the ability to instantiate its underlying monotone framework by means of a functor interface; all defined static analyses then become automatically available for the given programming language.
On the other hand, once written as a set of properties and monotone functions, a particular static analysis can be incorporated (again, through instantiating a functor) as an available static analysis for all interfaced programming languages.
\textsc{SoftCheck}\xspace offers several standard data-flow analysis such as very busy expressions, available expressions, tainted analysis etc.
We propose in the next sections to detail how we have interfaced \textsc{Tezla}\xspace with \textsc{SCIL}\xspace, how we have designed a simple but useful data-flow analysis within \textsc{SoftCheck}\xspace and how we have tested this analysis on the \textsc{Michelson}\xspace smart contracts running in the \textsc{Tezos}\xspace blockchain.
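To give the flavour of the monotone framework that \textsc{SoftCheck}\xspace instantiates, the following sketch implements a generic worklist solver over CFG edges. It illustrates the classical algorithm only; it is not \textsc{SoftCheck}\xspace's \textsc{OCaml} code, and the toy transfer function and lattice are hypothetical:

```python
def solve(nodes, edges, transfer, join, bottom):
    """Generic worklist solver: given CFG edges, a transfer function
    per node and a join on the abstract lattice, iterate the data-flow
    equations to their least fixpoint."""
    preds = {n: [u for (u, v) in edges if v == n] for n in nodes}
    out = {n: bottom for n in nodes}
    worklist = list(nodes)
    while worklist:
        n = worklist.pop()
        acc = bottom
        for p in preds[n]:
            acc = join(acc, out[p])  # merge incoming facts
        new = transfer(n, acc)
        if new != out[n]:            # fact changed: revisit successors
            out[n] = new
            worklist.extend(v for (u, v) in edges if u == n)
    return out

# Toy reaching-style analysis over sets, with a loop between 2 and 3.
nodes = [1, 2, 3]
edges = [(1, 2), (2, 3), (3, 2)]
transfer = lambda n, s: s | {n}
res = solve(nodes, edges, transfer, lambda a, b: a | b, frozenset())
assert res[3] == {1, 2, 3}
```

Monotonicity of the transfer functions and the finite height of the lattice guarantee termination of the loop.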
\begin{figure}[ht]
\centering
\resizebox{0.7\textwidth}{!}{\input{softcheck.tikz}}
\caption{\textsc{SoftCheck}\xspace~in a picture}%
\label{fig:softchek}
\end{figure}
\subsection{Constructing a Tezla Representation of a Contract}
To obtain the \textsc{Tezla}\xspace~representation of a smart contract, we first developed a
parser to obtain an abstract syntax representation of a \textsc{Michelson}\xspace smart contract.
This parser was implemented in OCaml and Menhir and respects the syntax
described in the Tezos documentation~\cite{michelson}.
It allows us to obtain a data type that fully abstracts the syntax (with the exception of annotations). To improve the integration between these two forms, \textsc{Tezla}\xspace data types were built upon the data types of \textsc{Michelson}\xspace.
Control-flow graphs are a common representation among static analysis tools.
We provide a library for automatic extraction of such representation from any
\textsc{Tezla}\xspace-represented smart contract.
This library is based upon the control-flow generation template present in \textsc{SoftCheck}\xspace. As such, control-flow graphs generated
with this library can be used with \textsc{SoftCheck}\xspace without further work.
To instantiate the control-flow graph generation template, we simply provided the library with a module with functions that describe how control flows between each node.
\subsection{Sign detection: an example analysis}
Here we devise an example of a static analysis for sign detection. The abstract domain consists of the following abstract sign values: 0 (zero), + (positive), - (negative), 0+ (zero or positive), 0- (zero or negative), \(\top \) (don't know)
and \(\bot \) (not a number). These values are organised
according to the lattice on figure~\ref{fig:signLattice}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.2\textwidth]{sign-lattice.pdf}
\caption{Sign lattice.}%
\label{fig:signLattice}
\end{figure}
Using \textsc{SoftCheck}\xspace, we implemented a simple sign detection analysis of numerical values.
By definition, \texttt{nat}s have a lowest precision value
of 0+, while \texttt{int}s can have any value. Every other data type has a sign value of \(\bot \).
This implementation does not propagate information to non-simple types (\texttt{pair}, \texttt{or}, etc.), but it does perform some precision refinements on branching.
To implement such an analysis, we provided \textsc{SoftCheck}\xspace, in
addition to the previously defined \textsc{Tezla}\xspace control-flow graph library, a module that defines how each instruction
impacts the sign value of a variable. Then, using the
integrated solver mechanism based on the monotone framework,
we are able to run this analysis on any \textsc{Tezla}\xspace-represented smart contract.
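As a concrete illustration, the sign domain and two of its operations can be sketched in OCaml as follows. The constructor names are ours, and the sketch includes explicit + and - values, which appear in the analysis results; this is a self-contained illustration rather than the actual \textsc{SoftCheck}\xspace module.

```ocaml
(* A self-contained sketch of the sign abstract domain.
   Constructor names and operation coverage are illustrative. *)

type sign = Bot | Zero | Pos | Neg | ZeroPos | ZeroNeg | Top

(* Least upper bound on the sign lattice. *)
let join a b =
  match (a, b) with
  | Bot, x | x, Bot -> x
  | x, y when x = y -> x
  | Zero, Pos | Pos, Zero | ZeroPos, Zero | Zero, ZeroPos
  | ZeroPos, Pos | Pos, ZeroPos -> ZeroPos
  | Zero, Neg | Neg, Zero | ZeroNeg, Zero | Zero, ZeroNeg
  | ZeroNeg, Neg | Neg, ZeroNeg -> ZeroNeg
  | _ -> Top

(* Abstract multiplication, enough to cover the example contract:
   a nat (0+) multiplied by a negative constant yields 0-. *)
let mul a b =
  match (a, b) with
  | Bot, _ | _, Bot -> Bot
  | Zero, _ | _, Zero -> Zero
  | Pos, Pos | Neg, Neg -> Pos
  | Pos, Neg | Neg, Pos -> Neg
  | ZeroPos, Neg | Neg, ZeroPos -> ZeroNeg
  | ZeroPos, Pos | Pos, ZeroPos -> ZeroPos
  | ZeroNeg, Neg | Neg, ZeroNeg -> ZeroPos
  | ZeroNeg, Pos | Pos, ZeroNeg -> ZeroNeg
  | ZeroPos, ZeroPos | ZeroNeg, ZeroNeg -> ZeroPos
  | ZeroPos, ZeroNeg | ZeroNeg, ZeroPos -> ZeroNeg
  | _ -> Top
```

With this domain, the contract's multiplication of the 0+ parameter by a negative constant is abstracted as `mul ZeroPos Neg`, which yields 0-.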
We now present an example. Figure~\ref{fig:contractSign} shows the code of a smart contract and its \textsc{Tezla}\xspace~representation. This contract
multiplies its parameter by \(-5\) if the parameter is equal to \(0\), or by
\(-2\) otherwise, and stores the result in the storage.
Figure~\ref{fig:contractSignCfg} shows the control-flow graph
representation of that contract.
\begin{figure}[ht]
\centering
\begin{subfigure}[t]{0.4\textwidth}
\begin{verbatim}
parameter nat ;
storage int ;
code { CAR ;
DUP ;
PUSH nat 0 ;
COMPARE ;
EQ ;
IF { PUSH int -5 ; MUL }
{ PUSH int -2 ; MUL } ;
NIL operation ;
PAIR }
\end{verbatim}
\caption{\textsc{Michelson}\xspace code.}%
\label{fig:contractSignMich}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.4\textwidth}
\begin{verbatim}
v0 := CAR parameter_storage;
v1 := DUP v0;
v2 := PUSH nat 0;
v3 := COMPARE v2 v1;
v4 := EQ v3;
IF v4
{
v5 := PUSH int -5;
v6 := MUL v5 v0
}
{
v7 := PUSH int -2;
v8 := MUL v7 v0
};
v9 := phi(v6, v8);
v10 := NIL operation;
\end{verbatim}
\caption{\textsc{Tezla}\xspace code.}%
\label{fig:contractSignTezla}
\end{subfigure}
\caption{Example contract for sign analysis.}%
\label{fig:contractSign}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\textwidth]{sign2.png}
\caption{CFG generated by the \textsc{SoftCheck}\xspace tool.}%
\label{fig:contractSignCfg}
\end{figure}
Running this analysis on the previously mentioned contract
produced the results available in Figure~\ref{fig:contractSignAnalysis}.
In these results we can observe the known sign value of each variable at the exit of each block of the control-flow graph in Figure~\ref{fig:contractSignCfg}.
For brevity purposes, we omitted non-numerical variables from the result.
\begin{figure}[ht]
\centering
\setlength{\columnseprule}{0.25pt}
\begin{multicols}{5}
\begin{lstlisting}[basicstyle=\tiny]
0: {
v0: 0+
}
1: {
v0: 0+,
v1: 0+
}
2: {
v0: 0+,
v1: 0+,
v2: 0
}
3: {
v0: 0+,
v1: 0+,
v2: 0,
v3: 0-
}
4: {
v0: 0+,
v1: 0+,
v2: 0,
v3: 0-
}
5: {
v0: 0+,
v1: 0+,
v2: 0,
v3: 0-
}
6: {
v0: 0,
v1: 0,
v2: 0,
v3: 0-,
v5: -
}
7: {
v0: 0,
v1: 0,
v2: 0,
v3: 0-,
v5: -,
v6: 0
}
8: {
v0: +,
v1: +,
v2: 0,
v3: 0-,
v7: -
}
9: {
v0: +,
v1: +,
v2: 0,
v3: 0-,
v7: -,
v8: -
}
10: {
v0: 0+,
v1: 0+,
v2: 0,
v3: 0-,
v5: -,
v6: 0,
v7: -,
v8: -,
v9: 0-
}
11: {
v0: 0+,
v1: 0+,
v2: 0,
v3: 0-,
v5: -,
v6: 0,
v7: -,
v8: -,
v9: 0-,
v10: 0-
}
12: {
v0: 0+,
v1: 0+,
v2: 0,
v3: 0-,
v5: -,
v6: 0,
v7: -,
v8: -,
v9: 0-,
v10: 0-,
v11: 0-
}
13: {
v0: 0+,
v1: 0+,
v2: 0,
v3: 0-,
v5: -,
v6: 0,
v7: -,
v8: -,
v9: 0-,
v10: 0-,
v11: 0-,
v12: 0
}
14: {
v0: 0+,
v1: 0+,
v2: 0,
v3: 0-,
v5: -,
v6: 0,
v7: -,
v8: -,
v9: 0-,
v10: 0-,
v11: 0-,
v12: 0,
v13: 0+
}
15: {
v0: 0+,
v1: 0+,
v2: 0,
v3: 0-,
v5: -,
v6: 0,
v7: -,
v8: -,
v9: 0-,
v10: 0-,
v11: 0-,
v12: 0,
v13: 0+
}
16: {
v0: 0+,
v1: 0+,
v2: 0,
v3: 0-,
v5: -,
v6: 0,
v7: -,
v8: -,
v9: 0-,
v10: 0-,
v11: 0-,
v12: 0,
v13: 0+
}
17: {
v0: 0+,
v1: 0+,
v2: 0,
v3: 0-,
v5: -,
v6: 0,
v7: -,
v8: -,
v9: +,
v10: +,
v11: +,
v12: 0,
v13: 0+,
v15: top
}
18: {
v0: 0+,
v1: 0+,
v2: 0,
v3: 0-,
v5: -,
v6: 0,
v7: -,
v8: -,
v9: 0-,
v10: 0-,
v11: 0-,
v12: 0,
v13: 0+,
v16: 0+
}
19: {
v0: 0+,
v1: 0+,
v2: 0,
v3: 0-,
v5: -,
v6: 0,
v7: -,
v8: -,
v9: 0-,
v10: 0-,
v11: 0-,
v12: 0,
v13: 0+,
v15: top,
v16: 0+,
v17: top
}
\end{lstlisting}
\end{multicols}
\caption{Generated report for the sign analysis.}%
\label{fig:contractSignAnalysis}
\end{figure}
It is possible to observe from the results that the analysis takes
into account several details. For instance, the sign values of variables of type \texttt{nat} are, by definition, always zero or positive.
The analysis also refines the sign values on conditional branches according to the test. In this case, we can notice that in blocks 6 and 7 (true branch) the sign value of \texttt{v1} must be \texttt{0}, as the test corresponds to \texttt{0 == v1}. Complementarily, in blocks 8 and 9 the value of \texttt{v1} assumes the sign value of \texttt{+}: being a \texttt{nat}, its value must be \texttt{0+}, and we know that it is not zero because the test \texttt{0 == v1} failed.
We can also conclude from the result of this analysis that block 17
(true branch) will never be executed, as the test of that conditional
(\texttt{0 < v11}) will always be false: the sign of \texttt{v11}
is \texttt{0-}, which means it can never be greater than 0.
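The branch refinements discussed above amount to taking a meet with the abstract value implied by the test, or to removing zero from a value when a comparison against 0 fails. A minimal OCaml sketch, with illustrative names and only the cases needed for this example:

```ocaml
(* Branch refinement sketch: on the true branch of `0 == v1` the
   analysis meets v1's value with 0; on the false branch it removes
   0 from v1's value. Names and coverage are illustrative. *)

type sign = Bot | Zero | Pos | Neg | ZeroPos | ZeroNeg | Top

(* Greatest lower bound, restricted to the cases needed here. *)
let meet a b =
  match (a, b) with
  | Top, x | x, Top -> x
  | x, y when x = y -> x
  | Zero, ZeroPos | ZeroPos, Zero
  | Zero, ZeroNeg | ZeroNeg, Zero -> Zero
  | Pos, ZeroPos | ZeroPos, Pos -> Pos
  | Neg, ZeroNeg | ZeroNeg, Neg -> Neg
  | _ -> Bot

(* Drop 0 from an abstract value, used on the branch where a
   comparison against 0 failed. *)
let remove_zero = function
  | Zero -> Bot
  | ZeroPos -> Pos
  | ZeroNeg -> Neg
  | s -> s

(* Refining v1 : 0+ under the test `0 == v1`. *)
let true_branch = meet Zero ZeroPos    (* v1 must be 0 *)
let false_branch = remove_zero ZeroPos (* v1 must be + *)
```

This reproduces the refinement visible in blocks 6--7 (where \texttt{v1} becomes 0) and blocks 8--9 (where \texttt{v1} becomes +).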
Due to the nature of \textsc{Tezla}\xspace, we were able to take advantage of existing tooling,
such as the \textsc{SoftCheck}\xspace~platform, and effortlessly design and run a data-flow
analysis. This enables and eases the development of static analyses that
can be used to verify smart contracts, but also to perform code optimisations,
such as dead code elimination. Albeit simple, the sign analysis can be used to
instrument such a dead code elimination procedure.
\subsection{Experimental Results and Benchmarking}
\textsc{Tezla}\xspace and all the tooling are implemented in OCaml and
are available at~\cite{fresco}.
\textsc{Tezla}\xspace accepts Michelson contracts that are valid according
to the Tezos protocol 006 Carthage. We conducted an experimental
evaluation that consisted in converting a batch of smart contracts to
\textsc{Tezla}\xspace and running the developed analyses on them.
In order to do so, we implemented a tool that allows the extraction
of smart contracts available in the Tezos blockchain. With that
tool, we extracted 142 unique smart contracts. We tested
these unique contracts alongside 21 smart contracts we have implemented ourselves.
We successfully converted all smart contracts, with a coverage of all Michelson instructions except for 9 instructions that were not used in any of these 163 contracts. On those contracts, we ran the available analyses and obtained the benchmarks presented in table~\ref{tab:benchmark}. These experiments were performed on
a machine with an Intel i7--8750H (2.2 GHz) with 6 cores and 32 GB of RAM.
In the absence of an optimisation tool that takes advantage of the information computed by the analysis, the reports produced by the analysis need to be manually inspected. These reports, the source code of the contracts under evaluation, as well as the respective analysis results and other performed static analyses are available at~\cite{tezcheck,ReisTezCheckAnalysisResultsGitLab}.
\begin{table}[ht]
\centering
\begin{tabular}{@{}lllll@{}}
\cmidrule(r){1-2} \cmidrule(l){4-5}
Average time & 0.48 s & & \begin{tabular}[c]{@{}l@{}}Worst-case\\ (number of\\ instructions)\end{tabular} & \begin{tabular}[c]{@{}l@{}}2231\\ (6.08 s)\end{tabular} \\ \cmidrule(r){1-2} \cmidrule(l){4-5}
Worst-case (time) & \begin{tabular}[c]{@{}l@{}}9.87 s\\ (926 instructions)\end{tabular} & & \begin{tabular}[c]{@{}l@{}}Average time\\ per instruction\end{tabular} & 0.0009 s \\ \cmidrule(r){1-2} \cmidrule(l){4-5}
\end{tabular}
\caption{Benchmark results.}
\label{tab:benchmark}
\end{table}
\section{Introduction}
The term ``smart contract'' was proposed by Nick Szabo as a way to formalize and
secure relationships over public networks~\cite{Szabo1997}. In a blockchain,
a smart contract is an application written in some specific language that is
embedded in a transaction (hence the program code is immutable once it is out in
the network). Some examples of smart contracts applications are the management
of agreements between parties without resorting to a third party (escrow) and to
function as a multi-signature account spending requirement. Smart contracts have
the ability to transfer/receive funds to/from users or from other smart contracts and
can interact with other smart contracts.
There have been recent reports of bugs, and consequent attacks, on smart
contracts that have led to losses of millions of dollars worth of assets. One of the
most famous and most costly of these attacks was on the Distributed Autonomous
Organization (DAO), on the Ethereum blockchain. The attacker managed to withdraw
approximately 3.6 million ether from the contract.
Given the fact that a smart contract in a blockchain cannot be updated or patched,
there is an increasing interest in providing tools and mechanisms that guarantee or
promote the correctness of smart contracts and that verify certain properties.
However, current tools and algorithms for program verification, based for example on deductive verification and static analysis, are usually designed for classical store-based
languages in contrast with \textsc{Michelson}\xspace, the smart contract language for the Tezos Blockchain~\cite{Goodman2014,michelson}, which is stack based.
To facilitate the usage of such tools to verify \textsc{Michelson}\xspace~smart
contracts, we present \textsc{Tezla}\xspace, a store-based intermediate representation language for \textsc{Michelson}\xspace, and its respective tooling.
We provide an automated decompiler of \textsc{Michelson}\xspace~smart contracts to \textsc{Tezla}\xspace.
The decompiler preserves the semantics, flow and resource usage of the original smart contract, so that
properties like gas consumption can be faithfully verified at the \textsc{Tezla}\xspace~representation level.
To support our work, we present a case-study of a demo platform for the static analysis of Tezos smart contracts using the \textsc{Tezla}\xspace~intermediate representation, alongside an example analysis.
The paper is structured as follows. In section 2 we introduce the syntax and
semantics of \textsc{Tezla}\xspace. The decompiler mechanism is described in section 3. Section
4 addresses the static analysis platform case-study that targets
\textsc{Tezla}\xspace-represented smart contracts. Finally section 5 concludes with a general
overview of this contribution and future lines of work.
\section{Tezla}
\textsc{Tezla}\xspace aims to facilitate the adoption of existing static analysis tools and algorithms.
As such, \textsc{Tezla}\xspace is an intermediate representation of \textsc{Michelson}\xspace code that uses a store instead of a stack, enforces Static Single Assignment (SSA) form and preserves information about gas consumption. We will see in the next section how such characteristics ease the translation of \textsc{Tezla}\xspace programs into their Control Flow Graph (CFG) forms and the construction of data-flow equations.
Compiled languages (like Albert, LIGO, SmartPy, Lorentz, etc.) also provide a higher-level abstraction over Michelson. However, as it happens with most compiled languages, the produced code may not be as concise or compact as expected, which, in the case of smart contracts, may result in undesired costs. \textsc{Tezla}\xspace was designed to have a tight integration with the Michelson code to be executed, not as a language that compiles to it nor as a higher-level language that eases the writing of \textsc{Michelson}\xspace smart contracts.
In the \textsc{Tezla}\xspace representation, push-like instructions are translated into variable assignments, whereas instructions that consume stack values are transformed into expressions that take as arguments the variables that match the values from the stack. Furthermore, the deconstruction of lists, sets and maps and the unlifting of \texttt{option} and \texttt{or} types, which happen implicitly, are represented through explicit expressions added to \textsc{Tezla}\xspace.
Since the operational effect of stack manipulation is transposed into variable assignments, we also expose in a \textsc{Tezla}\xspace-represented contract the stack manipulations as instructions that act as no-ops in the case of a semantics that does not take resource consumption into account\footnote{This is the case of the semantics presented in this paper.}. In the case of a resource-aware semantics, these instructions will semantically encode this consumption.
The following section describes in detail the process of transforming a \textsc{Michelson}\xspace smart contract to a \textsc{Tezla}\xspace representation.
\subsection{Push-like instructions and stack values consumption}
Instructions that push \(N\) values to the stack are translated to \(N\) variable
assignments of those values. The translation process maintains a \textsc{Michelson}\xspace program stack that associates each stack position to the variable to which that
position value was assigned to. When a stack element is consumed, the
corresponding variable is used to represent the value. A very simple example is provided in fig.~\ref{fig:example1}.
\begin{figure}[ht]
\begin{subfigure}[b]{0.4\textwidth}
\begin{lstlisting}
PUSH nat 5;
PUSH nat 6;
ADD;
\end{lstlisting}
\caption{\textsc{Michelson}\xspace code.}%
\label{fig:example1mich}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\begin{lstlisting}
v1 := PUSH nat 5;
v2 := PUSH nat 6;
v3 := ADD v1 v2;
\end{lstlisting}
\caption{\textsc{Tezla}\xspace code.}%
\label{fig:example1tezla}
\end{subfigure}
\caption{Stack manipulation example.}%
\label{fig:example1}
\end{figure}
The block on figure~\ref{fig:example1mich} is translated to the \textsc{Tezla}\xspace~representation shown in figure~\ref{fig:example1tezla}.
From the previous example, we can also observe that \textsc{Michelson}\xspace~instructions that
consume \(N\) stack values are translated to an expression that consumes those
\(N\) values. Concretely, the instruction \texttt{ADD}, which consumes two values
(say, \texttt{a} and \texttt{b}) from the stack, is translated to \texttt{ADD a b}.
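The translation scheme described in this section can be sketched as a small recursive function over a symbolic stack of variable names. This toy OCaml decompiler handles only push-like and binary instructions and uses its own numbering and argument-order conventions; it illustrates the mechanism, not the actual \textsc{Tezla}\xspace decompiler.

```ocaml
(* Toy stack-to-assignments translation: the decompiler keeps a
   symbolic stack of variable names and emits one assignment per
   value pushed or computed. Illustrative only. *)

type instr =
  | Push of string  (* PUSH <literal>         *)
  | BinOp of string (* ADD, MUL, COMPARE, ... *)

let translate (prog : instr list) : string list =
  let counter = ref (-1) in
  let fresh () = incr counter; Printf.sprintf "v%d" !counter in
  let rec go stack acc = function
    | [] -> List.rev acc
    | Push lit :: rest ->
        let v = fresh () in
        go (v :: stack) (Printf.sprintf "%s := PUSH %s;" v lit :: acc) rest
    | BinOp op :: rest ->
        (match stack with
         | a :: b :: stack' ->
             let v = fresh () in
             go (v :: stack')
               (Printf.sprintf "%s := %s %s %s;" v op a b :: acc) rest
         | _ -> failwith "stack underflow")
  in
  go [] [] prog

(* The stack manipulation example: PUSH nat 5; PUSH nat 6; ADD *)
let example = translate [Push "nat 5"; Push "nat 6"; BinOp "ADD"]
```

Running `translate` on the example program produces one assignment per instruction, with the `ADD` consuming the two variables that represent the top of the symbolic stack.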
\subsection{Branching}
\textsc{Michelson}\xspace~provides developers with branching structures that act on different conditions. As \textsc{Tezla}\xspace~aims at being used as an intermediate representation for static analysis, there are some properties we would like to maintain. One such property is static single assignment form (SSA-form)~\cite{Rosen1988}.
This is guaranteed as \textsc{Tezla}\xspace-represented smart contracts are, by construction, in SSA-form, since each assignment uses new variables.
In order to deal with branching, the \textsc{Tezla}\xspace~representation makes use of \(\phi \)-functions (see~\cite{Rosen1988}) that select between two values depending on the branch.
As an illustration consider the \textsc{Michelson}\xspace~example in figure~\ref{fig:exampleTz1Mich}.
\begin{figure}[ht]
\centering
\begin{subfigure}[t]{0.47\textwidth}
\centering
\begin{lstlisting}
parameter int ;
storage (list int) ;
code { UNPAIR ;
SWAP ;
IF_CONS
{ DUP ; DIP { CONS ; SWAP } ;
ADD ; CONS }
{ NIL int ; SWAP ; CONS } ;
NIL operation ;
PAIR }
\end{lstlisting}
\caption{\textsc{Michelson}\xspace code.}%
\label{fig:exampleTz1Mich}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.4\textwidth}
\centering
\begin{lstlisting}[escapeinside={(*}{*)}]
v0 := CAR parameter_storage;
v1 := CDR parameter_storage;
SWAP;
IF_CONS v1
{
v2 := hd v1;
v3 := tl v1;
v4 := DUP v2;
v5 := CONS v2 v3;
SWAP;
v6 := ADD v4 v0;
v7 := CONS v6 v5
}
{
v8 := NIL int;
SWAP;
v9 := CONS v0 v8
};
v10 := (*\(\phi\)*)(v7, v9);
v11 := NIL operation;
v12 := PAIR v11 v10;
\end{lstlisting}
\caption{\textsc{Tezla}\xspace code.}%
\label{fig:exampleTz1Tezla}
\end{subfigure}
\caption{Branching example.}%
\label{fig:exampleTz1}
\end{figure}
This contract takes an \texttt{int} as parameter and a list of \texttt{int}s
as storage and inserts the sum of the parameter with the head of the list at the
list's head. If the list is empty, it inserts the parameter into the empty list.
Here, each branch of the \texttt{IF\_CONS} instruction will result in a stack
with a list of integers, whose value depends on which branch was executed.
This translates to the \textsc{Tezla}\xspace~representation presented in figure~\ref{fig:exampleTz1Tezla}.
The variable \texttt{v10} will receive its value through a \(\phi \)-function that
returns the value of \texttt{v7} if the true branch is executed, or the value of
\texttt{v9} otherwise.
The \texttt{IF\_CONS} instruction deconstructs a list in the true branch, putting
the head and the tail of the list on top of the stack. From this example, it is
possible to observe that the deconstruction of a list is made explicit through two
variable assignments. This is also the behaviour of the \texttt{IF\_NONE} and
\texttt{IF\_LEFT} instructions, where the unlifting of \texttt{option} and
\texttt{or} types happens explicitly through an assignment.
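One way to realise the \(\phi \)-function mechanism is to translate both branches against copies of the symbolic stack and emit a \(\phi \)-assignment for every position on which the two resulting stacks disagree. The following OCaml sketch of the merge step is our reading of the mechanism (with the counter seeded so that the fresh name matches \texttt{v10} from the example), not the decompiler's literal code.

```ocaml
(* Sketch of phi insertion at a join point: positions where the two
   branch stacks disagree get a fresh variable bound to a phi of the
   two candidates. Illustrative only. *)

(* Seed the counter past the variables already used in the example
   so the first fresh name is v10. *)
let counter = ref 9
let fresh () = incr counter; Printf.sprintf "v%d" !counter

(* Merge two symbolic stacks of equal depth, emitting phi assignments
   for the positions on which the branches disagree. *)
let merge_stacks st_true st_false =
  List.fold_right2
    (fun a b (stack, code) ->
      if a = b then (a :: stack, code)
      else
        let v = fresh () in
        (v :: stack, Printf.sprintf "%s := phi(%s, %s);" v a b :: code))
    st_true st_false ([], [])

(* Branch stacks from the IF_CONS example: the true branch leaves v7
   on top, the false branch leaves v9; v0 is untouched by both. *)
let stack, phis = merge_stacks ["v7"; "v0"] ["v9"; "v0"]
```

The merged stack carries the fresh variable in place of the disagreeing position, exactly as \texttt{v10} replaces \texttt{v7}/\texttt{v9} in the example above.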
\subsection{Loops, maps and iterations}
\textsc{Michelson}\xspace~also provides language constructs for looping and iteration over the
elements of lists, sets and maps. These are treated using the same
\(\phi \)-functions mechanism in order to preserve SSA-form. We can observe this
in the example of fig.~\ref{fig:exampleTz2}.
\begin{figure}[ht]
\centering
\begin{subfigure}[t]{0.5\textwidth}
\centering
\begin{lstlisting}[escapeinside={(*}{*)}]
PUSH nat 0 ;
LEFT nat ;
LOOP_LEFT
{ DUP ;
PUSH nat 100 ;
COMPARE ;
GE ;
IF
{ PUSH nat 1 ;
ADD ; LEFT nat }
{ RIGHT nat } } ;
INT ;
\end{lstlisting}
\caption{\textsc{Michelson}\xspace code.}%
\label{fig:exampleTz2Mich}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.49\textwidth}
\centering
\begin{lstlisting}[escapeinside={(*}{*)}]
v0 := PUSH nat 0;
v1 := LEFT nat v0;
LOOP_LEFT v2 := (*\(\phi\)*)(v1, v12)
{
v3 := unlift_or v2;
v4 := DUP v3;
v5 := PUSH nat 100;
v6 := COMPARE v5 v4;
v7 := GE v6;
IF v7
{
v8 := PUSH nat 1;
v9 := ADD v8 v3;
v10 := LEFT nat v9;
}
{
v11 := RIGHT nat v3;
}
v12 := (*\(\phi\)*)(v10, v11);
}
v13 := unlift_or v2;
v14 := INT v13;
\end{lstlisting}
\caption{\textsc{Tezla}\xspace code.}%
\label{fig:exampleTz2Tezla}
\end{subfigure}
\caption{Loop example.}%
\label{fig:exampleTz2}
\end{figure}
This example uses a \texttt{LOOP\_LEFT} (a loop with an accumulator) to add 1
to a \texttt{nat} (starting with the value 0) until that value becomes greater
than 100, and casts the result to an \texttt{int}.
This example translates to the code presented in fig.~\ref{fig:exampleTz2Tezla}.
Note that the \texttt{LOOP\_LEFT} variable is assigned the value of
\texttt{v1} if it is the first time that the loop condition is checked, or of
\texttt{v12} if the program flow comes from the loop body. Also notice that the
same explicit deconstruction of an \texttt{or} variable is applied here:
\texttt{v3} gets assigned the value of the unlifting of the loop variable at the
beginning of the loop body, and \texttt{v13} after the loop. Similar behaviour applies
to the other looping and iteration instructions.
\subsection{Full example}
We now present a full example of a complete \textsc{Michelson}\xspace~smart contract (figure~\ref{fig:contractexampleMich}).
\begin{figure}[ht]
\centering
\begin{subfigure}[t]{0.47\textwidth} \begin{lstlisting}[escapeinside={(*}{*)}]
parameter (list bool) ;
storage (pair bool (pair nat int)) ;
code { DUP ;
CAR ;
DIP { CDR } ;
DIP { DUP ; CAR ; DIP { CDR } ;
DIP { DUP ; CAR ;
DIP { CDR } } } ;
ITER { AND ;
DUP ;
IF
{ DIP 2
{ PUSH int 1 ;
ADD } }
{ DIP 2
{ PUSH int -1 ;
ADD } } } ;
DIP { PAIR } ;
PAIR ;
NIL operation ;
PAIR }
\end{lstlisting}
\caption{\textsc{Michelson}\xspace code.}%
\label{fig:contractexampleMich}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.45\textwidth}
\begin{lstlisting}[escapeinside={(*}{*)}]
v0 := DUP parameter_storage;
v1 := CAR v0;
v2 := CDR parameter_storage;
v3 := DUP v2;
v4 := CAR v3;
v5 := CDR v2;
v6 := DUP v5;
v7 := CAR v6;
v8 := CDR v5;
ITER v9 := (*\(\phi\)*)(v1, v18)
{
v10 := hd v9;
v11 := AND v10 v4;
v12 := DUP v11;
IF v12
{
v13 := PUSH int 1;
v14 := ADD v13 v8;
}
{
v15 := PUSH int -1;
v16 := ADD v15 v8;
};
v17 := (*\(\phi\)*)(v14, v16);
v18 := tl v9;
};
v19 := PAIR v7 v17;
v20 := PAIR v11 v19;
v21 := NIL operation;
v22 := PAIR v21 v20;
return v22;
\end{lstlisting}
\caption{\textsc{Tezla}\xspace code.}%
\label{fig:contractexampleTezla}
\end{subfigure}
\caption{Example contract.}%
\label{fig:contractexample}
\end{figure}
The contract takes a list of \texttt{bool}s as parameter and iterates over that list. It performs a boolean \texttt{AND} between an element of the list and the previous \texttt{AND} result (the initial value of this accumulator is the \texttt{bool} on the storage). Depending on the result, it either adds 1 or -1 to the \texttt{int} on the storage. The values to be stored are the last \texttt{AND} result, the \texttt{nat} that was previously on the storage (notice that this value is neither changed nor used anywhere else in the program) and the resulting \texttt{int} from the sums performed during the iteration. This contract translates to the \textsc{Tezla}\xspace~code of fig.~\ref{fig:contractexampleTezla}.
In this complete example we can observe that a \textsc{Michelson}\xspace contract has a parameter and storage.
The initial stack of any \textsc{Michelson}\xspace~smart contract is a stack that contains a single pair whose first element is the input parameter and whose second element is the contract storage. As such, we introduce a variable called \texttt{parameter\_storage} that contains the value of that pair.
The final stack of any \textsc{Michelson}\xspace~smart contract is also a stack that contains a single pair whose first element is a list of internal operations that it wants to emit and whose second element is the resulting storage of the smart contract.
We identify the variable containing this pair through the addition of a \texttt{return} instruction.
\section{Related Work}
Albert~\cite{Bernardo2020} is an intermediate language for the development of
Michelson smart contracts. It provides a high-level abstraction of
the stack and of some of the language datatypes, and can be compiled to
Michelson through a compiler written in Coq that targets
Mi-Cho-Coq~\cite{Bernardo2019}, a Coq specification of the Michelson language.
Several high-level languages \cite{Alfour,Andrews,Maurel,DaiLambda,Serokell} that target
Michelson have been developed. Each one presents a different mechanism that
abstracts the low-level stack usage. However, a program analysis tool that
targets one of these languages cannot be easily reused for
programs written in the other languages.
Scilla~\cite{Sergey2018,sergeySaferSmartContract2019} is an intermediate
language that aims to be a translation target of high-level languages for smart
contract development. It introduces a communicating automata-based computational
model that separates the communication and programming aspects of a contract.
The purpose of this language is to serve as a basis representation for program
analysis and verification of smart contracts.
Slither~\cite{Feist2019}, presented in 2019, is a static analysis framework for
Ethereum smart contracts. It transforms a contract into an intermediate
representation called SlithIR, using the Abstract Syntax Tree generated by the
Solidity compiler. This representation also uses an SSA form and a
reduced instruction set in order to facilitate the implementation of program
analyses of smart contracts. However, Slither has no formal semantics, and the
representation is not able to accurately model some low-level information such as gas computations.
Solidifier~\cite{Antonino2020} is a bounded model checker for Ethereum smart
contracts that converts the original source code to Solid, a formalisation of
Solidity that runs on its own execution environment. Solid is translated to
Boogie, an intermediate verification language that is used by the bounded model
checker Corral, which is then used to look for semantic-property violations.
Durieux et~al.~\cite{Durieux2020} presented a review of static analysis tools for Ethereum smart contracts. This work presents an extensive list of 35 tools,
of which 9 respected their inclusion criteria
and were used to test several vulnerabilities on a sample set of 47,587 smart contracts.
John von Neumann's theory of self-reproducing automata
\cite{vonneumann1951,vonneumann1966} is now regarded as one of the
greatest theoretical achievements made in early stages of artificial
life research \cite{marchal1998,sipper1998,mcmullin2000}. Before
working on its specific implementation on cellular automata, von
Neumann sketched a general outline of his self-reproducing automaton
that consists of the following parts \cite{vonneumann1951}:
\begin{description}
\item{$\cal A$:} A universal constructor that constructs a product
$\cal X$ from an instruction tape $\cal I(X)$ that describes how to
construct $\cal X$.
\item{$\cal B$:} A tape copier that duplicates $\cal I(X)$.
\item{$\cal C$:} A controller that dominates $\cal A$ and $\cal B$ and
does the following:
\begin{enumerate}
\item Give $\cal I(X)$ to $\cal A$ and let it construct $\cal X$.
\item Pass $\cal I(X)$ to $\cal B$ and let it duplicate $\cal I(X)$.
\item Attach one copy of $\cal I(X)$ to $\cal X$ and separate $\cal
X+I(X)$ from the rest.
\end{enumerate}
\end{description}
The functions of these parts are symbolically written as
\begin{eqnarray}
&& \cal A + I(X) \to A + I(X) + X , \\
&& \cal B + I(X) \to B + {\rm 2} I(X) , \\
&& \cal (A + B + C) + I(X) \nonumber \\
&& \cal ~~~~~~~ \to ((A + I(X)) + B + C) \nonumber \\
&& \cal ~~~~~~~ \to ((A + I(X) + X) + B + C) \nonumber \\
&& \cal ~~~~~~~ \to (A + (B + I(X)) + C) + X \nonumber \\
&& \cal ~~~~~~~ \to (A + (B + {\rm 2} I(X)) + C) + X \nonumber \\
&& \cal ~~~~~~~ \to (A + B + C) + I(X) + X + I(X) \nonumber \\
&& \cal ~~~~~~~ \to \left\{ (A + B + C) + I(X) \right\}
+ \left\{ X + I(X) \right\} . \label{roleofC}
\end{eqnarray}
Then self-replication can be achieved if one lets $\cal X=D \equiv A+B+C$,
i.e.,
\begin{eqnarray}
\cal D + I(D) &\to&
\cal \left\{ D + I(D) \right\} + \left\{ D + I(D) \right\} . \label{selfrep}
\end{eqnarray}
Figure \ref{neumann} illustrates these notations visually. Note that
for the above conclusion to apply, $\cal D$ must be within the product set
of $\cal A$, which is by no means trivial.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.4\columnwidth]{neumann-automaton.eps}
\end{center}
\caption{Schematic illustration of von Neumann's self-reproducing
automaton. It consists of three parts (universal constructor $\cal A$,
tape copier $\cal B$, and controller $\cal C$; $\cal D \equiv A+B+C$)
and an instruction tape $\cal I(D)$. Since $\cal I(D)$ contains the
information about how to construct the automaton $\cal D$ itself, the
whole system $\cal D+I(D)$ can self-replicate.}
\label{neumann}
\end{figure}
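The fixed-point character of equation (\ref{selfrep}) can be illustrated by a small symbolic program. The following OCaml toy model is ours: it collapses the internal steps of equation (\ref{roleofC}) into a single cycle function and simply assumes the constructor can build anything named on its tape.

```ocaml
(* A toy symbolic rendering of von Neumann's automaton: a machine is
   a name plus an optional instruction tape, and one "cycle" of the
   controller C turns D + I(X) into D + I(X) together with X + I(X).
   This formalisation is only an illustration of equation (4). *)

type tape = I of string (* I(X): description of X *)
type machine = { name : string; tape : tape option }

(* One full cycle of C: construct X from I(X), copy the tape, and
   release X + I(X). The universal constructor is assumed able to
   build anything named on the tape. *)
let cycle (d : machine) : machine * machine =
  match d.tape with
  | None -> failwith "no instruction tape attached"
  | Some (I x) ->
      let offspring = { name = x; tape = Some (I x) } in
      (d, offspring)

(* Self-replication: run the cycle with X = D, i.e. tape I(D). *)
let d = { name = "D"; tape = Some (I "D") }
let parent, child = cycle d
```

With the tape set to \(\cal I(D)\), the offspring is structurally identical to the parent, which is exactly the diagonal choice \(\cal X = D\) made in the text.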
Alan Turing's preceding work on computationally universal machines
\cite{turing1936} gave a hint for von Neumann to develop these
formulations of self-reproducing automata, especially on the idea of
universal constructor $\cal A$. These two kinds of machines apparently
share a similar concept that a universal machine, given an appropriate
finite description, can execute arbitrary tasks specified in the
description. We should note, however, that this similarity has often
been overstated in the literature, leading to some misunderstandings
of von Neumann's original intention, as recently argued by McMullin
\cite{mcmullin2000,mcmullin1993}. The most significant difference
between these two types of universal machines is that the
constructional machine must be made of the same parts that it operates
on, and therefore both the machine and the parts must be embedded in
the same space-time and obey the same ``physical'' rules, while the
computational machine can be separate from the symbols it operates on,
like the Turing machine's head that exists outside its tape. Another
equally important difference is that computational universality is
defined by the ability of {\em computing the behavior of all the other
models of computation}, while the constructional universality is
defined by the ability of {\em constructing all the structures in a
given specific product set}, which has nothing to do with the ability
of computing the behavior of other constructors. The latter issue will
be revisited later.
The aforementioned differences suggest the need for a distinct domain
of research specially dedicated to the issues of machine construction,
pioneered by von Neumann's work on constructional machines but since
left unnamed to date. This would be closely related to computation
theory pioneered by Turing's work, but should be unique by involving
physical interpretation and implementation of production processes and
thereby connecting logic and mathematics to biology and
engineering. Here I propose to call it {\em construction theory}, a
domain of research that focuses on the theoretical aspects of
productive behaviors of natural or artificial systems, including
self-replication, self-repair and other epigenetic processes. There is
a recent resurgence of studies on these topics in artificial life and
other related fields
\cite{freitas2004,zykov2005,buckley2006,ewaschuk2006,suzuki2006,zhang2006}. Like
in computation theory, there are many important problems yet to be
addressed in construction theory, such as identifying the class of
constructible structures with a given set of physical rules; obtaining
necessary/sufficient conditions for a universal constructor to exist
for a given product set; determining whether there is a single {\em
truly universal} construction model that could emulate all other
construction models; etc.
In what follows, we will focus on one particular question regarding
the relationship between computation and construction theories. While
von Neumann's universal constructor was largely inspired by Turing's
universal computer, what the entire self-replicating automaton $\cal
D$ in construction theory would parallel in computation theory
remained unclear to many, perhaps because von Neumann himself did not
detail in his writings how his theoretical model was related to
computation theory. Besides the universal constructor $\cal A$, the
automaton $\cal D$ also includes $\cal B$ that duplicates a given tape
and $\cal C$ that attaches a copy of the duplicated tapes to the
product of $\cal A$. They are the subsystems that von Neumann added to
the automaton in view of self-replication (and subsequent evolutionary
processes). Their counterparts are not present in the design of Turing
machines, and therefore, the entire architecture of self-reproducing
automata has often been considered a heuristic design meaningful only
on the construction side, but not on the computation side.
Here I would like to help readers realize that self-replication in
construction theory actually has a fundamental relationship with the
diagonalization proof of the undecidability of the halting problem in
computation theory. This relationship was already suggested by some
mathematicians and theoretical computer scientists
\cite{myhill1964,rogers1967}; however, it somehow failed to bring a
broader conceptual impact to other related fields, including
theoretical biology, artificial life, and complex systems
research. Specifically, the mathematical description of
self-replication by von Neumann's universal constructors takes a form
identical to that of the circular computational processes of universal
computers that appear in Turing's original proof of the undecidability
of the halting problem. This leads us to a new interpretation of a
self-replicating biological organism as embodying an attempt to solve
the undecidable halting problem for a {\em diagonal} input, not in
computation theory but in the context of von Neumann's construction
theory. This attempt, of course, will never be completed in a finite
time because of the indefinite cascade of
self-computation/construction, which accounts for the undecidability
of the halting problem and also agrees well with the fact that life
has maintained its reproductive activity for an indefinitely long
period of time.
\section{The halting problem}
The halting problem is a well-known decision problem in theoretical
computer science that can be informally described as follows:
\begin{quote}
{\em Given a description of a computer program and an initial input it
receives, determine whether the program eventually finishes
computation and halts on that input.}
\end{quote}
This problem has been one of the most profound issues in computation
theory since 1936 when Turing proved that no general algorithm
exists to solve this problem for any arbitrary programs and inputs
\cite{turing1936}. The conclusion is often paraphrased that the
halting problem is {\em undecidable}.
Turing's proof uses {\em reductio ad absurdum}. A well-known
simplified version takes the following three steps.
First, assume that there is a general algorithm that can solve the
halting problem for any program $p$ and input $i$. This means that
there must be a Turing machine $M$ that always halts for any $p$ and
$i$ and computes the function
\begin{equation}
f(p,i) \equiv
\begin{cases}
1 & \text{if the program $p$ halts on the input $i$,} \\
0 & \text{otherwise.}
\end{cases}
\end{equation}
Second, one can easily derive from this machine another Turing machine
$M'$ whose behavior is modified to compute only {\em diagonal}
components in the $p$-$i$ space, i.e.,
\begin{equation}
f'(p) \equiv f(p,p) . \label{diagonal}
\end{equation}
This machine determines whether the program $p$ halts when its
self-description is given to it as an input. Such self-reference would
be meaningless for most actual computer programs, but would still be
theoretically possible.
Then, finally, one can tweak $M'$ slightly to make yet another machine
$M^*$ that falls into an infinite loop if $f'(p) = 1$. What could
happen if $M^*$ was supplied with its self-description $p(M^*)$? It
eventually halts if $f'(p(M^*)) = f(p(M^*),p(M^*)) = 0$, i.e., if it
does not halt on $p(M^*)$. Or, it loops forever if $f'(p(M^*)) =
f(p(M^*),p(M^*)) = 1$, i.e., if it eventually halts on $p(M^*)$. Both
lead to contradiction. Therefore, the assumption we initially made
must be wrong---there must be no general algorithm to solve the
halting problem.
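The case analysis in this final step can be made concrete with a short sketch (in Python, with hypothetical names of my own choosing): treat the decider's verdict on the diagonal input $p(M^*)$ as a binary value and check that either possible value contradicts the actual behavior of $M^*$.

```python
def verdict_is_consistent(verdict):
    """Check whether a hypothetical halting decider could return `verdict`
    (1 = 'halts', 0 = 'does not halt') on the diagonal input p(M*)."""
    # By construction, M* loops forever exactly when the verdict is 1,
    # and halts immediately when the verdict is 0.
    m_star_actually_halts = (verdict == 0)
    # A correct decider must report the actual behavior.
    correct_verdict = 1 if m_star_actually_halts else 0
    return verdict == correct_verdict

# Neither possible verdict is self-consistent, so no such decider exists.
assert not verdict_is_consistent(0)
assert not verdict_is_consistent(1)
```

The two assertions exhaust the decider's possible outputs, mirroring the two contradictory branches in the proof.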
\section{Turing's original proof}
Here I would like to bring up an informative yet rarely told fact:
Turing himself disliked tricky mathematical treatments such as the
third step above, which introduces an artificial logical inversion
into the mechanism of the machine, so he
intentionally avoided using it in his original proof. Below is a quote
from his original paper \cite[p.246]{turing1936}, which tells us how
unique Turing's thought was and how much emphasis he placed on an
intuitive understanding of mathematical concepts:
\begin{quote}
{\em ``... The simplest and most direct proof of [the fact that there
is no general process for determining whether a given program
continues to write symbols indefinitely] is by showing that, if this
general process exists, then there is a machine which computes
$J$\footnote{A binary sequence whose $n$-th digit is a Boolean {\em
inverse} of the $n$-th digit of the $n$-th computable sequence. If
this sequence is computable, then it must be listed somewhere in the
series of the computable sequences, which however causes a
contradiction because its diagonal element must be both 0 and 1 at the
same time. Therefore this sequence cannot be computable.}. This proof,
although perfectly sound, has the disadvantage that it may leave the
reader with a feeling that ``there must be something wrong''. The
proof which I shall give has not this disadvantage, ...''}\\ (Footnote
added by the author)
\end{quote}
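The diagonal sequence $J$ described in the footnote can be illustrated with a finite table (a sketch only; the four rows stand in for the infinite enumeration of computable binary sequences): flipping the $n$-th digit of the $n$-th row yields a sequence that differs from every row at its diagonal position.

```python
# A finite stand-in for the enumeration of computable binary sequences.
rows = [
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
]

# J's n-th digit is the Boolean inverse of the n-th digit of the n-th row.
J = [1 - rows[n][n] for n in range(len(rows))]

# J differs from every listed row at the diagonal position,
# so it cannot appear anywhere in the table.
assert all(J[n] != rows[n][n] for n in range(len(rows)))
assert J not in rows
```

In the infinite case the same argument shows that $J$, were it computable, would have to appear in the list and simultaneously differ from every entry, which is Turing's contradiction.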
Instead, what he actually did for the proof was summarized in the
following paragraph \cite[p.247]{turing1936}:
\begin{quote}
{\em ``... Now let $K$ be the D.N\footnote{Description Number: An
integer that describes the specifics of a given computational
machine.} of $H$\footnote{A machine that incrementally and indefinitely
computes the diagonal sequence of the infinite matrix made of all the
infinitely long computable sequences enumerated in the order of D.N's
of corresponding machines.}. What does $H$ do in the $K$-th section of
its motion? It must test whether $K$ is satisfactory\footnote{An
integer $N$ is considered satisfactory if the machine whose D.N is $N$
can keep writing symbols indefinitely without falling into a deadlock
(Turing called this property {\em circle-free}).}, giving a verdict
``$s$'' or ``$u$''. Since $K$ is the D.N of $H$ and since $H$ is
circle-free, the verdict cannot be ``$u$''. On the other hand the
verdict cannot be ``$s$''. For if it were, then in the $K$-th section
of its motion $H$ would be bound to compute the first $R(K-1)+1 =
R(K)$\footnote{$R(N)$ denotes how many machines are circle-free within
the range of D.N's up to $N$.} figures of the sequence computed by the
machine with $K$ as its D.N and to write down the $R(K)$-th as a
figure of the sequence computed by $H$. The computation of the first
$R(K)-1$ figures would be carried out all right, but the instructions
for calculating the $R(K)$-th would amount to ``calculate the first
$R(K)$ figures computed by $H$ and write down the $R(K)$-th''. This
$R(K)$-th figure would never be found. {\it I.e.,} $H$ is circular,
contrary both to what we have found in the last paragraph and to the
verdict ``$s$''. Thus both verdicts are impossible and we conclude
that there can be no machine D\footnote{A machine that is assumed
capable of determining whether a given machine is circular or
not. This machine is introduced to construct $H$.}.''}\\ (Footnotes
added by the author)
\end{quote}
In this paragraph, Turing considered the {\em actual behavior} of the
intact $H$ on its self-description $K$, and noticed that what this
machine would need to compute is exactly the situation the machine
itself is in: {\em ``$H$ is looking at its self-description
$K$.''} Such self-reference results in a circular process that never
returns. Therefore, $H$ cannot make any decision on whether
$K$ is satisfactory or not. This contradiction rules out the
possibility of $D$, i.e., a general computational procedure to determine
whether a machine stops writing symbols or not.
\section{Self-replication emerging}
Turing's argument described above is essentially an account of what
would happen if $M'$ in our notation received its
self-description $p(M')$. In this case $M'$ must compute the value of
$f'(p(M')) = f(p(M'),p(M'))$, and hence it would need to compute the
behavior of the machine described in $p(M')$ on the input $p(M')$,
exactly the same circular situation as that appearing in Turing's
proof. Let us use this example in what follows, as it is much simpler
to understand than Turing's original settings.
What kind of computational task would $M'$ be carrying out in this
circular situation? It tries to compute the behavior of $M'+p(M')$,
which tries to compute the behavior of $M'+p(M')$, which tries to
compute the behavior of $M'+p(M')$, ... Interestingly, this chain of
self-computation takes place in the form identical to that of
self-replication in von Neumann's construction theory shown in Eq.\
(\ref{selfrep}), if ``{\em to compute the behavior}'' is read as
``{\em to construct the structure}''. This similarity may be better
understood by noting that the role of $\cal C$ that attaches $\cal
I(X)$ to $\cal X$, shown in the last line of Eq.\ (\ref{roleofC}),
parallels the role of diagonalization in Eq.\ (\ref{diagonal}); {\em
both attempt to apply a copy of the description to the machine
represented by the description.}
Moreover, if one watched how the actual configuration of the tape of
$M'$ changes during such a self-computing process, one would see that
the information about $M'$ {\em actually self-replicates} on the tape
space, with its representation becoming more and more indirect as the
level of self-computation becomes deeper (Fig.\
\ref{replicating}). Turing might have imagined this kind of
self-replicating dynamics of machines when he developed his argument.
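The indefinite cascade can be sketched in a toy model (with hypothetical names; a real universal machine would simulate each inner machine step by step rather than merely recurse): deciding $f'(p(M'))$ requires simulating $M'$ on $p(M')$, which requires the same simulation one level deeper, and so on, with only an artificial cutoff stopping the chain.

```python
def m_prime(description, depth=0, cutoff=5):
    """Toy diagonal machine: to evaluate f'(p) = f(p, p) it must simulate
    the machine described by `description` running on `description`,
    that is, another copy of itself facing the very same task."""
    if depth >= cutoff:   # artificial cutoff; the real cascade never ends
        return depth
    return m_prime(description, depth + 1, cutoff)

# Each level of self-computation spawns the next level.
levels_reached = m_prime("p(M')")
assert levels_reached == 5
```

Each recursive call corresponds to one more nested copy of $M'+p(M')$ appearing on the tape in Fig.\ \ref{replicating}.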
\begin{figure}[tp]
\begin{center}
\includegraphics[width=0.8\columnwidth]{tm-replicating.eps}
\end{center}
\caption{Self-replication of Turing machine $M'$ on the tape
space. Given its own description $p(M')$, it starts an indefinite
cascade of self-computation, where the information about $M'$ actually
self-replicates on the tape. The representation of the computed
machine becomes more and more indirect as the level of
self-computation becomes deeper.}
\label{replicating}
\end{figure}
In view of the similarity between the above two processes, it becomes
clear that von Neumann's design of self-reproducing
automata is by no means just an anomaly in construction
theory. Rather, it correctly reflects the diagonal situation leading
to an infinite self-computation chain of computationally universal
machines, which appears in the proof of the undecidability of the
halting problem presented by Turing.
\section{Related work}
A well-known argument on the computational undecidability related to
self-replication was developed by Cohen \cite{cohen1987}, where he
showed that there is no general algorithm for the detection of
self-replicating computer viruses. The proof is rather simple: If
there were an algorithm, say $S$, that can determine whether a given
computer program is self-replicative, then one could easily create
another, contradictory program that has $S$ built into it and
self-replicates if and only if $S$ classifies the program itself
as non-self-replicative. This is probably the best-known
discussion on the relationship between self-replication and the
undecidable problem so far.
We should note, however, that Cohen's argument suggests that detecting
a computer program that does {\em ``X''} is generally impossible,
where {\em ``X''} could be self-replication but could also be replaced
by any other functions; self-replication is no more than just one of
many possible behaviors of universal machines. In contrast, our
argument discussed in this essay is more fundamental: Universal
machines may fall into undecidable situations {\em because of the
possibility of self-replication (either computation or construction).}
Here self-replication is not just an instance of many possible
behaviors, but is actually the key property that causes the
undecidability of the behavior of universal machines, either
computational or constructional.
Another related work is the theory of self-replicative recursive
functions discussed in recursion theory in the 1960s, where Kleene's
recursion theorem \cite{kleene1938} was applied to prove that there
exist recursive functions (i.e., computer programs) that produce their
own representations as outputs, regardless of given inputs
\cite{myhill1964,rogers1967}. These functions were later implemented
as actual computer programs and named {\em quines}
\cite{hofstadter1979}; writing quines in various programming languages
has been one of the standard amusements in the computer science
community. A common way of creating a quine is to embed a partial
representation of the program in itself and use it twice for creating
a main body of the program and for re-embedding a copy of the
representation into the newly created program. This technique of {\em
quining} is exactly the same as what von Neumann proposed in his
formulation of self-replicating machines.
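The quining technique just described, embedding a partial representation and using it twice, can be seen in a minimal Python quine; the final assertion checks that the program prints exactly its own source text.

```python
import io
import contextlib

# The partial representation `s` is used twice: once as the program body
# and once (via %r) as the re-embedded copy of the representation itself.
s = 's = %r\nprint(s %% s)'
program = s % s                      # the quine's full source text

buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    exec(program)                    # run the quine, capturing its output

assert buf.getvalue() == program + '\n'   # it reproduces itself exactly
```

The role of `s` here parallels the tape description $\cal I(X)$: it is both executed (to build the body) and copied (to be re-embedded in the product).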
There is, however, at least one fundamental difference between these
self-replicating programs in recursion theory and the self-replicating
constructors in construction theory. In the former case, the
computation process always stops after producing a static
representation of the program, with no direct implication derived on
its relevance to the halting problem. In the latter case, on the other
hand, the construction process never stops because the product of
construction, as active as its constructor, starts its own
construction process once fully constructed.
Interestingly, Turing's argument in his proof of the undecidability of
the halting problem considered the {\em latter} case in computation
theory, where the Turing machine is not simply writing its own
representation (as usual quine programs do), but is actually trying to
compute its own dynamic behavior (as illustrated in Fig.\
\ref{replicating}). This point may be well understood by recalling
that a product being constructed in construction theory corresponds
not to symbols being written, but to a {\em computational process
itself}, in computation theory.
\section{Conclusion}
As Turing showed in his proof, when a computational machine tries to
solve the halting problem of its own computation process, it will fall
into a cycle of self-computation that never ends in a finite time. Our
point is that this corresponds exactly to the cycle of
self-replication in construction theory, and that von Neumann's
self-reproducing automaton model rightly captures this feature in its
formulation. The halting problem solver in construction theory lets
the subject machine construct its product and see if it eventually
stops. If it tries to solve the halting problem of its own
construction process, it will start self-replication, and the entire
process never completes in a finite amount of time.
The insight obtained in the above sections provides us with some new
implications about the connections between computation and
construction. Throughout our argument, we saw that the construction of
another machine in construction theory has the same role and meaning
as does the computation of another machine in computation theory. This
correspondence transcends the second difference between computational
and constructional machines we discussed in the Introduction, where I
said:
\begin{quote}
{\em The computational universality is defined by the ability of
computing the behavior of all the other models of computation, while
the constructional universality is defined by the ability of
constructing all the structures in a given specific product set, which
has nothing to do with the ability of computing the behavior of other
constructors.}
\end{quote}
Interestingly, once construction and computation are identified with
each other, these two universalities become very close---if not
exactly the same---so that the universal constructor indeed has the
ability to compute the behavior of all the other constructors {\em by
physically constructing them and letting them do their jobs}. The idea
of such ``constructor-constructors'' is relevant to the realization of
machines with epigenetic dynamics, which will be one of the more
important subjects in construction theory.
The computation-construction correspondence also gives us a unique
view of biology, suggesting that the relationship between parent and
offspring in biological systems is equivalent to the relationship
between the {\em computing} $M'$ and the {\em computed} $M'$ in
computation theory. From a construction-theoretic perspective, a
biological organism is trying to find out the final result of the
construction task written in its genotypic information by executing
its contents. The final product will be immediately found if the
product of the task is a static structure, such as drugs produced by
genetically modified bacteria. But if the product is another active
machine that will attempt to build other products, then the final
result will depend on what this product will do next. Furthermore, if
the product is identical (or sufficiently similar) to the original
organism itself, the situation represents the conventional
parent-offspring relationship, where offspring are a kind of
intermediate product produced during the whole long-standing
construction process.
In this view, the endless chain of self-replication that living
systems are in may be reinterpreted as a parallel to the endless
chain of self-computation that a halting problem solver falls into. In a
sense, we may all be in the process initiated billions of years ago by
a first universal constructor, who just tried to see the final product
of its {\em diagonal} construction.
\section*{Acknowledgments}
I would like to thank William R.\ Buckley for his continuous
encouragement and helpful suggestions, and also four anonymous
reviewers who gave me very constructive and insightful comments that
significantly improved the quality and correctness of the ideas
presented here.
\newpage
\section{Introduction}\lb{intro}
Accurate knowledge of the orbital motions of the major bodies of our solar system \citep{2007SoSyR..41..265K} has historically played a fundamental role in putting to the test alternative theories of gravity to those that had been deemed as established from time to time. Just think of the then-anomalous perihelion precession of Mercury \citep{LeVer1859} and its successful explanation by \citet{1915SPAW...47..831E} with his newly born general theory of relativity (GTR); for recent overviews of its status and perspectives one century after its publication, see, e.g., \citet{2016Univ....2...23D} and references therein. Moreover, the most accurate tests of GTR have been performed so far just in the solar system arena, although it can probe only its weak-field and slow-motion approximation. Indeed, binary pulsar systems, representing the most direct competitors in terms of the obtainable accuracy on the validity of GTR, currently allow one to reach the $5\times 10^{-4}$ level \citep{2016IJMPD..2530029K}, which is about one order of magnitude less accurate than the most recent solar system-based results \citep{2014CeMDA.119..237P,2015CeMDA.123..325F,2017AJ....153..121P,2018NatureG}. On the other hand, as far as GTR itself is concerned, not all of its features of the motion of order $\mathcal{O}\ton{c^{-2}}$, where $c$ is the speed of light in vacuum, have yet been tested with planetary motions. Indeed, a novel GTR-induced $N$-body effect was recently predicted by \citet{2018PhRvL.120s1101W}, while the gravitomagnetic Lense$-$Thirring orbital precessions due to the Sun's angular momentum, $\mathbf{S}$ \citep{2011Ap&SS.331..351I}, have escaped detection so far due to their minuteness; both of them may become measurable in the next few years by means of the Hermean orbital precessions when the data from the ongoing spacecraft-based \textit{BepiColombo} mission to Mercury are finally collected and analyzed. 
Furthermore, there are currently several modified models of gravity, put forth mainly to cope with issues arisen at galactic (dark matter) and cosmological (dark energy) scales, which, as fortunate by-products, predict effects on the orbital motion of a test particle which could be measured or, at least, constrained in our solar system; see, e.g., \citet{2003PhRvD..67f4002L}, \citet{2009MNRAS.399..474M},
\citet{2011MNRAS.412.2530B},
\citet{2012PhRvD..86d4002G},
\citet{2012CQGra..29w5027H},
\citet{2012MNRAS.426..673M},
\citet{2014PhRvD..89j2002H},
\citet{2017PhLB..769..281L}, and
\citet{2018CQGra..35qLT01W}.
Last but not least, the location of the recently hypothesized new, remote planet of the solar system \citep{2016AJ....151...22B,2016ApJ...824L..23B}, provisionally known as Planet Nine or Telisto, can be effectively constrained by means of the orbital precessions of the other known major bodies of the solar system \citep{2016A&A...587L...8F,2017Ap&SS.362...11I}.
During the last 15 years or so, two independent teams of astronomers, led by E.\,V. Pitjeva and A. Fienga, engaged in the production of more and more accurate planetary ephemerides (Ephemeris of Planets and the Moon (EPM) and Int\'{e}grateur Num\'{e}rique Plan\'{e}taire de l'Observatoire de Paris (INPOP), respectively), have determined increasingly accurate supplementary perihelion\footnote{\citet{2011CeMDA.111..363F} also released supplementary precessions $\Delta\dot\Omega$ of the nodes determined with the INPOP10a ephemerides.} precessions, $\Delta\dot\varpi$, of all of the planets of the solar system by processing increasingly extensive and precise data records of all types. Such supplementary rates are usually estimated by confronting, mainly in a least-square sense, a suite of models, accurate to the first post-Newtonian order $\mathcal{O}\ton{c^{-2}}$, describing the dynamics\footnote{Until the recent advent of the EPM2017 ephemerides \citep{2018AstL...44..554P}, the post-Newtonian gravitomagnetic field of the Sun had never been modeled, with the exception of \citet{2017AJ....153..121P}}
of the major and of most of the minor bodies of the solar system like the asteroids and the trans-Neptunian Objects (TNOs), the propagation of the electromagnetic waves and the functioning of the measurement devices (spacecraft transponders, etc.) with long data records covering about the last century or so. Thus, in principle, $\Delta\dot\varpi$ accounts for any unmodeled and mismodeled features of motion induced, e.g., by some putative exotic interaction of gravitational origin. However, the signatures of the latter ones may have been somewhat removed in the data reduction procedure, having been partly absorbed in the estimated values of, say, the initial state vectors \citep{2012CQGra..29w5027H}. Thus, in some cases which, however, cannot be established a priori, the bounds inferred by a straightforward comparison of $\Delta\dot\varpi$ to the corresponding theoretical perihelion precessions $\dot\varpi_\textrm{th}$ predicted by the dynamical models from time to time of interest may, perhaps, be too optimistically tight, at least to a certain extent. As such, dedicated (and time-consuming) analyses performed by reprocessing the same data records by explicitly modeling the dynamical features under consideration should be performed, and the correlations among the estimated parameters in the resulting covariance matrix should be inspected.
Such a task would be unrealistic to carry out every time one wants to test this or that model, also because it requires specific skills that, basically, only the astronomers responsible for the generation of the planetary ephemerides, whose priorities are often different, possess.
Be that as it may, the corrections $\Delta\dot\varpi$ to the standard perihelion rates, which, so far, have always been statistically compatible with zero, have long been used by researchers all over the world to put constraints on a variety of modified models of gravity and other dynamical features of motion just by straightforwardly comparing them to theoretical precessions; to fully realize the extent of such a practice, just consult the citation records of, say, \citet{2011CeMDA.111..363F} and \citet{2013MNRAS.432.3431P} in the SAO/NASA ADS database.
\section{The Uncertainties in the Planetary Orbital Rates of Change from the EPM2017 Ephemerides}
Recently, \citet{2018AstL...44..554P} released the EPM2017 ephemerides\footnote{See also http://iaaras.ru/en/dept/ephemeris/epm/2017/ on the Internet.} which, among other things, also include the Lense$-$Thirring field of the Sun in their dynamical models and rely upon the data collected by the spacecraft MESSENGER at Mercury (2011-2015). For some reason, \citet{2018AstL...44..554P} did not provide updated values of the supplementary perihelion advances $\Delta\dot\varpi$, limiting themselves to yielding the statistical, formal uncertainties in the estimated values of the semimajor axes $a$ and of the nonsingular orbital elements $h,~k,~p$ and $q$ of all the planets along with Pluto in their Table~3. In view of the previously outlined importance, in Table~\ref{tavola1} I tentatively compiled the formal uncertainties in the long-term rates of change of the Keplerian orbital elements $a,~e,~I,~\Omega$ and $\varpi$ of the eight planets of the solar system and of Pluto as follows.
\begin{table}[!htb]
\begin{center}
\caption{Formal Uncertainties $\upsigma_{\dot a},~\upsigma_{\dot e},~\upsigma_{\dot I},~\upsigma_{\dot\Omega},~\upsigma_{\dot\varpi}$ in the Secular Rates of Change of the Semimajor Axis $a$, Eccentricity $e$, Inclination $I$, Longitude of the Ascending Node $\Omega$, and Longitude of Perihelion $\varpi$ of the Planets of Our Solar System Tentatively Computed from the Formal Errors in the Nonsingular Orbital Elements Listed in Table~3 of \citet{2018AstL...44..554P} and the Temporal Lengths of the Data Records for Each Planet Listed in Table~2 and Figure~2 of Pitjeva \& Pitjev\,(2018; See the Text for Details).
}\lb{tavola1}
\begin{tabular}{cccccccccc}
\hline
& Mercury & Venus & Earth & Mars & Jupiter & Saturn & Uranus & Neptune & Pluto \\
\hline
$\upsigma_{\dot a}$ & $0.003$ & $0.092$ & $0.062$ & $0.099$ & $4650$ & $16.936$ & $31630.3$ & $288035$ & $790006$\\
$\upsigma_{\dot e}$ & $0.0006$ & $0.0021$ & $0.0028$ & $0.0002$ & $2.016$ & $0.0023$ & $2.732$ & $10.696$ & $15.201$\\
$\upsigma_{\dot I}$ & $0.003$ & $0.050$ & $-$ & $0.002$ & $20.41$ & $0.063$ & $3.827$ & $4.733$ & $3.601$\\
$\upsigma_{\dot\Omega}$ & $0.024$ & $0.862$ & $-$ & $0.055$ & $959.1$ & $1.806$ & $269.177$ & $147.214$ & $4.917$\\
$\upsigma_{\dot\varpi}$ & $0.008$ & $0.315$ & $0.033$ & $0.003$ & $33.9$ & $0.067$ & $47.998$ & $1289.26$ & $60.810$\\
\hline
\end{tabular}
\begin{tabular}{>{\RaggedRight}p{\linewidth}}
\textbf{Note.} For the Earth, a spacecraft-based data record 21 yr long was assumed (see the text). The actual uncertainties may be up to one order of magnitude larger. The units are metres per century $\ton{\textrm{m}\,\textrm{cty}^{-1}}$ for $\upsigma_{\dot a}$ and milliarcseconds per century $\ton{\textrm{mas}\,\textrm{cty}^{-1}}$ for $\upsigma_{\dot e}$,\,$\upsigma_{\dot I}$,\,$\upsigma_{\dot\Omega}$,\,$\upsigma_{\dot \varpi}$.
The mean ecliptic and equinox at J2000.0 were used for the computation of $\upsigma_{\dot e}$,\,$\upsigma_{\dot I}$,\,$\upsigma_{\dot\Omega}$,\,$\upsigma_{\dot\varpi}$.
\end{tabular}
\end{center}
\end{table}
First, analytical expressions for $e,~I,~\Omega$ and $\varpi$ as functions of the nonsingular elements $h = e\sin \varpi,~k = e\cos\varpi,~p =\sin I\sin\Omega$ and $q = \sin I\cos\Omega$ were calculated. Then, they were differentiated with respect to $h,~k,~p$ and $q$ in order to calculate the errors in the root-sum-square fashion as, say, $\upsigma_I = \sqrt{ \ton{\partial I/\partial p}^2 \upsigma_p^2 + \ton{\partial I/\partial q}^2 \upsigma_q^2 }$, etc., where
$\upsigma_h,~\upsigma_k,~\upsigma_p,~\upsigma_q$ are the formal errors quoted in Table~3 of \citet{2018AstL...44..554P}.
Finally, the ratios of the previously computed errors $\upsigma_I,~\upsigma_e,~\upsigma_\Omega,~\upsigma_\varpi$ and of $a$ as per Table~3 of \citet{2018AstL...44..554P} to the lengths $\Delta t$ of the data records listed in Table~2 of \citet{2018AstL...44..554P} were taken for each planet, with some exceptions explained below for Venus and Jupiter for which no spacecraft-based data records spanning decades are available.
As pointed out by \citet{2018AstL...44..554P} themselves, the actual uncertainties may be up to one order of magnitude larger.
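The root-sum-square propagation just described can be reproduced with a short script (a sketch with illustrative, hypothetical input values, not the actual entries of Table~3 of Pitjeva \& Pitjev 2018). From $e=\sqrt{h^2+k^2}$, $\varpi=\arctan(h/k)$, $\sin I=\sqrt{p^2+q^2}$, and $\Omega=\arctan(p/q)$, the partial derivatives yield closed-form error combinations:

```python
import math

def element_sigmas(h, k, p, q, s_h, s_k, s_p, s_q):
    """Root-sum-square propagation of formal errors in the nonsingular
    elements (h, k, p, q) to the elements (e, varpi, I, Omega)."""
    e = math.hypot(h, k)                 # e = sqrt(h^2 + k^2)
    s = math.hypot(p, q)                 # s = sin(I)
    cI = math.sqrt(1.0 - s * s)          # cos(I)
    sig_e = math.hypot(h * s_h, k * s_k) / e
    sig_varpi = math.hypot(k * s_h, h * s_k) / e**2   # varpi = atan2(h, k)
    sig_I = math.hypot(p * s_p, q * s_q) / (s * cI)   # I = asin(s)
    sig_Omega = math.hypot(q * s_p, p * s_q) / s**2   # Omega = atan2(p, q)
    return sig_e, sig_varpi, sig_I, sig_Omega

# Illustrative (hypothetical) elements with equal formal errors of 1e-9 rad.
h, k, p, q = 0.2, 0.1, 0.05, 0.08
sig = 1e-9
sig_e, sig_varpi, sig_I, sig_Omega = element_sigmas(h, k, p, q,
                                                    sig, sig, sig, sig)

# Uncertainty in the secular rate: divide by the data span,
# e.g. 21 yr = 0.21 cty for the Earth.
sig_varpi_dot = sig_varpi / 0.21
```

With equal input errors $\upsigma$, the expressions reduce to $\upsigma_e=\upsigma$, $\upsigma_\varpi=\upsigma/e$, $\upsigma_I=\upsigma/\cos I$, and $\upsigma_\Omega=\upsigma/\sin I$, which provides a quick consistency check.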
As far as the Euler-type angles $I,~\Omega$ and $\varpi$ determining the orientation of the orbit in space are concerned, the inclination $I$ exhibits the most accurate precessions whose uncertainties may be as little as $\simeq \upmu\textrm{as~cty}^{-1}$ for Mercury and Mars, while for Saturn it is at the $\simeq 10~\upmu\textrm{as~cty}^{-1}$ level. The perihelion precessions are, essentially, at the same level of accuracy. The uncertainties in the rates of change of the nodes of Mercury and Mars are $\simeq 10~\upmu\textrm{as~cty}^{-1}$, while for Saturn it is of the order of $\simeq 1~\textrm{mas~cty}^{-1}$.
The present approach was tested with the available information from the EPM2011 ephemerides about the perihelia of all the planets apart from Uranus, Neptune, and Pluto. Indeed, Table~3 of \citet{2018AstL...44..554P} displays the formal errors of $a$ and the nonsingular elements also for such earlier ephemerides; Table~4 of \citet{2013MNRAS.432.3431P} explicitly releases the EPM2011-based supplementary perihelion precessions $\Delta\dot\varpi$ along with their uncertainties (in mas yr$^{-1}$), while the data intervals used are quoted in Table~1 and Table~3 of \citet{2013MNRAS.432.3431P}. Thus, it is possible to apply our approach to the EPM2011 uncertainties of Table~3 of \citet{2018AstL...44..554P} with the temporal intervals of Table~1 and Table~3 of \citet{2013MNRAS.432.3431P} in order to calculate our own uncertainties, $\upsigma_{\dot\varpi}$, in the perihelion precessions and compare them with those in Table~4 of \citet{2013MNRAS.432.3431P}.
The resulting agreement is good as long as the temporal intervals, $\Delta t$, with which the rates of change are to be constructed are chosen wisely. For Venus, our strategy is able to reproduce the uncertainty $\upsigma_{\dot\varpi}$ listed in Table~4 of \citet{2013MNRAS.432.3431P} provided that the Magellan or the Venus Express (VEX) data intervals covering $\Delta t = 3-4~\textrm{yr}$ reported in Table~1 of \citet{2013MNRAS.432.3431P} are adopted. Thus, I followed the same approach with the EPM2017 by dividing the computed uncertainty in the orbital elements of Venus by the VEX temporal interval of Figure~2 of \citet{2018AstL...44..554P} which is 7 yr long.
As far as the Earth is concerned, it turns out that, in order to obtain the same uncertainty $\upsigma_{\dot\varpi}$ published in \citet{2013MNRAS.432.3431P}, a time span of $\Delta t = 15~\textrm{yr}$ should be adopted. Thus, when using the uncertainties for EPM2017 in order to compile Table~\ref{tavola1}, a data record of $\Delta t = 21~\textrm{yr}$ was used. In the case of Jupiter, I am able to obtain the error $\upsigma_{\dot\varpi}$ quoted in Table~4 of \citet{2013MNRAS.432.3431P} if the time span of $\Delta t = 8~\textrm{yr}$ of the Jovian orbiter Galileo is assumed. Since \citet{2018AstL...44..554P} did not use the most recent data from Juno, I also used the Galileo data interval in obtaining Table~\ref{tavola1}.
In order to better place in context the figures of Table~\ref{tavola1}, let us remark that an accuracy as good as $\upsigma_{\dot\varpi}=8~\upmu\textrm{as~cty}^{-1}$ for the perihelion precession of Mercury, which is better than the value quoted in Table~4 of \citet{2013MNRAS.432.3431P} by a factor of about 300, corresponds to an uncertainty as small as $2\times 10^{-7}$ in the combination $\ton{1 + 2\gamma - \beta}/3$ of the PPN parameters $\gamma$ and $\beta$ multiplying the time-honored Schwarzschild-type Hermean precession of $42.98~\textrm{arcsec~cty}^{-1}$.
After all, an inspection of Table~3 of \citet{2018AstL...44..554P} shows that an improvement of more than two orders of magnitude occurred for the nonsingular orbital elements $h,~k$ of Mercury in the transition from the EPM2011 to the EPM2017 ephemerides.
By rescaling $\upsigma_{\dot\varpi}$ by a factor of $\kappa=10$, an uncertainty of $2\times 10^{-6}$ in $\ton{1 + 2\gamma - \beta}/3$ would still be a remarkable result. For the sake of comparison, in their Table~5, \citet{2014CeMDA.119..237P} claimed $\upsigma_\gamma=6\times 10^{-5},~\upsigma_\beta=3\times 10^{-5}$ obtained with the EPM2011 ephemerides, while \citet{2015CeMDA.123..325F}, who used the INPOP13c ephemerides, quoted $\upsigma_\gamma=7\times 10^{-5},~\upsigma_\beta=5\times 10^{-5}$. More recently, on the basis of the MESSENGER data, \citet{2017AJ....153..121P} obtained $\upsigma_\beta=3.9\times 10^{-5}$, while \citet{2018NatureG} released $\upsigma_\beta=1.8\times 10^{-5}$. On the other hand, this might suggest that, in fact, a factor $\kappa$ somewhat larger than 10 may be more appropriate; $\kappa = 50$ corresponds to an uncertainty of $1\times 10^{-5}$ in $\ton{1 + 2\gamma - \beta}/3$. Thus, a value in the range $10\lesssim \kappa \ll 50$ seems plausible.
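The sensitivities quoted above can be checked with a few lines of arithmetic; the following Python sketch reproduces the mapping from a (possibly rescaled) perihelion-rate uncertainty of Mercury to the uncertainty in $\ton{1 + 2\gamma - \beta}/3$.

```python
# Sanity check: a perihelion-rate uncertainty sigma (muas cty^-1), possibly
# rescaled by kappa, divided by the Schwarzschild precession of Mercury
# (42.98 arcsec cty^-1) gives the uncertainty in (1 + 2*gamma - beta)/3.
SCHWARZSCHILD_MERCURY = 42.98  # arcsec cty^-1

def ppn_uncertainty(sigma_muas_cty, kappa=1):
    sigma_arcsec = kappa * sigma_muas_cty * 1e-6  # muas -> arcsec
    return sigma_arcsec / SCHWARZSCHILD_MERCURY

u1 = ppn_uncertainty(8)             # ~2e-7, the figure quoted in the text
u10 = ppn_uncertainty(8, kappa=10)  # ~2e-6
u50 = ppn_uncertainty(8, kappa=50)  # ~1e-5
```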
\section{Discussion and Conclusions}
Here, I will outline some potential uses of the uncertainties in the planetary orbital rates of change tentatively calculated in Table~\ref{tavola1}.
Since \citet{2018AstL...44..554P}, among other things, modeled also the Solar Lense-Thirring effect assuming its existence as predicted by GTR, the uncertainties of Table~\ref{tavola1}, possibly rescaled by a factor \textcolor{black}{$\kappa \gtrsim 10$}, can be viewed as globally representative of the mismodeling/unmodeling in all the standard post-Newtonian dynamics of the solar system including GTR to order $\mathcal{O}\ton{c^{-2}}$, classical N-body effects, the oblateness of the Sun, asteroids and TNOs, the uncertainties in the propagation of the electromagnetic waves, and the measurement errors.
Should the gravitomagnetic field of the Sun not be modeled, it would be possible to use the supplementary precessions $\Delta\dot I,~\Delta\dot\Omega,~\Delta\dot\varpi$ of Mercury determined in such a way to try to measure the corresponding Lense-Thirring rates of change to a $\simeq 4\%$ level by disentangling them from the competing classical precessions induced by the Sun's quadrupole mass moment $J_2$. Indeed, both $J_2$ and $\mathbf{S}$ induce long-term precessions on $I,~\Omega$ and $\varpi$ in an arbitrary coordinate system not aligned with the Sun's equator \citep{2011PhRvD..84l4001I}; the expected gravitomagnetic perihelion precession of Mercury amounts to $-2~\textrm{mas~cty}^{-1}$.
The availability of, hopefully, all the extra-precessions $\Delta\dot e,~\Delta\dot I,~\Delta\dot\Omega,~\Delta\dot\varpi$ of as many planets as possible may be useful also in regard to the general relativistic $N$-body effect recently calculated by \citet{2018PhRvL.120s1101W} only for the perihelion which, for Mercury, is of the order of $0.1\masy$. Indeed, should it also theoretically affect the other orbital elements, it would be possible, in principle, to use all the supplementary precessions of more than one planet to separate it from the other larger competing Newtonian and post-Newtonian effects.
Even by rescaling the figures of Table~\ref{tavola1} by a factor of $\kappa = 10$, it would allow one to discard the anomalous perihelion precessions predicted by \citet{2017PhLB..769..281L} on the basis of the emergent gravity theory proposed by \citet{2017ScPP....2...16V} which, for Mercury and Mars, amounts to $0.7\masy$ and $0.09\masy$, respectively. Indeed, \citet{2018AstL...44..554P} did not announce any statistically significant non-zero anomaly in their planetary data reduction. Thus, even if no supplementary perihelion advances are displayed in \citet{2018AstL...44..554P}, it is reasonable to speculate that, should they have been produced, they would have been statistically compatible with zero.
The same conclusion holds also for the anomalous perihelion precession of $\simeq 0.5\masy$ \citep{2003PhRvD..67f4002L}, identical for all of the planets up to terms of order $\mathcal{O}\ton{e^2}$, arising from the Dvali$-$Gabadadze$-$Porrati (DGP) braneworld model \citep{2000PhLB..485..208D}.
An important use of accurately determined extra-rates $\Delta\dot e,~\Delta\dot I,~\Delta\dot\Omega,~\Delta\dot\varpi$ for, say, Mars and Saturn would consist in placing much tighter constraints on the location of the putative distant Planet Nine, known also as Telisto, whose gravitational action perturbs all the orbital elements of the known planets. Indeed, such more accurate constraints could be inferred along the lines of what \citet{2017Ap&SS.362...11I} did with just $\Delta\dot\Omega,~\Delta\dot\varpi$ of Saturn determined with the INPOP10a ephemerides by \citet{2011CeMDA.111..363F}. Moreover, such a seemingly purely planetological and astronomical topic is, instead, connected also with fundamental physics. Indeed, \citet{2009MNRAS.399..474M} and \citet{2011MNRAS.412.2530B} showed that the pull of a remote, point-like body located toward the Galactic center is dynamically equivalent to that of the external field effect in the planetary regions of the solar system within the framework of the modified Newtonian dynamics.
Finally, I stress once again how important it is that the astronomers responsible for the construction of the planetary ephemerides determine, hopefully as soon as possible, accurate supplementary rates of change for all the orbital elements of as many planets as possible, along with their uncertainties, in view of their wide applications in fundamental physics and beyond. Rates accompanied by their uncertainties would be useful for testing various ideas in gravitational physics. Uncertainties alone might be used for planning purposes and sensitivity analyses, but not for tests.
\section*{Acknowledgements}
I am grateful to the anonymous referee for the competent and useful critical remarks.
\subsection*{Acknowledgements}
This paper contains the main results obtained in the second part of my thesis.
I would thus like to thank my thesis advisors Etienne Ghys and Patrick Popescu-Pampu for their guidance and encouragement;
as well as Francis Bonahon, Louis Funar, Jean-Pierre Otal and Anne Pichon who refereed and carefully read my work.
I am also grateful to Pierre Dehornoy for sharing his knowledge of modular knots.
Finally, I owe Marie Dossin for helping me with the figures in tikz.
\section{Introduction}
\subsection*{Context and motivation}
The modular group $\PSL_2(\Z)$ acts properly discontinuously on the hyperbolic plane $\H\P$ with quotient the modular orbifold $\M$, a hyperbolic surface with conical singularities $i$ \& $j$ of order $2$ \& $3$, and a cusp $\infty$.
The free homotopy classes of loops in $\M$ correspond to the conjugacy classes in its fundamental group $\pi_1(\M)=\PSL_2(\Z)$.
In particular the hyperbolic conjugacy classes in $\PSL_2(\Z)$ correspond to the closed oriented geodesics in $\M$, called \emph{modular geodesics}.
For hyperbolic $A\in \PSL_2(\Z)$ the modular geodesic $\gamma_A$ has length $\lambda_A$ equal to the logarithm of the ratio between its eigenvalues $\epsilon_A^{\pm 1}$, in formula:
\begin{equation*}
\disc(A)
= \left(\epsilon_A-\epsilon_A^{-1}\right)^2
= (\Tr A)^2-4
= 4\left(\sinh \tfrac{1}{2}\lambda_A\right)^2
\end{equation*}
We denote by $I(A,B)$ the geometric intersection number between the associated modular geodesics.
The unit tangent bundle $\U=\PSL_2(\Z)\backslash \PSL_2(\R)$ of $\M=\PSL_2(\Z)\backslash \H\P$ is a $3$-manifold, and the closed oriented geodesics in $\M$ lift to the periodic orbits for the geodesic flow in $\U$.
Hence the primitive hyperbolic conjugacy classes in $\PSL_2(\Z)$ correspond to the so called \emph{modular knots} in $\U$, which form the components of the \emph{master modular link}.
The structure of the Seifert fibration $\U \to \M$ reveals that $\U$ is homeomorphic to the complement of a trefoil knot in the sphere. In particular, one may speak of the linking numbers of modular knots with the trefoil and with one another.
Let us recall a combinatorial parametrization of the infinite order conjugacy classes in $\PSL_2(\Z)$.
The Euclidean algorithm shows that the group $\SL_2(\Z)$ is generated by the transvections $L\&R$, and more precisely that its submonoid $\SL_2(\N)$ of matrices with non-negative entries is freely generated by $L\&R$.
This submonoid can be identified with its image $\PSL_2(\N)\subset \PSL_2(\Z)$.
\begin{equation*}
L=
\begin{psmallmatrix}
1 & 0 \\ 1 & 1
\end{psmallmatrix}
\qquad
R=
\begin{psmallmatrix}
1 & 1 \\ 0 & 1
\end{psmallmatrix}
\end{equation*}
In $\PSL_2(\Z)$, the conjugacy class of an infinite order element intersects $\PSL_2(\N)$ along all cyclic permutations of a non-empty $L\&R$-word.
The conjugacy class is primitive if and only if the cyclic word is primitive, and it is hyperbolic when the cyclic word contains both letters $L$ and $R$.
One may try to relate the geometry and topology of the master modular link with the arithmetics and combinatorics of conjugacy classes in the modular group.
Our previous work \cite{CLS_Conj-PSL2K_2022} expresses the geometry of modular geodesics (angles of intersection and lengths of ortho-geodesics) in terms of the arithmetics of conjugacy classes in the modular group (discriminants, cross-ratios between fixed points on the projective line, and their Hilbert symbols).
The main results in this paper will relate the linking numbers of modular knots to the combinatorics of the corresponding cyclic words.
The most immediate measures for the complexity of a binary word are given by the sum and difference between the numbers of letters of each sort.
For an infinite order $A\in \PSL_2(\Z)$ we denote by $\len([A])=\#R+\#L$ the combinatorial length and call $\Rad([A])=\#R-\# L$ the \emph{Rademacher number} of its conjugacy class.
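These definitions are straightforward to implement; the following Python sketch (illustrative only, with function names of our own choosing) converts an $L\&R$-word into its matrix in $\PSL_2(\N)$, recovers the word by the Euclidean peeling algorithm, and reads off the Rademacher number.

```python
# L&R-words and the monoid SL2(N): word -> matrix by substitution, and the
# inverse factorization by the Euclidean algorithm (peel off R when the top
# row dominates the bottom row entrywise, L otherwise).
L, R, ID = ((1, 0), (1, 1)), ((1, 1), (0, 1)), ((1, 0), (0, 1))

def mul(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return ((a*e + b*g, a*f + b*h), (c*e + d*g, c*f + d*h))

def word_to_matrix(w):
    A = ID
    for ch in w:
        A = mul(A, L if ch == 'L' else R)
    return A

def matrix_to_word(A):
    w = []
    while A != ID:
        (a, b), (c, d) = A
        if a >= c and b >= d:            # A = R * A'
            w.append('R'); A = ((a - c, b - d), (c, d))
        else:                            # A = L * A'
            w.append('L'); A = ((a, b), (c - a, d - b))
    return ''.join(w)

def rad(w):  # Rademacher number #R - #L of the conjugacy class
    return w.count('R') - w.count('L')
```

Exactly one peel keeps the entries non-negative at each step (two equal rows would contradict determinant one), so the factorisation is unique, as the freeness of the monoid requires; homogeneity $\Rad(A^n)=n\Rad(A)$ is visible on repeated words.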
In his paper \cite{Atiyah_log(eta-Dedekind)_1987} on the Logarithm of the Dedekind eta function, M. Atiyah identified the Rademacher function with no less than six other important functions appearing in diverse areas of mathematics, showing how omnipresent it is.
The function $\Rad\colon \PSL_2(\Z) \to \Z$ is a quasi-morphism, meaning that it has a bounded derivative
\begin{equation*}
d\Rad\colon \PSL_2(\Z)\times \PSL_2(\Z)\to \Z
\qquad
d\Rad(A,B)=\Rad(B)-\Rad(AB)+\Rad(A)
\end{equation*}
and is homogeneous, meaning that $\Rad(A^n)=n\Rad(A)$ for infinite order $A\in \PSL_2(\Z)$ and $n\in \Z$.
This enabled \'E. Ghys and J. Barge to recognise it in \cite{BargeGhys_cocycle-euler-maslov_1992} as half the primitive of the bounded Euler class in $H^2_b(\PSL_2(\Z);\R)$ and explain its ubiquity.
In \cite{Ghys_knots-dynamics_2006}, \'E. Ghys showed that the linking number of a modular knot with the trefoil equals its Rademacher invariant, and concluded by asking for \emph{arithmetical and combinatorial interpretations of the linking pairing between modular knots}.
In this work, we will derive several formulae for those linking numbers, providing bridges between the arithmetics and geometry, the combinatorics and algebra, or the dynamics and topology of the modular group.
\subsection*{The arithmetic \& geometry of the cosines}
The journey from arithmetics began with our previous work \cite{CLS_Conj-PSL2K_2022}. In particular for a field $\Field\supset \Q$, we described when two hyperbolic elements $A,B\in \PSL_2(\Z)$ of the same discriminant $\Delta$ are conjugate in $\PSL_2(\Field)$.
The obstruction is measured in terms of any intersection angle $\theta$ between the modular geodesics $\gamma_A,\gamma_B$ by the class of $\left(\cos \tfrac{\theta}{2}\right)^2 \in \Field^\times$ modulo the group of norms over $\Field$ of the extension $ \Field[\sqrt{\Delta}]$.
When this obstruction vanishes, the elements of $\PSL_2(\Field)$ which conjugate $A$ to $B$ are parametrized by the points $(X,Y)\in \Field^2$ of the generalised Pell-Fermat conic with equation $X^2-\Delta Y^2 = \left(\cos \tfrac{\theta}{2}\right)^2$.
Hence the geometric quantities $\left(\cos \tfrac{\theta}{2}\right)^2=\tfrac{1+\cos(\theta)}{2}$ given, for some representatives $A,B$ whose axes intersect in $\H\P$, lifted to $\SL_2$ with positive trace, by
\begin{equation*}
\cos(\theta) = \tfrac{\Tr(AB)-\Tr(AB^{-1})}{\sqrt{\disc A\disc B}}
\end{equation*}
have an arithmetic meaning, and they will reappear under various forms in the sequel.
\subsection*{Linking functions on the character variety}
Let us introduce, for any pair of modular geodesics $\gamma_A,\gamma_B$, the following summations over their oriented intersection angles $\theta \in \,]0,\pi[$:
\begin{equation*}
\Link_q(A,B)
= \tfrac{1}{2} \sum \left(\cos \tfrac{\theta}{2}\right)^2
\qquad \mathrm{and} \qquad
\Cos_q(A,B)
= \tfrac{1}{2} \sum \left(\cos \theta\right)
\end{equation*}
and study their variations as we deform the metric on $\M$ by opening the cusp.
The complete hyperbolic metrics on the orbifold $\M$ correspond to the faithful and discrete representations $\rho\colon \PSL_2(\Z) \to \PSL_2(\R)$ up to conjugacy.
They form a $1$-dimensional real algebraic set parametrized by $q\in \R^*$ and the matrix $A_q=\rho_q(A)$ is obtained from any $L\&R$-factorisation of $A$ by replacing $L\mapsto L_q$ and $R\mapsto R_q$, where:
\begin{equation*}
L_q =
\begin{pmatrix}
q & 0 \\
1 & q^{-1}
\end{pmatrix}
\qquad \mathrm{and} \qquad
R_q =
\begin{pmatrix}
q & 1 \\
0 & q^{-1}
\end{pmatrix}.
\end{equation*}
The primitive hyperbolic conjugacy classes of $\PSL_2(\Z)$ still index the hyperbolic geodesics in the quotient $\M_q=\rho_q(\PSL_2(\Z))\backslash\H\P$ which do not surround the cusp.
We may thus define the analogous sums $\Link_q(A,B)$ and $\Cos_q(A,B)$ of the terms $\tfrac{1}{2}\left(\cos \tfrac{1}{2}\theta_q\right)^{2}$ and $\tfrac{1}{2}\left(\cos \theta_q\right)$ over the intersection angles $\theta_q \in \,]0,\pi[$ between the $q$-modular geodesics $\gamma_{A_q},\gamma_{B_q} \subset \M_q$.
As $q\to \infty$, the hyperbolic orbifold $\M_q$ has a convex core which retracts onto a thin neighbourhood of the long geodesic arc connecting its conical singularities, whose preimage in the universal cover $\H\P$ is a trivalent tree. In the limit we recover the action of $\PSL_2(\Z)$ on its Bruhat-Tits building, the infinite planar trivalent tree $\Tree$, and by studying its combinatorics we shall prove the following.
\begin{Theorem}[Linking and intersection from boundary evaluations]
For primitive hyperbolic $A,B\in \PSL_2(\Z)$, the limits of the functions $\Link_q(A,B)$ and $\Cos_q(A,B)$ at the boundary point of the $\PSL_2(\R)$-character variety of $\PSL_2(\Z)$ recover their linking and intersection numbers:
\begin{align*}
&\Link_q(A,B) \xrightarrow[q\to \infty]{} \lk(A,B)
\\
&\Cos_q(A,B) \xrightarrow[q\to \infty]{}
\lk(A,B)-\lk(A^{-1},B)
=\lk(A,B)-\tfrac{1}{4}I(A, B)
\end{align*}
\end{Theorem}
Hence the functions $\Link_q \& \Cos_q$ interpolate between the geometry at $q=1$ of the arithmetic group $\PSL_2(\Z) \subset \PSL_2(\R)$ and the topology at $q=+\infty$ of the combinatorial action $\PSL_2(\Z) \to \Aut(\Tree)$.
\subsection*{Linking functions and Alexander polynomials}
The graphs of $q\mapsto \Link_q(A,B)$ for various pairs $A,B$ and $q\in \C$ suggest that their zeros tend to accumulate on the unit circle. This reminds us of the various results and conjectures concerning the roots of Alexander polynomials, so we propose a possible thread to follow in this direction.
A primitive hyperbolic conjugacy class in $\PSL_2(\Z)=\pi_1(\M)$ corresponds to a primitive modular geodesic $\gamma_A\subset \M$. It lifts to a modular knot $k_A \subset \U$ which in turn yields a conjugacy class in $\BB_3=\pi_1(\U)$.
A conjugacy class in the braid group on three strands defines, by taking its closure, a link $\sigma_A$ in a solid torus.
In \cite[Proposition 5.16]{CLS_phdthesis_2022} we relate the Alexander polynomial $\Delta(\sigma_A)\in \Z[t^{\pm 1}]$ of this link $\sigma_A$ to the Fricke polynomial $\Tr A_q \in \Z[q^{\pm 1}]$ of the modular geodesic $\gamma_A$.
\begin{Proposition}
For a primitive hyperbolic $A\in \SL_2(\N)$, the Alexander polynomial of the link $\sigma_A$ is given in terms of $q=\sqrt{-t}$ by: \[\Delta(\sigma_A)=\tfrac{q^{\Rad(A)}-\Tr(A_q)+q^{-\Rad(A)}}{(q-q^{-1})^2}\]
\end{Proposition}
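As a sanity check of the substitution $L\mapsto L_q$, $R\mapsto R_q$, the Fricke polynomial $\Tr(A_q)$ can be computed exactly; in the Python sketch below (not part of the paper), Laurent polynomials in $q$ are encoded as dictionaries mapping exponents to coefficients.

```python
# Fricke polynomial Tr(A_q) by substituting L -> L_q, R -> R_q.
# Laurent polynomials in q are encoded as {exponent: coefficient} dicts.
from collections import defaultdict

def padd(p, r):
    out = defaultdict(int, p)
    for e, c in r.items():
        out[e] += c
    return {e: c for e, c in out.items() if c}

def pmul(p, r):
    out = defaultdict(int)
    for e1, c1 in p.items():
        for e2, c2 in r.items():
            out[e1 + e2] += c1 * c2
    return {e: c for e, c in out.items() if c}

def mmul(A, B):  # 2x2 matrices with Laurent-polynomial entries
    return [[padd(pmul(A[i][0], B[0][j]), pmul(A[i][1], B[1][j]))
             for j in range(2)] for i in range(2)]

ONE, ZERO, Q, QI = {0: 1}, {}, {1: 1}, {-1: 1}
Lq = [[Q, ZERO], [ONE, QI]]   # L_q = ((q, 0), (1, 1/q))
Rq = [[Q, ONE], [ZERO, QI]]   # R_q = ((q, 1), (0, 1/q))

def fricke(word):
    A = [[ONE, ZERO], [ZERO, ONE]]
    for ch in word:
        A = mmul(A, Lq if ch == 'L' else Rq)
    return padd(A[0][0], A[1][1])
```

For instance $\Tr((RL)_q)=q^2+1+q^{-2}$, and evaluating any Fricke polynomial at $q=1$ (summing its coefficients) recovers the integer trace of the corresponding matrix.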
Now recall that $\Cos_q(A,B)=\Link_q(A,B)-\Link_q(A,B^{-1})$ can be expressed as a finite sum of terms:
\begin{equation*}
\cos(A_q,B_q) = \tfrac{\Tr(A_qB_q)-\Tr(A_qB_q^{-1})}{\sqrt{\disc A_q\disc B_q}}
\qquad \mathrm{where} \qquad
\disc(C_q)= (\Tr C_q)^2-4
\end{equation*}
This is how one may compare the concentration property for the zeros of $\Link_q$ around the unit circle with those of Alexander polynomials, but we will not pursue this direction any further.
\subsection*{Linking numbers and homogeneous quasi-morphisms}
The limiting values $\Cos_q(A,B)\to \lk(A,B)-\lk(A^{-1},B)$ will now provide a bridge from the representation theory to the bounded cohomology of $\PSL_2(\Z)$.
For every group $\Pi$, the real vector space $PX(\Pi;\R)$ of homogeneous quasi-morphisms is a Banach space for the norm $\lVert df \rVert_\infty$, as was shown in \cite{MatsuMorita_Hb(Homeo)_1985, Ivanov_H2b(G)-Banach_1988}.
\begin{Theorem}
For every hyperbolic $A\in \PSL_2(\Z)$, the function $\Cos_A\colon B\mapsto \lk(A,B)-\lk(A^{-1}, B)$ is a homogeneous quasi-morphism $\PSL_2(\Z)\to \Z$.
\end{Theorem}
Let $\mathcal{P}$ denote the set of primitive infinite order conjugacy classes in $\PSL_2(\Z)$, and $\mathcal{P}_0$ the subset of those which are stable under inversion. Choose a partition $\mathcal{P}\setminus \mathcal{P}_0=\mathcal{P}_-\sqcup \mathcal{P}_+$ in two subsets in bijection by the inversion.
We may choose $R\in \mathcal{P}_+$, and denote $\Cos_R:=\Rad$ by convention.
\begin{Theorem}
The collection of $\Cos_A\in PX(\PSL_2(\Z);\R)$ for $A\in \mathcal{P}_+$ is linearly independent and every element $f\in PX(\PSL_2(\Z);\R)$ can be written as $f=\sum_{A\in \mathcal{P}_+} cf_A \cdot \Cos_A$ for unique $cf_A \in \R$.
\end{Theorem}
To prove the non-triviality and linear independence of the $\Cos_A$ for $A\in \mathcal{P}_+$, we were led to show the non-degeneracy of the linking form, which is interesting in its own right.
\begin{Theorem}
If hyperbolic $A,B\in \PSL_2(\Z)$ are link equivalent, namely $\lk(A,X)=\lk(B,X)$ for all hyperbolic $X\in \PSL_2(\Z)$,
then they are conjugate.
\end{Theorem}
The results in this section may be compared to the classical representation theory of compact groups, in which the characters of irreducible representations provide an orthonormal basis for the class functions.
Indeed, we have found a family of cosine functions $\Cos_A$ whose periods correspond to the primitive conjugacy classes of $\PSL_2(\Z)$, and they form a basis for the space of quasi-characters $PX(\PSL_2(\Z))$, which is in some sense orthogonal with respect to the linking form (but we refer to the proofs in section \ref{sec:quasi-morphism} for a better explanation of this orthogonality).
\section{The group \texorpdfstring{$\PSL_2(\R)$}{PSL(2;R)}: discriminant and cross-ratio}
\label{sec:disc-bir}
The group $\PSL_2$ acts by conjugacy over itself.
In this section, we recall from \cite{CLS_Conj-PSL2K_2022} the main invariants which enable one to describe the orbits of single elements and of pairs of elements.
Those are the discriminant $\disc(A)$ and the cross-ratio $\bir(A,B)$.
\subsection{Over a field \texorpdfstring{$\Field$}{K} with \texorpdfstring{$\operatorname{char}(\Field)\ne 2$}{charnot2}}
\label{subsec:disc-bir_K}
The automorphism group $\PGL_2(\Field)$ of the projective line $\Field\P^1$ acts freely transitively on triples of distinct points, and the unique algebraic invariant of four points $u,v,x,y\in \Field\P^1$ is the cross-ratio:
\begin{equation}
\label{eq:bir}
\bir(u,v,x,y)
= \frac{(v-u)}{(v-x)}
\div \frac{(y-u)}{(y-x)}
\in \Field\P^1
\end{equation}
It satisfies in particular $\bir(z,0,1,\infty)=z$ and $\bir(z,0,w,\infty)=z/w$, whence the cocycle rule:
\begin{equation*}
\bir(z,v,w,y)=\bir(z,v,x,y)\, \bir(x,v,w,y).
\end{equation*}
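Both the normalisations and the cocycle rule, in the form $\bir(z,v,w,y)=\bir(z,v,x,y)\,\bir(x,v,w,y)$, are easy to verify on rational sample points with exact arithmetic; an illustrative Python check (the sample points are arbitrary choices of ours):

```python
# Cross-ratio on an affine chart, with exact rational arithmetic, and a check
# of the cocycle rule bir(z,v,w,y) = bir(z,v,x,y) * bir(x,v,w,y) together with
# the symmetry 1/bir(a,b,c,d) + 1/bir(a,b,d,c) = 1 used later in the text.
from fractions import Fraction as F

def bir(u, v, x, y):
    return (v - u) * (y - x) / ((v - x) * (y - u))

z, v, w, x, y = map(F, (3, 0, 2, 5, 7))  # arbitrary distinct rational points
lhs = bir(z, v, w, y)
rhs = bir(z, v, x, y) * bir(x, v, w, y)
```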
For a triple $(x_1,x_2,x_3)$ of distinct points in $\Field\P^1$, we define their Maslov index in $\Field^\times/(\Field^\times)^2$ by lifting them to non-zero vectors $\vec{x_i}\in x_i \subset \Field^2$ with zero sum, and taking the determinant $\det(\vec{x}_1,\vec{x}_2)=\det(\vec{x}_2,\vec{x}_3)=\det(\vec{x}_3,\vec{x}_1)$.
It is preserved by the subgroup $\PSL_2(\Field)$, which acts freely transitively on the triples of distinct points with a given Maslov index.
A non-trivial $A\in \PSL_2(\Field)$ has two fixed points $\alpha_-,\alpha_+$ in the projective line over $\sqrt{\Field}$, that are well defined up to transposition.
The element $\epsilon_A^{\pm 2} := \bir(Ax,\alpha_\mp,x,\alpha_\pm)$ does not depend on $x\in \Field \P^1 \setminus\{\alpha_-,\alpha_+\}$, and is well defined up to inversion: it is called the \emph{period} of $A$.
The unique algebraic function on $\PSL_2(\Field)$ which is invariant by conjugacy is the discriminant:
\begin{equation*}
\disc(A) = \left(\epsilon_A-\epsilon_A^{-1}\right)^2
\end{equation*}
whose class in $\{0\}\sqcup \Field^\times /(\Field^\times)^2$ defines the \emph{type} of $A$.
All non-trivial elements with $\disc=0$ are conjugate. The elements with $\disc \ne 0$ are called \emph{semi-simple}.
The conjugacy classes of elements with $\disc \equiv 1 \bmod{(\Field^\times)^2}$ are uniquely characterised by the value of their discriminant.
Consider $A,B\in \PSL_2(\Field)$ of the same type, and fix a square root of $\disc(A)\disc(B)\in (\Field^\times)^2$.
Then one may order their fixed points $(\alpha_-,\alpha_+)$ and $(\beta_-,\beta_+)$ up to simultaneous inversion, and consistently define their cross-ratio $\bir(A,B)$ by:
\begin{equation*}
\bir(A,B):= \bir(\alpha_-,\alpha_+,\beta_-,\beta_+) \in \Field\P^1
\end{equation*}
which is $\notin \{0,\infty\}$ unless $A$ and $B$ share a fixed point, and satisfies the symmetry property:
\begin{equation*}
\frac{1}{\bir(A,B)}+\frac{1}{\bir(A,B^{-1})}=1.
\end{equation*}
We may also define their cosine (using their adjoint action on the Lie algebra $\Sl_2(\Field)$ as in \cite{CLS_Conj-PSL2K_2022}), which is related to the cross-ratio by:
\begin{equation}
\label{eq:cos-bir}
\cos(A,B)=\frac{1}{\bir(A,B)}-\frac{1}{\bir(A,B^{-1})}.
\end{equation}
\begin{Theorem}
\label{Thm:conj-PSL_2(Z)}
Consider the action of $\PSL_2(\Field)$ by conjugacy on itself.
Two semi-simple elements $A,B$ are conjugate if and only if $\disc(A)=\Delta=\disc(B)$ and $\bir(A,B)\equiv 1 \bmod{\Norm_\Field \Field[\sqrt{\Delta}]}$.
A pair of semi-simple elements $A_1,A_2$ of the same type is conjugate to another pair of semi-simple elements $B_1,B_2$ of the same type if and only if we have $\bir(A_1,A_2)=\bir(B_1,B_2)$ as well as $\disc(A_i)=\Delta_i=\disc(B_i)$ and $\bir(A_i,B_i)\equiv 1 \bmod{\Norm_\Field \Field(\sqrt{\Delta_i})^\times}$.
\end{Theorem}
\subsection{Over the real field}
The automorphism group $\PGL_2(\C)\simeq \PSL_2(\C)$ of the complex projective line $\C\P^1$ contains $\PGL_2(\R)$ as the stabiliser of the real projective line $\R\P^1$.
The index-two subgroup $\PSL_2(\R)$ also preserves the upper half-plane $\H\P=\{z\in \C \mid \Im(z)>0\}\subset \C\P^1$, or equivalently the orientation induced on its boundary $\partial \H\P$, also given by the cyclic order $\cord(x,y,z) \in \{\pm 1\}$ of any triple of distinct points $x,y,z$ of $\R\P^1$.
The complex structure on $\H\P$ is conformal to a unique hyperbolic metric. The hyperbolic distance $\lambda$ between $w,z\in \H\P$ can be deduced from the cross-ratio by $\bir(\Bar{z},z,\Bar{w},w)^{-1}=\left(\cosh \tfrac{\lambda }{2}\right)^{2}$.
This realises $\PSL_2(\R)$ as the positive isometry group of the hyperbolic plane: it preserves the previous cross-ratio and acts simply-transitively on positive triples of distinct points in $\R\P^1$, thus it preserves the hyperbolic metric and acts simply transitively on the unit tangent bundle of $\H\P$.
The type of $A \in \PSL_2(\R)$ is elliptic or parabolic or hyperbolic according to the value of $\sign \disc(A) \in \{-1,0,1\}$, equal to the number of distinct fixed points in $\R\P^1$ minus $1$.
A hyperbolic $A \in \PSL_2(\R)$ acts on $\H\P$ by translation along an oriented geodesic $\gamma_A$ whose endpoints $\alpha_-,\alpha_+ \in \R\P^1$ are its repulsive and attractive fixed points.
With this order the period satisfies $\epsilon_A^2 >1$.
The translation length $\lambda_A = \log(\epsilon_A^2)$ yields $\disc(A)=4\left(\sinh \tfrac{1}{2}\lambda_A\right)^2$.
\begin{Lemma}
\label{Lem:cos-cosh-sinh}
Consider hyperbolic $A,B\in \PSL_2(\R)$ with distinct fixed points.
If we lift them in $\SL_2(\R)$ with positive trace, then we have:
\begin{equation*}
\cos(A,B)= \frac{\Tr(AB)-\Tr(AB^{-1})}{\sqrt{\disc(A)\disc(B)}}
\end{equation*}
Consider the relative position of their oriented hyperbolic geodesics $(\alpha_-,\alpha_+)$ and $(\beta_-,\beta_+)$ in $\H\P$.
If they intersect, their angle $\theta$ is well defined up to a sign, and satisfies $\cos \theta=\cos(A,B)$, thus:
\begin{equation*}
\frac{1}{\bir(A,B)}
= \frac{1 + \cos(\theta)}{2}
= \left(\cos \tfrac{\theta}{2}\right)^{2}
\end{equation*}
If they do not intersect, they have a unique common perpendicular geodesic arc, whose length $\lambda$ satisfies $\cos(A,B)=\pm \cosh \lambda$. The sign $\pm 1$ compares the co-orientations induced by each axis. Thus we have respectively:
\begin{equation*}
\frac{1}{\bir(A,B)}
= \frac{1 + \cosh(\lambda)}{2}
= \left(\cosh \tfrac{\lambda}{2}\right)^{2}
\quad \mathrm{and}\quad
\frac{1}{\bir(A,B)}
= \frac{1 - \cosh(\lambda)}{2}
= \left(\sinh \tfrac{\lambda}{2}\right)^{2}
\end{equation*}
\end{Lemma}
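The trace formula of the Lemma lends itself to exact computation; the following Python sketch (illustrative, restricted to pairs whose discriminant product is a perfect square, as for $A=LR$ and $B=RL$) evaluates $\cos(A,B)$ and the corresponding $1/\bir(A,B)$.

```python
# cos(A,B) = (Tr(AB) - Tr(AB^-1)) / sqrt(disc(A) disc(B)) for positive-trace
# lifts in SL2(Z); only handles perfect-square discriminant products.
from fractions import Fraction as F
from math import isqrt

def mul(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return ((a*e + b*g, a*f + b*h), (c*e + d*g, c*f + d*h))

def tr(A):
    return A[0][0] + A[1][1]

def inv(A):  # inverse in SL2
    (a, b), (c, d) = A
    return ((d, -b), (-c, a))

def disc(A):
    return tr(A) ** 2 - 4

def cos_pair(A, B):
    prod = disc(A) * disc(B)
    r = isqrt(prod)
    assert r * r == prod, "only handles perfect-square disc products"
    return F(tr(mul(A, B)) - tr(mul(A, inv(B))), r)

A = ((1, 1), (1, 2))   # LR
B = ((2, 1), (1, 1))   # RL
```

One finds $\cos(A,A)=1$ as it must be, and $\cos(LR,RL)=3/5$, so the two axes intersect at an angle $\theta$ with $1/\bir=\left(\cos\tfrac{\theta}{2}\right)^2=(1+\tfrac{3}{5})/2=4/5$.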
\begin{figure}[h]
\centering
\scalebox{.49}{\input{images/tikz/birapport_angle-theta_cos}}
\scalebox{.49}{\input{images/tikz/birapport_length-lambda_cosh}}
\scalebox{.49}{\input{images/tikz/birapport_length-lambda_sinh}}
\caption*{Angle well defined in $\,]0,\pi[$. Ortho-geodesics well and badly co-orientated.}
\end{figure}
\begin{comment}
\begin{figure}[h]
\centering
\scalebox{.3}{\input{images/tikz/co-orientation_angle-mod-sign}}
\scalebox{.3}{\input{images/tikz/co-orientation_ortho-geodesic}}
\caption*{Angle well defined in $\,]0,\pi[$. Ortho-geodesics well and badly co-orientated.}
\end{figure}
\end{comment}
\section{The modular group \texorpdfstring{$\PSL_2(\Z)$}{PSL(2,Z)}}
\subsection{The modular orbifold}
The subgroup $\PSL_2(\Q)$ of $\PSL_2(\R)$ is the stabiliser of the rational projective line $\Q\P^1$.
The discrete subgroup $\PSL_2(\Z)$ is the stabiliser of the ideal triangulation $\Tri$ of $\H\P$ with vertex set $\Q\P^1$ and edges all geodesics whose endpoints $\tfrac{p}{q},\tfrac{r}{s}$ satisfy $\lvert ps-qr \rvert =1$.
Consider the action of $\PSL_2(\Z)$ on $\Tri$.
It is transitive on the set of edges, which is in bijection with the orbit of $i\in (0,\infty)$, and the stabiliser of $i$ is the subgroup of order $2$ generated by $S$.
It is transitive on the set of triangles, which is in bijection with the orbit of $j=\exp(i\pi/3)\in (0,1,\infty)$, and the stabiliser of $j$ is the subgroup of order $3$ generated by $T$.
Thus it is freely transitive on the flags of $\Tri$, or equivalently on the oriented edges, and we deduce that $\PSL_2(\Z)=\Z/2*\Z/3$ is the free amalgam of its subgroups generated by $S$ and $T$.
\begin{equation*}
S = \begin{pmatrix}
0 & -1 \\ 1 & 0
\end{pmatrix}
\qquad
T = \begin{pmatrix}
1 & -1 \\ 1 & 0
\end{pmatrix}
\end{equation*}
We also find that $\PSL_2(\Z)$ acts properly discontinuously on $\H\P$ with fundamental domain the triangle $(\infty,0,j)$.
We may cut it along the geodesic arc $(i,j)$ to obtain a pair of isometric triangles $(i,j,\infty)$ and $(i,j,0)$. Identifying them along their isometric edges yields the \emph{modular orbifold}
\begin{equation*}
\M = \PSL_2(\Z)\backslash \H\P.
\end{equation*}
It is a hyperbolic two-dimensional orbifold, with conical singularities of order $2$ \& $3$ associated to the fixed points $i$ \& $j$ of $S$ \& $T$, and a cusp associated to the fixed point $\infty \in \partial \H\P$ of $R=S^{-1}T$.
\begin{figure}[h]
\centering
\scalebox{0.57}{\input{images/tikz/lagrangian-complex}}
\hfill
\scalebox{0.57}{\input{images/tikz/PSL2Z-pavage-PH-fundom}}
\caption*{The ideal triangulation of $\H\P$ together with its dual trivalent tree $\Tree$ yield the modular tessellation with fundamental domain $(0,j,\infty)$.}
\label{fig:LagranTree}
\end{figure}
The modular group $\PSL_2(\Z)$ is the orbifold fundamental group of $\M$, so its conjugacy classes correspond to the free homotopy classes of oriented loops in $\M$.
The elliptic conjugacy classes are those of $S$ \& $T^{\pm 1}$ which correspond to oriented loops encircling the singularities, and the parabolic conjugacy classes are those of $R^n$ which correspond to loops encircling the cusp.
The conjugacy class of a hyperbolic $A\in \PSL_2(\Z)$ corresponds to the homotopy class of a unique oriented geodesic $\gamma_A\subset \M$, and its length equals $\lambda_A=\log(\epsilon_A^2)$. These are called \emph{modular geodesics}.
\subsection{Acting on a trivalent tree}
The preimage of the segment $(i,j)\subset \M$ in $\H\P$ forms a bipartite tree $\Tree'$, the first barycentric subdivision of a trivalent tree $\Tree$ which is dual to the ideal triangulation.
The \emph{base edge} $(i,j)$ of $\Tree'$ defines the \emph{oriented base edge} $\vec{e}_i$ of $\Tree$.
The action of $\PSL_2(\Z)$ is freely transitive on the set of edges of $\Tree'$ hence on the set of oriented edges of $\Tree$.
It also preserves the cyclic order on the set of edges incident to each vertex (given by the surface embedding $\Tree \subset \H\P$).
This is equivalent to the cyclic order function $\cord(x,y,z)\in \{-1,0,1\}$ of three points $x,y,z\in \Tree \cup \partial \Tree$.
\begin{comment}
This is equivalent to the cyclic order function $\cord(x,y,z)\in \{-1,1\}$ of three distinct points $x,y,z\in \Tree \cup \partial \Tree$, or to the crossing function $\cross(u,v,x,y) \in \{-1,0,1\}$ of four distinct points $u,v,x,y\in \Tree \cup \partial \Tree$ defined by:
\begin{equation*}
\cross(u,v,x,y) = \tfrac{1}{2}\left(\cord(u,x,v) - \cord(u,y,v)
\right)\end{equation*}
that is the algebraic intersection number of the oriented geodesics $(u,v)$ and $(x,y)$.
We denote $\across(u,v,x,y) \in \{0,1\}$ the absolute value of $\cross(u,v,x,y)$ which yields the linking number of the cycles $(u,v)$, $(x,y)$ in the cyclically ordered boundary $\partial \Tree$.
\end{comment}
Thus $\PSL_2(\Z)$ is the full automorphism group of the cyclically ordered simplicial tree $(\Tree, \cord)$.
We may now use this action to find some conjugacy invariants of primitive elements by considering their stable subsets.
Recall that an element in $\PSL_2(\Z)$ is called primitive when it generates a maximal cyclic subgroup.
The (primitive) elliptic conjugacy classes correspond to the vertices of $\Tree'$ and the primitive parabolic conjugacy classes correspond to the connected components of $\H\P\setminus \Tree$.
\begin{comment}
\begin{figure}[h]
\centering
\scalebox{.5}{\input{images/tikz/action_L-R-S-T_tree}}
\hfill
\scalebox{.5}{\input{images/tikz/action_LRL_tree}}
\caption{The action of $S$, $T$, $L$, $R$ and $LRL$ on the dual tree $\Tree$ of $\Tri_2$.}
\label{fig:PSL2Z-elli-para-hyper-tree}
\end{figure}
\end{comment}
Let $A\in \PSL_2(\Z)$ be primitive of infinite order.
It acts on $\Tree$ by translation along an oriented geodesic $g_A$ called its \emph{combinatorial axis}, with endpoints $\alpha_-,\alpha_+\in \partial \Tree = \R\P^1$.
Observe that $g_A$ passes through the oriented base edge $\vec{e}_i$ of $\Tree$ exactly when its endpoints satisfy $\alpha_-\le 0 \le \alpha_+$, which is equivalent to saying that $A$ maps the base triangle $(0,1,\infty)$ to a triangle of the form $(\tfrac{b}{d}, \tfrac{a+b}{c+d}, \tfrac{a}{c})$ with $a,b,c,d\in \N$, in other terms, that $A$ belongs to the monoid $\PSL_2(\N)$ freely generated by $L\& R$.
In that case, $g_A$ follows a periodic sequence of left and right turns given by the $L\& R$-factorisation of $A$, or the continued fraction expansion of the periodic number $\alpha$.
The conjugacy class of $A$ corresponds to the orbit of $g_A$ under the action of $\PSL_2(\Z)$ on $\Tree$.
Hence the conjugacy classes of non-elliptic elements in $\PSL_2(\Z)$ correspond to the cyclic words over the alphabet $\{L,R\}$, and the hyperbolic classes yield the cycles in which both letters appear.
The linear representatives of such an $L\&R$-cycle parametrize the intersection of the corresponding conjugacy class with $\PSL_2(\N)$, whose elements are called its \emph{Euclidean representatives}.
For $A\in \PSL_2(\Z)$ of infinite order, the minimal displacement $\min d(x,A\cdot x)$ over the vertices $x\in \Tree$ equals the combinatorial length $\len(A) = \#R+\#L$ of a Euclidean representative.
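The $L\&R$-factorisation of a matrix in $\PSL_2(\N)$ can be computed by running the Euclidean algorithm on its rows. The following Python sketch is our own illustration (the function name and input convention are not from the text); it peels off the leading letter at each step:

```python
# L = [[1,0],[1,1]] and R = [[1,1],[0,1]] freely generate the monoid PSL_2(N).
def LR_factorisation(a, b, c, d):
    """L&R-word of a matrix [[a,b],[c,d]] in PSL_2(N) (det 1, entries >= 0)."""
    word = ""
    while (a, b, c, d) != (1, 0, 0, 1):
        if a >= c and b >= d:      # top row dominates: peel off a leading R
            word += "R"
            a, b = a - c, b - d    # R^{-1} * M subtracts row 2 from row 1
        else:                      # bottom row dominates: peel off a leading L
            word += "L"
            c, d = c - a, d - b    # L^{-1} * M subtracts row 1 from row 2
    return word
```

For instance the matrix with rows $(3,1)$ and $(2,1)$ factors as $RLL$, and $\len$ is simply the length of the returned word.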
Since the combinatorial and geometric axes of a hyperbolic $A\in \PSL_2(\Z)$ have the same endpoints $\alpha_-,\alpha_+ \in \R\P^1 = \partial \H\P = \partial \Tree$, they intersect the ideal triangulation in the same pattern. However, the geometric axis also contains the information of the intersection pattern with its first barycentric subdivision, which is equivalent to the isotopy class of the modular geodesic in $\M$.
\begin{comment}
\begin{figure}[h]
\centering
\scalebox{.55}{\input{images/tikz/modular-tesselation_axis}}
\caption*
Geometric axis $\gamma_A$ inside a $(\log{\sqrt{\Delta}})$-neighbourhood of the combinatorial axis $g_A$.}
\end{figure}
\end{comment}
\begin{figure}[h]
\centering
\scalebox{0.32}{\input{images/tikz/axis-RL-PH}}
\scalebox{0.57}{\input{images/tikz/loop-RL-M}}
\hfill
\scalebox{0.32}{\input{images/tikz/axis-RLL-PH}}
\scalebox{0.57}{\input{images/tikz/loop-RLL-M}}
\hfill
\scalebox{0.32}{\input{images/tikz/axis-RLLL-PH}}
\scalebox{0.57}{\input{images/tikz/loop-RLLL-M}}
\caption*{The geometric axes in $\P\H$ and their projections in $\M$ of $RL$, $RLL$, $RLLL$.}
\end{figure}
In \cite[Chapter 3]{CLS_phdthesis_2022}, we recover the isotopy class of $\gamma_A\subset \M$ from the $L\&R$-cycle of $A$.
We also describe when $\gamma_A$ passes through the singular points $i$ or $j$.
Moreover \cite[Lemma 2.27]{CLS_phdthesis_2022} shows that if a (primitive) hyperbolic conjugacy class in $\PSL_2(\Z)$ is stable under inversion, then it contains (exactly $4$) symmetric matrices, and those all lie in $\PSL_2(\N)$ up to inversion.
\begin{comment}
\begin{figure}[h]
\centering
\scalebox{0.8}{\input{images/tikz/loop-RL-M}}
\hspace{1cm}
\scalebox{0.8}{\input{images/tikz/loop-RLL-M}}
\hspace{1cm}
\scalebox{0.8}{\input{images/tikz/loop-RLLL-M}}
\caption*{The modular geodesics $\gamma_A\subset \M$ for $A$ equal to $RL$ and $RLL$ and $RLLL$.}
\end{figure}
\end{comment}
\section{From the geometric cosine to the combinatorial cosign}
\subsection{The functions $\cross$ and $\cosign$.}
We now use the representation $\PSL_2(\Z)=\Aut(\Tree, \cord)$ to find conjugacy invariants for pairs of primitive infinite order elements by comparing the relative positions of their stable subsets, namely their combinatorial axes.
Let us first derive from the cyclic order function of three points, the crossing function of four points $u,v,x,y\in \Tree \cup \partial \Tree$ by:
\begin{equation}
\label{eq:cross}
\cross(u,v,x,y) = \tfrac{1}{2}\left(\cord(u,x,v) - \cord(u,y,v)
\right)\end{equation}
that is the algebraic intersection number of the oriented geodesics $(u,v)$ and $(x,y)$.
We denote $\across(u,v,x,y) \in \{0,\tfrac{1}{2},1\}$ the absolute value of $\cross(u,v,x,y)$ which yields the linking number of the cycles $(u,v)$, $(x,y)$ in the cyclically ordered boundary $\partial \Tree$.
One may compare the formula \eqref{eq:cross} defining $\cross$ with the formula \eqref{eq:bir} defining $\bir$ noticing that $\cord(x,y,z)=\sign \tfrac{y-z}{y-x}$. In particular, for $u,v,x,y\in \R\P^1$ we have \[\across(u,v,x,y) = 1 \iff \bir(u,v,x,y) > 1\]
Now consider two oriented bi-infinite geodesics $g_a$ and $g_b$ of $\Tree$. Their intersection is either empty, in which case we define $\cosign(g_a,g_b)=0$, or else it consists of a geodesic containing at least one edge, along which we may compare their orientations to define $\cosign(g_a,g_b)\in \{-1,+1\}$.
The functions $\cross$ and $\cosign$ are $\PSL_2(\Z)$-invariant, symmetric, and inverting the orientation of one argument results in a change of sign.
\begin{figure}[h]
\centering
\scalebox{.65}{\input{images/tikz/cross-cosign-config}}
%
\caption*{Configurations of axes: $\cross$ and $\cosign$. Note that $\cross\ne 0 \implies \cosign = \pm 1$.}
\end{figure}
For hyperbolic $A,B \in \PSL_2(\Z)$ with axes $g_A=(\alpha_-,\alpha_+)$ and $g_B=(\beta_-,\beta_+)$ in $\Tree$, we write $\cross(A,B)=\cross(\alpha_-,\alpha_+,\beta_-,\beta_+)$ and $\cosign(A,B)=\cosign(g_A,g_B)$.
Note that $\cosign(A,B)=1$ if and only if there exists $C\in \PSL_2(\Z)$ such that $CAC^{-1}, CBC^{-1} \in \PSL_2(\N)$ in which case the set of such $C$ corresponds to the edges in $g_A\cap g_B \subset \Tree$.
\begin{Proposition}
\label{Prop:cosign(A,B)=sign(len(AB)-len(A/B))}
For hyperbolic $A,B \in \PSL_2(\Z)$ such that $g_A \cap g_B \ne \emptyset$, we have:
\begin{equation*}
\cosign(A,B)= \sign\left(\len AB -\len AB^{-1} \right).
\end{equation*}
\end{Proposition}
\begin{proof}
This follows from \cite[Proposition 1.6]{Paulin_Gromov-R-trees_1989}, which was corrected by \cite{Conder-Paulin_Erratum-Gromov-R-trees_2020}, and one may also consult \cite[Proposition 2.44]{CLS_phdthesis_2022}.
Compare with the cosine formula in Lemma \ref{Lem:cos-cosh-sinh}.
\end{proof}
\newpage
\subsection{Deforming the \texorpdfstring{$\PSL_2(\Z)$}{PSL(2;Z)}-action on \texorpdfstring{$\H\P$}{HP} to the \texorpdfstring{$\PSL_2(\Z)$}{PSL(2;Z)}-action on \texorpdfstring{$\Tree$}{T}}
Let us define a family of representations $\rho_q \colon \SL_2(\Z) \to \SL_2(\R)$ depending algebraically on the parameter $q\in \R^*$ and with integral coefficients.
The Euclidean algorithm implies that $\SL_2(\Z)$ is generated by $S\&R$, whence by $S\&T$, or $L\&R$.
Fix $S_q=S$ and let $T_q$ be the conjugate of $T$ by $\exp \tfrac{1}{2}\log(q)
\begin{psmallmatrix}
1&0\\0&-1
\end{psmallmatrix}$.
\begin{comment}
\begin{equation*}
T_q =
\begin{pmatrix}
1 & -q \\
q^{-1} & 0
\end{pmatrix}
\quad \mathrm{whence} \quad
R_q =
\begin{pmatrix}
q & 1 \\
0 & q^{-1}
\end{pmatrix}
\quad \mathrm{and} \quad
L_q =
\begin{pmatrix}
q & 0 \\
1 & q^{-1}
\end{pmatrix}
\end{equation*}
\end{comment}
Given $A\in \SL_2(\Z)$, we deduce $A_q=\rho_q(A)$ from any $S\&T$-factorisation by replacing $T\mapsto T_q$, e.g.:
\begin{equation*}
R_q =
\begin{pmatrix}
q & 1 \\
0 & q^{-1}
\end{pmatrix}
\quad \mathrm{and} \quad
L_q =
\begin{pmatrix}
q & 0 \\
1 & q^{-1}
\end{pmatrix}.
\end{equation*}
This descends to a representation $\Bar{\rho}_q \colon \PSL_2(\Z)\to \PSL_2(\R)$ which is faithful and discrete (because $\disc(R_q) = (q-q^{-1})^2 >0$),
and positive in the sense that $T_q$ is a $2\pi/3$-rotation of $\H\P$ in the positive direction.
Conversely, every such representation is conjugate to $\Bar{\rho}_q$ for a unique $q>0$.
We have therefore parametrized the Teichm\"uller space of $\PSL_2(\Z)$ by the real algebraic set $\R_+^*$.
This Teichm\"uller space corresponds to the set of hyperbolic metrics $\M_q = \Bar{\rho}_q(\Gamma)\backslash\H\P$ on the modular orbifold as a topological space.
Observe intuitively that when $q\to \infty$, the hyperbolic orbifold $\M_q$ has a convex core which retracts onto the long geodesic arc $(i,j_q)$ connecting the conical singularities.
Lifting this to $\H\P$ yields an $\epsilon$-neighbourhood of a trivalent tree $\Tree_q$ with $\epsilon= \Theta\left(1/q^2\right)$.
Since the hyperbolic geodesics of $\M_q$ remain in its convex core, their angles must tend to $0\bmod{\pi}$.
\begin{figure}[h]
\centering
\scalebox{0.9}{\input{images/tikz/orbifold-M-open-cusp}}
\qquad \qquad
\scalebox{0.45}{\input{images/tikz/orbifold-M-convex-core-lift-tree}}
\caption*{The convex core of $\M_q$ lifts in $\H\P$ to an $\epsilon$-neighbourhood of $\Tree_q$ with $\epsilon= \Theta\left(1/q^2\right)$.}
\end{figure}
To make this intuition precise, we show that the geometric invariants $\disc$ and $\cos$ of $A_q,B_q$ define algebraic functions of $q$ whose degrees recover the combinatorial invariants $2\len$ and $\cosign$ of $A,B$.
This should not surprise someone acquainted with compactifications of Teichm\"uller space by actions on trees or by valuations \cite{Otal_compactification-varietes-representations_2015, MS_Aut(CV)_2020}.
Here the unique boundary point $q=\infty$ corresponds to the action on $\Tree$ or to the valuation $-\deg_q$.
\begin{Proposition}
\label{Prop:cos_q-limit}
Consider hyperbolic $A,B\in \PSL_2(\Z)$ such that $\across(A,B)=1$.
For all $q>0$ the elements $A_q,B_q\in \PSL_2(\R)$ are hyperbolic, and their oriented geometric axes intersect at an angle whose cosine is an algebraic function of $q$ with limit $\cos(A_q,B_q) \xrightarrow[q\to \infty]{} \cosign(A,B)$.
\end{Proposition}
\begin{proof}
Lemma \ref{Lem:cos-cosh-sinh} expresses the cosine of the angles between the geometric axes of $A$ and $B$ as:
\begin{equation*}
\cos(A_q,B_q)
= \frac{\Tr(A_qB_q)-\Tr(A_qB_q^{-1})}{\sqrt{\disc(A_q)\disc(B_q)}}.
\end{equation*}
For all $C\in \SL_2(\Z)$ the Laurent polynomial $\Tr(C_q)$ is reciprocal of degree $\len(C)$.
To find the limit as $q\to \infty$, we compute the degrees and dominant terms of the polynomials involved in this expression.
Recall from Proposition \ref{Prop:cosign(A,B)=sign(len(AB)-len(A/B))} that for hyperbolic $A,B \in \PSL_2(\Z)$ whose fixed points are linked we have $\cosign(A,B)= \sign\left(\len AB-\len AB^{-1} \right)$.
This completes the proof.
\end{proof}
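To illustrate Proposition \ref{Prop:cos_q-limit} numerically, one may evaluate the cosine formula along the family $\rho_q$. The sketch below is our own code, using the matrices $L_q$ and $R_q$ displayed above, and the pair $A=RL$, $B=LR$ whose axes are linked with $\cosign(A,B)=+1$:

```python
from math import sqrt

def mul(A, B):
    """Product of 2x2 matrices stored as pairs of rows."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return ((a*e + b*g, a*f + b*h), (c*e + d*g, c*f + d*h))

def rho_q(word, q):
    """Image under rho_q of an L&R-word: L -> L_q, R -> R_q."""
    M = ((1, 0), (0, 1))
    for x in word:
        M = mul(M, ((q, 0), (1, 1/q)) if x == "L" else ((q, 1), (0, 1/q)))
    return M

def tr(M):
    return M[0][0] + M[1][1]

def inv(M):
    """Inverse in SL_2 (determinant one)."""
    (a, b), (c, d) = M
    return ((d, -b), (-c, a))

def cos_angle(wA, wB, q):
    """Cosine of the angle between the axes of A_q and B_q, by the trace formula."""
    A, B = rho_q(wA, q), rho_q(wB, q)
    num = tr(mul(A, B)) - tr(mul(A, inv(B)))
    return num / sqrt((tr(A)**2 - 4) * (tr(B)**2 - 4))
```

For $A=RL$, $B=LR$ one finds $\cos(A_1,B_1)=3/5$ at $q=1$, and the value increases towards $1=\cosign(A,B)$ as $q\to \infty$.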
\section{The unit tangent bundle to the modular orbifold}
\subsection{Modular knots and links}
\begin{comment}
The Lie group $\PSL_2(\R)$ retracts by deformation onto its maximal compact subgroup $\PSO_2(\R)$.
The fibration $\PSO_2(\R) \to \PSL_2(\R) \to \H\P$ over its symmetric space yields the unit tangent bundle of the hyperbolic plane.
The lattice $\PSL_2(\Z)$ acts on the symmetric space $\H\P$ with quotient $\M$.
Its preimage $\widetilde{\PSL}_2(\Z)$ acts on the left of $\widetilde{\PSL_2}(\R)$ with quotient $\U$ the unit tangent bundle of $\M$.
The fundamental group $\pi_1(\PSL_2(\R))=\pi_1(\PSO_2(\R))=\Z$ corresponds to a discrete normal subgroup of the universal cover $\widetilde{\PSL}_2(\R)$ which is thus central.
We find the diagram of fibrations and covers:
\begin{equation*}
\xymatrix{
\Z \ar[d] \ar[r]
& \R \ar[d] \ar[r]
& \S^1 \ar[d]
\\
\widetilde{\PSL}_2(\Z) \ar[d] \ar[r]
& \widetilde{\PSL}_2(\R) \ar[d] \ar[r]
& \U \ar[d]
\\
\PSL_2(\Z) \ar@[c>][r]
& \H\P \ar[r]
& \M
}
\end{equation*}
Observe the columns: the first is a short exact sequences of groups, the second is a trivial fibration between contractible spaces, and the last is a non-trivial fibration.
The lines are all universal covers, and the first one is a short exact sequence of groups.
\end{comment}
The Lie group $\PSL_2(\R)$ identifies with the unit tangent bundle to the hyperbolic plane $\H\P$.
Its lattice $\PSL_2(\Z)$ acts on the left with quotient $\U=\PSL_2(\Z)\backslash\PSL_2(\R)$ the unit tangent bundle to the modular orbifold $\M= \PSL_2(\Z) \backslash \H\P$.
The fundamental group of $\U$ is the preimage of $\PSL_2(\Z)$ in the universal cover of $\PSL_2(\R)$, given by the central extension:
\begin{equation*}
\Id \to \Z \to \widetilde{\PSL}_2(\Z) \to \PSL_2(\Z) \to \Id
\end{equation*}
and we find that $\pi_1(\U)$ is isomorphic to the braid group on three strands, hence to the fundamental group of a trefoil knot's complement.
In fact, the structure of the Seifert fibration $\U \to \M$ reveals that $\U$ is homeomorphic to the complement of a trefoil knot in the sphere (see \cite{Montesinos_Tesselations_1987, Dehornoy-Pinsky_template-pqr_2018} for such a proof).
In particular, any two disjoint loops in $\U$ have a well defined linking number.
The closed hyperbolic geodesics in $\M$ lift to the periodic orbits for the geodesic flow in its unit tangent bundle $\U$, and the primitive ones trace the so called \emph{modular knots}.
Together, they form the \emph{master modular link} whose components are indexed by the primitive hyperbolic conjugacy classes in the modular group.
We wish to relate the geometry and topology of the master modular link with the arithmetic and combinatorial properties of the modular group.
\begin{figure}[h]
\centering
\includegraphics[width=0.36\textwidth]{images/misc/seifert_fib_big.jpg}
%
\includegraphics[width=0.48\textwidth]{images/misc/two_modular_knots_5,3,333,200_200,333,3,5.jpg}
\caption*{
The Seifert fibration $\U\to \M$ and two modular knots, from the \href{http://www.josleys.com/articles/ams_article/Lorenz3.htm}{online article} \cite{GhyLey_Lorenz-Modular-visual_2016} which proposes an animated introduction to the topology and dynamics of $\U$.
}
\end{figure}
\subsection{The Lorenz template}
To describe the isotopy class of the master modular link, we rely on the construction of the Lorenz template and its embedding in $\U$, following \cite[§3.4]{Ghys_knots-dynamics_2006}.
The Lorenz template $\Lorenz$ is the branched surface obtained from the ideal triangle $(0,1,\infty)$ of $\H\P$ by identifying the side $(1,\infty)$ with the side $(0,\infty)$ through $R^{-1}$ and the side $(0,1)$ with the side $(0,\infty)$ through $L^{-1}$.
It is endowed with a semi-flow defined by the horizontal vector field whose periodic orbits correspond to the non-empty cycles on $\{L,R\}$ (this is an interval exchange map).
After the embedding $\Lorenz \hookrightarrow \U$ suggested in the following figure, these periodic orbits form \emph{the master Lorenz link}.
Consider a primitive hyperbolic conjugacy class in $\PSL_2(\Z)$: the geometric axes of its Euclidean representatives intersect the ideal triangle $(0,1,\infty)$ in a collection of segments which quotient to a closed connected loop in $\Lorenz$.
This loop is isotopic to the periodic orbit of the semi-flow indexed by the corresponding $L\& R$-cycle.
More precisely, \'E. Ghys \cite[§3.4]{Ghys_knots-dynamics_2006} showed the following.
\begin{Theorem}
The master modular link formed by all modular knots is isotopic to the master Lorenz link formed by the primitive periodic orbits of the semi-flow on the Lorenz template.
In particular, the Rademacher invariant of a primitive hyperbolic conjugacy class in $\PSL_2(\Z)$ equals the linking number between the corresponding modular knot and the trefoil.
\end{Theorem}
\begin{proof}[Outline of the proof]
The Fuchsian representation $\Bar{\rho}_q\colon \PSL_2(\Z) \to \PSL_2(\R)$ with quotient the hyperbolic orbifold $\M_q$ lifts to $\widetilde{\PSL}_2(\Z)\to \widetilde{\PSL}_2(\R)$ with quotient its unit tangent bundle $\U_q$.
Varying $q\in ]1,+\infty[$ yields isotopies between the manifolds $\U_q$ which are all homeomorphic to the complement of a trefoil's neighbourhood, and conjugacies between their geodesic flows whose periodic orbits are indexed by the primitive conjugacy classes of infinite order in $\PSL_2(\Z)$.
As $q\to \infty$ the manifold $\U_q$ retracts onto a branched surface homeomorphic to the Lorenz template, and the master $q$-modular link isotopes to the periodic orbits of its semi-flow.
\end{proof}
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{images/misc/template-big.jpg}
\caption*{Standard projection on $\S^2$ of the Lorenz template embedded in $\S^3$.}
\label{fig:Lorenz-Template}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.42\textwidth]{images/misc/orbit_deformation_blue_1.jpg}
\includegraphics[width=0.42\textwidth]{images/misc/orbit_deformation_blue_2.jpg}
\caption*{Isotopy of a modular knot to a Lorenz knot. The trefoil is in yellow.}
\label{fig:Modular-knot-in-Lorenz-Template}
\end{figure}
\'E. Ghys concludes his paper \cite{Ghys_knots-dynamics_2006} by asking for an arithmetic interpretation of the linking pairing between modular knots.
Note that the embedded Lorenz template provides a framing for the Lorenz knots, which makes it possible to define their \emph{self-linking number} as the linking number between two parallel copies of the knot in the Lorenz template.
\section{Linking numbers of modular knots}
\label{sec:linking-numbers}
\subsection{Invariants on pairs of conjugacy classes}
\label{subsec:F(A,B)}
The action of $\PSL_2(\Z)$ on $\H\P$ and $\Tree$ enabled us to define conjugacy invariants for pairs $(A,B)$ by comparing their stable subsets in $\H\P$ or $\Tree$.
We now explain how to average those in order to obtain functions of pairs of conjugacy classes.
\subsubsection*{Summing over double cosets}
Consider a group $\Pi$ acting on a space $\Sigma$ and a function $f$ defined on $\Sigma \times \Sigma$ with values in a commutative group $\Lambda$ which is invariant under the diagonal action of $\Pi$:
\begin{equation*}
f\colon \Sigma \times \Sigma \to \Lambda
\qquad
\forall W \in \Pi, \;
\forall a,b \in \Sigma
\: \colon \:
f(a,b)=f(W\cdot a, W\cdot b)
\end{equation*}
We define an invariant $F$ for pairs of $\Pi$-orbits $[a],[b]$ by summing $f$ over all pairs of representatives of the orbits considered modulo the diagonal action of $\Pi$.
The pairs of representatives for the orbits are parametrized by the $(U\cdot a,V\cdot b)$ for $(U,V)\in \Pi / (\Stab a) \times \Pi / (\Stab b)$, and the quotient of this set by the diagonal action of $\Pi$ by left translations is denoted $\Pi / (\Stab a) \times_\Pi \Pi / (\Stab b)$.
Consequently, the sum indexed by $(U,V) \in \left(\Pi/ \Stab a \right) \times_\Pi \left(\Pi/ \Stab b \right)$ defines our desired invariant:
\begin{equation*}
F([a],[b]) = \sum f(U\cdot a, V\cdot b)
\end{equation*}
This can also be written as the sum over double cosets $W\in (\Stab a) \backslash \Pi / (\Stab b)$:
\begin{equation*}
F([a],[b]) = \sum f(a,W\cdot b)
\end{equation*}
because the map $(\Pi / \Stab a) \times (\Pi / \Stab b) \to (\Stab a) \backslash \Pi / (\Stab b)$ sending $(U,V)$ to $W=U^{-1}V$ is surjective, and its fibers are the orbits under the diagonal action of $\Pi$ by left translations.
We will apply this discussion to the action of $\PSL_2(\Z)$ on itself by conjugacy to obtain invariants for pairs of primitive hyperbolic conjugacy classes.
Note that in $\PSL_2(\Z)$, the centraliser of a hyperbolic $A$ is the infinite cyclic subgroup generated by its primitive root, namely the unique primitive element with a positive power equal to $A$.
Our functions $f(a,b)$ will be expressed in terms of geometrical invariants such as $\bir(A,B)$ or $\cos(A,B)$, as well as combinatorial invariants such as $\cross(A,B)$ and $\cosign(A,B)$.
To ensure that the sum is well defined, it must have finite support or converge in a completion of $\Lambda$ for an appropriate norm, and that depends on the behaviour of $f$.
\subsubsection*{Summing over $L\&R$-words}
Consider a function $f$ over the pairs of coprime primitive hyperbolic $A,B\in \PSL_2(\Z)$, which is invariant under the diagonal action of $\PSL_2(\Z)$ on itself by left conjugacy.
In order to compute the sum defining $F([A],[B])$, we may group the terms $f(UAU^{-1},VBV^{-1})$ according to the value of $\cosign(UAU^{-1},VBV^{-1}) \in \{-1,0,1\}$ to obtain:
\begin{equation*}
F = F_- + F_0 + F_+
\end{equation*}
The sum $F_+$ has finite support, contained in the set of pairs of Euclidean representatives for the conjugacy classes of $A,B$.
Similarly the sum $F_-$ has finite support, which we may also index by those pairs of Euclidean representatives using the fact that $\cosign(A,B)=-\cosign(A,SBS^{-1})$. Thus for $A,B\in \PSL_2(\N)$ we have the following computable expressions:
\begin{equation*}
F_+([A],[B])=
\sum
f\left(\sigma^iA,\sigma^jB\right)
%
\qquad
%
F_-([A],[B])
=
\sum
f\left(\sigma^iA,S(\sigma^jB)S^{-1}\right)
\end{equation*}
where the indices $i\in [1,\len A]$, $j\in [1,\len B]$ are such that $\sigma^iA$ and $\sigma^jB$ end with different letters.
One may similarly split the sum $F_0$ in two parts according to the relative orientations of the axes (interchanged by the action of $S$ on one of the components of $f$), but their index sets are infinite.
Suppose that $\cosign(A,B)=0\implies f(A,B)=0$ and $f(A,B^{-1})=\epsilon f(A,B)$ with $\epsilon\in \{\pm 1\}$. This holds for $\cross$ \& $\cosign$ with $\epsilon =-1$, and for their product or their absolute values with $\epsilon =1$. Then $F_0=0$ and $F_-(A,B)=\epsilon \cdot F_+(A,{}^t\!B)$ thus $F(A,B)= F_+(A,B)+\epsilon \cdot F_+(A, {}^t\!B)$.
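Note that for $B\in \PSL_2(\N)$, the transpose ${}^t\!B$ again lies in $\PSL_2(\N)$: since ${}^tL=R$ and ${}^tR=L$, its $L\&R$-word is obtained by reversing the word of $B$ and exchanging the two letters. A one-line Python sketch (our own notation):

```python
def transpose_word(w):
    """L&R-word of the transposed matrix: reverse the word and swap L <-> R."""
    return w[::-1].translate(str.maketrans("LR", "RL"))
```

For instance ${}^t(RLL)=RRL$, which also represents the inverse conjugacy class $[RLL]^{-1}$.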
\subsection{Linking numbers from the action on
$(\Tree,\cord)$}
The projection of the Lorenz template yields a diagram for the Lorenz link in which all crossings are positive, and those can be enumerated using the $L\&R$-cycles of the corresponding modular knots. This yields the algorithmic formula \cite[4.27]{CLS_phdthesis_2022} for computing linking numbers, which was used by Pierre Dehornoy in \cite{Dehornoy_noeuds-lorenz_2011}.
We recast it in Proposition \ref{Prop:algo-sum} in terms of the action of $\PSL_2(\Z)$ on $(\Tree,\cord)$,
after introducing the appropriate quantity to be summed.
\begin{Definition}
For oriented bi-infinite geodesics $g_a,g_b \subset \Tree$ with distinct ends we define:
\begin{equation*}
\crocs(g_a,g_b)
=\left(\across \times \frac{1+\cosign}{2}\right)(g_a, g_b)
=\left(\frac{1+\cross}{2} \times \frac{1+\cosign}{2}\right)(g_a, g_b)
\end{equation*}
Hence $\crocs(g_a, g_b)=1$ only when the axes cross and their orientations coincide along the intersection.
\end{Definition}
We say that $A,B\in \PSL_2(\Z)$ are \emph{coprime} when their positive powers are never conjugate.
\begin{Proposition}[Algorithmic formula: sum over $L\&R$-words]
\label{Prop:algo-sum}
For coprime hyperbolic elements $A,B\in \PSL_2(\N)$ we have:
\begin{equation*}
\lk(A,B)=\frac{1}{2}\sum \crocs(\sigma^iA, \sigma^jB)
\end{equation*}
ranging over all $i\in [1,\len A]$, $j\in [1,\len B]$ such that $\sigma^iA$ and $\sigma^jB$ end with different letters.
\end{Proposition}
\begin{proof}
The monoid $\PSL_2(\N)$ is endowed with the lexicographic order extending $L<R$.
The crossings between the Lorenz knots associated to $A,B\in \PSL_2(\N)$ are in bijection with the pairs of Euclidean representatives whose last letters are in the opposite order of the words themselves, so that either $\sigma^iA=w_AL$, $\sigma^jB=w_BR$ with $w_A>w_B$ or $\sigma^iA=w_AR$, $\sigma^jB=w_BL$ with $w_A<w_B$.
\end{proof}
We deduce from the previous paragraph a group theoretical formula in terms of double cosets.
\begin{Theorem}[Algebraic formula: sum over double cosets]
\label{Prop:algebra-sum}
For coprime primitive hyperbolic elements $A,B\in \PSL_2(\Z)$:
\begin{equation*}
\lk(A,B)=\frac{1}{2}\sum \crocs(\tilde{A}, \tilde{B})
\end{equation*}
where the sum extends over pairs of representatives $\tilde{A}=UAU^{-1}$ and $\tilde{B}=VBV^{-1}$ for the conjugacy classes with
$(U,V)\in \Gamma/\langle A\rangle \times_\Gamma \Gamma/\langle B\rangle$.
\end{Theorem}
\begin{Remark}
\label{Rem:intersection-from-link}
In particular, we recover the intersection number between modular geodesics as:
\begin{equation*}
\lk(A,B)+\lk(A,B^{-1})=\frac{1}{2}\sum \across(\tilde{A},\tilde{B}) = \tfrac{1}{2}\cdot I(A,B)
\end{equation*}
whereas the sum of the cosign over pairs of intersecting axes yields:
\begin{equation*}
\lk(A,B)-\lk(A,B^{-1})=\frac{1}{2}\sum \left(\across \times \cosign\right)(\tilde{A},\tilde{B}).
\end{equation*}
We deduce an efficient algorithm computing the intersection number $I(A,B)$ from the $L\&R$-factorisation of $A,B$ by applying the algorithmic formula to the linking numbers $\lk(A,B)$ and $\lk(A,B^{-1})$.
Note that if $A$ is conjugate to $B$, then $I(A,B)$ is the intersection number between two parallel copies of the modular geodesic, which is twice its self-intersection number (counted as the number of double points).
For instance, the modular geodesic corresponding to $RLL$ has self-intersection \[\tfrac{1}{2}I([RLL],[RLL])=\lk([RLL],[RLL])+\lk([RLL],[RRL])=\tfrac{1}{2}\cdot 4+\tfrac{1}{2}\cdot 2=3.\]
\end{Remark}
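The algorithmic formula of Proposition \ref{Prop:algo-sum} is easy to implement. In the Python sketch below (our own code, for words containing both letters, so that the matrices are hyperbolic), a crossing pair is detected by testing whether the fixed points of the two shifted matrices are linked on $\R\P^1$; since cyclic shifts stay in $\PSL_2(\N)$, any two of them satisfy $\cosign=+1$, so $\crocs$ reduces to $\across$ for the pairs considered:

```python
from math import sqrt

def matrix(word):
    """Product over the word of L = [[1,0],[1,1]] and R = [[1,1],[0,1]]."""
    a, b, c, d = 1, 0, 0, 1
    for x in word:
        if x == "L":
            a, b, c, d = a + b, b, c + d, d
        else:
            a, b, c, d = a, a + b, c, c + d
    return a, b, c, d

def axis(word):
    """Repelling and attracting fixed points of a hyperbolic word (needs c > 0)."""
    a, b, c, d = matrix(word)
    s = sqrt((a + d)**2 - 4)
    return ((a - d) - s) / (2*c), ((a - d) + s) / (2*c)

def linked(wA, wB):
    """Whether the fixed-point pairs are linked on the circle (all lie in R here)."""
    lo, hi = axis(wA)
    x, y = axis(wB)
    return (lo < x < hi) != (lo < y < hi)

def lk(wA, wB):
    """Linking number of the modular knots of two L&R-words (Prop:algo-sum)."""
    shifts = lambda w: [w[i:] + w[:i] for i in range(len(w))]
    crossings = sum(1 for sA in shifts(wA) for sB in shifts(wB)
                    if sA[-1] != sB[-1] and linked(sA, sB))
    return crossings // 2
```

One finds $\lk(RLL,RRL)=1$, while two parallel copies of the $RLL$ knot give $\lk(RLL,RLL)=2$, so the self-intersection number of the $RLL$ geodesic is $3$.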
\section{Linking function on the character variety and its boundary}
\subsection{Linking function on the character variety and its boundary}
Recall the family of Fuchsian representations $\Bar{\rho}_q\colon \PSL_2(\Z) \to \PSL_2(\R)$ parametrized algebraically by $q\in \R^*$.
\begin{Definition}
For primitive hyperbolic $A,B\in \PSL_2(\Z)$, we define the algebraic functions of $q$:
\begin{equation}\label{eq:Link}\tag{$\Link_q$}
\Link_q(A,B)
= \frac{1}{2} \sum \left(\tfrac{\asrt{\bir>1}}{\bir}\right)\left(\tilde{A}_q, \tilde{B}_q\right)
\end{equation}
\begin{equation}\label{eq:Cos}\tag{$\Cos_q$}
\Cos_q(A,B)
= \frac{1}{2} \sum (\across \times \cos)(\tilde{A}_q, \tilde{B}_q)
\end{equation}
by summing over the pairs of representatives $\tilde{A}=UAU^{-1}$ and $\tilde{B}=VBV^{-1}$ for the conjugacy classes of $A$ and $B$ where $(U,V)\in \Gamma/\Stab(A) \times_\Gamma \Gamma/\Stab(B)$.
\end{Definition}
The appearance of $\asrt{\bir>1}=\across$ as a factor in the terms of \ref{eq:Link} and \ref{eq:Cos} amounts to restricting the summations over pairs of matrices whose axes intersect.
Hence the support of the sums corresponds to the intersection points of the modular geodesics $\gamma_A$ and $\gamma_B$ associated to the conjugacy classes, which must be counted with appropriate multiplicity when $A$ or $B$ is not primitive. Thus:
\begin{equation*}
\Link_q(A,B)
= \frac{1}{2} \sum \left(\cos \tfrac{\theta}{2}\right)^2
\qquad \mathrm{and} \qquad
\Cos_q(A,B)
= \frac{1}{2} \sum \left(\cos \theta\right).
\end{equation*}
Observe that since $\bir(A,B)^{-1}+\bir(A,B^{-1})^{-1}=1$ we have $\Link_q(A,B)+\Link_q(A,B^{-1})=\tfrac{1}{2}I(A,B)$.
\begin{Conjecture}
The angles turning from $\gamma_{A_q}$ to $\gamma_{B_q}$ in the direction prescribed by the orientation of $\M_q$ have cosines $(\cross \times \cos)(\tilde{A}_q,\tilde{B}_q)$: we believe that they sum up to $0$, as explained in \ref{subsec:Link-Fuchsian-group}.
\end{Conjecture}
\begin{Theorem}
\label{Thm:Bir(A,B)-->lk(A,B)}
For primitive hyperbolic conjugacy classes $[A],[B]$ in $\PSL_2(\Z)$ we have:
\begin{align*}
&\Link_q(A,B) \xrightarrow[q\to \infty]{} \lk(A,B)
\\
&\Cos_q(A,B) \xrightarrow[q\to \infty]{}
\lk(A,B)-\lk(A,B^{-1})
=2\lk(A,B)-\tfrac{1}{2}I(A, B)
\end{align*}
\end{Theorem}
\begin{proof}
Recall from Lemma \ref{Lem:cos-cosh-sinh} the relation $1/\bir(A_q,B_q)=\tfrac{1}{2}(1+\cos(A_q,B_q))$ and from Proposition \ref{Prop:cos_q-limit} the limit $\cos(A_q,B_q)\to \cosign(A,B)$ as $q\to \infty$. Hence the terms of the sum defining \ref{eq:Link} converge to those in the sum of Proposition \ref{Prop:algebra-sum}. The limit of \ref{eq:Cos} follows from Remark \ref{Rem:intersection-from-link}.
\end{proof}
Let us display the graphs of $\textcolor{blue}{q\mapsto2\Link_q(A,B)}$ and $\textcolor{red}{q\mapsto2\Link_q(A,B^{-1})}$ along with their average $\textcolor{black!50!green}{\tfrac{1}{2}I(A,B)}$ for some pairs $A,B\in \PSL_2(\N)$. The legend $A=[a_0,a_1,\dots]$ means $A=R^{a_0}L^{a_1}\dots$.
\begin{figure}[h]
\centering
\includegraphics[width=0.36\textwidth]{images/python/qLink_1,2-1,2_sample=42.png}
\hspace{-0.9cm}
\includegraphics[width=0.36\textwidth]{images/python/qLink_1,2-1,2,3,4_sample=42.png}
\hspace{-0.9cm}
\includegraphics[width=0.36\textwidth]{images/python/qLink_3,1,2,3-7,2,1,1_sample=42.png}
\vspace{-0.4cm}
\caption*{\textcolor{blue}{$\Link_q(A,B)$} interpolates between the arithmetic at $1$ and the topology at $+\infty$.}
\end{figure}
\subsection{Graphs of $\Link_q(A,B)$ for $q\in \C$}
Finally, we represent some graphs of $\Link_q(A,B)$ for $q\in \C$. Since $\Link_q=\Link_{1/q}$ we restrict to $\lvert q \rvert < 1+\epsilon$ for some $\epsilon>0$ chosen according to aesthetic criteria.
For this we assign a colour to each point of the complex plane using the HSV colour scheme: the hue varies with the argument, and the brightness varies with the modulus.
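A minimal Python sketch of such a colouring function (the precise brightness profile below is our own choice, not necessarily the one used to produce the pictures):

```python
import cmath
import colorsys

def domain_colour(z):
    """RGB colour of a complex value: hue from arg(z), brightness from |z|."""
    hue = (cmath.phase(z) / (2 * cmath.pi)) % 1.0
    value = 1.0 - 1.0 / (1.0 + abs(z))   # black at z = 0, brighter as |z| grows
    return colorsys.hsv_to_rgb(hue, 1.0, value)
```

Evaluating this on a grid of complex values of $\Link_q(A,B)$ produces images such as those displayed below.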
\begin{figure}[h]
\centering
\vspace{-0.2cm}
\includegraphics[width=0.42\textwidth]{images/python/Identity_square=2_res=640}
\vspace{-0.4cm}
\caption*{The identity map for $q\in \C$ with $\lvert q \rvert < 2$.}
\end{figure}
\begin{figure}[h]
\centering
\vspace{-0.4cm}
\hspace{-3cm}
\includegraphics[width=0.52\textwidth]{images/python/qLink_1,3-1,3_complex_large_640x640.png}
\hspace{-3cm}
\includegraphics[width=0.52\textwidth]{images/python/qLink_1,5-1,5_complex_large_640x640.png}
\hspace{-3cm}
\includegraphics[width=0.52\textwidth]{images/python/qLink_1,8-1,8_complex_large_640x640.png}
\\
\hspace{-3cm}
\includegraphics[width=0.52\textwidth]{images/python/qLink_1,3-3,1_complex_large_640x640.png}
\hspace{-3cm}
\includegraphics[width=0.52\textwidth]{images/python/qLink_1,5-5,1_complex_large_640x640.png}
\hspace{-3cm}
\includegraphics[width=0.52\textwidth]{images/python/qLink_1,8-8,1_complex_large_640x640.png}
\hspace{-0.4cm}
\caption*{Graphs of $\Link_q(A,B)$ for $q\in \C$ with $\lvert q\rvert < 1.6$.}
\end{figure}
The main observation is that the zeros and poles of $\Link_q(A,B)$ seem to concentrate on the unit circle.
This is neither surprising nor obvious, as we explain in the next paragraph.
\subsection{Locating the zeros of \texorpdfstring{$\Link_q$}{L_q}}
\label{subsec:L_q(A,B)=0}
Let us first recall \cite[Proposition 5.16]{CLS_phdthesis_2022}.
A primitive hyperbolic conjugacy class in $\PSL_2(\Z)=\pi_1(\M)$ corresponds to a primitive modular geodesic $\gamma_A\subset \M$. It lifts to a modular knot $k_A \subset \U$, which in turn yields a conjugacy class in $\BB_3=\pi_1(\U)$.
A conjugacy class in the braid group on three strands defines, by taking its closure, a link $\sigma_A$ in a solid torus.
In \cite[Proposition 5.16]{CLS_phdthesis_2022} we relate the Alexander polynomial $\Delta(\sigma_A)\in \Z[t^{\pm 1}]$ of this link $\sigma_A$ to the Fricke polynomial $\Tr A_q \in \Z[q^{\pm 1}]$ of the modular geodesic $\gamma_A$.
\begin{Proposition}
For a primitive hyperbolic $A\in \PSL_2(\Z)$, the Alexander polynomial of the link $\sigma_A$ is given in terms of $q=\sqrt{-t}$ by: \[\Delta(\sigma_A)=\tfrac{q^{\Rad(A)}-\Tr(A_q)+q^{-\Rad(A)}}{(q-q^{-1})^2}\]
\end{Proposition}
\begin{figure}[h]
\centering
\vspace{-0.4cm}
\hspace{-3cm}
\includegraphics[width=0.52\textwidth]{images/python/qLink_2,3-2,3_complex_large_640x640.png}
\hspace{-3cm}
\includegraphics[width=0.52\textwidth]{images/python/qLink_1,2-3,1_complex_large_640x640.png}
\hspace{-3cm}
\includegraphics[width=0.52\textwidth]{images/python/qLink_1,3-1,5_complex_large_640x640.png}
\\ \vspace{-0.4cm}
\hspace{-3cm}
\includegraphics[width=0.52\textwidth]{images/python/qLink_2,3-3,2_complex_large_640x640.png}
\hspace{-3cm}
\includegraphics[width=0.52\textwidth]{images/python/qLink_1,3-2,3_complex_large_640x640.png}
\hspace{-3cm}
\includegraphics[width=0.52\textwidth]{images/python/qLink_3,1,2,3-7,2,1,1_complex_480x480.png}
\\
\vspace{-0.4cm}
\caption*{Graphs of $\Link_q(A,B)$ for $q\in \C$ with $\lvert q\rvert < 1.6$.}
\end{figure}
Locating the zeros of Alexander polynomials of various classes of knots and links has been the subject of various conjectures and results. For instance \cite{Stoimenov_alexander-hoste-conjecture_2019} studies a conjecture of Hoste for links with braid index $3$.
We should also mention that \cite{Dehornoy_zeros-alex-modular-knots_2015} has shown such a concentration property for the zeroes of the Alexander polynomial of a Lorenz knot: they lie in an annulus whose inner and outer radii are bounded in terms of the genus and the braid index of the knot.
Now recall that $\Link_q(A,B)-\Link_q(A,B^{-1})$ can be expressed as a finite sum of terms of the form:
\begin{equation*}
\cos(A_q,B_q) = \tfrac{\disc(A_qB_q)-\disc(A_qB_q^{-1})}{\sqrt{\disc A_q\disc B_q}}
\qquad \mathrm{where} \qquad
\disc(C_q)= (\Tr C_q)^2-4
\end{equation*}
This is why one may guess a concentration property for the zeros of $\Link_q$ around the unit circle.
Still, it would remain a challenge to prove it.
\newpage
\section{Linking numbers and homogeneous quasimorphisms}
\label{sec:quasi-morphism}
\subsection{Combinatorial formula: sum of linked patterns}
We now derive a combinatorial formula for the linking numbers \cite[Proposition 4.34]{CLS_phdthesis_2022} arising from a different count of the crossings in the Lorenz template. It follows from the algorithmic formula, but we propose a visual proof.
The monoid $\PSL_2(\N)$ freely generated by $L\&R$ is given the lexicographic order extending $L<R$.
The monoid $\PSL_2(\N)\setminus\{\Id\}$ maps to the set $\{L,R\}^{\N}$ of infinite binary sequences by sending a finite word $A$ to its periodisation $A^\infty$. This map is increasing and injective in restriction to primitive words.
We denote by $\sigma$ the Bernoulli shift on $\{L,R\}^\N$ which removes the first letter, as well as the cyclic shift on $\PSL_2(\N)\setminus\{\Id\}$ which moves the first letter to the end.
These shifts are intertwined by the periodisation map: $(\sigma^j A)^\infty = \sigma^j(A^\infty)$.
For a pattern $P\in \PSL_2(\N)$ and an infinite order $A\in \PSL_2(\N)$, let $\pref_P(A^\infty)=\asrt{A^\infty\in P\cdot\PSL_2(\N)}\in \{0,1\}$ tell whether $P$ is a prefix of $A^\infty$, and $\occ_P(A) = \sum_{j=1}^{\len A}
\pref_P\left(\sigma^jA^{\infty} \right)$ count the number of cyclic occurrences of $P$ in $A\bmod{\sigma}$.
Recall that $A,B\in \PSL_2(\Z)$ are coprime when their positive powers are never conjugate.
Thus $A,B\in \PSL_2(\N)$ are not coprime when they admit cyclic permutations generating submonoids with non-trivial intersection, in other words if $A^\infty = B^\infty \bmod{\sigma}$.
\begin{Proposition}[Combinatorial formula: sum of linked patterns]
\label{Prop:sum-linked-patterns}
For coprime hyperbolic $A,B\in \PSL_2(\N)$ the corresponding modular knots have linking number:
\begin{equation}
\label{eq:sum-linked-patterns}\tag{SLP}
\lk(A,B) = \frac{1}{2} \sum_{w}
\begin{pmatrix}
\occ_{RwL}(A)\cdot \occ_{LwR}(B)
\\+\\
\occ_{LwR}(A)\cdot \occ_{RwL}(B)
\end{pmatrix}
\end{equation}
where the summation extends over all words $w\in \PSL_2(\N)$ including the empty one.
\end{Proposition}
\begin{proof}[Visual proof sketch]
Split the Lorenz template by extending the dividing line backwards in time, and observe the crossings appearing in its standard planar projection: they occur in regions arranged according to a binary tree indexed by pairs of words of the form $(RwL,LwR)$.
\begin{comment}
\begin{figure}[h]
\centering
\includegraphics[ width=0.49\textwidth]{images/misc/template_coupe_2.jpg}
\hfill
\includegraphics[ width=0.49\textwidth]{images/misc/template_coupe_3.jpg}
\end{figure}
\end{comment}
\end{proof}
\begin{Remark}
\label{rem:long-patterns}
For $\len(P)\ge \len(A)$ we have $\occ_P(A)>0$ if and only if $A^\infty = P^\infty \bmod{\sigma}$, which is equivalent to the non-coprimality of $P$ and $A$.
Hence the coprimality assumption on $A$ and $B$ ensures that the support of the sum \eqref{eq:sum-linked-patterns} is contained in the set of $w$ such that $\len w < \max\{\len A,\len B\}$.
If $A$ and $B$ are not coprime, then they are conjugate to positive powers $C^m$ and $C^n$ of a primitive $C\in \PSL_2(\N)$, and restricting the sum \eqref{eq:sum-linked-patterns} to the indices $w$ with $\len w < \max\{\len A,\len B\}$ yields $mn$ times the self-linking number of the modular knot associated to $C$ with the Lorenz framing.
\end{Remark}
\begin{Remark}
An $L\&R$-cycle $A \bmod{\sigma}$ has a multiset of $L$-exponents and a multiset of $R$-exponents.
Formula \eqref{eq:sum-linked-patterns} shows that $\lk(A,RL^{m+1})-\lk(A,RL^{m})$ counts the number of $L$-exponents which are $>m\ge 1$ and that $\lk(A,LR^{n+1})-\lk(A,LR^{n})$ counts the number of $R$-exponents which are $>n\ge 1$.
\end{Remark}
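The formula \eqref{eq:sum-linked-patterns} is straightforward to implement by brute force. The following Python sketch is our illustration, not part of the paper; the names \texttt{occ} and \texttt{lk} are ours. It counts cyclic occurrences directly and, relying on Remark \ref{rem:long-patterns}, only sums over patterns $w$ with $\len w < \max\{\len A, \len B\}$, which suffices for coprime words.

```python
from itertools import product

def occ(P, A):
    """Number of cyclic occurrences of the pattern P in the cyclic word A,
    i.e. the number of rotations of A whose periodisation starts with P."""
    S = A * (len(P) // len(A) + 2)  # enough periods to test every rotation
    return sum(S[i:i + len(P)] == P for i in range(len(A)))

def lk(A, B):
    """Linking number of the modular knots of coprime hyperbolic L&R-words
    A, B, computed via the sum-of-linked-patterns formula (SLP)."""
    m = max(len(A), len(B))
    total = 0
    for n in range(m):  # patterns w with len(w) < m suffice for coprime A, B
        for t in product("LR", repeat=n):
            w = "".join(t)
            total += occ("R" + w + "L", A) * occ("L" + w + "R", B)
            total += occ("L" + w + "R", A) * occ("R" + w + "L", B)
    assert total % 2 == 0
    return total // 2
```

For instance `lk("RL", "RRL")` evaluates to $1$ (only the empty pattern $w$ contributes), and the symmetry $\lk(A,B)=\lk(B,A)$ is manifest from the two summands exchanging roles.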
\begin{Theorem}
\label{Thm:linkeq_implies_conjugate}
If hyperbolic $A,B\in \PSL_2(\Z)$ are link equivalent, namely $\lk(A,C)=\lk(B,C)$ for all hyperbolic $C \in \PSL_2(\Z)$,
then they are conjugate.
\end{Theorem}
\begin{proof}
The set $\mathcal{Z}=R\PSL_2(\N)L\sqcup L\PSL_2(\N)R$ of $L\&R$-words which start \& end with different letters is endowed with the involution $p\mapsto \Bar{p}$ exchanging the extremal letters, having no fixed points.
The free $\Z$-module $\Omega$ generated by the set $\mathcal{Z}$ is naturally decomposed as the direct sum of rank-$2$ sub-modules generated by pairs $\{z,\Bar{z}\}$.
It is therefore endowed with a non-degenerate symmetric bilinear form $\Omega \times \Omega \to \Z$, given by the direct sum of the hyperbolic structures on those planes:
\begin{equation*}
\Omega
= \bigoplus_{z\in \mathcal{Z}} \Z\cdot z
= \bigoplus_{z>\Bar{z}} \Z\cdot z \oplus \Z\cdot \Bar{z}
\qquad
(a\cdot b)
= \sum_{z\in \mathcal{Z}} a_z b_{\Bar{z}}
= \sum_{z>\Bar{z}} (a_z b_{\Bar{z}} + a_{\Bar{z}} b_{z})
\end{equation*}
The length function $\len \colon \PSL_2(\N) \to \N$ yields a filtration of the set $\mathcal{Z}$ by the chain of subsets $\mathcal{Z}_n = \{z \in \mathcal{Z} \mid \len(z)\le n\}$ of cardinality $2^{n-1}$, which is invariant by the involution.
This induces a filtration of the module $\Omega$ by the corresponding chain of sub-modules $\Omega_n$ with ranks $2^{n-1}$, all invariant under the orthogonal symmetry.
Thus each unimodular quadratic $\Z$-module $\Omega_n$ (decomposed as a direct sum of hyperbolic planes) is canonically isomorphic to its dual $\Omega_n^*$.
Now consider cyclic words $A,B\in \PSL_2(\N) \bmod{\sigma}$ corresponding to link equivalent hyperbolic conjugacy classes in $\PSL_2(\Z)$.
Let $m=\max\{\len(A),\len(B)\}$ and consider the linear forms on $\Omega_{m}$ defined by the sequences $(\occ_z(A))_{z}$ and $(\occ_z(B))_z$ for $z\in \mathcal{Z}_m$.
Since $A,B$ are link equivalent, the isomorphism $\Omega_m\to \Omega_m^*$ together with Proposition \ref{Prop:sum-linked-patterns} and Remark \ref{rem:long-patterns} implies that these linear forms coincide, so that $\occ_z(A)=\occ_z(B)$ for all $z\in \mathcal{Z}_m$.
In particular, for $z$ a linear representative of $B$ we find that $\occ_B(A)=\occ_B(B)>0$ whereby $A=B \bmod{\sigma}$.
\end{proof}
\begin{comment}
\begin{Remark}
Notice that the cyclic shift acting on $\mathcal{Z}$ preserves the $\mathcal{Z}_n$, but does not commute with the involution for $n>2$ as these two actually generate the full group of permutations $\mathfrak{S}_n$.
\end{Remark}
\end{comment}
\subsection{Homogeneous quasi-morphisms on the modular group}
For a group $\Pi$, a function $f\colon \Pi \to \R$ is called a \emph{quasi-morphism} if it has a bounded derivative:
\[df(A,B)=f(B)-f(AB)+f(A).\]
A quasi-morphism $f\colon \Pi \to \R$ is called \emph{homogeneous} if it is a morphism in restriction to the abelian subgroups of $\Pi$ (which in the case $\Pi = \Gamma$ means that $f(A^n)=nf(A)$ for all $A\in \Gamma$ and $n\in \N$).
Observe that a homogeneous quasi-morphism is bounded only if it is trivial; it is necessarily constant on conjugacy classes, and vanishes on torsion classes.
The real vector space $PX(\Pi;\R)$ of homogeneous quasi-morphisms is a Banach space for the norm $\lVert df \rVert_\infty$, as was shown in \cite{MatsuMorita_Hb(Homeo)_1985, Ivanov_H2b(G)-Banach_1988}.
For a pattern $P\in \PSL_2(\N)$ we define the $P$-asymmetry of an infinite order $A\in\PSL_2(\N)$ by \[\mes_P(A)=\occ_P(A)-\occ_{{}^t\!P}(A)\]
Notice that $\mes_P(A)=\occ_P(A)-\occ_P({}^t\!A)$ and that ${}^t\!A$ is conjugate to $A^{-1}$ by $S\in \PSL_2(\Z)$.
Extending $\mes_P(A)=0$ for elliptic $A$ yields a conjugacy invariant function $\mes_P \colon \PSL_2(\Z)\to \Z$.
In particular for $P=R$ we recover the \emph{Rademacher function} as $\mes_R(A) = \Rad(A)$.
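The asymmetry $\mes_P$ is easy to compute on $L\&R$-words. The Python sketch below is ours (not from the paper); it uses the fact that matrix transposition reverses a word and exchanges $L \leftrightarrow R$, and for $P=R$ it recovers the Rademacher number $\#R-\#L$.

```python
def occ(P, A):
    """Cyclic occurrences of the pattern P in the cyclic word A."""
    S = A * (len(P) // len(A) + 2)
    return sum(S[i:i + len(P)] == P for i in range(len(A)))

def transpose(W):
    """Matrix transposition reverses an L&R-word and exchanges L <-> R."""
    return "".join({"L": "R", "R": "L"}[c] for c in reversed(W))

def mes(P, A):
    """P-asymmetry mes_P(A) = occ_P(A) - occ_{tP}(A) of the cyclic word A."""
    return occ(P, A) - occ(transpose(P), A)

def rad(A):
    """Rademacher number #R - #L of the cyclic word A."""
    return A.count("R") - A.count("L")
```

One checks for instance that `mes("R", A) == rad(A)`, that `mes(P, A)` equals `occ(P, A) - occ(P, transpose(A))`, and the homogeneity `mes(P, A * n) == n * mes(P, A)` on the positive monoid.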
\begin{Lemma}
\label{Lem:cocycle}
For all $P\in \PSL_2(\N)$, the function $\mes_P \colon \PSL_2(\Z)\to \Z$ is a homogeneous quasi-morphism. If $P\ne {}^t\!P$ then $\mes_P$ is unbounded, and if $P$ does not overlap itself then $\lVert d\mes_P \rVert_\infty \le 6$.
\end{Lemma}
\begin{proof}
The proof relies on the ideas in \cite{BarGhys_cocycles-actions-arbres_1991} (see also \cite[Lemma 5.3]{Grigorchuk_bounded-cohomology_1995}).
\end{proof}
\begin{Theorem}
\label{Thm:Cos_A}
For every hyperbolic $A\in \PSL_2(\Z)$, the function $\Cos_A\colon B\mapsto \lk(A,B)-\lk(A^{-1}, B)$ is a homogeneous quasi-morphism $\PSL_2(\Z)\to \Z$, which is unbounded unless $A$ is conjugate to $A^{-1}$.
It can be computed for $A,B\in \PSL_2(\N)$ as:
\begin{equation}
\label{eq:Cos_A} \tag{$\Cos_A$}
\Cos_A(B) = \lk(A,B)-\lk(A, {}^t\!B)
= \frac{1}{2} \sum_{w}
\begin{pmatrix}
\occ_{RwL}(A)\cdot \mes_{LwR}(B)
\\+\\
\occ_{LwR}(A)\cdot \mes_{RwL}(B)
\end{pmatrix}
\end{equation}
where the summation extends over all words $w\in \PSL_2(\N)$ with $\len(w)<\max\{\len A, \len B\}$.
\end{Theorem}
\begin{proof}
The quantity $\Cos_A(B)$ is homogeneous in $A$ and $B$.
Let us explain why $\Cos_A$ is a quasi-morphism for $A$ primitive.
Recall that $A^{-1}$ and ${}^t\!A$ are conjugate by $S$ and notice that $\lk(A^{-1},B)=\lk(A, B^{-1})$.
Therefore \eqref{eq:sum-linked-patterns} yields \eqref{eq:Cos_A} and $d\Cos_A=\tfrac{1}{2}\sum_w \left(\occ_{RwL}(A) \cdot d\mes_{LwR}+\occ_{LwR}(A) \cdot d\mes_{RwL}\right)$.
Since $\occ_P(A)\le \len(A)$, it is enough to prove by Lemma \ref{Lem:cocycle} that for every $X,Y\in \PSL_2(\Z)$, the sum $d\Cos_A(X,Y)$ contains at most $\len(A)^2$ non-zero terms with $\len(w)\ge \len(A)$.
The $L\&R$-words $w$ such that $\occ_{LwR}(A)>0$ correspond to the triples $1\le m,n\le \len(A)$ and $k\in \N$ such that $\sigma^mA=LuRv$, $\sigma^nA=RvLu$, and $LwR=L(uRvL)^kuR$ for some $L\& R$-words $u,v$. In this situation we write $P_{mn}^k=LwR=L(uRvL)^kuR$ and $Q_{mn}^k=RwL=R(uRvL)^kuL$.
By construction (and the primitivity of $A$), two distinct $Q_{mn}^k$ cannot overlap except along a prefix and suffix of length $<\len(A)$.
We may thus adapt the argument for \cite[Proposition 5.10]{Grigorchuk_bounded-cohomology_1995}.
The quantity $d\mes_Q(X,Y)$ measures the ``$Q$-perimeter'' of a tripod $(*,X*,XY*)$ in the tree $\Tree$, and it is non-zero only if $Q$ can be matched along a portion covering its incenter.
But for each $(m,n)$, at most two values of $k>1$ may lead to such patterns $Q_{mn}^k$.
The same reasoning applies with $L$\&$R$ interchanged.
This proves the bound on the number of non-zero summands for $d\Cos_A$.
Finally by Theorem \ref{Thm:linkeq_implies_conjugate} we have $d\Cos_A=0$ only if $A$ is conjugate to ${}^t\!A$.
\end{proof}
Let $\mathcal{P}$ denote the set of \emph{Lyndon words}, namely the $L\&R$-words which are greater than each of their other cyclic permutations.
Notice that such words are primitive, and cannot overlap themselves.
Hence if a Lyndon word is equal to a cyclic permutation of its transpose then it is actually symmetric. Let $\mathcal{P}_0$ be the subset of symmetric Lyndon words and choose a partition $\mathcal{P}\setminus \mathcal{P}_0=\mathcal{P}_-\sqcup \mathcal{P}_+$ into two subsets which are in bijection by the transposition.
Observe that $\mathcal{P}$ indexes the set of primitive infinite order conjugacy classes in $\Gamma$, and $\mathcal{P}_0$ the subset of those which are stable under inversion.
Of course $\Id \in \mathcal{P}_0$, we may choose $R\in \mathcal{P}_+$, and denote $\Cos_R:=\mes_R=\Rad$ by convention.
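The defining property of Lyndon words and the partition by symmetry can be checked mechanically. Here is a small Python sketch, ours rather than the paper's, following the convention above where Lyndon words are maximal among their rotations for the order $L<R$ (which ASCII happens to respect).

```python
def rotations(W):
    """Set of cyclic permutations of the word W."""
    return {W[i:] + W[:i] for i in range(len(W))}

def transpose(W):
    """Matrix transposition: reverse the word and exchange L <-> R."""
    return "".join({"L": "R", "R": "L"}[c] for c in reversed(W))

def is_lyndon(W):
    """Maximal among its cyclic permutations (L < R holds in ASCII),
    and primitive: all len(W) rotations are distinct."""
    rots = rotations(W)
    return len(rots) == len(W) and all(W >= V for V in rots)

def is_symmetric(W):
    """A Lyndon word lies in P_0 iff its transpose is one of its rotations."""
    return transpose(W) in rotations(W)
```

For example `RL` is a symmetric Lyndon word, `RRL` is Lyndon but not symmetric (its transpose is `RLL`), and `RLRL` is rejected as non-primitive.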
\begin{comment}
\begin{Lemma}
\label{Cor:mes_P-basis}
The collection of $\mes_P\in PX(\Gamma;\R)$ for $P\in \mathcal{P}_+$ is linearly independant, and every element $f\in PX(\Gamma;\R)$ can be written as $f=\sum_{P\in \mathcal{P}_+} mf_P \mes_P$ for unique $mf_P \in \R$.
\end{Lemma}
\begin{proof}
This is a reformulation of \cite[Theorem 5.11]{Grigorchuk_bounded-cohomology_1995}.
\end{proof}
\end{comment}
\begin{Proposition}
The collection of $\Cos_A\in PX(\Gamma;\R)$ for $A\in \mathcal{P}_+$ is linearly independent.
\end{Proposition}
\begin{proof}
Consider distinct Lyndon words $A_1,\dots,A_k\in \mathcal{P}_+\setminus\{R\}$, and let $A_1,\dots,A_j$ be those of maximal length $m$.
The $\Cos_{A_j}$ are linearly independent of $\Cos_R$ as $\Cos_{A_j}(R)=0$ whereas $\Cos_R(R)=1$.
Suppose by contradiction that we have a linear relation $\sum r_i \Cos_{A_i}=0$ for $r_i\in \R^*$.
As in the proof of Theorem \ref{Thm:linkeq_implies_conjugate}, this restricts to a linear relation in $\Omega_m^*$, and using the isomorphism $\Omega_m\to \Omega_m^*$ we find that $\sum r_i \occ_P(A_i) = \sum r_i \occ_P({}^t\!A_i)$ for all $P\in \mathcal{Z}_m$.
For $P=A_j$, we have $\occ_{P}(A_i)=0=\occ_{P}({}^t\!A_i)$ for all $i>j$ because $A_j$ cannot overlap itself, and $\occ_{P}(A_i)=0=\occ_{P}({}^t\!A_i)$ for $i<j$ because the Lyndon word $A_j$ is different from $A_i$.
We also have $\occ_{P}({}^t\!A_j)=0$ since $A_j$ is not conjugate to its inverse, whence not link equivalent to its transpose by Theorem \ref{Thm:linkeq_implies_conjugate}.
Since $\occ_{P}(A_j)=1$ we have $r_j=0$, which is the desired contradiction.
\end{proof}
\begin{Proposition}
Every $f\in PX(\Gamma;\R)$ can be written as $f=\sum_{A\in \mathcal{P}_+} cf_A \cdot \Cos_A$ for unique $cf_A \in \R$.
\end{Proposition}
\begin{proof}
A homogeneous quasi-morphism $f\in PX(\Gamma;\R)$ descends to a function on the set of infinite order primitive conjugacy classes, or on $\mathcal{P}$. It must vanish on $\mathcal{P}_0$ and change sign under transposition, so its restriction to $\mathcal{P}_+$ uniquely determines $f$.
Since $\Cos_A(R)=0$ for all $A\in \mathcal{P}_+\setminus \{R\}$ and $\Cos_R(R)=1$ we may assume $f(R)=0$ from now on.
Recall the structure of the filtered quadratic space $\Omega$ introduced in the previous proofs.
Notice that the cyclic shift acting on $\mathcal{Z}$ preserves the $\mathcal{Z}_n$ (but does not commute with the involution $z\mapsto \Bar{z}$ for $n>2$ as these two actually generate the full group of permutations $\mathfrak{S}_n$).
Fix $m\in \N$ and consider the subspace $\Lambda^*_m \subset \Omega^*_m$ of elements which are invariant under the shift $\sigma$ and change sign under transposition.
Its elements are uniquely determined by their values on $\mathcal{L}_m := \mathcal{Z}_m \cap \mathcal{P}_+$.
It contains the $(\mes_P(z))_{z\in \mathcal{Z}}$ for $P\in \mathcal{L}_m$, as well as the $(\Cos_A(z))_{z\in \mathcal{Z}}$ for $A\in \mathcal{L}_m$.
We know from the previous proof that the latter family is free, and can be expressed as linear combinations of the former, so both of these form bases of $\Lambda^*_m$, whose dimension equals the cardinality of $\mathcal{L}_m$.
Hence the restriction $f_m\in \Lambda^*_m$ of $f$ to $\mathcal{L}_m$ can be expressed as a linear combination of the $\mes_P$ or of the $\Cos_A$.
We thus have a projective system of elements $f_m$ in the vector spaces $\Lambda^*_{m}$ with compatible bases, so the coefficients of the limit $f=\varprojlim f_m$ are well defined in either basis.
\end{proof}
In passing, we recovered the following reformulation of \cite[Theorem 5.11]{Grigorchuk_bounded-cohomology_1995}.
\begin{Corollary}
\label{Cor:mes_P-basis}
The collection of $\mes_P\in PX(\Gamma;\R)$ for $P\in \mathcal{P}_+$ is linearly independent, and every element $f\in PX(\Gamma;\R)$ can be written as $f=\sum_{P\in \mathcal{P}_+} mf_P \mes_P$ for unique $mf_P \in \R$.
\end{Corollary}
\section{Further directions of research}
\subsection{Linking forms of Fuchsian groups}
\label{subsec:Link-Fuchsian-group}
To begin with, we may compare the definitions of the functions $\Link_q$ and $\Cos_q$ and their limiting behaviour at $q=\infty$ with similar considerations which have been made for non-oriented loops in a closed surface $S$ of genus $g\ge 2$.
Such loops, corresponding to the conjugacy classes of $\alpha,\beta\in \pi_1(S)$ up to inversion, define trace functions $\Tr(\alpha), \Tr(\beta)$ on the $\SL_2(\C)$-character variety of $\pi_1(S)$, whose real locus contains the Teichm\"uller space of $S$ as a Zariski dense open set.
This character variety carries a natural symplectic structure \cite{Goldman_symplectic-nature-pi1_1984}, given by the Weil-Petersson symplectic form.
The sum $\Cos_q(A,B)$ looks very much like Wolpert's cosine formula \cite{Wolpert_fenchel-nielsen-deformation_1982, Wolpert_formula-cosine-Fenchel-Nielsen_1982} computing the Poisson bracket $\{\Tr(\alpha),\Tr(\beta)\}$ of the trace functions.
In fact, Wolpert sums the $\cross(\alpha,\beta)\cos(\alpha,\beta)$ over the intersection points $p\in \alpha\cap \beta$, that is the cosines of the angles turning from $\alpha$ to $\beta$ in the direction prescribed by the orientation of the surface.
Hence while our cosine formula is a symmetric formula of oriented geodesics, Wolpert's cosine formula yields a skew-symmetric function of non-oriented geodesics.
Note however that the Teichm\"uller space of $\M$ is reduced to a point so any Poisson structure in the usual sense would be trivial, so in our setting we expect Wolpert's sum to be identically zero (as corroborated by our computer experimentation).
Moreover, the Weil-Petersson symplectic form has been extended to several compactifications of the character variety \cite{PapadoPenne_forme-symplectic-bord-Teichmuller_1991, Sozen-Bonahon_weil-petersson-thurston-symplectic_2001, MS_ML-Newton-Poisson_2021}.
The limits of the Poisson bracket $\{\Tr(\alpha), \Tr(\beta)\}$ at the respective boundary points have been interpreted in \cite[Proposition 6]{Bonahon_earthquake-mesaured-laminations_1992} and \cite{MS_ML-Newton-Poisson_2021}.
Thus, we may generalise the definitions of our functions \ref{eq:Link} \& \ref{eq:Cos} to oriented geodesics in hyperbolic surfaces and ask for an interpretation of their limits at boundary points of the Teichm\"uller space.
We believe that \ref{eq:Link} \& \ref{eq:Cos} extend by continuity to pairs $A,B$ of oriented geodesic currents.
This should be analogous to the extension of the intersection form described in Bonahon \cite{Bonahon_geodesic-currents_1988}.
Pursuing this direction, one may also wish to replace $\rho$ with a representation $\Gamma \to \operatorname{Homeo}^+(\S^1)$, a metric of negative curvature, or a generalised cross-ratio \cite{Otal_symplectique-bord-birapport_1992, LabourieMcShane_cross-ratios_2009}.
The aim would be to think of \ref{eq:Link} \& \ref{eq:Cos} as differential forms on the ``tangent bundle'' to these spaces of representations, metrics or cross-ratios, considered up to appropriate equivalence relations.
For any group $\Pi$, the semi-conjugacy classes of representations $\Pi \to \operatorname{Homeo}^+(\S^1)$ correspond \cite{Ghys_H2b(Homeo(S1);R)_1984} to the ``integral points of the unit ball'' in the second bounded cohomology group $H^2_b(\Pi;\R)$, namely the elements represented by bounded $2$-cocycles with values in $\{-1,0,1\}$. For $\Pi = \pi_1(S)$ it contains \cite{BargeGhys_H2b(Surface)_1988} the space of differential $2$-forms on $S$, and we suspect that something similar is true for some spaces of generalized cross-ratios, thus we ask:
\begin{Question}
How to interpret $\Link_\rho(A,B)$ or $\Cos_\rho(A,B)$ as ``differential forms'' on (an appropriate subspace in) the second bounded cohomology group $H^2_b(\Gamma;\R)$?
\end{Question}
\subsection{Arithmetic and Geometric deformations}
Let us mention another general context in which our definitions \ref{eq:Link} \& \ref{eq:Cos} seem to apply with almost no changes.
Recall that our definitions of the cross-ratio and cosine in paragraph \ref{subsec:disc-bir_K} hold for pairs of semi-simple elements in $\PGL_2(\Field)$.
Thus for any faithful representation of a group $\rho \colon \Gamma \to \PSL_2(\Field)$ sending $A,B\in \Gamma$ to semi-simple elements, one may define the following invariants for the pair of conjugacy classes: \[\Link_\rho(A,B)=\sum \bir(\rho\tilde{A},\rho \tilde{B})^{-1}\qquad \Cos_\rho(A,B)=\sum \cos(\rho\tilde{A},\rho \tilde{B})\]
where the sum is indexed by the double-coset space $\Stab A \backslash \Gamma / \Stab B$ with some restrictions analogous to $\asrt{\bir>1}$ and $\asrt{\across>1}$ ensuring that it has finite support, on which we shall comment later.
These define functions on (a subset in) the space of representations $\Hom(\Gamma,\PSL_2(\Field))$ considered up to $\PSL_2(\Field)$-conjugacy at the target. One may ask for interpretations of their limiting values at special points in its appropriate compactifications.
As explained in the previous paragraph, this construction works in particular for discrete subgroups of $\PSL_2(\R)$.
In general, we may want to specify that $\rho(\Gamma)$ is a discrete subgroup of $\PSL_2(\Field)$ after $\Field$ has been given a topology, or furthermore that $\rho(\Gamma)$ has finite covolume for the Haar measure on $\PSL_2(\Field)$ with respect to a measure on $\Field$.
In that case, one may consider the quotient of the symmetric space of $\PSL_2(\Field)$ by $\rho(\Gamma)$, and observe the relative position between the ``cycles'' corresponding to $A,B$ in that quotient.
We may now suggest some tantalising connections between arithmetic and topology.
For this, we should compare our summations (\ref{eq:Link}) and (\ref{eq:Cos}) with the modular cocycles introduced in \cite{Duke-Imamoglu-Toth_modular-cocycles-linking_2017} and the products appearing in \cite{Darmon-Vonk_arithmetic-intersections-modular-geodesics_2022}.
Let us note however that \cite{Duke-Imamoglu-Toth_modular-cocycles-linking_2017} considers the linking numbers $\lk(A+A^{-1},B+B^{-1})$ between cycles obtained by lifting a geodesic and its inverse: this number amounts to the geometric intersection $I(A,B)$ of the modular geodesics.
Furthermore \cite{Darmon-Vonk_arithmetic-intersections-modular-geodesics_2022} considers deformations of an arithmetic nature for these intersection numbers.
None of these address the actual linking numbers, and their approach is motivated by the arithmetic of modular forms, while ours will be inspired by the geometry of the character variety.
Thus it would be interesting on the one hand to understand the arithmetic of linking numbers in terms of the modular forms appearing in \cite{Katok_modular-forms-geodesics_1984} or the modular cocycles in \cite{Duke-Imamoglu-Toth_modular-cocycles-linking_2017}, and on the other hand to relate the $p$-arithmetic intersections numbers considered in \cite{Darmon-Vonk_arithmetic-intersections-modular-geodesics_2022} to the special values of functions $\Link_\rho$ \& $\Cos_\rho$ defined for representations $\rho \colon \PSL_2(\Z) \to \PSL_2(\Q_p)$ as suggested above.
\subsection{Special values of Poincar\'e Series}
We may apply the general averaging procedure explained in paragraph \ref{subsec:F(A,B)} to other conjugacy invariants $f_q(A,B)$ and define new functions $F_q(A,B)$ on the character variety of $\PSL_2(\Z)$.
Their limit at the boundary point $q=\infty$ will be expressed in terms of the linking number $\lk(A,B)$ as soon as $f_q(A,B)$ converges to an expression of $\cosign(A,B)$.
Various motivations (including special values for Poincar\'e series \cite{Siegel_advanced-number-theory_1965, Dirichlet_formes-quadratiques-complexes_1842}, and McShane's identity \cite{Bowditch_McShane-Markov_1996}) suggest to choose $f_q(A,B)=(x+\sqrt{x^2-1})^{-s}$ for some variable $s\in \C$ where $x=\frac{1}{4}(\Tr(A_qB_q^{-1})-\Tr(A_qB_q))$ is the numerator of $\tfrac{1}{4}\cos(A_q,B_q)$ in the formula of Lemma \ref{Lem:cos-cosh-sinh}.
This summand $f_q(A,B)$ can also be written $e^{-si\theta}$ where $\theta$ is the angle between the oriented geometric axes of $A_q$ and $B_q$ when they intersect and $e^{-sl}$ where $l$ is the length of the ortho-geodesic arc $\gamma$ connecting the geometric axes of $A_q$ and $B_q$ when they are disjoint.
In formula:
\begin{equation*}
F_{q}(A,B)= \sum \left(x+\sqrt{x^2-1}\right)^{-s} = \sum_{\gamma_A \perp \gamma \perp \gamma_B} \exp(-sl_\gamma) - \sum_{p\in \gamma_A\cap \gamma_B} \exp(-si\theta_p).
\end{equation*}
So the sum over all double cosets splits into a finite sum, computable as explained in \ref{subsec:F(A,B)}, and an infinite series which converges for $\Re(s)>1$ (the topological entropy for the action of $\PSL_2(\Z)$ on the hyperbolic plane).
The infinite sum is a bivariate analog (in $(A,B)$) of the univariate Poincar\'e ``theta-series'' which appeared in the works of Eisenstein: those admit meromorphic continuation to $s\in \C$ and their special values in the variable $s$ have been of interest for arithmetics and dynamics. Similar Poincar\'e series associated to one modular geodesic are also defined in \cite{Katok_modular-forms-geodesics_1984}.
The earliest appearance we found for bivariate series is in \cite[Section 50]{Ford_automorphic-functions_1923}, and the only other in \cite{Paulin_series-poincare_2013}.
When $q=\infty$ and $s=1$, the real part of the finite sum evaluates to $2\lk(A,B)-I(A,B)$, but one may wonder about the infinite series (now the order in which we take the limits in $s$ and $q$ may matter).
More generally, one strategy to relate modular topology and quadratic arithmetic is to choose $f$ with appropriate symmetries and analyticity properties so that the sum over all double cosets can be understood: then one deduces a relationship between a topologically meaningful finite sum, and the infinite series whose special values may be of interest in arithmetic. The dilogarithm of the cross-ratio also looks like a good candidate \cite{Bridgeman_orthospectra-laminations-dilog-identities_2011}...
\newpage
\bibliographystyle{alpha}
\subsection*{Acknowledgements}
This paper contains the main results obtained in the second part of my thesis.
I would thus like to thank my thesis advisors Etienne Ghys and Patrick Popescu-Pampu for their guidance and encouragement;
as well as Francis Bonahon, Louis Funar, Jean-Pierre Otal and Anne Pichon who refereed and carefully read my work.
I am also grateful to Pierre Dehornoy for sharing his knowledge of modular knots.
Finally, I owe Marie Dossin for helping me with the figures in tikz.
\section{Introduction}
\subsection*{Context and motivation}
The modular group $\PSL_2(\Z)$ acts properly discontinuously on the hyperbolic plane $\H\P$ with quotient the modular orbifold $\M$, a hyperbolic surface with conical singularities $i$ \& $j$ of order $2$ \& $3$, and a cusp $\infty$.
The free homotopy classes of loops in $\M$ correspond to the conjugacy classes in its fundamental group $\pi_1(\M)=\PSL_2(\Z)$.
In particular the hyperbolic conjugacy classes in $\PSL_2(\Z)$ correspond to the closed oriented geodesics in $\M$, called \emph{modular geodesics}.
For hyperbolic $A\in \PSL_2(\Z)$ the modular geodesic $\gamma_A$ has length $\lambda_A$ equal to the logarithm of the ratio between its eigenvalues $\epsilon_A^{\pm 1}$, in formula:
\begin{equation*}
\disc(A)
= \left(\epsilon_A-\epsilon_A^{-1}\right)^2
= (\Tr A)^2-4
= 4\left(\sinh \tfrac{1}{2}\lambda_A\right)^2
\end{equation*}
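As a numerical illustration (ours, not from the paper), one can verify this chain of identities on a sample hyperbolic element, say the matrix of the word $RL$: its discriminant, eigenvalue gap, and geodesic length match as stated.

```python
import numpy as np

# Sample hyperbolic element: the matrix of the word RL in SL2(Z)
A = np.array([[2, 1], [1, 1]])

eps = max(abs(np.linalg.eigvals(A)))   # dominant eigenvalue epsilon_A
lam = 2 * np.log(eps)                  # length: the eigenvalue ratio is eps^2

disc = np.trace(A) ** 2 - 4            # here Tr A = 3, so disc = 5
assert np.isclose(disc, (eps - 1 / eps) ** 2)
assert np.isclose(disc, 4 * np.sinh(lam / 2) ** 2)
```

For this matrix $\epsilon_A = (3+\sqrt 5)/2$, so $\epsilon_A - \epsilon_A^{-1} = \sqrt 5$ and all three expressions equal $5$.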
We denote by $I(A,B)$ the geometric intersection number between the associated modular geodesics.
The unit tangent bundle $\U=\PSL_2(\Z)\backslash \PSL_2(\R)$ of $\M=\PSL_2(\Z)\backslash \H\P$ is a $3$-manifold, and the closed oriented geodesics in $\M$ lift to the periodic orbits for the geodesic flow in $\U$.
Hence the primitive hyperbolic conjugacy classes in $\PSL_2(\Z)$ correspond to the so-called \emph{modular knots} in $\U$, which form the components of the \emph{master modular link}.
The structure of the Seifert fibration $\U \to \M$ reveals that $\U$ is homeomorphic to the complement of a trefoil knot in the sphere. In particular, one may speak of the linking numbers between modular knots and the trefoil, as well as between modular knots themselves.
Let us recall a combinatorial parametrization of the infinite order conjugacy classes in $\PSL_2(\Z)$.
The Euclidean algorithm shows that the group $\SL_2(\Z)$ is generated by the transvections $L\&R$, and more precisely that its submonoid $\SL_2(\N)$ of matrices with non-negative entries is freely generated by $L\&R$.
This submonoid can be identified with its image $\PSL_2(\N)\subset \PSL_2(\Z)$.
\begin{equation*}
L=
\begin{psmallmatrix}
1 & 0 \\ 1 & 1
\end{psmallmatrix}
\qquad
R=
\begin{psmallmatrix}
1 & 1 \\ 0 & 1
\end{psmallmatrix}
\end{equation*}
In $\PSL_2(\Z)$, the conjugacy class of an infinite order element intersects $\PSL_2(\N)$ along all cyclic permutations of a non-empty $L\&R$-word.
The conjugacy class is primitive if and only if the cyclic word is primitive, and it is hyperbolic when the cyclic word contains both letters $L$ and $R$.
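This Euclidean factorisation is effective. The Python sketch below is our illustration (the name `decode` is not from the paper): it recovers the unique $L\&R$-word of a matrix in $\SL_2(\N)$ by comparing rows and peeling off one generator at a time.

```python
import numpy as np

L = np.array([[1, 0], [1, 1]])
R = np.array([[1, 1], [0, 1]])

def decode(M):
    """Unique L&R-factorisation of M in SL2(N): if the first row dominates
    the second entrywise then M = R * M', otherwise M = L * M'.
    (det M = 1 guarantees exactly one case applies while M != Id.)"""
    M = M.copy()
    word = ""
    while not np.array_equal(M, np.eye(2, dtype=int)):
        if M[0, 0] >= M[1, 0] and M[0, 1] >= M[1, 1]:
            word += "R"
            M[0] -= M[1]   # multiply by R^{-1} on the left
        else:
            word += "L"
            M[1] -= M[0]   # multiply by L^{-1} on the left
    return word
```

For instance `decode(R @ L)` returns `"RL"`, and the identity matrix decodes to the empty word.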
One may try to relate the geometry and topology of the master modular link with the arithmetics and combinatorics of conjugacy classes in the modular group.
Our previous work \cite{CLS_Conj-PSL2K_2022} relates the geometry of modular geodesics (angles of intersection and lengths of ortho-geodesics) to the arithmetic of conjugacy classes in the modular group (discriminants, cross-ratios between fixed points on the projective line, and their Hilbert symbols).
The main results in this paper will relate the linking numbers of modular knots to the combinatorics of the corresponding cyclic words.
The most immediate measures for the complexity of a binary word are given by the sum and difference between the numbers of letters of each sort.
For an infinite order $A\in \PSL_2(\Z)$ we denote by $\len([A])=\#R+\#L$ the combinatorial length and call $\Rad([A])=\#R-\# L$ the \emph{Rademacher number} of its conjugacy class.
In his paper \cite{Atiyah_log(eta-Dedekind)_1987} on the Logarithm of the Dedekind eta function, M. Atiyah identified the Rademacher function with no less than six other important functions appearing in diverse areas of mathematics, showing how omnipresent it is.
The function $\Rad\colon \PSL_2(\Z) \to \Z$ is a quasi-morphism, meaning that it has a bounded derivative
\begin{equation*}
d\Rad\colon \PSL_2(\Z)\times \PSL_2(\Z)\to \Z
\qquad
d\Rad(A,B)=\Rad(B)-\Rad(AB)+\Rad(A)
\end{equation*}
and is homogeneous, meaning that $\Rad(A^n)=n\Rad(A)$ for infinite order $A\in \PSL_2(\Z)$ and $n\in \Z$.
This enabled \'E. Ghys and J. Barge to recognise it in \cite{BargeGhys_cocycle-euler-maslov_1992} as half the primitive of the bounded euler class in $H^2_b(\PSL_2(\Z);\R)$ and explain its ubiquity.
In \cite{Ghys_knots-dynamics_2006}, \'E. Ghys showed that the linking number of a modular knot with the trefoil equals its Rademacher invariant, and concluded by asking for \emph{arithmetical and combinatorial interpretations of the linking pairing between modular knots}.
In this work, we will derive several formulae for those linking numbers, providing bridges between the arithmetics and geometry, the combinatorics and algebra, or the dynamics and topology of the modular group.
\subsection*{The arithmetic \& geometry of the cosines}
The journey from arithmetic began with our previous work \cite{CLS_Conj-PSL2K_2022}: in particular, for a field $\Field\supset \Q$, we described when two hyperbolic elements $A,B\in \PSL_2(\Z)$ of the same discriminant $\Delta$ are conjugate in $\PSL_2(\Field)$.
The obstruction is measured in terms of any intersection angle $\theta$ between the modular geodesics $\gamma_A,\gamma_B$ by the class of $\left(\cos \tfrac{\theta}{2}\right)^2 \in \Field^\times$ modulo the group of norms over $\Field$ of the extension $ \Field[\sqrt{\Delta}]$.
When this obstruction vanishes, the elements of $\PSL_2(\Field)$ which conjugate $A$ to $B$ are parametrized by the points $(X,Y)\in \Field^2$ of the generalised Pell-Fermat conic with equation $X^2-\Delta Y^2 = \left(\cos \tfrac{\theta}{2}\right)^2$.
Hence the geometric quantities $\left(\cos \tfrac{\theta}{2}\right)^2=\tfrac{1+\cos(\theta)}{2}$, given for representatives $A,B$ whose axes intersect in $\H\P$ (lifted to $\SL_2$ with positive trace) by
\begin{equation*}
\cos(\theta) = \tfrac{\Tr(AB)-\Tr(AB^{-1})}{\sqrt{\disc A\disc B}}
\end{equation*}
have an arithmetic meaning, and they will reappear under various forms in the sequel.
\subsection*{Linking functions on the character variety}
Let us introduce, for any pair of modular geodesics $\gamma_A,\gamma_B$, the following summations over their oriented intersection angles $\theta \in \,]0,\pi[$:
\begin{equation*}
\Link_q(A,B)
= \tfrac{1}{2} \sum \left(\cos \tfrac{\theta}{2}\right)^2
\qquad \mathrm{and} \qquad
\Cos_q(A,B)
= \tfrac{1}{2} \sum \left(\cos \theta\right)
\end{equation*}
and study their variations as we deform the metric on $\M$ by opening the cusp.
The complete hyperbolic metrics on the orbifold $\M$ correspond to the faithful and discrete representations $\rho\colon \PSL_2(\Z) \to \PSL_2(\R)$ up to conjugacy.
They form a $1$-dimensional real algebraic set parametrized by $q\in \R^*$ and the matrix $A_q=\rho_q(A)$ is obtained from any $L\&R$-factorisation of $A$ by replacing $L\mapsto L_q$ and $R\mapsto R_q$, where:
\begin{equation*}
L_q =
\begin{pmatrix}
q & 0 \\
1 & q^{-1}
\end{pmatrix}
\qquad \mathrm{and} \qquad
R_q =
\begin{pmatrix}
q & 1 \\
0 & q^{-1}
\end{pmatrix}.
\end{equation*}
The primitive hyperbolic conjugacy classes of $\PSL_2(\Z)$ still index the hyperbolic geodesics in the quotient $\M_q=\rho_q(\PSL_2(\Z))\backslash\H\P$ which do not surround the cusp.
We may thus define the analogous sums $\Link_q(A,B)$ and $\Cos_q(A,B)$ of the quantities $\tfrac{1}{2}\left(\cos \tfrac{1}{2}\theta_q\right)^{2}$ and $\tfrac{1}{2}\cos \theta_q$ over the intersection angles $\theta_q \in \,]0,\pi[$ between the $q$-modular geodesics $\gamma_{A_q},\gamma_{B_q} \subset \M_q$.
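Concretely, $A_q$ can be computed from any $L\&R$-factorisation; for instance $\Tr\big((RL)_q\big) = q^2+1+q^{-2}$, a reciprocal Laurent polynomial of degree $\len(RL)=2$ which evaluates to $\Tr(RL)=3$ at $q=1$. A short exact-arithmetic sketch (the helper names are ours):

```python
from fractions import Fraction

def mul(X, Y):
    """Product of 2x2 matrices."""
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

def deformed(word, q):
    """A_q from an L&R-factorisation, substituting L -> L_q and R -> R_q."""
    q = Fraction(q)
    Lq = [[q, Fraction(0)], [Fraction(1), 1/q]]
    Rq = [[q, Fraction(1)], [Fraction(0), 1/q]]
    M = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(1)]]
    for letter in word:
        M = mul(M, Rq if letter == 'R' else Lq)
    return M

def trace(M):
    return M[0][0] + M[1][1]
```

The symmetry `trace(deformed(w, q)) == trace(deformed(w, 1/q))` reflects the reciprocity of the trace polynomial.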
As $q\to \infty$, the hyperbolic orbifold $\M_q$ has a convex core which retracts onto a thin neighbourhood of the long geodesic arc connecting its conical singularities, whose preimage in the universal cover $\H\P$ is a trivalent tree. In the limit we recover the action of $\PSL_2(\Z)$ on its Bruhat-Tits building, the infinite planar trivalent tree $\Tree$, and by studying its combinatorics we shall prove the following.
\begin{Theorem}[Linking and intersection from boundary evaluations]
For primitive hyperbolic $A,B\in \PSL_2(\Z)$, the limits of the functions $\Link_q(A,B)$ and $\Cos_q(A,B)$ at the boundary point of the $\PSL_2(\R)$-character variety of $\PSL_2(\Z)$ recover their linking and intersection numbers:
\begin{align*}
&\Link_q(A,B) \xrightarrow[q\to \infty]{} \lk(A,B)
\\
&\Cos_q(A,B) \xrightarrow[q\to \infty]{}
\lk(A,B)-\lk(A^{-1},B)
=\lk(A,B)-\tfrac{1}{4}I(A, B)
\end{align*}
\end{Theorem}
Hence the functions $\Link_q \& \Cos_q$ interpolate between the geometry at $q=1$ of the arithmetic group $\PSL_2(\Z) \subset \PSL_2(\R)$ and the topology at $q=+\infty$ of the combinatorial action $\PSL_2(\Z) \to \Aut(\Tree)$.
\subsection*{Linking functions and Alexander polynomials}
The graphs of $q\mapsto \Link_q(A,B)$ for various pairs $A,B$ and $q\in \C$ suggest that their zeros tend to accumulate on the unit circle. This reminds us of the various results and conjectures concerning the roots of Alexander polynomials, so we propose a possible thread to follow in this direction.
A primitive hyperbolic conjugacy class in $\PSL_2(\Z)=\pi_1(\M)$ corresponds to a primitive modular geodesic $\gamma_A\subset \M$. It lifts to a modular knot $k_A \subset \U$, which in turn yields a conjugacy class in $\BB_3=\pi_1(\U)$.
A conjugacy class in the braid group on three strands defines, by taking its closure, a link $\sigma_A$ in a solid torus.
In \cite[Proposition 5.16]{CLS_phdthesis_2022} we relate the Alexander polynomial $\Delta(\sigma_A)\in \Z[t^{\pm 1}]$ of this link $\sigma_A$ to the Fricke polynomial $\Tr A_q \in \Z[q^{\pm 1}]$ of the modular geodesic $\gamma_A$.
\begin{Proposition}
For a primitive hyperbolic $A\in \SL_2(\N)$, the Alexander polynomial of the link $\sigma_A$ is given in terms of $q=\sqrt{-t}$ by: \[\Delta(\sigma_A)=\tfrac{q^{\Rad(A)}-\Tr(A_q)+q^{-\Rad(A)}}{(q-q^{-1})^2}\]
\end{Proposition}
Now recall that $\Cos_q(A,B)=\Link_q(A,B)-\Link_q(A,B^{-1})$ can be expressed as a finite sum of terms:
\begin{equation*}
\cos(A_q,B_q) = \tfrac{\Tr(A_qB_q)-\Tr(A_qB_q^{-1})}{\sqrt{\disc A_q\disc B_q}}
\qquad \mathrm{where} \qquad
\disc(C_q)= (\Tr C_q)^2-4
\end{equation*}
This suggests comparing the concentration of the zeros of $\Link_q$ around the unit circle with that of the zeros of Alexander polynomials, but we will not pursue this direction any further.
\subsection*{Linking numbers and homogeneous quasi-morphisms}
The limiting values $\Cos_q(A,B)\to \lk(A,B)-\lk(A^{-1},B)$ will now provide a bridge from the representation theory to the bounded cohomology of $\PSL_2(\Z)$.
For every group $\Pi$, the real vector space $PX(\Pi;\R)$ of homogeneous quasi-morphisms is a Banach space for the norm $\lVert df \rVert_\infty$, as was shown in \cite{MatsuMorita_Hb(Homeo)_1985, Ivanov_H2b(G)-Banach_1988}.
\begin{Theorem}
For every hyperbolic $A\in \PSL_2(\Z)$, the function $\Cos_A\colon B\mapsto \lk(A,B)-\lk(A^{-1}, B)$ is a homogeneous quasi-morphism $\PSL_2(\Z)\to \Z$.
\end{Theorem}
Let $\mathcal{P}$ denote the set of primitive infinite order conjugacy classes in $\PSL_2(\Z)$, and $\mathcal{P}_0$ the subset of those which are stable under inversion. Choose a partition $\mathcal{P}\setminus \mathcal{P}_0=\mathcal{P}_-\sqcup \mathcal{P}_+$ in two subsets in bijection by the inversion.
We may choose $R\in \mathcal{P}_+$, and denote $\Cos_R:=\Rad$ by convention.
\begin{Theorem}
The collection of $\Cos_A\in PX(\Gamma;\R)$ for $A\in \mathcal{P}_+$ is linearly independent, and every element $f\in PX(\Gamma;\R)$ can be written as $f=\sum_{A\in \mathcal{P}_+} c_A \cdot \Cos_A$ for unique coefficients $c_A \in \R$.
\end{Theorem}
To prove the non-triviality and linear independence of the $\Cos_A$ for $A\in \mathcal{P}_+$, we were led to show the non-degeneracy of the linking form, which is interesting in its own right.
\begin{Theorem}
If hyperbolic $A,B\in \PSL_2(\Z)$ are link equivalent, namely $\lk(A,X)=\lk(B,X)$ for all hyperbolic $X\in \PSL_2(\Z)$,
then they are conjugate.
\end{Theorem}
The results in this section may be compared to the classical representation theory of compact groups, in which the characters of irreducible representations provide an orthonormal basis for the class functions.
Indeed, we have found a family of cosign functions $\Cos_A$ whose periods correspond to the primitive conjugacy classes of $\PSL_2(\Z)$, and they form a basis for the space of quasi-characters $PX(\PSL_2(\Z))$, which is in some sense orthogonal with respect to the linking form (but we refer to the proofs in section \ref{sec:quasi-morphism} for a better explanation of this orthogonality).
\section{The group \texorpdfstring{$\PSL_2(\R)$}{PSL(2;R)}: discriminant and cross-ratio}
\label{sec:disc-bir}
The group $\PSL_2$ acts on itself by conjugation.
In this section, we recall from \cite{CLS_Conj-PSL2K_2022} the main invariants describing the orbits of single elements and of pairs of elements.
Those are the discriminant $\disc(A)$ and the cross-ratio $\bir(A,B)$.
\subsection{Over a field \texorpdfstring{$\Field$}{K} with \texorpdfstring{$\operatorname{char}(\Field)\ne 2$}{charnot2}}
\label{subsec:disc-bir_K}
The automorphism group $\PGL_2(\Field)$ of the projective line $\Field\P^1$ acts freely transitively on triples of distinct points, and the unique algebraic invariant of four points $u,v,x,y\in \Field\P^1$ is the cross-ratio:
\begin{equation}
\label{eq:bir}
\bir(u,v,x,y)
= \frac{(v-u)}{(v-x)}
\div \frac{(y-u)}{(y-x)}
\in \Field\P^1
\end{equation}
It satisfies in particular $\bir(z,0,1,\infty)=z$ and $\bir(z,0,w,\infty)=z/w$, whence the cocycle rule:
\begin{equation*}
\bir(z,v,x,y)=\bir(z,v,w,y)\,\bir(w,v,x,y).
\end{equation*}
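In exact arithmetic one can implement the cross-ratio in homogeneous coordinates, with $\infty = (1{:}0)$, and check the normalisation $\bir(z,0,1,\infty)=z$ together with the chain rule in the first and third arguments (a sketch with our own conventions):

```python
from fractions import Fraction

def det(a, b):
    """Determinant of two points of the projective line in homogeneous coordinates."""
    return a[0]*b[1] - a[1]*b[0]

def bir(u, v, x, y):
    """Cross-ratio bir(u,v,x,y) = ((v-u)/(v-x)) / ((y-u)/(y-x)),
    computed projectively so that the point at infinity needs no special case."""
    return Fraction(det(v, u) * det(y, x), det(v, x) * det(y, u))

def aff(z):
    """Affine point z of the projective line."""
    return (z, 1)

INF = (1, 0)  # the point at infinity
```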
For a triple $(x_1,x_2,x_3)$ of distinct points in $\Field\P^1$, we define their Maslov index in $\Field^\times/(\Field^\times)^2$ by lifting them to non-zero vectors $\vec{x_i}\in x_i \subset \Field^2$ with zero sum, and taking the determinant $\det(\vec{x}_1,\vec{x}_2)=\det(\vec{x}_2,\vec{x}_3)=\det(\vec{x}_3,\vec{x}_1)$.
It is preserved by the subgroup $\PSL_2(\Field)$, which acts freely transitively on the triples of distinct points with a given Maslov index.
A non-trivial $A\in \PSL_2(\Field)$ has two fixed points $\alpha_-,\alpha_+$ in the projective line over a quadratic extension of $\Field$, well defined up to transposition.
The element $\epsilon_A^{\pm 2} := \bir(Ax,\alpha_\mp,x,\alpha_\pm)$ does not depend on $x\in \Field \P^1 \setminus\{\alpha_-,\alpha_+\}$, and is well defined up to inversion: it is called the \emph{period} of $A$.
The unique algebraic function on $\PSL_2(\Field)$ which is invariant by conjugacy is the discriminant:
\begin{equation*}
\disc(A) = \left(\epsilon_A-\epsilon_A^{-1}\right)^2
\end{equation*}
whose class in $\{0\}\sqcup \Field^\times /(\Field^\times)^2$ defines the \emph{type} of $A$.
All non-trivial elements with $\disc=0$ are conjugate. The elements with $\disc \ne 0$ are called \emph{semi-simple}.
The conjugacy classes of elements with $\disc \equiv 1 \bmod{(\Field^\times)^2}$ are uniquely characterised by the value of their discriminant.
Consider $A,B\in \PSL_2(\Field)$ of the same type, and fix a square root of $\disc(A)\disc(B)\in (\Field^\times)^2$.
Then one may order their fixed points $(\alpha_-,\alpha_+)$ and $(\beta_-,\beta_+)$ up to simultaneous inversion, and consistently define their cross-ratio $\bir(A,B)$ by:
\begin{equation*}
\bir(A,B):= \bir(\alpha_-,\alpha_+,\beta_-,\beta_+) \in \Field\P^1
\end{equation*}
which is $\notin \{0,\infty\}$ unless $A$ and $B$ share a fixed point, and satisfies the symmetry property:
\begin{equation*}
\frac{1}{\bir(A,B)}+\frac{1}{\bir(A,B^{-1})}=1.
\end{equation*}
We may also define their cosine (using their adjoint action on the Lie algebra $\Sl_2(\Field)$ as in \cite{CLS_Conj-PSL2K_2022}), which is related to the cross-ratio by:
\begin{equation}
\label{eq:cos-bir}
\cos(A,B)=\frac{1}{\bir(A,B)}-\frac{1}{\bir(A,B^{-1})}.
\end{equation}
\begin{Theorem}
\label{Thm:conj-PSL_2(Z)}
Consider the action of $\PSL_2(\Field)$ by conjugacy on itself.
Two semi-simple elements $A,B$ are conjugate if and only if $\disc(A)=\Delta=\disc(B)$ and $\bir(A,B)\equiv 1 \bmod{\Norm_\Field \Field[\sqrt{\Delta}]}$.
A pair of semi-simple elements $A_1,A_2$ of the same type is conjugate to another pair of semi-simple elements $B_1,B_2$ of the same type if and only if we have $\bir(A_1,A_2)=\bir(B_1,B_2)$ as well as $\disc(A_i)=\Delta_i=\disc(B_i)$ and $\bir(A_i,B_i)\equiv 1 \bmod{\Norm_\Field \Field[\sqrt{\Delta_i}]}$.
\end{Theorem}
\subsection{Over the real field}
The automorphism group $\PGL_2(\C)\simeq \PSL_2(\C)$ of the complex projective line $\C\P^1$ contains $\PGL_2(\R)$ as the stabiliser of the real projective line $\R\P^1$.
The index-two subgroup $\PSL_2(\R)$ also preserves the upper half-plane $\H\P=\{z\in \C \mid \Im(z)>0\}\subset \C\P^1$, or equivalently the orientation induced on its boundary $\partial \H\P$, also given by the cyclic order $\cord(x,y,z) \in \{\pm 1\}$ of any triple of distinct points $x,y,z$ of $\R\P^1$.
The complex structure on $\H\P$ is conformal to a unique hyperbolic metric. The hyperbolic distance $\lambda$ between $w,z\in \H\P$ can be deduced from the cross-ratio by $\bir(\Bar{z},z,\Bar{w},w)^{-1}=\left(\cosh \tfrac{\lambda }{2}\right)^{2}$.
This realises $\PSL_2(\R)$ as the positive isometry group of the hyperbolic plane: it preserves the previous cross-ratio and acts simply-transitively on positive triples of distinct points in $\R\P^1$, thus it preserves the hyperbolic metric and acts simply transitively on the unit tangent bundle of $\H\P$.
The type of $A \in \PSL_2(\R)$ is elliptic or parabolic or hyperbolic according to the value of $\sign \disc(A) \in \{-1,0,1\}$, equal to the number of distinct fixed points in $\R\P^1$ minus $1$.
A hyperbolic $A \in \PSL_2(\R)$ acts on $\H\P$ by translation along an oriented geodesic $\gamma_A$ whose endpoints $\alpha_-,\alpha_+ \in \R\P^1$ are its repulsive and attractive fixed points.
With this order the period satisfies $\epsilon_A^2 >1$.
The translation length $\lambda_A = \log(\epsilon_A^2)$ yields $\disc(A)=4\left(\sinh \tfrac{1}{2}\lambda_A\right)^2$.
\begin{Lemma}
\label{Lem:cos-cosh-sinh}
Consider hyperbolic $A,B\in \PSL_2(\R)$ with distinct fixed points.
If we lift them in $\SL_2(\R)$ with positive trace, then we have:
\begin{equation*}
\cos(A,B)= \frac{\Tr(AB)-\Tr(AB^{-1})}{\sqrt{\disc(A)\disc(B)}}
\end{equation*}
Consider the relative position of their oriented hyperbolic geodesics $(\alpha_-,\alpha_+)$ and $(\beta_-,\beta_+)$ in $\H\P$.
If they intersect, their angle $\theta$ is well defined up to a sign, and satisfies $\cos \theta=\cos(A,B)$, thus:
\begin{equation*}
\frac{1}{\bir(A,B)}
= \frac{1 + \cos(\theta)}{2}
= \left(\cos \tfrac{\theta}{2}\right)^{2}
\end{equation*}
If they do not intersect, they have a unique common perpendicular geodesic arc, whose length $\lambda$ satisfies $\cos(A,B)=\pm \cosh \lambda$. The sign $\pm 1$ compares the co-orientations induced by each axis. Thus we have respectively:
\begin{equation*}
\frac{1}{\bir(A,B)}
= \frac{1 + \cosh(\lambda)}{2}
= \left(\cosh \tfrac{\lambda}{2}\right)^{2}
\quad \mathrm{and}\quad
\frac{1}{\bir(A,B)}
= \frac{1 - \cosh(\lambda)}{2}
= \left(\sinh \tfrac{\lambda}{2}\right)^{2}
\end{equation*}
\end{Lemma}
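For instance, with $A=RL$ and $B=LR$ (whose axes do intersect), one can check the first identity numerically against the cross-ratio of the fixed points (a float sketch; helper names are ours):

```python
import math

def mul(M, N):
    """Product of 2x2 matrices."""
    return [[M[0][0]*N[0][0] + M[0][1]*N[1][0], M[0][0]*N[0][1] + M[0][1]*N[1][1]],
            [M[1][0]*N[0][0] + M[1][1]*N[1][0], M[1][0]*N[0][1] + M[1][1]*N[1][1]]]

def cos_pair(A, B):
    """(Tr(AB) - Tr(AB^{-1})) / sqrt(disc(A) disc(B)) for positive-trace lifts."""
    tr = lambda M: M[0][0] + M[1][1]
    Binv = [[B[1][1], -B[0][1]], [-B[1][0], B[0][0]]]   # det B = 1
    return (tr(mul(A, B)) - tr(mul(A, Binv))) / math.sqrt((tr(A)**2 - 4) * (tr(B)**2 - 4))

def fixed_points(M):
    """(repulsive, attractive) fixed points of a hyperbolic M = [[a,b],[c,d]], c > 0."""
    a, b, c, d = M[0][0], M[0][1], M[1][0], M[1][1]
    s = math.sqrt((a + d)**2 - 4)
    return (a - d - s) / (2*c), (a - d + s) / (2*c)

def bir(u, v, x, y):
    return ((v - u)/(v - x)) / ((y - u)/(y - x))
```

Here $\cos(A,B)=3/5$, and the cross-ratio of the four fixed points satisfies $1/\bir = (1+\cos\theta)/2 = 4/5$.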
\begin{figure}[h]
\centering
\scalebox{.49}{\input{images/tikz/birapport_angle-theta_cos}}
\scalebox{.49}{\input{images/tikz/birapport_length-lambda_cosh}}
\scalebox{.49}{\input{images/tikz/birapport_length-lambda_sinh}}
\caption*{Angle well defined in $\,]0,\pi[$. Ortho-geodesics well and badly co-orientated.}
\end{figure}
\begin{comment}
\begin{figure}[h]
\centering
\scalebox{.3}{\input{images/tikz/co-orientation_angle-mod-sign}}
\scalebox{.3}{\input{images/tikz/co-orientation_ortho-geodesic}}
\caption*{Angle well defined in $\,]0,\pi[$. Ortho-geodesics well and badly co-orientated.}
\end{figure}
\end{comment}
\section{The modular group \texorpdfstring{$\PSL_2(\Z)$}{PSL(2,Z)}}
\subsection{The modular orbifold}
\begin{comment}
The automorphism group $\PGL_2(\C)$ of the complex projective line $\C\P^1$ contains $\PGL_2(\R)$ as the stabiliser of the real projective line $\R\P^1$.
The index-two subgroup $\PSL_2(\R)$ also preserves the upper half-plane $\H\P=\{z\in \C \mid \Im(z)>0\}\subset \C\P^1$, or equivalently the orientation induced on its boundary $\partial \H\P=\R\P^1$.
The complex structure on $\H\P$ is conformal to a unique hyperbolic metric. The hyperbolic distance $\lambda$ between $w,z\in \H\P$ can be deduced from the cross-ratio by:
\begin{equation*}
\frac{1}{\bir(\Bar{z},z,\Bar{w},w)}=\left(\cosh \tfrac{\lambda}{2}\right)^{2}
\end{equation*}
This realizes $\PSL_2(\R)$ as the positive isometry group of the hyperbolic plane: it preserves the previous cross-ratio and acts simply-transitively on positive triples of distinct points of $\R\P^1$, thus it preserves the hyperbolic metric and acts simply transitively on the unit tangent bundle of $\H\P$.
\end{comment}
The subgroup $\PSL_2(\Q)$ of $\PSL_2(\R)$ is the stabiliser of the rational projective line $\Q\P^1$.
The discrete subgroup $\PSL_2(\Z)$ is the stabiliser of the ideal triangulation $\Tri$ of $\H\P$ with vertex set $\Q\P^1$ and edges all geodesics whose endpoints $\tfrac{p}{q},\tfrac{r}{s}$ satisfy $\lvert ps-qr \rvert =1$.
Consider the action of $\PSL_2(\Z)$ on $\Tri$.
It is transitive on the set of edges, which is in bijection with the orbit of $i\in (0,\infty)$, and the stabiliser of $i$ is the subgroup of order $2$ generated by $S$.
It is transitive on the set of triangles, which is in bijection with the orbit of $j=\exp(i\pi/3)\in (0,1,\infty)$, and the stabiliser of $j$ is the subgroup of order $3$ generated by $T$.
Thus it is freely transitive on the flags of $\Tri$, or equivalently on the oriented edges, and we deduce that $\PSL_2(\Z)=\Z/2*\Z/3$ is the free product of its subgroups generated by $S$ and $T$.
\begin{equation*}
S = \begin{pmatrix}
0 & -1 \\ 1 & 0
\end{pmatrix}
\qquad
T = \begin{pmatrix}
1 & -1 \\ 1 & 0
\end{pmatrix}
\end{equation*}
We also find that $\PSL_2(\Z)$ acts properly discontinuously on $\H\P$ with fundamental domain the triangle $(\infty,0,j)$.
We may cut it along the geodesic arc $(i,j)$ to obtain a pair of isometric triangles $(i,j,\infty)$ and $(i,j,0)$. Identifying them along their isometric edges yields the \emph{modular orbifold}
\begin{equation*}
\M = \PSL_2(\Z)\backslash \H\P.
\end{equation*}
It is a hyperbolic two-dimensional orbifold, with conical singularities of order $2$ \& $3$ associated to the fixed points $i$ \& $j$ of $S$ \& $T$, and a cusp associated to the fixed point $\infty \in \partial \H\P$ of $R=S^{-1}T$.
\begin{figure}[h]
\centering
\scalebox{0.57}{\input{images/tikz/lagrangian-complex}}
\hfill
\scalebox{0.57}{\input{images/tikz/PSL2Z-pavage-PH-fundom}}
\caption*{The ideal triangulation of $\H\P$ together with its dual trivalent tree $\Tree$ yield the modular tessellation with fundamental domain $(0,j,\infty)$.}
\label{fig:LagranTree}
\end{figure}
The modular group $\PSL_2(\Z)$ is the orbifold fundamental group of $\M$, so its conjugacy classes correspond to the free homotopy classes of oriented loops in $\M$.
The elliptic conjugacy classes are those of $S$ \& $T^{\pm 1}$ which correspond to oriented loops encircling the singularities, and the parabolic conjugacy classes are those of $R^n$ which correspond to loops encircling the cusp.
The conjugacy class of a hyperbolic $A\in \PSL_2(\Z)$ corresponds to the homotopy class of a unique oriented geodesic $\gamma_A\subset \M$, whose length equals $\lambda_A=\log(\epsilon_A^2)$. These are called \emph{modular geodesics}.
\subsection{Acting on a trivalent tree}
The preimage of the segment $(i,j)\subset \M$ in $\H\P$ forms a bipartite tree $\Tree'$, the first barycentric subdivision of a trivalent tree $\Tree$ which is dual to the ideal triangulation.
The \emph{base edge} $(i,j)$ of $\Tree'$ defines the \emph{oriented base edge} $\vec{e}_i$ of $\Tree$.
The action of $\PSL_2(\Z)$ is freely transitive on the set of edges of $\Tree'$ hence on the set of oriented edges of $\Tree$.
It also preserves the cyclic order on the set of edges incident to each vertex (given by the surface embedding $\Tree \subset \H\P$).
This is equivalent to the cyclic order function $\cord(x,y,z)\in \{-1,0,1\}$ of three points $x,y,z\in \Tree \cup \partial \Tree$.
\begin{comment}
This is equivalent to the cyclic order function $\cord(x,y,z)\in \{-1,1\}$ of three distinct points $x,y,z\in \Tree \cup \partial \Tree$, or to the crossing function $\cross(u,v,x,y) \in \{-1,0,1\}$ of four distinct points $u,v,x,y\in \Tree \cup \partial \Tree$ defined by:
\begin{equation*}
\cross(u,v,x,y) = \tfrac{1}{2}\left(\cord(u,x,v) - \cord(u,y,v)
\right)\end{equation*}
that is the algebraic intersection number of the oriented geodesics $(u,v)$ and $(x,y)$.
We denote $\across(u,v,x,y) \in \{0,1\}$ the absolute value of $\cross(u,v,x,y)$ which yields the linking number of the cycles $(u,v)$, $(x,y)$ in the cyclically ordered boundary $\partial \Tree$.
\end{comment}
Thus $\PSL_2(\Z)$ is the full automorphism group of the cyclically ordered simplicial tree $(\Tree, \cord)$.
We may now use this action to find some conjugacy invariants of primitive elements by considering their stable subsets.
Recall that an element in $\PSL_2(\Z)$ is called primitive when it generates a maximal cyclic subgroup.
The (primitive) elliptic conjugacy classes correspond to the vertices of $\Tree'$ and the primitive parabolic conjugacy classes correspond to the connected components of $\H\P\setminus \Tree$.
\begin{comment}
\begin{figure}[h]
\centering
\scalebox{.5}{\input{images/tikz/action_L-R-S-T_tree}}
\hfill
\scalebox{.5}{\input{images/tikz/action_LRL_tree}}
\caption{The action of $S$, $T$, $L$, $R$ and $LRL$ on the dual tree $\Tree$ of $\Tri_2$.}
\label{fig:PSL2Z-elli-para-hyper-tree}
\end{figure}
\end{comment}
Let $A\in \PSL_2(\Z)$ be primitive of infinite order.
It acts on $\Tree$ by translation along an oriented geodesic $g_A$ called its \emph{combinatorial axis}, with endpoints $\alpha_-,\alpha_+\in \partial \Tree = \R\P^1$.
Observe that $g_A$ passes through the oriented base edge $\vec{e}_i$ of $\Tree$ exactly when its endpoints satisfy $\alpha_-\le 0 \le \alpha_+$. This is equivalent to saying that $A$ maps the base triangle $(0,1,\infty)$ to a triangle of the form $(\tfrac{b}{d}, \tfrac{a+b}{c+d}, \tfrac{a}{c})$ with $a,b,c,d\in \N$, in other words that $A$ belongs to the monoid $\PSL_2(\N)$ freely generated by $L\& R$.
In that case, $g_A$ follows a periodic sequence of left and right turns given by the $L\&R$-factorisation of $A$, that is by the periodic continued fraction expansion of its attractive fixed point $\alpha_+$.
The conjugacy class of $A$ corresponds to the orbit of $g_A$ under the action of $\PSL_2(\Z)$ on $\Tree$.
Hence the conjugacy classes of non-elliptic elements in $\PSL_2(\Z)$ correspond to the cyclic words over the alphabet $\{L,R\}$, and the hyperbolic classes yield the cycles in which both letters appear.
The linear representatives of such an $L\&R$-cycle parametrize the intersection of the corresponding conjugacy class with $\PSL_2(\N)$, whose elements are called its \emph{Euclidean representatives}.
For infinite order $A\in \PSL_2(\Z)$, the minimum displacement length $d(x,A\cdot x)$ of a vertex $x\in \Tree$ equals the combinatorial length $\len(A) = \#R+\#L$ of a Euclidean representative.
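The Euclidean representatives can be factorised greedily: a letter $R$ is peeled off the left whenever the first row dominates the second entrywise, and a letter $L$ otherwise; for a matrix in $\SL_2(\N)\setminus\{\Id\}$ exactly one of the two reductions keeps the entries non-negative. A sketch (the function name is ours):

```python
def lr_word(a, b, c, d):
    """Greedy L&R-factorisation of a matrix [[a,b],[c,d]] in SL(2,N):
    det = 1 and non-negative entries.  Letters are peeled off the left,
    taking R when the first row dominates the second entrywise, else L."""
    assert a*d - b*c == 1 and min(a, b, c, d) >= 0
    word = []
    while (a, b, c, d) != (1, 0, 0, 1):
        if a >= c and b >= d:        # A = R * A'  with  A' = R^{-1} A
            word.append('R')
            a, b = a - c, b - d
        else:                        # A = L * A'  with  A' = L^{-1} A
            word.append('L')
            c, d = c - a, d - b
    return ''.join(word)
```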
Since the combinatorial and geometric axes of a hyperbolic $A\in \PSL_2(\Z)$ have the same endpoints $\alpha_-,\alpha_+ \in \R\P^1 = \partial \H\P = \partial \Tree$, they intersect the ideal triangulation in the same pattern. However the geometric axis also contains the information of the intersection pattern with its first barycentric subdivision, which is equivalent to the isotopy class of the modular geodesic in $\M$.
\begin{comment}
\begin{figure}[h]
\centering
\scalebox{.55}{\input{images/tikz/modular-tesselation_axis}}
\caption*
Geometric axis $\gamma_A$ inside a $(\log{\sqrt{\Delta}})$-neighbourhood of the combinatorial axis $g_A$.}
\end{figure}
\end{comment}
\begin{figure}[h]
\centering
\scalebox{0.32}{\input{images/tikz/axis-RL-PH}}
\scalebox{0.57}{\input{images/tikz/loop-RL-M}}
\hfill
\scalebox{0.32}{\input{images/tikz/axis-RLL-PH}}
\scalebox{0.57}{\input{images/tikz/loop-RLL-M}}
\hfill
\scalebox{0.32}{\input{images/tikz/axis-RLLL-PH}}
\scalebox{0.57}{\input{images/tikz/loop-RLLL-M}}
\caption*{The geometric axes in $\P\H$ and their projections in $\M$ of $RL$, $RLL$, $RLLL$.}
\end{figure}
In \cite[Chapter 3]{CLS_phdthesis_2022}, we recover the isotopy class of $\gamma_A\subset \M$ from the $L\&R$-cycle of $A$.
We also describe when $\gamma_A$ passes through the singular points $i$ or $j$.
Moreover \cite[Lemma 2.27]{CLS_phdthesis_2022} shows that if a (primitive) hyperbolic conjugacy class in $\PSL_2(\Z)$ is stable under inversion, then it contains (exactly $4$) symmetric matrices, and those all lie in $\PSL_2(\N)$ up to inversion.
\begin{comment}
\begin{figure}[h]
\centering
\scalebox{0.8}{\input{images/tikz/loop-RL-M}}
\hspace{1cm}
\scalebox{0.8}{\input{images/tikz/loop-RLL-M}}
\hspace{1cm}
\scalebox{0.8}{\input{images/tikz/loop-RLLL-M}}
\caption*{The modular geodesics $\gamma_A\subset \M$ for $A$ equal to $RL$ and $RLL$ and $RLLL$.}
\end{figure}
\end{comment}
\section{From the geometric cosine to the combinatorial cosign}
\subsection{The functions $\cross$ and $\cosign$.}
We now use the representation $\PSL_2(\Z)=\Aut(\Tree, \cord)$ to find conjugacy invariants for pairs of primitive infinite order elements by comparing the relative positions of their stable subsets, namely their combinatorial axes.
Let us first derive, from the cyclic order function of three points, the crossing function of four points $u,v,x,y\in \Tree \cup \partial \Tree$, defined by:
\begin{equation}
\label{eq:cross}
\cross(u,v,x,y) = \tfrac{1}{2}\left(\cord(u,x,v) - \cord(u,y,v)
\right)\end{equation}
that is the algebraic intersection number of the oriented geodesics $(u,v)$ and $(x,y)$.
We denote $\across(u,v,x,y) \in \{0,\tfrac{1}{2},1\}$ the absolute value of $\cross(u,v,x,y)$ which yields the linking number of the cycles $(u,v)$, $(x,y)$ in the cyclically ordered boundary $\partial \Tree$.
One may compare the formula \eqref{eq:cross} defining $\cross$ with the formula \eqref{eq:bir} defining $\bir$, noticing that $\cord(x,y,z)=\sign \tfrac{y-z}{y-x}$. In particular, for $u,v,x,y\in \R\P^1$ we have \[\across(u,v,x,y) = 1 \iff \bir(u,v,x,y) > 1\]
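This comparison can be checked on sample configurations of four real endpoints (a float sketch following the displayed conventions; names are ours):

```python
def cord(x, y, z):
    """Cyclic order of three distinct reals: sign((y - z)/(y - x))."""
    s = (y - z) / (y - x)
    return (s > 0) - (s < 0)

def cross(u, v, x, y):
    """Algebraic intersection number of the oriented geodesics (u,v) and (x,y)."""
    return (cord(u, x, v) - cord(u, y, v)) / 2

def bir(u, v, x, y):
    return ((v - u)/(v - x)) / ((y - u)/(y - x))
```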
Now consider two oriented bi-infinite geodesics $g_a$ and $g_b$ of $\Tree$. Their intersection is either empty, in which case we set $\cosign(g_a,g_b)=0$, or else it is a geodesic segment containing at least one edge, along which we may compare their orientations to get $\cosign(g_a,g_b)\in \{-1,+1\}$.
The functions $\cross$ and $\cosign$ are $\PSL_2(\Z)$-invariant, symmetric, and inverting the orientation of one argument results in a change of sign.
\begin{figure}[h]
\centering
\scalebox{.65}{\input{images/tikz/cross-cosign-config}}
%
\caption*{Configurations of axes: $\cross$ and $\cosign$. Note that $\cross\ne 0 \implies \cosign = \pm 1$.}
\end{figure}
For hyperbolic $A,B \in \PSL_2(\Z)$ with axes $g_A=(\alpha_-,\alpha_+)$ and $g_B=(\beta_-,\beta_+)$ in $\Tree$, we write $\cross(A,B)=\cross(\alpha_-,\alpha_+,\beta_-,\beta_+)$ and $\cosign(A,B)=\cosign(g_A,g_B)$.
Note that $\cosign(A,B)=1$ if and only if there exists $C\in \PSL_2(\Z)$ such that $CAC^{-1}, CBC^{-1} \in \PSL_2(\N)$ in which case the set of such $C$ corresponds to the edges in $g_A\cap g_B \subset \Tree$.
\begin{Proposition}
\label{Prop:cosign(A,B)=sign(len(AB)-len(A/B))}
For hyperbolic $A,B \in \PSL_2(\Z)$ such that $g_A \cap g_B \ne \emptyset$, we have:
\begin{equation*}
\cosign(A,B)= \sign\left(\len AB -\len AB^{-1} \right).
\end{equation*}
\end{Proposition}
\begin{proof}
This follows from \cite[Proposition 1.6]{Paulin_Gromov-R-trees_1989}, which was corrected by \cite{Conder-Paulin_Erratum-Gromov-R-trees_2020}, and one may also consult \cite[Proposition 2.44]{CLS_phdthesis_2022}.
Compare with the cosine formula in Lemma \ref{Lem:cos-cosh-sinh}.
\end{proof}
\newpage
\subsection{Deforming the \texorpdfstring{$\PSL_2(\Z)$}{PSL(2;Z)}-action on \texorpdfstring{$\H\P$}{HP} to the \texorpdfstring{$\PSL_2(\Z)$}{PSL(2;Z)}-action on \texorpdfstring{$\Tree$}{T}}
Let us define a family of representations $\rho_q \colon \SL_2(\Z) \to \SL_2(\R)$ depending algebraically on the parameter $q\in \R^*$ and with integral coefficients.
The Euclidean algorithm implies that $\SL_2(\Z)$ is generated by $S\&R$, whence by $S\&T$, or $L\&R$.
Fix $S_q=S$ and let $T_q$ be the conjugate of $T$ by $\exp \tfrac{1}{2}\log(q)
\begin{psmallmatrix}
1&0\\0&-1
\end{psmallmatrix}$.
\begin{comment}
\begin{equation*}
T_q =
\begin{pmatrix}
1 & -q \\
q^{-1} & 0
\end{pmatrix}
\quad \mathrm{whence} \quad
R_q =
\begin{pmatrix}
q & 1 \\
0 & q^{-1}
\end{pmatrix}
\quad \mathrm{and} \quad
L_q =
\begin{pmatrix}
q & 0 \\
1 & q^{-1}
\end{pmatrix}
\end{equation*}
\end{comment}
Given $A\in \SL_2(\Z)$, we deduce $A_q=\rho_q(A)$ from any $S\&T$-factorisation by replacing $T\mapsto T_q$, for example:
\begin{equation*}
R_q =
\begin{pmatrix}
q & 1 \\
0 & q^{-1}
\end{pmatrix}
\quad \mathrm{and} \quad
L_q =
\begin{pmatrix}
q & 0 \\
1 & q^{-1}
\end{pmatrix}.
\end{equation*}
This descends to a representation $\Bar{\rho}_q \colon \PSL_2(\Z)\to \PSL_2(\R)$ which is faithful and discrete (because $\disc(R_q) = (q-q^{-1})^2 >0$),
and positive in the sense that $T_q$ is a $2\pi/3$-rotation of $\H\P$ in the positive direction.
Conversely, every such representation is conjugate to $\Bar{\rho}_q$ for a unique $q>0$.
We have therefore parametrized the Teichm\"uller space of $\PSL_2(\Z)$ by the real algebraic set $\R_+^*$.
This Teichm\"uller space corresponds to the set of hyperbolic metrics $\M_q = \Bar{\rho}_q(\Gamma)\backslash\H\P$ on the modular orbifold as a topological space.
Observe intuitively that when $q\to \infty$, the hyperbolic orbifold $\M_q$ has a convex core which retracts onto the long geodesic arc $(i,j_q)$ connecting the conical singularities.
Lifting this in $\H\P$ yields an $\epsilon$-neighbourhood of a trivalent tree $\Tree_q$ with $\epsilon= \Theta\left(1/q^2\right)$.
Since the hyperbolic geodesics of $\M_q$ remain in its convex core, their angles must tend to $0\bmod{\pi}$.
\begin{figure}[h]
\centering
\scalebox{0.9}{\input{images/tikz/orbifold-M-open-cusp}}
\qquad \qquad
\scalebox{0.45}{\input{images/tikz/orbifold-M-convex-core-lift-tree}}
\caption*{The convex core of $\M_q$ lifts in $\H\P$ to an $\epsilon$-neighbourhood of $\Tree_q$ with $\epsilon= \Theta\left(1/q^2\right)$.}
\end{figure}
To make this intuition precise, observe that the geometric invariants $\disc$ and $\cos$ of $A_q,B_q$ define algebraic functions of $q$, whose degrees recover the combinatorial invariants $2\len$ and $\cosign$ of $A,B$.
This should not surprise someone acquainted with compactifications of Teichm\"uller space by actions on trees or by valuations \cite{Otal_compactification-varietes-representations_2015, MS_Aut(CV)_2020}.
Here the unique boundary point $q=\infty$ corresponds to the action on $\Tree$ or to the valuation $-\deg_q$.
\begin{Proposition}
\label{Prop:cos_q-limit}
Consider hyperbolic $A,B\in \PSL_2(\Z)$ such that $\across(A,B)=1$.
For all $q>0$ the elements $A_q,B_q\in \PSL_2(\R)$ are hyperbolic, and their oriented geometric axes intersect at an angle whose cosine is an algebraic function of $q$ with limit $\cos(A_q,B_q) \xrightarrow[q\to \infty]{} \cosign(A,B)$.
\end{Proposition}
\begin{proof}
Lemma \ref{Lem:cos-cosh-sinh} expresses the cosine of the angles between the geometric axes of $A$ and $B$ as:
\begin{equation*}
\cos(A_q,B_q)
= \frac{\Tr(A_qB_q)-\Tr(A_qB_q^{-1})}{\sqrt{\disc(A_q)\disc(B_q)}}.
\end{equation*}
For all $C\in \SL_2(\Z)$ the Laurent polynomial $\Tr(C_q)$ is reciprocal of degree $\len(C)$.
To find the limit as $q\to \infty$, we compute the degrees and dominant terms of the polynomials involved in this expression.
The denominator $\sqrt{\disc(A_q)\disc(B_q)}$ has degree $\len(A)+\len(B)$, while the numerator $\Tr(A_qB_q)-\Tr(A_qB_q^{-1})$ has degree $\max(\len AB, \len AB^{-1})=\len(A)+\len(B)$, the maximum being attained because the axes of $A$ and $B$ cross.
As the dominant coefficients are $1$, the quotient tends to $\pm 1$, the sign being that of $\len AB -\len AB^{-1}$.
Now recall from Proposition \ref{Prop:cosign(A,B)=sign(len(AB)-len(A/B))} that for hyperbolic $A,B \in \PSL_2(\Z)$ whose fixed points are linked we have $\cosign(A,B)= \sign\left(\len AB-\len AB^{-1} \right)$.
This completes the proof.
\end{proof}
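Numerically, for $A=RL$ and $B=LR$ — whose combinatorial axes cross with $\cosign(A,B)=+1$, since both lie in $\PSL_2(\N)$ — the cosine indeed climbs from $3/5$ at $q=1$ towards $1$ (a float sketch, helper names ours):

```python
import math

def mul(X, Y):
    """Product of 2x2 matrices."""
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

def deformed(word, q):
    """A_q from an L&R-word, substituting L -> L_q and R -> R_q."""
    Lq, Rq = [[q, 0.0], [1.0, 1.0/q]], [[q, 1.0], [0.0, 1.0/q]]
    M = [[1.0, 0.0], [0.0, 1.0]]
    for letter in word:
        M = mul(M, Rq if letter == 'R' else Lq)
    return M

def cos_q(word_a, word_b, q):
    """cos(A_q, B_q) = (Tr(A_q B_q) - Tr(A_q B_q^{-1})) / sqrt(disc A_q * disc B_q)."""
    A, B = deformed(word_a, q), deformed(word_b, q)
    Binv = [[B[1][1], -B[0][1]], [-B[1][0], B[0][0]]]   # det B_q = 1
    tr = lambda M: M[0][0] + M[1][1]
    return (tr(mul(A, B)) - tr(mul(A, Binv))) / math.sqrt((tr(A)**2 - 4) * (tr(B)**2 - 4))
```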
\section{The unit tangent bundle to the modular orbifold}
\subsection{Modular knots and links}
\begin{comment}
The Lie group $\PSL_2(\R)$ retracts by deformation onto its maximal compact subgroup $\PSO_2(\R)$.
The fibration $\PSO_2(\R) \to \PSL_2(\R) \to \H\P$ over its symmetric space yields the unit tangent bundle of the hyperbolic plane.
The lattice $\PSL_2(\Z)$ acts on the symmetric space $\H\P$ with quotient $\M$.
Its preimage $\widetilde{\PSL}_2(\Z)$ acts on the left of $\widetilde{\PSL_2}(\R)$ with quotient $\U$ the unit tangent bundle of $\M$.
The fundamental group $\pi_1(\PSL_2(\R))=\pi_1(\PSO_2(\R))=\Z$ corresponds to a discrete normal subgroup of the universal cover $\widetilde{\PSL}_2(\R)$ which is thus central.
We find the diagram of fibrations and covers:
\begin{equation*}
\xymatrix{
\Z \ar[d] \ar[r]
& \R \ar[d] \ar[r]
& \S^1 \ar[d]
\\
\widetilde{\PSL}_2(\Z) \ar[d] \ar[r]
& \widetilde{\PSL}_2(\R) \ar[d] \ar[r]
& \U \ar[d]
\\
\PSL_2(\Z) \ar@[c>][r]
& \H\P \ar[r]
& \M
}
\end{equation*}
Observe the columns: the first is a short exact sequences of groups, the second is a trivial fibration between contractible spaces, and the last is a non-trivial fibration.
The lines are all universal covers, and the first one is a short exact sequence of groups.
\end{comment}
The Lie group $\PSL_2(\R)$ identifies with the unit tangent bundle to the hyperbolic plane $\H\P$.
Its lattice $\PSL_2(\Z)$ acts on the left with quotient $\U=\PSL_2(\Z)\backslash\PSL_2(\R)$ the unit tangent bundle to the modular orbifold $\M= \PSL_2(\Z) \backslash \H\P$.
The fundamental group of $\U$ is the preimage of $\PSL_2(\Z)$ in the universal cover of $\PSL_2(\R)$, given by the central extension:
\begin{equation*}
\Id \to \Z \to \widetilde{\PSL}_2(\Z) \to \PSL_2(\Z) \to \Id
\end{equation*}
and we find that $\pi_1(\U)$ is isomorphic to the braid group on three strands, hence to the fundamental group of a trefoil knot's complement.
In fact, the structure of the Seifert fibration $\U \to \M$ reveals that $\U$ is homeomorphic to the complement of a trefoil knot in the sphere (see \cite{Montesinos_Tesselations_1987, Dehornoy-Pinsky_template-pqr_2018} for such a proof).
In particular, any two disjoint loops in $\U$ have a well defined linking number.
The closed hyperbolic geodesics in $\M$ lift to the periodic orbits of the geodesic flow in its unit tangent bundle $\U$, and the primitive ones trace the so-called \emph{modular knots}.
Together, they form the \emph{master modular link} whose components are indexed by the primitive hyperbolic conjugacy classes in the modular group.
We wish to relate the geometry and topology of the master modular link with the arithmetic and combinatorial properties of the modular group.
\begin{figure}[h]
\centering
\includegraphics[width=0.36\textwidth]{images/misc/seifert_fib_big.jpg}
%
\includegraphics[width=0.48\textwidth]{images/misc/two_modular_knots_5,3,333,200_200,333,3,5.jpg}
\caption*{
The Seifert fibration $\U\to \M$ and two modular knots, from the \href{http://www.josleys.com/articles/ams_article/Lorenz3.htm}{online article} \cite{GhyLey_Lorenz-Modular-visual_2016} which proposes an animated introduction to the topology and dynamics of $\U$.
}
\end{figure}
\subsection{The Lorenz template}
To describe the isotopy class of the master modular link, we rely on the construction of the Lorenz template and its embedding in $\U$, following \cite[§3.4]{Ghys_knots-dynamics_2006}.
The Lorenz template $\Lorenz$ is the branched surface obtained from the ideal triangle $(0,1,\infty)$ of $\H\P$ by identifying the side $(1,\infty)$ with the side $(0,\infty)$ through $R^{-1}$ and the side $(0,1)$ with the side $(0,\infty)$ through $L^{-1}$.
It is endowed with a semi-flow defined by the horizontal vector field whose periodic orbits correspond to the non-empty cycles on $\{L,R\}$ (this is an interval exchange map).
After the embedding $\Lorenz \hookrightarrow \U$ suggested in the following figure, these periodic orbits form \emph{the master Lorenz link}.
Consider a primitive hyperbolic conjugacy class in $\PSL_2(\Z)$: the geometric axes of its Euclidean representatives intersect the ideal triangle $(0,1,\infty)$ in a collection of segments which quotient to a closed connected loop in $\Lorenz$.
This loop is isotopic to the periodic orbit of the semi-flow indexed by the corresponding $L\& R$-cycle.
More precisely, \'E. Ghys \cite[§3.4]{Ghys_knots-dynamics_2006} showed the following.
\begin{Theorem}
The master modular link formed by all modular knots is isotopic to the master Lorenz link formed by the primitive periodic orbits of the semi-flow on the Lorenz template.
In particular, the Rademacher invariant of a primitive hyperbolic conjugacy class in $\PSL_2(\Z)$ equals the linking number between the corresponding modular knot and the trefoil.
\end{Theorem}
\begin{proof}[Outline of the proof]
The Fuchsian representation $\Bar{\rho}_q\colon \PSL_2(\Z) \to \PSL_2(\R)$ with quotient the hyperbolic orbifold $\M_q$ lifts to $\widetilde{\PSL}_2(\Z)\to \widetilde{\PSL}_2(\R)$ with quotient its unit tangent bundle $\U_q$.
Varying $q\in ]1,+\infty[$ yields isotopies between the manifolds $\U_q$ which are all homeomorphic to the complement of a trefoil's neighbourhood, and conjugacies between their geodesic flows whose periodic orbits are indexed by the primitive conjugacy classes of infinite order in $\PSL_2(\Z)$.
As $q\to \infty$ the manifold $\U_q$ retracts onto a branched surface homeomorphic to the Lorenz template, and the master $q$-modular link isotopes to the periodic orbits of its semi-flow.
\end{proof}
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{images/misc/template-big.jpg}
\caption*{Standard projection on $\S^2$ of the Lorenz template embedded in $\S^3$.}
\label{fig:Lorenz-Template}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.42\textwidth]{images/misc/orbit_deformation_blue_1.jpg}
\includegraphics[width=0.42\textwidth]{images/misc/orbit_deformation_blue_2.jpg}
\caption*{Isotopy of a modular knot to a Lorenz knot. The trefoil is in yellow.}
\label{fig:Modular-knot-in-Lorenz-Template}
\end{figure}
\'E. Ghys concludes his paper \cite{Ghys_knots-dynamics_2006} by asking for an arithmetic interpretation of the linking pairing between modular knots.
Note that the embedded Lorenz template provides a framing for the Lorenz knots, which makes it possible to define their \emph{self-linking number} as the linking number between two parallel copies of the knot in the Lorenz template.
\section{Linking numbers of modular knots}
\label{sec:linking-numbers}
\subsection{Invariants on pairs of conjugacy classes}
\label{subsec:F(A,B)}
The action of $\PSL_2(\Z)$ on $\H\P$ and $\Tree$ enabled us to define conjugacy invariants for pairs $(A,B)$ by comparing their stable subsets in $\H\P$ or $\Tree$.
We now explain how to average those in order to obtain functions of pairs of conjugacy classes.
\subsubsection*{Summing over double cosets}
Consider a group $\Pi$ acting on a space $\Sigma$ and a function $f$ defined on $\Sigma \times \Sigma$ with values in a commutative group $\Lambda$ which is invariant under the diagonal action of $\Pi$:
\begin{equation*}
f\colon \Sigma \times \Sigma \to \Lambda
\qquad
\forall W \in \Pi, \;
\forall a,b \in \Sigma
\: \colon \:
f(a,b)=f(W\cdot a, W\cdot b)
\end{equation*}
We define an invariant $F$ for pairs of $\Pi$-orbits $[a],[b]$ by summing $f$ over all pairs of representatives of the orbits considered modulo the diagonal action of $\Pi$.
The pairs of representatives for the orbits are parametrized by the $(U\cdot a,V\cdot b)$ for $(U,V)\in \Pi / (\Stab a) \times \Pi / (\Stab b)$, and the quotient of this set by the diagonal action of $\Pi$ by left translations is denoted $\Pi / (\Stab a) \times_\Pi \Pi / (\Stab b)$.
Consequently, the sum indexed by $(U,V) \in \left(\Pi/ \Stab a \right) \times_\Pi \left(\Pi/ \Stab b \right)$ defines our desired invariant:
\begin{equation*}
F([a],[b]) = \sum f(U\cdot a, V\cdot b)
\end{equation*}
This can also be written as the sum over double cosets $W\in (\Stab a) \backslash \Pi / (\Stab b)$:
\begin{equation*}
F([a],[b]) = \sum f(a,W\cdot b)
\end{equation*}
because the map $(\Pi / \Stab a) \times (\Pi / \Stab b) \to (\Stab a) \backslash \Pi / (\Stab b)$ sending $(U,V)$ to $W=U^{-1}V$ is surjective, and its fibers are the orbits under the diagonal action of $\Pi$ by left translations.
We will apply this discussion to the action of $\PSL_2(\Z)$ on itself by conjugacy to obtain invariants for pairs of primitive hyperbolic conjugacy classes.
Note that in $\PSL_2(\Z)$, the centraliser of a hyperbolic $A$ is the infinite cyclic subgroup generated by its primitive root, namely the unique primitive element with a positive power equal to $A$.
Our functions $f(a,b)$ will be expressed in terms of geometrical invariants such as $\bir(A,B)$ or $\cos(A,B)$, as well as combinatorial invariants such as $\cross(A,B)$ and $\cosign(A,B)$.
For the sum to be well defined, it must either have finite support or converge in a completion of $\Lambda$ for an appropriate norm; this depends on the behaviour of $f$.
\subsubsection*{Summing over $L\&R$-words}
Consider a function $f$ over the pairs of coprime primitive hyperbolic $A,B\in \PSL_2(\Z)$, which is invariant under the diagonal action of $\PSL_2(\Z)$ on itself by left conjugacy.
In order to compute the sum defining $F([A],[B])$, we may group the terms $f(UAU^{-1},VBV^{-1})$ according to the value of $\cosign(UAU^{-1},VBV^{-1}) \in \{-1,0,1\}$ to obtain:
\begin{equation*}
F = F_- + F_0 + F_+
\end{equation*}
The sum $F_+$ has finite support, contained in the set of pairs of Euclidean representatives for the conjugacy classes of $A,B$.
Similarly the sum $F_-$ has finite support, which we may also index by those pairs of Euclidean representatives using the fact that $\cosign(A,B)=-\cosign(A,SBS^{-1})$. Thus for $A,B\in \PSL_2(\N)$ we have the following computable expressions:
\begin{equation*}
F_+([A],[B])=
\sum
f\left(\sigma^iA,\sigma^jB\right)
%
\qquad
%
F_-([A],[B])
=
\sum
f\left(\sigma^iA,S(\sigma^jB)S^{-1}\right)
\end{equation*}
where the indices $i\in [1,\len A]$, $j\in [1,\len B]$ are such that $\sigma^iA$ and $\sigma^jB$ end with different letters.
One may similarly split the sum $F_0$ in two parts according to the relative orientations of the axes (interchanged by the action of $S$ on one of the components of $f$), but their index sets are infinite.
Suppose that $\cosign(A,B)=0\implies f(A,B)=0$ and $f(A,B^{-1})=\epsilon f(A,B)$ with $\epsilon\in \{\pm 1\}$. This holds for $\cross$ \& $\cosign$ with $\epsilon =-1$, and for their product or their absolute values with $\epsilon =1$. Then $F_0=0$ and $F_-(A,B)=\epsilon \cdot F_+(A,{}^t\!B)$ thus $F(A,B)= F_+(A,B)+\epsilon \cdot F_+(A, {}^t\!B)$.
\subsection{Linking numbers from the action on
$(\Tree,\cord)$}
The projection of the Lorenz template yields a diagram for the Lorenz link in which all crossings are positive, and those can be enumerated using the $L\&R$-cycles of the corresponding modular knots. This yields the algorithmic formula \cite[4.27]{CLS_phdthesis_2022} for computing linking numbers, which was used by Pierre Dehornoy in \cite{Dehornoy_noeuds-lorenz_2011}.
We recast it in Proposition \ref{Prop:algo-sum} in terms of the action of $\PSL_2(\Z)$ on $(\Tree,\cord)$,
after introducing the appropriate quantity to be summed.
\begin{Definition}
For oriented bi-infinite geodesics $g_a,g_b \subset \Tree$ with distinct ends we define:
\begin{equation*}
\crocs(g_a,g_b)
=\left(\across \times \frac{1+\cosign}{2}\right)(g_a, g_b)
=\left(\frac{1+\
\cross}{2} \times \frac{1+\cosign}{2}\right)(g_a, g_b)
\end{equation*}
Hence $\crocs(g_a, g_b)=1$ only when the axes cross and their orientation coincides along the intersection.
\end{Definition}
We say that $A,B\in \PSL_2(\Z)$ are \emph{coprime} when their positive powers are never conjugate.
\begin{Proposition}[Algorithmic formula: sum over $L\&R$-words]
\label{Prop:algo-sum}
For coprime hyperbolic elements $A,B\in \PSL_2(\N)$ we have:
\begin{equation*}
\lk(A,B)=\frac{1}{2}\sum \crocs(\sigma^iA, \sigma^jB)
\end{equation*}
ranging over all $i\in [1,\len A]$, $j\in [1,\len B]$ such that $\sigma^iA$ and $\sigma^jB$ end with different letters.
\end{Proposition}
\begin{proof}
The monoid $\PSL_2(\N)$ is endowed with the lexicographic order extending $L<R$.
The crossings between the Lorenz knots associated to $A,B\in \PSL_2(\N)$ are in bijection with the pairs of Euclidean representatives whose last letters are in the opposite order of the words themselves, so that either $\sigma^iA=w_AL$, $\sigma^jB=w_BR$ with $w_A>w_B$ or $\sigma^iA=w_AR$, $\sigma^jB=w_BL$ with $w_A<w_B$.
\end{proof}
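To make the count concrete, here is a short Python sketch (not the author's code) enumerating the crossings as order inversions, under the shift, of the future itineraries of orbit points on the branch line; this is an equivalent reading of the bijection in the proof, with cyclic $L\&R$-words given as strings.

```python
def periodize(word, n):
    """First n letters of the periodisation word^infinity."""
    return (word * (n // len(word) + 1))[:n]

def lk(A, B):
    """Linking number of the Lorenz/modular knots of the cyclic words A, B
    over {'L','R'}: half the number of crossings in the Lorenz template.
    A crossing is an order inversion, under the shift, of the future
    itineraries of two orbit points on the branch line ('L' < 'R' agrees
    with the lexicographic order on strings).
    For A = B this returns the self-linking number with the Lorenz framing."""
    n = len(A) + len(B) + 1  # enough letters to separate distinct orbits
    xs = [periodize(A[i:] + A[:i], n) for i in range(len(A))]
    ys = [periodize(B[j:] + B[:j], n) for j in range(len(B))]
    crossings = 0
    for x in xs:
        for y in ys:
            # strands exit towards different ears and swap their order
            if x[0] == 'L' and y[0] == 'R' and x[1:] > y[1:]:
                crossings += 1
            elif x[0] == 'R' and y[0] == 'L' and x[1:] < y[1:]:
                crossings += 1
    return crossings // 2
```

For instance $\lk(RL,RLL)=1$, and with $A=B=RLL$ the count yields the self-linking number $2$ appearing in the example of the next section.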
We deduce from the previous paragraph a group theoretical formula in terms of double cosets.
\begin{Theorem}[Algebraic formula: sum over double cosets]
\label{Prop:algebra-sum}
For coprime primitive hyperbolic elements $A,B\in \PSL_2(\Z)$:
\begin{equation*}
\lk(A,B)=\frac{1}{2}\sum \crocs(\tilde{A}, \tilde{B})
\end{equation*}
where the sum extends over pairs of representatives $\tilde{A}=UAU^{-1}$ and $\tilde{B}=VBV^{-1}$ for the conjugacy classes with
$(U,V)\in \Gamma/\langle A\rangle \times_\Gamma \Gamma/\langle B\rangle$.
\end{Theorem}
\begin{Remark}
\label{Rem:intersection-from-link}
In particular, we recover the intersection number between modular geodesics as:
\begin{equation*}
\lk(A,B)+\lk(A,B^{-1})=\frac{1}{2}\sum \across(\tilde{A},\tilde{B}) = \tfrac{1}{2}\cdot I(A,B)
\end{equation*}
whereas the sum of the cosign over pairs of intersecting axes yields:
\begin{equation*}
\lk(A,B)-\lk(A,B^{-1})=\frac{1}{2}\sum \left(\across \times \cosign\right)(\tilde{A},\tilde{B}).
\end{equation*}
We deduce an efficient algorithm computing the intersection number $I(A,B)$ from the $L\&R$-factorisation of $A,B$ by applying algorithmic formula to the linking numbers $\lk(A,B)$ and $\lk(A,B^{-1})$.
Note that if $A$ is conjugate to $B$, then $I(A,B)$ is the intersection number between two parallel copies of the modular geodesic, which is twice its self-intersection number (counted as the number of double points).
For instance, the modular geodesic corresponding to $RLL$ has self-intersection \[\tfrac{1}{2}I([RLL],[RLL])=\lk([RLL],[RLL])+\lk([RLL],[RRL])=\tfrac{1}{2}\cdot 4+\tfrac{1}{2}\cdot 2=3,\] where $RRL={}^t(RLL)$ represents the inverse conjugacy class.
\end{Remark}
\section{Linking function on the character variety and its boundary}
Recall the family of Fuchsian representations $\Bar{\rho}_q\colon \PSL_2(\Z) \to \PSL_2(\R)$ parametrized algebraically by $q\in \R^*$.
\begin{Definition}
For primitive hyperbolic $A,B\in \PSL_2(\Z)$, we define the algebraic functions of $q$:
\begin{equation}\label{eq:Link}\tag{$\Link_q$}
\Link_q(A,B)
= \frac{1}{2} \sum \left(\tfrac{\asrt{\bir>1}}{\bir}\right)\left(\tilde{A}_q, \tilde{B}_q\right)
\end{equation}
\begin{equation}\label{eq:Cos}\tag{$\Cos_q$}
\Cos_q(A,B)
= \frac{1}{2} \sum (\across \times \cos)(\tilde{A}_q, \tilde{B}_q)
\end{equation}
by summing over the pairs of representatives $\tilde{A}=UAU^{-1}$ and $\tilde{B}=VBV^{-1}$ for the conjugacy classes of $A$ and $B$ where $(U,V)\in \Gamma/\Stab(A) \times_\Gamma \Gamma/\Stab(B)$.
\end{Definition}
The appearance of $\asrt{\bir>1}=\across$ as a factor in the terms of \ref{eq:Link} and \ref{eq:Cos} amounts to restricting the summations over pairs of matrices whose axes intersect.
Hence the support of the sums corresponds to the intersection points of the modular geodesics $\gamma_A$ and $\gamma_B$ associated to the conjugacy classes, which must be counted with appropriate multiplicity when $A$ or $B$ is not primitive. Thus:
\begin{equation*}
\Link_q(A,B)
= \frac{1}{2} \sum \left(\cos \tfrac{\theta}{2}\right)^2
\qquad \mathrm{and} \qquad
\Cos_q(A,B)
= \frac{1}{2} \sum \left(\cos \theta\right).
\end{equation*}
Observe that since $\bir(A,B)^{-1}+\bir(A,B^{-1})^{-1}=1$ we have $\Link_q(A,B)+\Link_q(A,B^{-1})=\tfrac{1}{2}I(A,B)$.
\begin{Conjecture}
The angles turning from $\gamma_{A_q}$ to $\gamma_{B_q}$ in the direction prescribed by the orientation of $\M_q$ have cosines $(\cross \times \cos)(\tilde{A_q},\tilde{B_q})$: we believe that they sum up to $0$, as explained in \ref{subsec:Link-Fuchsian-group}.
\end{Conjecture}
\begin{Theorem}
\label{Thm:Bir(A,B)-->lk(A,B)}
For primitive hyperbolic conjugacy classes $[A],[B]$ in $\PSL_2(\Z)$ we have:
\begin{align*}
&\Link_q(A,B) \xrightarrow[q\to \infty]{} \lk(A,B)
\\
&\Cos_q(A,B) \xrightarrow[q\to \infty]{}
\lk(A,B)-\lk(A^{-1},B)
=2\lk(A,B)-\tfrac{1}{2}I(A,B)
\end{align*}
\end{Theorem}
\begin{proof}
Recall from Lemma \ref{Lem:cos-cosh-sinh} the relation $1/\bir(A_q,B_q)=\tfrac{1}{2}(1+\cos(A_q,B_q))$ and from Proposition \ref{Prop:cos_q-limit} the limit $\cos(A_q,B_q)\to \cosign(A,B)$ as $q\to \infty$. Hence the terms of the sum defining \ref{eq:Link} converge to those in the sum of Proposition \ref{Prop:algebra-sum}. The limit of \ref{eq:Cos} follows from Remark \ref{Rem:intersection-from-link}.
\end{proof}
Let us display the graphs of $\textcolor{blue}{q\mapsto2\Link_q(A,B)}$ and $\textcolor{red}{q\mapsto2\Link_q(A,B^{-1})}$ along with their average $\textcolor{black!50!green}{\tfrac{1}{2}I(A,B)}$ for some pairs $A,B\in \PSL_2(\N)$. The legend $A=[a_0,a_1,\dots]$ means $A=R^{a_0}L^{a_1}\dots$.
\begin{figure}[h]
\centering
\includegraphics[width=0.36\textwidth]{images/python/qLink_1,2-1,2_sample=42.png}
\hspace{-0.9cm}
\includegraphics[width=0.36\textwidth]{images/python/qLink_1,2-1,2,3,4_sample=42.png}
\hspace{-0.9cm}
\includegraphics[width=0.36\textwidth]{images/python/qLink_3,1,2,3-7,2,1,1_sample=42.png}
\vspace{-0.4cm}
\caption*{\textcolor{blue}{$L_q(A,B)$} interpolates between the arithmetic at $1$ and the topology at $+\infty$.}
\end{figure}
\subsection{Graphs of $\Link_q(A,B)$ for $q\in \C$}
Finally, we represent some graphs of $\Link_q(A,B)$ for $q\in \C$. Since $\Link_q=\Link_{1/q}$ we restrict to $\lvert q \rvert < 1+\epsilon$ for some $\epsilon>0$ chosen according to aesthetic criteria.
For this we assign a colour to each point of the complex plane using the HSV colour scheme: the hue varies with the argument, and the brightness varies with the modulus.
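This colouring scheme can be sketched in a few lines of Python (a simplified stand-in for the scripts that produced the figures below; the grid size, radius and brightness profile are arbitrary choices):

```python
import cmath
import colorsys

def domain_color(f, size=50, radius=1.6):
    """Colour the square of side 2*radius by HSV: the hue varies with the
    argument of f(q) and the brightness with its modulus."""
    img = []
    for i in range(size):
        row = []
        for j in range(size):
            q = complex(-radius + 2 * radius * j / (size - 1),
                        -radius + 2 * radius * i / (size - 1))
            w = f(q)
            hue = (cmath.phase(w) / (2 * cmath.pi)) % 1.0
            val = 1 - 1 / (1 + abs(w) ** 0.3)  # 0 at zeros, tends to 1 at poles
            row.append(colorsys.hsv_to_rgb(hue, 1.0, val))
        img.append(row)
    return img  # size x size grid of (r, g, b) triples in [0, 1]
```

Applied to the identity map $q\mapsto q$ it reproduces the reference picture below; zeros appear as dark points and poles as bright ones, with the hue cycling around each.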
\begin{figure}[h]
\centering
\vspace{-0.2cm}
\includegraphics[width=0.42\textwidth]{images/python/Identity_square=2_res=640}
\vspace{-0.4cm}
\caption*{The identity map for $q\in \C$ with $\lvert q \rvert < 2$.}
\end{figure}
\begin{figure}[h]
\centering
\vspace{-0.4cm}
\hspace{-3cm}
\includegraphics[width=0.52\textwidth]{images/python/qLink_1,3-1,3_complex_large_640x640.png}
\hspace{-3cm}
\includegraphics[width=0.52\textwidth]{images/python/qLink_1,5-1,5_complex_large_640x640.png}
\hspace{-3cm}
\includegraphics[width=0.52\textwidth]{images/python/qLink_1,8-1,8_complex_large_640x640.png}
\\
\hspace{-3cm}
\includegraphics[width=0.52\textwidth]{images/python/qLink_1,3-3,1_complex_large_640x640.png}
\hspace{-3cm}
\includegraphics[width=0.52\textwidth]{images/python/qLink_1,5-5,1_complex_large_640x640.png}
\hspace{-3cm}
\includegraphics[width=0.52\textwidth]{images/python/qLink_1,8-8,1_complex_large_640x640.png}
\hspace{-0.4cm}
\caption*{Graphs of $\Link_q(A,B)$ for $q\in \C$ with $\lvert q\rvert < 1.6$.}
\end{figure}
The main observation is that the zeros and poles of $\Link_q(A,B)$ seem to concentrate on the unit circle.
This is neither surprising nor obvious, as we explain in the next paragraph.
\subsection{Locating the zeros of \texorpdfstring{$\Link_q$}{L_q}}
\label{subsec:L_q(A,B)=0}
Let us first recall \cite[Proposition 5.16]{CLS_phdthesis_2022}.
A primitive hyperbolic conjugacy class in $\PSL_2(\Z)=\pi_1(\M)$ corresponds to a primitive modular geodesic $\gamma_A\subset \M$. It lifts to a modular knot $k_A \subset \U$ which in turn yields a conjugacy class in $\BB_3=\pi_1(\U)$.
A conjugacy class in the braid group on three strands defines, by taking its closure, a link $\sigma_A$ in a solid torus.
In \cite[Proposition 5.16]{CLS_phdthesis_2022} we relate the Alexander polynomial $\Delta(\sigma_A)\in \Z[t^{\pm 1}]$ of this link $\sigma_A$ to the Fricke polynomial $\Tr A_q \in \Z[q^{\pm 1}]$ of the modular geodesic $\gamma_A$.
\begin{Proposition}
For a primitive hyperbolic $A\in \PSL_2(\Z)$, the Alexander polynomial of the link $\sigma_A$ is given in terms of $q=\sqrt{-t}$ by: \[\Delta(\sigma_A)=\tfrac{q^{\Rad(A)}-\Tr(A_q)+q^{-\Rad(A)}}{(q-q^{-1})^2}\]
\end{Proposition}
\begin{figure}[h]
\centering
\vspace{-0.4cm}
\hspace{-3cm}
\includegraphics[width=0.52\textwidth]{images/python/qLink_2,3-2,3_complex_large_640x640.png}
\hspace{-3cm}
\includegraphics[width=0.52\textwidth]{images/python/qLink_1,2-3,1_complex_large_640x640.png}
\hspace{-3cm}
\includegraphics[width=0.52\textwidth]{images/python/qLink_1,3-1,5_complex_large_640x640.png}
\\ \vspace{-0.4cm}
\hspace{-3cm}
\includegraphics[width=0.52\textwidth]{images/python/qLink_2,3-3,2_complex_large_640x640.png}
\hspace{-3cm}
\includegraphics[width=0.52\textwidth]{images/python/qLink_1,3-2,3_complex_large_640x640.png}
\hspace{-3cm}
\includegraphics[width=0.52\textwidth]{images/python/qLink_3,1,2,3-7,2,1,1_complex_480x480.png}
\\
\vspace{-0.4cm}
\caption*{Graphs of $\Link_q(A,B)$ for $q\in \C$ with $\lvert q\rvert < 1.6$.}
\end{figure}
Locating the zeros of Alexander polynomials of various classes of knots and links has been the subject of various conjectures and results. For instance \cite{Stoimenov_alexander-hoste-conjecture_2019} studies a conjecture of Hoste for links with braid index $3$.
We should also mention that \cite{Dehornoy_zeros-alex-modular-knots_2015} has shown such a concentration property for the zeroes of the Alexander polynomial of a Lorenz knot: they lie in an annulus whose inner and outer radii are bounded in terms of the genus and the braid index of the knot.
Now recall that $\Link_q(A,B)-\Link_q(A,B^{-1})$ can be expressed as finite sum of terms of the form:
\begin{equation*}
\cos(A_q,B_q) = \tfrac{\Tr(A_qB_q)-\Tr(A_qB_q^{-1})}{\sqrt{\disc A_q\disc B_q}}
\qquad \mathrm{where} \qquad
\disc(C_q)= (\Tr C_q)^2-4
\end{equation*}
This is why one may guess a concentration property for the zeros of $\Link_q$ around the unit circle.
Still, it would remain a challenge to prove it.
\newpage
\section{Linking numbers and homogeneous quasimorphisms}
\label{sec:quasi-morphism}
\subsection{Combinatorial formula: sum of linked patterns}
We now derive a combinatorial formula for the linking numbers \cite[Proposition 4.34]{CLS_phdthesis_2022} arising from a different count of the crossings in the Lorenz template. It follows from the algorithmic formula, but we propose a visual proof.
The monoid $\PSL_2(\N)$ freely generated by $L\&R$ is given the lexicographic order extending $L<R$.
The monoid $\PSL_2(\N)\setminus\{\Id\}$ maps to the set $\{L,R\}^{\N}$ of infinite binary sequences by sending a finite word $A$ to its periodisation $A^\infty$. This map is increasing and injective in restriction to primitive words.
We denote by $\sigma$ the Bernoulli shift on $\{L,R\}^\N$ which removes the first letter, as well as the cyclic shift on $\PSL_2(\N)\setminus\{\Id\}$ which moves the first letter at the end.
These shifts are intertwined by the periodisation map: $(\sigma^j A)^\infty = \sigma^j(A^\infty)$.
For a pattern $P\in \PSL_2(\N)$ and an infinite order $A\in \PSL_2(\N)$, let $\pref_P(A^\infty)=\asrt{A^\infty\in P\cdot\PSL_2(\N)}\in \{0,1\}$ tell whether $P$ is a prefix of $A^\infty$, and $\occ_P(A) = \sum_{j=1}^{\len A}
\pref_P\left(\sigma^jA^{\infty} \right)$ count the number of cyclic occurrences of $P$ in $A\bmod{\sigma}$.
Recall that $A,B\in \PSL_2(\Z)$ are coprime when their positive powers are never conjugate.
Thus $A,B\in \PSL_2(\N)$ are not coprime when they admit cyclic permutations generating submonoids with non-trivial intersection, in other terms if $A^\infty = B^\infty \bmod{\sigma}$.
\begin{Proposition}[Combinatorial formula: sum of linked patterns]
\label{Prop:sum-linked-patterns}
For coprime hyperbolic $A,B\in \PSL_2(\N)$ the corresponding modular knots have linking number:
\begin{equation}
\label{eq:sum-linked-patterns}\tag{SLP}
\lk(A,B) = \frac{1}{2} \sum_{w}
\begin{pmatrix}
\occ_{RwL}(A)\cdot \occ_{LwR}(B)
\\+\\
\occ_{LwR}(A)\cdot \occ_{RwL}(B)
\end{pmatrix}
\end{equation}
where the summation extends over all words $w\in \PSL_2(\N)$ including the empty one.
\end{Proposition}
\begin{proof}[Visual proof sketch]
Split the Lorenz template by extending the dividing line backwards in time, and observe the crossings appearing in its standard planar projection: they occur in regions arranged according to a binary tree indexed by pairs of words of the form $(RwL,LwR)$.
\begin{comment}
\begin{figure}[h]
\centering
\includegraphics[ width=0.49\textwidth]{images/misc/template_coupe_2.jpg}
\hfill
\includegraphics[ width=0.49\textwidth]{images/misc/template_coupe_3.jpg}
\end{figure}
\end{comment}
\end{proof}
\begin{Remark}
\label{rem:long-patterns}
For $\len(P)\ge \len(A)$ we have $\occ_P(A)>0$ if and only if $A^\infty = P^\infty \bmod{\sigma}$, which is equivalent to the non-coprimality of $P$ and $A$.
Hence the coprimality assumption on $A$ and $B$ ensures that the support of the sum \eqref{eq:sum-linked-patterns} is contained in the set of $w$ such that $\len w < \max\{\len A,\len B\}$.
If $A$ and $B$ are not coprime, then they are conjugate to positive powers $C^m$ and $C^n$ of a primitive $C\in \PSL_2(\N)$, and restricting the sum \eqref{eq:sum-linked-patterns} to the indices $w$ with $\len w < \max\{\len A,\len B\}$ yields $mn$ times the self-linking number of the modular knot associated to $C$ with the Lorenz framing.
\end{Remark}
\begin{Remark}
An $L\&R$-cycle $A \bmod{\sigma}$ has a multiset of $L$-exponents and a multiset of $R$-exponents.
Formula \eqref{eq:sum-linked-patterns} shows that $\lk(A,RL^{m+1})-\lk(A,RL^{m})$ counts the number of $L$-exponents which are $>m\ge 1$ and that $\lk(A,LR^{n+1})-\lk(A,LR^{n})$ counts the number of $R$-exponents which are $>n\ge 1$.
\end{Remark}
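The sum of linked patterns is straightforward to implement; the Python sketch below (not the author's code) truncates the sum to $\len w < \max\{\len A,\len B\}$, as justified by the preceding remarks, and for $A=B$ returns the self-linking number with the Lorenz framing.

```python
from itertools import product

def occ(P, A):
    """Cyclic occurrences of the pattern P in the cyclic word A: shifts of A
    whose periodisation admits P as a prefix."""
    return sum(all(P[k] == A[(j + k) % len(A)] for k in range(len(P)))
               for j in range(len(A)))

def lk_patterns(A, B):
    """Linking number of the modular knots of the cyclic words A, B over
    {'L','R'}, via the sum-of-linked-patterns formula (SLP)."""
    total = 0
    for l in range(max(len(A), len(B))):        # len(w) < max suffices
        for w in map(''.join, product('LR', repeat=l)):
            total += occ('R' + w + 'L', A) * occ('L' + w + 'R', B)
            total += occ('L' + w + 'R', A) * occ('R' + w + 'L', B)
    return total // 2
```

For instance it recovers $\lk(RL,RLL)=1$ and the self-linking number $2$ of the cyclic word $RLL$.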
\begin{Theorem}
\label{Thm:linkeq_implies_conjugate}
If hyperbolic $A,B\in \PSL_2(\Z)$ are link equivalent, namely $\lk(A,C)=\lk(B,C)$ for all hyperbolic $C \in \PSL_2(\Z)$,
then they are conjugate.
\end{Theorem}
\begin{proof}
The set $\mathcal{Z}=R\PSL_2(\N)L\sqcup L\PSL_2(\N)R$ of $L\&R$-words which start \& end with different letters is endowed with the involution $p\mapsto \Bar{p}$ exchanging the extremal letters, having no fixed points.
The free $\Z$-module $\Omega$ generated by the set $\mathcal{Z}$ is naturally decomposed as the direct sum of rank-$2$ sub-modules generated by pairs $\{z,\Bar{z}\}$.
It is therefore endowed with a non-degenerate symmetric bilinear form $\Omega \times \Omega \to \Z$, given by the direct sum of the hyperbolic structures on those planes:
\begin{equation*}
\Omega
= \bigoplus_{z\in \mathcal{Z}} \Z\cdot z
= \bigoplus_{z>\Bar{z}} \Z\cdot z \oplus \Z\cdot \Bar{z}
\qquad
(a\cdot b)
= \sum_{z\in \mathcal{Z}} a_z b_{\Bar{z}}
= \sum_{z>\Bar{z}} (a_z b_{\Bar{z}} + a_{\Bar{z}} b_{z})
\end{equation*}
The length function $\len \colon \PSL_2(\N) \to \N$ yields a filtration of the set $\mathcal{Z}$ by the chain of subsets $\mathcal{Z}_n = \{z \in \mathcal{Z} \mid \len(z)\le n\}$ of cardinality $2^n-2$, which is invariant by the involution.
This induces a filtration of the module $\Omega$ by the corresponding chain of sub-modules $\Omega_n$ of rank $2^n-2$, all invariant under the orthogonal symmetry.
Thus each unimodular quadratic $\Z$-module $\Omega_n$ (decomposed as a direct sum of hyperbolic planes) is canonically isomorphic to its dual $\Omega_n^*$.
Now consider cyclic words $A,B\in \PSL_2(\N) \bmod{\sigma}$ corresponding to link equivalent hyperbolic conjugacy classes in $\PSL_2(\Z)$.
Let $m=\max\{\len(A),\len(B)\}$ and consider the linear forms on $\Omega_{m}$ defined by the sequences $(\occ_z(A))_{z}$ and $(\occ_z(B))_z$ for $z\in \mathcal{Z}_m$.
Since $A,B$ are link equivalent, the isomorphism $\Omega_m\to \Omega_m^*$ together with Proposition \ref{Prop:sum-linked-patterns} and Remark \ref{rem:long-patterns} implies that these linear forms coincide, so that $\occ_z(A)=\occ_z(B)$ for all $z\in \mathcal{Z}_m$.
In particular, for $z$ a linear representative of $B$ we find that $\occ_B(A)=\occ_B(B)>0$ whereby $A=B \bmod{\sigma}$.
\end{proof}
\begin{comment}
\begin{Remark}
Notice that the cyclic shift acting on $\mathcal{Z}$ preserves the $\mathcal{Z}_n$, but does not commute with the involution for $n>2$ as these two actually generate the full group of permutations $\mathfrak{S}_n$.
\end{Remark}
\end{comment}
\subsection{Homogeneous quasi-morphisms on the modular group}
For a group $\Pi$, a function $f\colon \Pi \to \R$ is called a \emph{quasi-morphism} if it has a bounded derivative:
\[df(A,B)=f(B)-f(AB)+f(A).\]
A quasi-morphism $f\colon \Pi \to \R$ is called \emph{homogeneous} if it is a morphism in restriction to the abelian subgroups of $\Pi$ (which in the case $\Pi = \Gamma$ means that $f(A^n)=nf(A)$ for all $A\in \Gamma$ and $n\in \N$).
Observe that a homogeneous quasi-morphism is bounded only if it is trivial; it is necessarily constant on conjugacy classes, and vanishes on torsion classes.
The real vector space $PX(\Pi;\R)$ of homogeneous quasi-morphisms is a Banach space for the norm $\lVert df \rVert_\infty$, as was shown in \cite{MatsuMorita_Hb(Homeo)_1985, Ivanov_H2b(G)-Banach_1988}.
For a pattern $P\in \PSL_2(\N)$ we define the $P$-asymmetry of an infinite order $A\in\PSL_2(\N)$ by \[\mes_P(A)=\occ_P(A)-\occ_{{}^t\!P}(A)\]
Notice that $\mes_P(A)=\occ_P(A)-\occ_P({}^t\!A)$ and that ${}^t\!A$ is conjugate to $A^{-1}$ by $S\in \PSL_2(\Z)$.
Extending $\mes_P(A)=0$ for elliptic $A$ yields a conjugacy invariant function $\mes_P \colon \PSL_2(\Z)\to \Z$.
In particular for $P=R$ we recover the \emph{Rademacher function} as $\mes_P(A) = \Rad(A)$.
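In code, the $P$-asymmetry and the Rademacher function read as follows (a sketch under the conventions above, where the transpose of a word reverses it and exchanges $L \leftrightarrow R$):

```python
def occ(P, A):
    """Cyclic occurrences of the pattern P in the cyclic word A."""
    return sum(all(P[k] == A[(j + k) % len(A)] for k in range(len(P)))
               for j in range(len(A)))

def transpose(P):
    """Matrix transposition reverses the word and exchanges L <-> R."""
    return ''.join('R' if c == 'L' else 'L' for c in reversed(P))

def mes(P, A):
    """P-asymmetry of the cyclic word A."""
    return occ(P, A) - occ(transpose(P), A)

def rad(A):
    """Rademacher function: the case P = 'R' counts #R - #L."""
    return mes('R', A)
```

Note that a symmetric pattern such as $RL$ (whose transpose is itself) has vanishing asymmetry on every word.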
\begin{Lemma}
\label{Lem:cocycle}
For all $P\in \PSL_2(\N)$, the function $\mes_P \colon \PSL_2(\Z)\to \Z$ is a homogeneous quasi-morphism. If $P\ne {}^t\!P$ then $\mes_P$ is unbounded, and if $P$ does not overlap itself then $\lVert d\mes_P \rVert_\infty \le 6$.
\end{Lemma}
\begin{proof}
The proof relies on the ideas in \cite{BarGhys_cocycles-actions-arbres_1991} (see also \cite[Lemma 5.3]{Grigorchuk_bounded-cohomology_1995}).
\end{proof}
\begin{Theorem}
\label{Thm:Cos_A}
For every hyperbolic $A\in \PSL_2(\Z)$, the function $\Cos_A\colon B\mapsto \lk(A,B)-\lk(A^{-1}, B)$ is a homogeneous quasi-morphism $\PSL_2(\Z)\to \Z$, which is unbounded unless $A$ is conjugate to $A^{-1}$.
It can be computed for $A,B\in \PSL_2(\N)$ as:
\begin{equation}
\label{eq:Cos_A} \tag{$\Cos_A$}
\Cos_A(B) = \lk(A,B)-\lk(A, {}^t\!B)
= \frac{1}{2} \sum_{w}
\begin{pmatrix}
\occ_{RwL}(A)\cdot \mes_{LwR}(B)
\\+\\
\occ_{LwR}(A)\cdot \mes_{RwL}(B)
\end{pmatrix}
\end{equation}
where the summation extends over all words $w\in \PSL_2(\N)$ with $\len(w)<\max\{\len A, \len B\}$.
\end{Theorem}
\begin{proof}
The quantity $\Cos_A(B)$ is homogeneous in $A$ and $B$.
Let us explain why $\Cos_A$ is a quasi-morphism for $A$ primitive.
Recall that $A^{-1}$ and ${}^t\!A$ are conjugate by $S$ and notice that $\lk(A^{-1},B)=\lk(A, B^{-1})$.
Therefore \eqref{eq:sum-linked-patterns} yields \eqref{eq:Cos_A} and $d\Cos_A=\tfrac{1}{2}\sum_w \left(\occ_{RwL}(A) \cdot d\mes_{LwR}+\occ_{LwR}(A) \cdot d\mes_{RwL}\right)$.
Since $\occ_P(A)\le \len(A)$, by Lemma \ref{Lem:cocycle} it is enough to prove that for every $X,Y\in \PSL_2(\Z)$, the sum $d\Cos_A(X,Y)$ contains at most $\len(A)^2$ non-zero terms with $\len(w)\ge \len(A)$.
The $L\&R$-words $w$ such that $\occ_{LwR}(A)>0$ correspond to the triples $1\le m,n\le \len(A)$ and $k\in \N$ such that $\sigma^mA=LuRv$, $\sigma^nA=RvLu$, and $LwR=L(uRvL)^kuR$ for some $L\& R$-words $u,v$. In this situation we write $P_{mn}^k=LwR=L(uRvL)^kuR$ and $Q_{mn}^k=RwL=R(uRvL)^kuL$.
By construction (and the primitivity of $A$), two distinct $Q_{mn}^k$ cannot overlap except along a prefix and suffix of length $<\len(A)$.
We may thus adapt the argument for \cite[Proposition 5.10]{Grigorchuk_bounded-cohomology_1995}.
The quantity $d\mes_Q(X,Y)$ measures the ``$Q$-perimeter'' of a tripod $(*,X*,XY*)$ in the tree $\Tree$, and it is non-zero only if $Q$ can be matched along a portion covering its incenter.
But for each $(m,n)$, at most two values of $k>1$ may lead to such patterns $Q_{mn}^k$.
The same reasoning applies with $L$\&$R$ interchanged.
This proves the bound on the number of non-zero summands for $d\Cos_A$.
Finally by Theorem \ref{Thm:linkeq_implies_conjugate} we have $d\Cos_A=0$ only if $A$ is conjugate to ${}^t\!A$.
\end{proof}
Let $\mathcal{P}$ denote the set of \emph{Lyndon words}, namely the $L\&R$-words which are greater than all of their other cyclic permutations.
Notice that such words are primitive, and cannot overlap themselves.
Hence if a Lyndon word is equal to a cyclic permutation of its transpose then it is actually symmetric. Let $\mathcal{P}_0$ be the subset of symmetric Lyndon words and choose a partition $\mathcal{P}\setminus \mathcal{P}_0=\mathcal{P}_-\sqcup \mathcal{P}_+$ into two subsets which are in bijection under transposition.
Observe that $\mathcal{P}$ indexes the set of primitive infinite order conjugacy classes in $\Gamma$, and $\mathcal{P}_0$ the subset of those which are stable under inversion.
Of course $\Id \in \mathcal{P}_0$, we may choose $R\in \mathcal{P}_+$, and denote $\Cos_R:=\mes_R=\Rad$ by convention.
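These combinatorial notions are easy to experiment with. A minimal sketch (the helper names, and the convention that the letter $R$ is greater than $L$, are our own assumptions):

```python
def rotations(w):
    """All cyclic permutations of the word w."""
    return [w[i:] + w[:i] for i in range(len(w))]

def is_lyndon(w):
    """Lyndon word in the sense used here: strictly greater than every
    other cyclic permutation (strictness forces primitivity)."""
    return all(w > r for r in rotations(w)[1:])

def transpose(w):
    """Transpose of an L&R-word: reverse the word and swap L <-> R,
    since transposition reverses products and t(L) = R."""
    return w[::-1].translate(str.maketrans("LR", "RL"))

def is_symmetric(w):
    """Symmetric words equal their own transpose (they index P_0)."""
    return w == transpose(w)
```

For instance, RRL and its transpose RLL are distinct Lyndon words forming a pair split between $\mathcal{P}_+$ and $\mathcal{P}_-$, while RL is a symmetric Lyndon word and lies in $\mathcal{P}_0$.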
\begin{Proposition}
The collection of $\Cos_A\in PX(\Gamma;\R)$ for $A\in \mathcal{P}_+$ is linearly independent.
\end{Proposition}
\begin{proof}
Consider distinct Lyndon words $A_1,\dots,A_k\in \mathcal{P}_+\setminus\{R\}$, and let $A_1,\dots,A_j$ be those of maximal length $m$.
The $\Cos_{A_i}$ are linearly independent of $\Cos_R$ since $\Cos_{A_i}(R)=0$ whereas $\Cos_R(R)=1$.
Suppose by contradiction that we have a linear relation $\sum r_i \Cos_{A_i}=0$ for $r_i\in \R^*$.
As in the proof of Theorem \ref{Thm:linkeq_implies_conjugate}, this restricts to a linear relation in $\Omega_m^*$, and using the isomorphism $\Omega_m\to \Omega_m^*$ we find that $\sum r_i \occ_P(A_i) = \sum r_i \occ_P({}^t\!A_i)$ for all $P\in \mathcal{Z}_m$.
For $P=A_j$, we have $\occ_{P}(A_i)=0=\occ_{P}({}^t\!A_i)$ for all $i>j$ because $A_j$ cannot overlap itself, and $\occ_{P}(A_i)=0=\occ_{P}({}^t\!A_i)$ for $i<j$ because the Lyndon word $A_j$ is different from $A_i$.
We also have $\occ_{P}({}^t\!A_j)=0$ since $A_j$ is not conjugate to its inverse, whence not link equivalent to its transpose by Theorem \ref{Thm:linkeq_implies_conjugate}.
Since $\occ_{P}(A_j)=1$ we have $r_j=0$, which is the desired contradiction.
\end{proof}
\begin{Proposition}
Every $f\in PX(\Gamma;\R)$ can be written as $f=\sum_{A\in \mathcal{P}_+} cf_A \cdot \Cos_A$ for unique $cf_A \in \R$.
\end{Proposition}
\begin{proof}
A homogeneous quasi-morphism $f\in PX(\Gamma;\R)$ descends to a function on the set of infinite order primitive conjugacy classes, that is on $\mathcal{P}$. It must vanish on $\mathcal{P}_0$ and change sign under transposition, so its restriction to $\mathcal{P}_+$ uniquely determines $f$.
Since $\Cos_A(R)=0$ for all $A\in \mathcal{P}_+\setminus \{R\}$ and $\Cos_R(R)=1$ we may assume $f(R)=0$ from now on.
Recall the structure of the filtered quadratic space $\Omega$ introduced in the previous proofs.
Notice that the cyclic shift acting on $\mathcal{Z}$ preserves the $\mathcal{Z}_n$ (but does not commute with the involution $z\mapsto \Bar{z}$ for $n>2$; together they generate a dihedral group of permutations).
Fix $m\in \N$ and consider the subspace $\Lambda^*_m \subset \Omega^*_m$ of elements which are invariant under the shift $\sigma$ and change sign under transposition.
Its elements are uniquely determined by their values on $\mathcal{L}_m := \mathcal{Z}_m \cap \mathcal{P}_+$.
It contains the $(\mes_P(z))_{z\in \mathcal{Z}}$ for $P\in \mathcal{L}_m$, as well as the $(\Cos_A(z))_{z\in \mathcal{Z}}$ for $A\in \mathcal{L}_m$.
The previous proof shows that the latter family is free and can be expressed in linear combinations of the former, so both of these form bases of $\Lambda^*_m$, whose dimension equals the cardinality of $\mathcal{L}_m$.
Hence the restriction $f_m\in \Lambda^*_m$ of $f$ to $\mathcal{L}_m$ can be expressed as a linear combination of the $\mes_P$ or of the $\Cos_A$.
We thus have a projective system of elements $f_m$ in the vector spaces $\Lambda^*_{m}$ with compatible bases, so the coefficients of the limit $f=\varprojlim f_m$ are well defined in either basis.
\end{proof}
In passing, we recovered the following reformulation of \cite[Theorem 5.11]{Grigorchuk_bounded-cohomology_1995}.
\begin{Corollary}
\label{Cor:mes_P-basis}
The collection of $\mes_P\in PX(\Gamma;\R)$ for $P\in \mathcal{P}_+$ is linearly independent, and every element $f\in PX(\Gamma;\R)$ can be written as $f=\sum_{P\in \mathcal{P}_+} mf_P \mes_P$ for unique $mf_P \in \R$.
\end{Corollary}
\section{Further directions of research}
\subsection{Linking forms of Fuchsian groups}
\label{subsec:Link-Fuchsian-group}
To begin with, we may compare the definitions of the functions $\Link_q$ and $\Cos_q$ and their limiting behaviour at $q=\infty$ with similar considerations which have been made for non-oriented loops in a closed surface $S$ of genus $g\ge 2$.
Such loops, corresponding to the conjugacy classes of $\alpha,\beta\in \pi_1(S)$ up to inversion, define trace functions $\Tr(\alpha), \Tr(\beta)$ on the $\SL_2(\C)$-character variety of $\pi_1(S)$, whose real locus contains the Teichm\"uller space of $S$ as a Zariski dense open set.
This character variety carries a natural symplectic structure \cite{Goldman_symplectic-nature-pi1_1984}, given by the Weil-Petersson symplectic form.
The sum $\Cos_q(A,B)$ looks very much like Wolpert's cosine formula \cite{Wolpert_fenchel-nielsen-deformation_1982, Wolpert_formula-cosine-Fenchel-Nielsen_1982} computing the Poisson bracket $\{\Tr(\alpha),\Tr(\beta)\}$ of the trace functions.
In fact, Wolpert sums the $\cross(\alpha,\beta)\cos(\alpha,\beta)$ over the intersection points $p\in \alpha\cap \beta$, that is the cosines of the angles turning from $\alpha$ to $\beta$ in the direction prescribed by the orientation of the surface.
Hence while our cosine formula is a symmetric formula of oriented geodesics, Wolpert's cosine formula yields a skew-symmetric function of non-oriented geodesics.
Note however that the Teichm\"uller space of $\M$ is reduced to a point, so any Poisson structure in the usual sense would be trivial; in our setting we therefore expect Wolpert's sum to be identically zero (as corroborated by our computer experiments).
Moreover, the Weil-Petersson symplectic form has been extended to several compactifications of the character variety \cite{PapadoPenne_forme-symplectic-bord-Teichmuller_1991, Sozen-Bonahon_weil-petersson-thurston-symplectic_2001, MS_ML-Newton-Poisson_2021}.
The limits of the Poisson bracket $\{\Tr(\alpha), \Tr(\beta)\}$ at the respective boundary points have been interpreted in \cite[Proposition 6]{Bonahon_earthquake-mesaured-laminations_1992} and \cite{MS_ML-Newton-Poisson_2021}.
Thus, we may generalise the definitions of our functions \eqref{eq:Link} \& \eqref{eq:Cos} to oriented geodesics in hyperbolic surfaces and ask for an interpretation of their limits at boundary points of the Teichm\"uller space.
We believe that \eqref{eq:Link} \& \eqref{eq:Cos} extend by continuity to pairs $A,B$ of oriented geodesic currents.
This should be analogous to the extension of the intersection form described in Bonahon \cite{Bonahon_geodesic-currents_1988}.
Pursuing this direction, one may also wish to replace $\rho$ with a representation $\Gamma \to \operatorname{Homeo}^+(\S^1)$, a metric of negative curvature, or a generalised cross-ratio \cite{Otal_symplectique-bord-birapport_1992, LabourieMcShane_cross-ratios_2009}.
The aim would be to think of \eqref{eq:Link} \& \eqref{eq:Cos} as differential forms on the "tangent bundle" to these spaces of representations, metrics or cross-ratios, considered up to appropriate equivalence relations.
For any group $\Pi$, the semi-conjugacy classes of representations $\Pi \to \operatorname{Homeo}^+(\S^1)$ correspond \cite{Ghys_H2b(Homeo(S1);R)_1984} to the "integral points of the unit ball" in the second bounded cohomology group $H^2_b(\Pi;\R)$, namely the elements represented by bounded $2$-cocycles with values in $\{-1,0,1\}$. For $\Pi = \pi_1(S)$ it contains \cite{BargeGhys_H2b(Surface)_1988} the space of differential $2$-forms on $S$, and we suspect that something similar is true for some spaces of generalized cross-ratios, thus we ask:
\begin{Question}
How to interpret $\Link_\rho(A,B)$ or $\Cos_\rho(A,B)$ as "differential forms" on (an appropriate subspace in) the second bounded cohomology group $H^2_b(\Gamma;\R)$ ?
\end{Question}
\subsection{Arithmetic and Geometric deformations}
Let us mention another general context in which our definitions \eqref{eq:Link} \& \eqref{eq:Cos} seem to apply with almost no changes.
Recall that our definitions of the cross-ratio and cosine in paragraph \ref{subsec:disc-bir_K} hold for pairs of semi-simple elements in $\PGL_2(\Field)$.
Thus for any faithful representation of a group $\rho \colon \Gamma \to \PSL_2(\Field)$ sending $A,B\in \Gamma$ to semi-simple elements, one may define the following invariants for the pair of conjugacy classes: \[\Link_\rho(A,B)=\sum \bir(\rho\tilde{A},\rho \tilde{B})^{-1}\qquad \Cos_\rho(A,B)=\sum \cos(\rho\tilde{A},\rho \tilde{B})\]
where the sum is indexed by the double-coset space $\Stab A \backslash \Gamma / \Stab B$ with some restrictions analogous to $\asrt{\bir>1}$ and $\asrt{\across>1}$ ensuring that it has finite support, on which we shall comment later.
These define functions on (a subset in) the space of representations $\Hom(\Gamma,\PSL_2(\Field))$ considered up to $\PSL_2(\Field)$-conjugacy at the target. One may ask for interpretations of their limiting values at special points in its appropriate compactifications.
As explained in the previous paragraph, this construction works in particular for discrete subgroups of $\PSL_2(\R)$.
In general, we may want to specify that $\rho(\Gamma)$ is a discrete subgroup of $\PSL_2(\Field)$ after $\Field$ has been given a topology, or furthermore that $\rho(\Gamma)$ has finite covolume for the Haar measure on $\PSL_2(\Field)$ with respect to a measure on $\Field$.
In that case, one may consider the quotient of the symmetric space of $\PSL_2(\Field)$ by $\rho(\Gamma)$, and observe the relative position between the "cycles" corresponding to $A,B$ in that quotient.
We may now suggest some tantalising connections between arithmetic and topology.
For this, we should compare our summations \eqref{eq:Link} and \eqref{eq:Cos} with the modular cocycles introduced in \cite{Duke-Imamoglu-Toth_modular-cocycles-linking_2017} and the products appearing in \cite{Darmon-Vonk_arithmetic-intersections-modular-geodesics_2022}.
Let us note however that \cite{Duke-Imamoglu-Toth_modular-cocycles-linking_2017} considers the linking numbers $\lk(A+A^{-1},B+B^{-1})$ between cycles obtained by lifting a geodesic and its inverse: this number amounts to the geometric intersection $I(A,B)$ of the modular geodesics.
Furthermore \cite{Darmon-Vonk_arithmetic-intersections-modular-geodesics_2022} considers deformations of an arithmetic nature for these intersection numbers.
None of these address the actual linking numbers, and their approach is motivated by the arithmetic of modular forms, while ours will be inspired by the geometry of the character variety.
Thus it would be interesting on the one hand to understand the arithmetic of linking numbers in terms of the modular forms appearing in \cite{Katok_modular-forms-geodesics_1984} or the modular cocycles in \cite{Duke-Imamoglu-Toth_modular-cocycles-linking_2017}, and on the other hand to relate the $p$-arithmetic intersection numbers considered in \cite{Darmon-Vonk_arithmetic-intersections-modular-geodesics_2022} to the special values of the functions $\Link_\rho$ \& $\Cos_\rho$ defined for representations $\rho \colon \PSL_2(\Z) \to \PSL_2(\Q_p)$ as suggested above.
\subsection{Special values of Poincar\'e Series}
We may apply the general averaging procedure explained in paragraph \ref{subsec:F(A,B)} to other conjugacy invariants $f_q(A,B)$ and define new functions $F_q(A,B)$ on the character variety of $\PSL_2(\Z)$.
Their limit at the boundary point $q=\infty$ will be expressed in terms of the linking number $\lk(A,B)$ as soon as $f_q(A,B)$ converges to an expression of $\cosign(A,B)$.
Various motivations (including special values for Poincar\'e series \cite{Siegel_advanced-number-theory_1965, Dirichlet_formes-quadratiques-complexes_1842}, and McShane's identity \cite{Bowditch_McShane-Markov_1996}) suggest choosing $f_q(A,B)=(x+\sqrt{x^2-1})^{-s}$ for some variable $s\in \C$ where $x=\frac{1}{4}(\Tr(A_qB_q^{-1})-\Tr(A_qB_q))$ is the numerator of $\tfrac{1}{4}\cos(A_q,B_q)$ in the formula of Lemma \ref{Lem:cos-cosh-sinh}.
This summand $f_q(A,B)$ can also be written $e^{-si\theta}$, where $\theta$ is the angle between the oriented geometric axes of $A_q$ and $B_q$ when they intersect, and $e^{-sl}$, where $l$ is the length of the ortho-geodesic arc $\gamma$ connecting the geometric axes of $A_q$ and $B_q$ when they are disjoint.
In formula:
\begin{equation*}
F_{q}(A,B)= \sum \left(x+\sqrt{x^2-1}\right)^{-s} = \sum_{\gamma_A \perp \gamma \perp \gamma_B} \exp(-sl_\gamma) - \sum_{p\in \gamma_A\cap \gamma_B} \exp(-si\theta_p).
\end{equation*}
So the sum over all double cosets splits as a finite sum computable as explained in \ref{subsec:F(A,B)}, and an infinite series which converges for $\Re(s)>1$ (the topological entropy for the action of $\PSL_2(\Z)$ on the hyperbolic plane).
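The two behaviours of the summand can be checked numerically; a sketch assuming the branch $\sqrt{x^2-1}=i\sin\theta$ for $x=\cos\theta\in(-1,1)$, which is what the principal branch of the complex square root gives:

```python
import cmath
import math

def summand(x, s):
    """(x + sqrt(x^2 - 1))**(-s); the principal complex square root
    returns i*sin(theta) when x = cos(theta) lies in (-1, 1)."""
    return (x + cmath.sqrt(x * x - 1)) ** (-s)

s = 2.5

# hyperbolic case: x = cosh(l) > 1 gives exp(-s*l)
l = 1.7
assert cmath.isclose(summand(math.cosh(l), s), cmath.exp(-s * l))

# elliptic case: x = cos(theta) in (-1, 1) gives exp(-s*i*theta)
theta = 0.8
assert cmath.isclose(summand(math.cos(theta), s), cmath.exp(-s * 1j * theta))
```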
The infinite sum is a bivariate analog (in $(A,B)$) of the univariate Poincar\'e "theta-series" which appeared in the works of Eisenstein: those admit meromorphic continuation to $s\in \C$ and their special values in the variable $s$ have been of interest in arithmetic and dynamics. Similar Poincar\'e series associated to one modular geodesic are also defined in \cite{Katok_modular-forms-geodesics_1984}.
The earliest appearance we found for bivariate series is in \cite[Section 50]{Ford_automorphic-functions_1923}, and the only other in \cite{Paulin_series-poincare_2013}.
When $q=\infty$ and $s=1$, the real part of the finite sum evaluates to $2\lk(A,B)-I(A,B)$, but one may wonder about the infinite series (note that the order in which we take the limits in $s$ and $q$ may matter).
More generally, one strategy to relate modular topology and quadratic arithmetic is to choose $f$ with appropriate symmetries and analyticity properties so that the sum over all double cosets can be understood: then one deduces a relationship between a topologically meaningful finite sum, and the infinite series whose special values may be of interest in arithmetic. The dilogarithm of the cross-ratio also looks like a good candidate \cite{Bridgeman_orthospectra-laminations-dilog-identities_2011}...
\newpage
\bibliographystyle{alpha}
\section{Introduction}
Metals are important constituents of the interstellar medium. Abundant
metals such as calcium and iron are among the most heavily depleted
elements in dense clouds, and therefore form a significant component of
interstellar dust. The process of dust formation is not well
understood (e.g., Draine 2009), but much of the raw material is
supplied by mass loss from evolved stars, especially the asymptotic
giant branch (AGB) stars.
In the winds of oxygen-rich AGB stars, the dust is primarily in the
form of metal silicates, so that the metals play an important role in the
formation of dust and the mass loss process. In carbon-rich AGB stars,
the situation is less clear. The dust is believed to consist mainly of
graphite or amorphous carbon, and silicon carbide. The extent to which
metals contribute to dust formation, and the form in which they are
returned to the interstellar medium is essentially unknown.
In this paper, we report the first comprehensive search for gas phase
atomic metals in a carbon-rich circumstellar envelope. IRC+10216 (CW
Leo) is the nearest carbon star with a thick circumstellar envelope,
and serves as an archetype for the study of mass loss on the AGB. The
star is relatively faint, $\sim$16.0~mag.\ in the $R$-band and much fainter at
shorter wavelengths because of obscuration by the envelope, but at
longer wavelengths the circumstellar dust and gas are seen in
emission, and are brighter than for any similar object. IRC+10216 has
therefore been intensively observed, with more than 50 molecular
species detected in the envelope, including several metal bearing
species (e.g., Olofsson 2005; Ziurys 2006a).
The technique that we use here to search for atomic metals in the
envelope of IRC+10216 is optical absorption spectroscopy, using a
background source of illumination. The star itself is too faint and
its spectrum too complex to serve as a useful source for detailed
study, although circumstellar C$_2$ and CN have been observed in this
way (Bakker et al.\ \cite{bakker97}). There are, however, other stars
in the field. IRC+10216 is nearby, at a distance of $\sim 120$~pc
(e.g., Ramstedt et al.\ 2008), so the circumstellar envelope extends a
considerable angle on the sky. It is detected out to a distance of
$\sim 3\arcmin$ from the center in millimeter CO emission (Huggins et
al.\ 1988), and 9\arcmin\ in infrared dust emission observed with IRAS
(Young et al.\ 1993). Although IRC+10216 is at a relatively high
galactic latitude ($l = 221\degr$, $b = +47\degr$), there are several
stars in this region of the sky that are candidates for background
sources, as seen in the wide field image in Fig.~1 of Mauron \&
Huggins (1999).
One of these stars, Star~6 in the UBV photometric sequence of the
field by Mauron et al.\ (2003), is well suited for absorption line
studies. It is located behind the envelope at an angular offset of
35\arcsec\ from the center, and is bright enough for high resolution
spectroscopy. This star has been observed with the UVES spectrograph
at the VLT by Kendall et al.\ (2002). Their main objective was to
search for diffuse bands that might originate in the circumstellar
gas. No diffuse bands were found, but these authors noted deep
absorption lines of \ion{Na}{i}\ and \ion{K}{i}, which they attributed to
circumstellar gas.
Here we report a comprehensive search for metal lines in the
circumstellar envelope of IRC+10216 along this line of sight. Our
objectives are to measure the degree of metal depletion
in this carbon rich-environment, and to determine the distribution
of solid and gas phase metals returned by the star to the interstellar
medium. Sect.~2 describes the observational material. Sect.~3
presents the absorption lines, and Sect.~4 the derived column densities
and abundances. The results are discussed in terms of dust formation in
Sect.~5, and metal chemistry in Sect.~6. Our main conclusions are
given in Sect.~7.
\section{Observations}
The observations of the envelope of IRC+10216 were made using
absorption line spectroscopy with Star~6 (USNO\,0975-0633-6975) as the
background source of illumination. Star~6 lies 35\arcsec\ from
IRC+10216 at position angle 165\degr. Its visual magnitude is
$V$=16.0, and it is the nearest star to the center of the envelope
suitable for high resolution spectroscopy. The field is shown in
Fig.~1, where the envelope is seen in dust-scattered Galactic light.
From its magnitude and spectral type (type G), Star~6 is at a
distance of $\sim 1400$~pc, well beyond IRC+10216, as discussed by
Kendall et al.\ (\cite{kendall02}).
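For orientation, the projected separation of this line of sight from the star follows from the numbers above. A back-of-the-envelope sketch (using the standard small-angle rule that 1\arcsec\ at 1~pc subtends 1~AU):

```python
D_PC = 120.0           # adopted distance to IRC+10216 (pc)
OFFSET_ARCSEC = 35.0   # angular offset of Star 6 from the center

# small-angle approximation: 1 arcsec at 1 pc subtends 1 AU
b_au = OFFSET_ARCSEC * D_PC   # impact parameter in AU
b_cm = b_au * 1.496e13        # 1 AU = 1.496e13 cm

print(f"impact parameter: {b_au:.0f} AU = {b_cm:.1e} cm")
```

So the line of sight to Star~6 crosses the envelope at a projected radius of roughly 4200~AU.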
The observations were made with the UVES spectrograph at the VLT, and
were previously used by Kendall et al.\ (2002) to search for diffuse
bands. The data were obtained over seven nights in December 2000 and
January 2001, with an effective exposure of 4~hr at each wavelength.
The sky transparency was moderate to good, and the seeing was 0\farcs8.
The data consist of 4 spectra, each covering the complete range from
3\,000 to 10\,000~\AA\ with a resolving power of 50\,000 ($\Delta v =
6$~km\,s$^{-1}$). These spectra have been reduced with the ESO UVES pipeline
and have been summed after correction to the heliocentric reference
frame. All radial velocities given in this paper are heliocentric,
except where specified otherwise. The signal-to-noise ratio of the
final spectrum at 4200~\AA\ is $\sim 60$ per resolution element.
The heliocentric velocity of Star~6 measured from numerous
photospheric absorption lines is $+52.4\pm0.6$~km\,s$^{-1}$. For comparison,
the systemic heliocentric radial velocity of IRC+10216 is
$-$19.3~km\,s$^{-1}$, which is derived from millimeter observations of the
envelope ($v_{\rm LSR} = -26$~km\,s$^{-1}$, Loup et al.\ 1993). Thus a given
line in the envelope of IRC+10216 is well separated in velocity from
the same line formed in the photosphere of Star~6. In addition, the
expansion velocity of the envelope is 14.1~km\,s$^{-1}$\ with a small
turbulent component (Huggins \& Healy 1986), so the absorption lines
in the envelope are expected to be wide ($\sim 30$~km\,s$^{-1}$). In
contrast, the weak stellar lines are relatively narrow, with widths
comparable to the instrumental profile, and the strong stellar lines have
characteristic damping profiles with broad wings.
\begin{figure}[!ht]
\resizebox{8cm}{!}{\rotatebox{-00}{\includegraphics{100f1.ps}}}
\caption[]{$V$-band image of the circumstellar envelope of IRC+10216,
made with the VLT. The field size is $90\arcsec
\times90\arcsec$. The mass-losing carbon star is located at the
center of the image, and Star~6 is the bright source near the
bottom, 35\arcsec\ from the center. North is to the top, East to the
left.}
\label{fig01}
\end{figure}
\begin{table}[!ht]
\caption{Metal lines in the envelope of IRC+10216 }
\label{table:1}
\centering
\begin{tabular}{lcccc}
\noalign{\smallskip}
\hline
\hline
\noalign{\smallskip}
Species & $\lambda_\circ$ & $f$-value & $W_{\lambda}^{\mathrm a}$ & $N$ \\
& (\AA) & & (m\AA) & (cm$^{-2}$) \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\ion{Na}{i} & 5895.924 & 0.3180 & 625 & 4.6(14)$^{\mathrm b}$ \\
& 5889.951 & 0.6311 & 625 & \\
& 3302.368 & 0.0090 & 230 & \\
\noalign{\smallskip}
\ion{Al}{i} & 3944.006 & 0.1134 & $<$44 & $<$2.8(12) \\
\noalign{\smallskip}
\ion{K}{i} & 7698.974 & 0.3393 & 530 & 4.6(12)$^{\mathrm c}$ \\
& 7664.911 & 0.6816 & 635 & \\
& 4044.143 & 0.0061 & $<$29 & \\
\noalign{\smallskip}
\ion{Ca}{i} & 4226.728 & 1.7530 & 300 & 1.9(12) \\
\noalign{\smallskip}
\ion{Ca}{ii} & 3968.468 & 0.3145 & 255 & 7.0(12) \\
& 3933.663 & 0.6346 & 390 & \\
\noalign{\smallskip}
\ion{Ti}{i} & 3635.462 & 0.2229 & $<$49 & $<$1.9(12) \\
\noalign{\smallskip}
\ion{Ti}{ii} & 3383.768 & 0.3401 & $<$69 & $<$2.0(12) \\
\noalign{\smallskip}
\ion{Cr}{i} & 4289.716 & 0.0622 & 37: & 1.4(12)$^{\mathrm d}$ \\
& 4274.796 & 0.0839 & 49: & \\
& 4254.332 & 0.1099 & 38: & \\
& 3605.322 & 0.2248 & 27: & \\
& 3593.482 & 0.2897 & 62: & \\
& 3578.683 & 0.3663 & $<$62 & \\
\noalign{\smallskip}
\ion{Mn}{i} & 4034.483 & 0.0257 & $<$33 & $<$3.1(12) \\
& 4033.062 & 0.0402 & $<$33 & \\
& 4030.753 & 0.0565 & $<$33 & \\
\noalign{\smallskip}
\ion{Fe}{i} & 3859.911 & 0.0217 & 225 & 8.8(13)$^{\mathrm e}$ \\
& 3824.444 & 0.0048 & 80 & \\
& 3719.934 & 0.0412 & 197 & \\
& 3440.606 & 0.0236 & 170 & \\
\noalign{\smallskip}
\ion{Sr}{ii} & 4077.709 & 0.7010 & $<$28 & $<$2.7(11) \\
\noalign{\smallskip}
\hline
\end{tabular}
\begin{list}{}{}
\item[$^{\mathrm{a}}$] Upper limits are 3$\sigma$.
\item[$^{\mathrm{b}}$] D-lines are saturated, $N$ based on the 3302~\AA\ line.
\item[$^{\mathrm{c}}$] Based on the 7699, 7665~\AA\ doublet.
\item[$^{\mathrm{d}}$] : indicates 2--3$\sigma$ features; $N$
based on weighted mean.
\item[$^{\mathrm{e}}$] Based on the 3860, 3720~\AA\ lines which have
better S/N.
\end{list}
\end{table}
Table~1 lists the metal lines that we searched for in the envelope of
IRC+10216. Column~2 gives the laboratory wavelength in air
($\lambda_\circ$) for each transition, and column~3 gives the
oscillator strength ($f$) from Morton (1991, 2000). The line list includes
the strongest ground-state transitions for the species that are
potentially observable in the wavelength range covered. These lines
are seen in interstellar clouds and/or in circumstellar environments
with similar physical conditions.
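In the optically thin limit, a column density can be read off an equivalent width through the standard linear curve-of-growth relation $N = 1.13\times10^{20}\, W_\lambda / (f \lambda^2)$ (with $W_\lambda$ and $\lambda$ in \AA). A sketch applying it to the \ion{Ca}{i}\ line of Table~1; note that several lines in the table are saturated, so the tabulated $N$ values need not coincide with this linear estimate:

```python
def thin_column_density(w_mA, f, lam_A):
    """Optically thin column density (cm^-2) from the equivalent width
    W (milli-Angstrom), oscillator strength f and wavelength (Angstrom)."""
    return 1.13e20 * (w_mA * 1e-3) / (f * lam_A ** 2)

# Ca I 4226.73 A: W = 300 mA, f = 1.753 -> ~1.1e12 cm^-2, the same
# order as the tabulated 1.9e12 (which allows for saturation)
n_ca = thin_column_density(300.0, 1.753, 4226.728)
print(f"N(Ca I) ~ {n_ca:.2e} cm^-2")
```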
At long wavelengths, the spectrum of the background source, Star~6, is
relatively free of photospheric lines, so an absorption line arising
in the intervening circumstellar envelope is straightforward to
identify. At shorter wavelengths, the spectrum is more crowded with
photospheric lines, and absorption by the circumstellar envelope is
often more difficult to identify. In these cases we use a template
technique to extract the envelope signal. We first make a least
squares fit to the spectrum of Star~6 using a template spectrum covering
a region of $\sim 15$~\AA\ around (but excluding) the expected
envelope line. We then use this as the effective continuum to
search for residual absorption from the envelope.
One form of template that we tried was based on the library of
synthetic spectra from Coelho et al.\ (2005), but these did not produce a
good match to the photospheric spectrum. A second approach that was
successful, was using a scaled solar spectrum. By chance the spectrum
of Star~6 is very similar to that of the Sun, and a template based on
scaling the solar spectrum gives a close match to the stellar
spectrum. The solar spectrum that we use here was published by
Delbouille et al.~(1973) and covers the wavelength region
3\,000--10\,000~\AA\ with a step size of about 0.0125 \AA. In order to
fit the stellar spectrum $F_*(\lambda)$, we shift the solar spectrum
$F_{\odot}(\lambda)$ to match the
radial velocity, re-bin it to match the resolution, and scale the flux
according to:
\[ F_*(\lambda) = \alpha F_{\odot}(\lambda) + \beta\,, \]
where $\alpha$ and $\beta$ are constants that are determined from the
least-squares fit to the local region of spectrum under consideration.
The use of this template as the effective continuum reveals spectral
features of the circumstellar envelope that are not otherwise easily
observed. Examples of the application of this technique are given in
Sects. 3.3 and 3.4.
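The fit for $\alpha$ and $\beta$ is an ordinary linear least-squares problem. A minimal numpy sketch with synthetic arrays (the function name and the synthetic data are ours; the actual reduction also involves the velocity shift and re-binning described above):

```python
import numpy as np

def fit_template(f_star, f_sun):
    """Solve f_star ~ alpha * f_sun + beta in the least-squares sense."""
    A = np.column_stack([f_sun, np.ones_like(f_sun)])
    (alpha, beta), *_ = np.linalg.lstsq(A, f_star, rcond=None)
    return alpha, beta

# synthetic check: a scaled, offset "solar" spectrum is recovered exactly
f_sun = np.linspace(0.4, 1.0, 200)
f_star = 0.8 * f_sun + 0.1
alpha, beta = fit_template(f_star, f_sun)

# normalizing to the fitted template exposes any residual absorption
template = alpha * f_sun + beta
residual = f_star / template
```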
\begin{figure} [!t]
\resizebox{8.2cm}{!}{\includegraphics{100f2.eps}}
\caption[]{Spectra of the \ion{K}{i}\ doublet at 7664.91~\AA\ (\emph{upper
panel}) and 7698.97~\AA\ (\emph{lower panel}). In each panel, the
arrow on the right marks the photospheric line ($V_{\mathrm r} =
+52.4$~km\,s$^{-1}$), and the arrow in the center marks the envelope
absorption of IRC+10216 ($V_{\mathrm r} = -19.3$~km\,s$^{-1}$). Other
photospheric lines are marked $+$, and telluric lines are marked
$\circ$.
}
\label{fig02}
\end{figure}
\section{Envelope absorption lines}
\subsection{\ion{K}{i}}
Fig.~2 shows the observed spectral regions covering the \ion{K}{i}\
doublet. The 7665~\AA\ line is in the upper panel and the 7699~\AA\
line is in the lower panel. Strong photospheric \ion{K}{i}\ lines of Star~6
are present at the stellar radial velocity of $V_{\mathrm r} =
+52.4$~km\,s$^{-1}$, which is marked with an arrow at the right of each panel.
Also present are two weaker photospheric lines (marked $+$) and four
telluric lines (marked $\circ$).
The dominant feature in the middle of each spectral region is strong
\ion{K}{i}\ absorption from the envelope of IRC+10216. This can be
unambiguously ascribed to the circumstellar envelope. It is centered
near the systemic radial velocity of IRC+10216 at $-19.3$~km\,s$^{-1}$\
(marked with an arrow in the figure) and has the broad width expected
from the expanding envelope. With the low interstellar absorption in
this region of the sky ($E_{B-V} \la 0.03$), any interstellar \ion{K}{i}\
would contribute $\la$ a few percent of the strong lines observed,
based on the survey of interstellar \ion{K}{i}\ by Chaffee \& White (1982).
We also searched for the much weaker ground state \ion{K}{i}\ line at
4044~\AA\ which has an $f$-value $\sim110$ times less than the
7664~\AA\ line. The 4044~\AA\ line was not detected, and the upper
limit is given in Table~1.
Although the \ion{K}{i}\ doublet lines formed in the envelope are broad, they
are clearly composed of several components. The presence of the
components is important because it affects the saturation of the lines
(see Sect.~4). Assuming gaussian broadening, we found that a best fit
synthesis taking into account the instrumental profile requires four
components to match the line shapes. A preliminary fit with
unconstrained parameters gave consistent estimates for the radial
velocities of the individual components in each line. We then fixed
the velocities at their mean values, and fit the profiles by varying
the relative column densities of the components and their line
broadening $b$-values, which we constrained to be the same for both
lines. The final mean parameters are given in Table 2, where the
strength is the column density of each component, relative to the
strongest component. The uncertainty in the velocities is $\la
0.5$~km\,s$^{-1}$\ and the relative strengths found from the two lines are the
same to within $\la 5$\%. Thus the results from the two lines are in
good agreement, as expected from their similar line profiles.
To illustrate the quality of the multi-component fit, Fig.~3 shows the
results of a synthesis of the 7699~\AA\ line using the parameters of
Table~2. It can be seen that the model provides an excellent fit to the
observational data.
We interpret the multiple component profiles of the \ion{K}{i}\ lines as the
effects of the multiple shell structure in the envelope (Mauron \&
Huggins 1999, 2000; see also Fig.~1), where the line of sight passes
through regions of enhanced density. Although the gaussian line shapes
give a good fit to the spectra, we do not know the detailed velocity
distribution of the shells along the line of sight, so we cannot
reliably determine the gas density contrast between the shell and
inter-shell regions. However, the stronger components are clearly
separate, and suggest that the contrast is at least a factor of a
few. This is consistent with a large shell/inter-shell contrast in the
dust density derived from images in dust-scattered light by Mauron \&
Huggins (2000).
\begin{figure} [!t]
\resizebox{8.2cm}{!}{\includegraphics{100f3.eps}}
\caption[]{Comparison of the observed (black) and synthesized (grey)
profiles of the \ion{K}{i}\ 7698.97~\AA\ line, using the
parameters given in Table~2. The individual components are also
shown. }
\label{fig03}
\end{figure}
\begin{table}[bht]
\caption{Parameters of the \ion{K}{i}\ components in IRC+10216}
\label{table:2}
\centering
\begin{tabular}{cccc}
\noalign{\smallskip}
\hline
\hline
\noalign{\smallskip}
Component & Strength & $b$ & $V_{\mathrm r}$ \\
& (rel.) & (km\,s$^{-1}$) & (km\,s$^{-1}$) \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
1 & 0.27 & 2.8 & $-$32.4 \\
2 & 0.32 & 2.9 & $-$28.0 \\
3 & 1.00 & 5.5 & $-$21.0 \\
4 & 0.51 & 3.9 & $-$09.2 \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{table}
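The synthesis behind Fig.~3 can be sketched from the parameters of Table~2: each component contributes a Gaussian in optical depth, the profile is $e^{-\tau}$, and the result is smoothed by the instrumental profile. In this sketch the absolute optical-depth scale (TAU0) and the velocity grid are our own assumptions:

```python
import numpy as np

# Table 2: relative strength, b (km/s), heliocentric velocity (km/s)
COMPONENTS = [(0.27, 2.8, -32.4), (0.32, 2.9, -28.0),
              (1.00, 5.5, -21.0), (0.51, 3.9, -9.2)]
TAU0 = 1.0         # optical depth of the strongest component (assumed)
FWHM_INSTR = 6.0   # instrumental resolution (km/s)

v = np.arange(-60.0, 20.5, 0.5)
tau = sum(s * TAU0 * np.exp(-((v - vr) / b) ** 2) for s, b, vr in COMPONENTS)
profile = np.exp(-tau)

# smooth with a Gaussian instrumental profile
sigma = FWHM_INSTR / 2.355               # FWHM -> Gaussian sigma
kern_v = np.arange(-10.0, 10.5, 0.5)     # roughly +/- 4 sigma
kern = np.exp(-0.5 * (kern_v / sigma) ** 2)
kern /= kern.sum()
observed = 1.0 - np.convolve(1.0 - profile, kern, mode="same")
```

The deepest absorption of the synthetic profile falls near the strongest component at $-21$~km\,s$^{-1}$, as in Fig.~3.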
\begin{figure} [!t]
\resizebox{8.2cm}{!}{\includegraphics{100f4.eps}}
\caption[]{ Spectra of the \ion{Na}{i}\ doublet at 5889.95~\AA\ (\emph{upper
panel}) and 5895.92~\AA\ (\emph{lower panel}). Arrows mark the
photospheric and envelope components, as in Fig.~2. The triangles mark
an interstellar \ion{Na}{i}\ component (at $V_{\mathrm r} \sim +9.5$~km\,s$^{-1}$).
}
\label{fig04}
\end{figure}
\subsection{\ion{Na}{i}}
Fig.~4 shows the spectral regions covering the \ion{Na}{i}\ D lines. The
5890~\AA\ line is in the upper panel and the 5896~\AA\ line is in the
lower panel. The photospheric \ion{Na}{i}\ lines of Star~6 (marked with
arrows at the right in each panel) are strong, and there is another
weaker photospheric line in the lower panel. The dominant feature in
the middle of each spectrum is \ion{Na}{i}\ absorption from the envelope of
IRC+10216, centered near the systemic velocity. It can be seen that
the circumstellar lines are extremely strong. All the components are
highly saturated and the residual intensities are close to zero across
both lines.
There is an additional, weak \ion{Na}{i}\ component that appears in both
spectra near $+$9.5~km\,s$^{-1}$\ (marked with triangles) which we identify as an
interstellar component along this line of sight. Similar weak \ion{Na}{i}\
absorption near this velocity is reported by Kendall et al.\ (2002)
along lines of sight to two other stars in this region of the sky, at
angular distances of 153\arcsec\ and 2.5\degr\ from IRC+10216.
We also searched for the much weaker ground state \ion{Na}{i}\ line at
3302~\AA, which has an $f$-value $\sim 70$ times less than the
5890~\AA\ D line. The 3302~\AA\ line lies in a crowded, low
signal-to-noise part of the spectrum, but the template approach
(Sect.~2) reveals a $\sim$5$\sigma$ detection, and the equivalent width
is given in Table~1. The signal-to-noise ratio is too low to show any
details of the profile.
\begin{figure} [!t]
\resizebox{8.2cm}{!}{\includegraphics{100f5.eps}}
\caption[]{ Spectrum of the \ion{Ca}{i}\ line at 4226.73~\AA. \emph{Upper
panel:} The observed spectrum (full line) and fitted template (dotted
line). Arrows indicate the photospheric and envelope components as in
Fig.~2. \emph{Lower panel:} Spectrum normalized to the template. The
dotted line replaces the spectrum over the photospheric core region.
}
\label{fig05}
\end{figure}
\begin{figure} [!t]
\resizebox{8.2cm}{!}{\includegraphics{100f6.eps}}
\caption[]{Spectra of the \ion{Ca}{ii}\ H and K doublet at 3933.66~\AA\ (\emph{upper
panel}) and 3968.46~\AA\ (\emph{lower panel}). Arrows mark the
photospheric and envelope components, as in Fig.~2. The solid lines
show the observed spectra, the dotted lines show the template spectra.
}
\label{fig06}
\end{figure}
\subsection{\ion{Ca}{i}\ and \ion{Ca}{ii}}
Fig.~5 shows the spectral region around the \ion{Ca}{i}\ line at 4226~\AA.
The line falls in a very crowded region of the spectrum and provides a
good illustration of the template method.
The solid line in the upper panel shows the observed spectrum, and the
envelope absorption is not immediately apparent. The dotted line in
the upper panel shows the stellar template, scaled to match the
observed spectrum. It can be seen that the template gives a good overall fit to
the data, and reveals the excess absorption from the envelope. The
lower panel of Fig.~5 shows the envelope line, using the fitted
template as the effective continuum. The small wavelength range around
the core of the photospheric \ion{Ca}{i}\ line has been masked (with the
horizontal dotted line) because the effective continuum is poorly
defined at the low intensity of the line core. The main deviation from
a flat continuum in the normalized spectrum is caused by slight
differences between the \ion{Ca}{i}\ line profiles of Star~6 and the Sun,
which probably arise from slightly different surface gravities.
It can be seen that the template fitting technique recovers the \ion{Ca}{i}\
absorption in the envelope very effectively. The line shows a profile
with narrow components similar to those seen in the \ion{K}{i}\ lines.
We also detected \ion{Ca}{ii}\ absorption in the H and K lines shown in
Fig.~6. The 3933~\AA\ line is in the upper panel, and the 3968~\AA\
line is in the lower panel. For these spectral regions the crowding by
photospheric lines is not severe, and the envelope absorption can be
seen in the direct spectra. The adjacent \ion{Ca}{ii}\ photospheric lines are
very strong, and their blue wings form the local (tilted) continuum
for the circumstellar absorption. In the photospheric line core, the
template is not a good fit on account of differences in the core
reversals between Star 6 and the Sun.
The \ion{Ca}{ii}\ line profiles show approximately the same profiles as the
\ion{K}{i}\ lines but are somewhat broader, by $\sim$ 5--10~km\,s$^{-1}$ (FWHM),
suggesting additional absorption. This could be an interstellar
contribution, or additional circumstellar absorption. Compared to the
neutral lines, the ionized \ion{Ca}{ii}\ lines sample the line of sight
through the envelope to larger distances from the central star, and
the kinematics of these outer regions have never previously been
observed.
\begin{figure} [!t]
\resizebox{8.2cm}{!}{\includegraphics{100f7.eps}}
\caption[]{ Spectrum of the \ion{Fe}{i}\ line at 3719.93~\AA. Details as in Fig.~5.
}
\label{fig07}
\end{figure}
\begin{figure} [!t]
\resizebox{8.2cm}{!}{\includegraphics{100f8.eps}}
\caption[]{ Spectrum of the \ion{Fe}{i}\ line at 3859.91~\AA. Details as in Fig.~5.
}
\label{fig08}
\end{figure}
\subsection{\ion{Fe}{i}}
Figs.~7 and 8 show the spectral regions around the \ion{Fe}{i}\ lines at
3720~\AA\ and 3860~\AA, respectively. These fall in relatively
crowded regions of the photospheric spectrum, and the template is
needed to determine the envelope absorption. The upper panel in each
figure shows the observed spectrum (solid line) and the fitted stellar
template (dotted line). The lower panel shows the normalized spectrum
with the envelope absorption centered near the systemic velocity of
IRC+10216. The signal-to-noise ratio of these spectra is lower than
those discussed above, and the template fit is affected by differences
in the solar and stellar spectra. The limited quality of the fit is
probably responsible for the fact that the equivalent width of the
3720~\AA\ line is slightly less than that of the 3860~\AA\ line, even
though the $f$-value is larger (see Table~1). Nevertheless, the lines
are well detected, and the multi-component character of the envelope
absorption is similar to that seen in the lines of the other species.
We also searched for two additional \ion{Fe}{i}\ lines at 3824~\AA\ and
3441~\AA. The 3824~\AA\ line is weaker on account of a significantly
lower $f$-value, and is marginally detected at $\sim$4$\sigma$. The
3441~\AA\ line lies at shorter wavelengths where the spectrum is
poorer. It is detected with a strength comparable to the \ion{Fe}{i}\ lines
shown in Figs.~7 and 8, but with a lower signal-to-noise ratio.
\subsection{Other lines}
Searches were also made for lines of several other species, including
\ion{Al}{i}, \ion{Mn}{i}, \ion{Cr}{i}, \ion{Ti}{i}\ and \ion{Ti}{ii}, and \ion{Sr}{ii}, as listed in
Table~1. With the exception of \ion{Cr}{i}, no significant envelope
absorption was detected. For \ion{Cr}{i}, weak ($\sim$2--3$\sigma$) features
are seen at the wavelengths of most of the accessible lines. When
appropriately weighted by the oscillator strengths and noise levels,
they provide an overall 4$\sigma$ detection of this species.
\section{Abundances in the envelope}
\subsection{Column densities }
The equivalent widths ($W_\lambda$) of the absorption lines observed
in the envelope are listed in column~4 of Table~1. For the
non-detected lines, we list 3$\sigma$ limits given by:
\[ W_\lambda < 3 \, \sigma \Delta \lambda / \sqrt{n}\, , \]
where $\sigma$ is the noise level and $n$ is the number of resolution
elements in the line width $\Delta \lambda$.
The column density ($N$) derived for each species is given in
column~5 of Table~1. It is based on the equivalent widths and the
$f$-values given in the table. For the weak lines, the column density
is directly related to the equivalent width by the optically thin
formula:
\[ N = 1.13\times10^{20}\, W_\lambda / (f \lambda^2), \]
where $W_\lambda$ and $\lambda$ are in \AA\ units.
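Both relations above are simple enough to encode directly; the sketch below does so (the example line parameters are illustrative, not values taken from Table~1):

```python
import math

def w_limit(sigma, dlam, n):
    """3-sigma equivalent-width limit [Angstrom] for noise level sigma
    over n resolution elements spanning a line width dlam [Angstrom]."""
    return 3.0 * sigma * dlam / math.sqrt(n)

def n_thin(w_lambda, f, lam):
    """Optically thin column density [cm^-2] from an equivalent width
    w_lambda [Angstrom], oscillator strength f, and wavelength lam [Angstrom]."""
    return 1.13e20 * w_lambda / (f * lam ** 2)

# Illustrative only: a 10 mA feature near 7699 A with f ~ 0.34
N = n_thin(0.010, 0.34, 7698.97)   # a few 10^10 cm^-2
```

For strong lines these formulas underestimate $N$; saturation corrections from the multi-component profile fit are then required, as described below.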
For the stronger lines, where the optical depths are larger, we
determine the relation between $W_{\lambda}$ and $N$ using the
multi-component model used to fit the high signal-to-noise profiles of
the \ion{K}{i}\ lines (Table~2). With this model we find that the \ion{K}{i}\ lines
are moderately saturated; the column densities derived from the 7699
and 7665~\AA\ lines are larger than the optically thin values by
factors of 1.7 and 2.3, respectively. The column densities from the
two lines agree to within 20\%, and are consistent with a limit of
$N({\ion{K}{i}}) \la 3\times10^{13}$~cm$^{-2}$ determined from the upper
limit to the optically thin 4044~\AA\ \ion{K}{i}\ line. For \ion{Na}{i}, the D-lines
are extremely optically thick, and therefore insensitive to the column
density, although the very low residual intensity across the lines
yields a lower limit of $N(\ion{Na}{i}) \ga
$1--$2\times10^{13}$~cm$^{-2}$. Fortunately, we also detect the
optically thinner \ion{Na}{i}\ line at 3302~\AA, and we use this with
the multi-component saturation curve to determine the column density
of \ion{Na}{i}\ given in Table~1. The result is consistent with the lower
limit from the D-lines.
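The saturation corrections can be illustrated with a curve-of-growth sketch over the Table~2 component model. The central optical depth `tau_peak` is left free here (the actual fit ties it to the line's $f$-value and column density), so the numbers are indicative only:

```python
import math

# (relative strength, b [km/s], V_r [km/s]) for the four Table 2 components
COMPONENTS = [(0.27, 2.8, -32.4), (0.32, 2.9, -28.0),
              (1.00, 5.5, -21.0), (0.51, 3.9, -9.2)]

def saturation_factor(tau_peak):
    """Ratio of the true column density to the optically thin estimate
    for a profile of Gaussian-opacity components; tau_peak is the central
    optical depth of the strongest component."""
    dv = 0.1                                   # velocity grid step [km/s]
    w_true = w_thin = 0.0
    for i in range(-600, 200):                 # grid from -60 to +20 km/s
        v = i * dv
        tau = sum(tau_peak * s * math.exp(-((v - vr) / b) ** 2)
                  for s, b, vr in COMPONENTS)
        w_true += (1.0 - math.exp(-tau)) * dv  # saturated equivalent width
        w_thin += tau * dv                     # optically thin limit
    return w_thin / w_true
```

For small `tau_peak` the factor tends to 1, and for central depths of a few it reaches the 1.7--2.3 range quoted above for the \ion{K}{i}\ doublet.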
The profiles of the ionized lines might be expected to differ from
those of the neutral lines because the absorption can occur at
different locations along the line of sight. This does not affect the
column density estimates for \ion{Sr}{ii}\ and \ion{Ti}{ii}\ because the lines are
very weak (below the detection level) and are therefore optically
thin. For the stronger \ion{Ca}{ii}\ lines, the profiles are found to be
similar to the \ion{K}{i}\ profiles, but as noted in Sect.~3.3, they are
slightly broader with an additional contribution to the absorption. We
therefore model the \ion{Ca}{ii}\ lines as the sum of two contributions; the
fit of the \ion{K}{i}\ multi-component model to the main part of the profile,
and an additional, optically thin component to give the total
equivalent width. The additional component contributes 32\% and 17\%
of the equivalent widths (22\% and 13\% of the derived column
densities) for the 3933 and 3968~\AA\ lines, respectively. We assume
that the additional absorption is circumstellar, but since it is
relatively small, the results do not depend sensitively on this
assumption.
\subsection{Column density of hydrogen}
Although there are no direct observations of hydrogen along the line
of sight, we can use estimates of the mass loss rate of the envelope
to determine the hydrogen column density. Since the early work by Kwan
\& Hill (1977) there have been numerous estimates of the mass loss
rate based on millimeter CO observations, and the results are fairly
consistent when the different distances, CO abundances, dust-gas
heating rates, and He content are taken into account. For a distance
of 120~pc, we adopt a mass loss rate $\dot{M}_{\mathrm H}$ (in
hydrogen) of $1.25 \times 10^{-5}$~$M_{\odot}$~yr$^{-1}$
(corresponding to a total mass loss rate of $1.75 \times
10^{-5}$~$M_{\odot}$~yr$^{-1}$), based on the analysis of Sch\"{o}ier
\& Olofsson (2001) approximately corrected for the effects of He.
This value is consistent with other recent estimates.
For $\dot{M}_{\mathrm H} =1.25 \times 10^{-5}$~$M_{\odot}$~yr$^{-1}$,
the column density of hydrogen $N$(H), where $N$(H) = $N$(\ion{H}{i}) +
$2N$(H$_2$), is $1.3 \times 10^{21}$~cm$^{-2}$ along the line of sight
35\arcsec\ from the center. The uncertainty is a factor $\sim 2$,
which results from uncertainties in $\dot{M}$. In addition to a smooth
decrease of the column density with distance from the center, there
are other variations caused by the multiple shell structure in the
envelope. From an analysis of scattered light images (e.g., Fig.~1) we
find typical variations in the dust column density of $\pm20\%$, and
somewhat smaller variations near the region of Star 6. These
variations are much smaller than the density contrast of the shells
because the column density averages the density along the line of sight
through several shell and inter-shell regions. The envelope structure
is therefore not a major source of uncertainty for $N$(H).
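As a consistency check, for a smooth $1/r^2$ wind the hydrogen column along a line of sight with impact parameter $p$ is $N(\mathrm H) = \dot{M}_{\mathrm H}/(4 m_{\mathrm H} v p)$. A minimal numerical sketch, assuming an expansion velocity of 14.5~km\,s$^{-1}$ (an assumed value, not stated above):

```python
MSUN = 1.989e33          # solar mass [g]
YR = 3.156e7             # year [s]
M_H = 1.673e-24          # hydrogen mass [g]
AU = 1.496e13            # astronomical unit [cm]

mdot_h = 1.25e-5 * MSUN / YR     # adopted mass loss rate in hydrogen [g/s]
v_exp = 14.5e5                   # assumed expansion velocity [cm/s]
p = 35.0 * 120.0 * AU            # impact parameter: 35 arcsec at 120 pc [cm]

# For a spherically symmetric wind, n(r) = mdot_h / (4 pi r^2 m_H v);
# integrating along the line of sight gives N = mdot_h / (4 m_H v p)
n_h_col = mdot_h / (4.0 * M_H * v_exp * p)   # ~1.3e21 cm^-2
```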
\subsection{Ionization fraction}
The line of sight passes through regions of the circumstellar envelope
where the metals are partially ionized. Since most of the observed
lines arise from single stages of ionization, we need to consider
ionization corrections in order to determine the total gas phase
abundances. We estimate the corrections from the relative column
densities of the ionization stages of each metal, obtained using the
photo-ionization model of an expanding envelope discussed by Glassgold
\& Huggins (1986).
In the model, neutral atoms (which may result from the dissociation of
molecules) emerge from the dense, shielded, inner envelope and are
photo-ionized by the ambient interstellar radiation field. We
calculate the ionization fraction as a function of radius, and use
this to calculate the ionization fraction along the line of sight. The
ionization of each element is governed by equation 4.9 of Glassgold \&
Huggins (1986), with no contribution from chromospheric radiation. The
ionization depends on the photo-ionization rate (given by the
interstellar rate and the envelope shielding) and recombination. In
solving the ionization equations we use the interstellar
photo-ionization rates and recombination rates from P\'equignot \&
Aldrovandi (1986), except for Cr and Sr which are not included in
their compilation; for these the photo-ionization rates are from
Glassgold \& Huggins (1986) and the recombination rates from Bernat
(1976). The radial dependence of the electron abundance is adopted
from Cordiner et al.\ (2007), although recombination is relatively
unimportant except for Al. For the shielding of interstellar radiation
in the envelope we use the standard carbon dust model and dust-to-gas
ratio of Cherchneff et al.\ (1993) and the shielding function of
Morris \& Jura (1983).
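The structure of such a calculation can be sketched as follows. The rates, shielding, and electron density below are purely illustrative placeholders, not the P\'equignot \& Aldrovandi or Cordiner et al.\ values; the point is the form of the balance equation:

```python
import math

V_EXP = 14.5e5      # expansion velocity [cm/s] (assumed)
G0 = 1.0e-11        # unshielded photo-ionization rate [s^-1] (illustrative)
ALPHA = 1.0e-11     # recombination coefficient [cm^3 s^-1] (illustrative)
TAU0 = 2.0          # dust shielding optical depth at R0 (illustrative)
R0 = 1.0e16         # inner radius where neutral atoms emerge [cm]
NE0 = 10.0          # electron density at R0 [cm^-3] (illustrative)

def ion_fraction(r_out=1.0e17, nstep=20000):
    """Euler-integrate v dx/dr = G(r)(1 - x) - alpha n_e(r) x outward,
    starting from fully neutral gas at R0."""
    x, r = 0.0, R0
    dr = (r_out - R0) / nstep
    for _ in range(nstep):
        g = G0 * math.exp(-TAU0 * R0 / r)   # shielding decreases outward
        ne = NE0 * (R0 / r) ** 2            # electron density ~ 1/r^2
        x += dr * (g * (1.0 - x) - ALPHA * ne * x) / V_EXP
        r += dr
    return x
```

Integrating $x(r)$ along the line of sight then gives the column-averaged ionization used for the corrections $C_{\mathrm i}$.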
There are a number of uncertainties in the parameters of the model,
the most important being the strength of the ambient radiation field
(which determines the interstellar photo-ionization rates) and the
shielding in the envelope (which includes uncertainties in the dust
parameters, the mass loss rate, and the envelope
geometry). Fortunately we observe both the neutral and ionized column
densities of Ca, and we use this to set the ionization level in the
model. For the nominal parameters given above, the predicted ratio
$N(\ion{Ca}{ii})/N(\ion{Ca}{i})$ is a factor 3.6 larger than observed. This is fair
agreement considering that the ionization in the envelope has not
previously been constrained in this way. However, we can do much
better by fine-tuning the ionization level in the envelope to fit the
Ca ionization exactly. This can be done by adjusting the ambient
radiation field (by a factor of 0.46) or the shielding optical depth
(by a factor of 1.5). Both variations are within their respective
uncertainties. For specificity, we adopt the reduced radiation field
to calculate the ionization of the other elements. The resulting
ionization corrections ($C_{\mathrm i}$ = $N$({\sc i} + {\sc
ii})/$N$({\sc i}) for neutral species, and $N$({\sc i} + {\sc
ii})/$N$({\sc ii}) for singly ionized species), are given in column~4
of Table~3. Adjusting the shielding instead, by the amount given
above, produces essentially the same ionization corrections.
The ionization correction for \ion{Al}{i}\ in Table~3 is much larger than for
the other metals because of its relatively high photo-ionization
rate. Our observations are therefore not very sensitive probes of the
column density of Al in the gas phase because the Al is nearly
completely ionized along the line of sight. The ionization
corrections for the other neutral species are much smaller. For
example, Na is predominantly neutral. This is in contrast to typical
interstellar clouds with a similar column density of hydrogen. In the
circumstellar envelope, the characteristic outflow time at 35\arcsec\
is shorter than the photo-ionization time given by Glassgold \&
Huggins (1986). Hence, even without dust shielding, the Na atoms are
expected to be largely neutral, as given in the table.
\subsection{Abundances}
Except for Ca and Ti, the total gas phase column densities of the
metals are determined from the observed column densities and the
ionization corrections. The results are given in column 5 of Table~3.
Although the $N(\ion{Ca}{ii})/N(\ion{Ca}{i})$ ratio was used to determine the
ionization level in the envelope, we use the sum of the ionization
stages to obtain the total Ca column density, independent of the
ionization. Similarly, the limit for Ti is based on the observed
limits for \ion{Ti}{i}\ and \ion{Ti}{ii}, and so is independent of the ionization.
The gas phase abundances of the metals relative to hydrogen ($X$) are
determined from the total column densities and the value of
$N({\mathrm H})$ from Sect.~4.2, and are given in column~6 of Table~3.
For reference, the solar values ($X_\odot$) are given in column~7 of
the table, taken from Lodders (\cite{lodders03}). Comparison of the
envelope and solar abundances shows that there are large deficiencies
in the gas phase abundances in the envelope, which vary from metal to
metal.
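The depletion entries follow directly from the tabulated quantities. As a quick arithmetic check for Na (the small offset from the tabulated $-0.68$ is rounding):

```python
import math

n_na_i = 4.6e14     # observed N(Na I) [cm^-2]
c_ion = 1.22        # ionization correction C_i for Na
n_h = 1.3e21        # hydrogen column N(H) [cm^-2] (Sect. 4.2)
x_sun = 2.00e-6     # solar Na abundance

n_total = n_na_i * c_ion                 # total gas phase Na column, ~5.6e14
x_env = n_total / n_h                    # envelope abundance, ~4.3e-7
log_delta = math.log10(x_env / x_sun)    # depletion, ~ -0.67
```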
\begin{table*}[!ht]
\caption{Abundances in the envelope of IRC+10216}
\label{table:3}
\centering
\begin{tabular}{lccccccccc}
\noalign{\smallskip}
\hline
\hline
\noalign{\smallskip}
El. & Ion & $N$ & $C_{\mathrm i}^{\mathrm{a}}$ & $N$({\sc i}+{\sc ii}) & $N$({\sc
i}+{\sc ii})/$N$(H) & $X_\odot$ & $\log_{10} \delta$ &
$\log_{10} \delta_{7027}$ & $\log_{10} \delta_{\zeta\,\mathrm{Oph}}$ \\
& & (cm$^{-2}$) & & (cm$^{-2}$) \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
Na & \sc{i} & $\phantom{<}$4.6(14) & 1.22 & 5.6(14) & 4.2($-$7) &
2.00($-$6) & $-$0.68 & $-$0.06 & $-$0.95 \\
\noalign{\smallskip}
Al & \sc{i} & $<$2.8(12) & 3.5(3)& $<$9.8(15) & $<$7.3($-$6)
& 2.88($-$6) & $\ldots^{\mathrm{c}}$ & $\ldots$ & $\ldots$ \\
\noalign{\smallskip}
K & \sc{i} & $\phantom{<}$4.6(12) & 1.60 & 7.4(12) & 5.5($-$9) &
1.29($-$7) & $-$1.37 & $-$0.17 & $-$1.09\\
\noalign{\smallskip}
Ca & \sc{i} & $\phantom{<}$1.9(12) & 4.70 & 8.9(12)$^{\mathrm{b}}$
& 6.6($-$9) & 2.19($-$6) & $-$2.52 & $-$0.75 & $-$3.73 \\
& \sc{ii} & $\phantom{<}$7.0(12) & 1.27 & $\ldots$ & $\ldots$
& $\ldots$ & $\ldots$ & $\ldots$ & $\ldots$\\
\noalign{\smallskip}
Ti & \sc{i} & $<$1.9(12) & 4.26 & $<$3.9(12)$^{\mathrm{b}}$ & $<$2.9($-$9) &
8.32($-$8) & $<$\,$-$1.45 & $\ldots$ & $-$3.02\\
& \sc{ii} & $<$2.0(12) & 1.31 & $\ldots$ & $\ldots$
& $\ldots$ & $\ldots$ & $\ldots$ & $\ldots$ \\
\noalign{\smallskip}
Cr & \sc{i} & $\phantom{<}$1.4(12) & 10.6 & 1.5(13) & 1.1($-$8) &
4.47($-$7) & $-$1.60 & $\ldots$ & $-$2.28 \\
\noalign{\smallskip}
Mn & \sc{i} & $<$3.1(12) & 2.21 & $<$6.9(12) &
$<$5.1($-$9) & 3.16($-$7) & $<$\,$-$1.79 & $\ldots$ & $-$1.45 \\
\noalign{\smallskip}
Fe & \sc{i} & $\phantom{<}$8.8(13) & 2.58 & 2.3(14) & 1.7($-$7) &
2.95($-$5) & $-$2.24 & $-$1.68 & $-$2.27 \\
\noalign{\smallskip}
Sr & \sc{ii} & $<$2.7(11) & 3.82 & $<$1.0(12) & $<$7.6($-$10) &
8.13($-$10)& $\ldots^{\mathrm{c}}$ & $\ldots$ & $\ldots$ \\
\noalign{\smallskip}
\hline
\end{tabular}
\begin{list}{}{}
\item[$^{\mathrm{a}}$] Ionization correction $C_{\mathrm i}$ = $N$({\sc i} + {\sc
ii})/$N$({\sc i}) for neutral species and $N$({\sc i} + {\sc
ii})/$N$({\sc ii}) for singly ionized species.
\item[$^{\mathrm{b}}$] Based on observed values of $N$({\sc i}) + $N$({\sc
ii}), independent of $C_{\mathrm i}$.
\item[$^{\mathrm{c}}$] Upper limit on $X > X_\odot$, no useful limit
on $\delta$.
\end{list}
\end{table*}
\section{Dust condensation}
Several processes affect the state of the circumstellar material as it
moves from the stellar photosphere out through the envelope into the
interstellar medium. In order of increasing distance from the star
these processes include: dust condensation in conditions of
approximate thermodynamic equilibrium; gas phase chemical reactions;
photo-dissociation of the molecules; and eventual photo-ionization of
the atomic constituents in the outer envelope (e.g., Gilman 1969;
Tsuji 1973; McCabe et al.\ 1979; Huggins \& Glassgold 1982; Lafont et
al.\ 1982). In the carbon-rich environment of IRC+10216, the main
component of the dust is amorphous carbon with a minor component of
SiC (e.g., Martin \& Rogers 1987). The state of other elements,
especially the refractory metals, is not well understood (e.g., Turner
1995). Hence our observations of the gas phase metals in the outer
envelope provide new constraints on the gas phase chemistry and the
dust condensation.
\subsection{Observed depletions}
Comparison of the envelope abundances with the solar abundances in
Table~3 shows that most of the gas phase metal atoms in the
envelope are ``missing''. It is most unlikely that they are in the
form of gas phase molecules. The fractional abundance of even the most
extreme case of Ca, where $X(\mathrm{Ca})/X_{\odot} \sim 3\times
10^{-3}$, is an order of magnitude \emph{larger} than the largest
fractional abundance of any metal bearing molecule detected outside of
the core region (see Table~4). In addition, most molecules are
dissociated closer to the star than the 35\arcsec\ offset of our line
of sight. It is therefore reasonable to infer that the atomic
abundances observed are good approximations to the total gas phase
metal abundances in the envelope, and that the missing atoms are
depleted onto dust grains.
In column (8) of Table~3 we give the conventional measure of depletion
$\log_{10} \delta$, where $\delta = X/X_{\odot}$ is the depletion
factor. The observational limits for the abundances of Al and Sr are
$\ga$ the solar values, so in these cases we have no significant
limits for $\delta$, although Sr is an $s$-process element and may be
enhanced in IRC+10216 \emph{and} depleted.
Based on the measured depletions, there are two immediate
conclusions. First, the metals in this carbon-rich archetype are
primarily in the form of solids, and this is the dominant form
returned to the interstellar medium. Second, in spite of the depletion, a
significant residue of metallic atoms remains in the gas phase and varies
from metal to metal.
\subsection{Condensation and adsorption}
There are no detailed predictions for the depletion of metals in
IRC+10216, but there are some important considerations that bear on
the issue. A commonly used approach to the condensation of solids in
circumstellar envelopes is the assumption of thermodynamic equilibrium
in the warm, dense, inner envelope, where the chemical time scales are
rapid compared with the expansion time scale. Under these conditions
the formation of solid particles is controlled by the condensation
temperature of the primary condensate of each species.
For a carbon-rich envelope, the condensation sequence depends somewhat
on the C/O ratio and the gas pressure. The following sequence, for C/O
= 1.1 and a pressure of $10^{-6}$~bar (from Lodders \& Fegley 1995,
updated for Fe by Lodders \& Fegley 1999) is representative: C
(1670~K), TiC (1640~K), SiC (1460~K), FeSi (1230~K), AlN (1170~K), CaS
(1150~K), MgS (960~K), with other more volatile metals such as Na and
K at lower temperatures. Cr and Mn probably form sulphides but their
location in the sequence is uncertain. Thus the metals Ti, Fe, Al,
Ca, and then K and Na, are expected to be removed from the gas phase
successively. Those with lower condensation temperatures are less
likely to go to completion because of the decreasing density with
radius.
This qualitative picture is largely consistent with the observed
depletion pattern. The observations provide a firm upper limit on the
gas phase abundance of Ti (from the \ion{Ti}{i}\ and \ion{Ti}{ii}\ lines);
Fe and Ca are strongly depleted, although Ca is more depleted than Fe (in
reverse order to the condensation sequence); and Na and K are less
depleted.
The simplifying assumption of thermodynamic equilibrium is not a
complete physical theory of condensation because kinetic effects must
play a role near freeze-out. In addition, once formed, the grains can
act as sites for the adsorption of gas phase species further out in
the envelope (Jura \& Morris 1985). The adsorption depends on the
binding energy of the species to the grain surface, and on the
sticking probability ($p$), which is essentially unknown. Using the
analysis of Jura \& Morris (1985), Turner (1995) finds that for
IRC+10216, a volatile species such as K is depleted by adsorption to a
fractional abundance of 0.84 (for $p=0.1$) and 0.17 (for $p=1$), and a
more refractory species such as Al is depleted to a fractional
abundance of 0.05 (for $p=0.1$) and $10^{-10}$ (for $p=1$). Thus
adsorption of metals may be as important as the initial condensation.
Our observations of a residual atomic component in the gas phase
constrain the efficiency of both condensation and adsorption. Below
about 1150~K, the phase diagrams of Lodders \& Fegley (1995) show that
Ca is essentially completely removed from the gas phase, but our
observations show that even for this refractory metal there is a
residual component in the gas phase. Metal bearing molecules with Na
and K (albeit with low fractional abundances) are seen in the core
region (see Sect.~6) but not in the extended envelope. This may be the
result of adsorption. On the other hand, adsorption of the more
refractory species to levels of $10^{-10}$ is clearly ruled out by
the observations.
We conclude that current theoretical ideas are qualitatively in accord
with our observations of metal depletion, but improved models with
specific quantitative predictions are needed to discriminate the
underlying processes.
\subsection{Comparison with PNe}
Carbon-rich planetary nebulae (PNe) are the immediate descendants of
carbon-rich AGB stars, in which the circumstellar material has
undergone major changes. The gas has been photo-ionized, and the dust
grains are exposed to intense radiation fields and high ($\sim
10^4$~K) temperatures. A comparison of depletions in AGB stars and PNe
may therefore reveal some aspects of grain evolution.
Element abundances have been extensively measured in the ionized gas
in PNe. The abundances are, however, subject to systematic, and
sometimes large, uncertainties, and relatively few metals have
accessible lines covering the appropriate stages of ionization. For
comparison with IRC+10216 we focus on NGC~7027, which is one of the
most intensively studied, carbon-rich PNe. NGC~7027 is relatively
young and still surrounded by a substantial envelope of molecular gas
(Cox et al.\ \cite{cox02}). The circumstellar conditions before the
formation of the nebula were therefore somewhat similar to the current
state of IRC+10216.
There have been numerous abundance analyses of NGC~7027. We have taken
the abundances of Na, K, Ca, and Fe from recent, comprehensive studies
by Keyes et al.\ (1990), Middlemass et al.\ (1990), Bernard Salas et
al.\ (2001), and Zhang et al.\ (2005), and we give the corresponding
depletions in column (9) of Table~3. For species in common the
depletions have been averaged. Even with these state-of-the-art
analyses, the differences in abundances between the different studies
range up to a factor of $\sim 3$.
Comparison of the depletions in IRC+10216 and NGC~7027 reveals some
significant differences. First, the metals Na and K are much less
depleted in the ionized nebula, where they are close to the solar
values. Second, Fe and Ca are still significantly depleted in the
nebula, but evidently less than in the circumstellar envelope. These
results suggest the following evolutionary effects in the transition
from AGB star to PN: the nearly complete evaporation of the volatile
species Na and K, and the partial erosion of more refractory species
Ca, and possibly Fe. Further evidence for this view comes from the
study of NGC~7027 by Kingdon et al.\ (1995), who argue that Ca is
depleted by more than 2 orders of magnitude near the periphery of the
nebula (as in IRC+10216), but is much less depleted near the center of
the ionized nebula.
The abundances in NGC~7027 are fairly similar to other carbon-rich
PNe, e.g., the Fe depletion is similar to the typical Fe depletion
found in a sample of low ionization PNe by Delgado Inglada et al.\
(2009). The trends noted here may therefore be a general
characteristic of the evolution of dust from the AGB to PNe.
\subsection{Comparison with the ISM}
It is also of interest to compare our results for IRC+10216 with
depletions in the ISM. We take the line of sight towards $\zeta$~Oph
as representative of the ISM, bearing in mind that the overall level
of IS depletions varies with location but the general pattern
remains the same. The depletions towards $\zeta$~Oph, from Savage \&
Sembach (1996), are listed in column (10) of Table~3.
In spite of the different physical and chemical environments that lead
to grain formation in IRC+10216 and the ISM, it can be seen that the
depletion patterns are qualitatively similar. Na and K are the least
depleted, at comparable levels; Mn, Fe, and Cr are probably similar
although the detailed pattern may differ; and Ca is the most depleted
in both data sets.
In studies of the ISM it is found that element depletions correlate
with the (oxygen-rich) condensation temperature of the element-bearing
solid. This was interpreted in terms of grain formation at high
temperatures in the winds of mass-losing giants (Field 1974). More
recent studies indicate that a significant component of the dust in
the ISM is formed in situ. The correlation with the condensation
temperature may therefore reflect some aspect shared by the in situ
formation process in the ISM. The similarity of the pattern that we
find in IRC+10216 may have some bearing on this question, and deserves
further attention.
\begin{table}[!ht]
\caption{Metal bearing molecules in IRC+10216}
\label{table:4}
\centering
\begin{tabular}{lcccc}
\noalign{\smallskip}
\hline
\hline
\noalign{\smallskip}
Species & $\theta^{\mathrm a}_{\mathrm r} $ & $X^{\mathrm b}_{\mathrm{mol}}$ & $X_{\mathrm{mol}}/X_\odot$ & Ref. \\
& (\arcsec) \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
NaCl & 2.5 & 1.3($-$09) & 6.3($-$4) & 1 \\
NaCN & 2.5 & 6.8($-$09) & 3.4($-$3) & 1 \\
\noalign{\smallskip}
KCl & 2.5 & 3.8($-$10) & 2.9($-$3) & 2 \\
\noalign{\smallskip}
AlF & 2.5 & 4.6($-$08) & 1.6($-2$) & 1 \\
AlCl & 2.5 & 2.8($-$08) & 9.8($-$3) & 1 \\
AlNC & 5--15 & 2.3($-$10) & 7.9($-$5) & 3 \\
\noalign{\smallskip}
MgNC & 10--20 & 6.3($-$09) & 1.8($-$4) & 1 \\
MgCN & 10--20 & 2.9($-$10) & 8.0($-$6) & 4 \\
\noalign{\smallskip}
\hline
\end{tabular}
\begin{list}{}{}
\item
$^{\mathrm a}$ Source radius.
\item
$^{\mathrm b}$ Abundance relative to H, for $d=120$~pc, $\dot{M}(\mathrm{H})=1.25 \times 10^{-5}$~$M_{\odot}$~yr$^{-1}$.
\item
References: (1) Highberger \& Ziurys\ (\cite{highberger03}). (2)
Assuming $X({\rm{KCl}}) = 0.3 X({\rm{NaCl}})$, from Cernicharo \&
Gu\'elin (\cite{cernicharo87}). (3) Ziurys et al.\
(\cite{ziurys02}). (4) Assuming $X({\rm{MgCN}}) = 0.045
X({\rm{MgNC}})$, from Ziurys et al.\ (\cite{ziurys95}).
\end{list}
\end{table}
\section{Metal chemistry}
The circumstellar envelope of IRC+10216 exhibits a remarkable gas
phase chemistry, with more than 50 molecular species detected to date,
mainly through their rotational transitions at millimeter wavelengths.
The majority of the molecules are formed of the abundant elements H,
C, N, and O, several include Si and S, but a small number unexpectedly
include metals, Na, K, Mg, and Al. The first of these were discovered
by Cernicharo \& Gu\'elin (1987). Our observations of gas phase metal
atoms in IRC+10216 constrain some aspects of the metal chemistry in
the envelope.
The metal bearing molecules detected in the envelope are listed in
Table~4. In addition to these, numerous other metal bearing species
have been searched for at comparable levels, but not detected (e.g.,
Turner 1995). There are also other metal species whose rotational
frequencies have not yet been measured in the laboratory (e.g., Ziurys
2006b).
For comparison with the atomic abundances reported here, we list
updated abundances for the metal molecules in column (3) of
Table~4. These are based on the column densities given in the
references in the table, but are derived for the distance and mass
loss rate used in this paper, and are relative to hydrogen. The
fractional abundance $X_{\mathrm{mol}}/X_{\odot}$, is the fraction of
each metal in a particular molecular species. $\theta_{\mathrm r}$ is
the measured, or inferred, angular radius of the molecular
distribution, adopted from the references.
The metal bearing molecules divide naturally into two groups. The
first is confined to the dense core region represented by
$\theta_{\mathrm r} \la 2.5\arcsec$ in the table. The second, which
includes only Mg and Al bearing molecules, is found in distinct shells
in the envelope, in the photo-dissociation region. The general level
of incorporation of the metals into molecules is very small in both
groups.
In the outer envelope, the fractional abundances of the metal bearing
molecules are $\la 0.02$~per cent, while the atoms that we detect have
fractional abundances of 0.3--20~per cent. This we used in Sect.~5.1
to justify the assumption that the observed atoms are the dominant gas
phase metal carriers along the line of sight.
Even in the core region the molecules do not seem to be dominant. The
observed fraction of Na in the core in the form of molecules is $\sim
0.4$~per cent, and we detect 20~per cent in the form of atoms farther
out. Similarly, the fraction of K in the core in the form of molecules
is $\sim 0.3$~per cent, and we detect 4~per cent in atoms farther
out. For these species the numbers are consistent with the idea that
apart from condensation or adsorption onto dust grains, the dominant
gas phase species is atomic throughout the envelope. There is a lack of
specific molecular information for the other metals, but the absence
of detected molecular species argues that this applies to them as
well.
Our finding that residual metals are present in the envelope in the form of
neutral atoms and ions provides an observational foundation for
understanding certain aspects of the metal chemistry. For example, it
has been proposed that metal cyanides are formed by reactions of metal
ions with cyanopolyynes (Dunbar \& Petrie 2002), and the model of
Cordiner \& Millar (2009) shows that this can account for the observed
abundance of MgCN if enough Mg$^+$ is present in the gas phase. We
have not observed Mg, but we expect that it behaves like the other
metals. Its condensation temperature is significantly less than that
of Ca and Fe, so it is likely to be less depleted in the envelope. The
gas phase abundances of the metals that we observe are given in column
(6) of Table~3. These may undergo reactions with cyanopolyynes and
other abundant neutral molecules, such as unsaturated hydrocarbons,
and lead to a variety of metal bearing species. The largest
abundances that we observe in the gas are those of Na and Fe (whose
large depletion is balanced by its high cosmic abundance). Thus Fe
offers interesting possibilities for a potentially observable Fe
chemistry.
\section{Conclusions}
The observations of IRC+10216 reported in this paper represent the
first comprehensive study of atomic metals in a carbon-rich
circumstellar envelope. We detect lines of Na, K, Ca, Cr, and Fe, and
obtain upper limits for Al, Ti, Mn, and Sr. Combined with a simple
model of the ionization, the observations provide estimates of the gas
phase metal abundances in the outer envelope.
The results show that the metals, especially Ca and Fe, are
significantly depleted onto dust grains in the circumstellar envelope,
and this is the dominant form returned to the ISM. The depletion
pattern has some similarity with depletion in the ISM, and is roughly
consistent with expectations of dust condensation in a carbon-rich
envelope.
Although the metals are depleted in the envelope, atomic metals in the
form of neutral atoms and ions appear to be the major metal species in
the gas phase. As such, they likely play a key role in the metal
chemistry of the envelope.
\begin{acknowledgements}
We thank Dr.~K.~Lodders for helpful information on dust condensation. We also
thank an anonymous referee for helpful comments.
This work is supported in part by the National Science Foundation,
grant AST 08-06910 (PJH).
\end{acknowledgements}
\section{Introduction}
The main feature of Finsler geometry is that the associated {\em generalised} metric tensor $\tilde g$ (also called {\em fundamental tensor}) has a dependence on the directions. More precisely, at any point $p$ of a spacetime $\tilde M$ there are infinitely many scalar products $\tilde g_v$, one for each direction $v$ where $\tilde g$ is defined.
In this respect, a gravitation theory based on Finsler geometry is a metric one and it can include, as its isotropic case, general relativity. Several works in relativistic physics indeed use Finsler geometry.\footnote{For applications of Finsler geometry to non-relativistic physics and biology we recommend \cite{AnInMa93}.} We recall here the pioneering work of G. Randers \cite{Rander41} on the asymmetry of time intervals, where a class of Finsler metrics, nowadays called Randers metrics, is introduced, together with its connection with 5-dimensional Kaluza-Klein theory; the work of G. Y. Bogoslovsky on Lorentz symmetry violation (see e.g. \cite{Bogosl77a, Bogosl77b, Bogosl94}) and the finding by G.W. Gibbons et al. \cite{GiGoPo07} that general very special relativity is the group of transformations that leaves invariant a Finsler metric introduced by Bogoslovsky; the work of H.E. Brandt on maximal proper acceleration (see e.g. \cite{Brandt1999}), where generalised metrics on the tangent bundle of the spacetime are used; the extension of Fermat's principle to Finsler spacetimes by V. Perlick \cite{perlick06}, motivated by optics in anisotropic non-dispersive media; and the applications to quantum gravity by F. Girelli et al. \cite{GiLiSi2007}. More recently, mathematical models where Finsler geometry replaces the Lorentzian one have been considered in gravitation, see e.g. \cite{amir, Bar12, FusPab16, KoStSt12, LiChang, Minguz15b, PfeWol11}, in cosmology \cite{HohPfe17, papa, stavac, vacaru12} and in the so-called Standard Model Extension (see e.g. \cite{Colladay, Koste, Russell, Russel15, shreck}).
In this work we study Finsler spacetimes endowed with a timelike Killing vector field and we introduce a new class of Finsler spacetimes that can be viewed as a Finslerian extension of the class of the standard stationary Lorentzian manifolds (see, e.g., \cite{mas, FoGiMa95, JavS08}).
A Finsler spacetime is here defined as a smooth, connected, paracompact manifold $\tilde M$ of dimension $n+1$, $n\geq 1$, endowed with a generalised metric tensor $\tilde g$, defined on an open subset $A\subset T\tilde M$, having index $1$ for each $v\in A$ and which is the Hessian w.r.t. the velocities of a Lorentz-Finsler function $L$ (see Definition~\ref{fst} below). This is a function
$L:T\tilde M\rightarrow\mathbb{R}$ which is positively homogeneous of degree $2$ in the velocities, i.e. $L(p,\lambda v)=\lambda^{2}L(p,v)$, $\forall\lambda>0$.
The domain $A$ where $\tilde g$ is well defined and has index $1$ is, in general, a smooth cone subset of $T\tilde M\setminus 0$, where $0$ denotes the zero section in $T\tilde M$. By ``smooth cone subset'' we mean that $\tilde\pi(A)=\tilde M$, where $\tilde\pi:T\tilde M\rightarrow \tilde M$ is the natural projection from the tangent bundle $T\tilde M$ to $\tilde M$, and, for every $p\in \tilde M$, $A_p:=A\cap T_p\tilde M$ is an open linear cone (without the vertex $\{0\}$) of the tangent space $T_{p}\tilde M$, i.e. if $v\in A_p$ then $\lambda v\in A_p$ for each $\lambda>0$. Moreover, $A_p$ varies smoothly with $p\in \tilde M$ in the sense that $A_p$ is the union of the solutions of a finite number of systems of inequalities
\[
\begin{cases}E_{1,k}(p,v)>0\\
\ldots\\
E_{m_k,k}(p,v)>0\end{cases}\]
where, for each $k\in\{1,\ldots,l\}$, $E_{1,k}, \ldots, E_{m_k,k}\colon T\tilde M\to \ensuremath{\mathbb R}\xspace$ are $m_k$ smooth functions on $T\tilde M$, positively homogeneous of degree $1$ in $v$.
In our paper, from Section~\ref{kvf} on, $A_p$ will be equal to $T^+_p\tilde M\setminus \mathcal T_p$ or, in some cases, to $T_p\tilde M\setminus \mathcal T_p$, where $T^+_p\tilde M$ is a half-space in $T_p\tilde M$ whose boundary is a hyperplane passing through $\{0\}$ and $\mathcal T_p$ is a one-dimensional subspace intersecting $T^+_p\tilde M$. We will denote the set $A=\cup_{p\in\tilde M}A_p$ by $T^+\tilde M\setminus \mathcal T$ in the former case and by $T\tilde M\setminus \mathcal T$ in the latter. Indeed, the cone subsets $T^+\tilde M\setminus \mathcal T$ and $T\tilde M\setminus \mathcal T$ are the natural candidate domains for a generalised metric tensor $\tilde g$ if one asks for a Lorentz-Finsler function $L$ on a product manifold $\tilde M=\ensuremath{\mathbb R}\xspace\times M$ such that, on $TM$, $L$ reduces to the square of a classical Finsler metric. In fact, $L$ cannot be smoothly extended
to vectors which project on $0$ in $TM$, because the square of a classical Finsler metric is not twice differentiable on zero vectors.
More generally, as suggested in \cite{LPH}, $L$ could be smooth only on $T\tilde M\setminus \mathcal Z$ where $\mathcal Z$ is a zero measure subset of $T\tilde M$. It is worth recalling that it would be possible to define $L$ on a cone subset $A$ where $L$ is negative and, at each point $p\in \tilde M$, $A_p$ is a convex salient cone and $L$ is extendible and smooth on a cone subset around the set of lightlike vectors $\{v\in T\tilde M\setminus 0:L(v)=0\}$ which defines the boundary of $A$ in $T\tilde M\setminus 0$, \cite{amir} ($A$ is then the cone subset of the Finslerian future-pointing timelike vectors).
Let us now give some further details about the generalised metric tensor $\tilde g$. Let $A\subset T\tilde M$ be a cone subset as above and
let $\pi\colon A\to \tilde M$ be the restriction of the canonical projection $\tilde \pi:T\tilde M\to \tilde M$ to $A$. Moreover, let $\pi^*(T^*\tilde M)$ be the pull-back cotangent bundle over $A$. We consider the tensor product bundle $\pi^*(T^*\tilde M)\otimes\pi^*(T^*\tilde M)$ over $A$ and a section $\tilde g:v\in A\mapsto \tilde g_v\in T^*_{\pi(v)}\tilde M\otimes T^*_{\pi(v)}\tilde M$. We say that $\tilde g$ is {\em symmetric} if $\tilde g_v$ is symmetric for all $v\in A$. Analogously, $\tilde g$ is said to be {\em non-degenerate} if $\tilde g_v$ is non-degenerate for each $v\in A$ and its index will be the common index of the symmetric bilinear forms $\tilde g_v$; moreover, $\tilde g$ will be said to be homogeneous if, for all $\lambda>0$ and $v\in A$, $\tilde g_{\lambda v}=\tilde g_{v}$. A smooth, symmetric, homogeneous, non-degenerate section $\tilde g$ of the tensor bundle $\pi^*(T^*\tilde M)\otimes \pi^*(T^*\tilde M)$ over $A$ will be called a {\em generalised metric tensor}.
\begin{dfn}\label{fst}
A {\em Finsler spacetime} is a smooth $(n+1)$-dimensional manifold $\tilde M$, $n\geq 1$, endowed with a generalised metric tensor $\tilde g$, defined on a (maximal) cone subset $A\subset T\tilde M\setminus 0$, such that $\tilde g_{(p,v)}$ has index $1$, for each $(p,v)\in A$, and it is the fiberwise Hessian of a {\em Lorentz-Finsler function} $L$:
(i) $L:T\tilde M \rightarrow\mathbb{R}$,\quad $L\in C^0(T\tilde M)\cap C^3(A)$,
(ii) $L(p,\lambda v)=\lambda^{2} L(p, v)$, for all $(p,v)\in T\tilde M$ and all $\lambda>0$,
(iii)
\begin{equation}\label{tildeg}
\tilde g_{(p,v)}(u_1,u_2):=\frac 1 2 \frac{ \partial^{2}L}{\partial s_1\partial s_2}(p,v+s_1u_1+s_2u_2)|_{(s_1,s_2)=(0,0)},
\end{equation}
for all $(p,v)\in A$. Moreover, there exists a smooth vector field $Y$ such that $Y_p\in \bar{A_p}$ and $L(p,Y_p)<0$, for all $p\in \tilde M$, where $\bar{A_p}$ is the closure of $A_p$ in $T_p\tilde M\setminus \{0\}$.
We denote a Finsler spacetime by $(\tilde M, L)$; in some circumstances, to emphasize that $\tilde g$ is defined and has index $1$ only on $A\subset T\tilde M\setminus 0$, we denote it by $(\tilde M,L,A)$.
\end{dfn}
\begin{rmk}\label{good}
Observe that,
whenever $\tilde g$ is defined on $T\tilde M\setminus 0$, Definition~\ref{fst} coincides with that in \cite[Def. 3]{Minguzzi}.
We emphasize that $A$ is to be understood as the maximal open domain in $T\tilde M\setminus 0$ where $\tilde g$ is well defined and has index $1$. We do not assume a priori that the connected component of $\bar A_p$ that contains $Y_p$ is convex, or that all the lightlike vectors ($v\in T\tilde M\setminus 0$, such that $L(v)=0$) in such a component belong also to $A_p$; however, both these properties should hold in order to obtain reasonable local and global causality properties (see \cite{Minguzzi, amir, Minguz15b}) and indeed they are satisfied by the class of stationary Finsler spacetimes that we introduce below. On the other hand,
some Finslerian models do not satisfy the second requirement above: for example, in (deformed) Very Special Relativity, minus the square of the line element (see \cite{GiGoPo07, FusPab16}) is given by
\[L(v):=-(-g(v,v))^{1-b}(-\omega(v))^{2b},\]
where $g$ is a Lorentzian metric on $\tilde M$ admitting a global smooth timelike vector field $Y$, which gives to $(\tilde M,g)$ a time orientation, and $\omega$ is a one-form on $\tilde M$ which is metrically equivalent, w.r.t. $g$, to a future-pointing lightlike vector field $u$. Depending on the value of the parameter $b\neq 1$, the fundamental tensor $\tilde g$ of $L$ is either not defined or vanishing at $u$, which is also lightlike for $L$, while for all the timelike future-pointing vectors $v$ of $g$ we have that $L(v)<0$.
\end{rmk}
\begin{rmk}
We could allow more generality by not prescribing the existence of a Lorentz-Finsler function.
This is a quite popular generalisation of classical Finsler geometry, see, e.g., \cite[\S 3.4.2]{AnInMa93} or \cite{Lovas04,MeSzTo03}, and the references therein, where such structures are indeed called generalised metrics.
However, as observed above and in \cite[Remark 2.11]{CapSta16}, the existence of a {\em good} (in the sense explained in Remark~\ref{good}) Lorentz-Finsler function avoids the occurrence of some causality issues.
\end{rmk}
Henceforth we will omit the dependence on the point of the manifold $\tilde M$, writing simply $L(v)$, $\tilde g_{v}$, etc., reintroducing it only when necessary (as in the statement of Theorem~\ref{charstat}, where we use the notation $L(z,\cdot)$ to denote the map $T_z\tilde M\to \ensuremath{\mathbb R}\xspace$ obtained from $L$ by fixing $z\in \tilde M$).
\section{Killing vector fields}\label{kvf}
In this section we extend the notion of Killing vector field to Finsler spacetimes following the approach of \cite{Lovas04}, with the difference that the base space in our setting is the open subset $A\subset T\tilde M$, while in \cite{Lovas04} it is the standard one for Finsler geometry, i.e. the slit tangent bundle. We will only consider Killing vector fields that are vector fields on $\tilde M$ by passing to their complete lifts on $T\tilde M$ and then restricting them to the open base space $A$. Clearly a more general approach is possible by considering {\em generalised vector fields}, i.e. sections of $\pi^*(T\tilde M)$, as in \cite{OoYaIs17}. Another interesting approach is given in \cite[\S 2.9]{java}.
Let us give some preliminary notions.
Let $f\colon \tilde M\to \ensuremath{\mathbb R}\xspace$ be a smooth function. The {\em complete lift of $f$ on $T\tilde M$} is the function $f^c$ defined as $f^{c}(v):=v(f)$ for any $v\in T\tilde M$.
Let now $X$ be a vector field on $\tilde M$ and set $X^c$ the \emph{complete lift of $X$ to $T\tilde M$}, defined by
\baln
&X^c(f \circ \pi):=X(f),\text{ for all smooth functions $f$ on $\tilde M$,}\\
&X^c(f^c):=(X(f))^c.
\ealn
Observe that if $(x^0,\ldots,x^n)$ are local coordinates on $\tilde M$ and $(x^0,\ldots, x^n,y^0,\ldots, y^n)$ are the induced ones on $T\tilde M$ (by an abuse of notation we denote the induced coordinate $x^i\circ\tilde \pi$ again by $x^i$), then $(x^i)^c=y^i$, for all $i=0,\ldots, n$; so it is easy to check that in local coordinates $(X^c)_{(x,y)}$ is given by
\begin{equation}\label{completelift}X^{h}(x)\frac{\partial }{\partial x^{h}}+\frac{\partial X^{h}}{\partial x^{i}}(x)y^{i}\frac{\partial}{\partial y^{h}},\end{equation}
where we have used the Einstein summation convention; here $(x,y)\in T\tilde M$ has coordinates $(x^0,\ldots,x^{n},y^0,\ldots, y^{n})$, and $X^h(x)$, $h=0,\ldots, n$, are the components of $X$ w.r.t. $\left(\frac{\partial}{\partial x^h}\right)_{h\in\{0,\ldots,n\}}$.
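As a minimal sanity check of \eqref{completelift} (an illustrative example of ours, not drawn from the references), take $\tilde M=\ensuremath{\mathbb R}\xspace^2$ with coordinates $(x^0,x^1)$ and $X=x^0\,\partial_{x^1}$, i.e. $X^0=0$, $X^1=x^0$:

```latex
% Example (illustrative): X = x^0 \partial_{x^1} on \tilde M = R^2.
% Formula \eqref{completelift} with X^0 = 0, X^1 = x^0 gives
\[
X^c \;=\; x^{0}\,\frac{\partial}{\partial x^{1}}
      \;+\; y^{0}\,\frac{\partial}{\partial y^{1}}.
\]
% Consistency with the defining relations: for f = x^1 one has f^c = y^1,
% X(f) = x^0, hence (X(f))^c = (x^0)^c = y^0, which indeed equals
% X^c(f^c) = X^c(y^1) = y^0.
```

The vertical part of $X^c$ records how the flow of $X$ deforms velocities, which is exactly what is needed below to differentiate objects living on $A\subset T\tilde M$ along $X$.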
\begin{rmk}\label{restrictions}
It is worth observing that complete lifts on $A$ are well defined by restricting functions and fields to the open subset $A$ and, in the following, we will consider such restrictions and we will denote them, with an abuse of notation, always by $f^c$ and $X^c$.
\end{rmk}
The canonical vertical bundle map between $\tilde \pi^*(T\tilde M)$ and $T(T\tilde M)$ induces an injective bundle map $i:\pi^*(T\tilde M)\rightarrow T(A)$; in local coordinates $(x^i,y^i)$ on $T\tilde M$, if $z_y= z^{i}(x,y)\frac{\partial}{\partial x^i}|_{(x,y)}$, $(x,y)\in A$, then
\begin{equation*}
i(z_y)=z^i(x,y)\frac{\partial}{\partial y^i}|_{(x,y)}.
\end{equation*}
Observe that the map $i$ induces also an injective homomorphism between $\mathfrak{X}(\pi)$ and $\mathfrak{X}(A)$, denoted always by $i$, where $\mathfrak{X}(\pi)$ and $\mathfrak{X}(A)$ are the sets of smooth sections of $\pi^*(T\tilde M)$ (over $A$) and, respectively, of $T(A)$. In an analogous way, a map $j:T(A)\rightarrow\pi^{*}(T\tilde M)$ can be defined as $j(w):=d\pi_y(w)$, for every $w\in T_{y}A$. Observe that $ i(\pi^*(T\tilde M))=\ker j$ and we have the following exact sequence
\begin{equation*}
0\rightarrow\pi^*(T\tilde M)\xrightarrow{i}T(A)\xrightarrow{j}\pi^{*}(T\tilde M)\rightarrow 0.
\end{equation*}
Thus another homomorphism between $\mathfrak{X}(A)$ and $\mathfrak{X}(\pi)$, denoted always by $j$, is defined and the following exact sequence holds
\begin{equation*}
0\rightarrow\mathfrak{X}(\pi)\xrightarrow{i}\mathfrak{X}(A)\xrightarrow{j}\mathfrak{X}(\pi)\rightarrow 0.
\end{equation*}
The \emph{vertical vector fields} are the elements of $ i(\mathfrak X(\pi))$.
We define the \emph{Lie derivative} $\mathcal{L}_{X}$, relative to any smooth vector field $X$ on $\tilde M$, on the tensor product bundles of the pull-back bundles $\pi^*(T\tilde M)$ and $\pi^*(T^*\tilde M)$ over $A$ by setting:
\begin{equation}\label{LD}
\mathcal{L}_{X}f:=X^c(f),\quad\mathcal{L}_{X} Y:=i^{-1}([X^c,i(Y)]),
\end{equation}
for any smooth function $f$ on $A$ and any $Y\in \mathfrak X(\pi)$, where $[\cdot, \cdot]$ is the Lie bracket on $A$ (recall Remark~\ref{restrictions}).
Then $\mathcal L_X$ is extended to any section of the tensor product bundles of the pull-back bundles $\pi^*(T\tilde M)$ and $\pi^*(T^*\tilde M)$ over $A$ by the
generalised Willmore's theorem for tensor derivations (see, e.g.
\cite[\S 1.32]{Szilas03}). Observe that the second equation in (\ref{LD}) is well posed, namely $[X^c,i( Y)]$ is vertical. In fact, it is almost immediate to see that the Lie bracket of any vector field $X^c$ and any vertical vector field is vertical; in local coordinates $(x^i,y^i)$ of $T\tilde M$, if $Y=Y^k(x,y)\frac{\partial}{\partial x^k}$ and $X=X^h(x)\frac{\partial}{\partial x^h}$, we have indeed
\begin{equation}\label{vLie}
[X^c,i(Y)]=\left(X^h\frac{\partial Y^k}{\partial x^h}+\frac{\partial X^h}{\partial x^i}y^i\frac{\partial Y^k}{\partial y^h}-Y^{h}\frac{\partial X^k}{\partial x^h}\right)\frac{\partial}{\partial y^k}.
\end{equation}
The Lie derivative $\mathcal L_X$ on $\pi^*(T^*\tilde M)\otimes\pi^*(T^*\tilde M)$ is then,
\begin{equation}\label{Lie}
\mathcal L_{X}\tilde g(Y,Z):=X^c(\tilde g(Y,Z))-\tilde g(\mathcal L_X Y,Z)-\tilde g(Y,\mathcal L_X Z),
\end{equation}
for any $\tilde g\in \pi^*(T^*\tilde M)\otimes\pi^*(T^*\tilde M)$ and for every $Y, Z\in \mathfrak{X}(\pi)$.
Observe that in a local base $\left(\widehat{\frac{\partial }{\partial x^0}},\ldots,\widehat{\frac{\partial }{\partial x^n}}\right)$ of $\mathfrak X(\pi)$,
$\widehat{\frac{\partial }{\partial x^i}}:=\frac{\partial }{\partial x^i}\circ\pi$, for each $i\in\{0,\ldots,n\}$, we have:
\begin{eqnarray}\label{jjj}
\nonumber\mathcal L_{X}\tilde g\left(\widehat{\frac{\partial}{\partial x^l}},\widehat{\frac{\partial}{\partial x^j}}\right)&=&\nonumber X^{c}(\tilde g_{lj})-\tilde g\left(\mathcal L_X\widehat{\frac{\partial}{\partial x^l}},\widehat{\frac{\partial}{\partial x^j}}\right)-\tilde g\left(\widehat{\frac{\partial}{\partial x^l}},\mathcal L_{X}\widehat{\frac{\partial}{\partial x^j}}\right)\\&=&X^{c}(\tilde g_{lj})+\tilde g\left(\frac{\partial X^h}{\partial x^l}\widehat{\frac{\partial}{\partial x^h}},\widehat{\frac{\partial}{\partial x^j}}\right)+\tilde g\left(\widehat{\frac{\partial}{\partial x^l}},\frac{\partial X^h}{\partial x^j}\widehat{\frac{\partial}{\partial x^h}}\right)\nonumber\\&=& X^{c}(\tilde g_{lj})+\frac{\partial X^h}{\partial x^l}\tilde g_{hj}+\frac{\partial X^h}{\partial x^j}\tilde g_{lh},
\end{eqnarray}
where $\tilde g_{ij}:=\tilde g\left(\widehat{\frac{\partial }{\partial x^i}},\widehat{\frac{\partial }{\partial x^j}}\right)$, for all $i,j\in \{0,\ldots,n\}$; here, in the second equality, we have used (\ref{vLie}) and the fact that $i(\widehat{\frac{\partial }{\partial x^i}})=\frac{\partial}{\partial y^i}$.
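When $\tilde g_{lj}$ does not depend on the velocities, as for a Lorentzian metric on $\tilde M$, the vertical part of $X^c$ in \eqref{completelift} annihilates $\tilde g_{lj}$ and \eqref{jjj} reduces to the classical coordinate expression of the Lie derivative (a remark of ours, stated for orientation):

```latex
% If g_{lj} = g_{lj}(x) only (Lorentzian case), then by \eqref{completelift}
% X^c(g_{lj}) = X^h \partial g_{lj}/\partial x^h, since the
% \partial/\partial y^h terms act trivially, so \eqref{jjj} becomes
\[
(\mathcal L_{X} g)_{lj}
  \;=\; X^{h}\,\frac{\partial g_{lj}}{\partial x^{h}}
  \;+\; \frac{\partial X^{h}}{\partial x^{l}}\, g_{hj}
  \;+\; \frac{\partial X^{h}}{\partial x^{j}}\, g_{lh},
\]
% and \mathcal L_X g = 0 is the classical Killing equation.
```

This shows that the definition below specializes to the usual Lorentzian notion of Killing vector field in the isotropic case.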
\begin{dfn}
Let $(\tilde M, L, A)$ be a Finsler spacetime, $K$ be a smooth vector field on $\tilde M$ and $\psi$ its flow. We say that $K$ is a {\em Killing vector field of $(\tilde M,L,A)$} if $\mathcal L_K\tilde g=0$.
\end{dfn}
The following characterization of Killing vector fields holds:
\begin{prop}\label{Linvariant}
Let $(\tilde M, L, A)$ be a Finsler spacetime (hence $L\in C^0(T\tilde M)\cap C^3(A)$, according to Definition~\ref{fst}); then $K$ is a Killing vector field if and only if $K^c(L)|_A=0$.
\end{prop}
\begin{proof}
Observe that from \eqref{completelift} we have
\begin{eqnarray*}
K^{c}(L)(x,y)&=&K^{h}(x)\frac{\partial L}{\partial x^{h}}(x,y)+\frac{\partial K^{h}}{\partial x^{i}}(x)y^{i}\frac{\partial L}{\partial y^{h}}(x,y)\\&=&K^{h}(x)\frac{\partial }{\partial x^{h}}\big(\tilde g_{lj}(x,y)y^l y^j\big)+\frac{\partial K^{h}}{\partial x^{i}}(x)y^{i}\frac{\partial}{\partial y^{h}}\big(\tilde g_{lj}(x,y)y^l y^j\big)\\&=&\left(K^{c}(\tilde g_{lj})(x,y)+\frac{\partial K^h}{\partial x^l}(x)\tilde g_{hj}(x,y)+\frac{\partial K^h}{\partial x^j}(x)\tilde g_{lh}(x,y)
\right)y^{l}y^j
\end{eqnarray*}
for every $(x,y)\in A$.
Thus, if $K$ is Killing, by (\ref{jjj}), $K^{c}(L)(x,y)= \big(\mathcal L_{K}\tilde g\big)_{(x,y)}(y,y)=0$, for every $(x,y)\in A$.
Let us now assume that $K^c(L)|_A=0$. Observe that if there exists an open subset $U$ of $\tilde M$ where $K$ vanishes, then also $K^c|_{TU}\equiv 0$ and, from \eqref{jjj}, $(\mathcal L_K\tilde g)_{(x,y)}=0$ for all $(x,y)\in TU\cap A$. Let now $p\in \tilde M$ be such that $K_p\neq 0$ and let us consider, in a neighborhood $V\subset \tilde M$ of $p$, a coordinate system $(x^0,\ldots, x^n)$ such that $\frac{\partial}{\partial x^0}=K$. In the induced coordinates on $T\tilde M$ we have that $K^c=\frac{\partial}{\partial x^0}\circ\tilde{\pi}$ (recall \eqref{completelift}) and, so, $K^c(L)(x,y)=\frac{\partial L}{\partial x^0}(x,y)=0$ for all $(x,y)\in TV\cap A$. Hence,
\[\frac{\partial^3 L}{\partial y^i\partial y^j \partial x^0}(x,y)=\frac{\partial^3 L}{\partial x^0\partial y^i\partial y^j }(x,y)=2\frac{\partial\tilde g_{ij}}{\partial x^0}(x,y)=0,\]
for all $(x,y)\in TV\cap A$ and for all $i,j\in \{0,\ldots,n\}$; from \eqref{jjj}, $(\mathcal L_K\tilde g)_{(x,y)}=0$ for each $(x,y)\in TV\cap A$. Thus, $(\mathcal L_K\tilde g)_{(p,y)}=0$ for any $(p,y)\in A$ such that $K_p\neq 0$. By continuity, $(\mathcal L_K\tilde g)_{(q,y)}=0$ for any $(q,y)\in A$ such that $q$ belongs to the closure of $\{p\in \tilde M:K_p\neq 0\}$ and then $\mathcal L_K\tilde g=0$ everywhere in $A$.
\end{proof}
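A minimal example of the criterion (ours, anticipating the stationary case discussed below): suppose that, in coordinates $(t,x^1,\ldots,x^n)$ on $\tilde M$, the Lorentz-Finsler function $L$ does not depend on $t$, and take $K=\partial_t$.

```latex
% K = \partial_t has constant components, so its complete lift
% \eqref{completelift} has no vertical part: K^c = \partial_t. Then
\[
K^{c}(L) \;=\; \frac{\partial L}{\partial t} \;=\; 0 \quad\text{on } A,
\]
% and Proposition~\ref{Linvariant} yields that \partial_t is a Killing
% vector field of (\tilde M, L, A).
```

This is exactly the situation of the stationary splitting Finsler spacetimes introduced in the next section.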
Let us now see how the flow of $K^c$ behaves w.r.t. $A$.
\begin{lem}\label{Ainvariant}
Let $\tilde M$ be a manifold and $A$ a cone subset of $T\tilde M$ (according to the definition in the Introduction). Let $X$ be a smooth vector field on $\tilde M$. Then for each $\bar p\in \tilde M$ there exists an interval $I_{\bar p}$, $0\in I_{\bar p}$, and a neighborhood $U$ of $\bar p$ in $\tilde M$ such that the flow $\tilde \psi$ of $X^c$ is well defined on $I_{\bar p}\times TU$ and $\tilde \psi\big(I_{\bar p}\times (TU\cap A)\big)\subset A$.
\end{lem}
\begin{proof}
Let us denote by $\psi$ the flow of $X$. It is well known that for any $v\in T\tilde M$ there exists a neighborhood $V \subset \ensuremath{\mathbb R}\xspace\times T\tilde M$ of $(0,v)$ such that
\[\tilde \psi : V\to T\tilde M,\quad \quad \tilde\psi(t,v)=(\psi(t,p),d\psi_t(v)),\]
$p=\tilde \pi(v)$,
is a local flow of $X^c$. In fact,
\[\frac{\partial \tilde \psi}{\partial t}(t,v)=\left(\frac{\partial \psi}{\partial t}(t, p),\frac{\partial }{\partial t}\big(\partial_x \psi(t,p)(v)\big)\right),\]
where $\partial_x \psi(t,p)(v)$ denotes the partial differential of $\psi$ w.r.t. the second variable at $(t,p)$, evaluated in $v$ (hence $\partial_x \psi(t,p)(v)=d \psi_t(v)$). Thus, in local coordinates on $T\tilde M$, we have
\begin{align*}
\frac{\partial \tilde \psi}{\partial t}(t,v)&= X^h\big(\psi(t,p)\big)\frac{\partial}{\partial x^h}+\frac{\partial }{\partial t}\left(\frac{\partial \psi^h}{\partial x^j}(t,p) v^j\right)\frac{\partial}{\partial y^h}\\
&=X^h\big(\psi(t,p)\big)\frac{\partial}{\partial x^h}+\frac{\partial^2 \psi^h}{\partial t\partial x^j}(t,p)v^j\frac{\partial}{\partial y^h}\\
&=X^h\big(\psi(t,p)\big)\frac{\partial}{\partial x^h}+\frac{\partial }{ \partial x^j}\Big(X^h\big(\psi(t,p)\big)\Big)v^j\frac{\partial}{\partial y^h}\\
&=X^h\big(\psi(t,p)\big)\frac{\partial}{\partial x^h}+\frac{\partial X^h}{\partial x^l}\big(\psi(t,p)\big)\frac{\partial \psi^l}{\partial x^j}(t,p)v^j\frac{\partial}{\partial y^h}\\
&=X^c\big(\tilde \psi(t,p)\big).
\end{align*}
As $\frac{\partial \psi^l}{\partial x^j}(0,p)=\delta^l_j$ for all $p\in \tilde M$, where $\delta^l_j$ are the Kronecker symbols, and $\psi$ is smooth, we have that for any $\bar p\in \tilde M$ and any $\epsilon>0$ there exists an interval $I_{\bar p}$, centered at $0$, and a neighborhood $U$ of $\bar p$ in $\tilde M$ such that $\psi$ is well defined in $I_{\bar p}\times U$ and
\[ \left|\frac{\partial \psi^l}{\partial x^j}(t,p)-\delta^l_j\right|<\epsilon,\]
for all $(t,p)\in I_{\bar p}\times U$ and each $l,j\in\{0,\ldots, n\}$. Hence, for any $u=(u^0,\ldots, u^n)\in \ensuremath{\mathbb R}\xspace^n$ such that $|u|=1$ we have
\[ \left|\frac{\partial \psi^l}{\partial x^j}(t,p)u^j-\delta^l_ju^j\right|<(n+1)\epsilon,\]
for each $l\in\{0,\ldots, n\}$.
Since $A$ is open and $A_p$ is a cone for all $p\in \tilde M$, we conclude that the vector $\left(\frac{\partial \psi^0}{\partial x^j}(t,p)v^j, \ldots, \frac{\partial \psi^n}{\partial x^j}(t,p)v^j\right)\in A_{\psi(t,p)}$, for all $(t,p)\in I_{\bar p}\times U$, provided that $(v^0,\ldots, v^n)\in A_p$. Thus the flow of $X^c$ is well defined on $I_{\bar p}\times TU$ and $\tilde \psi \big(I_{\bar p}\times (TU\cap A)\big)\subset A$.
\end{proof}
From Proposition~\ref{Linvariant} and Lemma~\ref{Ainvariant} it follows that $L$ is invariant under the flow of $K^c$. In fact, $0=\big(K^c(L)\big)(\tilde \psi_s(v))=\frac{d}{dt}L(\tilde \psi_{s+t} (v))|_{t=0}$, for all $v\in A$ and $s\in I_{\pi(v)}$, hence $s\in I_{\pi(v)}\mapsto L(\tilde \psi_s(v))$ is constant. From this observation
we get that Killing vector fields are also the infinitesimal generators of local $\tilde g$-isometries:
\begin{prop}
Let $(\tilde M, L, A)$ be a Finsler spacetime, $K$ be a smooth vector field on $\tilde M$ and let us denote by $\psi$ the flow of $K$. Then $K$ is a Killing vector field if and only if
for each $v\in A$ and for all $v_1,v_2\in T_{\pi(v)}\tilde M$, we have
\begin{equation}\label{isometry}
\tilde g_{d\psi_{t}(v)}\big(d \psi_{ t}(v_1),d \psi_{t}(v_2)\big)= \tilde g_{v}(v_1,v_2),
\end{equation}
for all $t\in I_p$, where $I_p\subset\ensuremath{\mathbb R}\xspace$ is an interval containing $0$ such that the stages $\psi_{t}$ are well defined in a neighbourhood $U\subset \tilde M$ of $p=\pi(v)$ and $d\psi_t(v)\in A$, for each $t\in I_p$. \end{prop}
\begin{proof}
Let $v\in A$ and $p=\pi(v)$. From Lemma~\ref{Ainvariant}, the flow $\tilde \psi$ of $K^c$ is well defined in $I_p\times TU$, for an interval $I_p$ containing $0$, ($\tilde \psi(0,v)=v$), and a neighborhood $U$ of $p$ in $\tilde M$ and, moreover, $\tilde \psi\big(I_p\times(TU\cap A)\big)\subset A$.
If \eqref{isometry} holds, then in particular $L(d\psi_t(v))=\tilde g_{d\psi_{t}(v)}\big(d \psi_{ t}(v),d \psi_{t}(v)\big)=\tilde g_v(v,v)=L(v)$, for all $t\in I_p$. Hence $0=\frac{d}{dt}L(d\psi_t (v))|_{t=0}=\big(K^c(L)\big)(v)$ and we conclude using Proposition~\ref{Linvariant}. The converse follows by observing that, since $L$ is invariant under the flow of $K^c$,
\baln
\lefteqn{\tilde g_{d\psi_{t}(v)}\big(d \psi_{ t}(v_1),d \psi_{t}(v_2)\big)}&\\
&=\frac 1 2 \frac{ \partial^{2}L}{\partial s_1\partial s_2}\big(d\psi_{t}(v)+s_1d\psi_{t}(v_1)+s_2d\psi_{t}(v_2)\big)|_{(s_1,s_2)=(0,0)}\\&=
\frac 1 2 \frac{ \partial^{2}L}{\partial s_1\partial s_2}\big(d\psi_{t}(v+s_1v_1+s_2v_2)\big)|_{(s_1,s_2)=(0,0)}\\
&=\frac 1 2 \frac{ \partial^{2}L}{\partial s_1\partial s_2}(v+s_1v_1+s_2v_2)|_{(s_1,s_2)=(0,0)}=\tilde g_v(v_1,v_2).
\ealn
\end{proof}
\section{Stationary splitting Finsler spacetimes}
As in the Lorentzian setting, we say that a Finsler spacetime $(\tilde M,L, A)$ is \emph{stationary} if it admits a timelike Killing vector field. Here {\em timelike} means that $L(K_p)<0$ for all $p\in\tilde M$.\footnote{\label{formal} Analogously, a vector $v\in T\tilde M$ is said to be lightlike (resp. spacelike; causal) if $L(v)=0$ (resp. either $L(v)>0$ or $v=0$; $L(v)\leq 0$); observe, however, that, being $\tilde g$ defined only on $A$, $L(v)=\tilde g_v(v,v)$ holds only for vectors $v\in A$, so whenever $v\not \in \bar A$ this causal character is purely formal, and it is in no way related to the generalised metric $\tilde g$.}
A particular type of stationary Lorentzian manifolds (called {\em standard stationary}) can be obtained starting from a product manifold $\tilde M=\ensuremath{\mathbb R}\xspace\times M$, a Riemannian metric $g$, a one-form $\omega$ and a positive function $\Lambda$ on $M$, by considering the Lorentzian metric:
\begin{equation}
\tilde g=-\Lambda dt^2+\omega\otimes d t+d t\otimes\omega+g.\label{sss}
\end{equation}
It is well known (see e.g. \cite[Appendix C]{GiaPic99}) that any stationary Lorentzian spacetime is locally isometric to a standard one.
Looking at the quadratic form associated to the Lorentzian metric \eqref{sss} with the aim of introducing a Finslerian analogue, we are led to the Lagrangian $L:T\tilde M\to \ensuremath{\mathbb R}\xspace$,
\begin{equation}\label{stationary}
L(\tau,v)=-\Lambda\tau^2+2B(v)\tau+F^2 (v),
\end{equation}
where $\Lambda$ is a positive function on $M$, $B\colon TM\to \ensuremath{\mathbb R}\xspace$ is a fiberwise positively homogeneous of degree $1$ Lagrangian which is at least $C^3$ on $TM\setminus 0$ and $F\colon TM\to [0,+\infty)$, $F\in C^0(TM)\cap C^3(TM\setminus 0)$, is a classical Finsler metric on $M$, i.e., it is fiberwise positively homogeneous of degree $1$ and
\[g_v(u,u):=\frac 1 2\frac{\partial^2 F^2}{\partial s_1\partial s_2} (v+s_1u+s_2u)|_{(s_1,s_2)=(0,0)}>0\]
for all $v\in TM\setminus 0$ and all $u\in T_{\pi_M(v)}M$, where $\pi_M$ is the canonical projection $\pi_M:TM \to M$.
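As a consistency check (using only the data already introduced): if $B$ reduces to a one-form $\omega$ on $M$ and $F^2(v)=g(v,v)$ for a Riemannian metric $g$, then \eqref{stationary} is exactly the quadratic form of the standard stationary Lorentzian metric \eqref{sss}:

```latex
% With B = \omega (a one-form) and F^2(v) = g(v,v), g Riemannian,
% \eqref{stationary} evaluated on (\tau,v) reads
\[
L(\tau,v) \;=\; -\Lambda\,\tau^{2} \;+\; 2\,\omega(v)\,\tau \;+\; g(v,v)
\;=\; \tilde g\big((\tau,v),(\tau,v)\big),
\]
% with \tilde g as in \eqref{sss}: the -\Lambda dt^2 term contributes
% -\Lambda\tau^2, the cross terms \omega\otimes dt + dt\otimes\omega
% contribute 2\omega(v)\tau, and g contributes g(v,v).
```

Thus \eqref{stationary} genuinely generalises the standard stationary class, the anisotropy entering through $F$ and $B$.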
Let us now introduce some notation.
We will denote coordinates $(t,x^1, \ldots, x^n)$, in $\tilde M=\ensuremath{\mathbb R}\xspace\times M$ by $z$, i.e. $z=(t,x^1, \ldots, x^n)$. Natural coordinates in $T\tilde M$,
will be then denoted by $(z,\dot z)$, that is $(z,\dot z)=(t,x^1, \ldots, x^n,\tau, y^1,\ldots, y^n)$.
For a Lagrangian $A:TM\to\ensuremath{\mathbb R}\xspace$ and $v\in TM\setminus 0$, let us denote by $(\partial_yA)_v$ and $(\partial^2_{yy}A)_v$, respectively, the fiberwise differential and Hessian of $A$ at $v$, i.e. for all $u,u_1,u_2\in TM$
\baln
&(\partial_yA)_v(u):=\frac{d}{ ds}\big(A(v+su)\big)|_{s=0},\\ &(\partial^2_{yy}A)_v(u_1,u_2):= \frac{\partial^2 }{\partial s_1\partial s_2}\big(A(v+s_1u_1+s_2u_2)\big)|_{(s_1,s_2)=(0,0)}.\ealn
These are respectively sections of the pull-back bundles $\pi^{*}_M(T^{*}M)$ and $\pi^{*}_{M}(T^{*}M)\otimes\pi^{*}_{M}(T^{*}M)$ over $TM\setminus 0$.
The analogous fiberwise derivatives, for a Lagrangian $L\colon T\tilde M\to\ensuremath{\mathbb R}\xspace$ on $\tilde M$, are denoted by $(\partial_{\dot z}L)_{w}$ and $(\partial^2_{\dot z\dot z}L)_{w}$, $w\in T\tilde M$,
and when $L$ is a Lorentz-Finsler function on $\tilde M$, $\frac 12 (\partial^2_{\dot z\dot z}L)_{w}$ is the generalised metric tensor $\tilde g_w$ already introduced in \eqref{tildeg}.
Let us denote by $\mathcal T$ the trivial line subbundle of $T\tilde M$ defined by the vector field $\partial_t$. For $p\in \tilde M$, let us denote by $T^+_p\tilde M$ and $T^-_p\tilde M$ respectively the open half-spaces of $T_p\tilde M$ given by $T^+_p\tilde M:=\{(\tau, v)\in T_p\tilde M: \tau >0\}$ and $T^-_p\tilde M:=\{(\tau, v)\in T_p\tilde M: \tau <0\}$; moreover let $\bar T^+_p\tilde M$ and $\bar T^-_p\tilde M$ be their closures in $T_p\tilde M$. Let us then denote by
$T^+\tilde M\setminus \mathcal T$ (resp. $T^-\tilde M\setminus \mathcal T$) the open cone subset of $T\tilde M$ given by $T^+\tilde M\setminus \mathcal T:=\cup_{p\in \tilde M}T^+_p\tilde M\setminus \mathcal T_p$ (resp. $T^-\tilde M\setminus \mathcal T:=\cup_{p\in \tilde M}T^-_p\tilde M\setminus \mathcal T_p$) and by
$\bar T^+\tilde M\setminus \mathcal T$ (resp. $\bar T^-\tilde M\setminus \mathcal T$) the cone subset defined by $\bar T^+\tilde M\setminus \mathcal T:=\cup_{p\in \tilde M}\bar T^+_p\tilde M\setminus \mathcal T_p$ (resp. $\bar T^-\tilde M\setminus \mathcal T:=\cup_{p\in \tilde M}\bar T^-_p\tilde M\setminus \mathcal T_p$). Finally, let us denote by $T\tilde M\setminus \mathcal T$ the open cone subset of $T\tilde M$ defined as $T\tilde M\setminus \mathcal T:=\cup_{p\in \tilde M}T_p\tilde M\setminus \mathcal T_p$.
\begin{rmk} Notice that $L$ is continuous on $T\tilde M$ and at least $C^3$ on $T\tilde M\setminus\mathcal T$. Since, in general, $B$ is not differentiable at the zero section of $TM$, $L$ is not differentiable at vectors $w\in \mathcal T$. An exception occurs when $B$ reduces to a one-form on $M$ (being, then, differentiable at $0$ too), so that $L$ is $C^1$ on $T\tilde M\setminus 0$.
\end{rmk}
\begin{prop}\label{linearity}
Let $L\colon T\tilde M\to \ensuremath{\mathbb R}\xspace$ be defined as in \eqref{stationary} and let $w\in \mathcal T_{(t,x)}$, $(t,x)\in\ensuremath{\mathbb R}\xspace\times M$. Then $L$ admits fiberwise derivative $(\partial_{\dot z} L)_{w}$ if and only if the map $B_x:T_xM\to \ensuremath{\mathbb R}\xspace$, $B_x(v):=B(v)$, is odd. Moreover, in this case, $(\partial_{\dot z} L)_{w}$ is a linear map on $T_{(t,x)}\tilde M\equiv\ensuremath{\mathbb R}\xspace\times T_xM$ if and only if $B_x$ is linear.
\end{prop}
\begin{proof}
Let $w=(\tau,0)\in \ensuremath{\mathbb R}\xspace\times T_xM$ and let us compute $(\partial_{\dot z} L)_{w}$. For all $u\equiv(\rho,v)\in \ensuremath{\mathbb R}\xspace\times T_xM$ we have
\[
L(w+su)=-\Lambda(x)(\tau+s\rho)^2+2B_x(sv)(\tau+s\rho)+F^2(sv),
\]
hence the right and left derivatives at $s=0$ of $L(w+su)$ exist and are respectively equal to $-2\Lambda(x)\tau \rho +2B_x(v)\tau$ and $-2\Lambda(x)\tau \rho -2B_x(-v)\tau$;
thus they are equal if and only if $B_x(v)=-B_x(-v)$. In this case, since $B_x$ is positively homogeneous of degree $1$, $(\partial_{\dot z} L)_{w}(u)=-2\Lambda(x)\tau \rho +2B_x(v)\tau$ is linear in $u\equiv(\rho,v)$ if and only if $B_x$ is linear.
\end{proof}
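\begin{rmk}
For instance, on $M=\ensuremath{\mathbb R}\xspace$ with $\Lambda\equiv 1$, $F(v)=|v|$ and $B(v)=|v|$, we get $L(\tau,v)=-\tau^2+2|v|\tau+v^2$, and $B_x$ is not odd: accordingly, $v\mapsto L(\tau,v)$ is not differentiable at $v=0$ whenever $\tau\neq 0$. Choosing instead $B(v)=bv$, $b\in\ensuremath{\mathbb R}\xspace$, the fiberwise derivative $(\partial_{\dot z} L)_{(\tau,0)}$ exists and is linear, in accordance with Proposition~\ref{linearity}.
\end{rmk}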
We now characterize when, for $L$ as in \eqref{stationary}, $(\partial^2_{\dot z\dot z}L)_{w}$ has index $1$ for all $w\in T^{\pm}\tilde M\setminus\mathcal T$, and when it has index $1$ for all $w\in T\tilde M\setminus\mathcal T$.
\begin{prop}\label{index1}
Let $L\colon T\tilde M\to \ensuremath{\mathbb R}\xspace$ be defined as in \eqref{stationary}. Then $(\partial^2_{\dot z\dot z}L)_{w}$, $w=(\tau, v)$, has index $1$ at every $w\in \bar T^+\tilde M\setminus \mathcal T$ (resp. $w\in \bar T^-\tilde M\setminus \mathcal T$)
if $(\partial^2_{yy}B)_v$ is positive semi-definite (resp. negative semi-definite). Conversely, if there exists $\bar\tau>0$ (resp. $\bar \tau <0$) such that $(\partial^2_{\dot z\dot z}L)_{(\tau,v)}$ has index $1$ for all $v\in TM\setminus 0$ and all $\tau>\bar \tau$ (resp. $\tau<\bar \tau $), then $(\partial^2_{yy}B)_v$ is positive semi-definite (resp. negative semi-definite), for all $v\in TM\setminus 0$.
\end{prop}
\begin{proof}
Let us first prove the sufficiency part. For $(\tau,v)\in \bar T^+\tilde M\setminus \mathcal T$, one half of the fiberwise Hessian of $L$ is given by
\begin{equation}\label{tildegstat}
\frac 1 2(\partial^2_{\dot z\dot z}L)_{(\tau, v)}=-\Lambda d t^2 +(\partial_y B)_v \otimes d t +d t\otimes (\partial_y B)_v+\tau (\partial ^2_{yy}B)_v+g_v.
\end{equation}
First notice that, if $(\partial_y B)_v=0$ then,
being $\Lambda$ positive, $\tau$ non-negative and $(\partial^2_{yy}B)_v$ positive semi-definite, we immediately get that $(\partial^2_{\dot z\dot z}L)_{(\tau, v)}$ has index $1$. Assume now that $(\partial_y B)_v\neq 0$ and consider the vector $v_1\in T_{\pi_M(v)}M$ representing $(\partial_y B)_v$ w.r.t. the scalar product $\langle \cdot, \cdot\rangle_{v,\tau}$ defined by $g_v+\tau(\partial^2_{yy}B)_v$. Now take a $\langle \cdot, \cdot\rangle_{v,\tau}$-orthonormal basis $v_2,\ldots,v_{n}$ of $\mathrm{Ker}((\partial_y B)_v)$. The matrix of $\frac 1 2(\partial^2_{\dot z\dot z}L)_{(\tau, v)}$ relative to the basis $e,v_1,v_2,\ldots,v_{n}$ of $T_{\pi((\tau,v))}\tilde M$, where $e=\frac{\partial}{\partial t}$, is given by
\[\left(\begin{array}{ccccc}
-\Lambda&(\partial_yB)_v(v_1)&0&\ldots&0\\ (\partial_yB)_v(v_1)&\langle v_1, v_1\rangle_{v,\tau}&0&\ldots&0\\ 0&0&&&\\
\vdots&\vdots&&I&\vspace{6pt}\\
0&0&&&
\end{array}
\right)
\]
Since $-\Lambda \langle v_1, v_1\rangle_{v,\tau}-\big((\partial_yB)_v(v_1)\big)^2
<0$, we conclude that $(\partial^2_{\dot z\dot z}L)_{(\tau, v)}$ has index $1$. Conversely, let us assume that $(\partial^2_{\dot z\dot z}L)_{(\tau, v)}$ has index $1$, for any $(\tau,v)\in T^+\tilde M\setminus \mathcal T$ with $\tau>\bar{\tau}$. For a fixed $(\tau,v)\in T^+\tilde M\setminus \mathcal T$, with $\tau>\bar \tau$, consider the Lorentzian metric $\tilde g_{(\tau,v)}:=\frac 1 2(\partial^2_{\dot z\dot z}L)_{(\tau, v)}$ and the $\tilde g_{(\tau,v)}$-orthogonal complement, $\mathcal D_{\pi((\tau,v))}$, in $T_{\pi((\tau,v))}\tilde M$ of the one-dimensional subspace generated by the vector $(1,0)$. This is given by the vectors $(\tau_1,v_1)$ with $\tau_1=\frac{(\partial_{y}B)_v(v_1)}{\Lambda}$, for each $v_1\in TM$. Thus
\begin{multline}
\tilde g_{(\tau,v)}\left(\Big(\frac{(\partial_{y}B)_v(v_1)}{\Lambda}, v_1\Big), \Big(\frac{(\partial_{y}B)_v(v_1)}{\Lambda}, v_1\Big)\right)\\=\frac{\big((\partial_{y}B)_v(v_1)\big)^2}{\Lambda}+\tau (\partial ^2_{yy}B)_v(v_1,v_1)+g_v(v_1,v_1)>0,\label{onorto}
\end{multline}
for all $v_1\in TM\setminus 0$. If there exists $v_1\in TM\setminus 0$ such that $(\partial ^2_{yy}B)_v(v_1,v_1)<0$ then, since $\mathcal D_{\pi((\tau,v))}$ is independent of $\tau$ for any fixed $v\in TM$, the left-hand side of \eqref{onorto} becomes negative for $\tau$ big enough, in contradiction with \eqref{onorto}.
The analogous statement involving $(\partial ^2_{yy}B)_v$ negative semi-definite holds with obvious modifications.
\end{proof}
\begin{cor}\label{oneform}
Let $L\colon T\tilde M\to \ensuremath{\mathbb R}\xspace$ be defined as in \eqref{stationary} and let $x\in M$. Then $(\partial^2_{\dot z\dot z}L)_{(\tau,v)}$ has index $1$ for all $v\in T_xM\setminus\{0\}$ and all $\tau \in \ensuremath{\mathbb R}\xspace$ if and only if $B(x,\cdot)$ is a linear form on $T_xM$.
\end{cor}
\begin{proof}
It is trivial to check that if $B(x,\cdot)$ is a linear form on $T_x M$ then $(\partial^2_{\dot z\dot z}L)_{(\tau,v)}$ has index $1$ for all $\tau\in\ensuremath{\mathbb R}\xspace$ and all $v\in T_x M\setminus \{0\}$. Conversely, from Proposition~\ref{index1}, $(\partial ^2_{yy}B)_v$ must be both positive and negative semi-definite, hence it vanishes on $T_xM\setminus 0$; being $B$ fiberwise positively homogeneous of degree $1$, it must then be linear on $T_xM$.
\end{proof}
As the fiberwise Hessian of a classical Finsler metric is positive semi-definite, from Proposition~\ref{index1} we immediately get:
\begin{cor}
Let $L\colon T\tilde M\to \ensuremath{\mathbb R}\xspace$ be defined as in \eqref{stationary} with $B=\omega+F_1$ (resp. $B=\omega-F_1$), where $\omega$ and $F_1$ are, respectively, a one-form and a Finsler metric on $M$. Then $(\partial^2_{\dot z\dot z}L)_{w}$ has index $1$ for all $w\in T^+\tilde M\setminus \mathcal T$ (resp. $w\in T^-\tilde M\setminus \mathcal T$).
\end{cor}
\begin{rmk} \label{whole}
We have found that if the conditions of Proposition~\ref{index1} and Corollary~\ref{oneform} hold on the whole $T^+\tilde M\setminus \mathcal T$ (resp. $T^-\tilde M\setminus \mathcal T$; $T\tilde M\setminus \mathcal T$) then $(\mathbb R\times M,L, T^+\tilde M\setminus \mathcal T)$ (resp. $(\mathbb R\times M,L, T^-\tilde M\setminus \mathcal T)$; $(\mathbb R\times M, L, T\tilde M\setminus \mathcal T)$) is a Finsler spacetime.
Notice, indeed, that the role of the vector field $Y$, such that $Y_p\in \bar A_p$ for all $p\in \mathbb R\times M$, can be taken by $\partial_t$ in the first and in the last case and by $-\partial_t$ in the second one.
\end{rmk}
\begin{rmk}
We could also consider the case when the assumptions on $B$ hold only pointwise, all three cases being possible at different points $x\in M$. Anyway, take into account that we would not get a Finsler spacetime, due to the impossibility of fulfilling the assumption about the existence of a smooth vector field $Y$ such that $Y_p\in \bar{A_p}$ and $L(p,Y_p)<0$, for all $p\in \tilde M$. On the other hand, the case when $(\partial^2_{yy} B)_v$ is either positive or negative semi-definite for all $v\in TM$ includes also the possibility that $B(x,\cdot)$ is a linear form on $T_xM$ for some $x\in M$.
\end{rmk}
Henceforth, we will denote by $(\tilde M, L)$, $\tilde M=\mathbb R\times M$, each of the Finsler spacetimes $(\tilde M,L, T^+\tilde M\setminus \mathcal T)$, $(\tilde M,L, T^-\tilde M\setminus \mathcal T)$, $(\tilde M, L, T\tilde M\setminus \mathcal T)$, associated to $L$ given in \eqref{stationary}, implicitly assuming that if $\partial ^2_{yy}B$ is positive semi-definite (resp. $\partial ^2_{yy}B$ is negative semi-definite; $B$ is a one-form on $M$) then $\tilde g$ is defined and has index $1$ on the cone subset $A$ given by $T^+\tilde M\setminus \mathcal T$ (resp. $T^-\tilde M\setminus \mathcal T$; $T\tilde M\setminus \mathcal T$).
\begin{rmk}\label{futurepastpoint}
In analogy with the Lorentzian case, we say that a causal vector $w$ (recall footnote~\ref{formal}), with $w\in \bar A\setminus 0$ is {\em future-pointing} (resp. {\em past-pointing}) if $g_w(w,Y)< 0$ (resp. $g_w(w,Y)> 0$), whenever $w\in A$, or $w$ is causal and belongs to the closure of the set of future-pointing vectors in $A$. In the case of a stationary splitting Finsler spacetime, taking into account that when $A=T^-\tilde M\setminus \mathcal T$ we pick $-\partial_t$ as the vector field $Y$, we have that a causal vector $(\tau,v)$ of $(\tilde M, L, T^+\tilde M\setminus \mathcal T)$ (resp. of $(\tilde M, L, T^-\tilde M\setminus \mathcal T)$; $(\tilde M, L, T\tilde M\setminus \mathcal T)$)
with $(\tau, v)\in T^+\tilde M\setminus \mathcal T$ (resp. $(\tau, v)\in T^-\tilde M\setminus \mathcal T$; $(\tau, v)\in T\tilde M\setminus \mathcal T$ ) is {\em future-pointing} if $-\Lambda \tau +(\partial_y B)_v (v)<0$ (resp. $-\Lambda \tau +(\partial_y B)_v (v)>0$; $-\Lambda \tau +B(v)<0 $). By homogeneity, the first (resp. second) inequality becomes $-\Lambda \tau +B(v)<0$ (resp. $-\Lambda \tau +B(v)>0$).
Since $B(0)=0$, the vectors of the form $(\tau,0)$, $\tau>0$ (resp. $\tau<0$), are then also timelike and future-pointing (resp. past-pointing). We will see in Remark~\ref{futureinside} that the future-pointing causal vectors of $(\tilde M, L, T^+\tilde M\setminus \mathcal T)$ (resp. $(\tilde M, L, T^-\tilde M\setminus \mathcal T)$) at $p\in\tilde M$ are all and only the causal vectors in $\bar T^+\tilde M:=\cup_{p\in \tilde M}\bar T^+_p\tilde M$ (resp. $\bar T^-\tilde M:=\cup_{p\in \tilde M}\bar T^-_p\tilde M$).
\end{rmk}
\begin{prop}\label{desudetkilling}
Assume that $(\tilde M, L)$, with $\tilde M=\mathbb R\times M$ and $L$ as in \eqref{stationary}, is a Finsler spacetime. Then $\partial_t$ is timelike and Killing.
\end{prop}
\begin{proof}
We consider the fundamental tensor $\tilde g$ of $L$ in \eqref{tildegstat}. Let us prove that $\mathcal L_{\partial_t}\tilde g(\widehat{\partial_{z^i}}, \widehat{\partial_{z^j}})=0$ for all $i,j\in\{0, \ldots, n\}$, where $z^0=t$ and $z^i=x^i$ for all $i\in\{1,\ldots,n\}$. From \eqref{jjj}, it is enough to prove that $(\partial_t)^c (\tilde g_{ij})=0$, for all $i,j\in\{0, \ldots, n\}$.
From \eqref{completelift}, we have
$(\partial_t)^c (\tilde g_{ij})=\partial_t\tilde g_{ij}=0$.
\end{proof}
Proposition~\ref{index1}, Corollary~\ref{oneform}, Remark~\ref{whole} and Proposition~\ref{desudetkilling} justify the following definition:
\begin{dfn}
Let $\tilde M=\ensuremath{\mathbb R}\xspace\times M$ and let $L\colon T\tilde M\to \ensuremath{\mathbb R}\xspace$ be defined as in \eqref{stationary}, with $\partial^2_{yy} B$ positive semi-definite on $TM\setminus 0$ (resp. $\partial^2_{yy} B$ negative semi-definite on $TM\setminus 0$). Then we call $(\tilde M, L)$ a \emph{stationary splitting Finsler spacetime}.
\end{dfn}
\section{On the local structure of stationary Finsler spacetimes}
An important role in several geometric properties of stationary Lorentzian manifolds is played by the distribution $\mathcal D$ orthogonal to the Killing vector field $K$. For example, since $K$ is timelike, $\mathcal D$ is spacelike and then, in a standard stationary Lorentzian manifold $(\ensuremath{\mathbb R}\xspace\times M, \tilde g)$, $\tilde g$ given in \eqref{sss}, it is the horizontal distribution of the semi-Riemannian submersion $\pi\colon \ensuremath{\mathbb R}\xspace\times M\to M$, where the Riemannian metric on $M$ is equal to $g+\frac{\omega}{\Lambda}\otimes \omega$ (see, e.g., \cite{CaJaPi10}). Moreover, if $\mathcal D$ is integrable then a stationary Lorentzian manifold $(\tilde M,\tilde g)$ is said to be {\em static} (see \cite[Def. 12.35]{O'Neil}) and it is locally isometric to a warped product $(a,b)\times S$, endowed with the metric $-\Lambda d t^2 + g$, where $S$ is an integral manifold of the distribution, $(a,b)\subset \ensuremath{\mathbb R}\xspace$, and $\phi\colon (a,b)\times S\to \tilde M$ is a local isometry such that $\phi_*(\partial_t)=K$ and $g(\phi^{*}u,\phi^{*}v)=\tilde g(u,v)$, for all $u,v\in \mathcal D$
(see \cite[Prop. 12.38]{O'Neil}).
A natural generalisation of the orthogonal distribution to $K$ in the Finsler setting is the distribution in $T\tilde M$ defined as $\ker(\partial_{\dot z} L(K))$ where $\partial_{\dot z}L(K)$ denotes the one-form on $\tilde M$ given by $\frac{\partial L}{\partial \dot z^i}(K)dz^i$.
\begin{rmk} \label{gooddistro}
In order to get a well defined distribution $\mathrm{Ker}\big(\partial_{\dot z} L(K)\big)$, we need that $L$ is differentiable at $K_z$ for all $z\in \tilde M$. Thus, we assume that $L$ is $C^1$ on $T\tilde M$ whenever we need to consider such a distribution, as in Theorem~\ref{charstat}. Recall that for a stationary splitting Finsler spacetime $(\ensuremath{\mathbb R}\xspace\times M, L)$, this assumption implies that $B$ reduces to a one-form on $M$ (Proposition~\ref{linearity}) that we will denote, in this section, by $\omega$. Recall that from Corollary~\ref{oneform}, $\tilde g$ is then defined on $T\tilde M\setminus \mathcal T$.
\end{rmk}
Following \cite{CapSta16}, we introduce the next two definitions:
\begin{dfn}
We say that a Finsler spacetime $(\tilde M, L, A)$ is {\em static} if there exists a timelike Killing vector field $K$ such that the distribution of hyperplanes $\ker(\partial_{\dot z} L(K))$ is integrable.
\end{dfn}
\begin{dfn}
We say that a Finsler spacetime $(\tilde M, L, T\tilde M\setminus\mathcal T)$, where $\mathcal T$ is a line subbundle of $T\tilde M$, is {\em standard static} if there exist a smooth non-vanishing global section $K$ of $\mathcal T$, a Finsler manifold $(M,F)$, a positive function $\Lambda$ on $M$ and a smooth diffeomorphism $f\colon \ensuremath{\mathbb R}\xspace\times M\to \tilde M$, $f=f(t,x)$, such that $\partial_t=f^*(K)$ and $L(f_*(\tau, v))=-\Lambda \tau^2+ F^2(v)$, for all $(\tau, v)\in T(\ensuremath{\mathbb R}\xspace\times M)$.
\end{dfn}
In relation to the local structure of a stationary Finsler spacetime, we introduce also the following definition:
\begin{dfn}\label{LSS}
A stationary Finsler spacetime $(\tilde M,L, T\tilde M\setminus\mathcal T )$, where $\mathcal T$ is a line subbundle of $T\tilde M$, with timelike Killing field $K$, $K_z\in\mathcal T_z$ for all $z\in \tilde M$, is {\em locally a standard stationary splitting} if for any point $z \in \tilde M$ there exist a neighborhood $U_z\subset \tilde M$ of $z$ and a diffeomorphism $\phi:I_{z}\times S_z\rightarrow U_z$, where $I_z=(-\varepsilon_z,\varepsilon_z)$ is an interval in $\mathbb{R}$ and $S_z$ a manifold, such that, denoting by $t$ the natural coordinate of $I_{z}$, $\phi_{*}(\partial_t)=K|_{U_z}$,
and, for all $(\tau,v)\in T(I_{z}\times S_{z})$, $L\circ\phi_*((\tau,v))=-\Lambda\tau^2+2\omega(v)\tau+F^{2}(v)$, where $\Lambda$, $\omega$ and $F$ are respectively a positive function, a one-form and a Finsler metric on $S_z$. Moreover, we say that $(\tilde M,L)$ is {\em locally standard static} if for any $z\in \tilde M$ there exists a map $\phi$ as above such that $\omega =0$.
\end{dfn}
\begin{rmk}
We observe that, although $L$ might not be twice differentiable along vectors $w\in\mathcal{T}$, its fiberwise second derivative at $w\in \mathcal T\setminus 0$, evaluated at any couple of vectors $u_1,u_2\in \mathcal T_{\pi(w)}$, $(\partial^{2}_{\dot z\dot z} L)_{w}(u_1,u_2):=\frac{\partial^2}{\partial s_1\partial s_2}L(w+s_1u_1+s_2u_2)|_{(s_1,s_2)=(0,0)}$, does exist. Indeed, let $\lambda_1,\lambda_2\in\ensuremath{\mathbb R}\xspace$ be such that $u_1=\lambda_1 w$, $u_2=\lambda_2w$; then, by homogeneity, we have
\begin{equation*}
L(w+s_1u_1+s_2u_2)=L((1+s_1\lambda_1+s_2\lambda_2)w)=(1+s_1\lambda_1+s_2\lambda_2)^2L(w),
\end{equation*}
for small $s_1,s_2\in \mathbb{R}$. Thus
$(\partial^{2}_{\dot z\dot z } L)_w(u_1,u_2)=2\lambda_1\lambda_2L(w)$.
This fact will be used in the following propositions, where we aim to characterize stationary and static Finsler spacetimes which are locally standard.
\end{rmk}
Recalling Remark~\ref{gooddistro}, we assume that $L\in C^1(T\tilde M)$ and we denote by $\mathcal{D}$ the distribution of hyperplanes in $T\tilde M$ given by $\ker(\partial_{\dot z} L(K))$.
\begin{rmk}\label{transv}
If $L$ is differentiable on $T\tilde M$ and it is fiberwise positively homogeneous of degree $2$, we have that $(\partial_{\dot z}L)_{K_z}(K_z)=2L(K_z)<0$ hence $(\partial_{\dot z}L)_{K_z}\neq 0$, $K_z\not\in\mathcal D_z$ and $T_{z}\tilde M=\mathcal{D}_z\oplus [K_z]$, for all $z\in \tilde M$.
\end{rmk}
Let us define the map
\[\tilde B\colon T\tilde M\to \ensuremath{\mathbb R}\xspace,\qquad \tilde B(w):= \frac 1 2(\partial_{\dot z}L)_w(K_{\pi(w)}).\]
\begin{lem}\label{tildeB}
Let $(\tilde M,L, A)$ be a stationary Finsler spacetime with timelike Killing vector field $K$.
Assume that $L\in C^1(T\tilde M)$, $L(K)=L(-K)$ and
\[L(w\pm K_{\pi(w)})=L(w)+L(K_{\pi(w)}),\]
for all $w\in \mathcal{D}$. Then
\[\tilde B(w)=\frac 1 2 \big (L(w+K_{\pi(w)})-L(w)-L(K_{\pi(w)})\big),\]
for all $w\in T\tilde M$. Moreover, $\tilde B$ is fiberwise linear.
\end{lem}
\begin{proof}
For each $w\in T\tilde M$, let $w_{\mathcal D}\in \mathcal D$ and $\lambda_w\in\ensuremath{\mathbb R}\xspace$ be such that $w=w_{\mathcal D}+\lambda_w K_{\pi(w)}$ (recall Remark~\ref{transv}). Moreover, let $\ensuremath{\epsilon}\xspace(x)=\mathrm{sign}(x)$, if $x\in \ensuremath{\mathbb R}\xspace\setminus\{0\}$, and $\ensuremath{\epsilon}\xspace(0)=0$. By definition and our assumptions, we obtain
\bal
\tilde B(w)&=\frac 1 2\frac{d }{d s}L(w+sK_{\pi(w)})|_{s=0}=\frac 1 2\frac{d }{d s}L(w_{\mathcal D}+\lambda_wK_{\pi(w)}+sK_{\pi(w)})|_{s=0}\nonumber\\
&=\frac 1 2\frac{d }{d s}\left(L(w_{\mathcal D})+(\lambda_w+s)^2L\big(\ensuremath{\epsilon}\xspace(\lambda_w+s)K_{\pi(w)}\big)\right)\big|_{s=0}\nonumber\\
&=\frac 1 2\frac{d}{d s}\left(L(w_{\mathcal D})+(\lambda_w+s)^2L\big(K_{\pi(w)}\big)\right)\big|_{s=0}\nonumber\\
&=\lambda_wL(K_{\pi(w)}).\label{linearB}
\eal
On the other hand,
\baln
\lefteqn{\frac 12 \big(L(w+K_{\pi(w)})-L(w)-L(K_{\pi(w)})\big)}&\\
&=\frac 12\big( L(w_{\mathcal D}+\lambda_wK_{\pi(w)}+K_{\pi(w)})-L(w_{\mathcal D}+\lambda_wK_{\pi(w)})-L(K_{\pi(w)})\big)\\
&=\frac 1 2 \big(L(w_{\mathcal D})+(\lambda_w+1)^2L(K_{\pi(w)})-L(w_{\mathcal D})-(\lambda_w)^2L(K_{\pi(w)})-L(K_{\pi(w)})\big)\\
&= \lambda_wL(K_{\pi(w)}).
\ealn
This proves the first part of the lemma.
As $\tilde B(w)=\lambda_w L(K_{\pi(w)})$ and the map $w\mapsto \lambda_w$ is linear on each tangent space, we immediately get that $\tilde B$ is fiberwise linear.
\end{proof}
\begin{thm}\label{charstat}
Let $\mathcal T$ be a line subbundle of $T\tilde M$ and $(\tilde M,L, T\tilde M\setminus\mathcal T)$ be a stationary Finsler spacetime with timelike Killing vector field $K$.
Then $(\tilde M,L,T\tilde M\setminus\mathcal T)$ is locally a standard stationary splitting
if and only if the following conditions are satisfied:
$(a)$ $L\in C^1(T\tilde M)\cap C^2(T\tilde M\setminus \mathcal T)$ and $K_z\in \mathcal T_z$ at every point $z\in \tilde M$ where $L(z,\cdot)$ is not twice differentiable on $T_z\tilde M$ (so, at these points $z$, $L(z,\cdot)$ is not the quadratic form defined by a Lorentzian metric on $T_z\tilde M$);
$(b)$ $L(K)=L(-K)$;
$(c)$ $L(w\pm K_{\pi(w)})=L(w)+L(K_{\pi(w)})$, for all $w\in \mathcal{D}$.
Furthermore, it is locally standard static if and only if $(a)$, $(b)$ and $(c)$ hold and $\mathcal D$ is integrable.
\end{thm}
\begin{proof}
($\Rightarrow$) Let $z\in \tilde M$, let $U_z\subset \tilde M$ be a neighborhood of $z$
and let
$\phi\colon I_z\times S_z\to U_z$ be a diffeomorphism such that $\phi_*(\partial_t)=K|_{U_z}$ and $L\circ\phi_*(\tau,v)=-\Lambda \tau^2+2\omega(v)\tau+F^2(v)$ (recall Definition~\ref{LSS}).
If $\bar x\in S_z$ is such that $F^2(\bar x,\cdot)$ is not the square of the norm defined by a Riemannian metric on $T_{\bar x}S_z$, $L\circ \phi_*\big((t, \bar x), (\cdot,\cdot)\big)$, $t\in I_z$, is not twice differentiable at any vector $(\tau,0)\in\ensuremath{\mathbb R}\xspace\times T_{\bar x}S_z$. As $d\phi_{(t,\bar x)}(\partial_t)\equiv d \phi_{(t,\bar x)}(1,0)=K_{\phi(t,\bar x)}$, we deduce that $K_{\phi(t,\bar x)}$ must belong to $\mathcal T_{\phi(t,\bar x)}$ for all $t\in I_z$ and this proves $(a)$. For $(b)$, let $z\in\tilde M$ and take a map $\phi$ as above (with $z=\phi(0,x)$); then
\begin{equation*}
L(K_z)=L\big(d \phi_{(0,x)}(1,0)\big)=-\Lambda(x)=L\big (d\phi_{(0,x)}(-1,0)\big)=L(-K_z).
\end{equation*}
In order to prove $(c)$, let $y\in \mathcal D_z$ and let $(\tau,v)\in \ensuremath{\mathbb R}\xspace\times T_{x}S_z$ be such that $y=d\phi_{(0,x)}(\tau,v)$. Hence
\baln
0=(\partial_{\dot z}L)_{K_z}(y) &=(\partial_{\dot z}L)_{d\phi_{(0,x)}(1,0)}(d\phi_{(0,x)}(\tau,v))\\
&=\frac{d}{d s}L\big(d\phi_{(0,x)}(1,0)+sd\phi_{(0,x)}(\tau,v)\big)|_{s=0}\\
&=\frac{d}{d s}L\big(d\phi_{(0,x)}\big(1+s\tau,sv)\big)|_{s=0}\\
&=\frac{d}{d s}\Big(-\Lambda(x)(1+s\tau)^2+2s\omega(v)(1+s\tau)+F^2(sv)\Big)\Big|_{s=0}\\
&=2\big(-\tau\Lambda(x)+\omega(v)\big);
\ealn
thus $\tau=\omega(v)/\Lambda(x)$ and
\bmln
L(y\pm K_z)=L\left(d\phi_{(0,x)}\left(\frac{\omega(v)}{\Lambda(x)}\pm 1,v\right)\right)=
\frac{\omega^2(v)}{\Lambda(x)}+F^2(v)-\Lambda(x)\\
=L\left(d\phi_{(0,x)}\left(\frac{\omega(v)}{\Lambda(x)},v\right)\right)+
L\big(d \phi_{(0,x)}(1,0)\big)=L(y)+L(K_z).
\emln
($\Leftarrow$) Let $\bar z\in \tilde M$ and $S_{\bar z}$ be a small smooth hypersurface in $\tilde M$ such that $\bar z\in S_{\bar z}$ and $T_{\bar z}S_{\bar z}= \mathcal D_{\bar z}$. Recalling Remark~\ref{transv}, we can assume that $K_x$ is transversal to $S_{\bar z}$, i.e. $T_x \tilde M=T_xS_{\bar z}\oplus [K_x]$, for all $x\in S_{\bar z}$. From $(b)$ and $(c)$ we get, for any $y, u\in T_{\bar z}S_{\bar z}$,
\bal
\tilde g_y(K_{\bar z},K_{\bar z})=\frac 1 2(\partial_{\dot z\dot z}L)_y(K_{\bar z},K_{\bar z})&=\frac 1 2\frac{\partial^2}{\partial s_1\partial s_2}L(y+(s_1+s_2)K_{\bar z})|_{(s_1,s_2)=(0,0)}\nonumber\\
&=\frac 1 2\frac{\partial^2}{\partial s_1\partial s_2}\big(L(y)+(s_1+s_2)^2L(K_{\bar z})\big)|_{(s_1,s_2)=(0,0)}\nonumber\\
&=L(K_{\bar z})<0,\label{Ktimelike}\eal
and
\bal
\tilde g_y(u,K_{\bar z})=\frac 1 2(\partial_{\dot z\dot z}L)_y(u,K_{\bar z})&=\frac 1 2\frac{\partial^2}{\partial s_1\partial s_2}L(y+s_1u+s_2K_{\bar z})|_{(s_1,s_2)=(0,0)}\nonumber\\
&=\frac{\partial^2}{\partial s_1\partial s_2}\big(L(y+s_1u)+ s_2^2L(K_{\bar z})\big)|_{(s_1,s_2)=(0,0)}=0,\label{Kortho}
\eal
that is, $K_{\bar z}$ is timelike w.r.t. the Lorentzian scalar product $\tilde g_y$ on $T_{\bar z}\tilde M$ and $T_{\bar z}S_{\bar z}$ is a spacelike hyperplane, for all $y\in T_{\bar z}S_{\bar z}$.
Let $UTS_{\bar z}$ be the unit tangent bundle of $S_{\bar z}$ (with respect to any auxiliary Riemannian metric on $\tilde M$). As $\tilde g_y$ is positively homogeneous of degree $0$ in $y$, by continuity of the map $y\in UTS_{\bar z}\mapsto \tilde g_y$, we get that (up to considering a smaller hypersurface $S_{\bar z}$) $K_x$ is timelike and $T_x S_{\bar z}$ is spacelike w.r.t. $\tilde g_y$, for each $y\in T_xS_{\bar z}$ and for any $x\in S_{\bar z}$.
Let now $w\in T_x\tilde M\setminus \mathcal T_x$, $x\in S_{\bar z}$, and $w_S\in T_{x}S_{\bar z}$, $\tau_w\in\ensuremath{\mathbb R}\xspace$ such that $w=w_S+\tau_wK_x$. Let us evaluate $\tilde g_w(u,u)$, for any $u\in T_xS_{\bar z}$. From Lemma~\ref{tildeB} we have
\baln
\tilde g_w(u,u)&=\frac{1}{2}\frac{\partial^2}{\partial s_1\partial s_2}L\big(w+(s_1+s_2)u\big)|_{(s_1,s_2)=(0,0)}\\
&=\frac 1 2 \frac{\partial^2}{\partial s_1\partial s_2}L\big(w_S+\tau_wK_x+(s_1+s_2)u\big)|_{(s_1,s_2)=(0,0)}\\
&=\frac 1 2\frac{\partial^2}{\partial s_1\partial s_2}\Big(L\big(w_S+(s_1+s_2)u\big)\\
&\quad\quad\quad\quad\quad\quad\quad\quad\quad+\tau_w^2L(K_x)+2\tau_w\tilde B\big(w_S+(s_1+s_2)u\big)\Big)\Big|_{(s_1,s_2)=(0,0)}.
\ealn
Since $\tilde B$ is linear, we have that $\frac{\partial^2}{\partial s_1\partial s_2}\tilde B\big(w_S+(s_1+s_2)u\big)\big|_{(s_1,s_2)=(0,0)}=0$ and then
\begin{equation} \tilde g_w(u,u)=\tilde g_{w_S}(u,u)>0.\label{g0}\end{equation}
By using $w=w_{\mathcal D}+\lambda_w K_x$ and $w_S=(w_S)_{\mathcal D}+\lambda_{w_S}K_x$, we also get, as in \eqref{Ktimelike},
\begin{equation}\tilde g_w(K_x,K_x)=L(K_x)=\tilde g_{w_{S}}(K_x,K_x).\label{Lambdax}\end{equation}
Moreover, recalling \eqref{linearB}, we obtain
\bal
\tilde g_w(u, K_x)&=\frac{1}{2}\frac{\partial^2}{\partial s_1\partial s_2}L\big(w+s_1u+s_2K_x\big)|_{(s_1,s_2)=(0,0)}\nonumber\\
&=\frac 1 2 \frac{\partial^2}{\partial s_1\partial s_2}L\big(w_{\mathcal D}+\lambda_wK_x+s_1u_{\mathcal D}+s_1\lambda_uK_x+s_2 K_x\big)|_{(s_1,s_2)=(0,0)}\nonumber\\
&=\frac 1 2 \frac{\partial^2}{\partial s_1\partial s_2}\big(L(w_{\mathcal D}+s_1u_{\mathcal D})+(\lambda _w+s_1\lambda_u+s_2)^2L(K_x)\big)|_{(s_1,s_2)=(0,0)}\nonumber\\
&=\lambda_uL(K_x)=\tilde B(u).\label{omegax}
\eal
Let $I_{\bar z}=(-\varepsilon_{\bar z},\varepsilon_{\bar z})$ be an interval such that the map $\phi:I_{\bar z}\times S_{\bar z}\rightarrow\tilde M$, $\phi(t,x)=\psi_t(x)$, where $\psi$ is the flow of $K$, is a diffeomorphism onto a neighborhood $U_{\bar z}$ of ${\bar z}$ in $\tilde M$. Consider a non-vanishing smooth section $W:S_{\bar z}\rightarrow T\tilde M$ such that $W_x\not\in\mathcal T_x$, for all $x\in S_{\bar z}$. Set $Y_{z}=(d\psi_{t})_{x}(W_x)$, with $z=\phi(t,x)$. Then $Y$ is a non-vanishing smooth vector field in $U_{\bar z}$.
The evaluation $\tilde g_Y$ of the fundamental tensor of $L$ in $Y$ becomes, then, a Lorentzian metric on $U_{\bar z}$ (and, by definition of $Y$, $K$ is a Killing vector field for $\tilde g_Y$).
In particular, $\tilde g_{Y_z}(w_1,w_2)=\tilde g_{W_x}(v_1,v_2)$, for all $z\in U_{\bar z}$, $z=\phi(t,x)$ and
$w_i=(d \phi)_{(t,x)}(v_i)$, $i=1,2$.
Thus, $\phi^*\tilde g_Y$ in $I_{\bar z}\times S_{\bar z}$ is given by
\[\phi^*\tilde g_Y\big((\tau,v),(\tau,v)\big)=\tilde g_{W_x}(K_x,K_x)\tau^2+2\tilde g_{W_x}(v,K_x)\tau+\tilde g_{W_x}(v,v),\]
for all $(t,x)\in I_{\bar z}\times S_{\bar z}$ and $(\tau,v)\in \ensuremath{\mathbb R}\xspace\times T_xS_{\bar z}$.
From \eqref{g0}, \eqref{Lambdax}, \eqref{omegax} we then obtain:
\baln
\phi^*\tilde g_Y\big((\tau,v),(\tau,v)\big)&=\tilde g_{(W_x)_S}(K_x,K_x)\tau^2+2\tilde g_{(W_x)_S}(v,K_x)\tau+\tilde g_{(W_x)_S}(v,v)\\
&=-\Lambda(x)\tau^2+2\omega_x(v)\tau+\tilde g_{(W_x)_S}(v,v),
\ealn
where $-\Lambda(x):=\tilde g_{(W_x)_S}(K_x,K_x)=L(K_x)$ and $\omega$ is the one-form on $S_{\bar z}$ defined by $\omega:=\tilde B|_{TS_{\bar z}}$.
Thus, for all $(\tau, v)\in \ensuremath{\mathbb R}\xspace\times (TS_{\bar z}\setminus 0)$,
\[L\circ\phi_*(\tau,v)=\tilde g_{\phi_*(\tau,v)}\big(\phi_*(\tau,v),\phi_*(\tau,v)\big)=
-\Lambda \tau^2+2\omega(v)\tau+F^2(v),\]
where $F$ is the Finsler metric on $S_{\bar z}$ defined by
$F(v)=\sqrt{\tilde g_v(v,v)}=\sqrt{L(v)}$, while
\[L\circ\phi_*(\tau,0)=\tau^2L(K_x)=-\Lambda(x)\tau^2.
\]
Hence, for all $(\tau, v)\in \ensuremath{\mathbb R}\xspace\times TS_{\bar z}$,
\[L\circ\phi_*(\tau,v)=
-\Lambda \tau^2+2\omega(v)\tau+F^2(v).\]
This concludes the proof of the implication to the left.
For the last part of the theorem, it is enough to observe that in a locally standard static splitting $(0,v)\in \mathcal D$ for all $v\in TS_{\bar z}$, since $\omega=0$. On the other hand, if $\mathcal D$ is integrable then we can take as $S_{\bar z}$ an integral manifold of $\mathcal D$ and then, as in \eqref{Kortho}, we get $\tilde g_{W_x}(u,K_x)=0$ for all $u\in T_xS_{\bar z}$ and all $x\in S_{\bar z}$.
\end{proof}
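\begin{rmk}
As an illustration of Theorem~\ref{charstat}, consider a stationary splitting Finsler spacetime $(\ensuremath{\mathbb R}\xspace\times M, L)$ with $L\in C^1(T\tilde M)$, so that $B$ reduces to a one-form $\omega$ on $M$ (Remark~\ref{gooddistro}), and take $K=\partial_t$. A direct computation, as in the proof of Proposition~\ref{linearity}, gives $(\partial_{\dot z}L)_{K_z}=2(-\Lambda\, d t+\omega)_z$, so that $\mathcal D=\ker(-\Lambda\, d t+\omega)=\ker(- d t+\omega/\Lambda)$. Since $d(- d t+\omega/\Lambda)=d(\omega/\Lambda)$ does not contain $d t$, the Frobenius condition $(- d t+\omega/\Lambda)\wedge d(\omega/\Lambda)=0$ is satisfied if and only if $d(\omega/\Lambda)=0$. Hence $\mathcal D$ is integrable, and $(\ensuremath{\mathbb R}\xspace\times M, L)$ is locally standard static, if and only if the one-form $\omega/\Lambda$ is closed, in analogy with the Lorentzian case.
\end{rmk}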
\section{The optical metrics of a stationary splitting Finsler spacetime}
Generally speaking, by {\em optical metric} is meant a metric tensor that comes into play in the description of the optical geometry of a curved spacetime (see, e.g., \cite{HehObu03}). For static and stationary Lorentzian spacetimes, it usually denotes a Riemannian metric which is conformal to the one induced by the spacetime metric on the space of orbits of the Killing field \cite{AbCaLa88}. For a standard stationary Lorentzian manifold $(\ensuremath{\mathbb R}\xspace\times M,\tilde g)$, $\tilde g$ given in \eqref{sss}, it becomes the Riemannian metric on $M$ given by $\omega/\Lambda \otimes \omega/\Lambda +g/\Lambda$. The role attributed to this metric seems to come from the static case ($\omega=0$), where the metric $g/\Lambda$ fully describes the optical geometry, in the sense that light rays in $(\ensuremath{\mathbb R}\xspace\times M,\tilde g)$ project onto geodesics of $(M,g/\Lambda)$. The same is not true in the more general stationary case, where the equation satisfied by the projected curves is the one of a unit positively or negatively charged test particle moving on the Riemannian manifold $(M, \omega/\Lambda \otimes \omega/\Lambda +g/\Lambda)$ under the action of the magnetic field $B=d(\omega/\Lambda)$ (the positive charge corresponds to future-pointing lightlike geodesics, the negative one to past-pointing ones). This equation (actually, these two equations) can effectively be interpreted as the equation of the geodesics, parametrized with constant velocity w.r.t. $\omega/\Lambda \otimes \omega/\Lambda +g/\Lambda$, of a Finsler metric of Randers type on $M$ and of its reverse metric.
Several results about lightlike and timelike geodesics in the standard stationary spacetime can then be deduced by studying geodesics of such Finsler metrics \cite{CaJaMa11, CaJaMa10, BilJav08, CaJaMa10a, Cap10, CaJaMa13}.
The properties of these Randers metrics encode also the causal structure \cite{CaJaMa11, CJS} and the topological lensing \cite{CapGerSan12, Werner12} in a standard stationary Lorentzian spacetime, moreover they give information about its c-boundary \cite{FlHeSa13} and its curvature \cite{Gib09}.
Such correspondence between spacetime geometry and Finsler geometry has also been extended to more general Lorentzian spacetimes introducing some generalised Finsler-type structures \cite{CJS2}.
Our aim in this section is to prove that the correspondence still holds for a wide class of stationary splitting Finsler spacetimes $(\ensuremath{\mathbb R}\xspace\times M,L)$.
Observe that if $\gamma=(\theta,\sigma)$ is a lightlike curve on $(\ensuremath{\mathbb R}\xspace\times M,L)$, with $L$ defined as in \eqref{stationary}, then, by definition, it satisfies the equation
\begin{equation*}
0=L(\dot{\gamma})=-\Lambda(\sigma)\dot\theta^2+2B(\dot\sigma)\dot\theta+F^2(\dot\sigma),
\end{equation*}
so
\[
\dot\theta=\frac{B(\dot\sigma)}{\Lambda(\sigma)}\pm\sqrt{\frac{B(\dot\sigma)^2}{\Lambda^2(\sigma)}+\frac{F^2(\dot\sigma)}{\Lambda(\sigma)}}.
\]
Solving the above equation w.r.t. $\dot\theta$, we get the following non-negative and fiberwise positively homogeneous of degree $1$ Lagrangians on $TM$ associated to the stationary splitting Finsler spacetime $(\ensuremath{\mathbb R}\xspace\times M,L)$:
\begin{equation}\label{FBFB-}
\begin{split}F_B&=\frac{B}{\Lambda}+\sqrt{\frac{B^2}{\Lambda^2}+\frac{F^2}{\Lambda}},\\ F^{-}_{B}&=-\frac{B}{\Lambda}+\sqrt{\frac{B^2}{\Lambda^2}+\frac{F^2}{\Lambda}}.
\end{split}
\end{equation}
The same assumptions ensuring that $L$ is a Lorentz-Finsler function on $\tilde M=\ensuremath{\mathbb R}\xspace\times M$, plus a definite sign of $B$ when it does not reduce to a one-form on $M$, give that $F_B$ and $F^-_B$ are Finsler metrics on $M$.
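As a quick numerical sanity check (ours, not part of the formal development), one can verify in a scalar model that $F_B$ and $-F^-_B$ are exactly the two roots of the lightlike condition above; here $\Lambda$, $B$, $F$ are sample numbers standing for their values at a fixed tangent vector $v$:

```python
import math

# Hypothetical sample data standing for Lambda(sigma), B(sigma') and F(sigma')
# at a fixed tangent vector; this is a numeric illustration, not the paper's code.
Lam, B, F = 2.0, 0.5, 1.0

G = math.sqrt(B**2 + Lam * F**2)    # G = sqrt(B^2 + Lambda F^2)
F_B = (B + G) / Lam                 # F_B   =  B/Lam + sqrt(B^2/Lam^2 + F^2/Lam)
F_B_minus = (G - B) / Lam           # F_B^- = -B/Lam + sqrt(B^2/Lam^2 + F^2/Lam)

def lightlike(theta_dot):
    """The lightlike condition -Lam theta'^2 + 2 B theta' + F^2 = 0."""
    return -Lam * theta_dot**2 + 2 * B * theta_dot + F**2

# theta' = F_B and theta' = -F_B^- are the two roots of the quadratic.
assert abs(lightlike(F_B)) < 1e-12
assert abs(lightlike(-F_B_minus)) < 1e-12
```

Since the quadratic in $\dot\theta$ opens downward, these two roots bound the causal region from above and below, which is used again in Proposition~\ref{samecausality}.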
\begin{thm}\label{fermatmetrics}
Let $L\colon T\tilde M\to \ensuremath{\mathbb R}\xspace$ be defined as in \eqref{stationary}. Assume that \begin{enumerate}
\item $(\partial^2_{yy}B)_v$ is positive semi-definite (resp. $(\partial^2_{yy}B)_v$ is negative semi-definite), for all $v\in TM\setminus 0$;
\item for each $x\in M$ either $B(x,v)\geq 0$ (resp. $B(x,v)\leq 0$) for all $v\in T_xM$ or $B(x,\cdot )$ is linear on $T_xM$;
\end{enumerate}
then $F_B$ (resp. $F^{-}_B$) is a Finsler metric on $M$.
\end{thm}
\begin{proof}
Let us prove the statement for $F_B$, i.e. in the case when $(\partial^2_{yy}B)_v$ is positive semi-definite and either $B(x,\cdot )$ is non-negative or it is linear.
The only nontrivial part of the proof is to show that the fiberwise Hessian of the square of $F_B$ is positive definite.
Let us define
\begin{equation} G=\sqrt{B^2 +\Lambda F^2},\label{Gfinsler}\end{equation}
so that $F_B=\frac{1}{\Lambda}(B+G)$. Let us equivalently compute the fiberwise Hessian of $\frac 1 2 (\Lambda F_B)^2$ at $v\in TM\setminus 0$:
\bml\frac{\Lambda^2}{2}(\partial^2_{yy}F^2_B)_v=\big((\partial_yB)_v+(\partial_yG)_v\big)\otimes\big((\partial_yB)_v+(\partial_yG)_v\big)
\\+(B(v)+G(v))\big((\partial^2_{yy}B)_v+(\partial^2_{yy}G)_v\big). \label{lambdaF}
\eml
Let us now show that $G$ is a Finsler metric on $M$. Clearly, $G$ is non-negative and vanishes only at zero vectors, it is continuous in $TM$ and smooth outside the zero section, moreover it is fiberwise positively homogeneous of degree $1$. It remains to prove that the fiberwise Hessian of $\frac{1}{2}G^2$ is positive definite on $TM$.
Let us evaluate,
\[\frac 1 2 (\partial^2_{yy}G^2)_v=(\partial_yB)_v\otimes (\partial_yB)_v+B(v)(\partial^2_{yy}B)_v+\frac{\Lambda}{2}(\partial^2_{yy} F^2)_v.
\]
As $(\partial_yB)_v\otimes (\partial_yB)_v+\frac{\Lambda}{2}(\partial^2_{yy} F^2)_v$ is positive definite, we see that $\frac 1 2 (\partial^2_{yy}G^2)_v$ is positive definite provided that $B$ is linear on $T_{\pi_M(v)}M$ (thus, $(\partial^2_{yy}B)_v=0$ for all $v\in T_{\pi_M(v)}M\setminus 0$) or $B(v)\geq 0$ and $(\partial^2_{yy}B)_v$ is positive semi-definite.
Since $G$ is a Finsler metric, we know that $(\partial^2_{yy}G)_v$ is positive semi-definite and $(\partial^2_{yy}G)_v(u,u)=0$ if and only if $u=v$.
Thus, as $B(v)+G(v)\geq 0$ and it vanishes only if $v=0$, from \eqref{lambdaF} we see that $\frac{\Lambda^2}{2}(\partial^2_{yy}F^2_B)_v$ is positive semi-definite. Let us then assume, by contradiction, that there exists $u\in T_{\pi_M(v)} M$, $u\neq 0$, such that $\frac{\Lambda^2}{2}(\partial^2_{yy}F^2_B)_v(u,u)=0$. This implies that
$(\partial^2_{yy}G)_v(u,u)=0$ and then $u=v$. Hence, by homogeneity, $(\partial^2_{yy}B)_v(v,v)=0$ and \[0=\big((\partial_yB)_v+(\partial_yG)_v\big)\otimes\big((\partial_yB)_v+(\partial_yG)_v\big)(v,v)=(B(v)+G(v))^2,\]
which implies that $v=0$, a contradiction.
If $(\partial^2_{yy}B)_v$ is negative semi-definite and $B(x,\cdot )$ is non-positive or linear, we get as above that $F_B^-$ is a Finsler metric on $M$, taking into account that
$F_B^-=\frac{1}{\Lambda}(G-B)$.
\end{proof}
\begin{rmk}
Apart from the case when $B$ is a one-form on $M$, a significant class of maps satisfying the assumptions of Theorem~\ref{fermatmetrics} is given (up to the sign) by Randers variations of Finsler metrics: $B=\pm (\omega+F_1)$, where $F_1$ is a Finsler metric on $M$ and $\omega$ is a one-form such that $|\omega(v/F_1(v))|<1$, for all $v\in TM\setminus 0$.
\end{rmk}
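The defining properties of such a Randers variation can be probed numerically. The following sketch (a hypothetical instance, not from the paper) takes $F_1$ to be the Euclidean norm on $\ensuremath{\mathbb R}\xspace^2$ and a one-form $\omega$ with $|\omega|<1$, and checks by finite differences that $B=\omega+F_1$ is non-negative with positive semi-definite fiberwise Hessian, as required in Theorem~\ref{fermatmetrics}:

```python
import numpy as np

# Hypothetical Randers-type map B = omega + F_1 on R^2, with F_1 the Euclidean
# norm and |omega| = 0.5 < 1; we check B >= 0 and PSD fiberwise Hessian.
omega = np.array([0.3, -0.4])

def B(v):
    return omega @ v + np.linalg.norm(v)

def hessian(f, v, h=1e-5):
    """Central finite-difference Hessian of f at v."""
    n = len(v); H = np.zeros((n, n)); E = np.eye(n)
    for i in range(n):
        for j in range(n):
            H[i, j] = (f(v + h*E[i] + h*E[j]) - f(v + h*E[i] - h*E[j])
                       - f(v - h*E[i] + h*E[j]) + f(v - h*E[i] - h*E[j])) / (4*h*h)
    return H

rng = np.random.default_rng(0)
for _ in range(20):
    v = rng.normal(size=2); v /= np.linalg.norm(v)  # homogeneity: unit v suffices
    assert B(v) >= 0
    assert np.linalg.eigvalsh(hessian(B, v)).min() > -1e-4  # PSD up to FD error
```

The Hessian of the Euclidean norm at a unit vector is $I-v\otimes v$, and the linear part $\omega$ contributes nothing, which is why the check passes with a small negative tolerance for discretization noise.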
\section{Trivial isocausal static Finsler spacetimes}
The optical metrics have already been introduced in \cite{CapSta16} for a standard static Finsler spacetime. Since in this case $B=0$, both $F_B$ and $F_B^-$ reduce to the Finsler metric $F/\Lambda$ on $M$. The causal properties of a standard static Finsler spacetime can then be described in terms of metric properties of $F/\Lambda$. Thanks to the metrics $F_B$ and $F_B^-$, we easily see that the causal properties of a stationary splitting Finsler spacetime coincide with those of a pair of standard static Finsler spacetimes, and therefore they can be described using $F_B$ and $F^-_B$. Indeed, under the assumptions of Theorem~\ref{fermatmetrics}, we can consider the Lorentz-Finsler functions on $\tilde M$ given by
\begin{equation}\label{LBLB-}
L_B(\tau,v):=-\tau^2+F_{B}^2(v), \quad\quad L_{B^-}(\tau,v):=-\tau^2+(F^-_{B})^2(v),
\end{equation}
and the Finsler spacetimes $(\ensuremath{\mathbb R}\xspace\times M, L_B,T^+\tilde M\setminus \mathcal T )$, $(\ensuremath{\mathbb R}\xspace\times M, L_{B^-},T^-\tilde M\setminus \mathcal T)$ that we call \emph{trivial isocausal static Finsler spacetimes} associated to $(\tilde M, L)$.
Isocausality is a relation between Lorentzian spacetimes introduced in \cite{GarSen03}. If $V$ and $W$ are spacetimes, they are said to be {\em isocausal} if there exist two diffeomorphisms $\varphi:V\to W$ and $\psi:W\to V$ such that $\varphi_*$ and $\psi_*$ map future-pointing causal vectors into future-pointing causal vectors. Clearly this notion makes sense also for Finsler spacetimes. However, notice that in the Finsler setting future- and past-pointing causal vectors are in general not related by the symmetry $v\mapsto -v$, and so one should consider separately causal relations involving future and past, provided that both the future and past causal cones are defined. In our case, we have that $(\tilde M, L, T^+\tilde M\setminus \mathcal T)$ and $(\tilde M, L, T^-\tilde M\setminus \mathcal T)$ are both trivially related (i.e. $\varphi=\psi=i_{\ensuremath{\mathbb R}\xspace\times M}$) respectively to $(\ensuremath{\mathbb R}\xspace\times M, L_B,T^+\tilde M\setminus \mathcal T )$ and $(\ensuremath{\mathbb R}\xspace\times M, L_{B^-},T^-\tilde M\setminus \mathcal T)$, as the following proposition shows.
\begin{prop}\label{samecausality}
Let $(\tilde M, L)$ be a stationary splitting Finsler spacetime satisfying the assumptions of Theorem~\ref{fermatmetrics}.
Then $(\tau,v)\in T\tilde M$, with $\tau>0$ (resp. $\tau<0$) is a causal vector of $(\tilde M,L)$ if and only if it is a causal, future-pointing (resp. past-pointing) vector of $(\tilde M,L_B)$ (resp. $(\tilde M,L_{B^-})$).
\end{prop}
\begin{proof}
By definition, a non-zero vector $(\tau,v)\in T\tilde M$ is causal for $(\tilde M, L)$ if and only if $L(\tau,v)\leq 0$ i.e.
$-\Lambda\tau^2+2B(v)\tau+F^2(v)\leq 0$
which is equivalent to
$\tau\geq F_B(v)$ or $\tau\leq-F^{-}_B(v)$. Then the equivalence follows by recalling (see Remark~\ref{futurepastpoint}) that a causal, future-pointing (resp. past-pointing) vector in an ultra-static standard Finsler spacetime $(\tilde M, L_1)$, $L_1(\tau,v)= -\tau^2 +F_1^2(v)$, where $Y=\partial_t$, is a non-zero vector $(\tau,v)$ such that $\tau\geq F_1(v)$ (resp. $\tau\leq -F_1(v)$).
\end{proof}
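The content of Proposition~\ref{samecausality} can also be checked numerically in a scalar model (a sanity check of ours, with numbers standing for $\Lambda$, $B(v)$, $F(v)$ at a fixed $v\neq 0$): since the quadratic $-\Lambda\tau^2+2B(v)\tau+F^2(v)$ opens downward with roots $F_B(v)$ and $-F^-_B(v)$, causality of $(\tau,v)$ is exactly the condition $\tau\geq F_B(v)$ or $\tau\leq -F^-_B(v)$.

```python
import math, random

# Scalar sanity check: (tau, v) is causal for L iff tau >= F_B(v) or
# tau <= -F_B^-(v); Lam, B, F are random stand-ins for the fiber data.
random.seed(1)
for _ in range(1000):
    Lam = random.uniform(0.5, 3.0)
    B = random.uniform(-2.0, 2.0)
    F = random.uniform(0.1, 2.0)     # F(v) > 0 since v != 0
    G = math.sqrt(B*B + Lam*F*F)
    F_B, F_Bm = (B + G)/Lam, (G - B)/Lam
    tau = random.uniform(-5.0, 5.0)
    causal = (-Lam*tau*tau + 2*B*tau + F*F) <= 0
    assert causal == (tau >= F_B or tau <= -F_Bm)
```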
\begin{rmk} \label{futureinside}
In particular, the statement of Proposition~\ref{samecausality} holds with the words ``timelike'' or ``lightlike'' replacing ``causal''.
Notice also that the causal future-pointing vectors of $(\tilde M, L, T^+\tilde M\setminus \mathcal T)$ (resp. of $(\tilde M, L, T^-\tilde M\setminus \mathcal T))$ are all and only the causal vectors in $\bar T^+\tilde M:=\cup_{p\in \tilde M}\bar T^+_p\tilde M$ (resp. $\bar T^-\tilde M:=\cup_{p\in \tilde M}\bar T^-_p\tilde M$). In fact, $(\tau, v)\in T^+\tilde M$ (resp. $(\tau, v)\in T^-\tilde M$) is causal for $(\tilde M, L, T^+\tilde M\setminus \mathcal T)$ (resp. $(\tilde M, L, T^-\tilde M\setminus \mathcal T)$) if and only
$\tau \geq F_B (v)$ (resp. $\tau\leq -F_{B}^-(v)$). As $F_B(v)=\frac{1}{\Lambda}(B(v)+G(v))>\frac{B(v)}{\Lambda}$ (resp. $-F_B^-(v)=\frac{1}{\Lambda}(B(v)-G(v))<\frac{B(v)}{\Lambda}$), provided that $v\neq 0$, we conclude by recalling Remark~\ref{futurepastpoint}.
When $B$ is a one-form on $M$ both the sets of future- and past-pointing vectors of $(\tilde M, L, T\tilde M\setminus \mathcal T)$ are non-empty and, being in this case $Y=\partial_t$, they coincide with the causal vectors belonging respectively to $\bar T^+\tilde M$ and $\bar T^-\tilde M$.
\end{rmk}
\begin{rmk}\
As in Lemma 2.21 in \cite{CapSta16}, the epigraph (resp. hypograph) of the function $\tau=F_B (v)$ (resp. $\tau=-F^-_B (v)$) in $T_p \tilde M$ is connected and convex, for all $p\in \tilde M$, i.e. the set of the future-pointing (resp. past-pointing) causal vectors of $(\tilde M, L, T^+\tilde M\setminus \mathcal T)$ (resp. $(\tilde M, L, T^-\tilde M\setminus \mathcal T)$) at $p\in \tilde M$ is connected and convex.
Moreover, for each $c>0$, the set of the future-pointing (resp. past-pointing) timelike vectors $(\tau,v)$ of $(\tilde M, L, T^+\tilde M\setminus \mathcal T)$ (resp. $(\tilde M, L, T^-\tilde M\setminus \mathcal T)$) in $T_{p}\tilde M$ such that $L(\tau, v)\leq -c$ is also connected and strictly convex.
\end{rmk}
Let $p_0\in\tilde M$ and let us denote by $I^+(p_0)$ (resp. $I^-(p_0)$) the subset of $\tilde M$ given by all the points $p\in \tilde M$ such that there exists a timelike future-pointing curve of $(\tilde M,L)$ connecting $p_0$ to $p$ (resp. $p$ to $p_0$).
From Proposition~\ref{samecausality} and Remark~\ref{futureinside}, it follows that these sets coincide with those of the corresponding trivial isocausal static Finsler spacetime. Thus from \cite[Prop. 3.2]{CapSta16} we obtain:
\begin{prop}\label{causality1}
Let $(\tilde M, L, T^+\tilde M\setminus \mathcal T)$ be a stationary splitting Finsler spacetime (thus $(\partial^2_{yy}B)_v$ is positive semi-definite, for all $v\in TM\setminus 0$) such that, for each $x\in M$, either $B(x,v)\geq 0$, for all $v\in T_xM$, or $B(x,\cdot )$ is linear on $T_xM$. Then, for all $p_0=(t_0,x_0)\in \tilde M$ we have:
\[I^{+}(p_0)=\bigcup_{r>0}\left(\{t_0+r\}\times B^{+}(x_0,r)\right), \quad I^{-}(p_0)=\bigcup_{r>0}\left(\{t_0-r\}\times B^{-}(x_0,r)\right),\]
where $B^{+}(x_0,r)$ and $B^-(x_0,r)$ denote, respectively, the forward and the backward open ball of centre $x_0$ and radius $r$ (see, e.g. \cite{Chern} for the definitions of these balls) of the Finsler metric $F_B$. Moreover, $I^{\pm}(p_0)$ are open subsets of $\tilde M$.
\end{prop}
\begin{rmk} Taking into account that, in $(\tilde M, L, T^-\tilde M\setminus \mathcal T)$, under the assumptions $(\partial^2_{yy}B)_v$ is negative semi-definite, for all $v\in TM\setminus 0$ and, for each $x\in M$, either $B(x,v)\leq 0$, for all $v\in T_xM$, or $B(x,\cdot )$ is linear on $T_xM$, a timelike future-pointing vector is past-pointing for the standard static Finsler spacetime $(\tilde M, L_{B^-})$, an analogous proposition holds for a stationary splitting spacetime of the type $(\tilde M, L, T^-\tilde M\setminus \mathcal T)$ by considering the forward and
backward ball of the metric $F^-_{B}$ and replacing $t_0+r$ with $t_0-r$ in the first equality and $t_0-r$ with $t_0+r$ in the second one.
\end{rmk}
Analogously, using also Fermat's principle (see Appendix~\ref{fermatprinc}), from the results in Section 3 of \cite{CapSta16} we get the following proposition, which extends to stationary splitting Finsler spacetimes some results obtained in \cite{CJS} for standard stationary Lorentzian spacetimes (we refer to \cite{CapSta16} and to \cite{MinguzziSanchez} for the definitions of the causality properties involved in its statement, while for the notions of forward and backward completeness of a Finsler metric we refer to \cite{Chern}).
\begin{prop}\label{causality2}
Under the assumptions of Theorem~\ref{fermatmetrics}, let $(\tilde M, L,T^+\tilde M\setminus \mathcal T)$ (resp. $(\tilde M, L,T^-\tilde M\setminus \mathcal T)$) be a stationary splitting Finsler spacetime. Then the following propositions hold true:
\begin{enumerate}
\item $(\tilde M, L,T^+\tilde M\setminus \mathcal T)$ (resp. $(\tilde M, L,T^-\tilde M\setminus \mathcal T)$) is causally simple if and only if for any $(x,y)\in M\times M$ there exists a geodesic of $F_B$ (resp. $F^-_B$) joining $x$ to $y$, with length equal to the distance associated to $F_B$ (resp. $F_B^-$);
\item a slice (and then any slice) $S_t=\{t\}\times M$ is a Cauchy hypersurface if and only if the Finsler manifold $(M,F_B)$ (resp. $(M,F^-_B)$) is forward and backward complete;
\item $(\tilde M, L,T^+\tilde M\setminus \mathcal T)$ (resp. $(\tilde M, L,T^-\tilde M\setminus \mathcal T)$) is globally hyperbolic if and only if $ \bar B^{+}(x,r)\cap \bar B^{-}(y,s)$ is compact, for every $x,y\in M$ and $r,s>0$, where $\bar B^{\pm}(x_0,r_0)$ are the closure of the forward and backward balls on $M$ associated to metric $F_B$ (resp. $F_B^-$).
\end{enumerate}
\end{prop}
\section{Conclusions}
In this work, we have introduced a Lorentz-Finsler function, Eq. \eqref{stationary}, on $\ensuremath{\mathbb R}\xspace\times M$ which admits a timelike Killing vector field and can be considered as a natural generalisation of the quadratic form of a standard stationary Lorentzian metric. We have characterized when a Finsler spacetime with a timelike Killing vector field is locally of the type introduced here. Moreover, we have seen that the optical geometry of these Finsler spacetimes can be described by (at least) one of two classical Finsler metrics on $M$, Eq. \eqref{FBFB-}, leading to causal relations between this class of stationary Finsler spacetimes and $\ensuremath{\mathbb R}\xspace\times M$ endowed with two possible static Lorentz-Finsler functions, Eq. \eqref{LBLB-}, corresponding respectively to some sign assumptions on $B$ and its fiberwise Hessian. These relations hold also for a standard stationary spacetime, Eq. \eqref{sss}, as its optical metrics are Finslerian too: $F_B$ is a Randers metric and $F_{B}^{-}$ is its reverse metric (see \cite{CaJaMa11}), showing that isocausality can hold between Lorentzian and Finsler spacetimes as well.
The isocausality between stationary splitting Finsler spacetimes and static Finsler spacetimes allowed us to deduce some results about the former class from already known ones valid for the latter \cite{CapSta16}, as shown in Propositions~\ref{causality1} and \ref{causality2}. In particular, these results hold whenever the map $B$ reduces to a one-form on $M$. Thus, imitating the modification of the Schwarzschild metric in \cite{LPH}, a Finslerian perturbation of the Kerr metric given (in geometric units) as
\[
L(\tau,\dot r,\dot\theta,\dot\varphi):=
-\Big(1 - \frac{2Mr}{\rho^2}+\psi_0(r)\Big)\tau^2-\frac{2Mra\sin^2\theta}{\rho^2}\tau\dot\varphi + F^2(\dot r,\dot\theta,\dot\varphi),\]
where
\bmln
F(\dot r,\dot\theta,\dot\varphi):=
\bigg(\big(\frac{\rho^4}{\Delta^2}+\psi_1(r)\big)\dot r^4
+\big(\rho^4+\psi_2(r)\big)\dot\theta^4 \\+\Big(\big(r^2+a^2+\frac{2Ma^2\sin^2\theta}{\rho^2}\big)^2\sin^4\theta+\psi_3(r)\Big)\dot\varphi^4
\bigg)^{1/4},
\emln
$\rho^2:=r^2+a^2\cos^2\theta$, $\Delta:=r^2-2Mr+a^2$, $(r,\theta, \varphi)$ are spherical coordinates on $\ensuremath{\mathbb R}\xspace^3$ and $(\tau, \dot r, \dot\theta, \dot\varphi)$ the induced ones on $T_{p}\ensuremath{\mathbb R}\xspace^4$, for each $p=(t,r,\theta,\varphi)\in\ensuremath{\mathbb R}\xspace^4$, $M>0$ and $a\neq 0$,
belongs (on the region where $1 - \frac{2Mr}{\rho^2}+\psi_0(r)>0$ and for small enough functions $\psi_i(r)$) to the class of Lorentz-Finsler functions for which our results hold.
Nevertheless, more general Lorentz-Finsler functions $L$ can be considered by changing the Finsler metric $F$ on $\ensuremath{\mathbb R}\xspace^3$. In particular, as $F$ can be taken non-reversible ($F(\dot r,\dot\theta,\dot\varphi)\neq F(-\dot r,-\dot\theta,-\dot\varphi)$), the frame-dragging effect,
which enhances the bending angle of light rays that propagate in the direction of rotation of the Kerr black hole \cite{amir2, IyeHan09, IyeHan09b}, might be fine-tuned by a suitable choice of a Finsler metric $F$ depending in a non-symmetric way on $\dot\varphi$.
An example of a stationary splitting Finsler spacetime appeared as a solution of the Finslerian gravitational field equations proposed by S. F. Rutz in \cite{Rutz} as a generalisation of the Einstein field equations in vacuum. It is described by the spherically symmetric Lorentz-Finsler function (compare also with \cite[Eq. (37)]{Bar12})
\bmln
L(\tau,\dot r,\dot\theta,\dot\varphi):=-\Big(1 - \frac{2M}{r}\Big)\tau^2+\Big(1 - \frac{2M}{r}\Big)^{-1}\dot r^2+r^2(\dot\theta^2+\sin^2\theta\dot\varphi^2)\\+\epsilon\Big(1 - \frac{2M}{r}\Big)\tau B(\dot r,\dot\theta,\dot\varphi),
\emln
where $B(\dot r,\dot\theta,\dot\varphi):=(\dot\theta^2+\sin^2\theta\dot\varphi^2)^{1/2}$. Observe that when the parameter $\epsilon$ is equal to $0$, $L$ reduces to the quadratic form associated to the Schwarzschild metric. The map $B$ satisfies the assumptions of Theorem~\ref{fermatmetrics} only on the open region defined by $0<\theta<\pi$ and, for each $(r,\theta,\varphi)\in \big((0,+\infty)\setminus\{2M\}\big)\times (0,\pi)\times [0,2\pi]$, on the cone $\dot\theta^2+\sin^2\theta\dot\varphi^2>0$ in $\ensuremath{\mathbb R}\xspace^3$, where $B$ admits a fiberwise Hessian. This example suggests that it would be interesting to weaken our setting by allowing $B$ to be twice differentiable only on a cone subset $A_M$ of $TM\setminus 0$, giving rise to two conic (in the sense of \cite{JavSan14}) optical metrics $F_B$ and $F^-_B$, smooth only on $A_M$. This can be further generalized by starting from a conic Finsler metric $F$ or by considering a Killing vector field $\partial_t$ which is not timelike everywhere \cite{CJS2}. However, the study of causality, geodesics and Fermat's principle in the corresponding Lorentz-Finsler spacetime would be much more delicate than in the case where $F$ is a standard Finsler metric on $M$.
\section*{Acknowledgements}
We are grateful to the referees for their valuable comments.
We would also like to thank Miguel Angel Javaloyes for some fruitful conversations about Killing fields for Finsler metrics and Miguel S\'anchez for suggesting to study the case when the map $B$ is more general than a one-form.
\section{Introduction}
The Quantum Cellular Automaton (QCA) is the quantum version of the popular cellular automaton of von
Neumann \cite{neumann1966theory}. It describes the discrete-time evolution of a discrete set of quantum
systems, each one interacting with a finite number of neighbors via the unitary transformation of
a single evolution step. The idea of a quantum version of a cellular automaton was already contained in the early
work of Feynman \cite{feynman1982simulating}, and later has been object of investigation in the
quantum-information community \cite{schumacher2004reversible,arrighi2011unitarity,gross2012index},
with special emphasis on the so-called Quantum Walks (QWs), which describe the one-particle sector of
QCAs with evolution linear in a quantum field
\cite{grossing1988quantum,succi1993lattice,meyer1996quantum,bialynicki1994weyl,ambainis2001one}.
The interest in QCAs is motivated by their potential applications in several fields, like the
statistical mechanics of lattice systems and the quantum computation with microtraps
\cite{cirac2000scalable} and with optical lattices \cite{bloch2004quantum}. Moreover, Quantum Walks
have been used in the design of new quantum algorithms with a computational
speed-up \cite{childs2003exponential,farhi2007quantum}.
Recently, the idea that QCAs could be used to describe a more fundamental discrete Planck scale
dynamics from which the usual Quantum Field Theory emerges
\cite{darianopla,BDTqcaI,d2013derivation}, is gathering increasing attention
\cite{farrelly2014causal,arrighi2013dirac}. The proposal of modeling Planck scale physics with a
classical automaton on a discrete background first appeared in the work of 't Hooft
\cite{t1990quantization}, and Quantum Walks were considered for the simulation of Lorentz-covariant
differential equations in Refs.
\cite{succi1993lattice,bialynicki1994weyl,meyer1996quantum,PhysRevA.73.054302,Yepez:2006p4406}.
Up to now, most of the interest was focused on the emergence of the Dirac equation for a free
Fermionic field. The choice of considering Fermions as the elementary physical systems is motivated
by the idea that the amount of information that can be stored in a finite volume must be finite, as
also suggested by black hole physics \cite{bekenstein1973black,hawking1975particle}. However, the
question whether a Fermionic QCA could recover the dynamics of a Bosonic field was never addressed
before. Here we will see how free electrodynamics emerges from two Weyl QCAs \cite{d2013derivation}
with Fermionic fields. The dynamical equations resulting in the limit of small wavevector $\bvec k$ are
Maxwell's equations. However, for high values of $\bvec k$ the discreteness of the Planck scale
manifests itself, producing deviations from Maxwell. Most notably, the QCA dynamics introduces a
$\bvec k$-dependent speed of light, a feature that was already considered in some approaches to quantum
gravity, and that could be in principle experimentally detected in astrophysical observations
\cite{ellis1992string,lukierski1995classical,Quantidischooft1996,amelino1998tests,amelino2001testable,amelino2001planck,PhysRevLett.88.190403,PhysRevLett.96.051301,ellis2013probes}.
In the present approach the photon turns out to be a composite particle made of a pair of correlated
massless Fermions. This scenario closely resembles the neutrino theory of light of De Broglie
\cite{de1934nouvelle,jordan1935neutrinotheorie,kronig1936relativistically,perkins1972statistics,perkins2002quasibosons}
which suggested that the photon could be composed of a neutrino-antineutrino pair bound by some
interaction. The failure of the neutrino theory of light was determined by the fact that a
composite particle cannot obey the exact Bosonic commutation relations \cite{pryce1938neutrino}.
However, as it was shown in Ref. \cite{perkins2002quasibosons}, the non-Bosonic terms introduce
negligible contribution at ordinary energy densities. In our case, as a consequence of the
composite nature of the photon, we have that the number of photons that can occupy a single mode is
bounded. However, as we will see, the saturation effect originating from the Fermionic nature of the
photon occurs at energy densities far beyond the current laser technology.
In Section \ref{sec:quant-cell-autom}, after recalling some basic notions about the QCA, we review
the Weyl automaton of Ref. \cite{d2013derivation}. In Section \ref{s:Maxw} we build a set of
Fermionic bilinear operators, which in Sect. \ref{sec:recov-maxw-dynam} are proved to evolve
according to the Maxwell equations. In Section \ref{sec:photons-as-composite} we will show that the
polarization operators introduced in Sect. \ref{sec:recov-maxw-dynam} can be considered as Bosonic
operators in a low energy density regime. As a spin-off of this analysis we found a result that
completes the proof, given in Ref. \cite{PhysRevLett.104.070402}, that the amount of entanglement
quantifies whether pairs of Fermions can be considered as independent Bosons. Section
\ref{sec:phen-analys} presents the phenomenological consequences of the present QCA theory, the most
relevant one being the appearance of a $\bvec k$-dependent speed of light. In the same section we
discuss possible experimental tests of such a $\bvec k$-dependence in the astrophysical domain, and we
compare our result with those from Quantum Gravity literature
\cite{ellis1992string,lukierski1995classical,Quantidischooft1996,amelino1998tests,amelino2001testable,amelino2001planck,PhysRevLett.88.190403,PhysRevLett.96.051301,ellis2013probes}.
We conclude with Section \ref{sec:conclusions} where we review the main results and discuss future
developments.
\section{The Weyl automaton: a review}\label{sec:quant-cell-autom}
The basic ingredient of the Maxwell automaton is Weyl's, which has been derived in Ref. \cite{d2013derivation} from
first principles. Here, we will briefly review the construction for completeness.
A QCA represents the evolution of a numerable set $G$ of cells $g\in G$, each one containing an
array of Fermionic local modes. The evolution occurs in discrete identical steps, and in each one
every cell interacts with some of the others. The Weyl automaton is derived from the following principles:
unitarity, linearity, locality, homogeneity, transitivity, and isotropy. Unitarity means just that
each step is a unitary evolution. Linearity means that the unitary evolution is linear in the field.
Locality means that at each step every cell interacts with a finite number of others. We call cells
interacting in one step {\em neighbors}. The neighboring notion also naturally defines a graph over
the automaton, with $g$ as vertices and the neighboring couples as edges. Homogeneity means both
that all steps are the same, all cells are identical systems, and the set of interactions with
neighbours is the same for each cell, hence also the number of neighbours, and the dimension of the
cell field array, which we will denote by $s>0$. We will denote by $A$ the matrix representing the
linear unitary step. Transitivity means that every two cells are connected by a path of neighbours.
Isotropy means that the neighboring relation is symmetric, and there exists a group of automorphisms
for the graph for which the automaton itself is covariant. Homogeneity, transitivity, and isotropy
together imply that $G$ is a group, and the graph is a Cayley graph $\Gamma(G,S_+)$ where
$G=\<S_+|R\>$ is a presentation of $G$ with generator set $S_+$ and relator set $R$. The set of
neighboring cells is then given by $S:=S_+\cup S_-$ where $S_-$ is the set of the inverse
generators. Linearity, locality, and homogeneity imply that each step can be described in terms of
transition matrices $A_h\in\rm{M}(\mathbb{C},s)$ for each $h\in S$, and then the step is described
mathematically as follows
\begin{align}
\label{eq:automagraph}
\psi_g(t+1) = \sum_{h\in S}A_h\psi_{hg}(t)
\end{align}
where $\psi_g(t)$ is the $s$-array of field operators at $g$ at step $t$. Therefore, upon denoting
by $T_g$, $g\in G$, the unitary representation of $G$ on $\ell^2(G)$, $T_g|f\>:=|gf\>$ for $f\in G$,
$A$ is a unitary operator on $\ell^2(G)\otimes\mathbb{C}^s$ of the form
\begin{align}
A:=\sum_{h\in S}T_h\otimes A_h.
\label{eq:walk}
\end{align}
Covariance of the isotropy property means precisely that the group $L$ of automorphisms of the graph
is a transitive permutation group of $S_+$, and there exists a (generally projective) unitary
representation $U_l$, $l\in L$, of $L$ such that
\begin{align}
A=\sum_{h\in S}T_{lh}\otimes U_l A_{h}U_l^\dag,\qquad \forall l\in L.
\label{eq:iso}
\end{align}
In Ref.~\cite{d2013derivation} attention was restricted to groups $G$ quasi-isometrically embeddable
in a Euclidean space; such a group is then {\em virtually Abelian} \cite{Cornulier07}, namely it has an
Abelian subgroup $G'\subset G$ of finite index, i.e. with a finite number of cosets. Then it can be
shown that the automaton is equivalent to another one with group $G'$ and dimension $s'$ a multiple of
$s$. We further assume that the representation of the isotropy group $L$ induced by the embedding is
orthogonal, which implies that the graph neighborhood is embedded in a sphere. We call such a
property {\em orthogonal isotropy}.
For $s=1$ the automaton is trivial, namely $A=I$. For $s=2$ and for Euclidean space $\mathbb R^3$
one has $G=\mathbb Z^3$, and the Cayley graphs satisfying orthogonal isotropy are the Bravais
lattices. The only lattice that has a nontrivial set of transition matrices giving a unitary
automaton is the BCC lattice. We will label the group elements as vectors $\bvec x\in\mathbb{Z}^3$, and
use the customary additive notation for the group composition, whereas the unitary representation of
$\mathbb{Z}^3$ is expressed as follows
\begin{equation}
T_{\bvec z}|\bvec x\>=|\bvec z+\bvec x\>.
\end{equation}
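The role of this translation representation can be illustrated with a minimal numeric toy model (ours, not the paper's): on the cyclic group $\mathbb Z_N$ in place of $\mathbb Z^3$, the regular representation $T_z|x\>=|x+z\>$ is diagonalized by the Fourier basis, which is what makes the block-diagonal $\bvec k$-representation of $A$ below possible.

```python
import numpy as np

# Toy 1D check: the shift T_z on Z_N is diagonal in the Fourier basis,
# with unimodular eigenvalues (phases).
N, z = 8, 3
T = np.roll(np.eye(N), z, axis=0)                    # T|x> = |x+z mod N>
x = np.arange(N)
F = np.exp(2j*np.pi*np.outer(x, x)/N) / np.sqrt(N)   # columns are plane waves |k>
D = F.conj().T @ T @ F
assert np.abs(D - np.diag(np.diag(D))).max() < 1e-12  # diagonal in k-space
assert np.allclose(np.abs(np.diag(D)), 1.0)           # eigenvalues are phases
```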
Being the group Abelian, we can Fourier transform, and the operator $A$ can be easily block-diagonalized
in the $\bvec k$ representation as follows
\begin{align}
\label{eq:weylautomata}
A = \int_B\operatorname d^3 \! \bvec k \, |{\bvec k}\>\< {\bvec k}| \otimes A_{\bvec k}
\end{align}
with $A_\bvec k:=\sum_{\bvec h\in S}e^{-i\bvec k\cdot\bvec h}A_\bvec h$ unitary for every $\bvec k\in
B$, and the vectors $|{\bvec k}\>$ given by
\begin{equation}
|\bvec k\>:=\frac1{\sqrt{2\pi}^3}\sum_{\bvec x\in G}e^{i\bvec k\cdot\bvec x}|\bvec x\>,
\end{equation}
is a Dirac notation for the direct integral over $\bvec k$, and the domain $B$ is the first Brillouin zone of
the BCC. There are only two QCAs, with unitary matrices
\begin{equation}\label{eq:weylautomata2}
A^{\pm}_{\bvec k} := d^{\pm}_{\bvec k} I-i\tilde{\bvec n}^{\pm}_{\bvec k}\cdot\boldsymbol{\sigma}
=\exp[-i\bvec{n}^{\pm}_{\bvec k} \cdot \boldsymbol{\sigma}],
\end{equation}
where
\begin{align}
&\tilde{\bvec n}^{\pm}_{\bvec k} :=
\begin{pmatrix}
s_x c_y c_z \mp c_x s_y s_z\\
\mp c_x s_y c_z - s_x c_y s_z\\
c_x c_y s_z \mp s_x s_y c_z
\end{pmatrix}\!\!,\,
{\bvec n}^{\pm}_{\bvec k}:=\frac{\lambda^{\pm}_{\bvec k}\tilde{\bvec n}^{\pm}_{\bvec k}}{\sin\lambda^{\pm}_{\bvec k}},\nonumber\\
&\operatorname{d}^{\pm}_{\bvec k} := (c_x c_y c_z \pm s_x s_y s_z ),\;
\lambda^{\pm}_{\bvec k}:=\arccos(d^{\pm}_{\bvec k}),\nonumber
\end{align}
and
\begin{equation}
c_\alpha := \cos({k}_\alpha/\sqrt{3}),\;s_\alpha:= \sin({k}_\alpha/\sqrt{3}),\;\alpha = x,y,z.\nonumber
\end{equation}
The matrices $A_{\bvec k}^\pm$ in Eq. \eqref{eq:weylautomata2} describe the evolution of a two-component
Fermionic field,
\begin{align}
\label{eq:automa1}
{\psi} ({\bvec k},t+1) =
A_{\bvec k}^\pm {\psi} ({\bvec k},t),
\quad
{\psi} ({\bvec k},t) : =
\begin{pmatrix}
{\psi}_R ({\bvec k},t)\\
{\psi}_L ({\bvec k},t)
\end{pmatrix}.
\end{align}
The adimensional framework of the automaton corresponds to measuring everything in Planck units. In
such a case the limit $|{\bvec k}|\ll 1$ corresponds to the relativistic limit, where one has
\begin{equation}
\bvec n^{\pm}({\bvec k})\sim\tfrac{{\bvec k}}{\sqrt{3}},\quad A^{\pm}_{\bvec k}\sim\exp[-i\tfrac{{\bvec k}}{\sqrt{3}} \cdot\boldsymbol{\sigma}],
\end{equation}
corresponding to Weyl's evolution, with $\tfrac{{\bvec k}}{\sqrt{3}}$ playing the role of momentum.
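A numeric check of this relativistic limit (a sketch of ours, not from the paper) is to verify that the modulus of $\bvec n^\pm(\bvec k)$ approaches $|\bvec k|/\sqrt 3$ for $|\bvec k|\ll 1$:

```python
import numpy as np

# Small-|k| check: |n^+/-(k)| -> |k|/sqrt(3), supporting the identification
# of k/sqrt(3) with the momentum in the relativistic limit.
def n_vec(k, sign):
    c, s = np.cos(k / np.sqrt(3)), np.sin(k / np.sqrt(3))
    nt = np.array([s[0]*c[1]*c[2] - sign*c[0]*s[1]*s[2],
                   -sign*c[0]*s[1]*c[2] - s[0]*c[1]*s[2],
                   c[0]*c[1]*s[2] - sign*s[0]*s[1]*c[2]])
    d = c[0]*c[1]*c[2] + sign*s[0]*s[1]*s[2]
    lam = np.arccos(d)
    return lam * nt / np.sin(lam)

k = np.array([1e-3, -2e-3, 1.5e-3])
for sign in (+1, -1):
    assert np.isclose(np.linalg.norm(n_vec(k, sign)),
                      np.linalg.norm(k) / np.sqrt(3), atol=1e-5)
```

The deviations from the linear dispersion appear at higher orders in $|\bvec k|$, which is the origin of the $\bvec k$-dependent speed of light discussed in Sect. \ref{sec:phen-analys}.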
\section{The Maxwell automaton}\label{s:Maxw}
In order to build the Maxwell dynamics, we need to consider two different Weyl QCAs: the first one
acting on a Fermionic field $\psi(\bvec k)$ by the matrix $A_\bvec k$ as in Eq. (\ref{eq:automa1}), and the
second one acting on the field $\varphi(\bvec k)$ by the complex conjugate matrix $A_\bvec k^*=\sigma_y
A_\bvec k\sigma_y$, i.e.
\begin{align}
\label{eq:automa2}
{\varphi} (\bvec k,t+1) = A_{\bvec k}^*{\varphi} (\bvec k,t), \quad
{\varphi} (\bvec k,t) =
\begin{pmatrix}
{\varphi}_R (\bvec k,t)\\
{\varphi}_L (\bvec k,t)
\end{pmatrix}.
\end{align}
The matrix $A_{\bvec k}$ can be either one of the Weyl matrices $A^\pm_{\bvec k}$, and the whole derivation is
independent of the choice.
The Fermionic fields ${\varphi}$ and ${\psi}$ are independent and obey the following
anti-commutation relations
\begin{align}
&[\psi_i(\bvec k),\psi_j(\bvec k') ]_+ =
[\varphi_i(\bvec k),\varphi_j(\bvec k') ]_+ =\nonumber\\
&[\varphi_i(\bvec k),\psi_j(\bvec k') ]_+=
[\varphi_i(\bvec k),\psi^\dagger_j(\bvec k') ]_+
=0
\nonumber \\
&[\psi_i(\bvec k),\psi^\dagger_j(\bvec k') ]_+ =
[\varphi_i(\bvec k),\varphi^\dagger_j(\bvec k') ]_+
=
\delta_B(\bvec k-\bvec k')\delta_{i,j} \nonumber \\
&i,j = R,L \qquad \bvec k,\bvec k' \in B,
\label{eq:commutationrel}
\end{align}
where $\delta_B(\bvec k)$ is the 3d Dirac comb delta-distribution (which repeats periodically, with
$\mathbb R^3$ tessellated into Brillouin zones).
Given two arbitrary fields ${\eta}(\bvec k)$ and ${\theta}(\bvec k)$ we
define the following bilinear function
\begin{align}
\label{eq:gimu}
\!\! G_f^{\mu}(\eta,\theta,\bvec k) := \!\!
\int\!\!\frac{\operatorname{d}\bq}{(2\pi)^3} f_{\bvec k}(\bq)
{{\eta}}^T
\left(\tfrac{\bvec k}{2}-\bq\right)
\sigma^{\mu}
{\theta}
\left(\tfrac{\bvec k}{2}+\bq\right)
\end{align}
where $\sigma^0:= I $, $\sigma^1:= \sigma^x$, $\sigma^2:= \sigma^y $, $\sigma^3:=\sigma^z$ and
$\int\frac{\operatorname{d}\bq}{(2\pi)^3} |f_{\bvec k}(\bq)|^2 =1, \forall \bvec k$. In the following we will also treat
the vector part $\boldsymbol\sigma:=(\sigma^1,\sigma^2,\sigma^3)$ of the four-vector $\sigma^\mu$
separately. This allows us to define the following operators
\begin{align}
\label{eq:bilinears}
F^{\mu}(\bvec k) :=G_f^\mu(\varphi,\psi,\bvec k).
\end{align}
In the following sections we study the evolution of the bilinear functions $F^{\mu}(\bvec k)$ and their
commutation relations, and show that, in the relativistic limit and for small particle densities, the
quantum Maxwell equations are recovered for both choices of $A_{\bvec k}=A^\pm_{\bvec k}$.
\section{The Maxwell dynamics}\label{sec:recov-maxw-dynam}
In the following we will use the short notations
\begin{equation}\label{eq:notaz}
[Z\eta](\bvec k):=Z_{\bvec k}\eta(\bvec k),\quad [ZW]_{\bvec k}:=Z_{\bvec k}W_{\bvec k},
\end{equation}
for $\eta$ a field and $Z$ and $W$ matrices. If the fields $\psi$ and $\varphi$ evolve according to
Eqs. \eqref{eq:automa1} and \eqref{eq:automa2}, then the evolution of the bilinear functions
$F^{\mu}(\bvec k)$ introduced in Eq. \eqref{eq:bilinears} obeys the following equation
\begin{align}
\label{eq:exactevolution}
&F^{\mu}(\bvec k,t) = G_f^\mu([{A^*}^t\varphi],[A^t\psi],\bvec k),
\end{align}
where we used the notation in (\ref{eq:notaz}). Now, let us define
\begin{align}
&\tilde F^\mu(\bvec k,t):= G_f^\mu([{U^{\bvec k,t}}^*\varphi],[U^{\bvec k,t}\psi],\bvec k), \nonumber\\
&U^{\bvec k,t}_\bq :=A^{-t}_{\tfrac{\bvec k}2} A^t_\bq,
\label{eq:defU}
\end{align}
where we recall that $[{U^{\bvec k,t}}^*\varphi](\bq):={U_\bq^{\bvec k,t}}^*\varphi(\bq)$. Clearly, one has
$[A^t\eta]=[A_{\frac{\bvec k}{2}}^tU^{\bvec k,t}\eta]$. We now need the identity
\begin{align}
&\exp (-\tfrac{i}{2}\bvec{v}\cdot \boldsymbol{\sigma})
\boldsymbol{\sigma} \exp (\tfrac{i}{2}\bvec{v} \cdot \boldsymbol{\sigma}) =
\operatorname{Exp}(-i\bvec{v} \cdot \bvec{J}) \boldsymbol{\sigma},\nonumber\\
&\exp (-\tfrac{i}{2}\bvec{v}\cdot \boldsymbol{\sigma})
\sigma^0 \exp (\tfrac{i}{2}\bvec{v} \cdot \boldsymbol{\sigma}) =\sigma^0,
\end{align}
where the matrix $\operatorname{Exp}(-i\bvec{v}\cdot\bvec{J})$ acts on $\boldsymbol{\sigma}$ regarded as a vector,
and $\bvec J=(J_x, J_y,J_z)$ is the vector of angular momentum operators. We can then recast
Eq.~\eqref{eq:exactevolution} in terms of the following functions
\begin{align}
\bvec{F}(\bvec k,t) &:=
(
F^{1}(\bvec k,t), F^{2}(\bvec k,t), F^{3}(\bvec k,t)
) ^T, \label{eq:ftilde}
\end{align}
and $\tilde{\bvec{F}}(\bvec k,t)$ similarly defined, obtaining
\begin{align}
\label{eq:evolutionwithrotation}
& F^{0}(\bvec k,t) =
\tilde{F}^{0}(\bvec k,t), \nonumber\\
& \bvec{F}(\bvec k,t) =
\operatorname{Exp}\left(-2i {\bvec{n}}_{\tfrac{\bvec k}{2}} \cdot
\bvec{J}t\right)
\tilde{\bvec{F}}(\bvec k,t).
\end{align}
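The spin-$1/2$ to spin-$1$ identity used above, and hence the rotation form of Eq.~\eqref{eq:evolutionwithrotation}, can be checked numerically. In the sketch below (not part of the paper) we pick the sign convention $(J_a)_{bc}=i\epsilon_{abc}$ for the vector-representation generators, which makes the identity hold exactly as written here; with the opposite convention the rotation appears as $\operatorname{Exp}(+i\bvec v\cdot\bvec J)$:

```python
import numpy as np
from scipy.linalg import expm

SIG = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

# Levi-Civita symbol and spin-1 generators (J_a)_{bc} = i * eps_{abc}
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1.0, -1.0
J = 1j * eps                                  # J[a] acts on 3-vectors

rng = np.random.default_rng(1)
v = rng.normal(size=3)                        # arbitrary rotation vector
U = expm(-0.5j * sum(v[a]*SIG[a] for a in range(3)))
R = expm(-1j * sum(v[a]*J[a] for a in range(3)))   # 3x3 rotation matrix

# conjugating the Pauli vector equals rotating its components
lhs = [U @ SIG[a] @ U.conj().T for a in range(3)]
rhs = [sum(R[a, b]*SIG[b] for b in range(3)) for a in range(3)]
```

The identity for $\sigma^0$ is trivial, since $U\sigma^0 U^\dagger=\sigma^0$.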
If we assume that
\begin{align}
\label{eq:approxf}
\int_{|\bq| \geq \bar{q}(\bvec k)}\frac{\operatorname{d}\bq}{(2\pi)^3} |f_{\bvec k}(\bq)|^2
\ll 1 \quad\mbox{for}\ \bar{q}(\bvec k) \ll |\bvec k|,
\end{align}
by taking the Taylor expansion of ${\bvec{n}}_{\tfrac{\bvec k}{2}+\bq}$
with respect to $\bq$ we can make the approximation
\begin{align}
{U}^{\bvec k,t}_{\tfrac{\bvec k}2\pm\bq}&\simeq \exp\left(i
{\bvec n}_{\tfrac{\bvec k}{2}} \cdot \boldsymbol{\sigma}t\right)
\exp\left[-i\left({\bvec n}_{\tfrac{\bvec k}{2}}
\pm\bvec{l}_{\bvec k,\bq}\right) \cdot
\boldsymbol{\sigma}t\right] \nonumber \\
&
\simeq
\exp \left(\pm
i c_{\bvec k,\bq}
\frac{{\bvec{n}}_{\frac{\bvec k}{2}}}{|{\bvec{n}}_{\frac{\bvec k}{2}}|} \cdot \boldsymbol{\sigma}
t\right)+ O \big( \tfrac{\bar{q}(\bvec k)}{|\bvec{n}_{\frac{\bvec k}{2}}|}
\big) \label{eq:approxU0}
,
,
\end{align}
where $\bvec{l}_{\bvec k, \bq} := J_{{\bvec{n}}}\left(\frac{\bvec k}{2}\right)\bq $ and
$J_{{\bvec{n}}}\left(\frac{\bvec k}{2}\right)$ denotes the Jacobian matrix of the function
$\bvec{n}_{\bvec k}$ evaluated at $\frac{\bvec k}{2}$, and $c_{\bvec k,\bq} :=
\frac{{\bvec{n}}_{\frac{\bvec k}{2}}}{|{\bvec{n}}_{\frac{\bvec k}{2}}|} \cdot
\bvec{l}_{\bvec k, \bq}$ (the proof of Eq. \eqref{eq:approxU0} is given
in Appendix \ref{sec:proof-eq.-eqref}).
By introducing the transverse field operators
\begin{align}
\label{eq:transverse}
\begin{split}
\tilde{\bvec{F}}_T(\bvec k,t) :=\tilde{\bvec{F}}(\bvec k,t)-
\left(\frac{\bvec{n}_{\frac{\bvec k}{2}}}{|\bvec{n}_{\frac{\bvec k}{2}}|} \cdot
\tilde{\bvec{F}}(\bvec k,t) \right)
\frac{\bvec{n}_{\frac{\bvec k}{2}}}{|\bvec{n}_{\frac{\bvec k}{2}}|}, \\
\bvec{F}_T(\bvec k,t) := \bvec{F}(\bvec k,t) -
\left(\frac{\bvec{n}_{\frac{\bvec k}{2}}}{|\bvec{n}_{\frac{\bvec k}{2}}|} \cdot
{\bvec{F}}(\bvec k,t) \right)
\frac{\bvec{n}_{\frac{\bvec k}{2}}}{|\bvec{n}_{\frac{\bvec k}{2}}|},
\end{split}
\end{align}
and inserting Eq. \eqref{eq:approxU0} into Eq. \eqref{eq:ftilde} we
get
(see Appendix \ref{sec:proof-eq.-eqref-1})
\begin{align}
\label{eq:transverse2}
\begin{split}
\tilde{\bvec{F}}_T(\bvec k,t) =
{\bvec{F}}_T(\bvec k)
+
O \big( \tfrac{\bar{q}(\bvec k)}{|\bvec{n}_{\frac{\bvec k}{2}}|} \big).
\end{split}
\end{align}
Finally, combining Eq. \eqref{eq:transverse2}
with Eq. \eqref{eq:evolutionwithrotation} we obtain a closed
expression for the time evolution of the operator
${\bvec{F}_T}(\bvec k)$,
\begin{align}
\begin{split}
\label{eq:maxwell}
\bvec{F}_T(\bvec k,t) =
\operatorname{Exp}\left(-2i\, \bvec{n}_{\tfrac{\bvec k}{2}} \cdot
\bvec{J}\,t\right)
{\bvec{F}_T}(\bvec k) +\Lambda(\bvec k,t),
\end{split}
\end{align}
where $\|\Lambda(\bvec k,t)\|= O \big( \tfrac{\bar{q}(\bvec k)}{|\bvec{n}_{\frac{\bvec k}{2}}|} \big)$. Taking
the time derivative in Eq. \eqref{eq:maxwell} and recalling the definition \eqref{eq:transverse} we
obtain
\begin{align}
\begin{split} \label{eq:maxwell2}
&\partial_t\bvec{F}_T(\bvec k,t) =
2\bvec{n}_{\frac{\bvec k}{2}} \times \bvec{F}_T(\bvec k,t)+
\partial_t \Lambda(\bvec k,t)\\
&2\bvec{n}_{\frac{\bvec k}{2}} \cdot \bvec{F}_T(\bvec k,t) = 0,
\end{split}
\end{align}
where $\|\partial_t \Lambda(\bvec k,t)\|=O \big(
\tfrac{\bar{q}(\bvec k)}{|\bvec{n}_{\frac{\bvec k}{2}}|} \big)$ (see Appendix \ref{sec:proof-eq.-eqref-1}).
Let now $\bvec{E}$ and
$\bvec{B}$ be two Hermitian operators defined by the relation
\begin{align}
\label{eq:electric and magnetic field}
&\bvec{E}:=|{\bvec n}_{\tfrac{\bvec k}2}|(\bvec{F}_T+\bvec{F}_T^\dag),\quad\bvec{B}:=i|{\bvec n}_{\tfrac{\bvec k}2}|(\bvec{F}_T^\dag-\bvec{F}_T),\nonumber\\
&2|{\bvec n}_{\tfrac{\bvec k}2}|\bvec{F}_T=\bvec{E} + i \bvec{B}.
\end{align}
We now show that, in the limit of small wavevectors $\bvec k$, interpreting $\bvec{E}$ and
$\bvec{B}$ as the electric and magnetic fields recovers the usual vacuum Maxwell equations.
For $|\bvec k| \ll 1$ one has $2\bvec{n}_{\frac{\bvec k}{2}} \simeq \bvec k/\sqrt3$, and
Eq. \eqref{eq:maxwell2} becomes
\begin{align}
\begin{split} \label{eq:maxwell3}
&\partial_t\bvec{F}_T(\bvec k,t) =
\frac{\bvec k}{\sqrt3} \times \bvec{F}_T(\bvec k,t)
\\
&\bvec k\cdot \bvec{F}_T(\bvec k,t) = 0
\end{split} \; .
\end{align}
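A quick numerical check of Eq.~\eqref{eq:maxwell3} (a sketch, not part of the paper): for a single mode the map $\bvec v\mapsto\tfrac{\bvec k}{\sqrt3}\times\bvec v$ has the circularly polarized combinations $\bvec u^1\pm i\bvec u^2$ as eigenvectors with eigenvalues $\mp i\omega$, $\omega=|\bvec k|/\sqrt3$, so each transverse mode simply acquires a phase $e^{\mp i\omega t}$, while the longitudinal component is frozen:

```python
import numpy as np

k = np.array([0.0, 0.0, 0.3])               # example wavevector along z
omega = np.linalg.norm(k) / np.sqrt(3)      # small-k dispersion: omega = |k|/sqrt(3)

# matrix of the linear map v -> (k/sqrt(3)) x v
a = k / np.sqrt(3)
K = np.array([[0, -a[2], a[1]],
              [a[2], 0, -a[0]],
              [-a[1], a[0], 0]])

u1 = np.array([1.0, 0.0, 0.0])              # right-handed transverse pair:
u2 = np.array([0.0, 1.0, 0.0])              # (u1 x u2) . k > 0

plus, minus = u1 + 1j*u2, u1 - 1j*u2        # circular polarizations
```

With $\partial_t\bvec F_T=K\bvec F_T$, the mode along `plus` evolves as $e^{-i\omega t}$ and the one along `minus` as $e^{+i\omega t}$, matching the two terms of the solved evolution discussed in the phenomenological analysis.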
As in Ref. \cite{d2013derivation}, we recover physical dimensions from the previous dimensionless
equations using Planck units, taking $c:=l_P/t_P$, time measured in Planck times, $t\to t\, t_P$,
and lengths measured in Planck lengths, $x\to \sqrt{3}\,l_P\, x$, with $\sqrt{3}\,l_P$ corresponding to
the distance between neighboring cells. Then Eq. \eqref{eq:maxwell3} becomes
\begin{align}
\begin{split} \label{eq:maxwellposition}
&\partial_t\bvec{F}_T(\bvec{x},t) =
-ic\nabla\times \bvec{F}_T(\bvec{x},t)
\\
&\nabla \cdot \bvec{F}_T(\bvec{x},t) = 0
\end{split}
\end{align}
which in terms of $\bvec{E}$ and $\bvec{B}$ become the vacuum Maxwell's equations
\begin{align}
\label{eq:maxwellstandard}
\begin{array}{lcl}
\nabla \cdot \bvec{E}=0 & &\nabla \cdot \bvec{B} =0\\
\partial_t \bvec{E} = c\nabla \times \bvec{B} && \partial_t \bvec{B} = -c\nabla \times \bvec{E} \;\;.
\end{array}
\end{align}
Introducing the polarization vectors $\bvec{u}_{\bvec k}^1$ and $\bvec{u}_{\bvec k}^2$ satisfying
\begin{equation}
\bvec{u}^i_{\bvec k} \cdot \bvec n_{\bvec k} =\bvec u^1_{\bvec k}\cdot\bvec u^2_{\bvec k}= 0,\ |\bvec u^i_{\bvec k}|=1,\
(\bvec u^1_{\bvec k}\times\bvec u^2_{\bvec k})\cdot\bvec n_{\bvec k}>0,
\end{equation}
we can now interpret the following operators
\begin{align}
\gamma^i(\bvec k) &:= \bvec{u}^i_{\bvec k}\cdot\bvec{F}(\bvec k,0),\quad i=1,2,
\label{eq:polarization}
\end{align}
as the two polarization operators of the field. In light of this analysis, one can conclude
that the automaton's discrete evolution leads to modified Maxwell equations in the form of Eqs.
\eqref{eq:maxwell2}, with the electromagnetic field rotating around $\bvec n_{\tfrac{\bvec k}{2}}$ instead
of $\bvec k$.
Moreover, since in this framework the photon is a composite particle, the internal dynamics of the
constituent Fermions is responsible for an additional term $O \big(
\tfrac{\bar{q}(\bvec k)}{|\bvec{n}_{\frac{\bvec k}{2}}|} \big)$. As a consequence of this distortion, one
can immediately see that the electric and magnetic fields are no longer exactly transverse to the
wave vector: a longitudinal component of the polarization appears (see Fig.
\ref{fig:elmwave}). In Section \ref{sec:phen-analys} we discuss the new phenomenology that emerges
from Eqs. \eqref{eq:maxwell2}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=8cm ]{graph_wave_QCATOL-beta-c.pdf}
\caption{(colors online) A
rectilinearly polarized electromagnetic wave. We notice that the
polarization plane (in green) is slightly tilted with respect to the
plane orthogonal to $\bvec k$ (in gray).}
\label{fig:elmwave}
\end{center}
\end{figure}
\section{Photons as composite Bosons}\label{sec:photons-as-composite}
In the previous section we proved that the operators defined
in Eq. \eqref{eq:electric and magnetic field} dynamically evolve
according to the free Maxwell equations. However,
in order to interpret
$\bvec{E}(\bvec k)$ and $\bvec{B}(\bvec k)$ as the electric and magnetic fields we need
to show that they obey the correct commutation relations.
The aim of this section is to show that, in a regime of low energy
density, the polarization operators defined in Eq. \eqref{eq:polarization}
actually behave as independent Bosonic modes.
In order to avoid the technicalities of the continuum, we now
suppose the system to be confined in a finite volume $\mathcal{V}$.
The finiteness of the volume introduces a discretization of
momentum space, and the operators $\psi(\bvec k)$, $\varphi(\bvec k)$
obey Eq.~\eqref{eq:commutationrel} with the periodic Dirac delta replaced by the Kronecker delta.
All the integrals over the Brillouin zone are then
replaced by sums, and the polarization operators of Eq. \eqref{eq:polarization} become
\begin{equation}
\gamma^i(\bvec k) :=
\sum_{\bq}
f_{\bvec k}(\bq)
{\varphi}^T \left(\tfrac{\bvec k}{2}-\bq\right)
(
\bvec{u}^i_{\tfrac{\bvec k}2}
\cdot
\boldsymbol{\sigma}
)
{\psi} \left(\tfrac{\bvec k}{2}+\bq\right).
\end{equation}
These operators can be simply expressed in terms of the functions $\gamma_{\alpha,\beta}(\bvec{k})$ defined as follows
\begin{align}
\gamma_{\alpha,\beta}(\bvec{k}) := \sum_{\bvec{q}}
{f}_{\bvec k}(\bvec{q})
\varphi_\alpha
\left(\tfrac{\bvec k}{2}-\bq\right)
\psi_\beta
\left(\tfrac{\bvec k}{2}+\bq\right),
\nonumber \\
\alpha, \beta = R,L.
\label{eq:basicobjects}
\end{align}
Since the polarization operators $\gamma^i(\bvec k)$ are linear
combinations of the $\gamma_{\alpha,\beta}(\bvec k)$, it is useful to compute
the commutation relations of the latter. We have
\begin{align}
\label{eq:basiccommutation}
&[\gamma_{\alpha,\beta}(\bvec{k}) ,\gamma_{\alpha',\beta'}(\bvec{k}') ]_{-} = 0,\nonumber\\
&[\gamma_{\alpha,\beta}(\bvec{k})
,\gamma^\dagger_{\alpha',\beta'}(\bvec{k}') ]_{-} =
\delta_{\alpha,\alpha'} \delta_{\beta,\beta'}
\delta_{\bvec{k},\bvec{k}'} -\Delta_{\alpha,\alpha',\beta,\beta',\bvec k,\bvec k'},\nonumber\\
&\Delta_{\alpha,\alpha',\beta,\beta',\bvec k,\bvec k'}:=\left( \delta_{\alpha,\alpha'}H^+_{\psi, \beta', \beta,\bvec{k}',\bvec{k}}+ \delta_{\beta,\beta'}H^-_{\varphi, \alpha', \alpha,\bvec{k}',\bvec{k}}\right), \nonumber\\
&H^\pm_{\eta, \alpha', \alpha,\bvec{k}',\bvec{k}} := \sum_{\bvec{q}}{f}_{\bvec k}(\bvec{q}) {f}_{\bvec k'}^*(\tfrac{\bvec{k}' - \bvec{k}}2+\bvec{q})\nonumber\\
&\qquad\times\eta_{\alpha'}^\dagger \left(\tfrac{ 2\bvec{k}' -
\bvec{k}}2\pm\bvec{q}\right) \eta_{\alpha} \left(
\tfrac{\bvec{k}}2\pm{\bvec{q}}\right).
\end{align}
The operators
$\gamma_{\alpha,\beta}$ thus fail to be Bosonic annihilation operators
because of
the appearance of the operator
$\Delta_{\alpha,\alpha', \beta, \beta',\bvec{k},\bvec{k}'}$ in the commutation relation
\eqref{eq:basiccommutation}.
However, if we restrict to the subset $\mathcal{S}$ of states such that
$\operatorname{Tr}[\rho H^+_{\psi, \beta', \beta,\bvec{k}',\bvec{k}}] \simeq 0$ and
$\operatorname{Tr}[\rho H^-_{\varphi, \alpha', \alpha,\bvec{k}',\bvec{k}}] \simeq 0$ for all $\rho \in
\mathcal{S}$, we can make the approximation
$[\gamma_{\alpha,\beta}(\bvec{k}) ,\gamma^\dagger_{\alpha',\beta'}(\bvec{k}')
]_{-} \simeq
\delta_{\alpha,\alpha'} \delta_{\beta,\beta'}
\delta_{\bvec{k},\bvec{k}'} $.
If we consider the modulus of the expectation value
of the operators
$H^\pm_{\eta, \beta', \beta,\bvec{k}',\bvec{k}} $
we have
\begin{align}
\label{eq:boundfordelta}
&|\< H^\pm_{\eta, \beta', \beta,\bvec{k}',\bvec{k}} \>|
\leq\sum_{\bvec{q}}
\left| {f}_{\bvec k}(\bvec{q}) \right |
\left| {f}_{\bvec k'}^*(\tfrac{\bvec{k}' - \bvec{k}}2+\bvec{q}) \right |\nonumber\\
&\quad\times
\left|\left\< \eta_{\beta'}^\dagger \left( \tfrac{2\bvec{k}' -
\bvec{k}}2\pm\bvec{q}\right)
\eta_{\beta} \left(\tfrac{ \bvec{k}}2\pm\bvec{q}\right )
\right \> \right | \leq\nonumber\\
&\qquad\sqrt{
\<
\Gamma^\pm_{\eta,\beta,\bvec{k}}
\>
\<
\Gamma^\pm_{\eta,\beta',\bvec{k}'}
\>
},\\
&\Gamma^\pm_{\eta,\beta,\bvec{k}} =
\sum_{\bvec{q}}
\left|{f}_{\bvec k}(\bvec{q}) \right |^2
\eta^\dagger_{\beta} \left(\tfrac{ \bvec{k}}2\pm\bvec{q}\right )
\eta_{\beta} \left(\tfrac{ \bvec{k}}2\pm\bvec{q}\right ),
\end{align}
where we repeatedly applied the Schwarz inequality.
The operators
$\Gamma^-_{\varphi,\beta,\bvec{k}}$ and
$\Gamma^+_{\psi, \alpha,\bvec{k}}$ can be interpreted as
number operators ``shaped'' by the probability distribution
$|{f}_{\bvec k}(\bvec{q})|^2$.
If we suppose
$| {f}_{\bvec k}(\bvec{q})|^2$ to be a constant function
over a region $\Omega_{\bvec k}$ which contains $N_{\bvec k}$ modes,
i.e. $|{f}_{\bvec k}(\bvec{q})|^2 = \tfrac{1}{N_{\bvec k}}$ if $\bvec{q}\in \Omega_{\bvec k}$
and $|{f}_{\bvec k}(\bvec{q})|^2 = 0 $ if $\bvec{q} \not\in \Omega_{\bvec k}$, we have
\begin{align*}
\left\<
\Gamma^+_{\psi, \alpha,\bvec{k}}
\right\> =
\frac{1}{N_{\bvec k}}\sum_{\bvec{q}\in\Omega_{\bvec k}}
\left\<
\psi^\dagger_{\alpha} \left( \tfrac{\bvec{k}}2+\bvec{q}\right )
\psi_{\alpha} \left(\tfrac{ \bvec{k}}2+\bvec{q}\right )
\right\> = \frac{M_{\psi,\alpha,\bvec{k}}}{N_{\bvec k}}
\end{align*}
where we denoted by $M_{\psi,\alpha,\bvec{k}}$ the number of
$\psi_{\alpha}$ Fermions in the region $\Omega_{\bvec k}$ (clearly the same
result applies to $\Gamma^-_{\varphi, \beta,\bvec{k}}$). Then, if we
consider states $\rho$ such that $M_{\xi,\chi,\bvec{k}}/ N_{\bvec k} \leq
\varepsilon$ for all fields $\xi=\psi,\varphi$, components $\chi$, and wavevectors $\bvec{k}$, with $\varepsilon \ll 1$,
we can safely assume $[\gamma_{\alpha,\beta}(\bvec{k})
,\gamma^\dagger_{\alpha',\beta'}(\bvec{k}') ]_{-} =
\delta_{\alpha,\alpha'} \delta_{\beta,\beta'}
\delta_{\bvec{k},\bvec{k}'} $ in Eq. \eqref{eq:basiccommutation}, which
after an easy calculation gives
\begin{align}
\label{eq:commutationpolarization}
[\gamma^i (\bvec{k}),{\gamma^j}^\dag (\bvec{k}')]_- =
\delta_{i,j} \delta_{\bvec{k},\bvec{k}'},\quad i,j = 0,1,2,3.
\end{align}
In Eq. \eqref{eq:commutationpolarization}, besides the previously defined transverse polarizations
$\gamma^1 (\bvec{k}) $ and
$\gamma^2 (\bvec{k}) $,
we considered also the ``longitudinal''
polarization operator
$\gamma^3 (\bvec{k}) := \sum_{\bvec{q}}
f_{\bvec k}(\bvec{q})
{\varphi}^T
\left( \tfrac{\bvec{k}}2 -\bvec{q} \right)
(
\bvec{e}_{\tfrac{\bvec k}2} \cdot
\boldsymbol{\sigma}
)
{\psi}
\left(\tfrac{
\bvec{k}}2 +\bvec{q}
\right)$, where $\bvec e_{\bvec k}:=\bvec n_{\bvec k}/|\bvec n_{\bvec k}|$,
and the ``timelike'' polarization operator
$\gamma^0 (\bvec{k}) := \sum_{\bvec{q}}
{f}_{\bvec k}(\bvec{q})
{\varphi}^T
\left( \tfrac{ \bvec{k}}2 - \bvec{q} \right)
I
{\psi}
\left(
\tfrac{\bvec{k}}2+\bvec{q}
\right)$.
This result tells us that, as long as we restrict ourselves to states in
$\mathcal{S}$, we are allowed to interpret the operators
$\gamma^i(\bvec{k})$ as $4$ independent Bosonic field modes and then
to interpret $\bvec{E}$ and $\bvec{B}$ defined in
Eq. \eqref{eq:electric and magnetic field}
as the electric and the magnetic field operators.
This fact, together with the evolution given by Eq. \eqref{eq:maxwell3},
proves that we have realized a consistent model of quantum electrodynamics
in which the photons are composite particles made of correlated
Fermions whose evolution is described by a cellular automaton.
\subsection{Composite Bosons and entanglement}\label{sec:comp-bosons-entangl}
The results of this section are in agreement with the recent works
\cite{combescot2001new, rombouts2002maximum, avancini2003compositeness,combescot2003n} which studied
the conditions under which a pair of Fermionic fields can be considered as a Boson. In Refs.
\cite{PhysRevA.71.034306, PhysRevLett.104.070402} it was shown that a sufficient condition is that
the two Fermionic fields $\psi,\phi$ are sufficiently entangled. More precisely, for a composite
Boson $c := \sum_{i} f(i) \psi_i \phi_i $, $\sum_{i} |f(i)|^2=1$ one has
\begin{equation}
[c,c^\dag] = 1- (\Gamma_\psi + \Gamma_{\phi}),
\end{equation}
where
\begin{equation}
\Gamma_\psi = \sum_{i} |f(i)|^2 \psi^\dag_i \psi_i,\quad\Gamma_\phi = \sum_{i} |f(i)|^2 \phi^\dag_i \phi_i,
\end{equation}
and in Ref.~\cite{PhysRevLett.104.070402} it was shown that the following bound holds
\begin{equation}
\forall N \geq 1,\quad NP \geq \<{N}|\Gamma_\psi |{N} \>\geq P,
\end{equation}
and the same holds for $\Gamma_\phi$, where $P = \sum_{i=1}^N |f(i)|^4$ is the purity of the reduced
state of a single particle and $|{N}\> = \tfrac{1}{\sqrt{N!}} \chi_N(c^\dag)^N |{0}\>$ ($\chi_N$ is
a normalization constant). From this result, the authors of Ref. \cite{PhysRevLett.104.070402}
concluded that, as long as $P,NP \approx 0$, $c$ and $c^\dag$ can be safely considered as a Bosonic
annihilation/creation pair. Our criterion, which in this simplified scenario restricts the state $\rho$ to satisfy $\operatorname{Tr}[\rho
\Gamma_\psi],\operatorname{Tr}[\rho\Gamma_\phi] \leq \varepsilon$, reduces to the criterion
of Refs. \cite{PhysRevA.71.034306, PhysRevLett.104.070402} for $\rho=|N\>\<N|$. Moreover it is
interesting to show that the technique applied in the derivation of Eq.~\eqref{eq:boundfordelta} can
be used to answer an open question raised in Ref. \cite{PhysRevLett.104.070402}.
The conjecture is that, given two different composite Bosons $c_1 = \sum_{i} f_1(i) \psi_i \phi_i$ and
$c_2 = \sum_{i} f_2(i) \psi_i \phi_i$ such that $ \sum_{i} f_1(i) f_2(i)^* =0$, the commutation
relation $[c_1,c_2^\dag ]$ should vanish as the two purities $P_1$ and $P_2$ ($P_a = \sum_{i=1}^N
|f_a(i)|^4$) decrease. Since $[c_1,c_2^\dag ] = - \sum_i f_1(i) f_2(i)^* (\psi_i^\dag \psi_i +
\phi_i^\dag \phi_i )$ we have
\begin{equation}
|\< [c_1,c_2^\dag ]\>| \leq \sum_{x=\psi,\phi} \sqrt{\<\Gamma^{(1)}_x \>
\<\Gamma^{(2)}_x \> },\quad \Gamma^{(a)}_x := \sum_{i} |f_a(i)|^2 x^\dag_i x_i,
\end{equation}
by the same reasoning that we followed in the derivation of Eq. \eqref{eq:boundfordelta}.
Combining this last inequality with the condition $ \< N |\Gamma^{(a)}_x | N\> \leq NP$ we have
$|\<N| [c_1,c_2^\dag ]|N\>| \leq 2NP$, which proves the conjecture.
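The operator identity $[c,c^\dag]=1-(\Gamma_\psi+\Gamma_\phi)$ above can be verified exactly in a small toy model, representing the Fermionic modes as Jordan--Wigner matrices (a sketch, not part of the paper; the mode count and the weights $f(i)$ are arbitrary choices):

```python
import numpy as np
from functools import reduce

def jordan_wigner(n_modes):
    """Annihilation operators for n_modes fermionic modes as 2^n x 2^n matrices."""
    a = np.array([[0.0, 1.0], [0.0, 0.0]])
    Z = np.diag([1.0, -1.0])
    I = np.eye(2)
    return [reduce(np.kron, [Z]*j + [a] + [I]*(n_modes - j - 1))
            for j in range(n_modes)]

N = 3                                    # modes per constituent field
ops = jordan_wigner(2*N)
psi, phi = ops[:N], ops[N:]              # the two fermion species

rng = np.random.default_rng(0)
f = rng.normal(size=N) + 1j*rng.normal(size=N)
f /= np.linalg.norm(f)                   # normalization: sum_i |f(i)|^2 = 1

c = sum(fi * ps @ ph for fi, ps, ph in zip(f, psi, phi))
comm = c @ c.conj().T - c.conj().T @ c   # the commutator [c, c^dagger]
G_psi = sum(abs(fi)**2 * p.conj().T @ p for fi, p in zip(f, psi))
G_phi = sum(abs(fi)**2 * p.conj().T @ p for fi, p in zip(f, phi))
```

On states with low occupation (e.g. the vacuum) the correction $\Gamma_\psi+\Gamma_\phi$ annihilates the state and $c$ behaves exactly Bosonically, as the criterion above requires.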
\section{Phenomenological analysis}\label{sec:phen-analys}
We now investigate the new phenomenology predicted by
the modified Maxwell equations \eqref{eq:maxwell2} and the
modified commutation relations \eqref{eq:basiccommutation},
with a particular focus on practically testable effects.
Let us first have a closer look at the
dynamics described by Eq. \eqref{eq:maxwell}.
If $\bvec{u}_+$ and $\bvec{u}_-$
are the two eigenvectors
of the matrix $\operatorname{Exp} [ -2i( \bvec{n}_{\frac{\bvec k}{2}}
\cdot \bvec{J} ) t ]$, corresponding to the eigenvalues
$e^{\mp i2 |\bvec{n}_{\frac{\bvec k}{2}}|t} $,
Eq. \eqref{eq:maxwell} can be written as
\begin{align}
\label{eq:maxwellsolved}
\bvec{F}_T(\bvec k,t) =
e^{-i2|\bvec{n}_{\frac{\bvec k}{2}}|t} \gamma_+(\bvec k) \bvec{u}_+
+
e^{i2|\bvec{n}_{\frac{\bvec k}{2}}|t} \gamma_-(\bvec k) \bvec{u}_-
\end{align}
where the corresponding polarization operators
$\gamma_\pm(\bvec k)$ are defined according to Eq. \eqref{eq:polarization}.
According to Eq. \eqref{eq:maxwellsolved}
the angular frequency of the electromagnetic waves
is given by the modified dispersion relation
\begin{align}
\label{eq:modifieddisprelmax}
\omega(\bvec k) = 2 | \bvec{n}_{\tfrac{\bvec k}{2}} | .
\end{align}
The usual relation $\omega(\bvec k) = | \bvec k | $
is recovered in the $| \bvec k | \ll 1$ regime.
The speed of light is the group velocity of the electromagnetic
waves, i.e.~the gradient of the dispersion relation. The major consequence
of Eq. \eqref{eq:modifieddisprelmax} is that the speed of light depends on
the value of $\bvec k$, as for Maxwell's equations in a dispersive medium.
The phenomenon of a $\bvec k$-dependent speed of light was already analyzed in the context of
quantum gravity, where many authors considered the hypothesis that the existence of an invariant
length (the Planck scale) could manifest itself in terms of modified dispersion relations
\cite{ellis1992string,lukierski1995classical,Quantidischooft1996,amelino2001testable,PhysRevLett.88.190403}.
In these models the $\bvec k$-dependent speed of light $c(\bvec k)$, at the leading order in $k :=| \bvec k |$,
is expanded as $c(\bvec k) \approx 1 \pm \xi k^{\alpha}$, where $\xi $ is a numerical factor of order
$1$, while $\alpha$ is an integer. This is exactly what happens in our framework, where the
intrinsic discreteness of the quantum cellular automata $A^\pm$ leads to the dispersion relation of
Eq. \eqref{eq:modifieddisprelmax}, from which the following $\bvec k$-dependent speed of light
\begin{align} \label{eq:freqdepsol}
c^\mp(\bvec k) \approx 1 \pm 3\frac{k_x k_y k_z}{|\bvec k|^2} \approx
1 \pm \tfrac{1}{\sqrt{3}}k,
\end{align}
can be obtained by computing the modulus of the group velocity and power expanding in $\bvec k$ with the
assumption $ k_x = k_y = k_z = \tfrac{1}{\sqrt{3}} k $ ($k = |\bvec k|$). It is interesting to observe
that depending on the automaton, $A^{+}(\bvec k)$ or $A^{-}(\bvec k)$ in Eq. \eqref{eq:weylautomata2}, we
obtain corrections to the speed of light with opposite signs. Moreover the correction is not
isotropic and can be superluminal, though uniformly bounded for all $\bvec k$, as shown for the Weyl
automaton in Ref. \cite{d2013derivation}.
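The dispersion relation \eqref{eq:modifieddisprelmax} can also be probed numerically. The sketch below (not part of the paper) only checks the $k\to 0$ limit of the group speed and the opposite signs of the $A^{\pm}$ corrections; we deliberately do not assert the exact coefficient of the leading correction, which depends on the rescaling conventions:

```python
import numpy as np

def omega(k, sign=+1):
    """Automaton dispersion omega(k) = 2|n_{k/2}| = 2 arccos(d^sign(k/2))."""
    u = np.asarray(k) / (2.0*np.sqrt(3.0))   # the angles k_alpha/sqrt(3) at k/2
    d = np.cos(u).prod() + sign*np.sin(u).prod()
    return 2.0*np.arccos(d)

def group_speed(k, sign=+1, h=1e-6):
    """Rescaled |grad omega|; the sqrt(3) accounts for the cell spacing sqrt(3) l_P."""
    g = np.array([(omega(k + h*e, sign) - omega(k - h*e, sign))/(2*h)
                  for e in np.eye(3)])
    return np.sqrt(3.0)*np.linalg.norm(g)

k_dir = np.ones(3)/np.sqrt(3)               # the diagonal direction k_x = k_y = k_z
c_plus = group_speed(0.3*k_dir, sign=+1)    # A^+ : subluminal correction
c_minus = group_speed(0.3*k_dir, sign=-1)   # A^- : superluminal correction
c_small = group_speed(1e-4*k_dir, sign=+1)  # relativistic regime: speed -> 1
```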
Models leading to modified dispersion relations have recently received attention because they allow one
to derive falsifiable predictions from the Planck-scale hypothesis. These can be experimentally tested
in the astrophysical domain, where the tiny corrections to the usual relativistic dynamics can be
magnified by the huge time of flight. For example, observations of the arrival times of pulses
originated at cosmological distances, as in some $\gamma$-ray
bursts \cite{amelino1998tests,abdo2009limit,vasileiou2013constraints,amelino2009prospects}, are now
approaching a sufficient sensitivity to detect corrections to the relativistic dispersion relation
of the same order as in Eq. \eqref{eq:freqdepsol}.
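As an illustration, a back-of-the-envelope estimate of the accumulated arrival-time shift for a linear-in-$k$ correction (all numerical inputs below are our own illustrative assumptions, not values taken from the cited analyses):

```python
# Order-of-magnitude estimate of the arrival-time shift for a correction
# c(k) ~ 1 +/- xi*k, with k the photon momentum in Planck units.
E_photon_GeV = 10.0        # assumed GRB photon energy
E_planck_GeV = 1.22e19     # Planck energy
L_m = 3.1e25               # assumed baseline, roughly 1 Gpc
c_m_s = 3.0e8              # speed of light
xi = 1.0/3**0.5            # coefficient suggested by Eq. (freqdepsol)

k = E_photon_GeV / E_planck_GeV        # dimensionless momentum
dt = (L_m / c_m_s) * xi * k            # leading-order time shift, in seconds
print(f"arrival-time shift ~ {dt:.1e} s")
```

A shift of a few hundredths of a second over a Gpc baseline is indeed at the edge of the sensitivity quoted for current $\gamma$-ray burst timing analyses.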
\begin{figure}[ht]
\begin{center}
\includegraphics[width=8cm ]{graph_vectors_QCATOL-rc2.pdf}
\caption{(colors online) The figure shows the vector $2\bvec{n}_{\tfrac{\bvec k}{2}}$ (in green),
which is orthogonal to the polarization plane, the wavevector $\bvec k$ (in red), and the group
velocity $\nabla \omega (\bvec k)$ (in blue) as functions of $\bvec k$ for the value $|\bvec k|= 0.8$ and
different directions. Notice that the three vectors are not parallel and the angles between
them depend on $\bvec k$. Such anisotropic behavior can be traced back to the anisotropy of the
dispersion relation of the Weyl automaton.}
\label{fig:relvectors}
\end{center}
\end{figure}
A second distinguishing feature of Eq. \eqref{eq:maxwell2} is that the polarization plane is neither
orthogonal to the wavevector, nor to the group velocity, which means that the electromagnetic waves
are no longer exactly transverse (see Figs.~\ref{fig:elmwave} and \ref{fig:relvectors}). However
the angle $\theta$ between the polarization plane and the plane orthogonal to $\bvec k$ or
$\nabla\omega(\bvec k)$ is of the order $\theta \approx 2k$, which gives $10^{-15}\,\mathrm{rad}$ for a
$\gamma$-ray wavelength, a precision which is not reachable with present technology. Since for a
fixed $\bvec k$ the polarization plane is constant, exploiting greater distances and longer times does
not help in magnifying this deviation from the usual electromagnetic theory.
Finally, the third phenomenological consequence of our modelling is that, since the photon is
described as a composite Boson, deviations from the usual Bosonic statistics are expected. As we
proved in Section \ref{sec:photons-as-composite}, the choice of the function ${f}_{\bvec k}(\bvec{q})$
determines the regime where the composite photon can be approximately treated as a Boson. However,
independently of the details of the function ${f}_{\bvec k}(\bvec{q})$, one can easily see that a Fermionic
saturation of the Boson is not visible: e.g., for the most powerful laser \cite{dunne2007high} one
has approximately an Avogadro number of photons in $10^{-15}$cm${}^3$, whereas in the same volume
one has around $10^{90}$ Fermionic modes.
Another test for the composite nature of photons is provided by the prediction of
deviations from the Planck distribution in blackbody
radiation experiments. A similar analysis was carried out in
Ref. \cite{perkins2002quasibosons}, where the author showed that the predicted
deviation from Planck's law is less than one part in $10^{8}$,
well beyond the sensitivity of present-day experiments.
\section{Conclusions}
\label{sec:conclusions}
In this paper we derive a complete theoretical framework for the free quantum radiation field at the
Planck scale, based on a quantum Weyl automaton obtained from first principles in Ref.
\cite{d2013derivation}. Differently from previous arguments based solely on the discreteness of geometry,
the present approach provides a fully quantum theoretical treatment that allows for precise
observational predictions involving electromagnetic radiation, e.g.\ about deep-space
astrophysical sources. Within the present framework the electromagnetic field emerges from two
correlated massless Fermionic fields whose evolution is given by the Weyl automaton. Then the
electric and magnetic fields are described in terms of bilinear operators of the two constituent
Fermionic fields. This framework recalls the so-called ``neutrino theory of light'' considered in
Refs. \cite{de1934nouvelle,jordan1935neutrinotheorie,kronig1936relativistically,perkins1972statistics,perkins2002quasibosons}.
The automaton evolution leads to a set of modified Maxwell's equations whose dynamics differs from
the usual one for ultra-high wavevectors. This model predicts a longitudinal component of the
polarization and a $\bvec{k}$-dependent speed of light. This last effect could be observed by measuring
the arrival times of light originated at cosmological distances, like in some $\gamma$-ray bursts,
exploiting the huge distance scale to magnify the tiny corrective terms to the relativistic
kinematics. This prediction agrees with the one presented in Ref. \cite{amelino1998tests} where
$\gamma$-ray bursts were for the first time considered as tests for physical models with
non-Lorentzian dispersion relations. Within this perspective, our quantum cellular automaton singles
out a specific modified dispersion relation as emergent from a Planck-scale microscopic dynamics.
Another major feature of the proposed model is the composite nature of the photon, which leads to a
modification of the Bosonic commutation relations. Because of the Fermionic structure of the photon
we expect that the Pauli exclusion principle could cause saturation effects when a critical energy
density is achieved. However, an order-of-magnitude estimate shows that the effect is very far
from being detectable with current laser technology.
As a spin-off of the analysis of the composite nature of the photon, we proved a result that
strengthens the thesis that the amount of entanglement quantifies whether a pair of Fermions can be
treated as a Boson \cite{PhysRevA.71.034306,PhysRevLett.104.070402}. Indeed, we showed that, even in
the case of several composite Bosons, the amount of entanglement of each pair is a good measure of
the extent to which the different pairs of Fermions can be treated as independent Bosons. This question was
proposed as an open problem in Ref. \cite{PhysRevLett.104.070402}.
The results of this work leave a lot of room for future investigation. The major question is the
study of how symmetry transformations can be represented in the model. The scenario we considered
is restricted to a fixed reference frame and in order to properly recover the standard theory we
should discuss how the Poincar\'{e} group acts on our physical model. This analysis could be done
following the lines of Ref. \cite{bibeau2013doubly}, where it is shown how a QCA dynamical model is
compatible with a deformed relativity model \cite{amelino2002relativity,PhysRevLett.88.190403} which exhibits a non-linear
action of the Poincar\'{e} group.
\acknowledgements{
This work has been supported in part by the Templeton Foundation under
the project ID\# 43796 {\em A Quantum-Digital Universe}. }
\section{Introduction}
Momentum methods play a crucial role in numerous areas, including machine learning~\cite{Lin2020}, signal processing~\cite{beck2009fast}, and control~\cite{Qu}.
Typical momentum techniques, including the heavy-ball method~(HB)~\cite{polyak1964some} and Nesterov's accelerated gradient method~(NAG)~\cite{nesterov1983method}, improve upon gradient descent~(GD) on convex tasks both theoretically and empirically.
In the case of a quadratic strongly convex problem, HB has an accelerated convergence rate compared to GD~\cite{polyak1964some}, implying that HB requires fewer iterations than GD to reach the same training error.
In 1983, Nesterov~\cite{nesterov1983method} proposed the NAG method and proved that it achieves the optimal convergence rate for convex problems with Lipschitz gradients.
Given the success of momentum methods in convex optimization, they have also been widely adopted in training neural networks for faster convergence~\cite{sutskever2013importance,dozat2016incorporating,ma2018quasi}.
Nowadays, many popular modern methods have taken advantage of momentum techniques, such as Adam~\cite{kingma2014adam}, AMSGrad~\cite{reddi2019convergence}, and AdaBound~\cite{luo2019adaptive}.
In many popular deep learning libraries, momentum methods and their variants are implemented as the default optimizers~\cite{DBLP:conf/nips/PaszkeGMLBCKLGA19, gulli2017deep, DBLP:conf/osdi/AbadiBCCDDDGIIK16}.
Nonetheless, the optimization problem for the neural network is both non-convex and non-smooth due to the use of non-linear activation functions.
In general, it is NP-hard to obtain the global-optimal solution for handling non-convex problems~\cite{DBLP:journals/mp/MurtyK87}.
From a theoretical view, it remains unclear whether momentum methods are capable of learning a neural network with low training loss, let alone the acceleration of momentum methods over GD.
Recently, some theoretical progress has been made towards bridging this gap by analyzing the convergence of (stochastic) GD for training an over-parameterized two-layer ReLU neural network~\cite{du2018gradient,li2018learning,DBLP:conf/icml/DuLL0Z19,DBLP:conf/icml/Allen-ZhuLS19,arora2019fine,song2019quadratic}, where the number of the parameters is much larger than that of the training data.
The main idea is to investigate the trajectory of gradient-based methods via a kernel matrix called neural tangent kernel~(NTK), which was first introduced by Jacot~\cite{NIPS2018_8076} to study the optimization of infinite wide neural networks.
However, most existing literature is concerned with GD.
To our knowledge, there are only two recent papers on the convergence of momentum methods in training neural networks~\cite{wang2020provable, bu2020dynamical}.
Focusing on a discrete-time setting, Wang \textit{et al.}~\cite{wang2020provable} proved HB is able to achieve a linear convergence rate to the global optimum and attains an acceleration beyond GD.
From a continuous-time perspective, Bu \textit{et al.}~\cite{bu2020dynamical} found a similar result for HB.
Nevertheless, their analysis relies on the approximation between a second-order ordinary differential equation~(ODE) and the momentum method with an infinitesimal learning rate, which is far from practical implementations.
Moreover, their result showed that NAG with a time-varying momentum coefficient converges at an asymptotic sublinear rate, which is inferior to GD~\cite{du2018gradient,wu2019global} and HB~\cite{wang2020provable}.
In contrast, when optimizing a neural network, it was empirically observed that NAG outperforms GD and exhibits comparable (even better) performance compared to HB~\cite{sutskever2013importance, DBLP:conf/icml/SchmidtSH21}.
Therefore, the acceleration of NAG remains poorly understood.
In this work, we consider training a randomly initialized over-parameterized two-layer ReLU neural network with NAG.
In fact, there are several variants of NAG proposed by Nesterov~\cite{nesterov2003introductory}.
We focus on NAG with a constant momentum parameter, which is the default scheme of NAG implemented in PyTorch~\cite{DBLP:conf/nips/PaszkeGMLBCKLGA19}, Keras~\cite{gulli2017deep} and TensorFlow~\cite{DBLP:conf/osdi/AbadiBCCDDDGIIK16}.
Inspired by~\cite{du2018gradient,wang2020provable}, we exploit the connection between the NTK and the wide neural network to establish theoretical convergence guarantees for NAG.
Specifically, our contributions can be summarized as follows:
\begin{enumerate}
\item Firstly, we intuitively show that the residual dynamics of an infinite width neural network trained by NAG can be approximated by a linear discrete dynamical system, whose coefficient matrix is determined by NAG's hyperparameters and the NTK matrix.
When the spectral norm of the coefficient matrix is less than 1, NAG is able to attain a global minimum at an asymptotic linear convergence rate according to Gelfand's formula~\cite{1941Normierte}.
\item Secondly, borrowing the idea from the infinite width case, we establish the residual dynamics of NAG in training a finite width neural network.
By analyzing the dynamics, we show that NAG converges to a global minimum at a non-asymptotic rate $(1-\Theta(1/\sqrt{\kappa}))^t$, where $\kappa > 1$ is the condition number of the NTK matrix and $t$ is the number of the iterations.
Moreover, compared to the convergence rate $(1-\Theta(1/{\kappa}))^t$ of GD~\cite{du2018gradient,wu2019global}, our result provides theoretical guarantees for the acceleration of NAG over GD.
\item Thirdly, we demonstrate that NAG exhibits a different residual dynamics compared to HB~\cite{wang2020provable}, but the corresponding coefficient matrix shares a similar spectral norm, which results in a comparable convergence rate as HB.
Our analysis of the residual dynamics induced by NAG is of independent interest and may further extend to study other NAG-like algorithms and the convergence of NAG in training other types of neural network.
\item Finally, we conduct extensive experiments on six benchmark datasets.
In the convergence experiments, we empirically show that NAG outperforms GD and attains comparable and even better performance than HB, which verifies our theoretical results.
Furthermore, using all six datasets, we investigate the impact of the over-parameterization on two quantities related to our proof. The result also suggests the correctness of our findings.
\end{enumerate}
\section{Related work}
\textbf{First-order methods.}
With the growing demands for handling large-scale machine learning problems,
first-order methods that only access the objective values and gradients have become popular due to their efficiency and effectiveness.
For convex problems, GD is the most well-known first-order method, which achieves $\mathcal{O}(1/t)$ convergence rate with $t$ iterations~\cite{nesterov2003introductory}.
Momentum methods make a further step by exploiting the history of gradients.
Among first-order methods, NAG obtains the optimal rate $\mathcal{O}(1/t^2)$ for convex problem with Lipschitz gradient~\cite{nesterov2003introductory}.
Focusing on non-smooth convex problems, Tao \textit{et al.}~\cite{DBLP:journals/tnn/TaoPWT20} proved that NAG improves the convergence rate of stochastic gradient descent by a factor $\log(t)$.
In contrast, Lessard \textit{et al.}~\cite{lessard2016analysis} found a counterexample that HB may fail to find the global optimum for some strongly convex problems.
On the other hand, several works established a connection between discrete-time methods and ODE models.
In the limit of an infinitesimal learning rate, Su \textit{et al.}~\cite{JMLR:v17:15-084} formulated a second-order ODE associated with NAG.
The convergence of NAG is then linked to the analysis of the related ODE solution.
Shi \textit{et al.}~\cite{DBLP:journals/corr/abs-1810-08907} further developed a more accurate high-resolution ODE that helps distinguish between HB and NAG.
For non-convex problems, it is intractable to find a global optimum.
As an alternative, current research considers convergence to a stationary point or a local minimum as the evaluation criterion~\cite{DBLP:journals/mp/CarmonDHS20,DBLP:conf/icml/Jin0NKJ17,DBLP:conf/icml/CarmonDHS17,DBLP:journals/siamjo/DiakonikolasJ21}.
In contrast to previous work, we show a non-asymptotic convergence result for NAG to arrive at a global minimum for a non-convex and non-smooth problem.
\textbf{Convergence theory of over-parameterized neural networks.}
Du \textit{et al.}~\cite{du2018gradient} was the first to prove the convergence rate of GD for training a randomly initialized two-layer ReLU neural network.
Their results showed that GD can linearly converge to a global optimum when the width of the hidden layer is large enough.
Based on the same neural network architecture, Li and Liang~\cite{li2018learning} investigated the convergence of stochastic gradient descent on structured data.
Wu \textit{et al.}~\cite{wu2019global} improved the upper bound of the learning rate in~\cite{du2018gradient}, which results in a faster convergence rate for GD.
On the other hand, Jacot~\textit{et al.}~\cite{NIPS2018_8076} introduced the NTK theory, which establishes a link between the over-parameterized neural network and the neural tangent kernel.
Their result was further extended to investigate the convergence of GD for training different architectures of neural networks, including convolutional~\cite{arora2019exact}, residual~\cite{DBLP:journals/corr/abs-2002-06262} and graph neural networks~\cite{DBLP:conf/nips/DuHSPWX19}.
While these results are mostly concerned with GD, there are few theoretical guarantees for momentum methods.
Recently, some researchers have drawn attention to analyzing the convergence of momentum methods with NTK theory.
Wang \textit{et al.}~\cite{wang2020provable} studied the convergence of HB using a similar setting as~\cite{du2018gradient}.
They proved that, as compared to GD, HB converges linearly to the global optimum at a faster rate.
Bu \textit{et al.}~\cite{bu2020dynamical} established the convergence results of HB and NAG by considering their limiting ODE from a continuous perspective.
Nonetheless, their analysis is asymptotic and far from practice because they use the infinitesimal learning rate and the approximation of Dirac delta function.
In contrast, our analysis focuses on the discrete-time situation and yields a non-asymptotic convergence rate of NAG with a finite learning rate, which is close to the reality.
Furthermore, some researchers applied optimal transport theory to analyze the training dynamics of neural networks in the mean field setting~\cite{DBLP:journals/corr/abs-1804-06561, chizat2018global}, where the evolution of the parameter can be approximated by a distributional dynamics.
However, their results are limited to (stochastic) GD.
\section{Preliminaries}
\subsection{Notation}
In the paper, we use lowercase, lowercase boldface and uppercase boldface letters to represent scalars, vectors and matrices, respectively.
Let $[n]$ denote $\{1, 2, \cdots, n\}$.
For any set $S$, let $|S|$ be its cardinality.
Let $\|\cdot\|$ be the $\ell_2$ norm of the vector or the spectral norm of the matrix, and $\|\cdot\|_F$ be the Frobenius norm.
We denote $\langle \cdot, \cdot \rangle$ as the Euclidean inner product.
We use $\lambda_{max}(\textbf{X})$, $\lambda_{min}(\textbf{X})$ and $\kappa(\textbf{X})$ to denote the largest eigenvalue, smallest eigenvalue and condition number of the matrix $\textbf{X}$, respectively.
For initialization, we use $\mathcal{N}(0, \textit{I})$ and $Rademacher(1/2)$ to denote the standard Gaussian distribution and the Rademacher distribution, respectively.
We adopt $\mathbb{I}\{\omega\}$ as the indicator function, which outputs 1 when the event $\omega$ is true and 0 otherwise.
The training dataset is denoted by $\mathcal{D} = \{\mathbf{x}_i, y_i\}_{i=1}^n$, where $\mathbf{x}_i \in \mathbb{R}^d$ and $y_i \in \mathbb{R}$ are the features and label of the $i$-th sample, respectively.
For two sequences $\{a_n\}$ and $\{b_n\}$, we write $a_n = \mathcal{O}(b_n)$ if there exists a positive constant $0 < C_1 < +\infty$ such that $a_n \leq C_1 b_n$, write $a_n = \Omega(b_n)$ if there exists a positive constant $0 < C_2 < + \infty$ such that $a_n \geq C_2 b_n$, and write $a_n = \Theta(b_n)$ if there exist two positive constants $0 < C_3, C_4 < +\infty$ such that $a_n \leq C_3 b_n$ and $b_n \leq C_4 a_n$.
\subsection{Problem setting}
\label{problem setting}
In this subsection, we first briefly introduce the update procedures of three commonly used methods: GD, HB and NAG.
Then we provide the details of the architecture and the initialization scheme of the neural network.
Finally, we introduce the main idea of the NTK theory~\cite{NIPS2018_8076}.
\textbf{GD, HB and NAG.} In this paper, we mainly focus on the supervised learning problem in the deterministic setting.
Our aim is to train a model $f: \mathbb{R}^d \to \mathbb{R}$ to predict unobserved features correctly.
The parameter of the model is denoted by $\bm{\theta}$.
In order to estimate $\bm{\theta}$, the common approach is to solve the objective function ${L}$ defined on the training dataset $\mathcal{D}$ as:
\begin{eqnarray}
\label{empirical risk}
\mathop{\min}_{\bm{\theta}} {L}(\bm{\theta}) = \frac{1}{n}\sum_{i=1}^n \ell(y_i, f(\bm{\theta}; \mathbf{x}_i)),
\end{eqnarray}
where $\ell: \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ denotes the loss function.
The above problem is also referred to as empirical risk minimization~(ERM).
Meanwhile, current machine learning problems often involve large-scale training datasets and complex models.
GD has become a common choice due to its simplicity and efficiency, which updates the parameter $\bm{\theta}$ as
\begin{eqnarray}
\bm{\theta}_{t+1} = \bm{\theta}_t - \eta \nabla {L}(\bm{\theta}_t),
\end{eqnarray}
where $\eta > 0$ is the learning rate and $\nabla {L}(\bm{\theta}_t)$ is the gradient with respect to the parameter at the $t$-th iteration.
HB starts from the initial parameter $\bm{\theta}_{-1} = \bm{\theta}_0$ and updates as follows:
\begin{equation}
\label{eq:HB_update}
\bm{\theta}_{t+1} = \bm{\theta}_t + \beta(\bm{\theta}_t - \bm{\theta}_{t-1}) - \eta \nabla {L}(\bm{\theta}_t),
\end{equation}
where $\beta \in [0, 1)$ is the momentum parameter.
NAG is another important development of momentum methods and has several variants~\cite{nesterov2003introductory}.
In this paper, we focus on NAG with a constant momentum parameter $\beta$.
Given the initial parameters $\bm{\theta}_0$ and $\mathbf{v}_0 = \bm{\theta}_0$,
NAG involves the update procedures in two steps
\begin{eqnarray}
\label{eq:NAG-SC}
{\mathbf{v}}_{t+1} &=& \bm{\theta}_t - \eta \nabla {L}(\bm{\theta}_t) \\
\bm{\theta}_{t+1} &=& {\mathbf{v}}_{t+1} + \beta({\mathbf{v}}_{t+1}-{\mathbf{v}}_{t}),
\end{eqnarray}
which can be reformulated in an equivalent form without $\mathbf{v}$:
\begin{eqnarray}
\label{eq:NAG-SC_one_line}
\bm{\theta}_{t+1} &=& \bm{\theta}_t + \beta(\bm{\theta}_t - \bm{\theta}_{t-1}) - \eta \nabla {L}(\bm{\theta}_t) - \beta\eta(\nabla {L}(\bm{\theta}_t) - \nabla {L}(\bm{\theta}_{t-1})).
\end{eqnarray}
Compared with HB~(\ref{eq:HB_update}), NAG has an additional term $\beta\eta(\nabla {L}(\bm{\theta}_t) - \nabla {L}(\bm{\theta}_{t-1}))$, which computes the difference between two consecutive gradients and is referred to as gradient correction~\cite{shi2019acceleration}.
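As a sanity check of this equivalence, the following sketch (a toy quadratic loss with illustrative settings, not from the paper) runs the two-step form~(\ref{eq:NAG-SC}) and the one-line form~(\ref{eq:NAG-SC_one_line}) with the gradient-correction term, and confirms that they generate the same iterates once seeded with the same first two points:

```python
import numpy as np

# Numerical check (toy example): the two-step NAG update and its one-line
# form with the gradient correction generate the same trajectory.
A = np.diag([1.0, 10.0])                 # illustrative quadratic L = 0.5 x^T A x
grad = lambda th: A @ th
eta, beta, T = 0.05, 0.9, 30
theta0 = np.array([1.0, -2.0])

# two-step form: v_{t+1} = theta_t - eta*g_t ; theta_{t+1} = v_{t+1} + beta*(v_{t+1} - v_t)
traj = [theta0]
theta, v = theta0, theta0
for _ in range(T):
    v_new = theta - eta * grad(theta)
    theta, v = v_new + beta * (v_new - v), v_new
    traj.append(theta)

# one-line form, seeded with the same (theta_0, theta_1)
th_prev, th = traj[0], traj[1]
for t in range(1, T):
    th_next = (th + beta * (th - th_prev) - eta * grad(th)
               - beta * eta * (grad(th) - grad(th_prev)))
    th_prev, th = th, th_next

print(np.allclose(th, traj[-1]))  # True: the two forms coincide
```

The one-line recursion holds exactly for $t \geq 1$; the first iterate is produced by the two-step form, which is why both runs are seeded with the same pair $(\bm{\theta}_0, \bm{\theta}_1)$.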
\begin{figure}[!t]
\centering
\includegraphics[scale=0.3]{architecture.eps}
\caption{The architecture of the two-layer fully connected neural network with ReLU activation.}
\label{architecture}
\end{figure}
\textbf{The details of the neural network.} In this work, we consider a two-layer fully connected neural network $f: \mathbb{R}^d \to \mathbb{R}$ as follows:
\begin{equation}
\label{eq:two_layer neural network}
f(\mathbf{W}, \textbf{a};\mathbf{x})= \frac{1}{\sqrt{m}} \sum_{r=1}^m a^r \sigma(\langle \mathbf{w}^r, \mathbf{x} \rangle),
\end{equation}
where $\mathbf{x} \in \mathbb{R}^d$ denotes the input features, $\mathbf{W}=(\mathbf{w}^1, \mathbf{w}^2, \cdots,\mathbf{w}^m) \in \mathbb{R}^{d \times m}$ denotes the weight matrix of the hidden layer, $\textbf{a} = (a^1,a^2,\cdots, a^m) \in \mathbb{R}^m $ denotes the output weight vector and $\sigma(z)=z \cdot \mathbb{I}\{z \geq 0\}$ denotes the ReLU activation function.
Figure~1 shows the architecture of the neural network.
The parameters follow the random initialization scheme as $\mathbf{w}^r \sim \mathcal{N}(0, \textit{I}_d)$ and $a^r \sim Rademacher(1/2)$ for any $r \in [m]$.
Following the settings in~\cite{du2018gradient,wang2020provable,arora2019fine}, we keep the output layer $\mathbf{a}$ fixed after initialization and only optimize the weight matrix $\mathbf{W}$ through minimizing the square loss
\begin{equation}
\label{eq:objective}
{L}(\mathbf{W},\mathbf{a}) = \frac{1}{2}\sum_{i=1}^n (y_i - f(\mathbf{W}, \textbf{a};\mathbf{x}_i))^2.
\end{equation}
Then the gradient for the weight vector of the $r$-th neuron can be calculated as:
\begin{equation}
\label{eq:gradient_objective}
\frac{\partial {L}(\mathbf{W},\mathbf{a})}{\partial \mathbf{w}^r} = \frac{1}{\sqrt{m}}\sum_{i=1}^n (f(\mathbf{W}, \textbf{a};\mathbf{x}_i) - y_i) a^r \mathbf{x}_i \mathbb{I}\{{\langle \mathbf{w}^r, \mathbf{x}_i \rangle \geq 0}\}.
\end{equation}
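The forward pass~(\ref{eq:two_layer neural network}) and the gradient~(\ref{eq:gradient_objective}) can be checked numerically; the following sketch (sizes are illustrative, not the widths the theory requires) verifies the gradient of one weight coordinate against a finite-difference approximation:

```python
import numpy as np

# Forward pass and gradient of the two-layer ReLU network, checked
# against finite differences (illustrative sizes).
rng = np.random.default_rng(1)
n, d, m = 5, 3, 16
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # unit-norm inputs
y = rng.standard_normal(n)
W = rng.standard_normal((d, m))                 # w^r ~ N(0, I_d)
a = rng.choice([-1.0, 1.0], m)                  # a^r ~ Rademacher(1/2)

def f(W):
    return (np.maximum(X @ W, 0.0) @ a) / np.sqrt(m)

def loss(W):
    return 0.5 * np.sum((y - f(W)) ** 2)

def grad_w(W, r):
    # (1/sqrt(m)) * sum_i (f_i - y_i) * a^r * x_i * 1{<w^r, x_i> >= 0}
    pre = X @ W[:, r]
    resid = f(W) - y
    return (a[r] / np.sqrt(m)) * (X.T @ (resid * (pre >= 0)))

# finite-difference check on one coordinate of one neuron
r, j, eps = 2, 1, 1e-6
Wp, Wm = W.copy(), W.copy()
Wp[j, r] += eps; Wm[j, r] -= eps
fd = (loss(Wp) - loss(Wm)) / (2 * eps)
print(abs(grad_w(W, r)[j] - fd) < 1e-4)  # agrees (generically, away from ReLU kinks)
```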
Although this model is a simple two-layer neural network, its loss landscape is still non-convex and non-smooth due to the use of ReLU activation function.
However, the objective function ${L}$ becomes convex when the weight matrix $\mathbf{W}$ is fixed and only the output layer $\mathbf{a}$ is optimized; this setting has been studied in~\cite{ICML-2018-Nguyen0}.
\begin{algorithm}[!t]\caption{Training Two-Layer Fully Connected ReLU Neural Network with NAG.}\label{alg:alg_main_text}
\begin{algorithmic}[1]
\State \textbf{Parameters}: learning rate $\eta > 0$, momentum parameter $0\leq\beta<1$.
\State \textbf{Initialization}: $\mathbf{v}^r(0) = \mathbf{w}^r(0) \sim \mathcal{N}(0,I_d)$, $a^r \sim Rademacher(1/2)$ for $r\in [m]$.
\For{$t = 0, \ldots, T$}
\For{$ r=1, \ldots, m$}
\State Calculate gradient $\frac{\partial {L}(\mathbf{w}(t), \mathbf{a})}{\partial \mathbf{w}^r(t)}$ for $\mathbf{w}^r$ using~(\ref{eq:gradient_objective}).
\State Update $\mathbf{v}^r$: $\mathbf{v}^r(t+1) = \mathbf{w}^r(t) - \eta \frac{\partial {L}(\mathbf{w}(t),\mathbf{a})}{\partial \mathbf{w}^r(t)}$.
\State Update $\mathbf{w}^r$: {{$\mathbf{w}^r(t+1) = \mathbf{v}^r(t+1) + \beta (\mathbf{v}^r(t+1) - \mathbf{v}^r(t))$}}.
\EndFor
\EndFor
\end{algorithmic}
\end{algorithm}
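A minimal NumPy sketch of Algorithm~\ref{alg:alg_main_text} is given below; the dimensions, learning rate and momentum parameter are illustrative choices, not the theoretically prescribed values:

```python
import numpy as np

# Sketch of Algorithm 1: training the two-layer ReLU network with NAG on
# the squared loss (illustrative hyperparameters).
rng = np.random.default_rng(0)
n, d, m = 8, 4, 512
X = rng.standard_normal((n, d)); X /= np.linalg.norm(X, axis=1, keepdims=True)
y = rng.standard_normal(n)
W = rng.standard_normal((d, m)); V = W.copy()   # v^r(0) = w^r(0)
a = rng.choice([-1.0, 1.0], m)                  # a^r ~ Rademacher(1/2)

def f(W):
    return (np.maximum(X @ W, 0.0) @ a) / np.sqrt(m)

eta, beta = 0.1, 0.8
losses = []
for t in range(200):
    resid = f(W) - y
    losses.append(0.5 * resid @ resid)
    act = (X @ W >= 0).astype(float)            # activation patterns
    G = (X.T @ (resid[:, None] * act)) * (a / np.sqrt(m))  # dL/dW, all neurons at once
    V_new = W - eta * G                         # gradient step
    W = V_new + beta * (V_new - V)              # Nesterov extrapolation
    V = V_new

print(losses[-1] < losses[0])  # True: the loss decreases for a wide enough net
```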
\textbf{NTK theory.} This theory was first introduced by Jacot~\cite{NIPS2018_8076} to study the optimization of infinite wide neural networks.
It is closely related to a Gram matrix $\bm{H}_t$, which is defined as:
\begin{equation}
\label{eq:NTK}
\bm{H}_t(\mathbf{x}_i, \mathbf{x}_j)=\langle \nabla_{\bm{\theta}} f(\bm{\theta}_t;\mathbf{x}_i) , \nabla_{\bm{\theta}} f(\bm{\theta}_t;\mathbf{x}_j) \rangle, \forall \; (i,j) \in [n]\times [n],
\end{equation}
where $f$ is the neural network model and $\bm{\theta}$ represents its parameter.
Clearly, $\bm{H}_t$ is positive semi-definite due to the property of Gram matrices, and it varies with $\bm{\theta}_t$.
As the width of the neural network goes to infinity,
the limit matrix $\bar{\bm{H}} := \lim_{m\to\infty}\bm{H}_0$ is
determined by the initialization and architecture of the corresponding neural network $f$,
which is the so-called NTK matrix.
When the neural network is sufficiently over-parameterized, $\bm{\theta}_t$ barely changes from its initial $\bm{\theta}_0$, which in turn guarantees $\bm{H}_t$ stays close to $\bar{\bm{H}}$ during training~\cite{du2018gradient,NIPS2018_8076,arora2019fine}.
As a result, the over-parameterized neural network behaves similarly to its linearization around $\bm{\theta}_0$.
Given the specific two-layer neural network~(\ref{eq:two_layer neural network}) and the objective function~(\ref{eq:objective}), it has the corresponding $\bm{H}_t$ as:
\begin{eqnarray}
\label{eq:gram matrix h_t}
\bm{H}_t(\mathbf{x}_i, \mathbf{x}_j)=\frac{1}{m}\sum_{r=1}^m \langle \mathbf{x}_i, \mathbf{x}_j \rangle \mathbb{I}\{\langle \mathbf{w}_t^r, \mathbf{x}_i \rangle \geq 0 \& \langle \mathbf{w}_t^r, \mathbf{x}_j \rangle \geq 0\},
\end{eqnarray}
and the NTK $\bar{\bm{H}}$ can be calculated with the expected value
\begin{eqnarray}
\label{limiting NTK}
\bar{\bm{H}}(\mathbf{x}_i, \mathbf{x}_j) &=& \mathbb{E}_{\mathbf{w} \sim N(0, \textit{I})}[\langle \mathbf{x}_i, \mathbf{x}_j \rangle \mathbb{I}\{\langle \mathbf{w}, \mathbf{x}_i \rangle \geq 0 \& \langle \mathbf{w}, \mathbf{x}_j \rangle \geq 0\}] \nonumber\\
&=& \langle \mathbf{x}_i, \mathbf{x}_j \rangle \frac{\pi - \arccos(\langle \mathbf{x}_i, \mathbf{x}_j \rangle)}{2\pi}.
\end{eqnarray}
In addition, the above $\bar{\bm{H}}$ is strictly positive definite when the training dataset satisfies $\mathbf{x}_i \neq \mathbf{x}_j$ for all $i \neq j$~\cite{du2018gradient}.
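The closed form~(\ref{limiting NTK}) can be checked against a Monte Carlo estimate of the expectation for unit-norm inputs:

```python
import numpy as np

# Sanity check of the closed-form NTK entry
#   H(x_i, x_j) = <x_i, x_j> * (pi - arccos(<x_i, x_j>)) / (2*pi)
# against a Monte Carlo estimate of E_w[<x_i,x_j> 1{w.x_i>=0} 1{w.x_j>=0}].
rng = np.random.default_rng(0)
d = 5
xi = rng.standard_normal(d); xi /= np.linalg.norm(xi)
xj = rng.standard_normal(d); xj /= np.linalg.norm(xj)

rho = xi @ xj
closed_form = rho * (np.pi - np.arccos(rho)) / (2 * np.pi)

W = rng.standard_normal((200_000, d))            # w ~ N(0, I_d)
mc = rho * np.mean((W @ xi >= 0) & (W @ xj >= 0))

print(abs(closed_form - mc) < 1e-2)  # True up to Monte Carlo error
```

The probability that $\mathbf{w}$ activates both inputs is $(\pi - \arccos\langle \mathbf{x}_i, \mathbf{x}_j\rangle)/(2\pi)$, which is exactly the angular factor in the closed form.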
\section{Main results}
In this section, we analyze the dynamics of NAG's residual error from a discrete-time view and give a non-asymptotic convergence rate for a specific learning rate and momentum parameter, inspired by~\cite{du2018gradient} and \cite{wang2020provable}.
\subsection{Intuition behind our proof}
\label{section4_1}
To start with, we intuitively illustrate the main idea of our proof under the infinite width assumption.
As mentioned in Section~\ref{problem setting}, some theoretical and empirical works~\cite{NIPS2019_8559,NIPS2019_9063} have shown that the outputs of the over-parameterized neural network can be approximated by its first-order Taylor expansion around its initial parameter as:
\begin{equation}
\label{eq:linearzation}
f(\bm{\theta};\mathbf{x}) \approx f(\bm{\theta}_0;\mathbf{x}) + \langle\nabla f(\bm{\theta}_0;\mathbf{x}),\bm{\theta} - \bm{\theta}_0 \rangle.
\end{equation}
By taking the derivative on both sides of~(\ref{eq:linearzation}), we obtain
\begin{eqnarray}
\label{eq:approx gradient}
\nabla_{\bm{\theta}} f(\bm{\theta};\mathbf{x}) \approx \nabla_{\bm{\theta}} f(\bm{\theta}_0;\mathbf{x}).
\end{eqnarray}
For simplicity, let $\mathcal{X}=(\mathbf{x}_1, \cdots, \mathbf{x}_n) \in \mathbb{R}^{d \times n}$ and $\mathcal{Y}= (y_1, \cdots, y_n) \in \mathbb{R}^n$ be the concatenation of the features and the corresponding labels of dataset $\mathcal{D}$.
In addition, we define $f(\bm{\theta}; \mathcal{X}) = (f(\bm{\theta}; \mathbf{x}_1), \cdots, f(\bm{\theta}; \mathbf{x}_n)) \in \mathbb{R}^n$, $\nabla_{\bm{\theta}} f(\bm{\theta}; \mathcal{X}) = (\nabla_{\bm{\theta}}f(\bm{\theta}; \mathbf{x}_1), \cdots, \nabla_{\bm{\theta}}f(\bm{\theta}; \mathbf{x}_n))^{\top} \in \mathbb{R}^{n \times k}$ (with $k$ the number of parameters) and $\bm{\xi} =(f(\bm{\theta}; \mathbf{x}_1) -y_1, \cdots, f(\bm{\theta}; \mathbf{x}_n) - y_n) \in \mathbb{R}^n$ as the concatenated outputs, gradients and residual errors of the neural network, respectively.
Plugging NAG's update rule~(\ref{eq:NAG-SC_one_line}) into (\ref{eq:linearzation}), we obtain
\begin{eqnarray}
\label{eq:NAG_transform}
&&f(\bm{\theta}_{t+1};\mathcal{X})\nonumber \\
\!\!\!\!\!\!&\approx& \!\!\!\! f(\bm{\theta}_0;\mathcal{X}) \!+\! \nabla_{\bm{\theta}} f(\bm{\theta}_0;\mathcal{X})\big(\bm{\theta}_t \!-\! \eta \nabla_{\bm{\theta}} {L}(\bm{\theta}_t) \!+\! \beta(\bm{\theta}_t \!-\! \bm{\theta}_{t\!-\!1}) \!-\! \eta\beta\big(\nabla_{\bm{\theta}} {L}(\bm{\theta}_t) \!-\! \nabla_{\bm{\theta}} {L}(\bm{\theta}_{t\!-\!1}) \big) \!-\! \bm{\theta}_0 \big) \nonumber\\
\!\!\!\!\!\!&\approx&\!\!\!\! f(\bm{\theta}_t;\mathcal{X}) - \eta\beta\nabla_{\bm{\theta}} f(\bm{\theta}_0;\mathcal{X})\big(\nabla_{\bm{\theta}} {L}(\bm{\theta}_t) -\nabla_{\bm{\theta}} {L}(\bm{\theta}_{t-1}) \big) \!-\! \eta \nabla_{\bm{\theta}} f(\bm{\theta}_0;\mathcal{X})\nabla_{\bm{\theta}}{L}(\!\bm{\theta}_t;\mathcal{X}) \nonumber\\
&+& \!\!\!\! \beta \big( f(\bm{\theta}_t;\mathcal{X}) \!-\!f(\bm{\theta}_{t-1};\mathcal{X}) \big) ,
\end{eqnarray}
where the last approximation uses~(\ref{eq:linearzation}).
Expanding $\nabla_{\bm{\theta}}{L}(\bm{\theta}_t)$ with~(\ref{eq:objective}), we have
\begin{eqnarray}
\label{eq:expansion of L}
\nabla_{\bm{\theta}}{L}(\bm{\theta}_t) = \nabla_{\bm{\theta}}f(\bm{\theta}_t;\mathcal{X})^{\top}(f(\bm{\theta}_t; \mathcal{X}) - \mathcal{Y}).
\end{eqnarray}
Then, plugging (\ref{eq:approx gradient}) and (\ref{eq:expansion of L}) into (\ref{eq:NAG_transform}), we obtain the approximate residual error:
\begin{eqnarray}
\bm{\xi}_{t+1} \!\!\!&=&\!\!\! f(\bm{\theta}_{t+1};\mathcal{X}) - \mathcal{Y} \nonumber \\
\label{eq:NAG_approx_one}
\!\!\!&\approx&\!\!\!\!\! \bm{\xi}_t \!-\! \eta \bm{H}_0\bm{\xi}_t \!+\! \beta(\bm{\xi}_t \!-\! \bm{\xi}_{t-1}) \!-\!\eta\beta\bm{H}_0(\bm{\xi}_t-\bm{\xi}_{t-1}).
\end{eqnarray}
Reformulating~(\ref{eq:NAG_approx_one}) yields
\begin{eqnarray}
\label{eq:residual_recurisvie}
\begin{bmatrix} \bm{\xi}_{t+1} \\ \bm{\xi}_t \end{bmatrix} \approx \begin{bmatrix}
(1\!+\!\beta)(\mathbf{I}_n\!-\!\eta \bm{H}_0) \!&\!
\beta(\!-\mathbf{I}_n\!+\!\eta \bm{H}_{0}) \\
\mathbf{I}_n \!&\! \textbf{0}_n
\end{bmatrix} \begin{bmatrix} \bm{\xi}_{t} \\ \bm{\xi}_{t-1} \end{bmatrix},
\end{eqnarray}
where $\mathbf{I}_n \in \mathbb{R}^{n\times n}$ and $\textbf{0}_n \in \mathbb{R}^{n \times n}$ denote the identity matrix and zero matrix, respectively.
Similar to the quadratic convex optimization case~\cite{lessard2016analysis,flammarion2015averaging}, the residual error in~(\ref{eq:residual_recurisvie}) follows a linear dynamical system.
When the spectral norm of the coefficient matrix in~(\ref{eq:residual_recurisvie}) is less than one, the residual error decays to zero at an asymptotic linear convergence rate according to Gelfand's formula~\cite{1941Normierte}.
However, this result is asymptotic and depends on the infinite width assumption.
In contrast, we rely on a mild assumption about the width and provide a non-asymptotic convergence result.
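This intuition can be made concrete on a toy problem: build the coefficient matrix of~(\ref{eq:residual_recurisvie}) from a small positive definite stand-in for $\bm{H}_0$, pick the learning rate and momentum in the form $\eta = 1/(2\lambda_{max})$, $\beta = (3\sqrt{\kappa}-2)/(3\sqrt{\kappa}+2)$ (using the exact eigenvalues of the toy matrix), and check that the spectral radius is below one:

```python
import numpy as np

# Illustrative check of the infinite-width intuition: the 2n x 2n
# coefficient matrix of the linearized residual dynamics contracts.
rng = np.random.default_rng(0)
n = 6
B = rng.standard_normal((n, n))
H0 = B @ B.T / n + 0.1 * np.eye(n)               # SPD stand-in for the Gram matrix

evals = np.linalg.eigvalsh(H0)
lam_max, kappa = evals.max(), evals.max() / evals.min()
eta = 1.0 / (2 * lam_max)
beta = (3 * np.sqrt(kappa) - 2) / (3 * np.sqrt(kappa) + 2)

I, Z = np.eye(n), np.zeros((n, n))
M = np.block([[(1 + beta) * (I - eta * H0), beta * (-I + eta * H0)],
              [I, Z]])

rho = np.max(np.abs(np.linalg.eigvals(M)))
print(rho < 1)  # True: the linearized residual decays linearly
```

With these choices every eigenvalue of $\bm{H}_0$ yields a complex-conjugate pair of eigenvalues of $M$ with modulus $\sqrt{\beta(1-\eta\lambda)} < 1$, which is the mechanism behind the accelerated rate.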
\subsection{Residual dynamics for NAG}
Our analysis depends on an important event: $$A_{ir} = \{\exists \mathbf{w}: \|\mathbf{w} - \mathbf{w}_0^r\| \leq R, \mathbb{I}\{\langle \mathbf{w}, \mathbf{x}_i \rangle \geq 0\} \neq \mathbb{I}\{\langle \mathbf{w}_0^r, \mathbf{x}_i \rangle \geq 0\}\},$$
where $R>0$ is a constant.
This event describes whether there exists a weight vector that lies in a neighbourhood of its initialization but has a different activation pattern from the initialization for the same input.
Here, the activation pattern is defined as the output of $\mathbb{I}\{\langle \mathbf{w}, \mathbf{x} \rangle \geq 0\}$.
Then one can define the set $S_i = \{r\in [m]: \mathbb{I}\{A_{ir}\} = 0\}$ and its complement $S_i^{\perp} = [m] \setminus S_i$ to separate the neurons into two parts.
By utilizing the intuition introduced in Section~\ref{section4_1}, we provide the recursion formulation of the residual error for a finite width two-layer neural network trained by NAG.
\begin{lemma}
\label{lemma: rec form}
Let $\bm{\xi}_t$ be the residual error vector at the $t$-th iterate of NAG. For any $t \in [T]$, we have
\begin{equation}
\label{eq:recursion}
{{\bm{\xi}_{t+1} = \bm{\xi}_t - \eta \bm{H}_0\bm{\xi}_t + \beta(\bm{\xi}_t - \bm{\xi}_{t-1}) -\eta\beta\bm{H}_0(\bm{\xi}_t -\bm{\xi}_{t-1}) +\bm{\psi}_t + \bm{\phi}_t}},
\end{equation}
where
\begin{eqnarray}
\label{eq: the definition of psib}
\bm{\psi}_t= \beta\eta(\bm{H}_{t-1} - \bm{H}_0)\bm{\xi}_{t-1} -(1+\beta)\eta(\bm{H}_t - \bm{H}_0)\bm{\xi}_t,
\end{eqnarray}
and the $i$-th element of $\bm{\phi}_t$ is bounded by
\begin{equation}
\label{eq:bound of phib}
|\bm{\phi}_t[i]| \leq \frac{ \sup_{j\in [n]}|S_j^{\perp}|\sqrt{n}\eta}{m}\left[(2+4\beta)\|\bm{\xi}_t\|+ 3\beta\|\bm{\xi}_{t-1} \|+2\sum_{i=0}^{t-1}\beta^{t+1-i}\|\bm{\xi}_i\|\right].
\end{equation}
\end{lemma}
The proof of Lemma~\ref{lemma: rec form} can be found in Appendix~\ref{app:lemma_1}.
Denote $\mathbf{z}_t = [\bm{\xi}_t; \bm{\xi}_{t-1}]$ as the augmented residual error at iteration $t$;
then~(\ref{eq:recursion}) can be reformulated as:
\begin{eqnarray}
\label{eq:matrix_form_residual}
\mathbf{z}_{t+1} = \mathbf{M} \mathbf{z}_t + \bm{\mu}_t,
\end{eqnarray}
where $\bm{\mu}_t = [\bm{\psi}_t+ \bm{\phi}_t;\textbf{0}]$ and the coefficient matrix
$\mathbf{M}=\begin{bmatrix}
(1\!+\!\beta)(\mathbf{I}_n\!-\!\eta \bm{H}_0) \!&\!
\beta(\!-\mathbf{I}_n\!+\!\eta \bm{H}_0) \\
\mathbf{I}_n \!&\! \textbf{0}_n
\end{bmatrix}$.
Note that, compared to the linear dynamical system~(\ref{eq:residual_recurisvie}), the finite width one has an additional term $\bm{\mu}_t$, which can be regarded as a perturbation; we will discuss its bound later.
Furthermore, as shown in~\cite{wang2020provable}, HB has the residual dynamics as
{\small{\begin{eqnarray}
\label{eq::matrix_form_HB}
\begin{bmatrix} \bm{\xi}_{t+1} \\ \bm{\xi}_t \end{bmatrix} = \begin{bmatrix}
(1\!+\!\beta)\mathbf{I}_n\!-\!\eta \bm{H}_0 \!&\!
-\beta\mathbf{I}_n \\
\mathbf{I}_n \!&\! \textbf{0}_n
\end{bmatrix} \begin{bmatrix} \bm{\xi}_{t} \\ \bm{\xi}_{t-1}\end{bmatrix} + \bm{\mu}_t^{'},
\end{eqnarray}}}
which differs from~(\ref{eq:matrix_form_residual}) both in the coefficient matrix and perturbation term.
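To make the role of the coefficient matrix concrete: for a quadratic objective the kernel is constant, so $\bm{\psi}_t = \bm{\phi}_t = \mathbf{0}$ and the augmented dynamics hold exactly. The following pure-Python sketch (our own one-dimensional toy with curvature $\lambda$, not the network of the paper) checks that direct NAG iterates coincide with the scalar version of $\mathbf{z}_{t+1} = \mathbf{M}\mathbf{z}_t$:

```python
import math

# Toy quadratic f(w) = 0.5 * lam * (w - w_star)^2, so the "kernel" is the constant scalar lam
lam, eta, beta, w_star = 2.0, 0.2, 0.5, 1.0

# Direct NAG: w_{t+1} = u_t - eta * f'(u_t), with look-ahead u_t = w_t + beta*(w_t - w_{t-1})
w_prev, w = 0.0, 0.0                 # initialization W_{-1} = W_0
xi0 = w - w_star                     # xi_0 = xi_{-1}
traj = []
for _ in range(30):
    u = w + beta * (w - w_prev)
    w_prev, w = w, u - eta * lam * (u - w_star)
    traj.append(w - w_star)

# Augmented linear dynamics z_{t+1} = M z_t (scalar version of the 2n x 2n matrix M)
M = [[(1.0 + beta) * (1.0 - eta * lam), beta * (-1.0 + eta * lam)],
     [1.0, 0.0]]
z = [xi0, xi0]
traj_lin = []
for _ in range(30):
    z = [M[0][0] * z[0] + M[0][1] * z[1], M[1][0] * z[0] + M[1][1] * z[1]]
    traj_lin.append(z[0])

max_gap = max(abs(a - b) for a, b in zip(traj, traj_lin))
print(max_gap)   # machine precision: the perturbation term vanishes for a quadratic
```

The agreement is at machine precision because, without ReLU pattern changes, the perturbation term vanishes identically; for the finite-width network it is the size of this perturbation that must be controlled.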
\subsection{Convergence analysis}
\label{main_theory_convergence}
\renewcommand\arraystretch{1.3}
\begin{table*}
\caption{Summary of the Convergence Results.
Let $m$ denote the width of the neural network,
$n$ the number of input data points,
$\delta$ the failure probability, and
$t$ the iteration number.
Define $\lambda = \lambda_{min}(\bar{\bm{H}})$, $\lambda_{max} = \lambda_{max}(\bar{\bm{H}}) + \lambda/4$ and $\kappa = 4\kappa(\bar{\bm{H}})/3 + 1/3$. }
\label{table1}
\centering
\begin{tabular}{ | M{1.3cm}| M{3.3cm}| M{4cm}| M{2.7cm}|}
\hline
{\bf Method} & Width of the Neural Network & Hyperparameter Choice & Convergence Rate \\ \hline
GD~\cite{wu2019global} & $\Omega(\lambda^{-4} \delta^{-3}n^6)$ & $\eta = \Theta(\frac{1}{\lambda_{max}(\bar{\bm{H}})})$ & $(1-\Theta(\frac{1}{\kappa}))^t$ \\ [1.4ex] \hline
HB~\cite{wang2020provable} & $\Omega(\lambda^{-2}n^{4}\kappa^2 \log^3(n/\delta))$ & $\eta = \frac{1}{\lambda_{max}},\beta = (1-\frac{1}{2\sqrt{\kappa}})^2 $ & $(1-\frac{1}{4\sqrt{\kappa}})^t$ \\ [1.4ex] \hline
{NAG} & $\Omega(\lambda^{-2}n^{4}\kappa^2 \log^3(n/\delta))$ & $\eta = \frac{1}{2 \lambda_{max}},\beta = \frac{3\sqrt{\kappa} - 2}{3\sqrt{\kappa} + 2} $ & $(1-\frac{1}{2\sqrt{\kappa}})^t$ \\ [1.4ex] \hline
\end{tabular}
\end{table*}
By recursively applying~(\ref{eq:matrix_form_residual}), we obtain
\begin{eqnarray}
\label{eq: evolution of z_t}
\mathbf{z}_t = \mathbf{M}^t \mathbf{z}_0 + \sum_{i=0}^{t-1}\mathbf{M}^{t-1-i} \bm{\mu}_i.
\end{eqnarray}
Then applying the triangle inequality to~(\ref{eq: evolution of z_t}), we have
\begin{eqnarray}
\label{eq:matrix_form}
\|\mathbf{z}_t\| \leq \|\mathbf{M}^t \mathbf{z}_0\| + \|\sum_{i=0}^{t-1}\mathbf{M}^{t-1-i} \bm{\mu}_i\|.
\end{eqnarray}
In order to prove the convergence of NAG, we need to separately bound the two terms on the right-hand side of (\ref{eq:matrix_form}).
The first term is the norm of the product between the matrix power $\mathbf{M}^t$ and the vector $\mathbf{z}_0$.
We provide its upper bound in the following lemma.
\begin{lemma}
\label{lemma: matrix_vector}
Assume $\bm{H} \in \mathbb{R}^{n \times n}$ is a symmetric positive definite matrix.
Let{\small{ $\mathbf{M} = \begin{bmatrix}
(1\!+\!\beta)(\mathbf{I}_n\!-\!\eta \bm{H}) &
\beta(-\mathbf{I}_n\!+\!\eta \bm{H}) \\
\mathbf{I}_n & \textbf{0}_n
\end{bmatrix} \in \mathbb{R}^{2n \times 2n}$}}.
Suppose a sequence of iterates $\{\mathbf{v}_i\}$ satisfies $\mathbf{v}_t = \mathbf{M}\mathbf{v}_{t-1}$ for any $t \leq T$.
If $\beta$ and $\eta$ are chosen such that $1 > \beta \geq \frac{1-\sqrt{\eta\lambda_{min}(\bm{H})}}{1+\sqrt{\eta\lambda_{min}(\bm{H})}}$ and $0 < \eta \leq 1/\lambda_{max}(\bm{H})$, then at any iteration $k \leq T$ it holds that
\begin{equation}
\label{eq:the bound of matrix vector}
\|\mathbf{v}_k\| \leq C \big(\sqrt{\beta(1-\eta\lambda_{min}(\bm{H}))}\big)^k \|\mathbf{v}_0\|,
\end{equation}
where $ C = \frac{2\beta(1-\eta\lambda_{min}(\bm{H})) + 2}{\sqrt{\min\{g(\beta, \eta\lambda_{min}(\bm{H})), g(\beta, \eta\lambda_{max}(\bm{H}))\}}}$ and the function $g$ is defined as $g(x, y) = 4x(1-y) - [(1+x)(1-y)]^2$.
\end{lemma}
The proof is provided in~\ref{app: matrix_vector}.
Given the ranges of the hyperparameters $\eta$ and $\beta$, it is easy to observe that $\sqrt{\beta(1-\eta\lambda_{min}(\bm{H}))} < 1$, which ensures the decay of $\|\mathbf{v}_k\|$ during the evolution.
To further determine upper bounds for $C$ and the decay rate, we set $\eta$ and $\beta$ according to the spectrum of $\bm{H}$.
\begin{lemma}
\label{lemma: specific setting}
Assume $0 < \lambda \leq \lambda_{min}(\bm{H}) \leq \lambda_{max}(\bm{H}) \leq \lambda_{max}$.
Denote ${\kappa} = \lambda_{max}/\lambda$.
With $\eta = 1/2\lambda_{max}$ and $\beta = \frac{3\sqrt{{\kappa}} - 2}{3\sqrt{{\kappa}} + 2}$, it has
\begin{eqnarray}
\sqrt{\beta(1-\eta\lambda_{min}(\bm{H}))} \leq 1 - \frac{2}{3\sqrt{{\kappa}}} ,\;\; C \leq 12\sqrt{{\kappa}}.
\end{eqnarray}
\end{lemma}
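The constants in Lemma~\ref{lemma: specific setting} can be checked numerically. The sketch below (our own verification script, not part of the proof) evaluates $\rho = \sqrt{\beta(1-\eta\lambda_{min})}$ and the constant $C$ from Lemma~\ref{lemma: matrix_vector} under the stated choices $\eta = 1/(2\lambda_{max})$ and $\beta = (3\sqrt{\kappa}-2)/(3\sqrt{\kappa}+2)$, over a range of condition numbers:

```python
import math

def g(x, y):
    # g(x, y) = 4x(1-y) - [(1+x)(1-y)]^2, as defined in Lemma 2
    return 4.0 * x * (1.0 - y) - ((1.0 + x) * (1.0 - y)) ** 2

def rate_and_C(lam, lam_max):
    # Hyperparameter choices of Lemma 3: eta = 1/(2*lam_max), beta = (3*sqrt(k)-2)/(3*sqrt(k)+2)
    kappa = lam_max / lam
    eta = 1.0 / (2.0 * lam_max)
    beta = (3.0 * math.sqrt(kappa) - 2.0) / (3.0 * math.sqrt(kappa) + 2.0)
    rho = math.sqrt(beta * (1.0 - eta * lam))
    C = (2.0 * beta * (1.0 - eta * lam) + 2.0) / math.sqrt(
        min(g(beta, eta * lam), g(beta, eta * lam_max)))
    return kappa, rho, C

for kap in [1.5, 2.0, 5.0, 10.0, 100.0, 1000.0]:
    k, rho, C = rate_and_C(1.0, kap)
    assert rho <= 1.0 - 2.0 / (3.0 * math.sqrt(k)) + 1e-12   # decay-rate bound of Lemma 3
    assert C <= 12.0 * math.sqrt(k)                           # constant bound of Lemma 3
    print(k, rho, C)
```

Both bounds hold with visible slack for moderate $\kappa$ and become tight (but are still satisfied) as $\kappa$ grows.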
Furthermore, it should be noted that $\mathbf{M}$ in (\ref{eq:matrix_form}) is composed of $\bm{H}_0$, which depends on the random initialization of $\mathbf{W}_0$.
For an over-parameterized neural network, the eigenvalues of the random matrix $\bm{H}_0$ can be bounded by the spectrum of the deterministic NTK matrix $\bar{\bm{H}}$~\cite{wang2020provable}, thereby allowing us to determine the hyperparameters with specific values.
\begin{lemma}(Lemma 13 in \cite{wang2020provable})
\label{lemma: bound of H_0}
Denote $\lambda = \lambda_{min}(\bar{\bm{H}})$. Set $m=\Omega(\lambda^{-2}n^2\log(n/\delta))$.
Assume $\mathbf{w}_0^r \sim \mathcal{N}(0, I_d)$ for all $r \in [n]$.
With probability at least $1-\delta$, it holds that
\begin{eqnarray}
\|\bm{H}_0 -\bar{\bm{H}}\|_F \leq \frac{\lambda}{4}&,&
\lambda_{min}(\bm{H}_0) \geq \frac{3}{4}\lambda > 0 \;\;
\;\;\lambda_{max}(\bm{H}_0) \leq \lambda_{max}(\bar{\bm{H}}) + \frac{\lambda}{4}. \nonumber
\end{eqnarray}
As a result, the condition number of $\bm{H}_0$ is bounded by
\begin{eqnarray}
\kappa(\bm{H}_0) \leq \frac{4}{3}\kappa(\bar{\bm{H}}) + \frac{1}{3}.
\end{eqnarray}
\end{lemma}
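The concentration in Lemma~\ref{lemma: bound of H_0} is easy to observe empirically. For unit-norm inputs the entries of $\bar{\bm{H}}$ have the closed form $\bar{\bm{H}}_{ij} = \langle \mathbf{x}_i,\mathbf{x}_j\rangle(\pi-\theta_{ij})/(2\pi)$ with $\theta_{ij} = \arccos\langle \mathbf{x}_i,\mathbf{x}_j\rangle$ (the arc-cosine kernel). The pure-Python Monte Carlo below (our own toy with two hand-picked unit vectors, assuming the standard definition of $\bm{H}_0$ from the NTK literature) estimates the corresponding entries of $\bm{H}_0$ at random Gaussian initialization:

```python
import math, random

random.seed(0)
d, m = 3, 100000
x1 = [1.0, 0.0, 0.0]
x2 = [0.6, 0.8, 0.0]           # two hand-picked unit-norm inputs (our own toy data)
dot = sum(a * b for a, b in zip(x1, x2))

# Closed-form NTK entries for ReLU features (arc-cosine kernel):
# Hbar_ij = <x_i, x_j> * (pi - theta_ij) / (2*pi), theta_ij = arccos(<x_i, x_j>)
hbar11 = (math.pi - 0.0) / (2.0 * math.pi)                      # = 1/2
hbar12 = dot * (math.pi - math.acos(dot)) / (2.0 * math.pi)

# Empirical H_0 entries, assuming the standard definition
# H_0[i][j] = (1/m) * sum_r <x_i, x_j> 1{<w_r, x_i> >= 0} 1{<w_r, x_j> >= 0}
h11 = h12 = 0.0
for _ in range(m):
    w = [random.gauss(0.0, 1.0) for _ in range(d)]
    s1 = sum(a * b for a, b in zip(w, x1)) >= 0.0
    s2 = sum(a * b for a, b in zip(w, x2)) >= 0.0
    if s1:
        h11 += 1.0
    if s1 and s2:
        h12 += dot
h11 /= m
h12 /= m
print(abs(h11 - hbar11), abs(h12 - hbar12))   # deviations of order 1/sqrt(m)
```

As the width $m$ grows, the empirical entries concentrate around the deterministic kernel, which is exactly what Lemma~\ref{lemma: bound of H_0} quantifies in Frobenius norm.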
Now we turn to analyzing the second term on the right-hand side of~(\ref{eq:matrix_form}), which is likewise a sum of products of matrix powers $\mathbf{M}^{t-1-i}$ and bounded vectors.
Thus it remains to bound the norm of $\bm{\mu}_t$.
Using the triangle inequality, we have $\|\bm{\mu}_t\| \leq \|\bm{\phi}_t\| + \|\bm{\psi}_t\|$.
From (\ref{eq:bound of phib}), we observe that the bound of $|\bm{\phi}_t[i]|$ mainly depends on the term $|S_i^{\perp}|$, which describes how many neurons change their activation patterns on the $i$-th instance during training.
According to recent studies~\cite{du2018gradient,song2019quadratic},
$|S_i^{\perp}|$ has an upper bound $4mR$, which is determined by the distance $R$ between $\mathbf{w}^r_t$ and its initialization for any $r \in [m]$ and $t \in [T]$.
In~\ref{section: supporting lemmas}, Lemma~\ref{lemma: bound of S_i} presents the details.
When $R$ is small enough, it has $|S_i^{\perp}| \ll m$.
On the other hand, the bound of $\|\bm{\psi}_t\|$ is closely related to the distance between $\bm{H}_t$ and $\bm{H}_0$.
Previous works~\cite{du2018gradient,song2019quadratic} showed that the upper bound of $\|\bm{H}_t - \bm{H}_0\|$ is also determined by the distance $R$,
where Lemma~\ref{lemma: H_t and H_0} in~\ref{section: supporting lemmas} gives the details.
In Theorem 1, we derive $R=\mathcal{O}(1/\sqrt{m})$, which helps control the size of $\|\bm{\phi}_t\|$ and $\|\bm{\psi}_t\|$ with an appropriate $m$.
The corresponding bounds for $R$, $\|\bm{\phi}\|$ and $\|\bm{\psi}\|$ are given in the proof of Theorem 1.
Finally, we introduce our main result on the convergence of NAG.
\begin{thm}
Define $\lambda = \frac{3\lambda_{min}(\bar{\bm{H}})}{4}$, $\lambda_{max} = \lambda_{max}(\bar{\bm{H}}) + \frac{\lambda}{4}$ and ${\kappa} = \frac{4}{3}\kappa(\bar{\bm{H}}) + \frac{1}{3}$.
Assume $\mathbf{w}_0^{r} \sim N(0, I_d)$ and $a^r \sim Rademacher(1/2)$ for all $r \in [m]$.
Suppose the number of nodes in the hidden layer is $m=\Omega(\lambda^{-2}n^{4} \kappa^2 \log^3(n/\delta))$.
If the learning rate $\eta = 1/(2\lambda_{max})$ and the momentum parameter $\beta = \frac{3\sqrt{{\kappa}} - 2}{3\sqrt{{\kappa}} + 2}$,
with probability at least $1-\delta$ over the random initialization, the residual error for NAG at any iteration $t$ satisfies
{{\begin{equation}
\label{eq:theorem_NAG}
\left\|\begin{bmatrix} \bm{\xi}_t \\ \bm{\xi}_{t-1} \end{bmatrix}\right\| \leq {(1 - \frac{1}{2\sqrt{{\kappa}}})^t} 2\gamma \textstyle
\left\|\begin{bmatrix} \bm{\xi}_0 \\ \bm{\xi}_{-1} \end{bmatrix}\right\| ,
\end{equation}}}
where $\gamma =12\sqrt{{\kappa}}$.
For every $r \in [m]$, we have
\begin{equation}
\label{w_r_distance}
\|\mathbf{w}_t^r - \mathbf{w}_0^r\| \leq \frac{48\sqrt{2n\kappa}}{\lambda\sqrt{m}}\|\bm{\xi}_0\| .
\end{equation}
\end{thm}
\textbf{Remark 1}. With the initialization $\mathbf{W}_{-1} = \mathbf{W}_0$, it has $\bm{\xi}_{-1}=\bm{\xi}_0$.
Thus, according to Theorem 1, the training error $\bm{\xi}_t$ of NAG converges linearly to zero at a $(1-\frac{1}{2\sqrt{\kappa}})^t$ rate after $t$ iterations, which indicates that NAG is able to achieve the global minimum, as GD and HB do.
\textbf{Remark 2}. As shown in~\cite{du2018gradient}, GD converges at a rate $(1-{\frac{\eta\lambda}{2}})^t$, but with a small learning rate $\eta = \mathcal{O}(\frac{\lambda}{n^2})$.
\cite{wu2019global} further improved the bound of the learning rate to $\mathcal{O}(\frac{1}{\|{\bar{\bm{H}}}\|})$, where $\|{\bar{\bm{H}}}\| \leq n$ and provides an $\mathcal{O}(\lambda/n)$ improvement.
This results in a faster convergence rate $(1-\Theta(1/\kappa))^t$ for GD.
As shown in Theorem 1, NAG obtains a smaller convergence rate $(1 - \Theta(1/\sqrt{\kappa}))^t$, which validates its acceleration over GD.
Moreover, compared to the convergence rate of HB proved in~\cite{wang2020provable},
our results show that NAG obtains a comparable convergence rate.
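Ignoring constant factors, the practical gap between the two rates can be quantified by the number of iterations needed to shrink the residual below a tolerance $\varepsilon$. A short back-of-the-envelope sketch (our own illustration, not from the cited works):

```python
import math

def iters_to_eps(rate, eps=1e-6):
    # smallest t with rate**t <= eps, i.e. t >= log(eps)/log(rate)
    return math.ceil(math.log(eps) / math.log(rate))

for kappa in [10.0, 100.0, 1000.0]:
    gd  = iters_to_eps(1.0 - 1.0 / kappa)                       # GD rate (1 - Theta(1/kappa))^t
    nag = iters_to_eps(1.0 - 1.0 / (2.0 * math.sqrt(kappa)))    # NAG rate from Theorem 1
    print(kappa, gd, nag, gd / nag)   # the speedup grows like sqrt(kappa)
```

The iteration-count ratio grows roughly like $\sqrt{\kappa}$, which is the familiar signature of momentum acceleration on ill-conditioned problems.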
\textbf{Remark 3}. The initial residual error satisfies $\|\bm{\xi}_0\|^2 = \mathcal{O}(n\log(m/\delta)\log^2(n/\delta))$ as shown in Lemma~\ref{lemma: init bound}.
Therefore, the upper bound $R$ of $\|\mathbf{w}_t^r - \mathbf{w}_0^r\|$ scales as $\mathcal{O}(1/\sqrt{m})$ for any $r\in[m]$ according to (\ref{w_r_distance}).
This is consistent with the NTK regime that the parameter is hardly changed when the neural network is over-parameterized.
Moreover, the number of changed activation patterns is bounded by $|S_i^{\perp}|=\sum_{r=1}^m \mathbb{I}\{\mathbb{I}\{\langle \mathbf{w}_t^r, \mathbf{x}_i \rangle\} \neq \mathbb{I}\{\langle \mathbf{w}_0^r, \mathbf{x}_i \rangle\}\} \leq 4mR$ according to Lemma~\ref{lemma: bound of S_i}.
As a result, $\sum_{i \in [n]}|S_i^{\perp}|/(mn)$ can be upper bounded by $4R$, which also scales as $\mathcal{O}(1/\sqrt{m})$.
On the other hand, GD has $R = \mathcal{O}(\frac{\sqrt{n}}{\lambda \sqrt{m}}\|\bm{\xi}_0\|)$ according to~\cite{du2018gradient}, which is smaller than that of NAG since $\kappa > 1$.
Furthermore, $R$ of HB scales as $\mathcal{O}(\frac{\sqrt{n\kappa}}{\lambda \sqrt{m}}\|\bm{\xi}_0\|)$~\cite{wang2020provable}, similar to NAG.
\section{Numerical experiments}
In this section, we conduct extensive experiments to validate our theoretical results, including i) the convergence comparison between NAG, HB and GD, and ii) the impact of over-parameterization on the quantities introduced in Remark 3.
\subsection{Setup}
Six benchmark datasets are used in the experiments: FMNIST~\cite{xiao2017fashion}, MNIST~\cite{lecun1998gradient}, CIFAR10~\cite{krizhevsky2009learning} and three UCI regression datasets (ENERGY, HOUSING and YACHT)~\cite{DBLP:conf/nips/LeeSPAXNS20}.
The pre-processing of the first three image classification datasets follows the procedures outlined in~\cite{arora2019fine}: we use the first two classes of images with 10,000 training instances, where the label of the first class is set to +1 and to -1 otherwise.
For all six datasets, we normalize all instances with the unit norm.
According to Table~\ref{table1}, the eigenvalues of the NTK matrix $\bar{\bm{H}}$ are used to determine the hyperparameters of each optimizer.
Note that the matrix $\bar{\bm{H}}$ has an analytic form, so its eigenvalues can be easily calculated based on~(\ref{limiting NTK}).
As described in Section~\ref{problem setting}, we use the same architecture and the initialization scheme of the neural network, which is trained with the square loss (\ref{eq:objective}) in the deterministic setting.
All experiments are conducted on 8 NVIDIA Tesla A100 GPUs and the code is written in JAX~\cite{jax2018github}.
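For readers who prefer a self-contained reference, the following pure-Python toy (our own miniature setup: $n = 4$ unit-norm inputs, $m = 200$ hidden units, hand-tuned $\eta$ and $\beta$ rather than the NTK-based schedule of Table~\ref{table1}) trains the two-layer ReLU network of Section~\ref{problem setting} with NAG and records the maximum weight displacement $\max_{r}\|\mathbf{w}^r_t - \mathbf{w}^r_0\|$ discussed in Remark 3:

```python
import math, random

random.seed(1)
d, m, n = 3, 200, 4
s3 = 1.0 / math.sqrt(3.0)
X = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [s3, s3, s3]]  # unit-norm inputs
y = [0.5, -0.3, 0.8, 0.1]

a  = [random.choice([-1.0, 1.0]) for _ in range(m)]           # fixed second layer
W0 = [[random.gauss(0.0, 1.0) for _ in range(d)] for _ in range(m)]
W  = [row[:] for row in W0]
Wp = [row[:] for row in W0]                                   # W_{-1} = W_0

def forward(V):
    # f(x_i) = (1/sqrt(m)) * sum_r a_r * ReLU(<v_r, x_i>)
    out = []
    for x in X:
        s = 0.0
        for r in range(m):
            z = sum(V[r][j] * x[j] for j in range(d))
            if z > 0.0:
                s += a[r] * z
        out.append(s / math.sqrt(m))
    return out

def loss(V):
    return 0.5 * sum((fi - yi) ** 2 for fi, yi in zip(forward(V), y))

eta, beta = 0.125, 0.6        # hand-tuned for this toy, not the theoretical schedule
L0 = loss(W)
for _ in range(500):
    # look-ahead point u_t = w_t + beta * (w_t - w_{t-1})
    U = [[W[r][j] + beta * (W[r][j] - Wp[r][j]) for j in range(d)] for r in range(m)]
    res = [fi - yi for fi, yi in zip(forward(U), y)]
    Wp = [row[:] for row in W]
    for r in range(m):
        g = [0.0] * d
        for i, x in enumerate(X):
            if sum(U[r][j] * x[j] for j in range(d)) > 0.0:
                for j in range(d):
                    g[j] += res[i] * x[j]
        c = eta * a[r] / math.sqrt(m)
        for j in range(d):
            W[r][j] = U[r][j] - c * g[j]

max_dist = max(math.sqrt(sum((W[r][j] - W0[r][j]) ** 2 for j in range(d)))
               for r in range(m))
print(L0, loss(W), max_dist)   # loss drops by orders of magnitude; weights barely move
```

Even in this tiny example the loss decreases to near zero while individual neurons stay close to their initialization, mirroring the two phenomena measured in the figures below.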
\begin{figure*}[!t]
\centering
\subfigure[FMNIST]{\includegraphics[scale=0.47]{fmnist.eps}}
\subfigure[MNIST]{\includegraphics[scale=0.47]{mnist.eps}}\\
\subfigure[CIFAR10]{\includegraphics[scale=0.47]{Cifar10.eps}}
\subfigure[ENERGY]{\includegraphics[scale=0.47]{energy.eps}}\\
\subfigure[HOUSING]{\includegraphics[scale=0.47]{housing.eps}}
\subfigure[YACHT]{\includegraphics[scale=0.47]{yacht.eps}}
\caption{Convergence comparison among GD, HB and NAG.
}
\label{Convergence}
\end{figure*}
\begin{figure*}[!t]
\centering
\subfigure[FMNIST]{\includegraphics[scale=0.47]{fmnist_maximum_distance.eps}}
\subfigure[MNIST]{\includegraphics[scale=0.47]{mnist_maximum_distance.eps}}\\
\subfigure[CIFAR10]{\includegraphics[scale=0.47]{Cifar10_maximum_distance.eps}}
\subfigure[ENERGY]{\includegraphics[scale=0.47]{energy_maximum_distance.eps}}\\
\subfigure[HOUSING]{\includegraphics[scale=0.47]{housing_maximum_distance.eps}}
\subfigure[YACHT]{\includegraphics[scale=0.47]{yacht_maximum_distance.eps}}
\caption{Maximum distance from initialization comparison for NAG with different width $m$.
}
\label{relative_distance}
\end{figure*}
\begin{figure*}[!t]
\centering
\subfigure[FMNIST]{\includegraphics[scale=0.47]{fmnist_activation_pattern.eps}}
\subfigure[MNIST]{\includegraphics[scale=0.47]{mnist_activation_pattern.eps}}\\
\subfigure[CIFAR10]{\includegraphics[scale=0.47]{Cifar10_activation_pattern.eps}}
\subfigure[ENERGY]{\includegraphics[scale=0.47]{energy_activation_pattern.eps}}\\
\subfigure[HOUSING]{\includegraphics[scale=0.47]{housing_activation_pattern.eps}}
\subfigure[YACHT]{\includegraphics[scale=0.47]{yacht_activation_pattern.eps}}
\caption{Activation pattern difference ratio comparison for NAG with different width $m$.
}
\label{activation_pattern}
\end{figure*}
\begin{figure*}[!t]
\centering
\subfigure[FMNIST]{\includegraphics[scale=0.47]{fmnist_maximum_distance_compare.eps}}
\subfigure[MNIST]{\includegraphics[scale=0.47]{mnist_maximum_distance_compare.eps}}\\
\subfigure[CIFAR10]{\includegraphics[scale=0.47]{Cifar10_maximum_distance_compare.eps}}
\subfigure[ENERGY]{\includegraphics[scale=0.47]{energy_maximum_distance_compare.eps}}\\
\subfigure[HOUSING]{\includegraphics[scale=0.47]{housing_maximum_distance_compare.eps}}
\subfigure[YACHT]{\includegraphics[scale=0.47]{yacht_maximum_distance_compare.eps}}
\caption{Maximum distance from initialization comparison among GD, HB and NAG with width $m=20000$.
}
\label{relative_distance_d}
\end{figure*}
\begin{figure*}[!t]
\centering
\subfigure[FMNIST]{\includegraphics[scale=0.47]{fmnist_activation_pattern_compare.eps}}
\subfigure[MNIST]{\includegraphics[scale=0.47]{mnist_activation_pattern_compare.eps}}\\
\subfigure[CIFAR10]{\includegraphics[scale=0.47]{Cifar10_activation_pattern_compare.eps}}
\subfigure[ENERGY]{\includegraphics[scale=0.47]{energy_activation_pattern_compare.eps}}\\
\subfigure[HOUSING]{\includegraphics[scale=0.47]{housing_activation_pattern_compare.eps}}
\subfigure[YACHT]{\includegraphics[scale=0.47]{yacht_activation_pattern_compare.eps}}
\caption{Activation pattern difference ratio comparison among GD, HB and NAG with width $m=20000$.
}
\label{activation_pattern_d}
\end{figure*}
\subsection{Results analysis}
\textbf{Convergence analysis}. We first provide the convergence comparison among NAG, HB and GD.
The neural network is trained with 5 different initialization seeds for each dataset, and the width of the hidden layer is $20000$.
The dashed line represents the mean training loss, while the shaded region represents the range between the maximum and minimum performance.
From Fig~\ref{Convergence}, we observe that NAG converges faster than GD on all six datasets.
Furthermore, it is noted that NAG achieves a comparable and even improved performance than HB.
This is in accordance with our theoretical findings.
\textbf{Impact of the over-parameterization}. Secondly, we evaluate the impact of the over-parameterization on two quantities relevant to our theoretical analysis.
One is the maximum distance $\max_{r \in [m]} \|\mathbf{w}_t^r - \mathbf{w}_0^r\|_2$, which is used to demonstrate the change of the parameters with respect to their initialization~\cite{du2018gradient}.
The other is the activation pattern difference ratio $\frac{\sum_{i=1}^n\sum_{r=1}^m \mathbb{I}\{\mathbb{I}\{\langle \mathbf{w}_t^r, \mathbf{x}_i \rangle\} \neq \mathbb{I}\{\langle \mathbf{w}_0^r, \mathbf{x}_i \rangle\} \}}{mn}$.
It describes the percentiles of pattern changes among $mn$ patterns~\cite{du2018gradient,wang2020provable}.
In Remark 3, we theoretically show that the upper bounds of these two quantities both scale as $\mathcal{O}(1/\sqrt{m})$, indicating that the parameters stay closer to their initialization as the width increases.
To observe the impact of the over-parameterization, we vary the width over $m \in \{1250, 2500, 5000, 10000, 20000\}$.
Each neural network of different width is trained with 5 different initialization seeds, where the solid line indicates the corresponding mean value.
As shown in Fig~\ref{relative_distance} and Fig~\ref{activation_pattern}, the maximum distance and the activation pattern difference ratio both decrease as the width increases.
Moreover, we compare the above two quantities among NAG, HB and GD.
From Remark 3, we show that the upper bound of the maximum distance from the initialization for NAG is larger than that of GD by a factor $\mathcal{O}(\sqrt{\kappa})$, resulting in a larger activation pattern difference ratio for NAG over GD.
On the other hand, in Remark 3, we also show that NAG has comparable upper bounds for these two quantities as HB.
We conduct the experiments in the same setting as the convergence analysis.
According to Fig~\ref{relative_distance_d} and Fig~\ref{activation_pattern_d}, the two quantities of NAG are larger than those of GD.
Compared to HB, NAG obtains comparable or smaller values.
These phenomena support our theoretical results.
\section{Conclusion and future work}
In this paper, we focus on analyzing the training trajectory of NAG for optimizing a two-layer fully connected neural network with ReLU activation.
By exploiting the connection between the NTK and the finite over-parametrized neural network, we show that NAG can achieve a non-asymptotic linear convergence rate to a global optimum.
In the discrete-time scenario, our result provides theoretical guarantees for the acceleration of NAG over GD.
In addition, our result implies NAG obtains a comparable convergence rate as HB.
An important future work is to extend our analysis to deep neural networks with different architectures (e.g., convolutional neural networks, graph neural networks, etc.) and activation functions (e.g., sigmoid, tanh, etc.).
Recently, plenty of works have studied the convergence of GD on different types of over-parameterized neural networks~\cite{DBLP:conf/icml/DuLL0Z19, DBLP:conf/nips/DuHSPWX19,arora2019exact}.
The key technical challenge lies in deriving and analyzing the associated residual dynamics, which might be complex due to the structure of the neural network.
Meanwhile, in practice, many applications require numerous entities, yet their interactions are highly incomplete.
Currently, the latent factor model has attracted a lot of attention as a way to deal with this problem~\cite{9238448,9159907,9601264,9647958}.
It brings an interesting future direction for studying the acceleration of NAG for these problems, where the induced dynamics can be investigated using our approach.
\section{Introduction}
The mechanism of particle production in proton-proton
collisions has been studied continuously for five decades.
Pions and kaons are believed to be produced in the fragmentation
process. The Lund string model is the state of the art in this context.
On the other hand, quarkonium production at high energies is studied
considering two-gluon processes. Depending on the $C$-parity we have
either the $g^* g^* \to {\cal Q}$ ($C$ = +1) or $g^* g^* \to {\cal Q} g$
($C$ = -1) process when limiting ourselves to color-singlet mechanisms.
Color-octet processes are not under full theoretical control.
Recently our group showed that the production of the $\eta_c$ quarkonium
can be understood assuming simple color-singlet $g^* g^* \to \eta_c$
fusion \cite{BPSS2020}. The situation with resonances, such as
$\rho^0$, $f_0(980)$ or $f_2(1270)$, is still different.
We have shown that at large isoscalar-meson transverse momenta
gluon-gluon fusion may be an important mechanism
\cite{LMS2020,LS2020}. At low transverse momenta one has to
include also a coalescence mechanism \cite{LS2020}.
For fully heavy tetraquark production the double-parton mechanism may be
required \cite{MSS2021}.
The situation with hidden-strangeness mesons was not discussed
carefully in the literature. In PYTHIA \cite{Pythia} such mesons
are produced within the Lund string model.
Recently there have been some works on modifications of strange-hadron
production. Effects of rope hadronization on strangeness enhancement
in $p p$ collisions at the LHC were discussed e.g. in \cite{BGLT2015}.
Here we wish to explore whether two-gluon fusion
may also be an important production mechanism.
It was suggested that the hadronic production of $\eta'$ meson
could be used to extract two-gluon transition form factor
\cite{MY2000,AP2002}.
The formalism of $\eta'$ meson production in proton-proton collisions
was discussed already some time ago, but no explicit calculation has been
performed so far.
In addition, it could not be compared to
the data, as the latter were not available at that time (2007).
In the meantime the PHENIX collaboration measured $\eta'$
production at $\sqrt{s}$ = 200 GeV but, to our knowledge, no comparison
with theoretical results was made.
Here we wish to study the situation more carefully and
make comparison to realistic calculation of the gluon-gluon fusion.
Exclusive $p p \to p p \eta'$ production was studied in \cite{SPT2007}
within KMR perturbative approach and the measured cross section could
not be explained at the relatively low $\sqrt{s}$ = 29.1 GeV energy
of the WA102 collaboration experiment at CERN SPS.
On the other hand soft pomeron/reggeon exchanges can be fitted to
describe the experimental data \cite{LNS2014}.
The presence of the two-gluon component in $\eta'$ may be relevant
in the context of its production in the hadronic reaction.
The gluon content in a meson may occur e.g. via mixing with glueballs.
Mixing of scalar glueball with the scalar-isoscalar ``quarkonia'' was
discussed e.g. in \cite{GGF2005}.
According to our knowledge there was not such a discussion for $\eta'$.
However, according to lattice QCD pseudoscalar glueball has a mass
of about 2.6 GeV \cite{Chen2006,SFAPRX2020}, so the mixing
should be rather small. Lower mass pseudoscalar glueball was discussed
in \cite{GLT2009}, which mixes however rather with radial excitations.
Review on experimental searches for the light pseudoscalar glueball
can be found in \cite{MCU2006}.
On the other hand axial anomaly may ``cause'' the presence of gluons
in the $\eta'$ wave function \cite{BM2019}.
No big gluonic content either in $\eta$ or $\eta'$ was found from
the $V \to P \gamma$ and $P \to V \gamma$ radiative decays in
\cite{EN2007}. The KLOE-2 collaboration found $Z_g^2 \approx$ 0.11
probability of the two-gluon component \cite{KLOE2}.
In contrast, the hadronization process for $\eta'$ production
has not been discussed carefully in the literature.
Some discussion was presented e.g. in \cite{AGS1994}, but in the context
of $e^+ e^-$ collisions.
In this paper we shall discuss possible consequences of gluonic component
in $\eta'$ for its production in proton-proton collisions.
We shall compare our results both with experimental data as well as
with results of the Lund string model.
\section{Sketch of the formalism}
The main color-singlet mechanism of $\phi$ meson production is illustrated
in Fig.\ref{fig:gg_phig}. In this case $\phi$ is produced
in association with an extra ``hard'' gluon due to C-parity conservation.
\begin{figure}
\begin{center}
\includegraphics[width=6.5cm]{pp_gg_phi.eps}
\end{center}
\caption{The leading-order diagram for direct $\phi$
meson production in the $k_t$-factorization approach.}
\label{fig:gg_phig}
\end{figure}
We calculate the dominant color-singlet $g g \to \phi g$
contribution taking into account transverse momenta of initial gluons.
In the $k_t$-factorization the NLO differential cross section can
be written as:
\begin{eqnarray}
\frac{d \sigma(p p \to \phi g X)}{d y_{\phi} d y_g d^2 p_{\phi,t} d^2 p_{g,t}}
&& =
\frac{1}{16 \pi^2 {\hat s}^2} \int \frac{d^2 q_{1t}}{\pi} \frac{d^2 q_{2t}}{\pi}
\overline{|{\cal M}_{g^{*} g^{*} \rightarrow \phi g}^{off-shell}|^2}
\nonumber \\
&& \times \;\;
\delta^2 \left( \vec{q}_{1t} + \vec{q}_{2t} - \vec{p}_{\phi,t} - \vec{p}_{g,t} \right)
{\cal F}_g(x_1,q_{1t}^2,\mu^2) {\cal F}_g(x_2,q_{2t}^2,\mu^2) \; ,
\label{kt_fact_gg_jpsig}
\end{eqnarray}
where ${\cal F}_g$ are unintegrated (or transverse-momentum-dependent)
gluon distributions.
The matrix elements were calculated as done e.g. for $J/\psi g$ production
in \cite{CS2018}.
The corresponding matrix element squared for the $g g \to \phi g$ is
\begin{equation}
|{\cal M}_{gg \to \phi g}|^2 \propto \alpha_s^3 |R(0)|^2 \; .
\label{matrix_element}
\end{equation}
Running coupling constants are used in the calculation.
Different combinations of renormalization scales were tried.
Finally we decided to use:
\begin{equation}
\alpha_s^3 \to \alpha_s(\mu_1^2) \alpha_s(\mu_2^2) \alpha_s(\mu_3^2) \; ,
\end{equation}
where $\mu_1^2 = \max(q_{1t}^2,m_t^2)$,
$\mu_2^2 = \max(q_{2t}^2,m_t^2)$ and
$\mu_3^2 = m_t^2$,
where here $m_t$ is the $\phi$ transverse mass.
The factorization scale in the calculation was taken as
$\mu_F^2 = (m_t^2 + p_{t,g}^2)/2$.
The radial wave function at zero can be estimated from the decay
of $\phi \to l^+ l^-$ as is usually done for
$J/\psi(c \bar c)$, see e.g. \cite{Mangoni}
\begin{equation}
\Gamma(\phi \to l^+ l^-) = 16 \pi \frac{\alpha_{em}^2 Q_s^2}{M_{\phi}^2}
|\Psi_{\phi}(0)|^2 \left(1 - \frac{16}{3} \frac{\alpha_s}{\pi} \right)
\; ,
\label{Gamma_phi_from_Psi(0)}
\end{equation}
where $Q_s$ is fractional charge of the $s$ quark.
Then
\begin{equation}
|\Psi_{\phi}(0)|^2 = \frac{\Gamma(\phi \to l^+ l^-)}{16 \pi \alpha_{em}^2 Q_s^2}
\frac{M_{\phi}^2}{1 - 16 \alpha_s/(3 \pi)} \; .
\label{Psi(0)}
\end{equation}
In the evaluation we use $\alpha_s$ = 0.3.
Using the branching fraction from the PDG \cite{PDG2020} we obtain $|\Psi(0)|$.
By convention $|R(0)|^2 = 4 \pi |\Psi(0)|^2$.
With $\alpha_s$ = 0.3 this gives $|R(0)|^2$ = 0.11 GeV$^3$.
We shall use this value to estimate the cross section for production
of $\phi$ meson. For comparison for $J/\psi$ (real quarkonium) one gets
$|R(0)|^2 \approx$ 0.8 GeV$^3$.
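The quoted number is easy to reproduce. The sketch below evaluates the Van Royen--Weisskopf relation, with $\alpha_{em}$ entering squared and the $\mathcal{O}(\alpha_s)$ correction included; the numerical inputs (total width, branching fraction, mass) are assumptions taken from standard PDG-style listings:

```python
import math

# PDG-style inputs (assumed numerical values) for phi(1020)
Gamma_tot = 4.249e-3      # total width [GeV]
BR_ee     = 2.98e-4       # branching fraction for phi -> e+ e-
M_phi     = 1.01946       # mass [GeV]
alpha_em  = 1.0 / 137.036
alpha_s   = 0.3
Qs2       = (1.0 / 3.0) ** 2   # squared fractional charge of the s quark

Gamma_ee = Gamma_tot * BR_ee
# Van Royen-Weisskopf relation with the O(alpha_s) radiative correction
denom = 16.0 * math.pi * alpha_em ** 2 * Qs2 * (1.0 - 16.0 * alpha_s / (3.0 * math.pi))
psi0_sq = Gamma_ee * M_phi ** 2 / denom
R0_sq = 4.0 * math.pi * psi0_sq
print(round(R0_sq, 3))    # ~0.11 GeV^3, consistent with the value quoted in the text
```

The resulting $|R(0)|^2 \approx 0.11$ GeV$^3$ is about seven times smaller than the $J/\psi$ value quoted above, reflecting the much smaller leptonic width of the $\phi$.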
Similarly we perform calculation for S-wave $\eta'$ meson production.
Here the lowest-order subprocess $g g \to \eta'$ is allowed by
positive $C$-parity of $\eta'$ mesons.
In the $k_t$-factorization approach the leading-order cross section
for the $\eta'$ meson production can be written as:
\begin{eqnarray}
\sigma_{pp \to \eta'} = \int d y d^2 p_t d^2 q_t \frac{1}{s x_1 x_2}
\frac{1}{m_{t,\eta'}^2}
\overline{|{\cal M}_{g^*g^* \to \eta'}|^2}
{\cal F}_g(x_1,q_{1t}^2,\mu_F^2) {\cal F}_g(x_2,q_{2t}^2,\mu_F^2) / 4
\; ,
\label{useful_formula}
\end{eqnarray}
that can be also used to calculate rapidity and transverse
momentum distribution of the $\eta'$ mesons.
Above ${\cal F}_g$ are unintegrated (or transverse-momentum-dependent)
gluon distributions and $\sigma_{g g \to \eta'}$ is
$g g \to \eta'$ (off-shell) cross section.
In the last equation:
$\vec{p}_t = \vec{q}_{1t} + \vec{q}_{2t}$ is transverse momentum
of the $\eta'$ meson
and $\vec{q}_t = \vec{q}_{1t} - \vec{q}_{2t}$ is auxiliary variable
which is used in the integration. Furthermore:
$m_{t,{\eta'}}$ is the so-called $\eta'$ transverse mass and
$x_1 = \frac{m_{t,\eta'}}{\sqrt{s}} \exp( y)$,
$x_2 = \frac{m_{t,\eta'}}{\sqrt{s}} \exp(-y)$.
The factor $\frac{1}{4}$ is the jacobian of transformation from
$(\vec{q}_{1t}, \vec{q}_{2t})$ to $(\vec{p}_t, \vec{q}_{t})$ variables.
The situation is illustrated diagrammatically in Fig.~\ref{fig:gg_etap}.
As for $\phi$ production, running coupling constants are used.
Different combinations of scales were tried. The best choice is:
\begin{equation}
\alpha_s^2 \to \alpha_s(\mu_1^2) \alpha_s(\mu_2^2) \; ,
\end{equation}
where $\mu_1^2 = \max(q_{1t}^2,m_t^2)$ and
$\mu_2^2 = \max(q_{2t}^2,m_t^2)$.
Above $m_t$ is transverse mass of the $\eta'$ meson.
The factorization scale(s) for the $\eta'$ meson production are
fixed traditionally as $\mu_F^2 = m_t^2$.
The $g^* g^* \to \eta'$ coupling has relatively simple one-term form:
\begin{equation}
T_{\mu \nu}(q_1,q_2) = F_{g^* g^* \eta'}(q_1,q_2)
\epsilon_{\mu \nu \alpha \beta} q_1^{\alpha} q_2^{\beta} \; ,
\label{gg_etap_coupling}
\end{equation}
where the object $F_{g^* g^* \eta'}(q_1,q_2)$ is known as the two-gluon
transition form factor.
The matrix element to be used in the $k_t$-factorization is then:
\begin{equation}
{\cal M}^{a b} = \frac{q_{1,\perp}^{\mu} q_{2,\perp}^{\nu}}
{|{\bf q}_1| |{\bf q}_2| } T_{\mu \nu} \; .
\end{equation}
In contrast to the convention for two-photon transition form factor
the strong coupling constants are usually absorbed into the two-gluon
form factor definition.
The matrix element squared for the $g g \to \eta'$ subprocess is
\begin{equation}
|{\cal M}_{gg \to \eta'}|^2 \propto
F_{g^* g^* \to \eta'}^2(q_{1t}^2,q_{2t}^2)
\propto \alpha_s^2 F_{\gamma^* \gamma^* \to \eta'}^2(q_{1t}^2,q_{2t}^2)
\; ,
\label{matrix_element_etap}
\end{equation}
where $F_{g^* g^* \to \eta'}^2(q_{1t}^2,q_{2t}^2)$
and $F_{\gamma^* \gamma^* \to \eta'}^2(q_{1t}^2,q_{2t}^2)$
are two-gluon and two-photon transition form factors of the $\eta'$
meson, respectively.
It was discussed, e.g. in \cite{KPK2003}, in the leading-twist collinear
approximation. Such an approach is valid for $Q_1^2 = q_{1t}^2 \gg 0$
and $Q_2^2 = q_{2t}^2 \gg 0$. Here we need such a transition form
factor also for $Q_1^2, Q_2^2 \sim 0$.
There is a simple relation between the two-gluon and two-photon
form factors for the quark-antiquark systems
(see e.g.\cite{BPSS2020,LMS2020,LS2020}).
$\eta'$ meson may have also the two-gluon component in its Fock
decomposition \cite{BM2019}.
The form factor found there can be approximately parametrized as
\begin{equation}
\bar Q^2 F_{g^* g^* \to \eta'}^2(Q_1^2,Q_2^2) \approx 0.2 \pm 0.1 \; \textrm{GeV}
\; ,
\label{LT_gg_formfactor}
\end{equation}
where $\bar Q^2 = (Q_1^2 + Q_2^2)/2$.
A better approach would be to use their Eqs.(5.13-5.16) with parameters
given there. The result from \cite{KPK2003} is:
\begin{equation}
F(\bar Q^2,\omega) = 4 \pi \alpha_s \frac{f_P}{\bar Q^2}
\frac{\sqrt{n_f}}{N_c} A(\omega) \; .
\label{LT_FF}
\end{equation}
In the factorized (in $\bar Q^2$ and $\omega$) formula:
\begin{equation}
A(\omega) = A_{q \bar q}(\omega) + \frac{N_c}{2 n_f} A_{gg}(\omega)
\; ,
\label{qqbar_plus_gg}
\end{equation}
where
\begin{eqnarray}
A_{q \bar q}(\omega) &=& \int_0^1 d x \; \Phi_1(x,\mu_F^2)
\frac{1}{1 - \omega^2(1-2x)^2} \; ,
\\
A_{gg}(\omega) &=& \int_0^1 d x \; \frac{\Phi_g(x,\mu_F^2)}{x \bar x}
\frac{1 - 2x}{1 - \omega^2(1-2x)^2} \;
\end{eqnarray}
and $\Phi_1$ and $\Phi_g$ are singlet and gluon distribution functions,
respectively.
Above
\begin{equation}
\omega = \frac{Q_1^2 - Q_2^2}{Q_1^2 + Q_2^2} \; .
\label{asymmetry_parameter}
\end{equation}
$\Phi_1$ and $\Phi_g$ undergo QCD evolution \cite{KPK2003}, which is
also included in the present paper.
\begin{figure}
\begin{center}
\includegraphics[width=6.5cm]{pp_gg_eta.eps}
\end{center}
\caption{The leading-order diagram for $\eta'$ meson production
in the $k_t$-factorization approach.}
\label{fig:gg_etap}
\end{figure}
\subsection{$F_{\gamma^* \gamma^* \to \eta'}$ form factor}
In Ref.\cite{BGPSS2019} we have shown how to calculate
the transition form factor from the light-cone $Q \bar Q$ wave
function of the $\eta_c$ quarkonium. Here we shall follow the same idea,
but for a light quark-antiquark system.
The flavour wave function of the $\eta'$ meson can be approximated as
\cite{DGH1992}
\begin{equation}
| \eta' \rangle \approx \frac{1}{\sqrt{3}}( u \bar u + d \bar d + s \bar s )
\; .
\label{flavour_structure}
\end{equation}
The spatial wave function could be calculated e.g. in potential models.
The momentum wave function can be then obtained as a Fourier transform of
the spatial one. We shall not follow this path in the present study.
Instead we shall take a simple, but reasonable, parametrization
of the respective light-cone wave function.
In principle, each component in (\ref{flavour_structure}) may have
different spatial as well as momentum wave function.
Here for simplicity we shall assume one effective wave function for
each flavour component.
We shall take the simple parametrization of the momentum wave function
\begin{equation}
u(p) \propto \exp \left( -p^2/(2 \beta^2) \right) \; .
\label{momentum_wf}
\end{equation}
The light-cone wave function is then obtained via Terentev's
transformation (see e.g. \cite{BGPSS2019}).
We normalize the light-cone wave function as:
\begin{equation}
\int_0^1 \frac{dz}{z(1-z)} \frac{d^2 k}{16 \pi^3} |\phi(z,k_t)|^2 = 1 \; .
\label{WF_normalization}
\end{equation}
Above
\begin{equation}
\phi(z,k_t) \propto \sqrt{M_{q \bar q}} \exp\left(-p^2/(2 \beta^2)\right)
\end{equation}
and the so-called Terentev prescription, relating the rest-frame and
light-cone variables, is used:
\begin{equation}
p^2 = \frac{1}{4} \left(M_{q \bar q}^2 - 4 m_{eff}^2 \right) \; .
\end{equation}
Above $M_{q \bar q}$ is the invariant mass of the $q \bar q$ system.
The parameters in the above equations, $m_{eff}$ (hidden in $\phi(z,k_t)$)
and $\beta$, are in principle free.
Here we shall take:
\begin{equation}
m_{eff} = (2/3) m_q + (1/3) m_s \; ,
\label{effective_mass}
\end{equation}
where $m_q$ and $m_s$ are constituent masses of light (u,d) and strange
quarks, respectively. Therefore $m_{eff} \sim$ 0.4 GeV.
The weights are taken from the flavour wave function (\ref{flavour_structure}).
We shall try a few different $\beta$ values in the range (0.4-0.6) GeV.
The normalization constant can then be obtained from the normalization
condition of the light-cone wave function.
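These steps can be checked numerically. The sketch below (plain Python,
midpoint quadrature) evaluates Eq.~(\ref{effective_mass}) for assumed
constituent masses and fixes the normalization constant of $\phi(z,k_t)$
according to Eq.~(\ref{WF_normalization}); the mass values and
$\beta = 0.5$ GeV are illustrative choices, not fits.

```python
import math

# (i) effective mass of Eq. (effective_mass) for assumed constituent
# masses; (ii) normalization constant N of phi(z, kt) = N sqrt(M_qq)
# exp(-p^2/(2 beta^2)) according to Eq. (WF_normalization).
M_Q, M_S = 0.33, 0.50                            # GeV, assumed values
m_eff = (2.0 / 3.0) * M_Q + (1.0 / 3.0) * M_S    # ~0.39 GeV
BETA = 0.5                                       # GeV

def norm_integral(nz=400, nk=300, kmax=3.0):
    """int_0^1 dz/(z(1-z)) int d2k/(16 pi^3) |phi|^2 for the
    unnormalized phi; midpoint rule, with d2k = 2 pi kt dkt."""
    hz, hk = 1.0 / nz, kmax / nk
    total = 0.0
    for i in range(nz):
        z = (i + 0.5) * hz
        zz = z * (1.0 - z)
        for j in range(nk):
            kt = (j + 0.5) * hk
            m2qq = (kt * kt + m_eff * m_eff) / zz     # invariant mass^2
            p2 = 0.25 * (m2qq - 4.0 * m_eff * m_eff)  # Terentev prescription
            phi2 = math.sqrt(m2qq) * math.exp(-p2 / BETA**2)
            total += phi2 * 2.0 * math.pi * kt / (16.0 * math.pi**3 * zz)
    return total * hz * hk

I1 = norm_integral()
N = 1.0 / math.sqrt(I1)   # normalization constant of the wave function
```

The integrand is exponentially damped at the $z$ endpoints and at large
$k_t$, so a modest grid already gives a stable value of $N$.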
Having fixed the light-cone wave function, one can calculate the
electromagnetic $\gamma^* \gamma^* \to \eta'$ transition form factor as:
\begin{eqnarray}
F(Q_1^2, Q_2^2) = - \frac{1}{\sqrt{3}} (e_u^2 + e_d^2 + e_s^2) \sqrt{N_c} \, 4 m_{eff}
&\cdot& \int {dz d^2 \mbox{\boldmath $k$} \over z(1-z) 16 \pi^3} \psi(z,\mbox{\boldmath $k$}) \nonumber \\
\Big\{
{1-z \over (\mbox{\boldmath $k$} - (1-z) \mbox{\boldmath $q$}_2 )^2 + z (1-z) \mbox{\boldmath $q$}_1^2 + m_{eff}^2}
&+& {z \over (\mbox{\boldmath $k$} + z \mbox{\boldmath $q$}_2 )^2 + z (1-z) \mbox{\boldmath $q$}_1^2 + m_{eff}^2}
\Big\} \, .
\label{LC_formfactor}
\end{eqnarray}
$F(0,0)$ is known and can be calculated from the radiative decay
width \cite{BABAR2018}.
The present BABAR data \cite{BABAR2018} are not sufficiently precise
to fix the parameters of our model ($m_{eff}$ and $\beta$).
They could be adjusted in the future to precise
experimental data for the $e^+ e^- \to e^+ e^- \eta'$ reaction from
Belle II.
The formula (\ref{LC_formfactor}) can be reduced to a single integral
\begin{eqnarray}
F(Q_1^2, Q_2^2) &=&
\frac{1}{\sqrt{3}} (e_u^2 + e_d^2 + e_s^2) f_{\eta'} \nonumber \\
&\cdot& \int_0^1 dz
\Big\{
\frac{(1-z)\phi(z)}{(1-z)^2 Q_1^2 + z(1-z) Q_2^2 + m_{eff}^2}
+
\frac{z \phi(z)}{z^2 Q_1^2 + z(1-z)Q_2^2 + m_{eff}^2}
\Big\}
\label{formfactor_fromDA}
\end{eqnarray}
when introducing the so-called distribution amplitude $\phi(z)$
and the decay constant $f_{\eta'}$ (see e.g. \cite{BGPSS2019}).
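A minimal numerical version of Eq.~(\ref{formfactor_fromDA}) reads as
follows; the asymptotic amplitude $\phi(z) = 6z(1-z)$ and the values
$f_{\eta'} \approx 0.1$ GeV, $m_{eff} = 0.4$ GeV are illustrative
assumptions, so this sketch is not expected to reproduce the measured
$F(0,0)$ exactly.

```python
# Sketch of the single-integral form factor, Eq. (formfactor_fromDA),
# with the asymptotic DA phi(z) = 6 z (1-z); f_etap and m_eff are
# assumed illustrative values, not fitted parameters.
PREF = (1.0 / 3.0**0.5) * (4.0/9.0 + 1.0/9.0 + 1.0/9.0)   # charge factor
F_ETAP = 0.1       # GeV, assumed decay constant
M_EFF = 0.4        # GeV

def ff_da(q1sq, q2sq, n=20000):
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        z = (i + 0.5) * h
        phi = 6.0 * z * (1.0 - z)
        d1 = (1.0 - z)**2 * q1sq + z * (1.0 - z) * q2sq + M_EFF**2
        d2 = z**2 * q1sq + z * (1.0 - z) * q2sq + M_EFF**2
        total += (1.0 - z) * phi / d1 + z * phi / d2
    return PREF * F_ETAP * total * h

# At Q1^2 = Q2^2 = 0 the integral collapses to 1/m_eff^2.
print(ff_da(0.0, 0.0), ff_da(5.0, 0.0), ff_da(5.0, 5.0))
```

As expected, the form factor decreases monotonically with either
virtuality.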
We shall also use a simple parametrization of the transition form
factor, called the non-factorized monopole for brevity:
\begin{equation}
F^{nf, monopole}(Q_1^2,Q_2^2) = F(0,0)
\frac{\Lambda^2}{\Lambda^2 + Q_1^2 + Q_2^2} \; .
\label{monopole}
\end{equation}
This two-parameter formula is correctly normalized at $Q_1^2$ = 0
and $Q_2^2$ = 0 \cite{BABAR2018}. It also has the correct asymptotic dependence on
$\bar Q^2 = (Q_1^2 + Q_2^2)/2$. This is very similar to the approach
taken long ago by Brodsky and Lepage \cite{BL1981} in the case
of the neutral pion.
The so-called vector meson dominance model (factorized monopole)
\begin{equation}
F^{VDM}(Q_1^2,Q_2^2) = F(0,0) \frac{m_V^2}{m_V^2+Q_1^2}
\frac{m_V^2}{m_V^2+Q_2^2}
\end{equation}
has an incorrectly strong $\bar Q^2$ dependence \cite{BABAR2018}.
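The difference in asymptotics is easy to exhibit numerically. The sketch
below compares $\bar Q^2 F$ at equal virtualities for the two
parametrizations, with $F(0,0) = 0.342$ GeV$^{-1}$, $\Lambda = 1$ GeV as
in the text, and an assumed $m_V^2 = 0.6$ GeV$^2$ for illustration:

```python
# Large-Qbar^2 behaviour of the non-factorized monopole (Eq. monopole)
# versus the factorized VDM form; m_V^2 is an assumed illustrative value.
F00 = 0.342     # GeV^-1, from the radiative decay width
LAM2 = 1.0      # Lambda^2 in GeV^2, the value used in the text
MV2 = 0.6       # GeV^2, assumed vector-meson mass squared

def f_monopole(q1sq, q2sq):
    return F00 * LAM2 / (LAM2 + q1sq + q2sq)

def f_vdm(q1sq, q2sq):
    return F00 * (MV2 / (MV2 + q1sq)) * (MV2 / (MV2 + q2sq))

# At Q1^2 = Q2^2 = Qbar^2: Qbar^2 * F_monopole -> F00 * Lambda^2 / 2,
# a constant, while Qbar^2 * F_VDM keeps falling like 1/Qbar^2.
for qbar2 in (10.0, 100.0, 1000.0):
    print(qbar2, qbar2 * f_monopole(qbar2, qbar2), qbar2 * f_vdm(qbar2, qbar2))
```

The monopole thus reproduces the constant $\bar Q^2 F$ expected at
leading twist, whereas the factorized VDM product dies out too fast.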
We shall compare results obtained with the form factor calculated
with the light-cone wave function (\ref{LC_formfactor}) with the
parametrization (\ref{monopole}) for $\Lambda$ = 1 GeV.
Results of such a calculation will be treated as a reference
for the other approaches.
The effect of internal transverse momenta of quarks and antiquarks
in a meson was discussed long ago in \cite{Ong1995}, postulating a
wave function of the $\pi^0$ in impact-parameter space and
including suppression due to the so-called Sudakov form factor.
In Ref.\cite{KPK2019} the authors tried to adjust the coefficient of the
lowest-order Gegenbauer polynomials to describe the BABAR data
\cite{BABAR2018} for two virtual photons within the leading-twist
collinear approximation.
However, the corresponding error bars on expansion coefficients
are very large.
The two-gluon transition form factor is closely related to the
two-photon transition form factor, provided the meson is
of the quark-antiquark type, i.e. its wave function is as in
Eq.(\ref{flavour_structure}). Then
\begin{equation}
|F_{g^* g^* \to \eta'}(Q_1^2,Q_2^2)|^2 =
|F_{\gamma^* \gamma^* \to \eta'}(Q_1^2,Q_2^2)|^2
\frac{g_s^2}{g_{em}^2} \frac{1}{4 N_c (N_c^2 - 1)}
\frac{1}{\langle e_q^2 \rangle^2} \; .
\label{FF_gamgam_to_gg}
\end{equation}
Above, the factor $g_{em}^2$ must be included only if it enters the
definition of the $F_{\gamma^* \gamma^* \to \eta'}$ transition form factor;
usually it does not.
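The overall size of the conversion factor in
Eq.~(\ref{FF_gamgam_to_gg}) can be estimated as in the sketch below,
assuming $\langle e_q^2 \rangle = (e_u^2 + e_d^2 + e_s^2)/3 = 2/9$ for the
flavour content of Eq.~(\ref{flavour_structure}), $g^2 = 4\pi\alpha$ for
both couplings, and a typical $\alpha_s \approx 0.3$ (all illustrative
choices):

```python
# Numerical size of the colour/charge factor in Eq. (FF_gamgam_to_gg),
# assuming <e_q^2> = 2/9 for the flavour content of the eta' and
# g^2 = 4 pi alpha for both couplings; alpha_s = 0.3 is an assumed value.
NC = 3
ALPHA_EM = 1.0 / 137.036
ALPHA_S = 0.3

eq2_avg = (4.0/9.0 + 1.0/9.0 + 1.0/9.0) / 3.0    # <e_q^2> = 2/9
colour = 1.0 / (4.0 * NC * (NC**2 - 1))          # = 1/96
factor = (ALPHA_S / ALPHA_EM) * colour / eq2_avg**2

print(factor)   # |F_gg|^2 = factor * |F_gamgam|^2; here ~8.7
```

So, for these inputs, $|F_{g^* g^*}|^2$ comes out roughly an order of
magnitude larger than $|F_{\gamma^* \gamma^*}|^2$.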
In Fig.\ref{fig:DA_x} we show $q \bar q$ and $g g$
distribution amplitudes from \cite{KPK2003} for different evolution scales.
Such distribution amplitudes can be used to calculate
$F_{g^* g^* \to \eta'}$ (see Eq.(\ref{qqbar_plus_gg})) needed
in calculating $\eta'$ production in proton-proton collisions.
\begin{figure}
\includegraphics[width=7cm]{DA_z_evolution}
\caption{$q \bar q$ (upper curves) and $g g$ (lower curves) distribution
amplitudes for three different
scales: 1 GeV$^2$ (solid), 10 GeV$^2$ (dashed) and 100 GeV$^2$ (dotted).}
\label{fig:DA_x}
\end{figure}
\section{Results}
In this section we present our results for $\phi$ and $\eta'$ meson
production.
\subsection{$\phi$ production}
In this subsection we show the cross section for $\phi$ meson
production for $\sqrt{s}$ = 200 GeV, $\sqrt{s}$ = 2.76 GeV and
$\sqrt{s}$ = 8 TeV (see Fig.\ref{fig:phi_invariant_cs}).
Our results are shown together with the PHENIX \cite{PHENIX2011} and ALICE
\cite{ALICE2017,kunthia} experimental data, respectively.
In each case the result of the calculation lies below the
experimental data. This suggests that gluon-gluon fusion is not the
dominant production mechanism of the $\phi$ meson.
The fragmentation mechanism was considered in \cite{SIM2014,SI2020}
and it may be the dominant mechanism of $\phi$ meson production.
\begin{figure}
\includegraphics[width=8cm]{gg_gphi_pt_200GeV.eps}
\includegraphics[width=8cm]{gg_gphi_pt_276TeV.eps} \\
\includegraphics[width=8cm]{gg_gphi_pt_8TeV.eps}
\caption{Invariant cross section for $\phi$ production
at $\sqrt{s}$ = 200 GeV, 2.76 TeV and 8 TeV. We show the experimental data
of the PHENIX collaboration \cite{PHENIX2011}, ALICE collaboration
\cite{ALICE2017} and results from \cite{kunthia}.
Here $\Psi(0)$ was calculated from Eq.(\ref{Psi(0)}).
}
\label{fig:phi_invariant_cs}
\end{figure}
\subsection{$F_{\gamma^* \gamma^* \to \eta'}$ form factor}
Before presenting our results for the $\eta'$ production we wish to show
our results for the $F_{\gamma^* \gamma^* \to \eta'}$ form factor.
We will start with our results obtained from the LCWF for
$F_{\gamma \gamma \to \eta'}(0,0)$.
In Fig.\ref{fig_F00} we present $F_{\gamma \gamma \to \eta'}(0,0)$
as a function of $\beta$ and $m_{eff}$. There is a small dependence
on the parameters. The experimental value is
\begin{equation}
F_{\gamma^* \gamma^* \to \eta'}(0,0) =
\sqrt{\frac{4 \Gamma_{\eta' \to 2 \gamma}}{\pi \alpha_{em}^2 m_{\eta'}^3}}
= 0.342 \pm 0.006 \; \textrm{GeV}^{-1} \; .
\end{equation}
A broad range of $\beta$ and $m_{eff}$ is allowed, taking into account
the simplicity of our approach.
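The quoted value is easy to cross-check numerically with approximate,
PDG-like inputs (the width and mass below are assumptions of this
sketch):

```python
import math

# Cross-check of the quoted F(0,0) using the formula in the text,
# F(0,0) = sqrt(4 Gamma(eta' -> 2 gamma) / (pi alpha_em^2 m_eta'^3)).
# The inputs are approximate PDG-like values (assumed for this sketch).
GAMMA_2GAM = 4.28e-6        # GeV, i.e. ~4.28 keV
M_ETAP = 0.9578             # GeV
ALPHA_EM = 1.0 / 137.036

f00 = math.sqrt(4.0 * GAMMA_2GAM / (math.pi * ALPHA_EM**2 * M_ETAP**3))
print(f00)   # ~0.34 GeV^-1, compatible with the quoted 0.342 +- 0.006
```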
\begin{figure}
\includegraphics[width=7cm]{F00_betameff.eps}
\caption{$F_{\gamma^* \gamma^* \to \eta'}(0,0)$ as a function of $\beta$
and $m_{eff}$.}
\label{fig_F00}
\end{figure}
Most of the studies on transition form factors have concentrated on
the case of only one virtual photon. In Fig.\ref{fig:Q2F}
we show $Q^2 F_{\gamma^* \gamma^* \to \eta'}(Q^2)$ for different
values of model parameters $\beta$ = 0.4, 0.5, 0.6 GeV.
In the leading-twist approach, without QCD evolution of the distribution
amplitudes, one should get a constant at large photon virtualities.
In our approach this happens only at extremely large virtualities;
for $Q^2 <$ 50 GeV$^2$ our model clearly contains higher twists.
In the collinear leading-twist approach the rise of
$Q^2 F_{\gamma^* \gamma^* \to \eta'}(Q^2)$ is
caused by the evolution (see e.g. \cite{KPK2013}).
\begin{figure}
\includegraphics[width=7cm]{F_Q2.eps}
\caption{$Q^2 F(Q^2)$ as a function of one photon virtuality.
We show results for $m_{eff}$ = 0.4 GeV and for different
values of $\beta$ = 0.4, 0.5, 0.6 GeV (from bottom to top).
For comparison we show experimental data from \cite{CLEO,L3,BABAR2011}.}
\label{fig:Q2F}
\end{figure}
In Fig.\ref{fig:F_Q12Q22} we show the two-photon $\eta'$ form factor
as a function of both photon virtualities.
The form factor drops quickly from
$F_{\gamma^* \gamma^* \to \eta'}(0,0)$ in the region
$Q_1^2 <$ 10 GeV$^2$, $Q_2^2 <$ 10 GeV$^2$.
The change beyond this region is rather mild.
We show our result obtained using the light-cone wave function (left
panel) and for comparison the result from Ref.\cite{KPK2019} (right panel).
The leading-twist result is reliable only for larger virtualities,
where the two results are similar.
\begin{figure}
\includegraphics[width=7cm]{formfactor_Q12Q22_beta_0.5.eps}
\includegraphics[width=7cm]{formfactor_Q12Q22_from_kornelija.eps}
\caption{$F_{\gamma^* \gamma^* \to \eta'}(Q_1^2,Q_2^2)$ obtained with
the light-cone wave function
for $m_{eff}$ = 0.4 GeV and $\beta$ = 0.5 GeV (left panel)
and the leading-twist result from Ref.\cite{KPK2019} (right panel).}
\label{fig:F_Q12Q22}
\end{figure}
In order to better visualize our result we show in
Fig.\ref{fig:R_Q12Q22} also the ratio:
\begin{equation}
R(Q_1^2,Q_2^2) = F^{LC}(Q_1^2,Q_2^2) / F^{nf, monopole}(Q_1^2,Q_2^2)
\;
\label{ratio}
\end{equation}
and a similar ratio obtained when using formula (\ref{formfactor_fromDA}).
We observe that the form factor calculated from (\ref{LC_formfactor})
deviates only slightly from the simple parametrization (\ref{monopole}).
The two parametrizations almost coincide over a broad range of
$(Q_1^2,Q_2^2)$.
A similar result is obtained when using Eq.(\ref{formfactor_fromDA})
with asymptotic distribution amplitude.
\begin{figure}
\includegraphics[width=8cm]{formfactor_ratio_Q12Q22_beta_0.5.eps}
\includegraphics[width=8cm]{formfactor_ratio_Q12Q22_fromDA.eps}
\caption{$R(Q_1^2,Q_2^2)$ (see Eq.(\ref{ratio})) with the
$\gamma^* \gamma^* \to \eta'$ form factor
calculated with $\eta'$ light-cone wave function (left panel).
In this calculation $m_{eff}$ = 0.4 GeV and $\beta$ = 0.5 GeV
are used as an example.
In the right panel we show similar ratio obtained from
Eq.(\ref{formfactor_fromDA}) with asymptotic distribution amplitude.
}
\label{fig:R_Q12Q22}
\end{figure}
In Fig.\ref{fig:F_omega} we show the two-photon transition form factor
as a function of asymmetry parameter $\omega$
(see Eq.(\ref{asymmetry_parameter}))
for different values of $\bar Q^2$ specified in the figure caption.
In contrast to the non-factorized monopole form factor
(\ref{monopole}), we get some dependence on the asymmetry parameter
$\omega$. This dependence is somewhat similar to the result of
Ref.\cite{KPK2019}.
In contrast to \cite{KPK2019}, our dependence on $\omega$ is not
universal, i.e. it differs for different values of $\bar Q^2$.
\begin{figure}
\includegraphics[width=8cm]{formfactor_omega.eps}
\caption{The $\gamma^* \gamma^* \to \eta'$ form factor
as a function of $\omega$ for fixed values of $\bar Q^2$
(from top to bottom: 2, 5, 10, 20, 50, 100 GeV$^2$)
calculated with $\eta'$ light-cone wave function.
In this calculation $m_{eff}$ = 0.4 GeV and $\beta$ = 0.5 GeV.
}
\label{fig:F_omega}
\end{figure}
The two-photon form factor will be transformed to two-gluon form factor
and the latter will be used in the calculation of $\eta'$ production.
For this purpose first a grid for $F_{\gamma^* \gamma^* \to \eta'}$
in the $(\bar Q^2,\omega)$ plane is prepared.
The grid is then used in the interpolation when calculating
differential distributions of $\eta'$ meson
in proton-proton collisions.
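A minimal sketch of this grid-plus-interpolation step is given below,
with the monopole form of Eq.~(\ref{monopole}) standing in for the
tabulated $F_{\gamma^* \gamma^* \to \eta'}$; in the actual calculation
the grid stores the light-cone or leading-twist result.

```python
# Sketch of the (Qbar^2, omega) grid and bilinear interpolation used to
# evaluate the form factor inside the cross-section integration; the
# monopole of Eq. (monopole) stands in for the tabulated form factor.
F00, LAM2 = 0.342, 1.0
DQ, DW = 0.5, 0.02                               # grid spacings

def ff_exact(qbar2, omega):
    # for the monopole, Q1^2 + Q2^2 = 2 Qbar^2 independently of omega
    return F00 * LAM2 / (LAM2 + 2.0 * qbar2)

q_nodes = [DQ * i for i in range(101)]           # Qbar^2 in [0, 50] GeV^2
w_nodes = [-1.0 + DW * j for j in range(101)]    # omega in [-1, 1]
grid = [[ff_exact(q, w) for w in w_nodes] for q in q_nodes]

def ff_interp(qbar2, omega):
    """Bilinear interpolation on the precomputed grid."""
    iq = min(int(qbar2 / DQ), len(q_nodes) - 2)
    iw = min(int((omega + 1.0) / DW), len(w_nodes) - 2)
    tq = (qbar2 - q_nodes[iq]) / DQ
    tw = (omega - w_nodes[iw]) / DW
    g00, g01 = grid[iq][iw], grid[iq][iw + 1]
    g10, g11 = grid[iq + 1][iw], grid[iq + 1][iw + 1]
    return ((1 - tq) * (1 - tw) * g00 + (1 - tq) * tw * g01
            + tq * (1 - tw) * g10 + tq * tw * g11)
```

For a smooth form factor, a grid of this density reproduces the exact
values to well below the per-cent level, which is sufficient inside the
$k_t$-factorization integration.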
\subsection{$\eta'$ production}
In this subsection we discuss the $\eta'$ production considering the
simple gluon-gluon fusion mechanism illustrated in Fig.\ref{fig:gg_etap}.
What are the typical gluon transverse momenta for the hadronic
$g^* g^* \to \eta'$ process?
In Fig.\ref{fig:dsig_dk1tdk2t_pt_full} we show the
distribution of the cross section,
integrated over the $\eta'$ transverse momentum, in the $(q_{1t},q_{2t})$ plane.
Both small and large gluon virtualities enter into the $p_t$-integrated
cross section.
\begin{figure}
\includegraphics[width=8cm]{dsig_dk1tdk2t_etap_phenix_pt_full.eps}
\caption{
Two-dimensional map in ($q_{1t},q_{2t}$) for the full range
of $\eta'$ transverse momentum.
Here $\sqrt{s}$ = 200 GeV and the KMR UGDF was used.
}
\label{fig:dsig_dk1tdk2t_pt_full}
\end{figure}
In Fig.\ref{fig:dsig_dk1tdk2t_pt_regions} we show similar distributions
as above but for two different regions of meson transverse momentum $p_t$.
The larger the $p_t$, the larger the gluon transverse momenta that enter the game.
\begin{figure}
\includegraphics[width=7cm]{dsig_dk1tdk2t_etap_phenix_pt_5.eps}
\includegraphics[width=7cm]{dsig_dk1tdk2t_etap_phenix_pt_10.eps}
\caption{
Two-dimensional map in ($q_{1t},q_{2t}$) for two different ranges
of $\eta'$ transverse momentum: 4 GeV $< p_t <$ 6 GeV (left panel)
and 9 GeV $< p_t <$ 11 GeV (right panel).
Here $\sqrt{s}$ = 200 GeV and the KMR UGDF was used.
}
\label{fig:dsig_dk1tdk2t_pt_regions}
\end{figure}
Fig.\ref{fig:dsig_dk1t2dk2t2_pt_fixed} shows similar distributions,
but in the $(q_{1t}^2,q_{2t}^2)$ plane usually used for the presentation
of transition form factors.
One can observe that at large transverse momentum $p_t$ one is sensitive
to the region of $Q_1^2$ very small and $Q_2^2$ very large or
$Q_1^2$ very large and $Q_2^2$ very small.
These are regions relevant for the leading-twist collinear approach
to two-gluon transition form factor. This is also the region
of the phase space where
the meson light-cone approach gives a small relative enhancement
compared to the naive monopole parametrizations (see Fig.\ref{fig:R_Q12Q22}).
We observe that at $p_t \sim$ 10 GeV the gluon virtualities
$Q_1^2 >$ 20 GeV$^2$ or $Q_2^2 >$ 20 GeV$^2$ are clearly in the domain
of leading-twist approach \cite{KPK2003}.
\begin{figure}
\includegraphics[width=6cm]{dsig_dk1t2dk2t2_pt_5.0.eps}
\includegraphics[width=6cm]{dsig_dk1t2dk2t2_pt_10.0.eps}
\caption{
Two-dimensional map in ($q_{1t}^2,q_{2t}^2$) for
4.5 GeV $< p_t <$ 5.5 GeV (left panel) and
9.5 GeV $< p_t <$ 10.5 GeV (right panel).
Here $\sqrt{s}$ = 200 GeV. In this calculation the KMR UGDF was used
and the light-cone wave function with $\beta$ = 0.5 GeV.
}
\label{fig:dsig_dk1t2dk2t2_pt_fixed}
\end{figure}
This is seen more clearly in Fig.\ref{fig:dsig_dQ2ave}, where
we display the distribution in $\bar Q^2$. Only large values of $\bar Q^2$ occur
for $p_t >$ 10 GeV. This situation is generic, independent
of the form factor used.
\begin{figure}
\includegraphics[width=7cm]{dsig_dQ2_eta.eps}
\caption{
Distribution in $Q_{ave}^2 = \bar Q^2$ for the two distinct cases
from the previous
figure: $p_t$ = 5 $\pm$ 0.5 GeV (solid line) and
$p_t$ = 10 $\pm$ 0.5 GeV (dashed line).
Here $\sqrt{s}$ = 200 GeV. In this calculation the KMR UGDF was used
and the light-cone wave function with $\beta$ = 0.5 GeV.
}
\label{fig:dsig_dQ2ave}
\end{figure}
Finally, in Fig.\ref{fig:dsig_dyd2pt_phenix} we show the invariant cross
section for $\eta'$ meson production at the RHIC energy
$\sqrt{s}$ = 200 GeV, relevant for the PHENIX experiment
\cite{PHENIX2011}.
In the left panel we show different results:\\
(a) with the non-factorized monopole form factor (solid line),\\
(b) with the form factor calculated from the LCWF with
$\beta$ = 0.5 GeV (dashed line),\\
(c) with the leading-twist parametrization (\ref{LT_gg_formfactor}) of
the $F_{g^* g^* \to \eta'}$ result from \cite{KPK2003}
(dash-dotted line),\\
(d) without the $F_{g^* g^* \to \eta'}$ form factor, except for the
normalization constant (dotted line).\\
The result obtained with the leading-twist parametrization
(\ref{LT_gg_formfactor}) can be taken seriously only for
$p_t >$ 5 GeV, where $Q_1^2$ or $Q_2^2$ is larger than 5 GeV$^2$.
We also used the formalism of collinear distribution amplitudes from
\cite{KPK2003}, including their QCD evolution, to calculate
$F_{g^* g^* \to \eta'}(q_{1t}^2,q_{2t}^2)$ from evolved
$q \bar q$ and $g g$ distribution amplitudes.
The evolution scale of distribution amplitudes is taken as
$\mu^2 = \bar Q^2 + \mu_0^2$, where $\mu_0^2$ = 1 GeV$^2$ is used
in our calculation.
The corresponding result is shown by the red thick solid line.
The line is below other lines in the region where the experimental
data exist. For comparison we show somewhat arbitrarily also result with
$q \bar q$ alone (red thick dashed line) and $g g$ alone (red thick
dotted line) in Eq.(\ref{qqbar_plus_gg}). Both results are much
larger than the result obtained when both components are included coherently.
Clearly a strong destructive interference effect of both contributions
is observed. The opposite
sign of the $g g$ distribution amplitude would cause constructive
interference in Eq.(\ref{qqbar_plus_gg}).
In all cases we obtain a smaller cross section than measured by the PHENIX
collaboration at RHIC. This suggests that gluon-gluon fusion
is probably not the dominant mechanism of $\eta'$ production,
at least in the measured region of transverse momenta.
A natural candidate is the fragmentation process, which has so far
not been discussed in the literature in the context of $\eta'$ production.
Neglecting the form factor altogether leads to an overestimation of the cross
section at large transverse momenta of the $\eta'$ (see the dotted line
in the left panel of Fig.\ref{fig:dsig_dyd2pt_phenix}).
One could expect such a result in the TMD (transverse-momentum-dependent
gluon distribution) approach \cite{Echevarria:2019ynx}, where
the incoming gluons are taken on mass shell. The present result
therefore shows shortcomings of the TMD approach in the context
of meson production.
\begin{figure}
\includegraphics[width=8cm]{eta_pt_200GeV.eps}
\includegraphics[width=8cm]{eta_d2pt_200GeV_DA.eps}
\caption{Invariant cross section as a function of meson transverse momentum.
Here $\sqrt{s}$ = 200 GeV and the KMR UGDF was used in the calculation.
In the left panel we show results for the non-factorized monopole, the LCWF
with $\beta$ = 0.5 GeV, and the simple LT parametrization.
In the right panel we show results obtained using distribution
amplitudes from \cite{KPK2003}. We show the full result as well as
result when only $q \bar q$ or only $g g$ components in
(\ref{qqbar_plus_gg}) are included.
}
\label{fig:dsig_dyd2pt_phenix}
\end{figure}
To better illustrate the role of the initial $g g$ component
in the approach with distribution amplitudes,
in Fig.\ref{fig:different_initial_gg} we show the final
(QCD-evolved) result for different initial $g g$ components:
as in \cite{KPK2003} (plus), with opposite sign (minus), and
with the initial $g g$ component set to zero. The final results are
quite different. We conclude that the $\eta'$ transverse momentum distribution
is very sensitive to the unknown nonperturbative $g g$ distribution
amplitude.
\begin{figure}
\includegraphics[width=8cm]{eta_pt_200GeV_DA_pm.eps}
\caption{Invariant cross section as a function of meson transverse
momentum in the approach with distribution amplitudes and
different initial $\Phi_{gg}$.
Here $\sqrt{s}$ = 200 GeV and the KMR UGDF was used in the calculation.
}
\label{fig:different_initial_gg}
\end{figure}
So far we have used only one unintegrated gluon distribution.
In Fig.\ref{fig:dsig_dyd2pt_phenix_ugdf} we compare results obtained
using different UGDFs. The result obtained with the Jung-Hautmann UGDF
is similar to that obtained with the KMR UGDF.
The GBW UGDF gives a sizeable cross section only at low $\eta'$ transverse
momenta, as it does not include higher-order perturbative effects.
\begin{figure}
\includegraphics[width=8cm]{eta_pt_200GeV_UGDF.eps}
\caption{Invariant cross section as a function of meson transverse momentum.
Here $\sqrt{s}$ = 200 GeV.
We show results with the KMR (solid line), Jung-Hautmann (dashed line)
and GBW (dash-dotted line) UGDFs.
In this calculation the form factor based on the LCWF with
$\beta$ = 0.5 GeV is used for illustration.
}
\label{fig:dsig_dyd2pt_phenix_ugdf}
\end{figure}
What about larger energies?
The number of $\eta'$ per event as a function of $\eta'$ transverse
momentum is shown in Fig.\ref{fig:dN_dpt_ALICE} for $\sqrt{s}$ = 8 TeV.
We show the result for the non-factorized monopole (\ref{monopole})
two-photon transition form factor (solid line), for the light-cone wave
function with $\beta$ = 0.5 GeV (dashed line), for the simple
leading-twist parametrization (\ref{LT_gg_formfactor}) of
the two-gluon transition form factor, and for the collinear
approach with evolution of the distribution amplitudes (see Eq.(\ref{LT_FF})).
For comparison we also show the result from the Lund string model.
The two-gluon mechanism gives a much smaller cross section
than the Lund string model. So even at the LHC we do not
find any region of phase space ($y, p_t$) where two-gluon fusion
is the dominant mechanism of $\eta'$ production.
\begin{figure}
\includegraphics[width=8cm]{eta_pt_8TeV.eps}
\caption{Number of $\eta'$ mesons per event as a function of meson
transverse momentum.
Here $\sqrt{s}$ = 8 TeV and the KMR UGDF was used in the calculation.
The result of the Lund string model simulations is shown
as ``data points'' for comparison.}
\label{fig:dN_dpt_ALICE}
\end{figure}
\section{Conclusions}
In this study we have considered the production of two isoscalar mesons
with hidden strangeness, $\phi$ and $\eta'$, via gluon-gluon fusion
in proton-proton collisions at different collision energies relevant
for RHIC and the LHC. The calculations have been performed within the
$k_t$-factorization approach with the KMR UGDF, which is known
to effectively include higher-order corrections \cite{MS2016,MS2019}.
For $\phi$ production we extended the calculation performed earlier
for $J/\psi g$ production by using an effective spatial wave function
at the origin,
$R_{s \bar s}(0)$, which can be estimated by adjusting to the
experimental branching ratio of the decay $\phi \to e^+ e^-$.
Having fixed this parameter, we have compared the results of our calculation
with the PHENIX and ALICE experimental data. In both cases
the calculated cross section stays below the experimental data,
by two (PHENIX) and one (ALICE) orders of magnitude.
This shows that another mechanism is more important.
The fragmentation $s / {\bar s} \to \phi$ is a natural candidate.
Inspired by the successful description
of $\eta_c$ production in proton-proton collisions \cite{BPSS2020},
here we have considered the $g^* g^* \to \eta'$ fusion with
off-shell initial gluons.
The coupling can be described by the two-gluon nonperturbative
transition form factor. For the quark-antiquark states the latter object
is closely related to the two-photon transition form factor,
studied theoretically and measured by the CLEO, L3 and BABAR collaborations.
The two-photon form factor has been calculated using a light-cone
wave function for different values of model parameters.
The so-obtained form factor has been compared with a simple
non-factorized monopole parametrization as well as the results
obtained recently by Kroll and Passek-Kumericki in the leading-twist
collinear NLO approach.
The two-photon form factors have been translated to the two-gluon ones
assuming the dominance of the quark-antiquark components in the
Fock expansion of the $\eta'$ wave function. The two-gluon form factor
was then used in the $k_t$-factorization approach to calculate the cross
section for $\eta'$ production in $p p$ collisions.
The results have been compared with the PHENIX experimental data.
Contrary to the expectations of the community,
the calculated cross section is definitely smaller than the one
measured by the PHENIX collaboration.
The situation may improve at larger energies, but the relevant cross
section at the LHC has not been measured so far.
We have presented our predictions for the LHC and have compared
our two-gluon fusion result with the result from the Pythia generator.
For $\sqrt{s}$ = 8 TeV the cross section from the Lund string model is
much above that of the two-gluon fusion mechanism.
Respective data from the ALICE collaboration would be very important
to clarify the situation.
\vskip+5mm
{\bf Acknowledgments}\\
A.S. is indebted to Wolfgang Sch\"afer for long-standing collaboration
on quarkonia production.
We are also indebted to Francois Fillion-Gourdeau for pointing to us
Ref.\cite{FJ2009}, Kornelija Passek-Kumericki for providing us
the two-photon $\eta'$ form factor from their leading-twist analysis
in Ref.\cite{KPK2019} and interesting discussion on
transition form factors, Arvind Khuntia for providing us
experimental data of the ALICE collaboration for $\phi$ production
at $\sqrt{s}$ = 8 TeV presented in his PhD thesis, Jacek Biernat and
Jacek Otwinowski for providing us results of the Lund-string model
generator Pythia, Francesco Giacosa for a discussion on gluonic
components in mesons, and Jacek Oko{\l}owicz for careful reading
of this manuscript.
This study was partially supported by the Polish National Science Center
grant UMO-2018/31/B/ST2/03537 and by the Center for Innovation and
Transfer of Natural Sciences and Engineering Knowledge in Rzesz{\'o}w.
\def\references{\section*{References}%
\begin{quotation}\mbox{}\par}
\def\refer#1\par{{\setlength{\parindent}{-\leftmargin}\indent#1\par}}
\def\end{quotation}{\end{quotation}}
{\noindent\small{\bf Abstract:}
The maximum mass of a neutron star (NS) is poorly defined. Theoretical attempts to define this mass have thus far been unsuccessful. Observational results currently provide the only means of narrowing this mass range down. Eclipsing X-ray binary (XRB) pulsar systems are the only interacting binaries in which the mass of the NS may be measured directly. Only 10 such systems are known to exist, 6 of which have yielded NS masses in the range 1.06 - 1.86 M$_{\odot}$. We present the first orbital solutions of two further eclipsing systems, OAO 1657-415 and EXO 1722-363, whose donor stars have only recently been identified. Using observations obtained with the VLT/ISAAC NIR spectrograph, our initial work was concerned with providing an accurate spectral classification of the two counterpart stars, leading to a consistent explanation of the mechanism for spin period evolution of OAO 1657-415. Calculating radial velocities allowed orbital solutions for both systems to be computed. These are the first accurate determinations of the NS and counterpart masses in XRB pulsar systems to be made employing NIR spectroscopy.
}
\section{Introduction}
Despite extensive and ongoing theoretical work on the NS equation of state (EOS), the precise nature of the fundamental physical properties of NS matter is still poorly defined. Observational work can assist in reducing the number of contending theories by eliminating those that place unrealistic constraints on the mass range of observed NS. NS masses may only be determined from binary systems; within this paper we consider a specific class of these objects: those containing an eclipsing X-ray binary pulsar. At present only 10 such systems are known within our Galaxy. Prior to this work, 6 of the NS in these systems had mass determinations, with the donor star in each observable optically. Within this paper we discuss the first mass estimates found for NS employing near-infrared (NIR) spectroscopy. We have studied two eclipsing X-ray pulsar systems containing a high-mass donor, OAO 1657-415 and EXO 1722-363. Initially, an accurate spectral classification for each of the donor stars was conducted utilising observations made using the VLT/ISAAC NIR spectrograph and current NIR spectral atlases. Using multi-epoch NIR spectra of each system we were able to determine the radial velocity of the donor star in each of the two systems, thus enabling the construction of an orbital solution. This solution was then employed in calculating the mass estimate of each NS and the corresponding high-mass donor, placing constraints upon the system inclination and separation of the binary system.
\section{Spectral classification}
\subsection{Spectral classification of EXO 1722-363}
EXO 1722-363 (alternatively designated IGR J17252-3616) was discovered in 1984 by {\it EXOSAT} Galactic plane observations (Warwick et al. 1988). {\it XMM-Newton} observations narrowed the source position down to 4$^{\prime\prime}$. This allowed the identification of an IR counterpart, 2MASS J17251139-3616575 (with magnitudes J = 14.2, H = 11.8 and K$_{s}$ = 10.7; Zurita Heras et al. 2006).
Examining Fig.1 we can see that all of the absorption lines in this spectrum are narrow, indicative of the object being a supergiant. EXO 1722-363 shows the singlet He\,{\sc i} 2.058 $\mu$m line in emission, this line being highly sensitive to wind and temperature properties. The N {\sc iii} 2.115 $\mu$m emission line is a common feature in B0-B1 supergiants. The absence of strong Br$\gamma$ 2.1655 $\mu$m emission features implies that EXO 1722-363 does not exhibit a strong stellar wind.
From a qualitative comparison of spectra from Hanson et al. 2005, we identify EXO 1722-363 as being of spectral type B0-B1 Ia (Mason et al. 2009).
By comparison with evolutionary models of rotating massive stars (Meynet \& Maeder, 2000) we find an initial progenitor mass for EXO 1722-363 in the range 30M$_{\odot}$ - 40M$_{\odot}$. Following the method for determining spectroscopic distance as detailed in Bibby et al. 2008, we determined a distance for EXO 1722-363 of $8.0_{-2.0}^{+2.5}$ kpc, which is comparable within errors to the distance deduced in Thompson et al., 2007.
Comparing our calculated distance with model fluxes derived from spectral fits to EXO 1722-363 (Corbet et al., 2005), we found that EXO 1722-363 has an intrinsic X-ray flux variability (in the range 2-60 keV) such that $F_{\rm min}$ = 0.78 $\times$ 10$^{-10}$ erg cm$^{-2}$ s$^{-1}$ and $F_{\rm max}$ = 12.2 $\times$ 10$^{-10}$ erg cm$^{-2}$ s$^{-1}$. We derive X-ray luminosities for EXO 1722-363 such that L$_{\rm {X_{min}}}$ = 3.4 $\times$ 10$^{35}$ erg s$^{-1}$ and L$_{\rm {X_{max}}}$ = 1.6 $\times$ 10$^{37}$ erg s$^{-1}$. We find this luminosity range entirely consistent with EXO 1722-363 being the donor within an SGXRB system.
\begin{figure}[h]
\centering
\includegraphics[width=12cm]{OAO_and_EXO_together_medium_v2.pdf}
\caption{Left figure: comparison of EXO 1722-363 with template O9-B3 Ia spectra from Hanson et al., 2005. Right figure: the topmost trace shows a spectrum of OAO 1657-415 compared with an Ofpe/WNL spectrum of IRS16NW from Martins et al., 2007. }
\end{figure}
\subsection{Spectral classification of OAO 1657-415}
OAO 1657-415 was first detected over 30 years ago by the {\it Copernicus} X-ray satellite.
From an examination of the orbital parameters of the X-ray pulsar it was determined that OAO 1657-415 was a high-mass system, with the mass of the donor lying between 14-18 M$_{\odot}$ and a corresponding radius range of 25-32R$_{\odot}$. Determination of these stellar parameters led to a suggested classification of B0-6 I (Chakrabarty et al.\ 1993). The correct identification of the donor in this system required the position of OAO 1657-415 to be accurately determined. This was achieved by the {\it Chandra X-Ray Observatory}, narrowing the X-ray location error radius down to 0.5$^{\prime\prime}$. Optical imaging of this position did not detect any donor candidates down to a magnitude of V$>$23. Near-infrared imaging was employed to overcome the significant level of interstellar reddening, resulting in the identification of a donor located within the {\it Chandra} error radius. A corresponding IR counterpart was located in the {\it 2MASS} catalogue, 2MASS J17004888-4139214 (with magnitudes J = 14.1, H = 11.7 and K$_{\rm s}$ = 10.4), with A$_{V}$ = 20.4 $\pm$ 1.3, located at a distance of 6.4 $\pm$ 1.5 kpc (Chakrabarty et al.\ 2002).
NIR K$_{\rm s}$ band spectroscopy of the donor obtained in 2008 (Mason et al,\ 2009) led to a re-evaluation of the spectral classification. Close examination revealed that OAO 1657-415 shared a similar spectral morphology with that of Ofpe/WNL stars. These are stars in transition between the OB main sequence and hydrogen depleted Wolf-Rayet stars, whose evolution follows from a wide range of progenitor masses.
The spectrum of the mass donor in OAO1657-415 is presented in Fig. 1, and is dominated by He\,{\sc i}
2.058 $\mu$m and Br$\gamma$ emission, the former stronger than the latter. We find a poor
correspondence with the spectra of B0-6 supergiants (Hanson et al, 1996, Hanson et al, 2005) - as suggested for the
mass donor by Chakrabarty et al,\ 2002 on the basis of a combination of photometric and X-ray data. However, comparison with the spectra of massive transitional objects presented by Morris et al,\ 1996 is more encouraging. In particular OAO 1657-415 shows pronounced similarities to the hot Ofpe/WNL
stars. Consequently we may not {\em a priori} determine a unique distance to OAO 1657-415 on the basis of this classification. We are thus left with rather unconstraining limits of 4.4~kpc $<$ {\it d} $<$ 12~kpc. In turn this results in $1.5 \times 10^{36}~{\rm erg~s^{-1}} < L_{\rm X} < 10^{37}~{\rm erg~s^{-1}}$, also entirely consistent with the observed luminosities of SGXRBs. Adopting the distance derived by Audley et al.\ 2006 leads to log(L/L$_{\odot}$) $\sim$ 5.7. For such a luminosity, comparison with the evolutionary tracks for massive stars (Meynet \& Maeder, 2000)
implies an initial mass of $\sim$40~M$_{\odot}$.
\section {OAO 1657-415 : A mechanism for spin-period evolution}
We now turn to the implications of the Ofpe/WNL classification for the X-ray properties of OAO 1657-415. The
anomalous position of OAO 1657-415 within the Corbet diagram (Figure 2) (Corbet et al.\ 1986) is then naturally
explained in terms of the properties of its stellar wind. Compared to normal OB supergiants
(Crowther et al,\ 2006), Ofpe/WNL stars typically demonstrate systematically lower wind velocities and higher mass loss rates (Martins et al.\ 2007).
This combination of wind properties permits a higher accretion rate and hence transfer of angular momentum to the
NS, in turn leading to a smaller (instantaneous) equilibrium spin period with respect to normal OB
supergiants ($P_{\rm spin}
\propto$ \.{M}$^{-3/7}$$v_{\infty}^{12/7}$ from Eqn. 12 of Waters et al, 1989, where $P_{\rm spin}$, \.{M}
and $v_{\infty}$ are the spin period of the NS and the mass loss rate and terminal velocity of the mass donor
wind respectively).
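To make the effect of these wind properties on the equilibrium spin period concrete, the scaling above can be evaluated directly; the factors below are chosen arbitrarily for illustration:

```python
def spin_period_ratio(mdot_factor, vinf_factor):
    """Ratio P_spin(new)/P_spin(old) under P_spin ~ Mdot^(-3/7) * v_inf^(12/7)."""
    return mdot_factor ** (-3.0 / 7.0) * vinf_factor ** (12.0 / 7.0)

# e.g. doubling the mass-loss rate while halving the terminal wind velocity
r = spin_period_ratio(2.0, 0.5)   # = 2**(-15/7), roughly 0.23
```

So a wind that is twice as dense and half as fast shortens the equilibrium spin period by a factor of about four, in the sense required to move OAO 1657-415 off the locus of normal wind-fed SGXRBs in the Corbet diagram.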
\begin{figure}[h]
\centering
\includegraphics[width=7cm]{corbet_diagram.pdf}
\caption{Corbet diagram marking position of OAO 1657-415 and other HMXBs. OAO 1657-415 is marked by an X, EXO 1722-363 by a filled diamond. SGXRB Roche-Lobe Overflow systems (Squares), Be/X binaries (Triangles), SGXRB Wind-fed systems (Diamonds) and anomalous systems (+) }
\end{figure}
\section{Orbital solution for EXO 1722-363}
The orbital solution we have calculated was obtained from archival ESO VLT data. Using a small subset of the available archival data (11 spectra taken at different epochs spanning a wide range of orbital phase, from a set of 104 in total), we were able to measure radial velocities and construct the orbital solution shown in Fig.~3 (left). These spectra were centred on 2.1$\mu$m, had an integration time of 700s, and were obtained using the SW MRes mode with a 0.6$^{\prime\prime}$ slit. This resulted in high S/N spectra at a resolution R $\approx$ 4200.
The resulting NS mass that we have determined from our orbital solution for EXO 1722-363 is consistent with the canonical mass of 1.4~M$_{\odot}$ measured in most other eclipsing HMXBs, except for that in Vela X-1, (Quaintrell et al.,\ 2003). The NS mass range we have determined stems from a lower and upper limit obtained using the following constraints - {\it Lower} : the system is viewed edge on (i.e. {\it i} = 90$^\circ$), {\it Upper} : the donor star fills its Roche lobe. Utilising this orbital solution we find a NS mass range of 1.5 - 1.6 M$_{\odot}$ (Mason et al. 2010). In a similar way the measured mass and radius of the supergiant donor, $M \sim
13 - 15$~M$_{\odot}$ and $R \sim 25 - 28$~R$_{\odot}$ is determined, and this lends support to the B0-1 Ia spectral classification that we
previously found (Mason et al.,\ 2009).
\begin{figure}[h]
\centering
\includegraphics[width=14cm]{merged_orbital_EXO_OAO_v2.pdf}
\caption{Left : Radial velocity data for the donor in EXO~1722--363. Right : Radial velocity data for the donor in OAO 1657-415. In both cases the solid line is the best fitting sinusoid with three free parameters, the dashed line is that with a fixed zero phase in line with the published ephemeris. In the case of EXO~1722--363 the orbital phase is based upon the ephemeris of Thompson et al, 2007. For OAO 1657-415 the orbital phase is based upon the ephemeris of Bildsten et al, 1997. }
\end{figure}
\section{Orbital solution for OAO 1657-415}
As the mass donor in OAO 1657-415 is faint (H $\sim$ 11.7) we employed the NIR spectrometer ISAAC on the VLT to obtain high resolution (R $\sim$ 3000) and S/N spectra in the H band.
Observations were conducted between 2008 May 13th and 2008 September 25th in the SW MRes mode with a 0.8$^{\prime\prime}$ slit. Cross-correlation was performed using the standard IRAF routine {\it fxcor}. 12 high quality spectra were obtained that covered a wide range of orbital phase, sufficient to determine a dynamical mass solution for OAO 1657-415 (Fig. 3).
Utilising this orbital solution we find a NS mass range of $\approx$ 1.4 - 1.7 M$_{\odot}$ with a corresponding mass range for the counterpart star of $\approx$ 14 - 17 M$_{\odot}$. For a more precise mass determination please refer to Mason et al,\ 2011.
\section*{Acknowledgements}
ABM acknowledges support from an STFC studentship. JSC acknowledges support from an RCUK fellowship.
This research is partially supported by grants AYA2008-06166-C03-03 and
Consolider-GTC CSD-2006-00070 from the Spanish Ministerio de Ciencia e
Innovaci\'on (MICINN). Based on observations carried out at the European Southern Observatory, Chile, through programmes 081.D-0073(A and B) and 077.B-0872(A).
\footnotesize
\beginrefer
\refer Audley, M.~D., Nagase, F., Mitsuda, K., et al., 2006, MNRAS, 367, 1147
\refer Bibby, J.~L., Crowther, P.~A., Furness, J.~P., et al., 2008, MNRAS, 386, 23
\refer Bildsten, L., Chakrabarty, D., Chiu, J. et al. 1997, ApJS, 113, 367
\refer Chakrabarty, D., Grunsfeld, J.~M., Prince, T.~A., et al., 1993, ApJ, 403, L33
\refer Chakrabarty, D., Wang, Z., Juett, A.~M., et al., 2002, ApJ, 573, 789
\refer Corbet, R.~H.~D., Thorstensen, J.~R., Charles, P.~A. et al, 1986, MNRAS, 220, 1047
\refer Corbet, R.~H.~D., Markwardt, C.~B., Swank, J.~H., 2005, ApJ, 633, 377
\refer Crowther, P.~A., Lennon, D.~J., Walborn, N.~R., 2006, A\&A, 446, 279
\refer Hanson, M.~M., Conti, P.~S., Rieke, M.~J, 1996, ApJS, 107, 281
\refer Hanson, M. M., Kudritzki, R.P., Kenworthy, M. A., et al, 2005, ApJS, 161, 154
\refer Martins, F., Genzel, R., Hillier, D.~J. et al, 2007, A\&A, 468, 233
\refer Mason, A.~B., Clark, J.~S., Norton, A.~J., et al., 2009, A\&A, 505, 281
\refer Mason, A.~B., Norton, A.~J., Clark, J.~S. et al, 2010, A\&A, 509, 79
\refer Mason, A.B., Norton, A.~J., Clark, J.~S. et al, 2011, Submitted.
\refer Meynet, G., Maeder, A., 2000, A\&A, 361, 101
\refer Morris, P.~W., Eenens, P.~R.~J., Hanson, M.~M. et al., 1996, ApJ, 470, 597
\refer Quaintrell, H., Norton, A.~J., Ash, T.~D.~C. et al, 2003, A\&A, 401, 313
\refer Thompson, T.~W.~J., Tomsick, J.~A., in 't Zand, J.~J.~M., et al., 2007, ApJ, 661, 447
\refer Warwick, R. S., Norton, A. J., Turner, et al., 1988, MNRAS, 232, 551
\refer Waters, L.~B.~F.~M., van Kerkwijk, M.~H. et al, 1989, A\&A, 223, 196
\refer Zurita Heras, J. A., de Cesare, G., Walter, R., et al., 2006, A\&A, 448, 261
\end{quotation}
\end{document}
\section{Introduction}
The concept of two dimensional (2D) Dirac materials began with graphene~\cite{Novoselov2004,castro2012,Novoselov1379,Novoselov2012,castro},
which was not only a big step into the 2D world but also a playground for 2+1 dimensional quantum field theories. When the Dirac theory
is realized in condensed matter, it can be deformed in many ways~\cite{Vozmediano2016}. One interesting deformation of the Dirac cone is to tilt it~\cite{Cabra2013}.
There are already materials which host tilted 2+1 dimensional Dirac cones. In addition to the organic conductor
$\alpha$-BEDT~\cite{Tajima2009,Kobayashi2009,Katayama2006,Goerbig2008,Hirata2016}, which has a weak coupling between layers,
monolayer borophene~\cite{Mannix15,Cheng2016,Feng2017} has recently been added to the list of tilted Dirac cone materials.
The effects of tilting and of an anisotropic Fermi velocity on the electronic and collective properties of tilted Dirac cones
have been investigated~\cite{Nishine2010,Nishine2011,Goerbig2014,Suzumura2015,Tarun2017,Tilted2018}.
In Ref.~\onlinecite{Tilted2018}, we have obtained an analytic representation of the polarization function for
tilted Dirac cone systems from which we have found a kink in the plasmon dispersion.
Moreover, a strong enough tilt gives rise to an additional overdamped plasmon mode which energetically lives in the intraband
particle-hole continuum (PHC). In the long-wavelength limit the undamped and overdamped plasmon modes have square-root
and linear dispersions, respectively, and both depend on the direction of the momentum ${\boldsymbol{q}}$.
Not only is the physics of 2D monolayer borophene as a prototypical tilted Dirac cone material interesting,
but it is also possible to think of multilayers of such 2D systems composed of tilted Dirac cones, or combinations
of tilted and upright Dirac cones. When these layers come close together in a periodic arrangement,
situations with potentially new physics can be created. The early investigations of heterostructures of 2D systems, especially of the 2D
electron gas, date back to the development of molecular beam epitaxy, which was
employed to grow macroscopic samples of A-B superlattices such as Ga-As compounds~\cite{Dingle1980}.
The simplest of such systems are double layers~\cite{Jafari2014TI}.
When the layers are far enough apart to prevent band overlap, the collective characteristics of the heterostructure of 2D systems will differ
from those of a monolayer. The double layer combination of the 2D electron gas in either a uniform dielectric
background~\cite{Chang1980,DasSarma1981,Das-sarma1982,Olego1982,Pinczuk1986,Santoro1988,Hwang2009,Stauber2012} or arbitrary dielectric
media~\cite{Profumo2010,Profumo2012,Gan2012} forms two plasmon modes
corresponding to in-phase and out-of-phase plasmon oscillations of individual layers.
The in-phase mode, which appears at higher energy, disperses in the long-wavelength limit as $\omega\sim\sqrt{q}$,
while the out-of-phase mode disperses linearly, $\omega\sim q$.
Indeed the square-root behavior of a monolayer follows from a general hydrodynamic treatment, is independent of
microscopic details~\cite{Fetter1973}, and holds for every 2D electron gas.
Given that, in addition to organic materials, a 2D monolayer of borophene hosts a tilted Dirac cone, it is timely to consider
double layers of tilted Dirac cone systems. This can be either a double layer of borophene-borophene (DLB) or a
double layer of borophene-graphene (DLBG), where the borophene hosts a tilted Dirac cone spectrum, while graphene
hosts an upright Dirac cone. Based on our analytical calculation of the polarization function for the tilted Dirac cone~\cite{Tilted2018},
we will investigate how the kink feature of monolayer tilted Dirac cone shows up in the double layer setting.
In the double layer system, the PHC will be the union of the PHCs of the individual layers. We have
established (and will further establish) that in a monolayer tilted Dirac cone system there exists an $\omega_{\rm kink}({\boldsymbol{q}},\eta)$ curve
at which a kink in the plasmon dispersion appears, which is controlled by the tilt parameter $\eta$.
In this work, we find that in DLB systems, when the dopings of the two layers are different, there will be two such
scales, and therefore the number of kinks in each of the in-phase and out-of-phase modes will be doubled.
More interestingly in the DLBG system, although the graphene layer does not have a kink in the decoupled limit,
as a result of coupling by Coulomb forces to the borophene layer, both resulting plasmon modes will develop
a kink at the $\omega_{\rm kink}$ energy scale.
Another feature of the monolayer tilted Dirac cone system is the presence of an additional plasmon mode
which disperses linearly and is heavily damped. We show that in the DLB system there is only one such mode,
implying that the in-phase overdamped plasmon survives in the DLB, while the out-of-phase mode does not exist.
This paper is organized as follows. In section~\ref{model} we give a brief introduction to the double layer dielectric function and
present the tilted Dirac cone Hamiltonian. Then we analytically study the plasmon modes in the long wavelength limit. In section~\ref{DLB} we
investigate the role of distance, tilting and doping in the DLB.
In section~\ref{DLBG}, the plasmon modes are studied in double layer of borophene-graphene.
Section~\ref{ODP} deals with the additional overdamped plasmon branch.
The paper ends with a summary in section~\ref{summary}.
\section{Formulation of plasmons in DLB}\label{model}
To describe the collective excitations of the borophene layers in our double layer system, we need the linear response function of the charge density to the external
potential and the dielectric function for the double layer system. To begin, the Hamiltonian for the DLB, whose layers are separated along the $z$ direction
and interact via the long-range Coulomb interaction, is~\cite{Das-sarma1982,Hwang2009}
\begin{eqnarray}
H=&&\sum_{l,\sigma,{\boldsymbol{k}}}\psi^{\dag}_{{\boldsymbol{k}},\sigma,l} h({\boldsymbol{k}}) \psi_{{\boldsymbol{k}},\sigma,l}+\\
&&\frac{1}{2A}\sum_{l,l',\sigma,\sigma',{\boldsymbol{p}},{\boldsymbol{q}},{\boldsymbol{k}}} V_{ll'}({\boldsymbol{q}})
\psi^{\dag}_{{\boldsymbol{p}}+{\boldsymbol{q}},\sigma,l}\psi^{\dag}_{{\boldsymbol{k}}-{\boldsymbol{q}},\sigma',l'}\psi_{{\boldsymbol{k}},\sigma',l'}\psi_{{\boldsymbol{p}},\sigma,l},\nonumber
\label{total-hamiltonian}
\end{eqnarray}
where, $h({\boldsymbol{k}})$ is the Hamiltonian for tilted Dirac cone, $l,l'=1,2$ are layer indices, and $V_{ll'}$ denotes the
interlayer ($l\neq l'$) and intralayer ($l=l'$) Coulomb matrix element.
The operator $\psi_{{\boldsymbol{p}},\sigma,l}$ annihilates an electron in Bloch state ${\boldsymbol{p}}$ with spin $\sigma$ in layer $l$ of area $A$.
The tilted Dirac Hamiltonian which describes the low energy electronic properties of borophene and organic conductors (under high pressure)
is given by~\cite{Goerbig2014,Proskurin2014,Tajima2009},
\begin{equation}
h^{R/L}({\boldsymbol{k}})=\pm\hbar \begin{pmatrix} v_{x0}k_x + v_{y0}k_y & v_x k_x\mp i v_y k_y\\ v_x k_x\pm i v_y k_y & v_{x0}k_x + v_{y0}k_y \end{pmatrix}.
\label{matrixform}
\end{equation}
Here, L (R) stands for left (right) valley, the diagonal Fermi velocities $v_{x0}, v_{y0}$ represent the tilt of the Dirac cone and the
off-diagonal Fermi velocities $v_x\neq v_y$ represent the anisotropy of the principal Fermi velocity.
As a special case of Eq.~\eqref{matrixform}, if we assume that the diagonal Fermi velocity is zero and off-diagonal Fermi velocities are equal (symmetric),
it reduces to the graphene case. The noninteracting single-particle energies and eigenstates of tilted Dirac fermions, after effecting a transformation of the Cartesian coordinates $k_x, k_y$,
are given by
\begin{align}
&E^{R/L}_{\lambda}({\boldsymbol{k}})=\pm \hbar v_x (\eta k_x +\lambda k),\nonumber\\
&\ket{{\boldsymbol{k}},\lambda}^{R}= \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ \lambda e^{i\theta_k} \end{pmatrix}
,~~~\ket{{\boldsymbol{k}},\lambda}^{L}=\frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ -\lambda e^{-i\theta_k} \end{pmatrix},
\label{energydispersion}
\end{align}
where $\lambda=\pm$ and the tilt parameter $\eta$ is defined as
\begin{equation}
\eta=\sqrt{\frac{v_{x0}^2}{v_x^2}+\frac{v_{y0}^2}{v_y^2}}.
\label{eta}
\end{equation}
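As a quick numerical consistency check of Eq.~\eqref{matrixform}, note that its spectrum is $\hbar(v_{x0}k_x+v_{y0}k_y)\pm\hbar\sqrt{v_x^2k_x^2+v_y^2k_y^2}$, which the coordinate rescaling brings to the tilted-cone form of Eq.~\eqref{energydispersion}. A minimal sketch with illustrative velocities (not borophene parameters) and $\hbar=1$:

```python
import numpy as np

vx, vy, vx0, vy0 = 1.0, 0.8, 0.3, 0.2     # illustrative velocities, hbar = 1
kx, ky = 0.7, -0.4

# Right-valley Hamiltonian of Eq. (2)
h = np.array([[vx0 * kx + vy0 * ky, vx * kx - 1j * vy * ky],
              [vx * kx + 1j * vy * ky, vx0 * kx + vy0 * ky]])

tilt = vx0 * kx + vy0 * ky                # tilting (diagonal) term
cone = np.hypot(vx * kx, vy * ky)         # upright-cone term
assert np.allclose(np.linalg.eigvalsh(h), sorted([tilt - cone, tilt + cone]))

eta = np.hypot(vx0 / vx, vy0 / vy)        # tilt parameter of Eq. (4)
```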
The intralayer and interlayer Coulomb interactions for the pair of parallel 2D borophene layers in media with different dielectric constants
($\epsilon_1, \epsilon_d, \epsilon_2$ from top to bottom) are given by~\cite{Profumo2010}
\begin{eqnarray}
&&V_{11}=\frac{4 \pi e^2}{q D(q)}\bigg((\epsilon_d+\epsilon_2) e^{qd}+(\epsilon_d-\epsilon_2)e^{-qd}\bigg),\nonumber\\&&
V_{21}=V_{12}=\frac{8 \pi e^2}{q D(q)}(\epsilon_d),
\end{eqnarray}
where,
\begin{equation}
D(q)=(\epsilon_1+\epsilon_d)(\epsilon_d+\epsilon_2)e^{qd}+(\epsilon_1-\epsilon_d)(\epsilon_d-\epsilon_2)e^{-qd}.
\end{equation}
Here, it has been assumed that the first (second) layer, $l=1$ ($l=2$), is located at $z=d$ ($z=0$)
and is sandwiched between the dielectric media $\epsilon_1$ and $\epsilon_d$ ($\epsilon_d$ and $\epsilon_2$).
Hence, $V_{22}$ can be found from $V_{11}$ by the replacement $\epsilon_2\to \epsilon_1$.
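As a sanity check, in the uniform-dielectric limit $\epsilon_1=\epsilon_d=\epsilon_2=\epsilon$ these matrix elements reduce to the familiar forms $V_{11}=2\pi e^2/(\epsilon q)$ and $V_{12}=(2\pi e^2/\epsilon q)\,e^{-qd}$. A short numerical verification (units with $e=1$):

```python
import numpy as np

def D(q, d, e1, ed, e2):
    return (e1 + ed) * (ed + e2) * np.exp(q * d) + (e1 - ed) * (ed - e2) * np.exp(-q * d)

def V11(q, d, e1, ed, e2):
    return 4 * np.pi / (q * D(q, d, e1, ed, e2)) * (
        (ed + e2) * np.exp(q * d) + (ed - e2) * np.exp(-q * d))

def V12(q, d, e1, ed, e2):
    return 8 * np.pi * ed / (q * D(q, d, e1, ed, e2))

q, d, eps = 0.7, 1.3, 2.5   # arbitrary test values
assert np.isclose(V11(q, d, eps, eps, eps), 2 * np.pi / (eps * q))
assert np.isclose(V12(q, d, eps, eps, eps), 2 * np.pi / (eps * q) * np.exp(-q * d))
```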
Within the random phase approximation (RPA), and ignoring the interlayer PH propagators,
the dressed density response function can be expressed as~\cite{DasSarma1981,Hwang2009,Das-sarma1982,Santoro1988}
\begin{equation}
\chi({\boldsymbol{q}},\omega)=\chi_0({\boldsymbol{q}},\omega)\left[\mathbb{1}-\boldsymbol{V}(q)\chi_0({\boldsymbol{q}},\omega)\right]^{-1},
\label{dressed}
\end{equation}
with $\boldsymbol{V}(q)$ the electron-electron interaction matrix (in the space of layer indices) and $\chi_0$ the noninteracting density response function. As long as there is no band overlap between the layers, the off-diagonal (interlayer) elements of the density response tensor vanish, so $\chi_0$ enters as a scalar multiplying the unit matrix. The dielectric function derived from Eq.~\eqref{dressed} is given by the matrix equation
\begin{equation}
\boldsymbol{\varepsilon}({\boldsymbol{q}},\omega)=\mathbb{1}-\chi_0({\boldsymbol{q}},\omega) \boldsymbol{V}(q).
\label{dielectric.eqn}
\end{equation}
Now the dispersion of collective excitations is obtained by
\begin{equation}
\det\boldsymbol{\varepsilon}({\boldsymbol{q}},\omega)=0.
\label{secular.eqn}
\end{equation}
To begin investigating the plasmon mode properties in the DLB, we first analyze the density response and its plasmons in the long-wavelength limit analytically.
Using linear response theory, the density response of the noninteracting borophene monolayer to an external electromagnetic field is given by
\begin{eqnarray}
&& \chi_0(\boldsymbol{q},\omega)=\\&& \frac{g \gamma^2}{A } \sum_{{\boldsymbol{k}},\lambda , \lambda'=\pm} \frac{n_{{\boldsymbol{k}},\lambda}-n_{{\boldsymbol{k}}',\lambda'}}
{\hbar \omega+E_{{\boldsymbol{k}},\lambda}-E_{{\boldsymbol{k}}',\lambda'}+i0^+} f_{\lambda, \lambda'} ({\boldsymbol{k}}, {\boldsymbol{k}}')\nonumber,
\label{pai}
\end{eqnarray}
where ${\boldsymbol{k}}'={\boldsymbol{k}}+{\boldsymbol{q}}$, $n_{{\boldsymbol{k}},\lambda}$ is the Fermi distribution function with $\lambda$ the band index, the factor $\gamma^2=v_x/v_y$ is the Jacobian of the coordinate transformation,
$g$ accounts for the spin degeneracy, $A$ is the area of the system, and the form factor $f_{\lambda, \lambda'} ({\boldsymbol{k}}, {\boldsymbol{k}}')$ is the modulus squared of the matrix element of
the density vertex $\sigma_0=\mathbb{1}$ between the two eigenstates $\ket{{\boldsymbol{k}},\lambda}$ and $\ket{{\boldsymbol{k}}',\lambda'}$ of the tilted Dirac cone,
\begin{equation}
f_{\lambda, \lambda'} ({\boldsymbol{k}}, {\boldsymbol{k}}')=|\bra{{\boldsymbol{k}},\lambda}\ket{{\boldsymbol{k}}',\lambda'}|^2= \frac{1}{2} (1+\lambda \lambda'\cos(\theta_{\boldsymbol{k}}-\theta_{{\boldsymbol{k}}'})),
\end{equation}
where $\theta_{{\boldsymbol{k}}}$ is a direction of momentum ${\boldsymbol{k}}$ to $x$ axis.
Here we consider only the contribution of one valley (say the right one) in each layer and ignore intervalley processes, which
require a large momentum transfer. The analytical form of the noninteracting density response function has been calculated in
Ref.~\onlinecite{Tilted2018} by the present authors. In the long-wavelength limit $q \to 0$ this result can be summarized as
\begin{eqnarray}
\chi_0(q\rightarrow0,\omega)\approx \begin{cases}
\frac{\mu q^2}{4\pi \omega^2} F(\eta) &\eta\ll q ~~ ,~~ \frac{\omega \eta }{q}\ll 1, \\
\frac{\mu q^2 }{4 \pi\omega^2 }G(\eta,\phi) &
\eta\gg q ~~,~~ \frac{\omega \eta }{q}\gg 1,
\label{longwl.eqn}
\end{cases}
\end{eqnarray}
where,
\begin{eqnarray}
&&G(\eta,\phi)=\frac{g}{4\pi \hbar^2 \eta^2 v_x v_y}\bigg(\cos2\phi+\frac{\eta^2+(\eta^2-2)\cos2\phi}{\sqrt{1-\eta^2}}\bigg),
\nonumber\\&&
F(\eta)=\frac{g}{4\pi \hbar^2 v_x v_y} (1-\frac{2\omega\eta}{q}).
\label{DF.eqn}
\end{eqnarray}
It can be easily seen that the density response function in Eq.~\eqref{DF.eqn} depends on the tilt parameter $\eta$ and the direction $\phi$ of the wave vector
${\boldsymbol{q}}$. Note that in the limit $\eta\to 0$, the ($\eta\omega/q\ll1$ piece of the) above density response function reduces to that of graphene. Furthermore,
the collective excitations of monolayer borophene are a function of $\eta$ and ${\boldsymbol{q}}$ and show the square-root behavior typical of plasmons in
2D materials~\cite{Fetter1973,Nishine2010,Nishine2011,Tilted2018}. However, in the DLB, the collective modes are different from those of the monolayer.
For quite general values of $\epsilon_1,\epsilon_d,\epsilon_2$, there will be two branches of plasmons for the DLB system, one dispersing as
$q^{1/2}$ and the other as $q^1$. For simplicity let us assume that $\epsilon_1=\epsilon_d=\epsilon_2$ equal a uniform background
dielectric constant. Combining the above equations with Eq.~\eqref{dielectric.eqn} and solving the secular equation~\eqref{secular.eqn} gives
\begin{eqnarray}
&&\omega_+^2\approx \begin{cases}
e^2 q(\mu_1+\mu_2) F(\eta) & \eta\ll q ~~ ,~~ \frac{\omega \eta }{q}\ll 1, \\
e^2 q (\mu_1+\mu_2)G(\eta,\phi) & \eta\gg q ~~,~~ \frac{\omega \eta }{q}\gg 1,
\label{collectivep.eqn}
\end{cases}
\end{eqnarray}
and,
\begin{eqnarray}
&&\omega_-^2\approx \begin{cases}
2e^2 q^2 d F(\eta)/(\mu_1+\mu_2) & \eta\ll q ~~ ,~~ \frac{\omega \eta }{q}\ll 1, \\
2e^2 q^2 d G(\eta,\phi)/(\mu_1+\mu_2) &\eta\gg q ~~,~~ \frac{\omega \eta }{q}\gg 1,
\label{collectiven.eqn}
\end{cases}
\end{eqnarray}
The $\omega_+\propto \sqrt q$ branch describes the in-phase oscillations of the charge density in the two layers,
and therefore conforms to the generic $\sqrt q$ hydrodynamic behavior of 2D systems~\cite{Fetter1973}.
The $\omega_-$ branch describes the out-of-phase collective oscillations of the density in the two layers.
Sometimes $\omega_-$ is referred to as the ``acoustic'' plasmon, which is misleading:
acoustic modes in phonon systems refer to the in-phase oscillations of the ions in the
same unit cell, while here the linearly dispersing plasmon mode corresponds to out-of-phase oscillations.
It is interesting to note that the in-phase plasmon mode does not depend on the separation $d$ of the two
layers, while the out-of-phase (linearly dispersing) mode is proportional to $\sqrt{d}$, so its energy
increases with the separation $d$ of the layers.
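These limiting forms can be checked numerically. With the leading long-wavelength response $\chi_0^{(l)}\simeq \mu_l q^2/(4\pi\omega^2)$ (all constants absorbed into the units, $e=\epsilon=1$, $\eta\to 0$), the secular equation~\eqref{secular.eqn} becomes a quadratic in $1/\omega^2$; a minimal sketch verifying that its two roots indeed scale as $\sqrt q$ and $q$:

```python
import numpy as np

def modes(q, d, mu1, mu2):
    """Roots of det eps = 0 with chi_l = mu_l q^2 / (4 pi omega^2)."""
    V = 2 * np.pi / q                  # intralayer Coulomb (e = eps = 1)
    V12 = V * np.exp(-q * d)           # interlayer Coulomb
    a1 = mu1 * q ** 2 * V / (4 * np.pi)
    a2 = mu2 * q ** 2 * V / (4 * np.pi)
    b = mu1 * mu2 * q ** 4 * V12 ** 2 / (4 * np.pi) ** 2
    # (1 - a1 u)(1 - a2 u) - b u^2 = 0  with  u = 1/omega^2
    u = np.roots([a1 * a2 - b, -(a1 + a2), 1.0]).real
    w = np.sort(1.0 / np.sqrt(u))
    return w[0], w[1]                  # (omega_-, omega_+)

d, mu1, mu2 = 2.0, 1.0, 0.8
wm1, wp1 = modes(1e-3, d, mu1, mu2)
wm2, wp2 = modes(4e-3, d, mu1, mu2)
assert np.isclose(wp2 / wp1, 2.0, rtol=0.05)   # omega_+ ~ sqrt(q)
assert np.isclose(wm2 / wm1, 4.0, rtol=0.05)   # omega_- ~ q
```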
Both plasmonic branches depend on the chemical potentials of the layers, and the linear one is also sensitive to the layer separation.
Note that up to this point we have assumed that, except for the chemical potential, all other parameters of the two layers forming the DLB are the same.
In addition, if we consider different dielectric media, the qualitative behavior remains the same, but the two plasmonic branches will disperse at lower
energy and the group velocity of the acoustic mode will be modified~\cite{Profumo2012}. In this paper we assume that our double layer system is placed in a
medium with a uniform background dielectric constant, i.e. $\epsilon_1=\epsilon_d=\epsilon_2$. Moreover, when studying bilayers composed of graphene and borophene,
their Fermi velocities are assumed to be $v_F=c/300$ and $c/1000$, respectively, where $c$ is the speed of light.
The fine structure constant is $e^2/\hbar c=1/137$.
We also assume that the Fermi velocities in the $x$ and $y$ directions are the same, $v_x=v_y=v_{F}$.
Moreover, the kink feature in which we are interested is best seen for the direction $\phi=\pi/2$ of the momentum ${\boldsymbol{q}}$.
So we will report our plots for this direction.
\section{Borophene-borophene double layer}\label{DLB}
In this section we consider the pair of parallel borophene layers which are placed in a background dielectric constant ($\epsilon_1=\epsilon_d=\epsilon_2=1$)
and separated by a distance $d$ in the $z$ direction, and investigate the dependence of the two plasmon dispersions on
various parameters such as the tilting parameters ($\eta_1,\eta_2$) and chemical potentials ($\mu_1,\mu_2$).
The background dielectric constant appears as an overall constant that reduces $V({\boldsymbol{q}})$,
to which the results are not very sensitive, so we take the background dielectric constant to be $1$.
Moreover, all plots will be in the $(\omega,{\boldsymbol{q}})$ space, where the vertical (horizontal) axis is the dimensionless
energy $\hbar\omega/\mu_1$ (momentum $q/k_{F_1}$), where the subscript $1$ refers to layer $1$, which is
taken as reference in case the corresponding quantities are different from layer $2$, and the vector
${\boldsymbol{q}}$ is along the $y$ direction.
\subsection{Distance and tilt dependence}
To begin with, we consider DLB with both layers at the same chemical potential ($\mu_1=\mu_2$).
Since both layers are made of borophene, they have the same tilting parameter. We take the tilt parameter to be
$\eta_1=\eta_2=0.45$~\cite{Goerbig2014}.
In Fig.~\ref{dif-dis.fig}, we show how the plasmon modes evolve as the separation of the layers along the $z$ direction is increased.
Here, we have plotted the plasmon mode dispersion along with the loss function $|\Im \varepsilon^{-1}({\boldsymbol{q}},\omega)|$ to clearly show the damping structure.
We have assumed in all panels of Fig.~\ref{dif-dis.fig} that the direction of
${\boldsymbol{q}}$ is fixed by $\phi=\pi/2$.
The separation between the two borophene layers in each panel is: (a) $k_{F_1}d=0.35$, (b) $k_{F_1}d=1.8$, (c) $k_{F_1} d=5.3$ and (d) $k_{F_1} d=8.9$.
In this figure, the in-phase (out-of-phase) plasmon mode i.e. $\omega_+$ ($\omega_-$) has been shown with purple (black) curves.
The red line denotes the plasmon mode for the monolayer borophene. When the separation becomes infinitely large, the two layers are expected to
be decoupled, and therefore the in-phase and out-of-phase modes both become degenerate with the monolayer (red) mode.
The boundaries of the interband and intraband PHC, defined by $\omega_{\rm kink}$ and $\omega_s$, are shown with a
dotdashed curve and line, respectively. These boundaries are given by~\cite{Nishine2010,Tilted2018}
\begin{align}
&\omega_{\rm kink}({\boldsymbol{q}},\eta)=v_Fq\eta\cos\phi+\frac{2\mu}{1-\eta^2}\nonumber\\
&\qquad-\sqrt{(v_Fq)^2+\frac{4 v_Fq \mu\eta \cos\phi }{1-\eta^2}+\left(\frac{2\mu\eta}{1-\eta^2}\right)^2},
\label{omega_kink.eqn}\\
&\omega_s=v_Fq (1+\eta \cos\phi).
\end{align}
The reason the lower boundary of the inter-band PHC is given the name $\omega_{\rm kink}$ is that in
Ref.~\onlinecite{Tilted2018} we established that at this boundary the plasmon dispersion of a single-layer
tilted Dirac cone system develops a kink.
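A quick numerical sketch of Eq.~\eqref{omega_kink.eqn} (units $\hbar=v_F=\mu=1$) confirms a useful limiting value: at $q\to 0$ the boundary starts at $\omega_{\rm kink}=2\mu/(1+\eta)$, independent of the direction $\phi$:

```python
import numpy as np

def omega_kink(q, eta, phi, mu=1.0, vF=1.0):
    a = vF * q * eta * np.cos(phi) + 2 * mu / (1 - eta ** 2)
    s = np.sqrt((vF * q) ** 2
                + 4 * vF * q * mu * eta * np.cos(phi) / (1 - eta ** 2)
                + (2 * mu * eta / (1 - eta ** 2)) ** 2)
    return a - s

eta = 0.45
# q -> 0 limit: omega_kink -> 2 mu / (1 + eta), for any phi
for phi in (0.0, np.pi / 2, np.pi):
    assert np.isclose(omega_kink(1e-8, eta, phi), 2 / (1 + eta), atol=1e-6)
```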
\begin{figure}[t]
\includegraphics[width = .47\textwidth] {fig1.png}
\caption{(Color online) Plasmon mode dispersions in DLB system with the same chemical potential ($\mu_1=\mu_2$) and same tilting parameter ($\eta_1=\eta_2=0.45$)
accompanied with intensity plot of loss function $|\Im \varepsilon^{-1}({\boldsymbol{q}},\omega)|$. Here, the vertical (horizontal) axis is the dimensionless
energy $\hbar\omega/\mu_1$ (momentum $q/k_{F_1} $) and the direction of ${\boldsymbol{q}}$ is fixed by $\phi=\pi/2$. The spacing between layers, in various panels
are: (a) $k_{F_1} d=0.35$, (b) $1.8$, (c) $5.3$, and (d) $8.9$. Purple, black and red solid lines are the in-phase ($\omega_+$), out-of-phase ($\omega_-$) and
monolayer plasmon modes, respectively. The pink dotdashed curve and line are the borders of the inter-band ($\omega_{\rm kink}$) and intra-band ($\omega_s$) PHC, respectively.
See the text for details.}
\label{dif-dis.fig}
\end{figure}
As can be clearly seen in Fig.~\ref{dif-dis.fig}, both the in-phase and out-of-phase (linear) plasmon modes in all panels
maintain their kink in the double layer system as well. Note that since the tilt parameter for both layers is the
same, they are both characterized by the same $\omega_{\rm kink}$ curve. That is why the combined system develops
its kinks on the same curve. Note that the in-phase mode is always above the single-layer mode, while the
out-of-phase mode is always below the single-layer mode. This is consistent with the picture of two harmonic
oscillators coupled via inter-layer Coulomb forces, which splits the degenerate modes into two,
lying above and below the degeneracy limit. Since the coupling between the layers becomes zero in the $d\to\infty$
limit, both curves must tend to the single-layer curve by increasing $d$. This can be clearly seen
by looking at the trends from panels (a) to (d).
It is instructive to note the distance dependence of the linear mode: as the layer separation increases,
the energy of the linear (out-of-phase) mode increases. This is consistent with our analytic result in Eq.~\eqref{collectiven.eqn},
which suggests that the linear mode depends on the distance as $\sqrt d$.
Upon entering the interband PHC, both modes acquire damping. The undamped portion of the dispersion
relation conforms to intuition and both modes tend to the same monolayer dispersion when the distance
becomes very large. However, the damped portions of the plasmon dispersions, which lie inside the interband PHC,
do not collapse onto the same curve. This feature is similar to the case of double layer graphene~\cite{Hwang2009},
which is the special case where $\eta_1=\eta_2=0$.
\begin{figure}[t]
\includegraphics[width = .47\textwidth] {fig2.png}
\caption{(Color online) Intensity plot of the loss function $|\Im \varepsilon^{-1}({\boldsymbol{q}},\omega)|$ and the plasmon dispersions for a combination of
two tilted Dirac cone layers. The tilt in the first layer is assumed to be zero, i.e. $\eta_1=0$, and we vary the tilt strength in the second
layer. The direction of ${\boldsymbol{q}}$ is fixed by $\phi=\pi/2$ and the layer spacing is $k_{F_1}d=5.3$.
The vertical (horizontal) axis is the dimensionless energy $\omega/\mu_1$ (momentum $q/k_{F_1} $).
The tilt parameter $\eta_2$ of the second layers is (a) $0$, (b) $0.3$, (c) $0.6$ and (d) $0.9$.
Black and purple solid lines are the plasmon modes for $\omega_+$ and $\omega_-$, respectively.
The dotdashed curves are the boundary of PHC as before.}
\label{eta0-eta.fig}
\end{figure}
To further investigate the role of the tilt parameter, let us assume that one of the layers is not tilted, i.e. $\eta_1=0$,
and vary the tilt strength $\eta_2$ of the other layer. This will teach us how the kink, which is the
hallmark of the tilted Dirac cone, evolves in a double layer system. For this purpose, in Fig.~\ref{eta0-eta.fig}
we plot the plasmon dispersion for ${\boldsymbol{q}}$ in the $\phi=\pi/2$ direction. The velocities of both layers are
assumed to be identical to that of borophene, $\sim c/1000$. In all panels the distance is given by $k_{F_{1}}d=5.3$.
The chemical potentials of both layers are also assumed to
be the same, so that we only focus on the variation of $\eta_2$, as indicated in the legend of each panel.
As in Fig.~\ref{dif-dis.fig}, the $\omega_+$ and $\omega_-$ modes are plotted with black and purple lines and the
PHC boundaries with dotdashed lines.
The separation $d$ of the layers is chosen to be large enough such that the two modes in panel (a) are very
close to each other. As can be seen, the effect of tilt in the second layer is to push them away from
each other. Furthermore, larger tilt in the second layer increases the energy of the in-phase mode.
Now let us see how the kink is imparted to the two modes.
As can be seen from panel (a) in Fig.~\ref{eta0-eta.fig}, where both layers have zero tilting,
there is no kink whatsoever. By increasing $\eta_2=0.3,0.6,0.9$ as in panels (b), (c) and (d), both modes
develop a kink. This can be intuitively understood as follows: when the layers are decoupled,
only the second layer has a kink, as $\eta_1=0$. But when they are coupled by Coulomb forces, the in-phase and out-of-phase
eigen-modes will be linear combinations of the modes in layers $1$ and $2$. Depending on the relative magnitudes
of the coefficients in the linear combination that forms the two eigen-modes, the kink will be more manifest
in either or both of the symmetric and asymmetric modes. As can be seen in panel (b) for $\eta_2=0.3$, the kink
in the out-of-phase mode is more manifest, while in panel (c), corresponding to $\eta_2=0.6$, the kink
in the in-phase mode is more manifest. This observation can be formalized analytically as follows:
The eigen-modes for $\eta_1=0$ and $\eta_2\ne 0$ are given by
\begin{align}
& \omega_+^2=\frac{e^2 q\mu }{2} \big(F(\eta_1=0)+G(\eta_2,\phi)\big),\nonumber\\
&\omega_-^2= e^2 q^2 \mu d \frac{F(\eta_1=0)G(\eta_2,\phi) }{F(\eta_1=0)+G(\eta_2,\phi) }.
\label{collective-eta0-eta.eqn}
\end{align}
This is obtained by plugging the long wavelength expression of Eq.~\eqref{longwl.eqn} in the
characteristic equation~\eqref{secular.eqn}.
Note that although this equation is valid in the hydrodynamic limit, where $q$ is very
small, while the kinks appear at higher $q$, it still shows how the
function $G(\eta_2,\phi)$ enters both $\omega_\pm$ modes. This
function encodes the information about the kink, which is now shared by both $\omega_\pm$ modes.
\begin{figure}[t]
\includegraphics[width = .47\textwidth] {fig3.png}
\caption{(Color online) The plasmon dispersions for the DLB system with equal nonzero tilting parameter ($\eta_1=\eta_2=\eta$), combined with the intensity plot of the loss function $|\Im \varepsilon^{-1}({\boldsymbol{q}},\omega)|$. The direction of ${\boldsymbol{q}}$ is fixed by $\phi=\pi/2$ and the layer spacing is $k_{F_1}d=5.3$. The vertical (horizontal) axis is the dimensionless quantity $\omega/\mu_1$ ($q/k_{F_1}$). The tilting parameter of both layers in panels (a), (b), (c), (d) is $\eta=0.3, 0.45, 0.6, 0.9$, respectively. Black and purple solid lines are the plasmon dispersions for $\omega_+$ and $\omega_-$, respectively. The dotted and dashed green lines are the boundaries of the particle-hole continuum.}
\label{eta-eta.fig}
\end{figure}
Now let us return to the problem of identically tilted layers. Again both layers have the same chemical potential ($\mu_1=\mu_2$),
the same tilting parameter ($\eta_1=\eta_2$), and the same velocities. In Fig.~\ref{eta-eta.fig} we have plotted the symmetric and asymmetric plasmon modes for
different tilting parameters. In this figure the tilting parameter of both layers in panels (a), (b), (c) and (d) is given by
$\eta_1=\eta_2=0.3, 0.45, 0.6, 0.9$, respectively. As before, the direction of ${\boldsymbol{q}}$ is fixed along the $y$ axis.
Increasing the tilting parameter from panel (a) to (d), one observes the following. First of all,
both modes have kinks at the $\omega_{\rm kink}$ energy scale. This is intuitive, as both layers
have their own kink at $\omega_{\rm kink}$, and so do both their symmetric and asymmetric combinations.
Second, by increasing the tilt,
the splitting between the modes at the $\omega_{\rm kink}$ boundary increases. Third, the
dispersions become steeper with increasing tilt strength. In particular, note the very steep
dispersion in panel (d), which we have deliberately chosen to plot for $\eta_1=\eta_2=0.9$. This
large group velocity can be understood from Eq.~\eqref{collectivep.eqn} and Eq.~\eqref{collectiven.eqn}.
Both these equations suggest that the plasmon energy depends on $\sqrt G$. On the other hand
according to Eq.~\eqref{DF.eqn}, at least near $\eta\sim 1$, the auxiliary function $G$ behaves as
\begin{equation}
G(\eta,\phi)\sim \frac{1-\cos 2\phi}{\sqrt{1-\eta^2}}.
\end{equation}
This implies that for $\eta\approx 1^-$, both in-phase and out-of-phase modes behave like
\begin{equation}
\omega_{\pm}\sim \frac{\sin\phi}{(1-\eta^2)^{1/4}}
\begin{cases}
q^{1/2}, & \omega_+\\
q^{1}, & \omega_-
\end{cases}
\end{equation}
This singular behavior near $\eta\approx 1$ explains why the plasmon modes become steeper for very large tilting.
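The $(1-\eta^2)^{-1/4}$ prefactor follows by combining $\omega_\pm\sim\sqrt{G}$ with the identity $1-\cos 2\phi = 2\sin^2\phi$:

```latex
\omega_\pm \;\sim\; \sqrt{G(\eta,\phi)}
\;\sim\; \left(\frac{1-\cos 2\phi}{\sqrt{1-\eta^2}}\right)^{1/2}
\;=\; \frac{\sqrt{2}\,|\sin\phi|}{(1-\eta^2)^{1/4}}\,.
```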
To establish the claim of our previous work~\cite{Tilted2018} that the kink is associated with the
energy scale $\omega_{\rm kink}$ of Eq.~\eqref{omega_kink.eqn}, let us now introduce two such curves
corresponding to two different kinks. For this purpose, in Fig.~\ref{eta45-eta.fig} we
consider a DLB with a different tilting parameter in each layer. We suppose the first layer has a fixed tilting parameter $\eta_1=0.45$, while the
other layer has a different tilt parameter. Panels (a) and (b) of this figure correspond to
$\eta_2=0.3, 0.6$, respectively. Since, according to Eq.~\eqref{omega_kink.eqn}, each $\eta$ gives rise to a
distinct boundary $\omega_{\rm kink}$ for the inter-band particle-hole excitations, with two different $\eta_1\ne\eta_2$
we will have two of them, which are plotted as dotdashed lines in Fig.~\ref{eta45-eta.fig}.
Here again the general trends of the plasmon modes are the same as in Fig.~\ref{eta0-eta.fig} and Fig.~\ref{eta-eta.fig}.
But an important difference is that here, as a result of the two different tilting parameters of the layers, we have two different
boundaries, given by $\omega_{\rm kink}(\eta_1)$ and $\omega_{\rm kink}(\eta_2)$. Now each of the plasmon branches --
either the in-phase or the out-of-phase mode -- develops a kink upon crossing each of these boundaries. This
gives us a total of \emph{four kinks} in the plasmon dispersion, two for each mode.
Again this can be seen analytically. The eigen modes for arbitrary and nonzero $\eta_1$ and $\eta_2$
are given by
\begin{align}
& \omega_+^2=\frac{e^2 q\mu }{2} \big(G(\eta_1,\phi)+G(\eta_2,\phi)\big),\nonumber\\
&\omega_-^2= e^2 q^2 \mu d \frac{G(\eta_1,\phi)G(\eta_2,\phi) }{G(\eta_1,\phi)+G(\eta_2,\phi) }.
\label{collective-eta1-eta2.eqn}
\end{align}
Every $G$ factor contains its own kink information, and therefore both $\omega_\pm$ modes will
inherit two kinks, one from the $G$ function of each layer.
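As a cross-check of the mode ordering and power laws implied by Eq.~\eqref{collective-eta1-eta2.eqn}, the following minimal numerical sketch may be useful; the units $e^2=\mu=d=1$ and the constant sample values of $G_1$ and $G_2$ are illustrative assumptions, not values taken from the text:

```python
import numpy as np

# Minimal sketch of the long-wavelength modes of Eq. (collective-eta1-eta2.eqn)
# in illustrative units e^2 = mu = d = 1.  G1, G2 stand in for G(eta_1, phi)
# and G(eta_2, phi); their numerical values below are assumptions.
G1, G2 = 1.5, 2.5

def omega_plus(q):
    # in-phase mode: omega_+^2 = (e^2 q mu / 2)(G1 + G2)  ->  sqrt(q) dispersion
    return np.sqrt(0.5 * q * (G1 + G2))

def omega_minus(q):
    # out-of-phase mode: omega_-^2 = e^2 q^2 mu d G1 G2/(G1+G2)  ->  linear dispersion
    return q * np.sqrt(G1 * G2 / (G1 + G2))

q = np.array([1e-4, 4e-4])
wp, wm = omega_plus(q), omega_minus(q)

# quadrupling q doubles omega_+ (square-root law) and quadruples omega_- (linear law)
print(wp[1] / wp[0])        # ~2
print(wm[1] / wm[0])        # ~4
# at long wavelength the in-phase mode lies above the out-of-phase mode
print(bool(np.all(wp > wm)))
```

This reproduces the generic features discussed above: the $\sqrt q$ versus $q$ dispersions and the in-phase mode lying at higher energy.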
\begin{figure}[t]
\includegraphics[width = .47\textwidth] {fig4.png}
\caption{(Color online) Intensity plot of the loss function $|\Im \varepsilon^{-1}({\boldsymbol{q}},\omega)|$ and the plasmon dispersions for a combination of borophene layers with different nonzero tilting parameters. The direction of ${\boldsymbol{q}}$ is fixed by $\phi=\pi/2$ and the layer spacing is $k_{F_1}d=5.3$. The vertical (horizontal) axis is the dimensionless quantity $\omega/\mu_1$ ($q/k_{F_1}$). The tilting of the first layer in both panels is $\eta_1=0.45$, and the tilting of the other layer is $\eta_2=0.3$ in panel (a) and $0.6$ in panel (b).
Other conventions are the same as previous figures.}
\label{eta45-eta.fig}
\end{figure}
\subsection{Role of doping}
An interesting lesson can be learned by studying the plasmon modes of two borophene layers
where layer $1$ is doped ($\mu_1\ne 0$), while the second layer is undoped ($\mu_2=0$).
When the two layers are infinitely separated, such that the collective charge oscillations
in the two layers are decoupled, in layer $1$ we have standard plasmons, while in layer $2$, since
the doping level is zero, there are no plasmon oscillations at the RPA level~\cite{Triplet2017,mishchenko},
although there will be other types of spin-flip modes~\cite{Jafari2002,Jafari2004,Ebrahimkhas2009,Jafari2012,Ganesh2013,Jafari2014TI,Hedegard2015,Maslov2017}.
Therefore, in terms of counting the collective degrees of freedom, we only have one mode.
When the two layers are brought closer at a distance of $k_{F_1}d=5.3$ as in Fig.~\ref{BB1.fig}
to let them couple, it is not surprising to see that there is only one solution, which clearly
corresponds to the $\sqrt{q}$ dispersion. This is a further confirmation that the $\sqrt q$ mode
is indeed the in-phase mode.
The PHC consists of two contributions. In the doped layer, there is a window below $\omega_{\rm kink}$.
But in the undoped layer, this window is filled with interband PH excitations. Therefore the total PHC,
which is the union of the PHCs of the two layers,
contains no gapped (white) region, which then renders the plasmon mode of essentially layer $1$ Landau
damped by creating interband PH excitations in layer $2$.
Indeed we have checked that the dispersion of the present DLB system is almost degenerate with that
of a single layer $1$, as long as we are concerned with $\omega<\omega_{\rm kink}$.
However, for $\omega>\omega_{\rm kink}$ the plasmon branch enters
the interband PHC of layer $1$ itself, and its dispersion is heavily affected by the presence of layer $2$,
and it will no longer be nearly degenerate with the dispersion of a monolayer $1$.
Finally note that by increasing the common tilt parameter $\eta_1=\eta_2$ of the two layers,
the energy of the plasmon mode increases. This is a generic behavior in all combinations,
as in e.g. Fig.~\ref{eta-eta.fig}.
\begin{figure}[t]
\includegraphics[width = .47\textwidth] {fig5.png}
\caption{(Color online) The plasmon dispersions for the DLB with an undoped-doped combination of borophene layers, along with the intensity
plot of the loss function $|\Im \varepsilon^{-1}({\boldsymbol{q}},\omega)|$. The vertical (horizontal) axis is the dimensionless energy
$\hbar \omega/\mu_1$ (momentum $q/k_{F_1}$). The direction $\phi$ of ${\boldsymbol{q}}$ in both panels is $\pi/2$.
The separation of the layers is set by $k_{F_1}d=5.3$.
The tilting parameter in both layers is the same, $\eta_1=\eta_2$, which in panel (a) is set to $0.3$,
while in panel (b) it is $0.6$. The purple curve is the in-phase mode. The out-of-phase mode is absent in this case.
The pink dotdashed lines are the boundaries of the interband and intraband PHC. Note that the $\omega_{\rm kink}$ boundary
belongs only to the doped layer.}
\label{BB1.fig}
\end{figure}
\begin{figure}[b]
\includegraphics[width = .45\textwidth] {fig6.png}
\caption{(Color online) The plasmon mode dispersions for the DLB system with different chemical potentials ($\mu_1\neq\mu_2$), the same tilting parameter,
$\eta_1=\eta_2=0.45$, and the same Fermi velocity. The color code, as before, indicates the intensity plot of the loss function $|\Im \varepsilon^{-1}({\boldsymbol{q}},\omega)|$.
The vertical (horizontal) axis is the dimensionless energy $\hbar\omega/\mu_1$ (momentum $q/k_{F_1}$). The separation of the layers is set by $k_{F_1}d=5.3$. The direction of ${\boldsymbol{q}}$ is fixed along the $y$ direction.
The doping ratios in panels (a) and (b) are $\mu_2/\mu_1=0.5$ and $\mu_2/\mu_1=0.9$, respectively.
Black and purple solid lines are the out-of-phase and in-phase plasmon modes, respectively.
The pink dotdashed lines are the boundaries of the interband and intraband borophene PHC.}
\label{BB2.fig}
\end{figure}
Next, we assume that in the DLB we have identically tilted layers with identical velocities, which are doped differently. The difference in doping is quantified by the
doping ratio $\mu_2/\mu_1$. In Fig.~\ref{BB2.fig}, we have plotted the plasmon dispersion of the DLB system for the doping ratio $\mu_2/\mu_1$ given by (a) $0.5$ and (b) $0.9$.
The color code is the same as in previous figures and
the direction of the wave vector ${\boldsymbol{q}}$ is fixed along the $y$ direction.
An important player in this case is the upper border $\omega_s$ of the intra-band PHC.
As for the border $\omega_{\rm kink}$ of the interband PHC, there are three possibilities
to form interband particle-hole excitations: (i) within layer $1$, (ii) within layer $2$, (iii) cross-layer, involving
particle-hole excitations between layers $1$ and $2$. In the present approximation, where interlayer PH propagators are
not included, the third item above is absent.
The lower bound $\omega_{\rm kink}$ of the intralayer interband continuum for each of
the layers is plotted with dotdashed lines. As can be seen, first of all, both modes develop a kink when they cross each of these boundaries.
So we end up having two kinks for each mode. The second point to notice is that in panel (b), where the chemical
potentials are closer to each other, the $\omega_{\rm kink}$ boundaries approach each other. In this case, we have two
well-defined $\omega_\pm$ modes. However, by reducing the ratio $\mu_2/\mu_1$, the nearly triangle-shaped region shrinks more and more.
As a result, the out-of-phase mode (black line) is attracted more and more to the intraband PHC. In the limit $\mu_2/\mu_1=0$
of Fig.~\ref{BB1.fig}, the out-of-phase mode is entirely swallowed by the intraband PHC.
\section{Borophene-Graphene}\label{DLBG}
So far we have assumed that both layers are composed of borophene, such that the Fermi velocities are identical.
In this section, we are going to study a double layer composed of borophene and graphene. In this case, a new
player will be the difference in the Fermi velocity of the two layers. The Fermi velocity sets the slope of
the boundary $\omega_s$ of the intraband PHC for every layer.
Monolayer graphene, as a 2D Dirac material with Fermi velocity $c/300$, and a borophene
layer, as a 2D tilted Dirac material with Fermi velocity $\approx c/1000$, are good candidates for constructing a double layer system.
Another candidate for the tilted Dirac cone layer at the bottom is an organic material~\cite{SuzumuraReview}, which has an even smaller
Fermi velocity.
In what follows we consider a double layer of borophene-graphene and study the effects of different Fermi velocities and chemical potentials.
In this case the two branches of plasmons in the long-wavelength limit will be given by
\begin{align}
& \omega_+^2=\frac{e^2 q\mu }{2} \big(F^{(2)}(0)+G^{(1)}(\eta_1,\phi)\big),\nonumber\\
&\omega_-^2= e^2 q^2 \mu d \frac{F^{(2)}(0)G^{(1)}(\eta_1,\phi) }{F^{(2)}(0)+G^{(1)}(\eta_1,\phi) },
\label{collective-BG.eqn}
\end{align}
where the superscript in parentheses indicates the layer index. More explicitly, $F^{(2)}$ is the
same function $F$ but specialized to layer $2$, whose Fermi velocity is $v_{F_2}$. The argument $0$ of
this function indicates that the tilt parameter $\eta_2=0$, as it stands for the graphene layer.
Similarly, $G^{(1)}$ is the same function $G$ for layer $1$, whose Fermi velocity is $v_{F_1}$ and whose tilt is $\eta_1$.
\begin{figure}[t]
\includegraphics[width = .47\textwidth] {fig7.png}
\caption{(Color online) Plasmon dispersions in the DLBG system along with the intensity plot of the loss function $|\Im \varepsilon^{-1}({\boldsymbol{q}},\omega)|$.
The direction of ${\boldsymbol{q}}$ is fixed by $\phi=\pi/2$, the layers separated by $k_{F_1}d=5.3$ have the same chemical potential ($\mu_1=\mu_2$), and the tilting parameter of the
borophene layer (number $1$) in each panel is: (a) $\eta_1=0.3$, (b) $\eta_1=0.45$, (c) $\eta_1=0.6$, and (d) $\eta_1=0.9$.
Purple and black solid lines are the plasmon dispersions of the $\pm$ modes, respectively.
The dotdashed (dashed) pink lines are the boundaries of the interband and intraband borophene (graphene) PHC.
The vertical (horizontal) axis is the dimensionless energy $\hbar\omega/\mu_1$ (momentum $q/k_{F_1}$).
The graphene layer $2$ has no tilting, $\eta_2=0$.}
\label{BG1.fig}
\end{figure}
First, in Fig.~\ref{BG1.fig} we show the plasmon modes in the DLBG with equal chemical potentials ($\mu_1=\mu_2$) but different Fermi velocities, $v_{F_1}\neq v_{F_2}$.
As pointed out, the subscripts $1,2$ stand for borophene and graphene, respectively.
The tilting parameter of the graphene layer is $\eta_2=0$, and the tilting parameter of the borophene layer in panels (a), (b), (c), (d)
is taken to be $\eta_1=0.3, 0.45, 0.6, 0.9$, respectively. The unit of energy is taken to be $\mu_1$, which equals $\mu_2$.
However, since the Fermi velocities are different, for the unit of momentum one must specify either of the Fermi wave vectors, $k_{F_1}$ or $k_{F_2}$.
We adopt the former, and therefore the horizontal axis is the dimensionless momentum $q/k_{F_1}$,
and the vertical axis (as before) is the dimensionless energy $\omega/\mu_1$.
The color code is the same as in previous figures. The
PHC boundaries of borophene (graphene) are indicated by dotdashed (dashed) lines~\cite{Nishine2010,Tilted2018}.
As can be seen from Fig.~\ref{BG1.fig}, the PHC boundary of graphene has the larger slope as a result of its larger Fermi velocity.
Since the plasmons of monolayer graphene are split off from its intraband PHC, in the combined DLBG system too, the level repulsion
from the intraband PHC of graphene pushes both modes to higher energies. This feature not only holds for the undamped portions of the
plasmon branches, but also for the damped portions of both branches that enter the interband part of the union of the two layers'
intralayer PH excitations. So the essential role of the difference in the velocities of the two layers is to sustain both
plasmon branches at group velocities set by the greater of the two.
Note that in the DLBG system the PHC will be the union of intralayer PH excitations of both layers.
In this way, the interband portion of the PHC for moderate $\eta_1$ comes below the $\omega_{\rm kink}$ curve.
This is manifest in panels (a), (b) and (c) of Fig.~\ref{BG1.fig} where the dashed boundary (of graphene PHC)
has come below the dotdashed boundary (of the borophene PHC). In this way, the damping of the $\pm$ modes in panels
(a) and (b) starts at lower energies and momenta than anticipated from the $\omega_{\rm kink}$ curve.
Please note that although the damping might start before the modes hit $\omega_{\rm kink}$ (the dotdashed upper boundary),
the kink always appears once the modes cross the $\omega_{\rm kink}$ boundary.
This establishes that $\omega_{\rm kink}$ very well deserves the subscript ``kink''.
Finally, the generic property of both modes can again be observed:
the energy of both modes increases with increasing tilt parameter.
Note that as argued for Fig.~\ref{eta0-eta.fig}, in the decoupled limit, only borophene layer
has kinks, while in the coupled graphene-borophene double layer, both dispersions
have a kink at $\omega_{\rm kink}$.
\begin{figure}[t]
\includegraphics[width = .49\textwidth] {fig8.png}
\caption{(Color online) Plasmon dispersions in the DLBG for different chemical potentials, together with the intensity plot of the loss
function $|\Im \varepsilon^{-1}({\boldsymbol{q}},\omega)|$. The direction of ${\boldsymbol{q}}$ is fixed by $\phi=\pi/2$, and the tilting parameter of the borophene layer
is $\eta_1=0.45$. The layers are separated by $k_{F_1}d=5.3$, and the chemical potential $\mu_1$ is fixed and serves as the unit of energy.
The chemical potential $\mu_2$ is given by the ratio $\mu_2/\mu_1$, which is (a) $0.5$ and (b) $0.9$.
The rest of the conventions are as in Fig.~\ref{BG1.fig}.
}
\label{BG2.fig}
\end{figure}
Next, we consider the DLBG with different chemical potentials ($\mu_2\ne\mu_1$), and of course with different Fermi velocities, in Fig.~\ref{BG2.fig}.
In this figure the borophene layer
is assumed to have the tilting $\eta_1=0.45$, and its chemical potential ($\mu_1$) is greater than the chemical potential of graphene
($\mu_2$). As can be seen, the undamped window for the plasmon mode dispersion is more restricted as a result of the different chemical potentials.
Let us start with panel (b), where the chemical potentials are different, but close to each other. In this case both in-phase and out-of-phase modes
are present, and their group velocity scale is set by the greater velocity (which belongs to graphene). Upon decreasing the chemical potential $\mu_2$ of
graphene, the ``nearly'' triangular window, which is formed by the union of the intralayer PHCs of both layers, shrinks, and the out-of-phase mode starts to
sink into the intraband PHC dominated by the PH excitations of graphene. Upon a further decrease in $\mu_2$, the out-of-phase mode will entirely
disappear. This feature is similar to the one considered in Fig.~\ref{BB1.fig}, where the out-of-phase mode is swallowed by the PHC.
In addition, as in all figures, both modes will have their kinks at their intersections with $\omega_{\rm kink}$.
Note that in panel (b) of Fig.~\ref{BG2.fig} and panel (c) of Fig.~\ref{BG1.fig} the out-of-phase mode is interrupted.
The region of interruption in both cases occurs where the intraband PHC of graphene hits the plasmon branch.
The density of PH excitations in the intraband PHC is always much larger than in the interband one, and hence it is
able to destroy the plasmon branch that hits this portion of the PHC.
\begin{figure}[t]
\includegraphics[width = .47\textwidth] {fig9.png}
\caption{(Color online) Dispersions of the overdamped plasmon branch in double layer graphene and the DLB with the same chemical potential. The direction of ${\boldsymbol{q}}$ is fixed by $\phi=\pi/2$. The tilting parameters for each line are: blue dotted line, $\eta_1=\eta_2=0$ (double layer graphene); green dashed line, $\eta_1=0.3, \eta_2=0.45$ (DLB); green dotdashed line, $\eta_1=\eta_2=0.45$ (DLB); pink solid line, $\eta_1=\eta_2=0.9$ (DLB). The vertical (horizontal) axis is the dimensionless quantity $\hbar \omega/\mu_1$ ($q/k_{F_1}$). The separation of the layers is set by $k_{F_1}d=5.3$.}
\label{overdamped.fig}
\end{figure}
\section{Overdamped Plasmon branch}\label{ODP}
In our previous work~\cite{Tilted2018}, we noted that an exclusive consequence of the tilt,
in addition to kinks in the dispersion of plasmons in monolayer system, is to provide
a unique chance for the emergence of an overdamped branch of plasmon excitations
which lies deep in the intraband PHC. Since the density of intraband PH excitations is
quite large, this provides a significant bath for Landau damping of this plasmon branch,
and therefore it gets quickly damped. Although this branch is heavily damped,
since it lives at lower energy than the standard plasmon branch,
on time scales shorter than its lifetime $\tau$ it will be able to interact with
other low-energy excitations, including the single-particle excitations. Therefore it is
important to study this branch in the double layers as well.
In Ref.~\onlinecite{Tilted2018}, we found that the overdamped plasmon branch for borophene monolayer
is in the energy range $\omega<\omega_s$. This mode is
caused by a strong enough tilt, and disperses linearly. In the case of monolayer graphene, where there
is no tilt, such an overdamped mode does not exist at all. It is interesting to note that when it comes to double layer
graphene, such an overdamped mode does appear. This has not been explored in earlier publications
addressing the double layer systems~\cite{Hwang2009,Gan2012,Profumo2012}. Indeed, we find that even
for upright Dirac cones in a double layer system, an overdamped branch emerges.
Fig.~\ref{overdamped.fig} shows the dispersion of the overdamped plasmon for a double layer composed of
tilted Dirac cone systems, where the Fermi velocities and chemical potentials are the same.
The dispersion has been plotted for $\phi=\pi/2$. The distance is fixed by $k_{F_1}d=5.3$.
The values of the tilt parameters for each curve are indicated in the legend.
As can be seen, even for $\eta_1=\eta_2=0$ there is an overdamped plasmon branch.
The effect of the tilt in each of the layers is to reduce the energy of the overdamped plasmon
mode. The solid line represents the overdamped plasmon mode for the quite large tilts $\eta_1=\eta_2=0.9$.
This mode disperses linearly over a much larger range of momenta, while for smaller values of the
tilt parameters, the linear dispersion holds up to $q\sim k_{F_1}$.
When one of the layers is doped and the other one is undoped, there is no
overdamped solution.
\section{Summary and conclusion}\label{summary}
In this work, we investigated plasmon oscillations in double layer systems where
either one or both of the layers have tilted Dirac cone spectrum. It is well known that in this
context, there will be two plasmon modes. The in-phase mode disperses as $\sqrt q$ --
consistent with the hydrodynamic picture -- while
the out-of-phase mode disperses as $q^1$.
The in-phase (symmetric) mode always lies at higher energies than the out-of-phase (asymmetric) mode.
This is in contrast to the intuition from molecular orbitals where the symmetric combination of atomic
orbitals usually has lower energy than the asymmetric combination. This is because in the present case,
we are dealing with a symmetric combination of particle-hole objects, and not single-particle orbitals.
An extra minus sign coming from the fermion loop places the symmetric plasmons at higher energies.
When the tilted Dirac cone systems are
combined in a double layer framework, interesting plasmonic features arise.
The tilt of the Dirac cone
is manifested in its plasmons as a kink when the dispersion crosses $\omega_{\rm kink}$. Such a
kink is absent in a Dirac cone without tilt. In a bilayer setting we find quite generically
that even when only one of the layers hosts a tilted Dirac cone, the plasmonic kink will be
inherited by both the in-phase and the out-of-phase modes. The kink in both branches takes place
at precisely $\omega_{\rm kink}$. In situations such as in Fig.~\ref{BB2.fig}, where due to the difference
in the chemical potentials of the two tilted Dirac cone layers there are two $\omega_{\rm kink}$ energy
scales, each of the plasmon branches develops a kink upon crossing every $\omega_{\rm kink}$ (dotdashed pink) curve.
In such situations there will be a total of four kinks; two kinks for every plasmon branch.
When one of the layers is graphene with larger Fermi velocity, the small window where
undamped plasmons can live will become smaller and will be set by the larger Fermi velocity
of graphene. This pushes both in-phase and out-of-phase plasmon modes to higher energy.
Therefore the typical plasmonic group velocity in such double layer systems with two
different Fermi velocities, is set by the greater of the two velocities.
Another unique feature of tilted Dirac cone monolayer is the existence of
linearly dispersing overdamped plasmon mode inside the intraband PHC.
Although this mode does not exist in monolayers of upright Dirac cone systems
such as graphene, in the double layer setting such a mode emerges.
In the double layer systems with tilt, this mode continuously reduces its
slope with increasing tilt. This mode is the in-phase overdamped oscillation
of the individual tilted layers~\cite{Tilted2018}.